tighten test_process_pixels_gray bound #5690
Proposed change(s)
The test `test_process_pixels_gray` in `test_rpc_utils.py` has an assertion bound that is too loose: `assert np.mean(in_array.mean(axis=2, keepdims=True) - out_array) < 0.01`. This means a potential bug in the code could still pass the original test.

To quantify this, I ran some experiments: I generated multiple mutants of the source code under test and ran each mutant, as well as the original code, 100 times to build a distribution of their outputs. Each mutant is generated by a simple mutation operator (e.g. `>` can become `<`) applied to source code covered by the test. I then used a KS test to find mutants that produced a different distribution from the original, and used those mutants as a proxy for bugs that could be introduced. The graph below shows the output distribution of the original code alongside the distributions of those differing mutants.
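For reference, a minimal sketch of this procedure, assuming a hypothetical `run_test_metric()` helper that re-runs the code under test once and returns the value the assertion checks (i.e. `np.mean(in_array.mean(axis=2, keepdims=True) - out_array)`):

```python
import numpy as np
from scipy import stats

def sample_outputs(run_test_metric, n_runs=100):
    # Collect the asserted metric over repeated runs to approximate
    # its output distribution.
    return np.array([run_test_metric() for _ in range(n_runs)])

def mutant_differs(original_outputs, mutant_outputs, alpha=0.05):
    # Two-sample Kolmogorov-Smirnov test: flag the mutant when its
    # output distribution differs from the original's (p < alpha).
    _, p_value = stats.ks_2samp(original_outputs, mutant_outputs)
    return p_value < alpha
```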
Here we can see that the bound of 0.01 is too loose, since the original distribution (in orange) sits well below 0.01. Furthermore, many mutants (our proxies for bugs) also fall below the bound, which is undesirable since the test should aim to catch potential bugs in the code. I quantify the bug-detection ability of this assertion by varying the bound in the trade-off graph below, which plots the mutant catch rate (the fraction of mutant outputs that fail the test) against the original pass rate (the fraction of original outputs that pass the test).
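Continuing the sketch above, the two curves could be computed roughly like this (names are illustrative; `mutant_outputs` pools the runs of all differing mutants):

```python
def bound_tradeoff(original_outputs, mutant_outputs, bounds):
    # For each candidate bound b:
    #   pass rate  = fraction of original runs with metric < b (test passes)
    #   catch rate = fraction of mutant runs with metric >= b (test fails)
    return [
        (b,
         float(np.mean(original_outputs < b)),
         float(np.mean(mutant_outputs >= b)))
        for b in bounds
    ]
```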
The original bound of 0.01 (red dotted line) has a catch rate of 0.33. To improve this test, I propose tightening the bound to 0.002 (the blue dotted line). The new bound has a catch rate of 0.67 (a +0.34 increase over the original) while still keeping a >99% pass rate; the test is not flaky: I ran the updated test 500 times and observed a 100% pass rate. I think this is a good balance between improving the bug-detection ability of the test and keeping its flakiness low. Do you think this makes sense? Please let me know if this looks good or if you have any other suggestions or questions.
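With the new bound, the assertion would read as follows (a sketch; only the bound changes):

```python
# test_rpc_utils.py, test_process_pixels_gray — tightened from 0.01 to 0.002
assert np.mean(in_array.mean(axis=2, keepdims=True) - out_array) < 0.002
```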
My Environment:
ml-agents experiment SHA: d2ee1e2569aa68d3ac0e1dd8d91f1bf812b9df27
Types of change(s)
Checklist