Specifying dataProvider & successPercentage causes test to always pass #3171
@@ -1,10 +1,20 @@
package test.invocationcount;

import static org.assertj.core.api.Assertions.assertThat;

import java.util.concurrent.atomic.AtomicInteger;
import org.testng.Assert;
import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;
import org.testng.TestListenerAdapter;
import org.testng.TestNG;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;
import test.SimpleBaseTest;
import test.invocationcount.issue1719.IssueTest;
import test.invocationcount.issue3170.DataDrivenWithSuccessPercentageAndInvocationCountDefinedSample;
import test.invocationcount.issue3170.DataDrivenWithSuccessPercentageDefinedSample;

public class FailedInvocationCountTest extends SimpleBaseTest {
@@ -41,4 +51,50 @@ public void verifyAttributeShouldStop() {
    Assert.assertEquals(tla.getFailedTests().size(), 7);
    Assert.assertEquals(tla.getSkippedTests().size(), 5);
  }

  @Test(dataProvider = "dp")
  public void ensureSuccessPercentageWorksFineWith(Class<?> clazz, IssueTest.Expected expected) {
    TestNG testng = create(clazz);
    AtomicInteger failed = new AtomicInteger(0);
    AtomicInteger passed = new AtomicInteger(0);
    AtomicInteger failedWithInSuccessPercentage = new AtomicInteger(0);
    testng.addListener(
        new IInvokedMethodListener() {
          @Override
          public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
            switch (testResult.getStatus()) {
              case ITestResult.SUCCESS:
                passed.incrementAndGet();
                break;
              case ITestResult.FAILURE:
                failed.incrementAndGet();
                break;
              case ITestResult.SUCCESS_PERCENTAGE_FAILURE:
                failedWithInSuccessPercentage.incrementAndGet();
                break;
              default:
            }
          }
        });
    testng.run();
    assertThat(passed.get()).isEqualTo(expected.success());
    assertThat(failed.get()).isEqualTo(expected.failures());
    assertThat(failedWithInSuccessPercentage.get())
        .isEqualTo(expected.failedWithinSuccessPercentage());
  }
Comment on lines +55 to +85

coderabbitai: Review of the new test method

The implementation of this method is correct and effectively addresses the issue outlined in the PR. It uses a data provider to supply test cases and correctly sets up listeners to track the outcomes of test invocations. However, it would be beneficial to add more detailed comments within the method to explain the logic, especially how the success percentage is calculated and validated against the expected results. Would you like me to help draft these explanatory comments?

juherr: @coderabbitai Yes, please help with the comments.

coderabbitai: @juherr Sure! Below are the detailed comments that can be added to the method:

@Test(dataProvider = "dp")
public void ensureSuccessPercentageWorksFineWith(Class<?> clazz, IssueTest.Expected expected) {
  // Create a TestNG instance for the provided test class.
  TestNG testng = create(clazz);
  // AtomicInteger variables to track the number of passed, failed, and failed-within-success-percentage tests.
  AtomicInteger failed = new AtomicInteger(0);
  AtomicInteger passed = new AtomicInteger(0);
  AtomicInteger failedWithInSuccessPercentage = new AtomicInteger(0);
  // Add a listener to track the outcome of each test invocation.
  testng.addListener(
      new IInvokedMethodListener() {
        @Override
        public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
          // Increment the appropriate counter based on the test result status.
          switch (testResult.getStatus()) {
            case ITestResult.SUCCESS:
              passed.incrementAndGet();
              break;
            case ITestResult.FAILURE:
              failed.incrementAndGet();
              break;
            case ITestResult.SUCCESS_PERCENTAGE_FAILURE:
              failedWithInSuccessPercentage.incrementAndGet();
              break;
            default:
              // No action needed for other statuses.
          }
        }
      });
  // Run the TestNG tests.
  testng.run();
  // Validate that the number of passed tests matches the expected value.
  assertThat(passed.get()).isEqualTo(expected.success());
  // Validate that the number of failed tests matches the expected value.
  assertThat(failed.get()).isEqualTo(expected.failures());
  // Validate that the number of tests that failed within the success percentage matches the expected value.
  assertThat(failedWithInSuccessPercentage.get())
      .isEqualTo(expected.failedWithinSuccessPercentage());
}

These comments should help clarify the purpose and logic of each part of the method. Let me know if you need any further adjustments!
  @DataProvider(name = "dp")
  public Object[][] dp() {
    return new Object[][] {
      {
        DataDrivenWithSuccessPercentageAndInvocationCountDefinedSample.class,
        new IssueTest.Expected().failures(10)
      },
      {
        DataDrivenWithSuccessPercentageDefinedSample.class,
        new IssueTest.Expected().failures(3).success(1)
      }
    };
  }
Comment on lines +87 to +99

coderabbitai: Review of the data provider method

The implementation of this method is correct and effectively supplies the necessary test data for the new test functionality. The structure of the data matches the expected input for the test method, ensuring that the tests are comprehensive and cover different scenarios. It would be beneficial to add comments explaining the choice of test cases, particularly how they relate to the issue being addressed. Would you like me to help draft these explanatory comments?

juherr: @coderabbitai Yes, please help with the comments.

coderabbitai: @juherr Here are the explanatory comments for the data provider:

@DataProvider(name = "dp")
public Object[][] dp() {
  return new Object[][] {
    {
      // This test case uses a class where both success percentage and invocation count are defined.
      // The test is expected to fail all invocations because the provided data does not meet the success criteria.
      // This scenario helps validate that the fix correctly handles cases where all invocations fail.
      DataDrivenWithSuccessPercentageAndInvocationCountDefinedSample.class,
      new IssueTest.Expected().failures(10)
    },
    {
      // This test case uses a class where only the success percentage is defined.
      // The test is expected to pass only one of the invocations and fail the others.
      // This scenario helps ensure that the fix correctly calculates the success percentage and handles partial successes.
      DataDrivenWithSuccessPercentageDefinedSample.class,
      new IssueTest.Expected().failures(3).success(1)
    }
  };
}

These comments should clarify the purpose of each test case and how they contribute to validating the fix for the issue.
}
@@ -1,22 +1,128 @@
package test.invocationcount.issue1719;

import static org.assertj.core.api.Assertions.assertThat;

import java.util.Set;
import org.assertj.core.api.SoftAssertions;
import org.testng.ITestResult;
import org.testng.TestNG;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;
import test.SimpleBaseTest;

public class IssueTest extends SimpleBaseTest {

-  @Test
-  public void testSuccessPercentageCalculation() {
-    TestNG testng = create(TestclassSample.class);
+  @Test(dataProvider = "dp")
+  public void testSuccessPercentageCalculation(Class<?> clazz, Expected expected) {
+    TestNG testng = create(clazz);
    DummyReporter listener = new DummyReporter();
    testng.addListener(listener);
    testng.run();
-    assertThat(listener.getFailures()).isEmpty();
-    assertThat(listener.getSkip()).isEmpty();
-    assertThat(listener.getSuccess()).isEmpty();
-    assertThat(listener.getFailedWithinSuccessPercentage()).hasSize(5);
+    SoftAssertions assertions = new SoftAssertions();
+    String msg =
+        pretty(
+            "[failedWithinSuccessPercentage]",
+            expected.failedWithinSuccessPercentage,
+            listener.getFailedWithinSuccessPercentage());
+    assertions
+        .assertThat(listener.getFailedWithinSuccessPercentage())
+        .withFailMessage(msg)
+        .hasSize(expected.failedWithinSuccessPercentage);
+
+    msg = pretty("[skip]", expected.skip, listener.getSkip());
+    assertions.assertThat(listener.getSkip()).withFailMessage(msg).hasSize(expected.skip);
+
+    msg = pretty("[success]", expected.success, listener.getSuccess());
+    assertions.assertThat(listener.getSuccess()).withFailMessage(msg).hasSize(expected.success);
+
+    msg = pretty("[failures]", expected.failures, listener.getFailures());
+    assertions.assertThat(listener.getFailures()).withFailMessage(msg).hasSize(expected.failures);
+
+    assertions.assertAll();
  }

  private static String pretty(String prefix, int expected, Set<ITestResult> actual) {
    return prefix + " test. Expected: " + expected + ", Actual: " + actual.size();
  }

  @DataProvider(name = "dp")
  public Object[][] dp() {
    return new Object[][] {
      {
        TestclassSample.DataDrivenTestHavingZeroSuccessPercentageAndNoInvocationCount.class,
        new Expected().failures(2)
      },
      {
        TestclassSample.DataDrivenTestHavingValidSuccessPercentageAndInvocationCount.class,
        // Parameter - Invocation - Expected Test Result
        // 1           1            Failed within success percentage (30% expected)
        // 1           2            Passed (remember this is a flaky test simulation)
        // 2           1            Failed within success percentage (30% expected)
        // 2           2            Failed
        new Expected().failures(1).failedWithinSuccessPercentage(2).success(1)
      },
      {
        TestclassSample.RegularTestWithZeroSuccessPercentage.class,
        new Expected().failedWithinSuccessPercentage(1)
      },
      {
        TestclassSample.RegularTestWithZeroSuccessPercentageAndInvocationCount.class,
        new Expected().failedWithinSuccessPercentage(2)
      },
    };
  }

  public static class Expected {
    private int failures = 0;
    private int success = 0;
    private int skip = 0;
    private int failedWithinSuccessPercentage = 0;

    public int failures() {
      return failures;
    }

    public Expected failures(int failures) {
      this.failures = failures;
      return this;
    }

    public int success() {
      return success;
    }

    public Expected success(int success) {
      this.success = success;
      return this;
    }

    public int skip() {
      return skip;
    }

    public Expected skip(int skip) {
      this.skip = skip;
      return this;
    }

    public int failedWithinSuccessPercentage() {
      return failedWithinSuccessPercentage;
    }

    public Expected failedWithinSuccessPercentage(int failedWithinSuccessPercentage) {
      this.failedWithinSuccessPercentage = failedWithinSuccessPercentage;
      return this;
    }

    @Override
    public String toString() {
      return "{failures="
          + failures
          + ", success="
          + success
          + ", skip="
          + skip
          + ", failedWithinSuccessPercentage="
          + failedWithinSuccessPercentage
          + '}';
    }
  }
}
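The rewritten test above leans on AssertJ's SoftAssertions so that every count mismatch is reported in one pass instead of stopping at the first failing assertion. As a rough pure-JDK sketch of that pattern (SoftCheck and its method names are illustrative, not AssertJ's API):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of "soft" assertions: record every mismatch, then fail once.
class SoftCheck {
  private final List<String> failures = new ArrayList<>();

  // Records a mismatch instead of throwing immediately; returns this for chaining.
  SoftCheck checkEquals(String label, int expected, int actual) {
    if (expected != actual) {
      failures.add(label + " test. Expected: " + expected + ", Actual: " + actual);
    }
    return this;
  }

  // Throws a single AssertionError summarizing all recorded mismatches.
  void assertAll() {
    if (!failures.isEmpty()) {
      throw new AssertionError(String.join("\n", failures));
    }
  }

  int failureCount() {
    return failures.size();
  }

  public static void main(String[] args) {
    SoftCheck checks = new SoftCheck();
    checks.checkEquals("[success]", 1, 1);  // passes, nothing recorded
    checks.checkEquals("[failures]", 3, 2); // mismatch recorded, not thrown yet
    System.out.println("recorded mismatches: " + checks.failureCount());
  }
}
```

The design choice is the same as in the test: a hard assertion on the first mismatched counter would hide whether the other counters were also wrong.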
@@ -1,28 +1,60 @@
package test.invocationcount.issue1719;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class TestclassSample {

-  @Test(successPercentage = 0, dataProvider = "dp")
-  public void dataDrivenTestMethod(int i) {
-    Assert.fail("Failing iteration:" + i);
-  }
-
  @DataProvider(name = "dp")
-  public Object[][] getData() {
+  public static Object[][] getData() {
coderabbitai: Refactor suggestion: Use a more descriptive data provider name. The data provider is currently named "dp", which is quite generic. Consider renaming it to something more descriptive related to its usage, such as "testParameters" or "inputValues".
    return new Object[][] {{1}, {2}};
  }
-  @Test(successPercentage = 0)
-  public void simpleTestMethod() {
-    Assert.fail();
+  public static class DataDrivenTestHavingZeroSuccessPercentageAndNoInvocationCount {
+    @Test(successPercentage = 0, dataProvider = "dp", dataProviderClass = TestclassSample.class)
+    public void dataDrivenTestMethod(int i) {
+      Assert.fail("Failing iteration:" + i);
+    }
+  }
+
+  public static class DataDrivenTestHavingValidSuccessPercentageAndInvocationCount {
+
+    private boolean shouldFail = true;
+    private Map<Integer, AtomicInteger> tracker = new HashMap<>();
+
+    @Test(
+        successPercentage = 30,
+        dataProvider = "dp",
+        invocationCount = 2,
+        dataProviderClass = TestclassSample.class)
+    public void dataDrivenTestMethodWithInvocationCount(int i) {
+      int current = tracker.computeIfAbsent(i, k -> new AtomicInteger()).incrementAndGet();
+      String msg = String.format("Parameter [%d], Invocation [%d]", i, current);
+      if (i != 1) { // If the parameter is NOT 1, then just fail
+        Assert.fail("Failing test " + msg);
+      }
+      if (shouldFail) { // If the parameter is 1, then simulate a flaky test that passes and fails
+        shouldFail = false;
+        Assert.fail("Failing test " + msg);
+      }
+    }
+  }
Comment on lines +24 to +45

coderabbitai: Review of DataDrivenTestHavingValidSuccessPercentageAndInvocationCount class.

This class introduces a more complex test scenario involving a success percentage and invocation count. Overall, the implementation meets the objectives of the PR by providing a means to test the success percentage calculation with a data provider.

juherr: The advice with tests that must fail is nice!

coderabbitai: @juherr Thank you! I'm glad you found the advice helpful. It's always great to see thoughtful test designs that ensure robustness in testing scenarios.

juherr: @coderabbitai Could you suggest a code change to make tests better, according to your previous advice?

coderabbitai: @juherr Sure! Here's a code change suggestion that encapsulates the flaky behavior in a dedicated method:

public static class DataDrivenTestHavingValidSuccessPercentageAndInvocationCount {
  private boolean shouldFail = true;
  private Map<Integer, AtomicInteger> tracker = new HashMap<>();

  @Test(
      successPercentage = 30,
      dataProvider = "dp",
      invocationCount = 2,
      dataProviderClass = TestclassSample.class)
  public void dataDrivenTestMethodWithInvocationCount(int i) {
    int current = tracker.computeIfAbsent(i, k -> new AtomicInteger()).incrementAndGet();
    String msg = String.format("Parameter [%d], Invocation [%d]", i, current);
    // Log the message for better debugging
    System.out.println(msg);
    if (i != 1) { // If the parameter is NOT 1, then just fail
      Assert.fail("Failing test " + msg);
    }
    // Encapsulate the flaky behavior in a separate method
    simulateFlakyBehavior(msg);
  }

  /**
   * Simulates flaky test behavior by alternating the test outcome.
   * This method is specifically designed to fail the first time it is called
   * for the parameter value 1, and pass on subsequent invocations.
   */
  private void simulateFlakyBehavior(String msg) {
    if (shouldFail) {
      shouldFail = false;
      Assert.fail("Failing test " + msg);
    }
  }
}

This change makes the test logic more modular and easier to understand, while also improving the debugging experience by logging the test details before any assertions are made. Let me know if you have any further suggestions or questions!
|
||
public static class RegularTestWithZeroSuccessPercentage { | ||
@Test(successPercentage = 0) | ||
public void simpleTestMethod() { | ||
Assert.fail(); | ||
} | ||
} | ||
|
||
@Test(successPercentage = 0, invocationCount = 2) | ||
public void testMethodWithMultipleInvocations() { | ||
Assert.fail(); | ||
public static class RegularTestWithZeroSuccessPercentageAndInvocationCount { | ||
@Test(successPercentage = 0, invocationCount = 2) | ||
public void testMethodWithMultipleInvocations() { | ||
Assert.fail(); | ||
} | ||
} | ||
} |
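The DataDrivenTestHavingValidSuccessPercentageAndInvocationCount sample above simulates a flaky test: parameter 1 fails its first invocation and passes the second, while every other parameter always fails. A standalone sketch of that simulation, runnable without TestNG (FlakySimulation is an illustrative name; a boolean return value stands in for Assert.fail):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a "flaky" operation that fails its first invocation for parameter 1,
// passes afterwards, and always fails for any other parameter.
class FlakySimulation {
  private boolean shouldFail = true;
  private final Map<Integer, AtomicInteger> tracker = new HashMap<>();

  // Returns true if the invocation "passed", false if it "failed".
  boolean invoke(int parameter) {
    int current = tracker.computeIfAbsent(parameter, k -> new AtomicInteger()).incrementAndGet();
    String msg = String.format("Parameter [%d], Invocation [%d]", parameter, current);
    if (parameter != 1) {
      return fail(msg); // parameters other than 1 always fail
    }
    return simulateFlakyBehavior(msg);
  }

  // Encapsulates the flaky behavior: fail once, then pass on later calls.
  private boolean simulateFlakyBehavior(String msg) {
    if (shouldFail) {
      shouldFail = false;
      return fail(msg);
    }
    return true;
  }

  private boolean fail(String msg) {
    System.out.println("Failing test " + msg);
    return false;
  }

  public static void main(String[] args) {
    FlakySimulation sim = new FlakySimulation();
    // Two invocations per parameter, parameters 1 and 2 (invocationCount = 2).
    System.out.println("1/1 passed: " + sim.invoke(1)); // fails (flaky first call)
    System.out.println("1/2 passed: " + sim.invoke(1)); // passes
    System.out.println("2/1 passed: " + sim.invoke(2)); // fails
    System.out.println("2/2 passed: " + sim.invoke(2)); // fails
  }
}
```

Running all four invocations yields 1 pass and 3 failures, which is the distribution the IssueTest data provider expects for this sample.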
@@ -0,0 +1,18 @@
package test.invocationcount.issue3170;

import static org.testng.Assert.assertEquals;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class DataDrivenWithSuccessPercentageAndInvocationCountDefinedSample {
  @Test(dataProvider = "test", invocationCount = 10, successPercentage = 90)
  public void sampleTestCase(String string) {
    assertEquals(string, "1");
  }

  @DataProvider(name = "test")
  public Object[][] testProvider() {
    return new Object[][] {{"2"}};
  }
}
@@ -0,0 +1,18 @@
package test.invocationcount.issue3170;

import static org.testng.Assert.assertEquals;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class DataDrivenWithSuccessPercentageDefinedSample {
  @Test(dataProvider = "test", successPercentage = 99)
  public void sampleTestCase(String string) {
    assertEquals(string, "1");
  }

  @DataProvider(name = "test")
  public Object[][] testProvider() {
    return new Object[][] {{"1"}, {"2"}, {"3"}, {"4"}};
  }
}
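Both samples hinge on how a declared successPercentage compares with the rate actually achieved: the first runs 10 invocations that all fail against a 90% threshold, the second passes 1 of 4 data-driven invocations against a 99% threshold. A rough model of that arithmetic (illustrative only — the exact semantics TestNG applies, especially with data providers, are precisely what this PR's tests pin down):

```java
// Rough model: a run stays "within success percentage" when the achieved
// success rate is at least the declared threshold.
class SuccessPercentageModel {
  static double successRate(int totalInvocations, int failures) {
    return 100.0 * (totalInvocations - failures) / totalInvocations;
  }

  static boolean withinSuccessPercentage(int totalInvocations, int failures, int successPercentage) {
    return successRate(totalInvocations, failures) >= successPercentage;
  }

  public static void main(String[] args) {
    // DataDrivenWithSuccessPercentageAndInvocationCountDefinedSample:
    // 10 invocations, all fail, successPercentage = 90 -> 0% < 90%.
    System.out.println(withinSuccessPercentage(10, 10, 90)); // false
    // DataDrivenWithSuccessPercentageDefinedSample:
    // 4 parameters, 3 fail, successPercentage = 99 -> 25% < 99%.
    System.out.println(withinSuccessPercentage(4, 3, 99)); // false
  }
}
```

Under this model neither sample stays within its threshold, which is why the test expectations above count their failures as plain failures rather than failed-within-success-percentage.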
coderabbitai: Refactor to improve clarity and efficiency in failure count handling.

The changes introduced in lines 560-561 enhance the granularity of failure tracking by using a map to store failure counts based on test parameters. This is a crucial improvement for data-driven tests where different parameter sets might have different failure rates, which need to be tracked separately to calculate the success percentage accurately.

However, there are a couple of points that could be improved:

- Using Arrays.toString(testResult.getParameters()) as a key is straightforward but might not be the most efficient or reliable method for generating unique keys, especially if parameters include complex objects. Consider implementing a more robust hashing mechanism.
- The computeIfAbsent method is used effectively here to ensure thread safety and atomicity of operations. However, ensure that the failure.counter map is appropriately synchronized or concurrent to handle potential multi-threaded access scenarios typical in testing frameworks.

Consider the following improvement: make sure the failure.counter map is an instance of ConcurrentHashMap to handle concurrent modifications safely.

juherr: @coderabbitai Not sure about the value-add of the change, but generateHashKey should be static.

coderabbitai: @juherr, I understand your point about the value-add of the change. The current implementation using Arrays.toString(testResult.getParameters()) is indeed straightforward and sufficient unless specific issues arise with key generation. If we were to implement the generateHashKey method, making it static would be the right approach, as it doesn't rely on any instance variables. Let me know if you have any further thoughts!
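To make the thread above concrete, here is a sketch of the parameter-granular failure counter being discussed: Arrays.toString over the parameter array as the map key, computeIfAbsent for atomic insertion, a ConcurrentHashMap as suggested, and a static generateHashKey. FailureCounter and its method names are illustrative, not TestNG internals:

```java
import java.util.Arrays;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of parameter-granular failure counting for data-driven tests.
class FailureCounter {
  // ConcurrentHashMap so concurrent test threads can record failures safely.
  private final ConcurrentMap<String, AtomicInteger> counter = new ConcurrentHashMap<>();

  // Static, as suggested in the review: it depends only on its argument.
  static String generateHashKey(Object[] parameters) {
    return Arrays.toString(parameters);
  }

  // Records one failure for this parameter set; returns the running total.
  int recordFailure(Object[] parameters) {
    return counter
        .computeIfAbsent(generateHashKey(parameters), k -> new AtomicInteger())
        .incrementAndGet();
  }

  int failures(Object[] parameters) {
    AtomicInteger count = counter.get(generateHashKey(parameters));
    return count == null ? 0 : count.get();
  }

  public static void main(String[] args) {
    FailureCounter fc = new FailureCounter();
    fc.recordFailure(new Object[] {"2"});
    fc.recordFailure(new Object[] {"2"});
    fc.recordFailure(new Object[] {"3"});
    System.out.println(fc.failures(new Object[] {"2"})); // 2
    System.out.println(fc.failures(new Object[] {"1"})); // 0
  }
}
```

As the review notes, Arrays.toString is adequate for simple parameter values but can collide or churn for complex objects whose toString is uninformative; that is the trade-off behind the "more robust hashing" suggestion.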