
Specifying dataProvider & successPercentage causes test to always pass #3171

Merged · 1 commit · Sep 5, 2024
1 change: 1 addition & 0 deletions CHANGES.txt
@@ -1,4 +1,5 @@
Current (7.11.0)
Fixed: GITHUB-3170: Specifying dataProvider and successPercentage causes test to always pass (Krishnan Mahadevan)
Fixed: GITHUB-3028: Execution stalls when using "use-global-thread-pool" (Krishnan Mahadevan)
Fixed: GITHUB-3122: Update JCommander to 1.83 (Antoine Dessaigne)
Fixed: GITHUB-3135: assertEquals on arrays - Failure message is missing information about the array index when an array element is unexpectedly null or non-null (Albert Choi)
@@ -1,6 +1,8 @@
package org.testng.internal.invokers;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
import org.testng.IInvokedMethod;
@@ -19,6 +21,7 @@ class FailureContext {
AtomicInteger count = new AtomicInteger(0);
List<Object> instances = Lists.newArrayList();
AtomicBoolean representsRetriedMethod = new AtomicBoolean(false);
final Map<String, AtomicInteger> counter = new HashMap<>();
}

List<ITestResult> invokeTestMethods(
@@ -557,7 +557,8 @@ private void handleInvocationResult(
if (holder.status == ITestResult.FAILURE && !holder.handled) {
int count = failure.count.getAndIncrement();
if (testMethod.isDataDriven()) {
-      count = 0;
+      String key = Arrays.toString(testResult.getParameters());
+      count = failure.counter.computeIfAbsent(key, k -> new AtomicInteger()).incrementAndGet();
Comment on lines +560 to +561

@coderabbitai bot commented on Sep 5, 2024:
Refactor to improve clarity and efficiency in failure count handling.

The changes introduced in lines 560-561 enhance the granularity of failure tracking by using a map to store failure counts based on test parameters. This is a crucial improvement for data-driven tests where different parameter sets might have different failure rates, which need to be tracked separately to calculate the success percentage accurately.

However, there are a couple of points that could be improved:

  1. Clarity: The use of Arrays.toString(testResult.getParameters()) as a key is straightforward but might not be the most efficient or reliable method for generating unique keys, especially if parameters include complex objects. Consider implementing a more robust hashing mechanism.
  2. Efficiency: The computeIfAbsent method is used effectively here to ensure thread safety and atomicity of operations. However, ensure that the failure.counter map is appropriately synchronized or concurrent to handle potential multi-threaded access scenarios typical in testing frameworks.

Consider the following improvements:

  • Implement a custom hash function or use a library to generate more reliable keys from test parameters.
  • Ensure that the failure.counter map is an instance of ConcurrentHashMap to handle concurrent modifications safely.
- String key = Arrays.toString(testResult.getParameters());
- count = failure.counter.computeIfAbsent(key, k -> new AtomicInteger()).incrementAndGet();
+ String key = generateHashKey(testResult.getParameters());
+ count = failure.counter.computeIfAbsent(key, k -> new AtomicInteger()).incrementAndGet();

+ // Example method to generate a hash key
+ private String generateHashKey(Object[] parameters) {
+     return Arrays.stream(parameters)
+                  .map(Object::hashCode)
+                  .map(Objects::toString)
+                  .collect(Collectors.joining("-"));
+ }


juherr (Member) replied:
@coderabbitai Not sure about the value-add of the change but generateHashKey should be static.

@coderabbitai bot replied:
@juherr, I understand your point about the value-add of the change. The current implementation using Arrays.toString(testResult.getParameters()) is indeed straightforward and sufficient unless specific issues arise with key generation.

If we were to implement the generateHashKey method, making it static would be the right approach, as it doesn't rely on any instance variables.

Let me know if you have any further thoughts!
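For readers weighing that trade-off, here is a small self-contained sketch of how `Arrays.toString` behaves as a key generator; the `Opaque` class is hypothetical, invented purely for illustration and not part of TestNG or this PR:

```java
import java.util.Arrays;

public class KeySketch {
  // Hypothetical parameter type that does not override toString()/equals().
  static class Opaque {
    final int value;
    Opaque(int value) { this.value = value; }
  }

  public static void main(String[] args) {
    // Value-like parameters (primitives, Strings) yield stable, equal keys.
    String a = Arrays.toString(new Object[] {1, "x"});
    String b = Arrays.toString(new Object[] {1, "x"});
    System.out.println(a.equals(b)); // prints true; both are "[1, x]"

    // Types that fall back to Object.toString() produce identity-based keys
    // (e.g. "KeySketch$Opaque@1b6d3586"), so two logically equal parameter
    // sets almost always map to different keys.
    String c = Arrays.toString(new Object[] {new Opaque(1)});
    String d = Arrays.toString(new Object[] {new Opaque(1)});
    System.out.println(c.equals(d)); // almost always false
  }
}
```

This is why `Arrays.toString` is sufficient for the value-like parameters typical of data providers, while complex parameter objects without a stable `toString` could fragment the counts, the concern raised in the review.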

}
handleException(testResult.getThrowable(), testMethod, testResult, count);
}
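Stepping back from the thread, a simplified, hypothetical model of the fix (plain Java, not TestNG's actual invoker classes) contrasts the old behavior, where the failure count was reset to 0 for every data-driven failure, with the new per-parameter-set counter:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

public class FailureCountSketch {
  // Mirrors the new FailureContext field: one counter per parameter set.
  static final Map<String, AtomicInteger> counter = new HashMap<>();

  static int perParameterCount(Object[] parameters) {
    String key = Arrays.toString(parameters);
    return counter.computeIfAbsent(key, k -> new AtomicInteger()).incrementAndGet();
  }

  public static void main(String[] args) {
    Object[] parameters = {2}; // the same failing parameter set, invoked repeatedly
    for (int i = 1; i <= 3; i++) {
      int oldCount = 0; // old code: count was reset, so every failure looked like the first
      int newCount = perParameterCount(parameters);
      System.out.printf("failure %d -> old=%d, new=%d%n", i, oldCount, newCount);
    }
  }
}
```

Because the old count never grew, the success-percentage bookkeeping treated every failure as the first one, which is how such tests could "always pass"; the map keyed by `Arrays.toString(parameters)` restores a monotonically growing count per parameter set.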
@@ -1,10 +1,20 @@
package test.invocationcount;

import static org.assertj.core.api.Assertions.assertThat;

import java.util.concurrent.atomic.AtomicInteger;
import org.testng.Assert;
import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;
import org.testng.TestListenerAdapter;
import org.testng.TestNG;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;
import test.SimpleBaseTest;
import test.invocationcount.issue1719.IssueTest;
import test.invocationcount.issue3170.DataDrivenWithSuccessPercentageAndInvocationCountDefinedSample;
import test.invocationcount.issue3170.DataDrivenWithSuccessPercentageDefinedSample;

public class FailedInvocationCountTest extends SimpleBaseTest {

@@ -41,4 +51,50 @@ public void verifyAttributeShouldStop() {
Assert.assertEquals(tla.getFailedTests().size(), 7);
Assert.assertEquals(tla.getSkippedTests().size(), 5);
}

  @Test(dataProvider = "dp")
  public void ensureSuccessPercentageWorksFineWith(Class<?> clazz, IssueTest.Expected expected) {
    TestNG testng = create(clazz);
    AtomicInteger failed = new AtomicInteger(0);
    AtomicInteger passed = new AtomicInteger(0);
    AtomicInteger failedWithInSuccessPercentage = new AtomicInteger(0);
    testng.addListener(
        new IInvokedMethodListener() {
          @Override
          public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
            switch (testResult.getStatus()) {
              case ITestResult.SUCCESS:
                passed.incrementAndGet();
                break;
              case ITestResult.FAILURE:
                failed.incrementAndGet();
                break;
              case ITestResult.SUCCESS_PERCENTAGE_FAILURE:
                failedWithInSuccessPercentage.incrementAndGet();
                break;
              default:
            }
          }
        });
    testng.run();
    assertThat(passed.get()).isEqualTo(expected.success());
    assertThat(failed.get()).isEqualTo(expected.failures());
    assertThat(failedWithInSuccessPercentage.get())
        .isEqualTo(expected.failedWithinSuccessPercentage());
  }
Comment on lines +55 to +85

@coderabbitai bot commented on Sep 5, 2024:
Review of the new test method ensureSuccessPercentageWorksFineWith.

The implementation of this method is correct and effectively addresses the issue outlined in the PR. It uses a data provider to supply test cases and correctly sets up listeners to track the outcomes of test invocations. The use of AtomicInteger ensures thread safety, which is crucial in a concurrent testing environment.

However, it would be beneficial to add more detailed comments within the method to explain the logic, especially how the success percentage is calculated and validated against the expected results.

Would you like me to help draft these explanatory comments?

juherr (Member) replied:
@coderabbitai Yes, please help with the comments.

@coderabbitai bot replied:
@juherr Sure! Below are the detailed comments that can be added to the ensureSuccessPercentageWorksFineWith method to explain the logic:

@Test(dataProvider = "dp")
public void ensureSuccessPercentageWorksFineWith(Class<?> clazz, IssueTest.Expected expected) {
    // Create a TestNG instance for the provided test class.
    TestNG testng = create(clazz);

    // AtomicInteger variables to track the number of passed, failed, and failed-within-success-percentage tests.
    AtomicInteger failed = new AtomicInteger(0);
    AtomicInteger passed = new AtomicInteger(0);
    AtomicInteger failedWithInSuccessPercentage = new AtomicInteger(0);

    // Add a listener to track the outcome of each test invocation.
    testng.addListener(
        new IInvokedMethodListener() {
            @Override
            public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
                // Increment the appropriate counter based on the test result status.
                switch (testResult.getStatus()) {
                    case ITestResult.SUCCESS:
                        passed.incrementAndGet();
                        break;
                    case ITestResult.FAILURE:
                        failed.incrementAndGet();
                        break;
                    case ITestResult.SUCCESS_PERCENTAGE_FAILURE:
                        failedWithInSuccessPercentage.incrementAndGet();
                        break;
                    default:
                        // No action needed for other statuses.
                }
            }
        });

    // Run the TestNG tests.
    testng.run();

    // Validate that the number of passed tests matches the expected value.
    assertThat(passed.get()).isEqualTo(expected.success());

    // Validate that the number of failed tests matches the expected value.
    assertThat(failed.get()).isEqualTo(expected.failures());

    // Validate that the number of tests that failed within the success percentage matches the expected value.
    assertThat(failedWithInSuccessPercentage.get())
        .isEqualTo(expected.failedWithinSuccessPercentage());
}

These comments should help clarify the purpose and logic of each part of the method. Let me know if you need any further adjustments!




  @DataProvider(name = "dp")
  public Object[][] dp() {
    return new Object[][] {
      {
        DataDrivenWithSuccessPercentageAndInvocationCountDefinedSample.class,
        new IssueTest.Expected().failures(10)
      },
      {
        DataDrivenWithSuccessPercentageDefinedSample.class,
        new IssueTest.Expected().failures(3).success(1)
      }
    };
  }
Comment on lines +87 to +99

@coderabbitai bot commented on Sep 5, 2024:
Review of the data provider method dp.

The implementation of this method is correct and effectively supplies the necessary test data for the new test functionality. The structure of the data matches the expected input for the test method, ensuring that the tests are comprehensive and cover different scenarios.

It would be beneficial to add comments explaining the choice of test cases, particularly how they relate to the issue being addressed.

Would you like me to help draft these explanatory comments?

juherr (Member) replied:
@coderabbitai Yes, please help with the comments.

@coderabbitai bot replied:
@juherr Here are the explanatory comments for the dp method:

@DataProvider(name = "dp")
public Object[][] dp() {
  return new Object[][] {
    {
      // This test case uses a class where both success percentage and invocation count are defined.
      // The test is expected to fail all invocations because the provided data does not meet the success criteria.
      // This scenario helps validate that the fix correctly handles cases where all invocations fail.
      DataDrivenWithSuccessPercentageAndInvocationCountDefinedSample.class,
      new IssueTest.Expected().failures(10)
    },
    {
      // This test case uses a class where only the success percentage is defined.
      // The test is expected to pass only one of the invocations and fail the others.
      // This scenario helps ensure that the fix correctly calculates the success percentage and handles partial successes.
      DataDrivenWithSuccessPercentageDefinedSample.class,
      new IssueTest.Expected().failures(3).success(1)
    }
  };
}

These comments should clarify the purpose of each test case and how they contribute to validating the fix for the issue.

}
@@ -1,22 +1,128 @@
package test.invocationcount.issue1719;

import static org.assertj.core.api.Assertions.assertThat;

import java.util.Set;
import org.assertj.core.api.SoftAssertions;
import org.testng.ITestResult;
import org.testng.TestNG;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;
import test.SimpleBaseTest;

public class IssueTest extends SimpleBaseTest {

@Test
public void testSuccessPercentageCalculation() {
TestNG testng = create(TestclassSample.class);
@Test(dataProvider = "dp")
public void testSuccessPercentageCalculation(Class<?> clazz, Expected expected) {
TestNG testng = create(clazz);
DummyReporter listener = new DummyReporter();
testng.addListener(listener);
testng.run();
assertThat(listener.getFailures()).isEmpty();
assertThat(listener.getSkip()).isEmpty();
assertThat(listener.getSuccess()).isEmpty();
assertThat(listener.getFailedWithinSuccessPercentage()).hasSize(5);
SoftAssertions assertions = new SoftAssertions();
String msg =
pretty(
"[failedWithinSuccessPercentage]",
expected.failedWithinSuccessPercentage,
listener.getFailedWithinSuccessPercentage());
assertions
.assertThat(listener.getFailedWithinSuccessPercentage())
.withFailMessage(msg)
.hasSize(expected.failedWithinSuccessPercentage);

msg = pretty("[skip]", expected.skip, listener.getSkip());
assertions.assertThat(listener.getSkip()).withFailMessage(msg).hasSize(expected.skip);

msg = pretty("[success]", expected.success, listener.getSuccess());
assertions.assertThat(listener.getSuccess()).withFailMessage(msg).hasSize(expected.success);

msg = pretty("[failures]", expected.failures, listener.getFailures());
assertions.assertThat(listener.getFailures()).withFailMessage(msg).hasSize(expected.failures);

assertions.assertAll();
}

private static String pretty(String prefix, int expected, Set<ITestResult> actual) {
return prefix + " test. Expected: " + expected + ", Actual: " + actual.size();
}

@DataProvider(name = "dp")
public Object[][] dp() {
return new Object[][] {
{
TestclassSample.DataDrivenTestHavingZeroSuccessPercentageAndNoInvocationCount.class,
new Expected().failures(2)
},
{
TestclassSample.DataDrivenTestHavingValidSuccessPercentageAndInvocationCount.class,
// Parameter - Invocation Count - Expected Test Result
// . 1 1 Failed Within success percentage (30% expected)
// 1 2 Passed (Remember this is a flaky test simulation)
// 2 1 Failed Within success percentage (30% expected)
// 2 2 Failed
new Expected().failures(1).failedWithinSuccessPercentage(2).success(1)
},
{
TestclassSample.RegularTestWithZeroSuccessPercentage.class,
new Expected().failedWithinSuccessPercentage(1)
},
{
TestclassSample.RegularTestWithZeroSuccessPercentageAndInvocationCount.class,
new Expected().failedWithinSuccessPercentage(2)
},
};
}

public static class Expected {
private int failures = 0;
private int success = 0;
private int skip = 0;
private int failedWithinSuccessPercentage = 0;

public int failures() {
return failures;
}

public Expected failures(int failures) {
this.failures = failures;
return this;
}

public int success() {
return success;
}

public Expected success(int success) {
this.success = success;
return this;
}

public int skip() {
return skip;
}

public Expected skip(int skip) {
this.skip = skip;
return this;
}

public int failedWithinSuccessPercentage() {
return failedWithinSuccessPercentage;
}

public Expected failedWithinSuccessPercentage(int failedWithinSuccessPercentage) {
this.failedWithinSuccessPercentage = failedWithinSuccessPercentage;
return this;
}

@Override
public String toString() {
return "{failures="
+ failures
+ ", success="
+ success
+ ", skip="
+ skip
+ ", failedWithinSuccessPercentage="
+ failedWithinSuccessPercentage
+ '}';
}
}
}
@@ -1,28 +1,60 @@
package test.invocationcount.issue1719;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class TestclassSample {

@Test(successPercentage = 0, dataProvider = "dp")
public void dataDrivenTestMethod(int i) {
Assert.fail("Failing iteration:" + i);
}

@DataProvider(name = "dp")
public Object[][] getData() {
public static Object[][] getData() {
Refactor suggestion: Use a more descriptive data provider name.

The data provider is currently named "dp", which is quite generic. Consider renaming it to something more descriptive related to its usage, such as "testParameters" or "inputValues".

return new Object[][] {{1}, {2}};
}

@Test(successPercentage = 0)
public void simpleTestMethod() {
Assert.fail();
public static class DataDrivenTestHavingZeroSuccessPercentageAndNoInvocationCount {
@Test(successPercentage = 0, dataProvider = "dp", dataProviderClass = TestclassSample.class)
public void dataDrivenTestMethod(int i) {
Assert.fail("Failing iteration:" + i);
}
}

public static class DataDrivenTestHavingValidSuccessPercentageAndInvocationCount {

private boolean shouldFail = true;
private Map<Integer, AtomicInteger> tracker = new HashMap<>();

@Test(
successPercentage = 30,
dataProvider = "dp",
invocationCount = 2,
dataProviderClass = TestclassSample.class)
public void dataDrivenTestMethodWithInvocationCount(int i) {
int current = tracker.computeIfAbsent(i, k -> new AtomicInteger()).incrementAndGet();
String msg = String.format("Parameter [%d], Invocation [%d]", i, current);
if (i != 1) { // If the parameter is NOT 1, then just fail
Assert.fail("Failing test " + msg);
}
if (shouldFail) { // If the parameter is 1, then simulate a flaky test that passes and fails
shouldFail = false;
Assert.fail("Failing test " + msg);
}
}
}
Comment on lines +24 to +45

@coderabbitai bot commented on Sep 5, 2024:
Review of DataDrivenTestHavingValidSuccessPercentageAndInvocationCount class.

This class introduces a more complex test scenario involving a success percentage and invocation count. The logic within dataDrivenTestMethodWithInvocationCount is designed to simulate flaky test behavior, which is crucial for testing the success percentage functionality. The use of a Map to track invocations is a good approach to handle state across multiple test invocations.

However, there are a couple of points to consider:

  • The shouldFail flag is used to alternate the test outcome for the parameter value 1. This is a clever way to simulate a flaky test, but it might be clearer to encapsulate this behavior in a separate method or add more comments to explain why this behavior is necessary for the test.
  • The use of String.format for generating the message is appropriate, but ensure that the message is used effectively in debugging or test reports.

Overall, the implementation meets the objectives of the PR by providing a means to test the success percentage calculation with a data provider.

juherr (Member) replied:
The advice with tests that must fail is nice!

@coderabbitai bot replied:
@juherr Thank you! I'm glad you found the advice helpful. It's always great to see thoughtful test designs that ensure robustness in testing scenarios. 😊

juherr (Member) replied:
@coderabbitai Could you suggest a code change to make tests better, according to your previous advice?

@coderabbitai bot replied:
@juherr Sure! Here's a code change suggestion that encapsulates the shouldFail logic into a separate method and adds comments to explain its purpose. Additionally, I've included a logging statement to ensure that the message is captured in the test output for better debugging.

public static class DataDrivenTestHavingValidSuccessPercentageAndInvocationCount {

    private boolean shouldFail = true;
    private Map<Integer, AtomicInteger> tracker = new HashMap<>();

    @Test(
        successPercentage = 30,
        dataProvider = "dp",
        invocationCount = 2,
        dataProviderClass = TestclassSample.class)
    public void dataDrivenTestMethodWithInvocationCount(int i) {
        int current = tracker.computeIfAbsent(i, k -> new AtomicInteger()).incrementAndGet();
        String msg = String.format("Parameter [%d], Invocation [%d]", i, current);
        
        // Log the message for better debugging
        System.out.println(msg);
        
        if (i != 1) { // If the parameter is NOT 1, then just fail
            Assert.fail("Failing test " + msg);
        }
        
        // Encapsulate the flaky behavior in a separate method
        simulateFlakyBehavior(msg);
    }

    /**
     * Simulates flaky test behavior by alternating the test outcome.
     * This method is specifically designed to fail the first time it is called
     * for the parameter value 1, and pass on subsequent invocations.
     */
    private void simulateFlakyBehavior(String msg) {
        if (shouldFail) {
            shouldFail = false;
            Assert.fail("Failing test " + msg);
        }
    }
}

This change makes the test logic more modular and easier to understand, while also improving the debugging experience by logging the test details before any assertions are made. Let me know if you have any further suggestions or questions! 😊


Learnings added
Learnt from: juherr
PR: testng-team/testng#3171
File: testng-core/src/test/java/test/invocationcount/issue1719/TestclassSample.java:24-45
Timestamp: 2024-09-05T07:53:26.130Z
Learning: When suggesting improvements to test code, encapsulating complex or repetitive logic into separate methods can enhance readability and maintainability. Additionally, logging important messages before assertions can improve the debugging experience by ensuring that the details are captured in the test output.



public static class RegularTestWithZeroSuccessPercentage {
@Test(successPercentage = 0)
public void simpleTestMethod() {
Assert.fail();
}
}

@Test(successPercentage = 0, invocationCount = 2)
public void testMethodWithMultipleInvocations() {
Assert.fail();
public static class RegularTestWithZeroSuccessPercentageAndInvocationCount {
@Test(successPercentage = 0, invocationCount = 2)
public void testMethodWithMultipleInvocations() {
Assert.fail();
}
}
}
@@ -0,0 +1,18 @@
package test.invocationcount.issue3170;

import static org.testng.Assert.assertEquals;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class DataDrivenWithSuccessPercentageAndInvocationCountDefinedSample {
  @Test(dataProvider = "test", invocationCount = 10, successPercentage = 90)
  public void sampleTestCase(String string) {
    assertEquals(string, "1");
  }

  @DataProvider(name = "test")
  public Object[][] testProvider() {
    return new Object[][] {{"2"}};
  }
}
@@ -0,0 +1,18 @@
package test.invocationcount.issue3170;

import static org.testng.Assert.assertEquals;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class DataDrivenWithSuccessPercentageDefinedSample {
  @Test(dataProvider = "test", successPercentage = 99)
  public void sampleTestCase(String string) {
    assertEquals(string, "1");
  }

  @DataProvider(name = "test")
  public Object[][] testProvider() {
    return new Object[][] {{"1"}, {"2"}, {"3"}, {"4"}};
  }
}