Test result is "Fail" when all testSteps completed #470
Comments
I can confirm this error as well. Even the example from this page (https://kuttl.dev/docs/kuttl-test-harness.html#run-the-tests) results in an error in a live cluster.
I have the same problem, but observed that the test namespace generated by kuttl takes some time to delete. In my case, this is due to finalizers being run to clean up the resources deployed during my tests. When I remove the resources with a TestStep using the …, I suppose the … I would suggest that a configurable timeout for namespace deletion could help here, maybe as an optional property on the kuttl TestSuite and as a command-line flag.
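For illustration, a minimal sketch of such a cleanup TestStep, assuming kuttl's standard delete field on TestStep; the example.com/v1 MyResource kind and name are placeholders, not anything taken from this issue:

```yaml
# Hypothetical cleanup step (e.g. tests/e2e/example/99-cleanup.yaml).
# Deleting the test's resources explicitly lets their finalizers run
# before kuttl tears down the namespace.
apiVersion: kuttl.dev/v1beta1
kind: TestStep
delete:
  - apiVersion: example.com/v1   # placeholder group/version
    kind: MyResource             # placeholder kind
    name: my-resource            # placeholder name
```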
According to this line, the same timeout value used for the TestSuite configuration is used for the cleanup process. I can confirm that it works, since I had the exact same issue and creating a TestSuite with a larger timeout resolved it.
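For reference, this is roughly what that looks like, assuming the standard kuttl TestSuite fields; the testDirs path and the 600-second value are illustrative, not taken from this issue:

```yaml
# kuttl-test.yaml -- per the comment above, this timeout also bounds
# the namespace cleanup phase, not just the test steps themselves.
apiVersion: kuttl.dev/v1beta1
kind: TestSuite
testDirs:
  - ./tests/e2e/
# Raise this if finalizer-driven cleanup is slow (kuttl's default is 30s).
timeout: 600
```

The equivalent on the command line is the --timeout flag of kubectl kuttl test.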
What happened:
I use kuttl to validate the creation of certain k8s custom resources. The first 3 test cases run in the host cluster and succeed. In the last TestStep I generate another kubeconfig for a vcluster, where I want to run the last TestAssert using --kubeconfig. Running the test, I see all of them succeed, and for each of them I see "test step completed", but the end result of kuttl is
"case.go:114: timed out waiting for the condition
=== CONT kuttl
harness.go:405: run tests finished
harness.go:513: cleaning up
harness.go:570: removing temp folder: ""
--- FAIL: kuttl (544.85s)
--- FAIL: kuttl/harness (0.00s)
--- FAIL: kuttl/harness/create (537.77s)
FAIL"
What you expected to happen:
As long as all conditions are met during the asserts and all tests complete without failing, I expect the end result of the test to be "Pass".
How to reproduce it (as minimally and precisely as possible):
Create another kubeconfig during one of the TestSteps and run the assert using "--kubeconfig", as sketched below.
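A minimal sketch of such a step, assuming the check is run as kubectl through kuttl's commands list; the file name, vcluster.kubeconfig, and my-app are placeholders, and how the vcluster kubeconfig is generated is not shown:

```yaml
# Hypothetical final step, e.g. tests/e2e/example/04-assert-in-vcluster.yaml;
# everything named here is a placeholder for the reporter's actual resources.
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
  # Run the check against the vcluster instead of the harness's cluster.
  - command: kubectl --kubeconfig ./vcluster.kubeconfig get deployment my-app -n default
```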
Anything else we need to know?:
The result becomes "Pass" if I omit the last step, which uses the other kubeconfig.
Environment:
- Kubernetes version (kubectl version):
- KUTTL version (kubectl kuttl version):
- Kernel (uname -a):