🎯 Problem to be solved
Currently the CI test steps are tightly dependent on each other: we cannot cancel a step (all of them must finish), and we cannot retry a single step. This makes retrying a failed unit test step really irritating, as you first need to wait for the integration and compose tests to finish (~20 minutes).
From a quick look, it seems that `if: ${{ always() }}` not only makes a job run even when the jobs it depends on fail, it also makes that job non-cancellable.
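For illustration, the current chained layout looks roughly like this (job names, the workflow name, and the `make` commands are placeholders, not the actual CI file):

```yaml
# Hypothetical sketch of the current setup: each suite `needs` the
# previous one and is guarded by `always()`, so a failed unit-test job
# can only be retried after the whole chain has finished.
name: ci
on: [push]
jobs:
  unit-tests:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - run: make test-unit          # placeholder command
  integration-tests:
    needs: unit-tests
    if: ${{ always() }}              # runs even if unit-tests fail; also blocks cancellation
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - run: make test-integration   # placeholder command
  compose-tests:
    needs: integration-tests
    if: ${{ always() }}
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - run: make test-compose       # placeholder command
```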
If we make the test steps independent of each other and run them in parallel, the issue is solved. By design, these tests should not depend on each other anyway. One thing to keep in mind: there were previous attempts to parallelise the tests, but the machines hosting the runners could not handle that much load. However, we now run self-hosted runners, so there is a chance the situation has improved on that front.
🛠️ Proposed solution
- Change the CI YAML to run the unit, integration and compose tests independently of each other
- Ensure the tests still pass and that the order in which they run has no effect on the outcome
- Ensure a single test suite can be retried
- Ensure a test suite can be cancelled
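The core of the change could be sketched as follows: drop the `needs:` chain and the `always()` conditions so the three suites start in parallel, and each job can then be retried or cancelled on its own (job names and commands are again placeholders):

```yaml
# Hypothetical parallel layout: no `needs:` between the suites, so
# GitHub Actions schedules all three jobs concurrently.
jobs:
  unit-tests:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - run: make test-unit          # placeholder command
  integration-tests:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - run: make test-integration   # placeholder command
  compose-tests:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - run: make test-compose       # placeholder command
```

Whether the self-hosted runners can actually absorb three suites at once is exactly what the verification steps above should confirm.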