Summary
I noticed that some of our samples don't work properly with the current version of lifecycle (e.g. buildpacks/samples#167, buildpacks/samples#163).
These samples are very useful for identifying possible regressions and for the community: we use them in our documentation, we often rely on them during troubleshooting, and we should treat them as first-class citizens.
Manual solution
To verify that our code has no regressions and to keep the samples up to date, we would have to try each of them against every new stable release of pack/lifecycle and confirm that they still work.
Downside
This requires a lot of time and effort on our side, and we could easily forget to do it.
Proposal
I thought about an automatic and more efficient way to do this: run a set of smoke tests against those samples (ideally in our CI) every time we release a new version of lifecycle/pack.
This would force us to update the samples with the latest changes before releasing a stable version, and it would verify that we didn't introduce any regressions.
The process could look like:
Build lifecycle
Run the lifecycle components one by one against our samples
Do that for every important/basic scenario
Check for failures
If everything went well, we can release; otherwise, we need to fix our samples first
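The steps above could be sketched as a small CI script. This is only a sketch: the `run_smoke_tests` helper, the samples directory layout, and the exact build invocation are assumptions, not an existing part of our tooling.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical smoke-test runner: iterates over every sample app in a
# directory and runs a build command against it, counting failures.
#
# Usage: run_smoke_tests <samples-dir> <build-command...>
# The sample directory path is appended as the last argument of the
# build command for each app.
run_smoke_tests() {
  local samples_dir="$1"; shift
  local failures=0
  for app in "$samples_dir"/*/; do
    echo "smoke-testing ${app}"
    if ! "$@" "$app"; then
      echo "FAILED: ${app}"
      failures=$((failures + 1))
    fi
  done
  if (( failures > 0 )); then
    echo "${failures} sample(s) failed; fix the samples before releasing"
    return 1
  fi
  echo "all samples passed"
}
```

In CI this might be invoked as something like `run_smoke_tests samples/apps pack build smoke-image --builder <builder> --path` (so each app directory lands after `--path`); the image and builder names here are placeholders. Any non-zero exit would block the release until the samples are fixed.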
Context
https://www.guru99.com/smoke-testing.html
https://martinfowler.com/articles/practical-test-pyramid.html