🔥🚧Improving test coverage for helpers and other core functionality #259
timburke-hackit started this conversation in Firebreak April 24
🌵What is the problem or issue we're trying to address?
Helpers and some functionality provided by the platform (alerts, automation tasks) are used extensively by different users. Changes a user makes to the behaviour of these functions may have unintended consequences for others, and these should be caught by test coverage before they're deployed.
🎯How is this affecting producers, consumers or platform engineers?
Producers
May use helper functions in their workflows whose behaviour, if changed, would either break their implementation or affect the data they are bringing onto the platform.
Platform Engineers
Want to know that changes they make to helpers will not adversely affect existing implementations, or to find out that a change is breaking before it's deployed to the platform.
📝What is the proposed task?
- Review test coverage of existing functions.
- Create tests where this coverage is lacking.
- Agree and document a coverage standard for helper and utility functions.
- Identify utility lambdas and other code owned by the platform (rather than by a particular producer's implementation) and apply the same process.
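To make the task concrete, here is a minimal sketch of the kind of regression test this work would produce. `normalise_postcode` is a hypothetical helper invented for illustration, not an existing platform function; the point is that the test pins the behaviour consumers depend on, so a breaking change fails CI before deployment.

```python
def normalise_postcode(raw: str) -> str:
    """Hypothetical helper: canonicalise a UK postcode to 'OUTCODE INCODE'."""
    cleaned = raw.strip().upper().replace(" ", "")
    return f"{cleaned[:-3]} {cleaned[-3:]}"


def test_normalise_postcode_pins_existing_behaviour():
    # These assertions document the contract other users rely on; if a change
    # alters the output format, this test fails and the change is flagged as
    # breaking before it reaches the platform.
    assert normalise_postcode(" e8 1dy ") == "E8 1DY"
    assert normalise_postcode("SW1A1AA") == "SW1A 1AA"
```

A group review session could walk each existing helper and write down its contract as assertions like these, then fill in the tests behind them.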
🤔How might this work be carried out?
A group code review, walking the helpers and agreeing their expected behaviour before writing tests?
⌛How urgent is this work?
Pretty urgent: we've had a few breaking changes recently that should have been caught by CI but weren't.
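One way to stop this class of regression is a coverage gate in the CI pipeline. This is a sketch of a possible CI step, not the project's actual configuration; the paths, package name, and the threshold of 90 are placeholder assumptions that the agreed coverage standard would replace.

```shell
# Hypothetical CI step: run the helper test suite with pytest-cov and fail
# the build if coverage drops below the agreed threshold.
pip install pytest pytest-cov
pytest tests/helpers --cov=helpers --cov-fail-under=90
```

With `--cov-fail-under` in place, a change that removes or bypasses tests fails the pipeline rather than silently reducing coverage.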
💪How much effort do you think this will take?
M
🛠️What skills are needed?
Python
📃Additional Info:
No response