Some tests need to report results beyond the simple PASS/FAIL status, and, more importantly, we need to be able to track those results. Here are the ideas so far:
We use the "DONE" status to signify that the test doesn't produce a simple PASS/FAIL verdict, and that we need to look elsewhere for its results.
We attach a set of metric fields to each test (result), one for each value type, e.g. an integer, a floating-point number, or a string. We call the set "value", so it reads "test->value".
This would be close to the "status" level of abstraction. "Metric" would be a confusing name, as it would be closer to the "test metric" sense, i.e. a measurement about the test itself.
We create separate columns in the test table, one per value type, to enable indexing and thus fast comparison (see the sketch after the examples below). In JSON we allow only one of the value types at a time. E.g.:
"status": "DONE",
"value": {"float": 3.145},
Or:
"status": "DONE",
"value": {"int": 42},
If people want to report more than one value as the test output, they would need to report a (synthetic) subtest for each one. This way the system would be flexible enough to track each metric separately. If any tests require tracking a combination of metrics, they should boil it down to a single value on their own and report that in a parent test of the combined metrics, as in the example below.
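For example, a benchmark producing two measurements could report one synthetic subtest per metric, with the boiled-down combined score on the parent. The "name" and "subtests" fields here are illustrative assumptions, not a settled schema:

```json
{
    "name": "io-benchmark",
    "status": "DONE",
    "value": {"float": 87.5},
    "subtests": [
        {"name": "throughput", "status": "DONE", "value": {"float": 512.3}},
        {"name": "latency", "status": "DONE", "value": {"float": 1.2}}
    ]
}
```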