-
@marioaag about the extra items to add here, we are open to adding new things if needed. So you can open an issue and we can discuss/implement it there. Depending on the item, I can help with adding support.
-
@marioaag about your second question
-
Even though the whiteboard solution covers, in some way, what I'm trying to do, don't you think that, given the possibility of having different configurations for each one of the tests, getting the test config values and parameter values by default in the results.json file would be useful? In my specific scenario it would save me from adding the whiteboard print in each one of the tests.
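For reference, this is roughly what that per-test boilerplate looks like with the whiteboard approach (a minimal sketch; the class and test names match the results.json snippet further down, but the parameter name is made up):

```python
import json

from avocado import Test


class CheckHostStatus(Test):
    """Sketch of the test shown in the results.json snippet below."""

    def test_os_info(self):
        # ... actual checks elided ...
        # This is the per-test boilerplate I'd like to avoid: copying
        # parameter values into the whiteboard so they end up in the
        # "whiteboard" field of results.json.
        self.whiteboard = json.dumps({
            "hostname": self.params.get("hostname", default="unknown"),
        })
```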
-
I'm interested in adding extra details to the results.json file, in the tests section. Is there a mechanism to include extra values?
Example:
```json
"tests": [
    {
        "end": 4.214863779,
        "fail_reason": "",
        "id": "160621-162417945045.BlueGenParallel.-1-/Users/marioalvarado/git/iLabConfig/test/avocado/testcases/CheckHostStatus/CheckHostStatus.py:CheckHostStatus.test_os_info",
        "logdir": "/tmp/avocado_logs/job-2021-06-16T11.24-51227b3/test-results/160621-162417945045.BlueGenParallel.-1-_Users_marioalvarado_git_iLabConfig_test_avocado_testcases_CheckHostStatus_CheckHostStatus.py_CheckHostStatus.test_os_info",
        "logfile": "",
        "start": 0.757385825,
        "status": "PASS",
        "tags": {},
        "time": 3.457477954,
        "whiteboard": ""
    },
```
I would like to add extra values here.
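Right now I'm considering post-processing the file after the job finishes, along these lines (just a sketch: the logs path is an assumption based on the logdir above, and "config" is a made-up key):

```python
import json

# Avocado keeps a "latest" symlink to the most recent job directory;
# the exact logs path here is an assumption, not a documented default.
RESULTS = "/tmp/avocado_logs/latest/results.json"

with open(RESULTS) as fp:
    results = json.load(fp)

for test in results["tests"]:
    # Attach the extra per-test values; anything JSON-serializable works.
    test["config"] = {"build": "BlueGenParallel"}

with open(RESULTS, "w") as fp:
    json.dump(results, fp, indent=4)
```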
And also another quick question: what is the meaning of the start and end values? What is the format used?
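Looking at the numbers above, end - start matches the time field exactly, so my guess is that start and end are relative timestamps in seconds rather than wall-clock dates, but I'd like to confirm:

```python
# Values copied from the test entry above: end - start equals "time",
# which suggests start/end are relative timestamps in seconds.
start, end, duration = 0.757385825, 4.214863779, 3.457477954
assert abs((end - start) - duration) < 1e-9
```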