@proppy and I discussed the idea of creating a dashboard summarizing test results and coverage for the RLE/DBE encoders and decoders. The dashboard would collect information about encoding correctness, throughput, and throughput/area comparisons. If we add this type of functionality, we could consider making it more universal rather than tying it to the encoding examples.
Dashboard generation could be a separate rule that triggers all the necessary tests and collects their results. The overall status of each test could be obtained through, for example, the Bazel Build Event Protocol (though I'm not sure that's the best option available), but details about the throughput measurements would have to be provided separately. This may require creating additional test result files or printing extra information directly to the log.
What do you think about this approach? Perhaps you have ideas about a possible implementation of this feature, or specific requirements you'd like included as part of this functionality.
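To make the Build Event Protocol idea concrete, here is a minimal sketch of how a dashboard-generation step could consume the newline-delimited JSON that `bazel test --build_event_json_file=bep.json` emits, pulling out per-target test statuses. The field names follow the BEP JSON format, but the helper name and the target labels in the usage example are hypothetical:

```python
import json

def collect_test_statuses(bep_path):
    """Scan a Bazel BEP JSON file (one event per line) and map each
    test target label to its reported status (e.g. PASSED/FAILED)."""
    statuses = {}
    with open(bep_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            event = json.loads(line)
            # Test results are identified by an "id" carrying a "testResult" key.
            test_id = event.get("id", {}).get("testResult")
            if test_id is not None:
                payload = event.get("testResult", {})
                statuses[test_id["label"]] = payload.get("status", "UNKNOWN")
    return statuses

# Hypothetical usage: a dashboard rule could merge this with
# separately-collected throughput numbers keyed by the same labels.
# statuses = collect_test_statuses("bep.json")
# -> {"//rle:encoder_test": "PASSED", ...}
```

The throughput and area numbers would still need their own side channel (extra result files or log parsing), with the target label as the join key between the two data sources.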
CC @proppy