Should we set only the pre-existing bads when generating the HTML repr for the original data, or the pre-existing bads plus all bads found by the data-quality checks?
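As a plain-Python sketch of the two options being weighed (the channel names and variable names here are made up for illustration; they are not from the pipeline):

```python
# Hypothetical channel lists for illustration only.
sidecar_bads = ["MEG 012", "EEG 053"]  # bads shipped in the BIDS sidecar
autobad = ["MEG 034"]                  # flagged by automated noisy-channel detection
flat = ["EEG 007"]                     # flagged by automated flat-channel detection

# Option A: only the pre-existing bads, for the "original data" HTML repr.
original_bads = list(sidecar_bads)

# Option B: pre-existing bads plus everything from the data-quality checks.
all_bads = sorted(set(sidecar_bads) | set(autobad) | set(flat))
```

Option A keeps the first report entry faithful to what the dataset author shipped; Option B folds the pipeline's own findings into the same view.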
To me the optimal thing to do here is to have two report entries:

1. The data as it comes originally from the dataset author, with bads from the sidecar. It tells you what you start with before the pipeline does anything to it.
2. The data after our auto-bads (this section only needs to exist if we do some auto-bad detection). It tells you what the automated approaches did to the data.
For brevity, it's fine for (1) to have all the nice raw data plots and for (2) to have less -- maybe just a list of the channels removed. I think we're already close to this organization:

But from that data-quality plot it's not immediately clear which channels were added as bad, if any. Even an HTML entry that lists the bads (when bad-channel detection is enabled) and the flats (when flat-channel detection is enabled) would be a nice help.
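A minimal sketch of what such an entry could look like: build a small HTML list of the auto-detected channels, which could then be attached to the report (e.g. via `mne.Report.add_html`). The helper name and the channel names are hypothetical, not part of the pipeline:

```python
def bads_html(autobad, flat):
    """Return an HTML list of auto-detected noisy and flat channels."""
    rows = []
    if autobad:
        rows.append(f"<li>Noisy (auto-bad): {', '.join(autobad)}</li>")
    if flat:
        rows.append(f"<li>Flat: {', '.join(flat)}</li>")
    if not rows:
        rows.append("<li>No channels were auto-marked as bad</li>")
    return "<ul>\n" + "\n".join(rows) + "\n</ul>"

# Hypothetical detection results for illustration.
html = bads_html(autobad=["MEG 034", "MEG 112"], flat=["EEG 007"])
```

The "no channels" fallback keeps the entry informative even when the checks find nothing, so a reader can tell the detection ran.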
Originally posted by @larsoner in #931 (comment)