evaluation of stitcher #61
Comments
Given the new output format that's being discussed in #41, the evaluation plan is as follows:
Done pretty much as described above, with one difference. Mimicking the sample rate exactly was impossible: the app currently only accepts milliseconds, while the rate used for the annotation was based on frame numbers, I think. So for each annotation I used a frame whose timestamp was within at most 32 ms.
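The 32 ms tolerance described above can be sketched as a frame-to-millisecond alignment step. This is a hypothetical illustration, not code from the app: the function names and the 29.97 fps frame rate are assumptions.

```python
# Hypothetical sketch: map an annotated frame index to the nearest
# millisecond timestamp the app can sample, keeping the match only if
# it falls within a 32 ms tolerance. FPS value is an assumption.
FPS = 29.97
TOLERANCE_MS = 32


def frame_to_ms(frame: int, fps: float = FPS) -> float:
    """Convert a frame index to its timestamp in milliseconds."""
    return frame / fps * 1000


def nearest_sampled_ms(frame: int, sample_step_ms: int):
    """Return the sampled millisecond closest to the frame's timestamp,
    or None when no sample lies within the tolerance."""
    ts = frame_to_ms(frame)
    candidate = round(ts / sample_step_ms) * sample_step_ms
    return candidate if abs(candidate - ts) <= TOLERANCE_MS else None


# Frame 300 at ~10010 ms lines up with a 1000 ms sampling grid,
# but frame 15 at ~500 ms does not.
print(nearest_sampled_ms(300, 1000))
print(nearest_sampled_ms(15, 1000))
```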
Because
We want to see if the stitcher/smoothing added via #33 is doing well, independent from the accuracy of image-level classification model.
Done when
Controlled evaluation is done to measure the effectiveness of the stitcher. At a high level, the evaluation should measure the performance difference between raw image classification results and image classification results re-constructed from `TimeFrame` annotations.
Additional context
Original idea of having this evaluated was proposed by @owencking in his email on 12/15/2023. Here's an excerpt from it.
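The comparison described under "Done when" could be sketched roughly as follows. This is only an illustration of the idea, not the actual evaluation code: the `TimeFrame` shape, label names, and agreement metric here are assumptions.

```python
# Hypothetical sketch: expand stitched TimeFrame annotations back into
# per-timepoint labels, then compare them against the raw image-level
# classification results at the same timepoints.
from typing import NamedTuple


class TimeFrame(NamedTuple):
    label: str
    start_ms: int  # inclusive
    end_ms: int    # exclusive


def reconstruct_labels(frames, timepoints_ms, negative_label="other"):
    """Assign each sampled timepoint the label of the TimeFrame that
    covers it, falling back to a negative label when none does."""
    labels = []
    for t in timepoints_ms:
        covering = next((f.label for f in frames
                         if f.start_ms <= t < f.end_ms), negative_label)
        labels.append(covering)
    return labels


def agreement(raw, reconstructed):
    """Fraction of timepoints where the raw predictions and the
    TimeFrame-reconstructed labels agree."""
    assert len(raw) == len(reconstructed)
    return sum(r == s for r, s in zip(raw, reconstructed)) / len(raw)


# Toy example with made-up labels and times.
frames = [TimeFrame("slate", 0, 2000), TimeFrame("chyron", 3000, 4000)]
points = [0, 1000, 2000, 3000, 4000]
recon = reconstruct_labels(frames, points)
raw = ["slate", "slate", "other", "chyron", "chyron"]
print(recon)
print(agreement(raw, recon))
```

In a real run, per-class precision/recall against gold annotations would likely replace the simple agreement score, but the core step is the same: flattening `TimeFrame` spans back to timepoint labels so the two result sets are directly comparable.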