Is your feature request related to a problem? Please describe.
We currently have a notebook that shows how PoseNet and MoveNet perform on various fall data.
https://github.com/ambianic/fall-detection/blob/main/MoveNet_Vs_PoseNet.ipynb
We also have a test suite that establishes a baseline for fall detection performance.
https://github.com/ambianic/fall-detection/tree/main/tests
However, we don't have an interactive representation of the data samples that demonstrates where fall detection is accurate and where it fails.
Describe the solution you'd like
It would be helpful to have an interactive notebook that provides examples of fall detection performance across the various challenging cases we've discovered and documented in this blog post:
https://blog.ambianic.ai/2021/09/02/movenet-vs-posenet-person-fall-detection.html
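As a starting point, a single cell of such a notebook could loop over labeled samples, run the detector, and show where the prediction agrees with the ground truth. The sketch below is only illustrative: `Sample`, `detect_fall`, and the file paths are hypothetical placeholders, not the repo's actual API.

```python
# Minimal sketch of one interactive notebook cell.
# detect_fall() is a placeholder for the real pipeline call; replace it with
# the actual model invocation from the fall-detection repo.
from dataclasses import dataclass

@dataclass
class Sample:
    category: str        # challenge category, e.g. "partial-occlusion", "low-light"
    image_1: str         # path to the first frame
    image_2: str         # path to the second frame
    expected_fall: bool  # community-contributed ground truth label

def detect_fall(image_1: str, image_2: str) -> bool:
    """Stand-in for the real fall detection inference on a pair of frames."""
    return False  # dummy result so the cell runs end to end

samples = [
    Sample("partial-occlusion", "data/occlusion_01_a.png", "data/occlusion_01_b.png", True),
    Sample("partial-occlusion", "data/occlusion_02_a.png", "data/occlusion_02_b.png", False),
]

for s in samples:
    predicted = detect_fall(s.image_1, s.image_2)
    verdict = "correct" if predicted == s.expected_fall else "WRONG"
    print(f"[{s.category}] expected={s.expected_fall} predicted={predicted} -> {verdict}")
```

Grouping the cells by challenge category would make it easy to see at a glance which cases the current models handle well and which ones need more data.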
Accordingly, we should label each sample in the interactive data set as True/False in order to begin establishing best practices for fall detection labeling that other researchers can follow when contributing additional training and test data. Ultimately, the goal is to use the accumulated community-contributed data to train a new fall detection DNN classification model.
For each known challenge category we should have around 10 examples with both positively and negatively labeled detections.
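To keep community contributions consistent, the labels could live in a simple manifest file alongside the sample images. Below is a hedged sketch assuming a CSV manifest; the column names and category values are assumptions for illustration, not an established convention of the fall-detection repo.

```python
# Sketch of a label manifest for community-contributed fall detection samples.
# Each row ties a pair of frames to a challenge category and a True/False label.
import csv

rows = [
    {"sample_id": "occlusion_001", "challenge_category": "partial-occlusion",
     "image_1": "samples/occlusion_001_a.png", "image_2": "samples/occlusion_001_b.png",
     "fall": True},
    {"sample_id": "lowlight_001", "challenge_category": "low-light",
     "image_1": "samples/lowlight_001_a.png", "image_2": "samples/lowlight_001_b.png",
     "fall": False},
]

with open("fall_samples.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```

A flat manifest like this is trivial to diff in pull requests and to load back into the notebook or the test suite when building the future DNN classification training set.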