Question regarding the DHG-14/28 dataset #11

Open
mashrurmorshed opened this issue Nov 3, 2021 · 0 comments
mashrurmorshed commented Nov 3, 2021

Hello. I'd like to open a PR at some point to add support for the DHG-14/28 dataset [ site | paper ]. It's a challenging dynamic hand-gesture recognition dataset consisting of three modalities (a rough loading sketch follows the list):

  • Depth videos / sequences of 16-bit depth-maps, at resolution 640x480
  • Sequences of 2D skeleton coordinates (in the image space) of 22 hand joints (frames, 22*2)
  • Sequences of 3D skeleton coordinates (in the world space), (frames, 22*3)
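For concreteness, loading one sequence's skeleton data might look roughly like this. This is only a sketch: the directory layout and the file names `skeleton_image.txt` / `skeleton_world.txt` are assumptions about the raw DHG-14/28 distribution, not anything already in MultiBench.

```python
import numpy as np

# Hypothetical per-sequence path; the real DHG-14/28 directory layout may differ.
seq_dir = "gesture_1/finger_1/subject_1/essai_1"
skel_2d = np.loadtxt(f"{seq_dir}/skeleton_image.txt")  # (frames, 44), image-space coords
skel_3d = np.loadtxt(f"{seq_dir}/skeleton_world.txt")  # (frames, 66), world-space coords

# Reshape each flat per-frame row into (frames, 22 joints, 2 or 3 coordinates).
skel_2d = skel_2d.reshape(-1, 22, 2)
skel_3d = skel_3d.reshape(-1, 22, 3)
```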

However, there's a small catch: the standard evaluation protocol for this dataset differs a bit from the norm.

The dataset contains exactly 2800 instances, performed by 20 distinct people. Benchmarks on this dataset are evaluated through a 20-fold, leave-one-out cross-validation process: models are trained 20 times, and in each fold the data of 19 people is used for training while the remaining person's data is strictly held out and used for validation. This prevents any data leakage and is meant to make the evaluation more robust.
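To make the protocol concrete, the per-fold split logic would look roughly like this. It's only a sketch: `samples` and its `subject` field are placeholder names used for illustration, not existing MultiBench code.

```python
# Each sample is tagged with the ID (1..20) of the person who performed it.
# Dummy stand-in data; in practice these would be the 2800 DHG sequences.
samples = [{"subject": (i % 20) + 1, "data": i} for i in range(2800)]

def leave_one_subject_out(samples, held_out_subject):
    """Split into train (19 subjects) and val (the single held-out subject)."""
    train = [s for s in samples if s["subject"] != held_out_subject]
    val = [s for s in samples if s["subject"] == held_out_subject]
    return train, val

# The benchmark trains and evaluates a fresh model once per held-out subject.
for held_out in range(1, 21):
    train_samples, val_samples = leave_one_subject_out(samples, held_out)
    # ... build dataloaders, train, evaluate on val_samples ...
```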

The instructions in MultiBench mention implementing get_dataloader and having it return three dataloaders for train, val, and test respectively. However, there is no test split in this dataset, only the 20 train/val combinations.

Would it be okay to implement it in such a way that it returns training and validation dataloaders only?
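One way this could look is sketched below. This is just a proposal sketch, not the existing MultiBench API: `DHGDataset`, the argument names, and the `samples` structure are hypothetical, and callers would loop `held_out_subject` over 1..20 to run the full 20-fold protocol.

```python
from torch.utils.data import DataLoader, Dataset

class DHGDataset(Dataset):
    """Hypothetical wrapper over a list of preloaded DHG-14/28 samples."""
    def __init__(self, samples):
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]

def get_dataloader(samples, held_out_subject, batch_size=32):
    """Return (train, val) loaders for one leave-one-subject-out fold.

    DHG-14/28 has no separate test split, so only two loaders are returned.
    """
    train_samples = [s for s in samples if s["subject"] != held_out_subject]
    val_samples = [s for s in samples if s["subject"] == held_out_subject]
    train_loader = DataLoader(DHGDataset(train_samples), batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(DHGDataset(val_samples), batch_size=batch_size)
    return train_loader, val_loader
```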
