Hello. I'd like to open a PR at some point to add support for the DHG-14/28 dataset [ site | paper ]. It's a challenging dynamic hand-gesture recognition dataset consisting of three modalities (see the shape sketch after the list):
Depth videos / sequences of 16-bit depth maps at 640x480 resolution
Sequences of 2D skeleton coordinates (in image space) for 22 hand joints, shape (frames, 22*2)
Sequences of 3D skeleton coordinates (in world space), shape (frames, 22*3)
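For concreteness, here is a minimal sketch of what one sequence might look like as PyTorch tensors; the variable names and the assumption of (height, width) depth layout are mine, not an existing MultiBench convention:

```python
import torch

# Hypothetical tensors for one gesture sequence of T frames.
T = 50                                                # frame count varies per sequence
depth = torch.zeros(T, 480, 640, dtype=torch.int16)  # 16-bit depth maps, 640x480
skel_2d = torch.zeros(T, 22, 2)                       # 22 hand joints in image space
skel_3d = torch.zeros(T, 22, 3)                       # 22 hand joints in world space

# Flattened per-frame views matching the (frames, 22*2) / (frames, 22*3) layout:
skel_2d_flat = skel_2d.reshape(T, 22 * 2)
skel_3d_flat = skel_3d.reshape(T, 22 * 3)
```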
However, there's a small issue: the standard evaluation protocol for this dataset differs from the norm.
The dataset contains exactly 2800 instances, performed by 20 unique subjects. Benchmarks on this dataset are evaluated through a 20-fold, leave-one-subject-out cross-validation: the model is trained 20 times, each time on 19 subjects' data, while the remaining subject's data is strictly held out and used for validation. This prevents data leakage across subjects and is meant to make the evaluation more robust.
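As an illustration of the protocol (using scikit-learn's LeaveOneGroupOut purely as a sketch; the per-subject count of 140 = 2800 / 20 and the variable names are assumptions):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical metadata: one subject id per data instance (2800 total).
subjects = np.repeat(np.arange(20), 140)   # 20 subjects x 140 sequences each
X = np.arange(2800)                        # stand-in for the actual samples

logo = LeaveOneGroupOut()
for fold, (train_idx, val_idx) in enumerate(logo.split(X, groups=subjects)):
    # 19 subjects go to training; exactly 1 held-out subject to validation.
    assert len(set(subjects[val_idx])) == 1
    print(f"fold {fold}: train={len(train_idx)}, val={len(val_idx)}")
```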
The instructions in MultiBench say to implement get_dataloader and have it return 3 dataloaders, for train, val, and test respectively. However, there is no test split in this dataset; rather, there are 20 combinations of train and val.
Would it be okay to implement it so that it returns training and validation dataloaders only?
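Here is a minimal sketch of what that could look like; the get_dataloader name comes from the MultiBench instructions, while the dataset object, the subject bookkeeping, and the parameters are hypothetical:

```python
from torch.utils.data import DataLoader, Subset

def get_dataloader(dataset, subjects, held_out_subject, batch_size=32):
    """Return (train_loader, val_loader) for one leave-one-subject-out fold.

    `dataset` is a hypothetical torch Dataset over all 2800 sequences, and
    `subjects[i]` is the subject id (0..19) of sample i.
    """
    train_idx = [i for i, s in enumerate(subjects) if s != held_out_subject]
    val_idx = [i for i, s in enumerate(subjects) if s == held_out_subject]
    train_loader = DataLoader(Subset(dataset, train_idx),
                              batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(Subset(dataset, val_idx),
                            batch_size=batch_size, shuffle=False)
    return train_loader, val_loader

# Usage: 20 folds, one per held-out subject.
# for subject in range(20):
#     train_loader, val_loader = get_dataloader(dhg, subjects, subject)
```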