This repository is based on Action2Motion and bvh-converter.
model: https://github.com/EricGuo5513/action-to-motion
data preprocessing: https://github.com/tekulvw/bvh-converter
git clone https://github.com/CSID-DGU/NIA-MoCap-2.git
cd NIA-MoCap-2
pip install torch
pip install pillow
pip install scipy
pip install matplotlib
pip install opencv-python
pip install pandas
pip install joblib
(To access the data, use a VPN to set your location to South Korea, then open the link above.)
There are 142 action classes in the DGU-HAU dataset.
Each action class has about 100 data samples, giving 14,116 data samples in total.
The 3D human skeleton (motion capture) data contains 24 joints, indexed 0 to 23. The detailed position of each joint is described in the paper. (the paper link is TBU)
Spine: [0, 3, 6, 9, 12, 15]
Legs: [0, 1, 4, 7, 10], [0, 2, 5, 8, 11]
Arms: [9, 13, 16, 18, 20, 22], [9, 14, 17, 19, 21, 23]
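The chains above can be turned into (parent, child) bone edges for drawing or analyzing the skeleton. Below is a minimal sketch; the chain lists are copied from this README, while the chain names and the root being joint 0 are assumptions (the paper, link TBU, defines the exact joint semantics).

```python
# Kinematic chains of the 24-joint skeleton, as listed in this README.
# Chain names ("spine", "left_leg", ...) are illustrative labels only.
KINEMATIC_CHAINS = {
    "spine":     [0, 3, 6, 9, 12, 15],
    "left_leg":  [0, 1, 4, 7, 10],
    "right_leg": [0, 2, 5, 8, 11],
    "left_arm":  [9, 13, 16, 18, 20, 22],
    "right_arm": [9, 14, 17, 19, 21, 23],
}

def chains_to_edges(chains):
    """Convert each chain into (parent, child) bone edges.

    Consecutive indices in a chain form one bone; a set removes
    duplicates where chains share a starting joint.
    """
    edges = set()
    for chain in chains.values():
        edges.update(zip(chain[:-1], chain[1:]))
    return sorted(edges)

edges = chains_to_edges(KINEMATIC_CHAINS)
print(len(edges))  # 23 bones connecting the 24 joints (a tree)
```

Note that 23 edges over 24 joints means the skeleton forms a tree rooted at joint 0, which is the usual layout for motion capture hierarchies.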
@article{park2023dgu,
title={DGU-HAU: A Dataset for 3D Human Action Analysis on Utterances},
author={Park, Jiho and Park, Kwangryeol and Kim, Dongho},
journal={Electronics},
volume={12},
number={23},
pages={4793},
year={2023},
publisher={MDPI}
}