
Simplified Chinese | English

2s-AGCN


Contents

- Introduction
- Data
- Train
- Test
- Inference
- Reference

Introduction

Model structure diagram

2s-AGCN, published at CVPR 2019, is an improvement on ST-GCN. It proposes a two-stream adaptive graph convolutional network that addresses shortcomings of the original ST-GCN. In existing GCN-based methods, the topology of the graph is set manually and is fixed across all layers and input samples. In addition, the second-order information of skeleton data (the lengths and directions of bones) is naturally more informative and discriminative for action recognition, yet it had rarely been studied at the time. This paper therefore proposes a two-stream network that fuses joint and bone information, and adds adaptive, learnable matrices to the adjacency matrix of the graph convolution. These changes sharply improved the accuracy of skeleton-based action recognition and laid the foundation for subsequent work (most later skeleton-based methods build on this two-stream framework).
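The key idea of the adaptive graph can be sketched in a few lines of PaddlePaddle code. The layer below is an illustrative simplification, not the repository's actual implementation: the adjacency used for feature aggregation is the sum of the fixed skeleton graph A, a globally learned matrix B, and a data-dependent matrix C computed from embedded feature similarity.

```python
# Illustrative sketch of an adaptive graph convolution; names such as
# AdaptiveGCN, theta, and phi are ours, not the repository's.
import paddle
import paddle.nn as nn
import paddle.nn.functional as F

class AdaptiveGCN(nn.Layer):
    def __init__(self, in_channels, out_channels, A, embed_channels=16):
        super().__init__()
        # A: fixed physical adjacency of the skeleton, shape (V, V)
        self.A = paddle.to_tensor(A, dtype='float32')
        # B: learned adjacency shared by all samples, initialized near zero
        self.B = self.create_parameter(
            [A.shape[0], A.shape[1]],
            default_initializer=nn.initializer.Constant(1e-6))
        # theta/phi embed the input for the data-dependent graph C
        self.theta = nn.Conv2D(in_channels, embed_channels, 1)
        self.phi = nn.Conv2D(in_channels, embed_channels, 1)
        self.conv = nn.Conv2D(in_channels, out_channels, 1)

    def forward(self, x):
        # x: (N, C, T, V) -- batch, channels, frames, joints
        n, c, t, v = x.shape
        # C: per-sample similarity of every joint pair, softmax-normalized
        q = self.theta(x).transpose([0, 3, 1, 2]).reshape([n, v, -1])
        k = self.phi(x).reshape([n, -1, v])
        C = F.softmax(paddle.matmul(q, k), axis=-1)      # (N, V, V)
        adj = self.A + self.B + C                        # adaptive adjacency
        # aggregate neighbor features with the adaptive graph, then project
        y = paddle.einsum('nctv,nvw->nctw', x, adj)
        return self.conv(y)
```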

Data

Data download and processing are consistent with CTR-GCN. For details, please refer to NTU-RGBD Data Preparation.
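As background, the bone modality is derived from the joint modality as second-order differences along the skeleton: each bone vector is the difference between a joint's coordinates and its parent's. A minimal numpy sketch, using an illustrative subset of the NTU joint pairing rather than the full 25-joint list:

```python
import numpy as np

# (child, parent) pairs, 0-indexed; illustrative subset of the NTU skeleton
ntu_pairs = [(1, 0), (2, 1), (3, 2), (4, 2), (5, 4)]

def joints_to_bones(joint_data):
    """joint_data: (C, T, V, M) array of joint coordinates."""
    bone_data = np.zeros_like(joint_data)
    for child, parent in ntu_pairs:
        # bone vector = child joint minus parent joint
        bone_data[:, :, child, :] = (joint_data[:, :, child, :]
                                     - joint_data[:, :, parent, :])
    return bone_data
```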

Train

Train on NTU-RGBD

Train 2s-AGCN on NTU-RGBD using a single GPU:

```bash
# train cross subject with bone data
python main.py --validate -c configs/recognition/agcn2s/agcn2s_ntucs_bone.yaml --seed 1
# train cross subject with joint data
python main.py --validate -c configs/recognition/agcn2s/agcn2s_ntucs_joint.yaml --seed 1
# train cross view with bone data
python main.py --validate -c configs/recognition/agcn2s/agcn2s_ntucv_bone.yaml --seed 1
# train cross view with joint data
python main.py --validate -c configs/recognition/agcn2s/agcn2s_ntucv_joint.yaml --seed 1
```

The config file agcn2s_ntucs_joint.yaml corresponds to training 2s-AGCN on the NTU-RGB+D dataset with the cross-subject split and joint data; the other config files follow the same naming pattern.

Test

Test on NTU-RGB+D

Test scripts:

```bash
# test cross subject with bone data
python main.py --test -c configs/recognition/agcn2s/agcn2s_ntucs_bone.yaml -w data/2SAGCN_ntucs_bone.pdparams
# test cross subject with joint data
python main.py --test -c configs/recognition/agcn2s/agcn2s_ntucs_joint.yaml -w data/2SAGCN_ntucs_joint.pdparams
# test cross view with bone data
python main.py --test -c configs/recognition/agcn2s/agcn2s_ntucv_bone.yaml -w data/2SAGCN_ntucv_bone.pdparams
# test cross view with joint data
python main.py --test -c configs/recognition/agcn2s/agcn2s_ntucv_joint.yaml -w data/2SAGCN_ntucv_joint.pdparams
```
- Specify the config file with `-c`, and the weight file path with `-w`.
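Each command above evaluates one stream on its own. The two-stream result that gives 2s-AGCN its name comes from late fusion of the joint and bone class scores; a minimal sketch, assuming the per-sample scores of each stream have been saved to hypothetical .npy files:

```python
import numpy as np

# hypothetical score dumps: one (num_samples, num_classes) array per stream
joint_scores = np.load('joint_scores.npy')
bone_scores = np.load('bone_scores.npy')

fused = joint_scores + bone_scores   # equal-weight late fusion
predictions = fused.argmax(axis=1)   # final class per sample
```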

Accuracy on NTU-RGB+D dataset:

|                 |  CS   |   CV   |
| :-------------- | :---: | :----: |
| Js-AGCN (joint) | 85.8% | 94.13% |
| Bs-AGCN (bone)  | 86.7% | 93.9%  |

Train log: download

VisualDL log: download

checkpoints:

|       CS-Js       |       CS-Bs      |       CV-Js       |       CV-Bs      |
| :---------------: | :--------------: | :---------------: | :--------------: |
| ntu_cs_agcn_joint | ntu_cs_agcn_bone | ntu_cv_agcn_joint | ntu_cv_agcn_bone |

Inference

Export inference model

```bash
python3.7 tools/export_model.py -c configs/recognition/agcn2s/agcn2s_ntucs_joint.yaml \
                                -p data/AGCN2s_ntucs_joint.pdparams \
                                -o inference/AGCN2s_ntucs_joint
```

This yields the model architecture file AGCN2s_ntucs_joint.pdmodel and the model weights file AGCN2s_ntucs_joint.pdiparams.

Infer

```bash
python3.7 tools/predict.py --input_file data/example_NTU-RGB-D_sketeton.npy \
                           --config configs/recognition/agcn2s/agcn2s_ntucs_joint.yaml \
                           --model_file inference/AGCN2s_ntucs_joint/AGCN2s_ntucs_joint.pdmodel \
                           --params_file inference/AGCN2s_ntucs_joint/AGCN2s_ntucs_joint.pdiparams \
                           --use_gpu=True \
                           --use_tensorrt=False
```
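As an alternative to tools/predict.py, the exported model can also be run directly with the Paddle Inference API. The sketch below omits the preprocessing that the predict script performs and assumes the raw .npy skeleton already matches the model's expected input shape:

```python
import numpy as np
from paddle.inference import Config, create_predictor

# point the config at the exported model and parameter files
config = Config('inference/AGCN2s_ntucs_joint/AGCN2s_ntucs_joint.pdmodel',
                'inference/AGCN2s_ntucs_joint/AGCN2s_ntucs_joint.pdiparams')
config.enable_use_gpu(8000, 0)  # GPU workspace size (MB), device id
predictor = create_predictor(config)

# assumption: the array already matches the model input layout
data = np.load('data/example_NTU-RGB-D_sketeton.npy').astype('float32')
input_handle = predictor.get_input_handle(predictor.get_input_names()[0])
input_handle.copy_from_cpu(data[np.newaxis, ...])  # add batch dimension

predictor.run()

output_handle = predictor.get_output_handle(predictor.get_output_names()[0])
scores = output_handle.copy_to_cpu()
print('predicted class:', scores.argmax())
```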

Infer result

Inference result diagram of the prediction engine

Reference

- Two-Stream Adaptive Graph Convolutional Networks for Skeleton-Based Action Recognition, Lei Shi, Yifan Zhang, Jian Cheng, Hanqing Lu