Adversarial Pose Estimation

This repository implements adversarial multi-person pose estimation methods in PyTorch.

Getting Started

Data

The file lsp_mpii.h5 contains the annotations for the MPII and LSP training data and the LSP test data.
Place the LSP and MPII images in data/LSP/images and data/mpii/images.
Place the COCO annotations in data/coco/annotations and the images in data/coco/images, following the layout suggested by cocoapi. The file valid_id contains the image_ids used for validation.
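A quick way to see what lsp_mpii.h5 holds is to walk its contents with h5py. This is only an inspection sketch, not part of the repository; the file path is assumed, and the dataset names are discovered at runtime rather than assumed.

import h5py

# Print every group/dataset name in the annotation file, plus its shape if it has one.
with h5py.File('data/lsp_mpii.h5', 'r') as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, 'shape', '')))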

Compile the extension

Compile the C implementation of the associative embedding (AE) loss. Code credit: umich-vl/pose-ae-train.

cd src/extensions/AE
python build.py  # make sure a CUDA device is visible
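Before running build.py you can confirm that PyTorch actually sees a CUDA device. This check is not part of the repository, just a small sanity sketch:

import torch

# Both values should be truthy / nonzero before building the extension.
print(torch.cuda.is_available(), torch.cuda.device_count())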

Folder Structure

  • data: put the training / testing data here
  • src:
    • models: model definitions
    • datasets: dataset definitions
    • extensions: compiled extensions such as the AE loss
    • utils: shared utilities

All the other folders represent different tasks. Each contains a training script train.py and its command-line option definitions in opts.py.
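The sketch below only illustrates that train.py / opts.py pattern; the flag names and defaults are hypothetical, not the repository's actual options, which differ per task.

# opts.py (hypothetical sketch): each task folder defines its own options like this.
import argparse

def get_opts():
    parser = argparse.ArgumentParser(description='Training options for one task')
    parser.add_argument('--data_dir', default='data', help='root of the datasets')
    parser.add_argument('--batch_size', type=int, default=16)
    parser.add_argument('--lr', type=float, default=2.5e-4)
    return parser.parse_args()

# train.py would then start with: opts = get_opts()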

Known Issues

  • advpose-ae: only supports a single GPU; multi-GPU training gets stuck randomly. The problem seems to be caused by the AE_loss extension (a single-GPU workaround is sketched below).
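Until the multi-GPU issue is resolved, one workaround (an assumption on my part, not documented by the repository) is to pin the process to a single GPU via CUDA_VISIBLE_DEVICES:

import os

# Restrict visibility to one GPU before torch is imported, so the AE_loss
# extension only ever sees a single device; the variable can also be set in the shell.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

import torch
assert torch.cuda.device_count() == 1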

TODOs

  • visualization
  • usage examples
