The PyTorch Implementation of the paper: CenterNet3D: An Anchor free Object Detector for Autonomous Driving
- LiDAR-based real-time 3D object detection
- Support distributed data parallel training
- Release pre-trained models
```shell
pip install -U -r requirements.txt
```

- For the `mayavi` library, please refer to the installation instructions on its official website.
- To build the `CenterNet3D` model, I have used the `spconv` library. Please follow the instructions from that repo to install the library. I also wrote notes for the installation here.
Download the 3D KITTI detection dataset from here.
The downloaded data includes:
- Velodyne point clouds (29 GB)
- Training labels of object data set (5 MB)
- Camera calibration matrices of object data set (16 MB)
- Left color images of object data set (12 GB)
Please make sure that you construct the source code & dataset directory structure as below.
- To visualize 3D point clouds with 3D boxes, execute:

```shell
cd src/data_process
python kitti_dataset.py
```
An example of the KITTI dataset:
```shell
python test.py --gpu_idx 0 --peak_thresh 0.2
```
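The `--peak_thresh` flag keeps only detections whose center-heatmap peak score exceeds the threshold. The filtering step can be sketched as below (a stdlib-only illustration; the `filter_by_peak_score` helper and the `(score, box)` tuple format are assumptions for this sketch, not the repo's actual API, which works on decoded tensors):

```python
def filter_by_peak_score(detections, peak_thresh=0.2):
    """Keep detections whose heatmap peak score exceeds peak_thresh.

    detections: list of (score, box) tuples, where box is any payload
    (illustrative format only).
    """
    return [(score, box) for score, box in detections if score > peak_thresh]

# Example: three candidate detections with different confidences
dets = [(0.9, "car"), (0.15, "noise"), (0.35, "pedestrian")]
print(filter_by_peak_score(dets, peak_thresh=0.2))
# -> [(0.9, 'car'), (0.35, 'pedestrian')]
```

Raising the threshold trades recall for precision: with `peak_thresh=0.5`, only the `car` detection above survives.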
```shell
python train.py --gpu_idx 0 --batch_size <N> --num_workers <N>...
```
We should always use the `nccl` backend for multi-processing distributed training since it currently provides the best distributed training performance.
- Single machine (node), multiple GPUs

```shell
python train.py --dist-url 'tcp://127.0.0.1:29500' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0
```
- Two machines (two nodes), multiple GPUs

First machine:

```shell
python train.py --dist-url 'tcp://IP_OF_NODE1:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 2 --rank 0
```

Second machine:

```shell
python train.py --dist-url 'tcp://IP_OF_NODE2:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 2 --rank 1
```
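With `--multiprocessing-distributed`, one worker process is spawned per GPU, and each worker's global rank is derived from the node rank (the `--rank` value above) and the local GPU index. The arithmetic below follows the convention of the official PyTorch distributed examples; the function names are illustrative, not the repo's actual helpers:

```python
def global_rank(node_rank, ngpus_per_node, gpu_idx):
    """Global rank of the worker driving gpu_idx on node node_rank.

    Ranks are contiguous across nodes, as in the standard PyTorch
    multi-node DDP examples.
    """
    return node_rank * ngpus_per_node + gpu_idx

def world_size(num_nodes, ngpus_per_node):
    """Total number of worker processes across all nodes."""
    return num_nodes * ngpus_per_node

# Two nodes with 4 GPUs each: GPU 1 on the second node gets global rank 5
print(global_rank(node_rank=1, ngpus_per_node=4, gpu_idx=1))  # -> 5
print(world_size(num_nodes=2, ngpus_per_node=4))              # -> 8
```

Note that in the commands above, `--world-size` and `--rank` count nodes, not processes; scripts written in this style expand them per GPU internally.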
To reproduce the results, you can run the bash shell script:

```shell
./train.sh
```
- To track the training progress, go to the `logs/` folder and run tensorboard:

```shell
cd logs/<saved_fn>/tensorboard/
tensorboard --logdir=./
```
- Then go to http://localhost:6006/:
If you think this work is useful, please give me a star!
If you find any errors or have any suggestions, please contact me (Email: [email protected]).
Thank you!
```
@article{CenterNet3D,
  author = {Guojun Wang and Bin Tian and Yunfeng Ai and Tong Xu and Long Chen and Dongpu Cao},
  title = {CenterNet3D: An Anchor free Object Detector for Autonomous Driving},
  year = {2020},
  journal = {arXiv},
}

@misc{CenterNet3D-PyTorch,
  author = {Nguyen Mau Dung},
  title = {{CenterNet3D-PyTorch: PyTorch Implementation of the CenterNet3D paper}},
  howpublished = {\url{https://github.com/maudzung/CenterNet3D-PyTorch}},
  year = {2020}
}
```
[1] CenterNet: Objects as Points paper, PyTorch Implementation

[2] VoxelNet: PyTorch Implementation
```
${ROOT}
├── checkpoints/
│   └── centernet3d.pth
├── dataset/
│   └── kitti/
│       ├── ImageSets/
│       │   ├── test.txt
│       │   ├── train.txt
│       │   └── val.txt
│       ├── training/
│       │   ├── image_2/ (left color camera)
│       │   ├── calib/
│       │   ├── label_2/
│       │   └── velodyne/
│       ├── testing/
│       │   ├── image_2/ (left color camera)
│       │   ├── calib/
│       │   └── velodyne/
│       └── classes_names.txt
├── src/
│   ├── config/
│   │   ├── train_config.py
│   │   └── kitti_config.py
│   ├── data_process/
│   │   ├── kitti_dataloader.py
│   │   ├── kitti_dataset.py
│   │   └── kitti_data_utils.py
│   ├── models/
│   │   ├── centernet3d.py
│   │   ├── deform_conv_v2.py
│   │   └── model_utils.py
│   ├── utils/
│   │   ├── evaluation_utils.py
│   │   ├── logger.py
│   │   ├── misc.py
│   │   ├── torch_utils.py
│   │   └── train_utils.py
│   ├── evaluate.py
│   ├── test.py
│   ├── train.py
│   └── train.sh
├── README.md
└── requirements.txt
```
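As a sanity check before training, the expected KITTI layout can be verified with a short stdlib-only script. This is a sketch based on the folder structure above; `check_kitti_layout` is a hypothetical helper, not part of the repo:

```python
from pathlib import Path

# Sub-directories expected under dataset/kitti/, per the folder structure above
REQUIRED = [
    "ImageSets",
    "training/image_2",
    "training/calib",
    "training/label_2",
    "training/velodyne",
    "testing/image_2",
    "testing/calib",
    "testing/velodyne",
]

def check_kitti_layout(kitti_root):
    """Return the list of expected sub-directories missing under kitti_root."""
    root = Path(kitti_root)
    return [rel for rel in REQUIRED if not (root / rel).is_dir()]

if __name__ == "__main__":
    missing = check_kitti_layout("dataset/kitti")
    if missing:
        print("Missing directories:", ", ".join(missing))
    else:
        print("KITTI layout looks complete.")
```

Running it from `${ROOT}` prints any directories that still need to be created or populated.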