This repository has been archived by the owner on Oct 13, 2021. It is now read-only.

Create Data #6

Status: Open. Wants to merge 8 commits into base: master.
170 changes: 10 additions & 160 deletions README.md
@@ -1,170 +1,20 @@
# PointPillars

**Added:**

For point_pillars installation refer to https://github.com/nutonomy/second.pytorch.

The code has been modified to run on the Argoverse 3D dataset. Changes are mainly in the data-loading code in point_pillars/second/data/preprocess.py.

Command to train the final model (run in point_pillars/second/):

```bash
python ./pytorch/train.py train --config_path=./configs/pointpillars/car/xyres_20_argo_upper.proto --model_dir=./models --device=0 --include_roi=True --dr_area=False --include_road_points=False
```

The pre-trained model can be downloaded from
https://drive.google.com/file/d/1ebtmoQSJdfIKVVJ93GSCdeGy-ZGqGXyN/view?usp=sharing

For inference on the test set (run in point_pillars/second):

```bash
python pp_inference.py --config_path=./configs/pointpillars/car/xyres_20_argo_upper.proto --model_dir=./models --device=0 --model_path="path_to_model/voxelnet-xxx.tckpt" --save_path="path_to_save/xxx.pkl" --include_roi=1 --include_road_points=0 --dr_area=0
```

Command to get results in AB3DMOT format (run in point_pillars/second):

```bash
python get_tracking_result.py --model_path=path_to_model --sv_dir=path_to_AB3DMOT/data/argo/car_3d_det_val/ --set=val  # or --set=test
```

**Removed (the original README, reproduced below):**

Welcome to PointPillars.

This repo demonstrates how to reproduce the results from
[_PointPillars: Fast Encoders for Object Detection from Point Clouds_](https://arxiv.org/abs/1812.05784) (to be published at CVPR 2019) on the
[KITTI dataset](http://www.cvlibs.net/datasets/kitti/) by making the minimum required changes to the preexisting
open-source codebase [SECOND](https://github.com/traveller59/second.pytorch).

This is not an official nuTonomy codebase, but it can be used to match the published PointPillars results.

**WARNING: This code is not being actively maintained. This code can be used to reproduce the results in the first version of the paper, https://arxiv.org/abs/1812.05784v1. For an actively maintained repository that can also reproduce PointPillars results on nuScenes, we recommend using [SECOND](https://github.com/traveller59/second.pytorch). We are not the owners of the repository, but we have worked with the author and endorse his code.**

![Example Results](https://raw.githubusercontent.com/nutonomy/second.pytorch/master/images/pointpillars_kitti_results.png)

## Getting Started

This is a fork of [SECOND for KITTI object detection](https://github.com/traveller59/second.pytorch), and the relevant
subset of the original README is reproduced here.

### Code Support

ONLY supports Python 3.6+ and PyTorch 0.4.1+. The code has only been tested on Ubuntu 16.04/18.04.

### Install

#### 1. Clone code

```bash
git clone https://github.com/nutonomy/second.pytorch.git
```

#### 2. Install Python packages

It is recommended to use the Anaconda package manager.

First, use Anaconda to configure as many packages as possible.
```bash
conda create -n pointpillars python=3.7 anaconda
source activate pointpillars
conda install shapely pybind11 protobuf scikit-image numba pillow
conda install pytorch torchvision -c pytorch
conda install google-sparsehash -c bioconda
```

Then use pip for the packages missing from Anaconda.
```bash
pip install --upgrade pip
pip install fire tensorboardX
```

Finally, install SparseConvNet. This is not required for PointPillars, but the general SECOND code base expects this
to be correctly configured.
```bash
git clone [email protected]:facebookresearch/SparseConvNet.git
cd SparseConvNet/
bash build.sh
# NOTE: if bash build.sh fails, try bash develop.sh instead
```

Additionally, you may need to install Boost geometry:

```bash
sudo apt-get install libboost-all-dev
```


#### 3. Setup cuda for numba

You need to add the following environment variables for numba to your ~/.bashrc:

```bash
export NUMBAPRO_CUDA_DRIVER=/usr/lib/x86_64-linux-gnu/libcuda.so
export NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/libnvvm.so
export NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice
```
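
After sourcing the file, a quick sanity check that numba can actually see the CUDA driver (assuming numba is installed in the active environment):

```bash
# Reload the shell configuration, then ask numba to enumerate GPUs;
# at least one device should be listed if the paths above are correct.
source ~/.bashrc
python -c "from numba import cuda; print(cuda.gpus)"
```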

#### 4. PYTHONPATH

Add second.pytorch/ to your PYTHONPATH.
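
For example, a minimal way to do this persistently (adjust the path to wherever you cloned the repo):

```bash
# Append the repo root to PYTHONPATH and persist it for future shells.
echo 'export PYTHONPATH=$PYTHONPATH:/path/to/second.pytorch' >> ~/.bashrc
source ~/.bashrc
```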

### Prepare dataset

#### 1. Dataset preparation

Download the KITTI dataset and create some directories first:

```plain
└── KITTI_DATASET_ROOT
    ├── training            <-- 7481 train data
    |   ├── image_2         <-- for visualization
    |   ├── calib
    |   ├── label_2
    |   ├── velodyne
    |   └── velodyne_reduced  <-- empty directory
    └── testing             <-- 7518 test data
        ├── image_2         <-- for visualization
        ├── calib
        ├── velodyne
        └── velodyne_reduced  <-- empty directory
```

Note: PointPillars' protos use ```KITTI_DATASET_ROOT=/data/sets/kitti_second/```.
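
A minimal sketch for creating the two empty `velodyne_reduced` directories from the tree above, assuming that default root:

```bash
# The steps below expect these directories to exist; the path matches
# the default KITTI_DATASET_ROOT from the protos.
mkdir -p /data/sets/kitti_second/training/velodyne_reduced
mkdir -p /data/sets/kitti_second/testing/velodyne_reduced
```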

#### 2. Create kitti infos:

```bash
python create_data.py create_kitti_info_file --data_path=KITTI_DATASET_ROOT
```
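
If the script finishes cleanly, the info files should land in the data root; a quick check (file names taken from the config snippet in step 5):

```bash
# The generated .pkl info files are consumed by the train/eval configs.
ls KITTI_DATASET_ROOT/kitti_infos_*.pkl
```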

#### 3. Create reduced point cloud:

```bash
python create_data.py create_reduced_point_cloud --data_path=KITTI_DATASET_ROOT
```

#### 4. Create groundtruth-database infos:

```bash
python create_data.py create_groundtruth_database --data_path=KITTI_DATASET_ROOT
```
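
This step should produce the sampling database referenced by `database_sampler` in the config; assuming SECOND's usual output names, you can verify with:

```bash
# kitti_dbinfos_train.pkl feeds the database_sampler section of the config;
# gt_database holds the cropped ground-truth point clouds.
ls KITTI_DATASET_ROOT/kitti_dbinfos_train.pkl
ls KITTI_DATASET_ROOT/gt_database | head
```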

#### 5. Modify config file

The config file needs to be edited to point to the above datasets:

```bash
train_input_reader: {
  ...
  database_sampler {
    database_info_path: "/path/to/kitti_dbinfos_train.pkl"
    ...
  }
  kitti_info_path: "/path/to/kitti_infos_train.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}
...
eval_input_reader: {
  ...
  kitti_info_path: "/path/to/kitti_infos_val.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}
```
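
One non-interactive way to fill in the placeholders, assuming the default dataset root from the protos (the sed expressions here are illustrative, so work on a copy):

```bash
# Substitute the placeholder paths in a working copy of the config.
cp configs/pointpillars/car/xyres_16.proto /tmp/xyres_16.proto
sed -i 's|/path/to|/data/sets/kitti_second|g; s|KITTI_DATASET_ROOT|/data/sets/kitti_second|g' /tmp/xyres_16.proto
```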


### Train

```bash
cd ~/second.pytorch/second
python ./pytorch/train.py train --config_path=./configs/pointpillars/car/xyres_16.proto --model_dir=/path/to/model_dir
```

* If you want to train a new model, make sure "/path/to/model_dir" doesn't exist.
* If "/path/to/model_dir" does exist, training will be resumed from the last checkpoint.
* Training only supports a single GPU.
* Training uses a batch size of 2, which should fit in memory on most standard GPUs.
* On a single 1080Ti, training xyres_16 requires approximately 20 hours for 160 epochs.


### Evaluate


```bash
cd ~/second.pytorch/second/
python pytorch/train.py evaluate --config_path=configs/pointpillars/car/xyres_16.proto --model_dir=/path/to/model_dir
```

* Detection results will be saved in model_dir/eval_results/step_xxx.
* By default, results are stored as a result.pkl file. To save in the official KITTI label format instead, use --pickle_result=False (see the example below).
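
For example, to dump detections in the KITTI label format instead of a pickle (same entry point as above):

```bash
cd ~/second.pytorch/second/
python pytorch/train.py evaluate --config_path=configs/pointpillars/car/xyres_16.proto --model_dir=/path/to/model_dir --pickle_result=False
```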
23 changes: 23 additions & 0 deletions second/.gitignore
```diff
@@ -0,0 +1,23 @@
+*.sh
+report*
+*.ipynb
+*.tckpt
+analysis/
+*.pkl
+models/
+network_input_examples/
+pcd_output/
+ped_models_56/
+road_train_info.trch
+sample_result_ab3dmot_format/
+sample_road_info.trch
+sample_road_val_info.trch
+sample_test/
+sample_test_wroad/
+sample_test_wroi/
+sample_train_roadmap/
+sample_val_roadmap/
+test_roadmap/
+track_analysis/
+sample_val_roadmap/
+test/
```
24 changes: 18 additions & 6 deletions second/builder/dataset_builder.py
```diff
@@ -28,10 +28,13 @@
 import numpy as np
 from second.builder import dbsampler_builder
 from functools import partial
+import torch
+


 def build(input_reader_config,
           model_config,
+          info,
           training,
           voxel_generator,
           target_assigner=None):
@@ -58,8 +61,8 @@ def build(input_reader_config,
     cfg = input_reader_config
     db_sampler_cfg = input_reader_config.database_sampler
     db_sampler = None
-    if len(db_sampler_cfg.sample_groups) > 0:  # enable sample
-        db_sampler = dbsampler_builder.build(db_sampler_cfg)
+    #if len(db_sampler_cfg.sample_groups) > 0:  # enable sample
+    #    db_sampler = dbsampler_builder.build(db_sampler_cfg)
     u_db_sampler_cfg = input_reader_config.unlabeled_database_sampler
     u_db_sampler = None
     if len(u_db_sampler_cfg.sample_groups) > 0:  # enable sample
@@ -68,10 +71,17 @@
     # [352, 400]
     feature_map_size = grid_size[:2] // out_size_factor
     feature_map_size = [*feature_map_size, 1][::-1]

+    inform = info.copy()
+    inform["road_map"] = None
+
+    root_path = input_reader_config.kitti_root_path
+    index_list = torch.load(input_reader_config.kitti_info_path)
+
+    inform["index_list"] = index_list
     prep_func = partial(
         prep_pointcloud,
-        root_path=cfg.kitti_root_path,
+        root_path=root_path,
         class_names=list(cfg.class_names),
         voxel_generator=voxel_generator,
         target_assigner=target_assigner,
@@ -100,9 +110,11 @@
         remove_environment=cfg.remove_environment,
         use_group_id=cfg.use_group_id,
         out_size_factor=out_size_factor)
+
     dataset = KittiDataset(
-        info_path=cfg.kitti_info_path,
-        root_path=cfg.kitti_root_path,
+        info_path=inform,
+        root_path=root_path,
         class_names=list(cfg.class_names),
         num_point_features=num_point_features,
         target_assigner=target_assigner,
         feature_map_size=feature_map_size,
```