3D Semantic Scene Completion (SSC) predicts dense 3D geometry and semantic labels from sparse observations. Although effective indoors, existing methods struggle in dynamic, sparse outdoor settings. We introduce a technique that combines a sparse generative neural network, point-cloud segmentation priors, and dense-to-sparse knowledge distillation for single-frame SSC. Our method employs a state-of-the-art semantic segmentation model to predict point features and semantic probabilities from a LiDAR point cloud, which are then fed into a sparse multi-scale generative network that jointly predicts geometry and semantics. In addition, we train a multi-frame replica of our model that takes multiple sequential point clouds as input, and we apply Knowledge Distillation (KD) to transfer its dense knowledge to the single-frame model. On the SemanticKITTI benchmark, our model achieves a mIoU of 27.1, compared to the previous top scores of 29.5 (S3CNet) and 23.8 (JS3C-Net). Furthermore, our approach achieves the highest completion score (60.6), versus 45.6 and 56.6 for S3CNet and JS3C-Net, respectively.
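For intuition, a minimal sketch of what the dense-to-sparse distillation objective could look like is shown below. This is not the repository's training code: it assumes the multi-frame teacher and single-frame student produce per-voxel semantic logits on a shared set of voxels, and that distillation combines a temperature-scaled KL term with the usual voxel-wise cross-entropy. The function name, temperature, and weighting are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Illustrative dense-to-sparse KD loss on per-voxel semantic logits.

    student_logits: (N, C) logits from the single-frame model
    teacher_logits: (N, C) logits from the frozen multi-frame model (same voxels)
    labels:         (N,)   ground-truth semantic labels (255 = ignore)
    """
    # Standard supervised term on the single-frame predictions.
    ce = F.cross_entropy(student_logits, labels, ignore_index=255)

    # Soft-target term: match the teacher's softened class distribution.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    return (1.0 - alpha) * ce + alpha * kd
```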
Follow these steps to set up the environment:
- Install Docker
- Install nvidia-docker
- Download the Semantic-KITTI dataset from http://www.semantic-kitti.org/.
- Download the semantic segmentation pretrained model from this Google Drive folder and place it in the semantic-scene-completion/data folder.
- (Optional) Download our pretrained models from this Google Drive folder and save them to the semantic-scene-completion/data folder.
- Build the Docker container using the following command:
docker build -t ssc .
Run the labels_downscale.py script to create labels for the 1/2 and 1/4 scales using majority vote pooling:
python3 tools/labels_downscale.py
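For reference, majority vote pooling assigns to each coarse voxel the most frequent label among the fine voxels it covers. The snippet below is an illustrative sketch rather than the actual tools/labels_downscale.py implementation; it assumes the labels are available as a dense integer grid (e.g. 256 x 256 x 32), and the file name in the usage comment is hypothetical.

```python
import numpy as np

def majority_vote_downscale(labels, factor=2):
    """Downscale a dense voxel label grid by taking the most frequent
    label inside each factor x factor x factor block (illustrative sketch)."""
    X, Y, Z = labels.shape
    # Group the grid into non-overlapping blocks of size factor**3.
    blocks = labels.reshape(X // factor, factor,
                            Y // factor, factor,
                            Z // factor, factor)
    blocks = blocks.transpose(0, 2, 4, 1, 3, 5).reshape(
        X // factor, Y // factor, Z // factor, factor ** 3)
    # Majority vote: the per-block mode over all contained labels.
    out = np.apply_along_axis(lambda v: np.bincount(v).argmax(), -1, blocks)
    return out.astype(labels.dtype)

# Example usage (hypothetical file name):
# full = np.load("labels_full.npy")           # (256, 256, 32)
# half = majority_vote_downscale(full, 2)     # (128, 128, 16)
# quarter = majority_vote_downscale(full, 4)  # (64, 64, 8)
```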
First, run the Docker container using our script (modify the --shm-size parameter depending on your system's specs, and change the directory /home/galvis/data/ssc_dataset/dataset/ to the path where your dataset is stored):
source run_docker.sh
To train the semantic scene completion model, run the following command inside the Docker container:
python3 train.py --config-file configs/ssc.yaml
To train the multi-frame semantic scene completion model, use the following command:
python3 train_multi.py --config-file configs/ssc_multi.yaml
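The multi-frame model consumes several consecutive LiDAR scans registered into the frame of the scan being completed. As a rough illustration of that input preparation (not the actual data loader), consecutive point clouds can be aggregated with the dataset poses as follows; the function name and pose-handling details are assumptions.

```python
import numpy as np

def aggregate_scans(scans, poses, ref_idx=0):
    """Merge consecutive LiDAR scans into the reference scan's frame
    (illustrative sketch of multi-frame input preparation).

    scans: list of (N_i, 3) xyz arrays, one per frame
    poses: list of (4, 4) world-from-sensor matrices for the same frames
    """
    ref_from_world = np.linalg.inv(poses[ref_idx])
    merged = []
    for pts, pose in zip(scans, poses):
        # Homogeneous transform: sensor_i -> world -> reference sensor.
        hom = np.concatenate([pts, np.ones((pts.shape[0], 1))], axis=1)
        merged.append((hom @ (ref_from_world @ pose).T)[:, :3])
    return np.concatenate(merged, axis=0)
```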
To monitor the training progress, use TensorBoard with the following command:
tensorboard --logdir ./semantic-scene-completion/experiments
To evaluate the trained model, run the following command inside the Docker container:
python3 eval.py --config-file configs/ssc.yaml --checkpoint data/modelFULL-19.pth
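For context, the benchmark reports two numbers: completion (binary IoU between occupied and empty voxels, ignoring unknown space) and mIoU (mean IoU over the semantic classes). Below is a hedged sketch of how these could be computed from dense voxel predictions; it is not the evaluation script itself, and it assumes label 0 is the empty class and 255 marks unknown voxels.

```python
import numpy as np

def ssc_metrics(pred, gt, num_classes=20, ignore=255):
    """Completion IoU and semantic mIoU over dense voxel grids (illustrative sketch)."""
    valid = gt != ignore                      # drop unknown / unlabeled voxels
    pred, gt = pred[valid], gt[valid]

    # Completion: binary IoU between occupied (any non-empty class) voxels.
    pred_occ, gt_occ = pred != 0, gt != 0
    inter = np.logical_and(pred_occ, gt_occ).sum()
    union = np.logical_or(pred_occ, gt_occ).sum()
    completion = inter / max(union, 1)

    # mIoU over the semantic (non-empty) classes present in the union.
    ious = []
    for c in range(1, num_classes):
        inter_c = np.logical_and(pred == c, gt == c).sum()
        union_c = np.logical_or(pred == c, gt == c).sum()
        if union_c > 0:
            ious.append(inter_c / union_c)
    miou = float(np.mean(ious)) if ious else 0.0
    return completion, miou
```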
To generate the submission files for the Semantic-KITTI benchmark, use the following command:
python3 test.py --config-file configs/ssc.yaml --checkpoint data/modelFULL-19.pth
These are our method's results compared to the Semantic-KITTI SSC leaderboard:
Method | mIoU (%) | Completion (%) |
---|---|---|
S3CNet | 29.5 | 45.6 |
JS3C-Net | 23.8 | 56.6 |
LMSCNet | 17.0 | 55.3 |
Ours | 27.1 | 60.6 |
Many thanks to these open-source projects: