We support testing on the daytime sequences of the nuScenes dataset.
Check ../../meta_data/nusc_trainsub/json_from_cfg.ipynb and modify the data path inside.
Run through the notebook to export the nuScenes data as JSON. This speeds up start-up and lowers the dataset's memory usage.
Check the nuscenes_visualize repo.
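The jsonify step amounts to caching the dataset index (sample tokens, image paths, calibration) as plain JSON, so that training start-up can skip the slow nuScenes devkit initialization. A minimal sketch of the idea, with hypothetical record fields — the notebook's actual schema may differ:

```python
import json
from pathlib import Path

# Hypothetical example records; the real notebook extracts these
# from the nuScenes devkit (sample tokens, image paths, intrinsics).
samples = [
    {"token": "abc123",
     "image_path": "samples/CAM_FRONT/0001.jpg",
     "intrinsics": [[1266.4, 0.0, 816.3],
                    [0.0, 1266.4, 491.5],
                    [0.0, 0.0, 1.0]]},
]

out_file = Path("nusc_trainsub.json")
out_file.write_text(json.dumps(samples))

# At training time, the dataset class loads this file directly,
# avoiding the devkit's start-up cost and in-memory tables.
cached = json.loads(out_file.read_text())
```

Loading one flat JSON file per split is much cheaper than rebuilding the devkit's relational tables on every run.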
Baseline:
## copy example config
cd config
cp nuscenes_wpose_example nuscenes_wpose.py
## Modify config path
nano nuscenes_wpose.py
cd ..
## Train
./launcher/train.sh configs/nuscenes_wpose.py 0 $experiment_name
## Evaluation
python3 scripts/test.py configs/nuscenes_wpose.py 0 $CHECKPOINT_PATH
It is fine to use the baseline model alone for most projects. After training the baseline, you can further re-train with self-distillation:
## export checkpoint
python3 monodepth/transform_teacher.py $Pretrained_checkpoint $output_compressed_checkpoint
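A plausible reading of this export step is that it strips training-only state (optimizer buffers, scheduler, etc.) from the checkpoint and keeps just the model weights for use as a frozen teacher. A rough stdlib sketch of that compression idea, using pickle in place of the framework's own save/load and made-up key names — the actual transform_teacher.py may do more:

```python
import pickle

# A mock training checkpoint: only "model" is needed by the teacher.
checkpoint = {
    "model": {"backbone.conv1.weight": [0.1, 0.2], "head.bias": [0.0]},
    "optimizer": {"state": "large optimizer buffers"},
    "epoch": 24,
}

# Keep model weights only; dropping optimizer state shrinks the file.
compressed = {"model": checkpoint["model"]}

with open("teacher_compressed.pkl", "wb") as f:
    pickle.dump(compressed, f)

with open("teacher_compressed.pkl", "rb") as f:
    reloaded = pickle.load(f)
```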
## copy example config
cd config
cp distill_nuscenes_example distill_nuscenes.py
## Modify config path and checkpoint path based on $output_compressed_checkpoint
nano distill_nuscenes.py
cd ..
## Train
./launcher/train.sh configs/distill_nuscenes.py 0 $experiment_name
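In self-distillation, the student is trained against both the ground truth and the frozen teacher's predictions. A toy sketch of such a combined loss in plain Python, with a hypothetical weighting parameter `alpha` (the repo's actual loss formulation may differ):

```python
def mse(a, b):
    """Mean squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distill_loss(student_pred, teacher_pred, target, alpha=0.5):
    """Hypothetical combined loss: a supervised term plus a term
    pulling the student toward the frozen teacher's predictions."""
    supervised = mse(student_pred, target)
    distill = mse(student_pred, teacher_pred)
    return (1 - alpha) * supervised + alpha * distill

# Example: student sits halfway between target and teacher.
loss = distill_loss([1.0, 2.0], [1.2, 2.2], [0.8, 1.8])
```

The teacher term acts as a soft regularizer; with `alpha=0` this reduces to ordinary supervised training.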
Check demos/demo.ipynb for visualizing datasets and simple demos.
We support exporting a pretrained model to ONNX; install onnx and onnxruntime first.
python3 scripts/onnx_export.py $CONFIG_FILE $CHECKPOINT_PATH $ONNX_PATH
- Launch nuscenes_visualize to stream image data topics and visualize in RViz.
- Launch monodepth_ros to run inference on camera topics.
For nuScenes, we offer an additional node that runs inference on the six camera images in batches. Please make sure your machine is powerful enough to infer on six images online.
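The batched node can gather the six nuScenes camera images into one forward pass instead of six separate calls. A minimal sketch with a dummy stand-in for the detector (the real node would stack images into a single tensor and run the model once on GPU):

```python
# The six nuScenes camera channels, in a fixed batching order.
CAMERAS = ["CAM_FRONT", "CAM_FRONT_LEFT", "CAM_FRONT_RIGHT",
           "CAM_BACK", "CAM_BACK_LEFT", "CAM_BACK_RIGHT"]

def infer_batch(images_by_camera, model):
    """Run one batched forward pass over all six cameras and
    return per-camera results keyed by channel name."""
    batch = [images_by_camera[cam] for cam in CAMERAS]
    outputs = model(batch)
    return dict(zip(CAMERAS, outputs))

# Dummy "model" that just reports the batch size per image,
# standing in for a detector that benefits from batched inference.
dummy_model = lambda batch: [len(batch)] * len(batch)
results = infer_batch({cam: object() for cam in CAMERAS}, dummy_model)
```

One batched call amortizes per-invocation overhead, which is why the node needs a machine capable of holding all six images in memory at once.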