Official Implementation of A Vision-Centric Approach for Static Map Element Annotation
CAMA: arXiv | YouTube | Bilibili
CAMAv2: arXiv | YouTube | Bilibili
CAMA: Consistent and Accurate Map Annotation, nuScenes example:
CAMA is also used for detecting drowsy-driving patterns based on static map element matching; see, for instance, our proposed driving behavior dataset CAMA-D. Details here (https://github.com/FatigueView/fatigueview)
- We release CAMAv2 on arXiv. CAMAv2 aggregates scenes with intersecting portions into one large scene called a site, which addresses the shortcoming of dropping head and tail frames in the previous single-scene reconstruction, as well as the occlusion and blind-zone problems.
- Upload camav2_labels.zip [Google Drive] with site reconstruction.
- Upload nuScenes xxx scenes from v1.0-test with CAMA labels.
- Add reprojection demo for both CAMA labels and the original nuScenes labels.
1. Install the required Python packages:
   `python3 -m pip install -r requirements.txt`
2. Download cama_label.zip [Google Drive]
3. Modify config.yaml accordingly:
   - dataroot: path to the original nuScenes dataset
   - converted_dataroot: output directory for the converted dataset
   - cama_label_file: path to the cama_label.zip you downloaded in step 2
   - output_video_dir: directory where the demo video is written
4. Run the pipeline:
   `python3 main.py --config config.yaml`
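For reference, a filled-in config.yaml for step 3 might look like the sketch below. Only the four key names come from this README; every path is a placeholder to replace with your own setup:

```yaml
# Hypothetical example values — replace each path with your own.
dataroot: /data/nuscenes                    # original nuScenes dataset root
converted_dataroot: /data/nuscenes_cama     # output dir for the converted dataset
cama_label_file: /downloads/cama_label.zip  # the zip downloaded in step 2
output_video_dir: /data/cama_demo_videos    # where the demo video is written
```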
If you benefit from this work, please cite our papers:
@inproceedings{zhang2024vision,
title={A Vision-Centric Approach for Static Map Element Annotation},
author={Zhang, Jiaxin and Chen, Shiyuan and Yin, Haoran and Mei, Ruohong and Liu, Xuan and Yang, Cong and Zhang, Qian and Sui, Wei},
booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
pages={1--7},
year={2024}
}
@article{chen2024camav2,
title={CAMAv2: A Vision-Centric Approach for Static Map Element Annotation},
author={Chen, Shiyuan and Zhang, Jiaxin and Mei, Ruohong and Cai, Yingfeng and Yin, Haoran and Chen, Tao and Sui, Wei and Yang, Cong},
journal={arXiv preprint arXiv:2407.21331},
year={2024}
}