CHARM3R: Towards Unseen Camera Height Robust Monocular 3D Detector

[Project] | [Talk] | [Slides] | [Poster]


Abhinav Kumar1 · Yuliang Guo2 · Zhihao Zhang1 · Xinyu Huang2 · Liu Ren2 · Xiaoming Liu1
1Michigan State University, 2Bosch Research North America, Bosch Center for AI

in ICCV 2025

Monocular 3D object detectors, while effective on data from one ego camera height, struggle with unseen or out-of-distribution camera heights. Existing methods often rely on Plücker embeddings, image transformations, or data augmentation. This paper takes a step towards this under-studied problem by first investigating the impact of camera height variations on state-of-the-art (SoTA) Mono3D models. Through a systematic analysis on the extended CARLA dataset with multiple camera heights, we observe that depth estimation is a primary factor influencing performance under height variations. We mathematically prove, and also empirically observe, consistent negative and positive trends in the mean depth error of regressed and ground-based depth models, respectively, under camera height changes. To mitigate this, we propose CHARM3R (Camera Height Robust Monocular 3D Detector), which averages both depth estimates within the model. CHARM3R improves generalization to unseen camera heights by more than 45%, achieving SoTA performance on the CARLA dataset.
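A minimal sketch of this depth-averaging idea, assuming a flat ground plane and a pinhole camera; the function and tensor names below are hypothetical illustrations, not the repository's actual modules:

import torch

def ground_depth(v_rows, focal, cy, cam_height):
    # Ground-plane back-projection under a flat-ground, pinhole-camera
    # assumption: a ground-contact pixel at image row v has depth
    # z = f * h_cam / (v - c_y).
    return focal * cam_height / (v_rows - cy)

def charm3r_depth(depth_regressed, depth_ground):
    # Sketch of the paper's core idea: the regressed depth error trends
    # negative and the ground-based depth error trends positive under camera
    # height changes, so averaging the two estimates cancels much of the bias.
    return 0.5 * (depth_regressed + depth_ground)

# Toy usage with made-up values
v = torch.tensor([400.0, 420.0])   # image rows of ground-contact points
z_ground = ground_depth(v, focal=960.0, cy=240.0, cam_height=1.5)
z_reg = torch.tensor([9.2, 8.1])   # depths regressed by the network
print(charm3r_depth(z_reg, z_ground))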

Much of the codebase is based on DEVIANT. Some implementations are from GrooMeD-NMS. CARLA rendering code is available at CARLA_Rendering, which we adapt from Viewpoint-Robustness.

Citation

If you find our work useful in your research, please consider starring the repo and citing:

@inproceedings{kumar2025charm3r,
   title={{CHARM3R: Towards Unseen Camera Height Robust Monocular $3$D Detector}},
   author={Kumar, Abhinav and Guo, Yuliang and Zhang, Zhihao and Huang, Xinyu and Ren, Liu and Liu, Xiaoming},
   booktitle={ICCV},
   year={2025}
}

Setup

  • Requirements

    1. Python 3.7
    2. PyTorch 1.10
    3. Torchvision 0.11
    4. CUDA 11.3
    5. Ubuntu 18.04/Debian 8.9

This setup is tested on an NVIDIA A100 GPU; other platforms have not been tested. Clone the repo first. Unless otherwise stated, the scripts and instructions below assume the working directory is the directory CHARM3R:

git clone https://github.com/abhi1kumar/CHARM3R.git
cd CHARM3R
  • CUDA & Python

Build the CHARM3R conda environment (inherited from DEVIANT) by installing the requirements:

conda create --name CHARM3R --file conda_GUP_environment_a100.txt
conda activate CHARM3R
pip install opencv-python pandas kornia==0.6.6
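Optionally, verify that the environment matches the tested versions listed above:

import torch, torchvision

# Quick sanity check against the tested configuration (PyTorch 1.10,
# Torchvision 0.11, CUDA 11.3)
print(torch.__version__)           # expect 1.10.x
print(torchvision.__version__)     # expect 0.11.x
print(torch.version.cuda)          # expect 11.3
print(torch.cuda.is_available())   # expect True on a CUDA machine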
  • CARLA, CODa, KITTI, nuScenes and Waymo Data

Follow the instructions in data_setup_README.md to set up CARLA, CODa, KITTI, nuScenes, and Waymo as follows (a layout sanity-check sketch follows the tree below):

CHARM3R
├── data
│      ├── carla
│      │      ├── ImageSets
│      │      ├── height0
│      │      │      ├── training
│      │      │      │     ├── calib
│      │      │      │     ├── image
│      │      │      │     └── label
│      │      │      │
│      │      │      └── validation
│      │      │            ├── calib
│      │      │            ├── image
│      │      │            └── label
│      │      ├── height-6
│      │      ├── height6
│      │      ├── height-12
│      │      ├── height12
│      │      ├── height-18
│      │      ├── height18
│      │      ├── height-24
│      │      ├── height24
│      │      ├── height-27
│      │      └── height30
│      │             ├── training
│      │             │     ├── calib
│      │             │     ├── image
│      │             │     └── label
│      │             │
│      │             └── validation
│      │                   ├── calib
│      │                   ├── image
│      │                   └── label
│      │
│      ├── coda
│      │      ├── ImageSets
│      │      ├── training
│      │      │     ├── calib
│      │      │     ├── image_2
│      │      │     └── label_2
│      │      │
│      │      └── testing
│      │            ├── calib
│      │            ├── image_2
│      │            └── label_2
│      ├── KITTI
│      │      ├── ImageSets
│      │      ├── kitti_split1
│      │      ├── training
│      │      │     ├── calib
│      │      │     ├── image_2
│      │      │     └── label_2
│      │      │
│      │      └── testing
│      │            ├── calib
│      │            └── image_2
│      │
│      ├── nusc_kitti
│      │      ├── ImageSets
│      │      ├── training
│      │      │     ├── calib
│      │      │     ├── image
│      │      │     └── label
│      │      │
│      │      └── validation
│      │            ├── calib
│      │            ├── image
│      │            └── label
│      │
│      └── waymo
│             ├── ImageSets
│             ├── training
│             │     ├── calib
│             │     ├── image
│             │     └── label
│             │
│             └── validation
│                   ├── calib
│                   ├── image
│                   └── label
│
├── experiments
├── images
├── lib
├── nuscenes-devkit        
│ ...
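As mentioned above, here is a small hypothetical check of the CARLA portion of this layout; the folder names are taken from the tree, so adjust the height list if your copy differs:

import os

heights = ["0", "-6", "6", "-12", "12", "-18", "18", "-24", "24", "-27", "30"]
for h in heights:
    for split in ["training", "validation"]:
        for sub in ["calib", "image", "label"]:
            path = os.path.join("data", "carla", "height" + h, split, sub)
            if not os.path.isdir(path):
                print("Missing:", path)
print("CARLA layout check done")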
  • AP Evaluation

Run the following to build the KITTI evaluation binaries for the AP|R40 metric:

sudo apt-get install libopenblas-dev libboost-dev libboost-all-dev gfortran
sh data/KITTI/kitti_split1/devkit/cpp/build.sh

Finally, we set up the Waymo evaluation. The Waymo evaluation runs in a separate py36_waymo_tf environment to avoid package conflicts with the CHARM3R environment:

# Set up environment
conda create -n py36_waymo_tf python=3.7
conda activate py36_waymo_tf
conda install cudatoolkit=11.3 -c pytorch

# Newer versions of tf are not available in conda, so install tf 2.4 via pip.
pip install tensorflow-gpu==2.4
conda install pandas
pip3 install waymo-open-dataset-tf-2-4-0 --user
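A minimal import check for the new environment, assuming the packages above installed cleanly:

# Run inside the py36_waymo_tf environment
import tensorflow as tf
from waymo_open_dataset import dataset_pb2  # from waymo-open-dataset-tf-2-4-0

print(tf.__version__)   # expect 2.4.x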

To verify that your Waymo evaluation works correctly, pass the ground-truth labels as predictions for a sanity check (replace the interpreter path below with your own py36_waymo_tf environment's python):

/mnt/home/kumarab6/anaconda3/envs/py36_waymo_tf/bin/python -u data/waymo/waymo_eval.py --sanity

You should see AP values of 100 in every entry after running this sanity check.

Training

Train the model:

chmod +x scripts_training.sh
./scripts_training.sh

Testing Pre-trained Models

Model Zoo

We provide logs, models, and predictions for the main experiments on the CARLA data splits, available for download here.

AP40 is reported at IoU3D thresholds 0.7 and 0.5 (in parentheses) for camera height offsets -0.70m, 0.0m, and 0.76m.

Data Splits | Method   | Config (Run) | Weight/Pred | Metrics | All (0.7) | -0.70m (0.7) | 0.0m (0.7) | 0.76m (0.7) | All (0.5) | -0.70m (0.5) | 0.0m (0.5) | 0.76m (0.5)
CARLA Val   | GUP Net  | gup_dla34    | gdrive      | AP40    | -         | 9.46         | 53.82      | 7.23        | -         | 41.66        | 76.47      | 40.97
CARLA Val   | +CHARM3R | charm3r_gup  | gdrive      | AP40    | -         | 19.45        | 55.68      | 27.33       | -         | 53.40        | 74.47      | 61.98
CARLA Val   | DEVIANT  | dev_dla34    | gdrive      | AP40    | -         | 8.63         | 50.18      | 6.25        | -         | 40.24        | 73.78      | 41.74
CARLA Val   | +CHARM3R | charm3r_dev  | gdrive      | AP40    | -         | 17.11        | 48.74      | 26.24       | -         | 49.28        | 70.21      | 63.60

Testing

Create the output folder in the CHARM3R directory:

mkdir output

Place models in the output folder as follows:

CHARM3R
├── output
│      ├── carla_charm3r_dev_dla34
│      ├── carla_charm3r_gup_dla34
│      ├── carla_dev_dla34_height0
│      └── carla_gup_dla34_height0
│
│ ...

Then, to test, run the inference script:

chmod +x scripts_inference.sh
./scripts_inference.sh

Qualitative Plots/Visualization

To generate qualitative plots and visualize the predicted and ground-truth boxes, type the following:

python plot/plot_qualitative_output.py --dataset carla --folder output/carla_charm3r_gup_dla34/results_test/data

Type the following to reproduce our other plots:

python plot/plot_performance_with_height.py

FAQ

  • Inference on older CUDA versions. Before running inference on an older CUDA version, source the provided environment file:
source cuda_9.0_env
  • Correct Waymo version. You should see a 16th column in each ground-truth file inside data/waymo/validation/label/, corresponding to num_lidar_points_per_box. If you do not see this column, run:
cd data/waymo
python waymo_check.py 

to see whether num_lidar_points_per_box is printed. If nothing is printed, you are using the wrong Waymo dataset version and should download the correct one.
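Alternatively, a short hypothetical snippet that inspects the label files directly (path as in the FAQ entry above):

import glob

# Count columns in the first few ground-truth files; fewer than 16 columns
# (i.e., no num_lidar_points_per_box) suggests the wrong dataset version.
for f in sorted(glob.glob("data/waymo/validation/label/*.txt"))[:5]:
    with open(f) as fh:
        cols = fh.readline().split()
    print(f, "->", len(cols), "columns")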

  • Cannot convert a symbolic Tensor (strided_slice:0) to a numpy array. This error indicates that a symbolic Tensor is being passed to a NumPy call, which means you have the wrong NumPy version. Install the correct NumPy:
pip install numpy==1.19.5

Acknowledgements

We thank the authors of the following awesome codebases:

  • DEVIANT
  • GrooMeD-NMS
  • Viewpoint-Robustness

Please also consider citing and starring them.

Contributions

We welcome contributions to the CHARM3R repo. Feel free to raise a pull request.


License

CHARM3R code is under the MIT license.

Contact

For questions, feel free to post here or drop an email to [email protected].
