We introduce LayeredDepth, a real and a synthetic dataset tailored to the multi-layer depth estimation task. The real dataset serves as a benchmark, containing in-the-wild images with high-quality, human-annotated relative-depth ground truth. Complementing the real-world benchmark, our synthetic dataset enables training strong models for multi-layer depth estimation.
If you find LayeredDepth useful for your work, please consider citing our academic paper:
Hongyu Wen,
Yiming Zuo,
Venkat Subramanian,
Patrick Chen,
Jia Deng
@article{wen2025layereddepth,
title={Seeing and Seeing Through the Glass: Real and Synthetic Data for Multi-Layer Depth Estimation},
author={Hongyu Wen and Yiming Zuo and Venkat Subramanian and Patrick Chen and Jia Deng},
journal={arXiv preprint arXiv:2503.11633},
year={2025},
}
To set up the environment, run:
conda env create -f env.yaml
conda activate layereddepth
The benchmark data is available under the CC0 license. Download the validation set (images and ground truth) and the test set (images only) here.
Unzip the validation set into the data/ directory.
For each image `i.png` in LayeredDepth, save your depth estimation for layer j as `i_j.png` in the `estimations` directory. For example, the first-layer depth estimation for image `0.png` should be named `0_1.png`.
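For illustration, here is a minimal sketch of writing predictions in this layout. The 16-bit PNG encoding, the normalization, and the helper name `save_layered_predictions` are assumptions made for the example, not the benchmark's required format; check the evaluation scripts for the exact encoding they expect.

```python
import os

import numpy as np
from PIL import Image

def save_layered_predictions(depth_layers, image_index, out_dir="estimations"):
    """Save per-layer depth maps for one image as <image_index>_<layer>.png.

    depth_layers: list of HxW float arrays, one per predicted layer
                  (layer indices start at 1).
    NOTE: the 16-bit normalized encoding below is an assumption for
    illustration; confirm the expected format in the evaluation scripts.
    """
    os.makedirs(out_dir, exist_ok=True)
    for j, depth in enumerate(depth_layers, start=1):
        d = np.asarray(depth, dtype=np.float64)
        # Monotonic rescaling to [0, 65535] preserves relative depth ordering.
        d = (d - d.min()) / max(d.max() - d.min(), 1e-8) * 65535.0
        Image.fromarray(d.astype(np.uint16)).save(
            os.path.join(out_dir, f"{image_index}_{j}.png"))
```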
Then run
python3 evaluate_all.py # for all relative depth tuples
python3 evaluate_layer1.py # for first layer relative depth tuples
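For intuition, evaluation on relative depth tuples typically measures the fraction of annotated point pairs whose predicted ordering matches the human annotation. The sketch below illustrates that idea; the tuple format and sign convention here are assumptions, and `evaluate_all.py` defines the actual protocol.

```python
import numpy as np

def ordinal_accuracy(pred_depth, tuples):
    """Fraction of relative-depth tuples whose ordering is predicted correctly.

    pred_depth: HxW array of predicted depth for one layer.
    tuples: iterable of ((y1, x1), (y2, x2), label), with label +1 if
            point 1 is closer than point 2 and -1 otherwise.
            (Hypothetical format; see evaluate_all.py for the real one.)
    """
    correct = 0
    total = 0
    for (y1, x1), (y2, x2), label in tuples:
        # A closer point has smaller depth, so a positive difference
        # (depth2 - depth1) means point 1 is predicted closer.
        pred = np.sign(pred_depth[y2, x2] - pred_depth[y1, x1])
        correct += int(pred == label)
        total += 1
    return correct / total
```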
To evaluate your model on the test set and compare your results with the baselines, you need to submit your depth predictions to the evaluation server.
Submit your predictions to the evaluation server using the command below. Ensure your submission follows the same depth estimation format described above. Replace the placeholders with your actual email, submission path, and method name:
python3 upload_submission.py --email your_email --path path_to_your_submission --method_name your_method_name --benchmark multi_layer
python3 upload_submission.py --email your_email --path path_to_your_submission --method_name your_method_name --benchmark first_layer
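Before uploading, it can help to verify that the submission directory is complete. Below is a minimal sketch, assuming test images are indexed 0, 1, ... and predictions follow the `i_j.png` naming described above; the `check_submission` helper and the counts in the usage line are hypothetical.

```python
import os

def check_submission(sub_dir, num_images, num_layers=1):
    """Report any missing <i>_<j>.png files before uploading."""
    missing = [
        f"{i}_{j}.png"
        for i in range(num_images)
        for j in range(1, num_layers + 1)
        if not os.path.isfile(os.path.join(sub_dir, f"{i}_{j}.png"))
    ]
    if missing:
        print(f"{len(missing)} file(s) missing, e.g. {missing[:5]}")
    else:
        print("Submission directory looks complete.")

check_submission("path_to_your_submission", num_images=100)  # adjust to the test set size
```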
Upon submission, you will receive a unique submission ID, which serves as the identifier for your submission. Results are typically emailed within an hour. Please note that each email address may be used for only three submissions every seven days.
To make your submission public, run the command below. Please replace the placeholders with your specific details, including your submission ID, email, and method name. You may specify the publication name, or use "Anonymous" if the publication is under review. Providing URLs for the publication and code is optional.
python3 modify_submission.py --id submission_id --email your_email --anonymous False --method_name your_method_name --publication "your publication name" --url_publication "https://your_publication" --url_code "https://your_code"
Coming Soon!