The LaMAria Dataset
Benchmarking Egocentric Visual-Inertial SLAM at City Scale

Anusha Krishnan*, Shaohui Liu*, Paul-Edouard Sarlin*, Oscar Gentilhomme,
David Caruso, Maurizio Monge, Richard Newcombe, Jakob Engel, Marc Pollefeys

*Equal contribution


We present LaMAria, an egocentric, city-scale benchmark for visual-inertial SLAM, featuring ~22 hours / 70 km of trajectories with survey-grade control points that provide centimeter-accurate ground truth.

Using LaMAria, you can:

  • Evaluate SLAM systems under real-world egocentric conditions: low light, moving platforms, exposure changes, and time-varying sensor calibration.
  • Benchmark against highly accurate ground truth.

This dataset offers 23 training sequences and 63 test sequences.

Figure 1: Overview of the LaMAria dataset and benchmark.

To learn more about the dataset, please refer to our main dataset website or our paper.

Installation

Create an environment:

python3 -m venv lamaria_env
source lamaria_env/bin/activate

Install the package:

git clone [email protected]:cvg/lamaria.git
cd lamaria
python -m pip install -r requirements.txt
python -m pip install -e .

Downloading the dataset

Our dataset is fully hosted on the archive here.

We provide a downloading script here:

python -m tools.download_lamaria --help

It can be used to download:

  • Specific sequences or entire sets (training/test).
  • Specific types:
    • Raw - Downloads raw .vrs files and Aria calibration file.
    • ASL - Downloads ASL folder and pinhole calibration file.
    • ROSbag - Downloads ROS1 bag and pinhole calibration file.
    • All - Downloads both raw and pinhole data.

Ground truth files are automatically downloaded for the training sequences.

Data sizes

  • Raw data (vrs + aria_calibrations): ~890 GB (training: ~117 GB + test: ~773 GB)
  • ASL folder + pinhole_calibrations: ~1.1 TB (training: ~170 GB + test: ~884 GB)
  • ROSbag + pinhole_calibrations: ~1.5 TB (training: ~247 GB + test: ~1.3 TB)

Some example commands

To download the raw data of a specific sequence (e.g., R_01_easy):

python -m tools.download_lamaria --output_dir ./lamaria_data \
       --sequences R_01_easy --type raw

To download 3 custom sequences in rosbag format:

python -m tools.download_lamaria --output_dir ./lamaria_data \
       --sequences R_01_easy R_02_easy R_03_easy --type rosbag

To download 3 custom sequences in asl format:

python -m tools.download_lamaria --output_dir ./lamaria_data \
       --sequences R_01_easy R_02_easy R_03_easy --type asl

To download all training sequences in both raw and pinhole formats:

python -m tools.download_lamaria --output_dir ./lamaria_data \
       --set training --type all

The downloaded raw data is organized as follows:

output_dir/
├── training/
│   ├── R_01_easy/
│   │   ├── aria_calibrations/
│   │   │   └── R_01_easy.json
│   │   ├── ground_truth/
│   │   │   ├── pGT/
│   │   │   │   └── R_01_easy.txt
│   │   │   └── control_points/
│   │   │       └── # if sequence has CPs
│   │   ├── raw_data/
│   │   │   └── R_01_easy.vrs
│   └── ...
└── test/ # no ground truth
    └── sequence_1_1

For more information about the training and test sequences, refer to the dataset details. To learn more about the various data formats, calibration files, and ground truth, visit our documentation.
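
As an illustration, the snippet below (a minimal sketch, assuming the layout above and a hypothetical output directory ./lamaria_data) lists the downloaded training sequences and checks which of them include control points:

from pathlib import Path

# Assumed layout from the tree above: output_dir/training/<sequence>/...
output_dir = Path("./lamaria_data")

for seq_dir in sorted((output_dir / "training").iterdir()):
    if not seq_dir.is_dir():
        continue
    pgt = seq_dir / "ground_truth" / "pGT" / f"{seq_dir.name}.txt"
    cp_dir = seq_dir / "ground_truth" / "control_points"
    has_cps = cp_dir.is_dir() and any(cp_dir.iterdir())
    print(f"{seq_dir.name}: pGT={'yes' if pgt.exists() else 'no'}, "
          f"control points={'yes' if has_cps else 'no'}")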

Evaluation

Our training and test sequences are grouped into challenge categories of varying difficulty. We provide two main ways to evaluate your SLAM results on our data:

  1. Evaluation via the website: Upload your estimates to our website to obtain evaluation results; all sequences can be submitted. Results on test sequences are displayed on the public leaderboard.
  2. Standalone evaluation: Run the evaluation scripts locally using the provided lamaria package. These scripts can be run only on the training sequences (since ground truth is required).

Input format

The input pose estimate file must be a text file where each line corresponds to a timestamped pose in the following format:

timestamp tx ty tz qx qy qz qw

The timestamp must be in nanoseconds.
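
For illustration, here is a minimal Python sketch that parses such a file into numpy arrays (the helper name and file path are hypothetical):

import numpy as np

def read_pose_estimates(path):
    """Parse a pose file with lines: timestamp tx ty tz qx qy qz qw."""
    timestamps, positions, quaternions = [], [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            values = line.split()
            assert len(values) == 8, f"Expected 8 fields, got {len(values)}"
            timestamps.append(int(values[0]))                    # nanoseconds
            positions.append([float(v) for v in values[1:4]])    # tx ty tz
            quaternions.append([float(v) for v in values[4:8]])  # qx qy qz qw
    return (np.array(timestamps, dtype=np.int64),
            np.array(positions), np.array(quaternions))

ts, t, q = read_pose_estimates("demo/estimate/R_01_easy.txt")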

Download demo data

We provide a small script quickstart.sh that downloads demo data from the archive. The standalone evaluations and example visual-inertial optimization can be run on the downloaded data.

chmod +x quickstart.sh
./quickstart.sh

The first sequence of the controlled experimental set (R_01_easy) and of the additional set (sequence_1_19) are stored in the demo/ folder.

Evaluation w.r.t. Control Points

Three sequences in the controlled experimental set, as well as all sequences in the additional set, can be evaluated w.r.t. control points. The script computes the score and control point recall based on the alignment of the estimated trajectory to the control points.

To perform the evaluation on the downloaded demo data:

python -m evaluate_wrt_control_points --estimate demo/estimate/sequence_1_19.txt \
       --cp_json_file demo/sequence_1_19/ground_truth/control_points/sequence_1_19.json \
       --device_calibration_json demo/sequence_1_19/aria_calibrations/sequence_1_19.json \
       --corresponding_sensor imu \
       --output_path demo_outputs/eval_cp

This command evaluates the provided estimate w.r.t. control points and stores the results in demo_outputs/eval_cp. The --corresponding_sensor flag indicates which sensor the poses are expressed in (e.g., imu or cam0).

To learn more about the control points and the sparse evaluation, refer to Sections 4.1 and 4.2 of our paper.

Evaluation w.r.t. Pseudo-GT

This script evaluates the estimated trajectory w.r.t. the dense pseudo-ground truth (pGT) from our ground-truthing pipeline. It requires the alignment obtained from the sparse evaluation (w.r.t. control points). After aligning the estimated trajectory to the pseudo-ground truth, the script computes the pose recall @ 1m and @ 5m.

To perform the evaluation on the downloaded demo data:

python -m evaluate_wrt_pgt --estimate demo/estimate/sequence_1_19.txt \
       --gt_estimate demo/sequence_1_19/ground_truth/pGT/sequence_1_19.txt \
       --sparse_eval_result demo_outputs/eval_cp/sparse_eval_result.npy
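
For intuition, here is a minimal sketch of the recall metric itself; this is not the package's implementation and assumes both trajectories are already aligned and sampled at corresponding timestamps as (N, 3) position arrays in meters:

import numpy as np

def pose_recall(est_positions, gt_positions, threshold_m):
    """Fraction of poses whose position error is below the threshold.

    Both inputs are (N, 3) arrays of corresponding, already-aligned
    positions in meters.
    """
    errors = np.linalg.norm(est_positions - gt_positions, axis=1)
    return float(np.mean(errors < threshold_m))

# Toy example: a 2 cm offset yields 100% recall @ 1 m and @ 5 m.
gt_xyz = np.zeros((100, 3))
est_xyz = gt_xyz + 0.02
print(pose_recall(est_xyz, gt_xyz, 1.0), pose_recall(est_xyz, gt_xyz, 5.0))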

EVO Evaluation w.r.t. MPS

This script evaluates the estimated trajectory w.r.t. the pseudo-ground truth from Machine Perception Services (MPS). It computes the Absolute Trajectory Error (ATE) RMSE between the two trajectories.

To perform the evaluation on the downloaded demo data:

python -m evaluate_wrt_mps --estimate demo/estimate/R_01_easy.txt \
       --gt_estimate demo/R_01_easy/ground_truth/pGT/R_01_easy.txt

This method is used to evaluate results on the controlled experimental set, where the ground-truth file is built directly from the MPS-estimated trajectory.
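
The same ATE RMSE can also be reproduced with the evo Python package (pip install evo); the sketch below is an illustration under assumptions, not our evaluation script. It relies on the pose files using the TUM layout (timestamp tx ty tz qx qy qz qw); note that timestamps are in nanoseconds, so the association tolerance must be given in the same unit:

from evo.core import metrics, sync
from evo.tools import file_interface

# Both files use "timestamp tx ty tz qx qy qz qw", i.e. TUM trajectory layout.
traj_ref = file_interface.read_tum_trajectory_file(
    "demo/R_01_easy/ground_truth/pGT/R_01_easy.txt")
traj_est = file_interface.read_tum_trajectory_file(
    "demo/estimate/R_01_easy.txt")

# Associate poses by timestamp; the tolerance is in the same unit as the
# stamps (here nanoseconds, so 1e6 ns = 1 ms).
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est, max_diff=1e6)

# Rigid (SE3) alignment of the estimate to the reference trajectory.
traj_est.align(traj_ref, correct_scale=False)

# ATE RMSE on the translation part.
ape = metrics.APE(metrics.PoseRelation.translation_part)
ape.process_data((traj_ref, traj_est))
print("ATE RMSE [m]:", ape.get_statistic(metrics.StatisticsType.rmse))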

Converting VRS to ASL/ROSbag format

We provide some utility scripts in the tools/ folder to convert between different data formats used in our dataset.

  1. Converting from .vrs to ASL folder format:

python -m tools.vrs_to_asl_folder --vrs_file path/to/sequence.vrs --output_asl_folder path/to/output_asl_folder

  2. Converting from ASL folder to ROS1 bag:

python -m tools.asl_folder_to_rosbag --input_asl_folder path/to/asl_folder --output_rosbag path/to/output.bag

  3. Undistorting ASL folder images to PINHOLE format:

python -m tools.undistort_asl_folder --calibration_file path/to/calibration.json --asl_folder path/to/asl_folder --output_asl_folder path/to/output_undistorted_asl_folder

Note: The undistortion script requires COLMAP to be installed from source. Please follow the instructions here to install the library.

Example Visual-Inertial Optimization

As a byproduct of our ground-truthing pipeline, we provide an example visual-inertial optimization pipeline built within the lamaria package. This pipeline takes as input a pose estimate file and a .vrs file, performs keyframing and triangulation, and optimizes the trajectory using visual and inertial residuals.

The input pose estimate file must follow the same format as described in the input format section above.

Additional Installation

To extract images from a .vrs file, it is required to install the VRS Command Line Tools. Please follow the instructions here to install the library from source.

Running the example

To run an example visual-inertial optimization on the demo data (downloaded as described in download demo data; the initial estimate file in the demo comes from a run of OpenVINS):

python -m example_vi_optimization --output demo_outputs/vi_optim \
       --vrs demo/sequence_1_18/raw_data/sequence_1_18.vrs \
       --estimate demo/estimate/sequence_1_18.txt


Figure 2: Optimized trajectory in the COLMAP visualizer.

BibTeX citation

@InProceedings{Krishnan_2025_ICCV,
  author    = {Krishnan, Anusha and
               Liu, Shaohui and
               Sarlin, Paul-Edouard and
               Gentilhomme, Oscar and
               Caruso, David and
               Monge, Maurizio and
               Newcombe, Richard and
               Engel, Jakob and
               Pollefeys, Marc},
  title     = {Benchmarking Egocentric Visual-Inertial SLAM at City Scale},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2025}
}

License

The dataset, documentation, and other non-code content in this project are licensed under the Creative Commons Attribution 4.0 International License.

The code in this repository is licensed under the MIT License.
