readme: Add more context
marcojob committed Oct 3, 2024
1 parent 5147405 commit 7734282
Showing 6 changed files with 43 additions and 36 deletions.
2 changes: 1 addition & 1 deletion .devcontainer/blearn/devcontainer.json
@@ -17,7 +17,7 @@
},
"remoteUser": "asl",
"initializeCommand": ".devcontainer/devcontainer_optional_mounts.sh",
"postStartCommand": "pip install -e .",
"postStartCommand": "",
"mounts": [
{
"source": "${localEnv:HOME}/.bash-git-prompt",
43 changes: 39 additions & 4 deletions README.md
@@ -1,9 +1,44 @@
# radarmeetsvision
This repository contains the code for the ICRA 2025 submission (in review) "Radar Meets Vision: Robustifying Monocular Metric Depth Prediction for Mobile Robotics". The submitted version can be found on [arXiv](https://arxiv.org/abs/2410.00736).

-## Overview
+## Setup
The easiest way to get started is to use the respective devcontainer.
The devcontainer has all dependencies installed, including a patch that allows rendering depth images without a screen attached to the host.
To use the devcontainer, install Visual Studio Code and follow the [instructions](https://code.visualstudio.com/docs/devcontainers/containers).
In short, only VS Code and Docker are required.
If installing directly on the host machine is more desirable, the Dockerfiles are a good starting point for the required dependencies.
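If installing on the host, a minimal sketch could look as follows (this assumes the repository is already cloned, a recent Python 3 environment, and that the system-level dependencies from the Dockerfiles are in place):

```bash
# Minimal host setup sketch; system dependencies from the Dockerfiles are assumed to be installed.
cd radarmeetsvision           # repository root, adjust to the location of your clone
python3 -m venv .venv         # optional: isolated virtual environment
source .venv/bin/activate
pip install -e .              # editable install of the radarmeetsvision package
```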

## radarmeetsvision
This directory contains the network as a Python package. It can be installed using `pip install .`.

### Interface
The file `interface.py` aims to provide an interface to the network.
In this file, most methods for training and evaluation are provided.
Usage examples are provided in the `scripts` directory.
To reproduce the training and evaluation results, the two scripts `train.py` and `evaluate.py` can be run (a sketch follows below).
For details on the process, we refer to the paper, but in general, no adjustments are needed.
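
As a rough sketch, reproducing the evaluation might look like the following (the flags mirror those used in `ci/pr_evaluate_networks.bash`; all paths are placeholders, and the `train.py` location and arguments are assumptions, so check the `scripts` directory for the actual entry points):

```bash
# Evaluation: flags follow ci/pr_evaluate_networks.bash, paths are placeholders.
python3 scripts/evaluation/evaluate.py \
    --dataset <path/to/validation/dataset> \
    --config <path/to/evaluation_config.json> \
    --output <path/to/output/dir> \
    --network <path/to/pretrained/networks>

# Training: the exact script path and arguments are assumptions, see the scripts directory.
python3 scripts/train.py
```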

### metric_depth_network
-Contains the metric network based on [Depth Anything V2](https://github.com/DepthAnything/Depth-Anything-V2). All important code diffs compared to this upstream codebase are outlined in general.
+Contains the metric network based on [Depth Anything V2](https://github.com/DepthAnything/Depth-Anything-V2).
+All important code differences compared to this upstream codebase are outlined in general terms.

### Pretrained networks
We provide the networks obtained from training and used for evaluation in the paper [here](tbd).

## Training datasets
**blearn** is a tool that allows you to generate a synthetic image and depth training dataset using Blender.
Together with a mesh and texture obtained from photogrammetry, realistic synthetic datasets can be generated.
The script is executed with Blender's built-in Python interpreter, which has the advantage that the Blender Python API is already loaded correctly.

### Download existing datasets
Existing datasets can be downloaded from [here](tbd). The download contains the datasets, as well as the Blender projects used to generate them.

### Generating training datasets
In order to (re-)generate the training datasets, the following steps are needed:
1. Obtain an .fbx mesh and texture for the area of interest. We use aerial photogrammetry and Pix4D to create this output.
2. Import the .fbx file into a new Blender project. Ensure that only the following elements are present in the project: a mesh, a camera, and a sun light source. As a convenience, we provide the Blender projects used in the paper.
3. Create a configuration file: the easiest way is to start from an existing one. The main values to adjust are the camera intrinsics and the extent of the mesh, i.e. `paths`: `xmin`, `xmax`, ..., which define the area sampled on the mesh, as well as the number of samples the dataset should contain. One thing to consider is the position and orientation with respect to the Blender origin: the z-coordinate in this tool is always specified relative to the distance to ground, essentially doing terrain following. If you are using one of the provided meshes, the extent does not need to be adjusted.
4. The dataset rendering can then be started using `blender -b <path to Blender project file> --python blearn.py`. Ensure that you adjust the config file accordingly in `blearn.py` (see the example below).
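
A concrete invocation could look like this (the project filename is a placeholder; as noted above, the configuration file, e.g. `config/config_rhone.yml`, is selected inside `blearn.py`):

```bash
# Run headless from the blearn directory (path assumed); <project>.blend is a
# placeholder for one of the provided Blender projects. The config file used
# (e.g. config/config_rhone.yml) is set inside blearn.py.
cd blearn
blender -b <project>.blend --python blearn.py
```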

## Unit tests
Unit tests can be run either using `python3 -m unittest` or via the CI script `ci/pr_unittest.bash`.
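
For example, assuming the commands are run from the repository root:

```bash
# Run the unit tests directly with Python's built-in test runner ...
python3 -m unittest
# ... or via the CI wrapper script
bash ci/pr_unittest.bash
```
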
## Validation datasets
The method for obtaining the validation datasets is described in the paper. The datasets are made available [here](tbd).
26 changes: 0 additions & 26 deletions blearn/README.md

This file was deleted.

6 changes: 2 additions & 4 deletions blearn/blearn.py
@@ -15,12 +15,10 @@
if dn not in sys.path:
sys.path.append(dn)

-# Import own code now
-from src.blender import Blender # NOQA

+from src.blender import Blender

def main():
b = Blender(config_file="config/config_road_corridor_demo.yml")
b = Blender(config_file="config/config_rhone.yml")
b.start()


2 changes: 1 addition & 1 deletion ci/pr_evaluate_networks.bash
@@ -10,7 +10,7 @@ fi
rm -rf tests/resources/*.pkl

# Run the evaluation script
-python3 scripts/evaluation/evaluate_networks.py --dataset tests/resources --config tests/resources/test_evaluation.json --output tests/resources --network tests/resources
+python3 scripts/evaluation/evaluate.py --dataset tests/resources --config tests/resources/test_evaluation.json --output tests/resources --network tests/resources

TEX_FILE="tests/resources/results/results_table0.tex"
if [[ -f "$TEX_FILE" && -s "$TEX_FILE" ]]; then
File renamed without changes.
