Update readmes
cgeller committed Feb 9, 2024
1 parent 91fdcee commit 12eeb91
Showing 3 changed files with 44 additions and 23 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -34,7 +34,7 @@ This repository contains CARLOS, the official reference implementation of the op
> **CARLOS: An Open, Modular, and Scalable Simulation Framework for the Development and Testing of Software for C-ITS**
> ([*arXiv link follows*](TODO))
>
> [Christian Geller](https://www.ika.rwth-aachen.de/de/institut/team/fahrzeugintelligenz-automatisiertes-fahren/geller.html), [Benedikt Haas](TODO), [Amarin Kloeker](https://www.ika.rwth-aachen.de/en/institute/team/vehicle-intelligence-automated-driving/kloeker-amarin.html), [Jona Hermens](TODO), [Bastian Lampe](https://www.ika.rwth-aachen.de/en/institute/team/vehicle-intelligence-automated-driving/lampe.html), [Lutz Eckstein](https://www.ika.rwth-aachen.de/en/institute/team/univ-prof-dr-ing-lutz-eckstein.html)
> [Christian Geller](https://www.ika.rwth-aachen.de/de/institut/team/fahrzeugintelligenz-automatisiertes-fahren/geller.html), [Benedikt Haas](https://github.com/BenediktHaas96), [Amarin Kloeker](https://www.ika.rwth-aachen.de/en/institute/team/vehicle-intelligence-automated-driving/kloeker-amarin.html), [Jona Hermens](TODO), [Bastian Lampe](https://www.ika.rwth-aachen.de/en/institute/team/vehicle-intelligence-automated-driving/lampe.html), [Lutz Eckstein](https://www.ika.rwth-aachen.de/en/institute/team/univ-prof-dr-ing-lutz-eckstein.html)
> [Institute for Automotive Engineering (ika), RWTH Aachen University](https://www.ika.rwth-aachen.de/en/)
>
> <sup>*Abstract* – Future mobility systems and their components are increasingly defined by their software. The complexity of these cooperative intelligent transport systems (C-ITS) and the ever-changing requirements posed at the software require continual software updates. The dynamic nature of the system and the practically innumerable scenarios in which different software components work together necessitate efficient and automated development and testing procedures that use simulations as one core methodology. The availability of such simulation architectures is a common interest among many stakeholders, especially in the field of automated driving. That is why we propose CARLOS - an open, modular, and scalable simulation framework for the development and testing of software in C-ITS that leverages the rich CARLA and ROS ecosystems. We provide core building blocks for this framework and explain how it can be used and extended by the community. Its architecture builds upon modern microservice and DevOps principles such as containerization and continuous integration. In our paper, we motivate the architecture by describing important design principles and showcasing three major use cases - software prototyping, data-driven development, and automated testing. We make CARLOS and example implementations of the three use cases publicly available at [https://github.com/ika-rwth-aachen/carlos](https://github.com/ika-rwth-aachen/carlos).</sup>
30 changes: 21 additions & 9 deletions automated-testing/README.md
@@ -35,13 +35,6 @@ The script sequentially evaluates all scenario files in the selected folder. Aft

<p align="center"><img src="../utils/images/automated-testing-cli.png" width=800></p>

#### Self-Hosted GitHub Runner

As mentioned before, a [self-hosted GitHub Runner](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners) needs to be set up in order to run the following pipeline. Apart from ensuring the [system requirements](../utils/requirements.md), the runner currently also needs to be started in a **local session** (i.e. not via SSH, RDP or other tools) and has to have access to the primary "display" (see [X window system](https://en.wikipedia.org/wiki/X_Window_System)). You can validate this by running the following command in the same session where you want to start the runner:
```bash
echo $DISPLAY
```
The result should be something simple like `:1`. If there is anything in front of the colon, the session is most likely not local and thus not suitable for this setup.

### Automated CI Pipeline

@@ -62,8 +55,27 @@ They can be used within a GitHub CI workflow to create a job list of simulation

#### Workflow

The workflow presented in [automated-testing.yml](../.github/workflows/automated-testing.yml) combines the different actions and performs simulation evaluation analog to the local `evaluation-scenarios.sh` . It leverages the modularity and customizability of the provided actions by reusing them and configuring them differently. For example, the `generate-job-matrix` allows customizing the `query-string`, which is used for matching and collecting fitting scenarios as a job matrix for following pipeline steps.
The workflow presented in [automated-testing.yml](../.github/workflows/automated-testing.yml) combines the different actions and performs simulation evaluation analogously to the local `evaluation-scenarios.sh`. It leverages the modularity and customizability of the provided actions by reusing them and configuring them differently. For example, the `generate-job-matrix` action allows customizing the `query-string`, which is used for matching and collecting fitting scenarios as a job matrix for subsequent pipeline steps.
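The matching step can be sketched as follows. The actual implementation of the `generate-job-matrix` action is not shown in this README, so this is only an illustrative assumption: a glob-style `query-string` is applied to the scenario file names, and the matches are emitted as a GitHub-Actions-style JSON matrix.

```python
import fnmatch
import json

def generate_job_matrix(scenario_files, query_string):
    """Sketch: collect scenario files matching a glob-style query string
    into a GitHub-Actions-style job matrix (assumed behavior, not the
    action's actual implementation)."""
    matched = [f for f in scenario_files if fnmatch.fnmatch(f, query_string)]
    return json.dumps({"include": [{"scenario": f} for f in sorted(matched)]})

# Hypothetical scenario file names:
files = ["town01-rain.xosc", "town01-night.xosc", "overtake.xosc"]
print(generate_job_matrix(files, "town01-*.xosc"))
```

Each entry of the resulting matrix would then fan out into one pipeline job.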

#### Self-Hosted GitHub Runner

As mentioned before, a [self-hosted GitHub Runner](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners) needs to be set up in order to run the described CI pipeline in your custom repository fork. Apart from ensuring the [system requirements](../utils/requirements.md), the runner currently also needs to be started in a **local session** (i.e. not via SSH, RDP or other tools) and has to have access to the primary "display" (see [X window system](https://en.wikipedia.org/wiki/X_Window_System)). You can validate this by running the following command in the same session where you want to start the runner:
```bash
echo $DISPLAY
```
The result should be something simple like `:1`. If there is anything in front of the colon, the session is most likely not local and thus not suitable for this setup.
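The locality check can also be scripted. A minimal sketch (not part of CARLOS) applying the same rule — nothing may precede the colon:

```python
import os
import sys

def display_is_local(display):
    """Apply the rule above: a local X display looks like ':1', while a
    forwarded session yields e.g. 'localhost:10.0' (text before the colon)."""
    if not display or ":" not in display:
        return False
    host, _, _ = display.partition(":")
    return host == ""

if __name__ == "__main__":
    display = os.environ.get("DISPLAY", "")
    if display_is_local(display):
        print(f"OK: DISPLAY={display} looks local")
    else:
        sys.exit(f"DISPLAY={display!r} does not look like a local session")
```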

### Set Up Your Own Simulation Testing Pipeline

Follow these steps to set up your own simulation testing pipeline:
1. [Fork](https://docs.github.com/de/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo) the CARLOS repository on GitHub.
2. Add a self-hosted runner using the information provided [above](#self-hosted-github-runner).
3. Push additional OpenSCENARIO test files to the [scenarios](../utils/scenarios/) folder.
4. Observe the GitHub workflow and scenario test evaluations.

You may now update the specific test metrics to enable comprehensive testing. In addition, custom ITS functions can be used to control the vehicle instead of the basic CARLA autopilot, enabling tests of the actual software under development.


### Outlook - Scalability using Orchestration Tools
## Outlook - Scalability using Orchestration Tools

The principles and workflows demonstrated here already show the effectiveness of automating the simulation processes. A much higher degree of automation can be achieved by incorporating more sophisticated orchestration tools such as [Kubernetes](https://kubernetes.io/docs/concepts/overview/), [Docker Swarm](https://docs.docker.com/engine/swarm/) or others. These tools allow for better scalability while also simplifying the deployment and monitoring of the services.
35 changes: 22 additions & 13 deletions data-driven-development/README.md
@@ -10,8 +10,9 @@ The subsequent demonstration showcases rapid *data driven development* and speci
- **flexibility** and containerization
- automation and **scalability**

## Prerequisites
## Getting Started

### Requirements and Installation
> [!IMPORTANT]
> Make sure that all [system requirements](../utils/requirements.md) are fulfilled.
> Additionally, a Python installation is required on the host for this use case. We recommend using [conda](https://docs.conda.io/projects/conda/en/stable/index.html).
@@ -29,8 +30,6 @@ Alternatively, you can also use Pip:
pip install -r requirements.txt
```

## Getting Started

In the initial demo [software-prototyping](../software-prototyping), we demonstrated the integration of a Function Under Test (FUT) with CARLOS, exploring its capabilities through practical experimentation. While these tests validated the general functionality of our image segmentation module, it became clear that there is considerable potential to improve its performance. Given that this module, like many AD functions, relies heavily on machine learning models trained with specific datasets, the quality and quantity of this training data are crucial.

### Permutation-based Data Generation
@@ -40,16 +39,16 @@ Given that the specific nature of the data is less critical, the main objective
Run the demo for permutation-based data generation:
```bash
# carlos/data-driven-development$
python ./data_generation.py --config data-driven-delevopment-demo-image-segmentation.json
python ./data_generation.py --config data-driven-development-demo-image-segmentation.json
```

or use the top-level `run-demo.sh` script:
```bash
# carlos$
./run-demo.sh data-driven-delevopment
./run-demo.sh data-driven-development
```

Data is generated by creating all possible permutations from a set of configuration parameters, managed through a JSON configuration file. This results in different simulation runs in several parameter dimensions, which are simulated in sequence. A comprehensive [example configuration](./config/data-driven-delevopment-demo-image-segmentation.json) is provided within the [config](./config/) folder. While the current implementation is limited to the settings specified [below](#configuration-table), the provided code is modular and can be easily customized to fit your requirements.
Data is generated by creating all possible permutations from a set of configuration parameters, managed through a JSON configuration file. This results in different simulation runs in several parameter dimensions, which are simulated in sequence. A comprehensive [example configuration](./config/data-driven-development-demo-image-segmentation.json) is provided within the [config](./config/) folder. While the current implementation is limited to the settings specified [below](#configuration-table), the provided code is modular and can be easily customized to fit your requirements.

```json
"simulation_configs":
@@ -67,17 +66,17 @@ Data is generated by creating all possible permutations from a set of configurat
}
```

This examplaric demo configures 12 different simulation runs, by applying permutations in the CARLA town, the weather settings and the initial spawning point of the ego vehicle. All simulations run for a maximum simulation time of 60s and all relevant image segmentation data topics are recorded in dedicated ROS bags.
This exemplary demo configures 12 different simulation runs by applying permutations of the CARLA town, the weather settings, and the initial spawn point of the ego vehicle. All simulations run for a maximum simulation time of 60 s, and all relevant image segmentation data topics are recorded in dedicated ROS bags.

Thus, data generation at large scale becomes possible and helps developers to achive diverse and useful data for any application.
Thus, large-scale data generation becomes possible, helping developers obtain diverse and useful data for any application.
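The permutation expansion can be sketched with `itertools.product`; the parameter values below are hypothetical stand-ins mirroring the example's dimensions (2 towns x 3 weather presets x 2 spawn points = 12 runs):

```python
from itertools import product

# Hypothetical parameter lists; the real values live in the JSON config.
towns = ["Town01", "Town10HD"]
weathers = ["ClearSunset", "WetNoon", "HardRainNoon"]
spawn_points = [1, 42]

# Every combination becomes one simulation run, executed in sequence.
runs = [
    {"town": t, "weather": w, "spawn_point": s}
    for t, w, s in product(towns, weathers, spawn_points)
]
print(len(runs))  # 12
```

Adding one more value to any dimension multiplies the number of runs accordingly, which is why the approach scales so quickly.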

### Scenario-based Data Generation

Assuming we improved our model, we now aim to evaluate its performance in targeted, real-world scenarios. Hence, we need to generate data in such concrete scenarios, for which the scenario-based data generation feature can be utilized. In this example, we demonstrate how a list of multiple OpenSCENARIO files can also be integrated into the data generation pipeline to generate data under those specific conditions.

```bash
# carlos/data-driven-development$
python ./data_generation.py --config data-driven-delevopment-demo-scenario-execution.json
python ./data_generation.py --config data-driven-development-demo-scenario-execution.json
```

All scenarios are executed sequentially and data is generated analogously to above. The respective configuration file mainly contains a path to, or a list of, specific predefined OpenSCENARIO files:
@@ -95,12 +94,22 @@ All scenarios are executed sequentially and data is generated analogous to above

Following this initial scenario-based simulation approach, the third demo, [automated testing](../automated-testing/README.md), focuses on the automatic execution and evaluation of scenarios at large scale. In addition, a full integration into a CI workflow is provided.
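The sequential execution can be sketched as a simple plan in which each scenario file gets its own recording target. The function name, file names, and output-naming scheme below are illustrative assumptions, not the pipeline's actual API:

```python
from pathlib import Path

def plan_scenario_runs(scenario_files, output_path="./data/"):
    """Sketch: map each OpenSCENARIO file to a dedicated recording
    directory, in the deterministic order the runs would execute."""
    plan = []
    for scenario in sorted(scenario_files):
        plan.append({
            "scenario": scenario,
            "output": str(Path(output_path) / Path(scenario).stem),
        })
    return plan

print(plan_scenario_runs(["overtake.xosc", "cut-in.xosc"]))
```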

### Record Your Own Data

Follow these steps to record your own data:

1. Specify [sensor configuration file(s)](https://carla.readthedocs.io/projects/ros-bridge/en/latest/carla_spawn_objects/#spawning-sensors) to provide sensor data within ROS.
2. Adjust the parameters in the data pipeline configuration file. A full list of supported parameters is given [below](#configuration-parameters).
3. Start the pipeline with `python ./data_generation.py --config <your-modified-config-file>`.
4. Observe the recorded ROS 2 bag files for further postprocessing.

You may now adjust the configuration parameters to fit your specific use case. In addition, the pipeline code itself can be updated in the [data_generation.py](./data_generation.py) Python file.
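A minimal configuration for step 2 might look as follows. The top-level key names follow the parameter tables in this README, but the topic entry, the dict-of-topics format, and the file paths are placeholder assumptions:

```python
import json

# Hypothetical minimal pipeline configuration (placeholder values).
config = {
    "settings": {
        "record_topics": {"/carla/ego_vehicle/rgb_front/image": "sensor_msgs/msg/Image"},
        "output_path": "./data/",
    },
    "simulation_configs": {
        "scenario_configs": {
            "scenario_files": ["./scenarios/my-scenario.xosc"],
        },
    },
}

# Serialize it; saved e.g. as my-config.json, it could be passed via --config.
config_json = json.dumps(config, indent=2)
```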

### Configuration Parameters
## Configuration Parameters

The JSON configuration file for the data generation pipeline consists of two main sections: `settings` and `simulation_configs`. The `settings` section specifies general parameters. The `simulation_configs` section must either contain `permutation_configs` for permutation-based simulations or `scenario_configs` for scenario-based simulations.
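The either/or rule can be sketched as a small validation step, assuming (not confirmed by this README) that providing both or neither section is an error:

```python
def select_mode(config):
    """Return which simulation mode a config selects, enforcing the
    two-section structure described above (sketch, not pipeline code)."""
    for key in ("settings", "simulation_configs"):
        if key not in config:
            raise ValueError(f"missing required section: {key}")
    sim = config["simulation_configs"]
    modes = [k for k in ("permutation_configs", "scenario_configs") if k in sim]
    if len(modes) != 1:
        raise ValueError("need exactly one of permutation_configs or scenario_configs")
    return modes[0]

print(select_mode({"settings": {}, "simulation_configs": {"scenario_configs": []}}))
# scenario_configs
```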

#### General settings (`settings`)
### General Settings (`settings`)

| Name | Description | Note | Required | Default |
| --- | --- | --- | --- | --- |
@@ -110,7 +119,7 @@ The JSON configuration file for the data generation pipeline consists of two mai
| `record_topics` | Dict of ROS 2 topics to be recorded | | not required | - |
| `output_path` | Path for storing generated data | | not required | `./data/` |

#### Permutation settings (`permutation_configs`)
### Permutation-based Settings (`permutation_configs`)
| Name | Description | Note | Required | Default |
| --- | --- | --- | --- | --- |
| `num_executions` | Number of times a simulation based on a single permutation is executed | Must be an integer | not required | 1 |
@@ -122,7 +131,7 @@ The JSON configuration file for the data generation pipeline consists of two mai
| `vehicle_occupancy` | List of numbers between 0 and 1 that spawn vehicles proportionally to the number of available spawn points | vehicles are spawned via generate_traffic.py | not required | - |
| `weather` | List of weather conditions | [Weather conditions list](https://github.com/carla-simulator/carla/blob/master/PythonAPI/docs/weather.yml#L158) | not required | depends on town, in general "ClearSunset" |

#### Scenario settings
### Scenario-based Settings
| Name | Description | Note | Required | Default |
| --- | --- | --- | --- | --- |
| `num_executions` | Number of times a simulation based on a single scenario is executed | Must be an integer | not required | 1 |
