52/feature/add docs #53

Merged 12 commits on Nov 5, 2023
`docs/README.md` (+85 lines)
# Autonomy Research Testbed Documentation

These docs are meant to be a _succinct_ reference to commands, packages, and any other
information that is useful when working with the `autonomy-research-testbed` platform.

## Table of Contents

1. Design
1. [Repository Structure](./design/repository_structure.md)
2. [`atk.yml`](./design/atk.md)
3. [Dockerfiles](./design/dockerfiles.md)
4. [ROS Workspace](./design/ros_workspace.md)
5. [Launch System](./design/launch_system.md)
2. Usage
1. [Development Workflow](./usage/development_workflow.md)
2. [How to Run](./usage/how_to_run.md)
3. [Frequently Asked Questions](./misc/faq.md)

## Quick Start

This section provides the main commands necessary to launch various components. Please ensure you understand the previous topics in the [Table of Contents](#table-of-contents) before continuing.

### Install dependencies

Python dependencies are listed in the `requirements.txt` file and can be installed with the following command:

```bash
pip install -r requirements.txt
```

In addition, you will need to install Docker and Docker Compose. Please refer to the [official documentation](https://www.docker.com/get-started/) for installation details.
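
To confirm both are available, you can check the versions (generic commands, not specific to this repository):

```bash
# Print the installed Docker and Docker Compose versions
docker --version
docker compose version
```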

### Start up vnc

You'll probably want to visualize GUI windows, so start up the `vnc` service first. The first time around, the image will need to be built, so this may take a little while.

```bash
$ atk dev -u -s vnc
WARNING | logger.set_verbosity :: Verbosity has been set to WARNING
[+] Running 1/1
✔ Container art-vnc Started
```

If you see the following warning, **ignore it**. It simply means that substituting `$DISPLAY` in the `atk.yml` file failed because `$DISPLAY` is unset, which is expected. Passing `vnc` as an optional later overrides the variable.
```bash
WARN[0000] The "DISPLAY" variable is not set. Defaulting to a blank string.
```

> [!NOTE]
> You can also use x11 if you're _not_ ssh'd into the host. Replace `vnc` in the `--optionals` list with `x11` to do this; see the sketch below.
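
For example, the simulation launch command from the next section would become the following (a sketch, assuming the same services and optionals otherwise):

```bash
# Same command as below, with x11 substituted for vnc in the optionals
atk dev -ua -s chrono --optionals gpus x11
```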

### Launch the simulation

The first time you start up the chrono service, it will need to build the image. This may take a while.

```bash
$ atk dev -ua -s chrono --optionals gpus vnc
WARNING | logger.set_verbosity :: Verbosity has been set to WARNING
[+] Running 1/1
✔ Container art-chrono Started
art@art-chrono:~/art/sim$ cd python
art@art-chrono:~/art/sim/python$ python3 demo_ART_cone.py --track
Running demo_ART_cone.py...
Loaded JSON: /home/art/art/sim/data/art-1/sensors/camera.json
Loaded JSON: /home/art/art/sim/data/art-1/sensors/accelerometer.json
Loaded JSON: /home/art/art/sim/data/art-1/sensors/gyroscope.json
Loaded JSON: /home/art/art/sim/data/art-1/sensors/magnetometer.json
Loaded JSON: /home/art/art/sim/data/art-1/sensors/gps.json
Shader compile time: 5.04626
Initializing rclcpp.
Initialized ChROSInterface: chrono_ros_node.
```

### Run the autonomy stack

The first time you start up the dev service, it will need to build the image. This may take a while.

```bash
$ atk dev -ua -s dev --optionals gpus vnc
WARNING | logger.set_verbosity :: Verbosity has been set to WARNING
[+] Running 1/1
✔ Container art-dev Started
art@art-dev:~/art/workspace$ ros2 launch art_launch art.launch.py use_sim:=True
```
`docs/design/atk.md` (+49 lines)
# `atk.yml`

This file describes the `atk.yml` configuration file specific to this repository. For
a more general overview of `autonomy-toolkit` and its configuration parameters, please
refer to the [official documentation](https://projects.sbel.org/autonomy-toolkit).

`autonomy-toolkit` is simply a wrapper around `docker compose`. As such, the `atk.yml`
is fully compatible with `docker compose`. The main feature of `autonomy-toolkit`
is [Optionals](#optionals).
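
As a rough sketch of that relationship (the exact flags and files `atk` passes through are an assumption here, not taken from this repo), a command like the first one below is conceptually similar to the second:

```bash
# atk assembles the compose configuration (including any requested optionals)...
atk dev -u -s vnc
# ...and, in spirit, hands it off to docker compose, similar to:
docker compose up -d vnc
```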

> [!NOTE]
> For information on how to actually run `atk` for this repo, refer to the
> [How to Run](../usage/how_to_run.md) page.

## Services

For the `autonomy-research-testbed` repo specifically, there are three main service
types: `dev`/`<vehicle>`, `chrono` and `vnc`.

### `dev`/`<vehicle>`

The `dev` and `<vehicle>` services help spin up images/containers that correspond with
development of the autonomy stack. `dev` should be used on non-vehicle platforms (i.e. lab workstations) for common development work. The `<vehicle>` service (where `<vehicle>` corresponds to an actual vehicle, such as `art-1`) is nearly identical to `dev` with vehicle-specific config (such as device exposure, etc.).

### `chrono`

The `chrono` service spins up a container that contains Chrono and is used to run the
simulation. The `chrono` service should only ever be run on a powerful workstation, not on the vehicle computer. The autonomy stack can then communicate with the simulator using [Networks](#networks) (if on the same host) or over WiFi/Cellular/LAN.

### `vnc`

The `vnc` service spins up a container that allows visualizing GUI windows in a browser
while running commands in a container. It builds on top of NoVNC. Please see
[How to Run](../usage/how_to_run.md#vnc) for a detailed usage explanation.

## Optionals

In addition to services, the `atk.yml` file defines a few optionals. Optionals are configuration fragments that can be selectively merged into the `docker compose` configuration at runtime.

An example use case is the following: if someone is developing on a Mac (which doesn't have an NVIDIA GPU), attaching a GPU to the container will throw an error because one doesn't exist. Optionals provide a mechanism to apply certain configurations only when they are desired and supported.

See [How to Run](../usage/how_to_run.md#optionals) for a detailed usage explanation.
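
As a quick illustration (using the same commands shown in the Quick Start; omitting `gpus` on a machine without an NVIDIA GPU is an assumption of this sketch):

```bash
# On a workstation with an NVIDIA GPU, include the gpus optional
atk dev -ua -s dev --optionals gpus vnc

# On a machine without one (e.g. a Mac), simply leave it out
atk dev -ua -s dev --optionals vnc
```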

## Networks

Another useful feature of `docker compose` is networks. Networks allow containers running on the same host to communicate with one another in a virtualized way (i.e. without communicating explicitly with the host). This means that if two containers are running on the same host (e.g. `dev` and `chrono`), they can communicate with each other without any special networking. By default, all containers spawned in this repository are put on the same network.

> [!NOTE]
> The `vnc` service requires all services to be on the same network to work. For instance, for `dev` to display a window in the `vnc` browser, the environment variable `DISPLAY` should be set to `vnc:0.0` and the `vnc` service should be spun up on the host. Using the default network, the windows will be displayed automatically.
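
As a minimal sketch (assuming the `vnc` and `dev` services are both up, and using `rviz2` purely as a hypothetical GUI application):

```bash
# Inside the dev container, point X applications at the vnc service's display
export DISPLAY=vnc:0.0
# Any GUI window launched now (e.g. rviz2) appears in the NoVNC browser page
rviz2
```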
`docs/design/dockerfiles.md` (+199 lines)
# Dockerfiles

The `docker/` folder holds the dockerfiles and data files associated with docker
image/container creation. Background regarding docker, images/containers, and
dockerfiles is outside the scope of this document. For more information, please
refer to the [official documentation](https://docs.docker.com).

This folder is structured as follows:

```
docker/
├── data/
├── common/
│   ├── base.dockerfile
│   ├── common.dockerfile
│   └── final.dockerfile
├── snippets/
│   ├── chrono.dockerfile
│   ├── ros.dockerfile
│   └── rosdep.dockerfile
├── chrono.dockerfile
├── dev.dockerfile
└── vnc.dockerfile
```

> [!NOTE]
> This repository was built to accommodate [autonomy-toolkit](https://projects.sbel.org/autonomy-toolkit). For more information regarding specific commands, please see the [Development Workflow](../usage/development_workflow.md) page.

## `docker/data/`

This folder holds data files that may be used by dockerfile snippets. For example,
the [`docker/snippets/chrono.dockerfile`](../../docker/snippets/chrono.dockerfile) requires the OptiX build script; this file should go here.

## `docker/common/`

This subfolder of `docker/` holds common dockerfile code that is shared across _most_
services. It currently contains three dockerfiles.

### `docker/common/base.dockerfile`

This dockerfile initializes the image build as a whole. It defines global `ARG`s,
such as `USERNAME`, `PROJECT`, etc. Furthermore, it creates a user with the
desired `uid` and `gid` (configurable through the `USER_UID` and `USER_GID`
`ARG`s), and assigns any user groups that the user should be a part of.

**IMAGE_BASE**: Used in conjunction with **IMAGE_TAG**; defines the base image which
the custom docker image will be constructed from. The image is constructed using the
following base image: `${IMAGE_BASE}:${IMAGE_TAG}`. An **IMAGE_BASE** of `ubuntu` and an
**IMAGE_TAG** of `22.04` would then build the image from `ubuntu:22.04`.

**IMAGE_TAG**: Used in conjunction with **IMAGE_BASE**; see above for details. An
**IMAGE_BASE** of `ubuntu` and an **IMAGE_TAG** of `22.04` would then build the image
from `ubuntu:22.04`.

**PROJECT**: The name of the project. Synonymous with `project` in docker.

**USERNAME** _(Default: `${PROJECT}`)_: The username to assign to the new user created
in the image.

**USERHOME** _(Default: `/home/${USERNAME}`)_: The home directory for the new user.

**USERSHELL** _(Default: `bash`)_: The shell to use in the container. Bash is
recommended.

**USERSHELLPATH** _(Default: `/bin/${USERSHELL}`)_: The path to the new user's shell.

**USERSHELLPROFILE** _(Default: `${USERHOME}/.${USERSHELL}rc`)_: The path to the new
user's shell profile.

**USER_UID** _(Default: 1000)_: The user ID (UID) assigned to the created user.
On Linux, this must match the UID of the host user you launch `atk` as.
If it isn't assigned correctly, you will have permission issues when trying to edit
files from the host and/or the container. See the [FAQs](../misc/faq.md#file-permissions)
for more information.

**USER_GID** _(Default: 1000)_: See **USER_UID** above.

**USER_GROUPS** _(Default: "")_: User groups to add to the new user.
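
As a quick sanity check (plain shell commands, not specific to this repository), you can find the values to pass as **USER_UID**/**USER_GID** by inspecting your host user:

```bash
# Print the current host user's UID and GID; these should match USER_UID / USER_GID
id -u
id -g
```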

### `docker/common/common.dockerfile`

This dockerfile runs commands that we can assume most services want, like package
installation.

**APT_DEPENDENCIES** _(Default: "")_: A space separated list of apt dependencies to
install in the image. Installed with `apt install`.

**PIP_REQUIREMENTS** _(Default: "")_: A space separated list of pip dependencies to
install in the image. Installed with `pip install`.

**USER_SHELL_ADD_ONS** _(Default: "")_: Profile shell addons that are directly echoed
into the user shell profile. For instance,
`USER_SHELL_ADD_ONS: "source /opt/ros/${ROS_DISTRO}/setup.bash"` will run
`echo "source /opt/ros/${ROS_DISTRO}/setup.bash" >> ${USERSHELLPROFILE}`.

### `docker/common/final.dockerfile`

This dockerfile runs commands that are expected to be run after all main installation
snippets are run. It will set the `USER` to our new user, set environment variables, and
set the `CMD` to be `${USERSHELLPATH}`.

## `docker/snippets`

This folder contains dockerfile "snippets", or small scripts that are included in
service dockerfiles to build specific packages, such as Chrono or ROS.

### `docker/snippets/chrono.dockerfile`

This file builds Chrono from source. It currently builds a fixed, non-configurable set
of Chrono modules, listed below:

- `PyChrono`
- `Chrono::VSG`
- `Chrono::Irrlicht`
- `Chrono::Vehicle`
- `Chrono::Sensor`
- `Chrono::Parsers`
- `Chrono::ROS`

Furthermore, it also builds [`chrono_ros_interfaces`](https://github.com/projectchrono/chrono_ros_interfaces). This is required to build `Chrono::ROS`.

**OPTIX_SCRIPT**: The location _on the host_ of the OptiX build script. This
script can be found on NVIDIA's OptiX downloads page. For more information, see the
[FAQs](../misc/faq.md#optix-install).

**ROS_DISTRO**: The ROS distro to use.

**ROS_WORKSPACE_DIR** _(Default: `${USERHOME}/ros_workspace`)_: The directory in which
`chrono_ros_interfaces` is built. Helpful so that you can add custom messages after building
the image. Ensure you copy the changes to the host before tearing down the container,
as this is _not_ a volume.

**CHRONO_ROS_INTERFACES_DIR** _(Default: `${ROS_WORKSPACE_DIR}/src/chrono_ros_interfaces`)_: The folder where the `chrono_ros_interfaces` package is actually cloned.

**CHRONO_BRANCH** _(Default: `main`)_: The Chrono branch to build from.

**CHRONO_REPO** _(Default: `https://github.com/projectchrono/chrono.git`)_: The url of
the Chrono repo to clone and build from.

**CHRONO_DIR** _(Default: `${USERHOME}/chrono`)_: The directory to clone chrono to. The
clone is _not_ deleted to allow people to make changes to the build from within the
container. Ensure you copy the changes to the host before tearing down the container
as this is _not_ a volume.

**CHRONO_INSTALL_DIR** _(Default: `/opt/chrono`)_: The path where Chrono is installed.
The user profile is updated to add the python binary directory to `PYTHONPATH` and
the lib directory is appended to `LD_LIBRARY_PATH`.
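
Since neither **ROS_WORKSPACE_DIR** nor **CHRONO_DIR** is a volume, one way to preserve in-container changes is `docker cp` (a generic sketch; the container name and path below assume the defaults used elsewhere in these docs):

```bash
# Copy the ROS workspace out of the running chrono container before tearing it down
docker cp art-chrono:/home/art/ros_workspace ./ros_workspace_backup
```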

### `docker/snippets/ros.dockerfile`

To decrease image size and allow easy customization, ROS is installed separately (as
opposed to the usual method of building _on top_ of an official ROS image). This
snippet installs ROS directly in the image.

**ROS_DISTRO**: The ROS distro to use.

### `docker/snippets/rosdep.dockerfile`

`rosdep` is a useful tool in ROS that parses nested packages, looks inside each
`package.xml` for build dependencies (through `<build_depend>`), and installs each
dependency through the best available means (e.g. `apt`, `pip`, etc.). This file runs `rosdep` on
the ROS workspace located within the `autonomy-research-testbed` repository.

**ROS_DISTRO**: The ROS distro to use.

**ROS_WORKSPACE** _(Default: `./workspace`)_: The directory location _on the host_ of
the ROS workspace to run `rosdep` on.
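
For reference, the kind of command this snippet runs is the standard `rosdep` workflow, shown here as a generic sketch (the exact flags in the dockerfile may differ):

```bash
# Resolve and install the build dependencies declared in each package.xml of the workspace
rosdep update
rosdep install --from-paths src --ignore-src -r -y
```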

## `docker/chrono.dockerfile`

The dockerfile for the `chrono` service. It will do the following:

1. Run `base.dockerfile`
2. Install ROS
3. Install Chrono
4. Run `common.dockerfile`
5. Run `final.dockerfile`

## `docker/dev.dockerfile`

The dockerfile for the `dev` service. It will do the following:

1. Run `base.dockerfile`
2. Install ROS
3. Run `rosdep`
4. Run `common.dockerfile`
5. Run `final.dockerfile`

## `docker/vnc.dockerfile`

The dockerfile for the `vnc` service.

## More Information

Below is some additional information for people interested in the underlying workings of the docker implementation.

### `dockerfile-x`

In order to be more extensible and general purpose, the dockerfiles mentioned above are built around `dockerfile-x`. [`dockerfile-x`](https://github.com/devthefuture/dockerfile-x.git) is a docker plugin that supports importing other dockerfiles through the `INCLUDE` instruction. Using `INCLUDE`, we can construct service dockerfiles that mix and match the different [snippets](#dockersnippets) we implement.
`docs/design/launch_system.md` (+45 lines)
# Launch System

The launch system is used to help spin up all the nodes associated with a given experiment (e.g. simulation, reality). This page describes the file structure and how the files are designed.

> [!NOTE]
> The term "orchestrator" is used to describe a launch file that includes other launch files or does housekeeping (defines `LaunchConfiguration`s, etc.).

## File Structure

The file structure is as follows:

```
autonomy-research-testbed/
├── art_launch/
├── launch_utils/
└── art_<module>_launch/
```

`<module>` here represents the general component that is being launched (e.g. `control`, `perception`, `simulation`, etc.).

Each folder contains a `launch/` folder where all the launch files should be placed.

### File Naming Convention

All launch files end in `.launch.py`. Furthermore, all launch files specific to a vehicle platform, as well as orchestrator launch files, are prefixed with `art_`.

## `art_launch/`

This is where the main launch file is held: [`art.launch.py`](../../workspace/src/common/launch/art_launch/launch/art.launch.py). This file will do a few things.

1. It will first define system-wide parameters (e.g. `LaunchConfigurations`, `LaunchDescriptions`, etc.).
2. It will create a [composable node container](https://docs.ros.org/en/galactic/How-To-Guides/Launching-composable-nodes.html).
3. It will include all other orchestration launch files.

## `launch_utils/`

The `launch_utils` folder contains helper functions for creating launch files. These helpers should be used throughout the launch system.

## `art_<module>_launch/`

These folders deal directly with the subfolders defined in the [ROS Workspace](./ros_workspace.md) page. For instance, the [`art_control_launch`](../../workspace/src/common/launch/art_control_launch/) folder contains launch files for the control nodes.

Each folder will have an orchestrator launch file: `art_<module>.launch.py`. This file is responsible for including the other launch files in this folder which are responsible for individual components or nodes.

For instance, the [`art_sensing_launch`](../../workspace/src/common/launch/art_sensing_launch/) folder contains the [`art_sensing.launch.py`](../../workspace/src/common/launch/art_sensing_launch/launch/art_sensing.launch.py) orchestrator launch file. This file includes the [`usb_cam.launch.py`](../../workspace/src/common/launch/art_sensing_launch/launch/usb_cam.launch.py) and [`xsens.launch.py`](../../workspace/src/common/launch/art_sensing_launch/launch/xsens.launch.py) launch files, which are responsible for launching the individual sensor nodes.
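
As an illustration (assuming the workspace has been built and sourced inside the container), a module orchestrator can also be launched on its own in the same way as the top-level launch file:

```bash
# Hypothetical standalone invocation of just the sensing module's orchestrator
ros2 launch art_sensing_launch art_sensing.launch.py
```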