Working through docs
AaronYoung5 committed Nov 4, 2023
1 parent 33f2c1b commit 29ebf44
Showing 5 changed files with 81 additions and 126 deletions.
1 change: 0 additions & 1 deletion docker/snippets/chrono.dockerfile
@@ -41,7 +41,6 @@ RUN wget -qO- https://packages.lunarg.com/lunarg-signing-key-pub.asc | tee /etc/
USER ${USERNAME}

# chrono_ros_interfaces
ARG ROS_WORKSPACE_DIR="${USERHOME}/ros_workspace"
ARG CHRONO_ROS_INTERFACES_DIR="${ROS_WORKSPACE_DIR}/src/chrono_ros_interfaces"
RUN mkdir -p ${CHRONO_ROS_INTERFACES_DIR} && \
45 changes: 14 additions & 31 deletions docs/design/atk.md
@@ -4,13 +4,13 @@ This file describes the `atk.yml` configuration file specific to this repository
a more general overview of `autonomy-toolkit` and its configuration parameters, please
refer to the [official documentation](https://projects.sbel.org/autonomy-toolkit).

> [!NOTE]
> `autonomy-toolkit` is a simple wrapper around `docker compose`. As such, the `atk.yml`
> is fully compatible with `docker compose`. The main advantage of `autonomy-toolkit`
> is [Optionals](#optionals).
> For information on how to actually run `atk` for this repo, refer to the
> [How to Run](./how-to-run.md) page.
## Services

@@ -20,47 +20,30 @@ types: `dev`/`<vehicle>`, `chrono` and `vnc`.
### `dev`/`<vehicle>`

The `dev` and `<vehicle>` services help spin up images/containers that correspond with development of the autonomy stack. `dev` should be used on non-vehicle platforms (e.g. lab workstations) for common development work. The `<vehicle>` service (where `<vehicle>` corresponds to an actual vehicle, such as `art-1`) is nearly identical to `dev`, but with vehicle-specific configuration (such as device exposure).

### `chrono`

The `chrono` service spins up a container that contains Chrono and is used to run the simulation. The `chrono` service should really only ever be run on a powerful workstation, not on the vehicle computer. The autonomy stack can then communicate with the simulator using [Networks](#networks) (if on the same host) or over WiFi/Cellular/LAN.

### `vnc`

The `vnc` service spins up a container that allows visualizing GUI windows in a browser
while running commands in a container. It builds on top of NoVNC. Please see
[How to Run](./how-to-run.md#vnc) for a detailed usage explanation.

## Optionals

In addition to services, the `atk.yml` defines a few optionals. These are useful configurations that are only merged into the `docker compose` configuration when explicitly requested at runtime.

An example use case: if someone is developing on a Mac (which doesn't have an NVIDIA GPU), attaching a GPU to the container will throw an error because one doesn't exist. Optionals provide a helpful mechanism to apply certain configurations only when they are desired/supported.

See [How to Run](./how-to-run.md#optionals) for a detailed usage explanation.
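As a sketch of the idea (the file and service names here are illustrative, not necessarily the repo's actual ones), a GPU optional might be a small compose fragment that is merged in only when requested:

```yaml
# gpus.yml (hypothetical optional fragment)
# Merged into the effective compose config only when the "gpus" optional is
# requested, so hosts without an NVIDIA GPU (e.g. Macs) never see this reservation.
services:
  dev:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```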

## Networks

Another useful tool in `docker compose` is networks. Networks allow containers running on the same host to communicate with one another in a virtualized way (i.e. without communicating explicitly through the host). This means that if two containers are running on the same host (e.g. `dev` and `chrono`), they can communicate with each other without needing to do any special networking. By default, all containers spawned in this repository are put on the same network.
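In compose terms, this amounts to something like the following (the network name is illustrative):

```yaml
# Hypothetical sketch: both services join the same user-defined bridge network,
# so each container can reach the other by its service name (e.g. hostname "chrono").
services:
  dev:
    networks: [art]
  chrono:
    networks: [art]

networks:
  art:
    driver: bridge
```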

> [!NOTE]
> The `vnc` service requires all services to be on the same network to work. For instance, for `dev` to display a window in the `vnc` browser, the environment variable `DISPLAY` should be set to `vnc:0.0` and the `vnc` service should be spun up on the host. Using the default network, the windows will be displayed automatically.
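A minimal sketch of the wiring described in the note (the exact compose keys here are illustrative):

```yaml
# Hypothetical sketch: point X clients in `dev` at the X server exposed by `vnc`.
services:
  dev:
    environment:
      - DISPLAY=vnc:0.0   # "vnc" resolves via the shared default network
```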
71 changes: 17 additions & 54 deletions docs/design/dockerfiles.md
@@ -24,21 +24,12 @@ docker/
```

> [!NOTE]
> This repository was built to accommodate [autonomy-toolkit](https://projects.sbel.org/autonomy-toolkit). For more information regarding specific commands, please see [Workflow](./02_workflow.md)
## `docker/data/`

This folder holds data files that may be used by dockerfile snippets. For example,
the [`docker/snippets/chrono.dockerfile`](../../docker/snippets/chrono.dockerfile) requires the OptiX build script; this file should go here.

## `docker/common/`

Expand All @@ -52,21 +43,16 @@ such as `USERNAME`, `PROJECT`, etc. Furthermore, it will create a user that has
desired `uid` and `gid` (can be defined through the `USER_UID` and the `USER_GID`
`ARGS`), and will assign any user groups that the user should be a part of.

**IMAGE_BASE**: Used in conjunction with **IMAGE_TAG**; defines the base image which
the custom docker image will be constructed from. The image is constructed using the
following base image: `${IMAGE_BASE}:${IMAGE_TAG}`. An **IMAGE_BASE** of `ubuntu` and an
**IMAGE_TAG** of `22.04` would then build the image from `ubuntu:22.04`.

**IMAGE_TAG**: Used in conjunction with **IMAGE_BASE**. See above for details. An
**IMAGE_BASE** of `ubuntu` and an **IMAGE_TAG** of `22.04` would then build the image
from `ubuntu:22.04`.

**PROJECT**: The name of the project. Synonymous with `project` in docker.

**USERNAME** _(Default: `${PROJECT}`)_: The username to assign to the new user created
in the image.
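A condensed sketch of how these `ARGS` plausibly fit together in `base.dockerfile` (the contents below are assumptions for illustration, not the file's real text):

```dockerfile
# Hypothetical condensation of docker/common/base.dockerfile
ARG IMAGE_BASE=ubuntu
ARG IMAGE_TAG=22.04
FROM ${IMAGE_BASE}:${IMAGE_TAG}

ARG PROJECT
ARG USERNAME=${PROJECT}
ARG USER_UID=1000
ARG USER_GID=1000

# Create a user with the desired uid/gid so bind-mounted files keep host ownership
RUN groupadd -g ${USER_GID} ${USERNAME} && \
    useradd -m -u ${USER_UID} -g ${USER_GID} -s /bin/bash ${USERNAME}
```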
@@ -96,12 +82,6 @@ for more information.
This dockerfile runs commands that we can assume most services want, like package
installation.

**APT_DEPENDENCIES** _(Default: "")_: A space separated list of apt dependencies to
install in the image. Installed with `apt install`.
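Because the value is a space-separated string, it can be splatted straight into the install command. A sketch of how the snippet might consume it (assumed for illustration, not the actual file contents):

```dockerfile
# Hypothetical use of APT_DEPENDENCIES inside common.dockerfile
ARG APT_DEPENDENCIES=""
# An empty list makes the install a no-op; the lists cleanup keeps the image small.
RUN apt-get update && \
    apt-get install -y --no-install-recommends ${APT_DEPENDENCIES} && \
    rm -rf /var/lib/apt/lists/*
```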

@@ -119,14 +99,6 @@ This dockerfile runs commands that are expected to be run after all main install
snippets are run. It will set the `USER` to our new user, set environment variables, and
set the `CMD` to be `${USERSHELLPATH}`.

## `docker/snippets`

This folder contains dockerfile "snippets", or small scripts that are included in
@@ -145,27 +117,20 @@ chrono modules that are listed below:
- `Chrono::Parsers`
- `Chrono::ROS`

Furthermore, it also builds [`chrono_ros_interfaces`](https://github.com/projectchrono/chrono_ros_interfaces). This is required to build `Chrono::ROS`.

**OPTIX_SCRIPT**: The location _on the host_ of the OptiX build script. This script
can be found on NVIDIA's OptiX downloads page. For more information, see the
[FAQs](./faq.md#optix-install).

**ROS_DISTRO**: The ROS distro to use.

**ROS_WORKSPACE_DIR** _(Default: `${USERHOME}/ros_workspace`)_: The directory in which
`chrono_ros_interfaces` is built. Helpful so that you can add custom messages after
building the image. Ensure you copy any changes to the host before tearing down the
container, as this is _not_ a volume.

**CHRONO_ROS_INTERFACES_DIR** _(Default: `${ROS_WORKSPACE_DIR}/src/chrono_ros_interfaces`)_: The folder where the `chrono_ros_interfaces` package is actually cloned.

**CHRONO_BRANCH** _(Default: `main`)_: The Chrono branch to build from.

@@ -187,27 +152,17 @@ To decrease image size and allow easy customization, ROS is installed separately
opposed to the usual method of building _on top_ of an official ROS image). This
snippet will install ROS here.

**ROS_DISTRO**: The ROS distro to use.

### `docker/snippets/rosdep.dockerfile`

`rosdep` is a useful tool in ROS that parses nested packages, looks inside each
`package.xml` for dependencies (e.g. through `<build_depend>`), and installs each
dependency through the best available means (e.g. `apt`, `pip`, etc.). This file will run
`rosdep` on the ROS workspace located within the `autonomy-research-testbed` repository.

**ROS_DISTRO**: The ROS distro to use.

**ROS_WORKSPACE** _(Default: `./workspace`)_: The directory location _on the host_ of
the ROS workspace to run `rosdep` on.
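Putting the two `ARGS` together, the snippet presumably boils down to something like the following (a sketch, not the actual file; the `rosdep` flags shown are its standard ones):

```dockerfile
# Hypothetical sketch of docker/snippets/rosdep.dockerfile
ARG ROS_DISTRO
ARG ROS_WORKSPACE="./workspace"

# Copy the workspace in just to resolve dependencies against its package.xml files
COPY ${ROS_WORKSPACE} /tmp/workspace
RUN . /opt/ros/${ROS_DISTRO}/setup.sh && \
    rosdep update && \
    rosdep install --from-paths /tmp/workspace/src --ignore-src -r -y
```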

@@ -234,3 +189,11 @@ The dockerfile for the `dev` service. It will do the following:
## `docker/vnc.dockerfile`

The dockerfile for the `vnc` service.

## More Information

Below is some additional information for people interested in the underlying workings of the docker implementation.

### `dockerfile-x`

In order to be more extensible and general purpose, the dockerfiles mentioned above were built around `dockerfile-x`. [`dockerfile-x`](https://github.com/devthefuture/dockerfile-x.git) is a docker plugin that supports importing other dockerfiles through the `INCLUDE` docker build action. Using `INCLUDE`, we can construct service dockerfiles that mix and match different [snippets](#dockersnippets) that we implement.
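For example, a service dockerfile might be assembled like this (the file names are assumed from the layout described above):

```dockerfile
# Hypothetical dev.dockerfile assembled with dockerfile-x's INCLUDE
INCLUDE ./docker/common/base.dockerfile
INCLUDE ./docker/snippets/ros.dockerfile
INCLUDE ./docker/snippets/rosdep.dockerfile
INCLUDE ./docker/common/common.dockerfile
INCLUDE ./docker/common/final.dockerfile
```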
51 changes: 37 additions & 14 deletions docs/design/repository_structure.md
@@ -24,15 +24,17 @@ See [this page](./dockerfiles.md) for more information.

## `docs/`

This folder holds the documentation pages for the `autonomy-research-testbed` repo.

## `sim/`

This folder holds simulation files.

### `sim/cpp/`

C++ demos are contained here. To add a new demo, place the `.cpp` file in this directory
and add the demo to the `DEMOS` list in
[`CMakeLists.txt`](../../sim/cpp/CMakeLists.txt).
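Assuming a conventional `DEMOS` list (the variable name comes from the text above; the entries and loop body here are hypothetical), the registration might look like:

```cmake
# sim/cpp/CMakeLists.txt (hypothetical excerpt)
set(DEMOS
    demo_existing        # an existing demo
    demo_my_new_demo     # your new demo_my_new_demo.cpp
)

foreach(DEMO ${DEMOS})
    add_executable(${DEMO} ${DEMO}.cpp)
endforeach()
```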

### `sim/python/`

@@ -42,13 +44,30 @@ Python demos are contained here.

Data folders for the simulation are put here.

The [`chrono`](./dockerfiles.md#dockersnippetschronodockerfile) service contains the
Chrono data folder, so there is no need to include that folder again here. Instead,
include demo-specific data files.

When writing demos, ensure that you set the Chrono data directory correctly:
```python
# demo.py
chrono.SetChronoDataPath("/opt/chrono/share/chrono/data/")
```
```cpp
// demo.cpp
SetChronoDataPath("/opt/chrono/share/chrono/data/");
```
Then, to access data files in `sim/data/`, pass the path directly. Relative paths
resolve from the directory the demo is run in (typically `sim/python` or `sim/cpp`,
respectively).
```python
# demo.py
path_to_data_file = "../data/data_file.txt"
```
```cpp
// demo.cpp
std::string path_to_data_file = "../data/data_file.txt";
```

## `workspace/`

Expand All @@ -64,24 +83,28 @@ in the `.pre-commit-config.yaml` file are run.
In addition to on commits, `pre-commit` is required to be run in order for PRs to be
merged. This ensures all code in the main branch is formatted.

To automatically run `pre-commit` on _every_ commit, run the following:
```bash
pre-commit install
```

Please see [the official documentation](https://pre-commit.com) for more detailed
information.

## `atk.yml`

This is the `atk` configuration file. See [the ART/ATK documentation](./atk.md) for
detailed information about how the `atk` file is configured. Additionally, please see
[the official `autonomy-toolkit` documentation](https://projects.sbel.org/autonomy-toolkit) for more details regarding how `atk` works.

## `atk.env`

This file contains environment variables that are evaluated at runtime in the `atk.yml`.
You can think of these values as variables that are substituted into the `atk.yml`
placeholders (like `${VARIABLE_NAME}`). See
[the official docker documentation](https://docs.docker.com/compose/environment-variables/set-environment-variables) for a more detailed explanation.
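The substitution behaves much like Python's `string.Template` (an analogy only, not docker's actual implementation; the variable names and values below are hypothetical):

```python
from string import Template

# Values as they might appear in atk.env (hypothetical names/values)
env = {"PROJECT": "art", "ROS_DISTRO": "humble"}

# A compose-style line with ${VARIABLE_NAME} placeholders, as in atk.yml
line = Template("container_name: ${PROJECT}-dev-${ROS_DISTRO}")
print(line.substitute(env))  # prints: container_name: art-dev-humble
```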

## `requirements.txt`

This file defines required pip packages needed to interact with this repository.
Currently, `autonomy-toolkit` and `pre-commit` are the only requirements. Additional requirements should be put here.
