diff --git a/docker/snippets/chrono.dockerfile b/docker/snippets/chrono.dockerfile
index 05e9a7e3..7cb8df4f 100644
--- a/docker/snippets/chrono.dockerfile
+++ b/docker/snippets/chrono.dockerfile
@@ -41,7 +41,6 @@ RUN wget -qO- https://packages.lunarg.com/lunarg-signing-key-pub.asc | tee /etc/
 USER ${USERNAME}

 # chrono_ros_interfaces
-ARG ROS_DISTRO
 ARG ROS_WORKSPACE_DIR="${USERHOME}/ros_workspace"
 ARG CHRONO_ROS_INTERFACES_DIR="${ROS_WORKSPACE_DIR}/src/chrono_ros_interfaces"
 RUN mkdir -p ${CHRONO_ROS_INTERFACES_DIR} && \
diff --git a/docs/design/atk.md b/docs/design/atk.md
index 0b5b9079..b97e51b6 100644
--- a/docs/design/atk.md
+++ b/docs/design/atk.md
@@ -4,13 +4,13 @@ This file describes the `atk.yml` configuration file specific to this repository
 a more general overview of `autonomy-toolkit` and it's configuration parameters,
 please refer to the [official documentation](https://projects.sbel.org/autonomy-toolkit).

-For information on how to actually run `atk` for this repo, refer to the
-[How to Run](./how-to-run.md) page.
+`autonomy-toolkit` is simply a wrapper around `docker compose`. As such, the `atk.yml`
+is fully compatible with `docker compose`. The main feature of `autonomy-toolkit`
+is [Optionals](#optionals).

 > [!NOTE]
-> `autonomy-toolkit` is a simple wrapper around `docker compose`. As such, the `atk.yml`
-> is fully compatible with `docker compose`. The main advantage of `autonomy-toolkit`
-> is [Optionals](#optionals).
+> For information on how to actually run `atk` for this repo, refer to the
+> [How to Run](./how-to-run.md) page.

 ## Services

@@ -20,47 +20,30 @@ types: `dev`/`<vehicle>`, `chrono` and `vnc`.

 ### `dev`/`<vehicle>`

 The `dev` and `<vehicle>` services help spin up images/containers that correspond with
-development of the autonomy stack. These are the main development containers and most
-commonly used. `dev` should be used on non-vehicle platforms (i.e. lab workstations)
-for common development work. The `<vehicle>` service (where `<vehicle>` corresponds to
-an actual vehicle, such as `art-1`) is nearly identical to `dev` with vehicle-specific
-config (such as device exposure, etc.).
+development of the autonomy stack. `dev` should be used on non-vehicle platforms (i.e. lab workstations) for common development work. The `<vehicle>` service (where `<vehicle>` corresponds to an actual vehicle, such as `art-1`) is nearly identical to `dev` but with vehicle-specific config (such as device exposure, etc.).

 ### `chrono`

 The `chrono` service spins up a container that contains Chrono and is used to run the
-simulation. The `chrono` service should really only ever be run on a powerful
-workstation and not on the vehicle itself. The autonomy stack then can communicate with
-the simulator using [Networks](#networks) (if on the same host) or over WiFi/Cellular/LAN.
+simulation. The `chrono` service should really only ever be run on a powerful workstation and not on the vehicle computer. The autonomy stack can then communicate with the simulator using [Networks](#networks) (if on the same host) or over WiFi/Cellular/LAN.

 ### `vnc`

-The `vnc` service spins up a container that allows visualizing X windows in a browser
-while in containers. It builds on top of NoVNC. Please see
+The `vnc` service spins up a container that allows visualizing GUI windows in a browser
+while running commands in a container. It builds on top of NoVNC. Please see
 [How to Run](./how-to-run.md#vnc) for a detailed usage explanation.

 ## Optionals

-In addition to services, the `atk.yml` defines a few optionals. Optionals are useful
-configurations that are optionally included in the `docker compose` command at runtime.
-For instance, if someone is developing on a Mac (which doesn't have a NVIDIA gpu),
-attaching a gpu to the container will throw an error considering one doesn't exist.
-Optionals provide a helpful mechanism to only apply certain configurations when they
-are desired/supported.
+In addition to services, the `atk.yml` defines a few optionals. Optionals are configuration fragments that are optionally included in the `docker compose` configuration at runtime.
+
+An example use case is the following. If someone is developing on a Mac (which doesn't have an NVIDIA GPU), attaching a GPU to the container will throw an error since one doesn't exist. Optionals provide a helpful mechanism to only apply certain configurations when they are desired/supported.
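+
+As a rough sketch of what an optional amounts to under the hood (the file names here are hypothetical, not the actual optionals shipped in this repo), `docker compose` simply merges an extra configuration file into the base one at runtime:
+
+```bash
+# Base configuration only (e.g. on a Mac with no GPU attached):
+docker compose -f atk.yml up -d dev
+
+# Base configuration plus a GPU optional merged in at runtime:
+docker compose -f atk.yml -f optionals/gpus.yml up -d dev
+```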

 See [How to Run](./how-to-run.md#optionals) for a detailed usage explanation.

 ## Networks

-Another useful tool in `docker compose` are networks. Networks allow containers running
-on the same host to communicate with one another in a virtualized away from the host.
-This means, if there are two containers running on the same host (e.g. `dev` and
-`chrono`), they can communicate with each other without needing to expose any ports or
-do any special networking. By default, all containers spawned in this repository are put
-on the same network.
+Another useful `docker compose` feature is networks. Networks allow containers running on the same host to communicate with one another in a virtualized way (i.e. without communicating explicitly through the host). This means that if there are two containers running on the same host (e.g. `dev` and `chrono`), they can communicate with each other without needing to do any special networking. By default, all containers spawned in this repository are put on the same network.

 > [!NOTE]
-> The `vnc` service requires a like-network to work. For instance, for `dev` to display
-> a window in the `vnc` browser, the environment variable `DISPLAY` should be set to
-> `vnc:0.0` and the `vnc` service should be spun up on the host. Using the default
-> network, the windows will be displayed automatically.
+> The `vnc` service requires all services to be on the same network in order to work. For instance, for `dev` to display a window in the `vnc` browser, the environment variable `DISPLAY` should be set to `vnc:0.0` and the `vnc` service should be spun up on the host. Using the default network, the windows will be displayed automatically.
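+
+As a concrete sketch (assuming the default network and the service names used in this repo; `xeyes` is just a stand-in for any X application), displaying a GUI from inside the `dev` container might look like:
+
+```bash
+# Inside the dev container: point X applications at the vnc service, which is
+# reachable by its service name thanks to the shared docker network.
+export DISPLAY=vnc:0.0
+xeyes  # the window now shows up in the browser via NoVNC
+```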
diff --git a/docs/design/dockerfiles.md b/docs/design/dockerfiles.md
index 4b80bcf1..d2daed71 100644
--- a/docs/design/dockerfiles.md
+++ b/docs/design/dockerfiles.md
@@ -24,21 +24,12 @@ docker/
 ```

 > [!NOTE]
-> In order to be more extensible and general purpose, the dockerfiles mentioned below
-> were built around `dockerfile-x`.
-> [`dockerfile-x`](https://github.com/devthefuture/dockerfile-x.git) is a docker plugin
-> that supports importing of other dockerfiles through the `INCLUDE` docker build
-> action. Using `INCLUDE`, we can construct service dockerfiles that mix and match
-> different [snippets](#dockersnippets) that we implement.
-
-> [!INFO]
-> This repository was built to accommodate is
-[autonomy-toolkit](https://projects.sbel.org/autonomy-toolkit). For more information
-regarding specific commands, please see [Workflow](./02_workflow.md)
+> This repository was built to accommodate [autonomy-toolkit](https://projects.sbel.org/autonomy-toolkit). For more information regarding specific commands, please see [Workflow](./02_workflow.md).

 ## `docker/data/`

-This folder holds data files that may be used by dockerfile snippets.
+This folder holds data files that may be used by dockerfile snippets. For example,
+the [`docker/snippets/chrono.dockerfile`](../../docker/snippets/chrono.dockerfile) requires the OptiX build script; this file should go here.

 ## `docker/common/`
@@ -52,21 +43,16 @@ such as `USERNAME`, `PROJECT`, etc. Furthermore, it will create a user that has
 desired `uid` and `gid` (can be defined through the `USER_UID` and the `USER_GID`
 `ARGS`), and will assign any user groups that the user should be apart of.

-#### Required `ARGS`
-
-**IMAGE_BASE**: Used in conjunction with **IMAGE_TAG**; defines that base image which
+**IMAGE_BASE**: Used in conjunction with **IMAGE_TAG**; defines the base image which
 the custom docker image will be constructed from. The image is constructed using the
 following base image: `${IMAGE_BASE}:${IMAGE_TAG}`. An **IMAGE_BASE** of `ubuntu` and an
-**IMAGE_TAG** of `22.04` would then build the image from `ubuntu::22.04`.
+**IMAGE_TAG** of `22.04` would then build the image from `ubuntu:22.04`.

 **IMAGE_TAG**: Used in conjunction with **IMAGE_TAG**. See above for details. An
 **IMAGE_BASE** of `ubuntu` and an **IMAGE_TAG** of `22.04` would then build the image
-from `ubuntu::22.04`.
-
-**PROJECT**: The name of the project. Synonymous with `project` in docker. The created
-user in the container is assigned to **PROJECT**, as well as the home directory.
+from `ubuntu:22.04`.

-#### Optional `ARGS`
+**PROJECT**: The name of the project. Synonymous with `project` in docker.

 **USERNAME** _(Default: `${PROJECT}`)_: The username to assign to the new user created
 in the image.
@@ -96,12 +82,6 @@ for more information.

 This dockerfile runs command that we can assume most services want, like package
 installation.

-#### Required `ARGS`
-
-There are no required args.
-
-#### Optional `ARGS`
-
 **APT_DEPENDENCIES** _(Default: "")_: A space separated list of apt dependencies to
 install in the image. Installed with `apt install`.
@@ -119,14 +99,6 @@ This dockerfile runs commands that are expected to be run after all main install
 snippets are run. It will set the `USER` to our new user, set environment variables,
 and set the `CMD` to be `${USERSHELLPATH}`.

-#### Required `ARGS`
-
-There are no required args.
-
-#### Optional `ARGS`
-
-There are no optional args.
-
 ## `docker/snippets`

 This folder contains dockerfile "snippets", or small scripts that are included in
@@ -145,11 +117,7 @@ chrono modules that is listed below:
 - `Chrono::Parsers`
 - `Chrono::ROS`

-Furthermore,
-[`chrono_ros_interfaces`](https://github.com/projectchrono/chrono_ros_interfaces) is
-built. This is required to build `Chrono::ROS`.
-
-#### Required `ARGS`
+It also builds [`chrono_ros_interfaces`](https://github.com/projectchrono/chrono_ros_interfaces). This is required to build `Chrono::ROS`.

 **OPTIX_SCRIPT**: The location _on the host_ that the optix script is located at. This
 script can be found on NVIDIA's OptiX downloads page. For more information, see the
@@ -157,15 +125,12 @@
 **ROS_DISTRO**: The ROS distro to use.

-#### Optional `ARGS`
-
 **ROS_WORKSPACE_DIR** _(Default: `${USERHOME}/ros_workspace`)_. The directory to build
 `chrono_ros_interfaces` at. Helpful so that you can add custom messages after building
 the image. Ensure you copy the changes to the host before tearing down the container as
 this is _not_ a volume.
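+
+A sketch of pulling those changes back out before tearing the container down (the container name and user here are hypothetical; substitute your own):
+
+```bash
+# Copy the modified workspace out of the running container onto the host.
+docker cp my-chrono-container:/home/art/ros_workspace/src/chrono_ros_interfaces ./chrono_ros_interfaces
+```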

-**CHRONO_ROS_INTERFACES_DIR** _(Default: `${ROS_WORKSPACE_DIR}/src/chrono_ros_interfaces`)_:
-The folder where the `chrono_ros_interfaces` package is actually cloned.
+**CHRONO_ROS_INTERFACES_DIR** _(Default: `${ROS_WORKSPACE_DIR}/src/chrono_ros_interfaces`)_: The folder where the `chrono_ros_interfaces` package is actually cloned.

 **CHRONO_BRANCH** _(Default: `main`)_: The Chrono branch to build from.
@@ -187,14 +152,8 @@ To decrease image size and allow easy customization, ROS is installed separately (as
 opposed to the usual method of building _on top_ of an official ROS image). This
 snippet will install ROS here.

-#### Required `ARGS`
-
 **ROS_DISTRO**: The ROS distro to use.

-#### Optional `ARGS`
-
-There are no optional args.
-
 ### `docker/snippets/rosdep.dockerfile`

 `rosdep` is a useful tool in ROS that parses nested packages, looks inside each
@@ -202,12 +161,8 @@ There are no optional args.
 package through the best means (e.g. `apt`, `pip`, etc.). This file will run `rosdep`
 on the ROS workspace located within the `autonomy-research-testbed` repository.

-#### Required `ARGS`
-
 **ROS_DISTRO**: The ROS distro to use.

-#### Optional `ARGS`
-
 **ROS_WORKSPACE** _(Default: `./workspace`)_: The directory location _on the host_ of
 the ROS workspace to run `rosdep` on.
@@ -234,3 +189,11 @@ The dockerfile for the `dev` service. It will do the following:
 ## `docker/vnc.dockerfile`

 The dockerfile for the `vnc` service.
+
+## More Information
+
+Below is some additional information for people interested in the underlying workings of the docker implementation.
+
+### `dockerfile-x`
+
+In order to be more extensible and general purpose, the dockerfiles mentioned above were built around `dockerfile-x`. [`dockerfile-x`](https://github.com/devthefuture/dockerfile-x.git) is a docker plugin that supports importing of other dockerfiles through the `INCLUDE` docker build action. Using `INCLUDE`, we can construct service dockerfiles that mix and match different [snippets](#dockersnippets) that we implement.
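+
+As a rough sketch of what such a service dockerfile looks like (the exact files in this repo may differ; the `# syntax` directive follows the `dockerfile-x` README):
+
+```dockerfile
+# syntax = devthefuture/dockerfile-x
+
+# Build up a service image by mixing and matching snippets.
+INCLUDE ./docker/common/base.dockerfile
+INCLUDE ./docker/snippets/ros.dockerfile
+INCLUDE ./docker/common/common.dockerfile
+INCLUDE ./docker/common/final.dockerfile
+```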
diff --git a/docs/design/repository_structure.md b/docs/design/repository_structure.md
index 7da97aff..1642fed7 100644
--- a/docs/design/repository_structure.md
+++ b/docs/design/repository_structure.md
@@ -24,7 +24,7 @@ See [this page](./dockerfiles.md) for more information.

 ## `docs/`

-This folder holds the documentation for the `autonomy-research-testbed` repo.
+This folder holds the documentation pages for the `autonomy-research-testbed` repo.

 ## `sim/`
@@ -32,7 +32,9 @@ This folder holds simulation files.

 ### `sim/cpp/`

-C++ demos are contained here.
+C++ demos are contained here. To add a new demo, place the `.cpp` file in this directory
+and add the demo to the `DEMOS` list in
+[`CMakeLists.txt`](../../sim/cpp/CMakeLists.txt).

 ### `sim/python/`
@@ -42,13 +44,30 @@ Python demos are contained here.

 Data folders for the simulation are put here.

-> [!NOTE]
-> When building the `chrono` service's image, the Chrono's data folder is both contained
-> in the Chrono clone directory and in the shared installed directory (see
-> [`dockerfiles`](./dockerfiles.md#dockersnippetschronodockerfile) for more
-> information). Therefore, the sim files should set the Chrono data directory to one of
-> these folders. Additional data files that should be loaded at runtime should be set
-> directly (i.e. don't use the Chrono path utilities).
+The [`chrono`](./dockerfiles.md#dockersnippetschronodockerfile) service contains the
+Chrono data folder, so there is no need to include that folder again here. Instead,
+include demo-specific data files.
+
+When writing demos, ensure that you set the Chrono data directory correctly.
+```python
+# demo.py
+import pychrono as chrono
+
+chrono.SetChronoDataPath("/opt/chrono/share/chrono/data/")
+```
+```cpp
+// demo.cpp
+// assumes `using namespace chrono;` (SetChronoDataPath is in the chrono namespace)
+SetChronoDataPath("/opt/chrono/share/chrono/data/");
+```
+
+To access data files in `sim/data/`, pass the path directly as a string; it is resolved
+relative to where the demo is run (typically the `sim/python` or `sim/cpp` folders,
+respectively).
+```python
+# demo.py
+path_to_data_file = "../data/data_file.txt"
+```
+```cpp
+// demo.cpp
+path_to_data_file = "../data/data_file.txt";
+```

 ## `workspace/`
@@ -64,6 +83,11 @@ in the `.pre-commit-config.yaml` file are run.

 In addition to on commits, `pre-commit` is required to be run in order for PRs to be
 merged. This ensures all code in the main branch is formatted.

+To automatically run `pre-commit` on _every_ commit, run the following:
+```bash
+pre-commit install
+```
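+
+To check every file manually (e.g. before opening a PR), you can also run:
+
+```bash
+pre-commit run --all-files
+```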
+
 Please see [the official documentation](https://pre-commit.com) for more detailed
 information.
@@ -71,17 +95,16 @@ information.

 This is the `atk` configuration file. See [the ART/ATK documentation](./atk.md) for
 detailed information about how the `atk` file is configured. Additionally, please see
-the official `autonomy-toolkit` documentation for more details regarding how `atk` works.
+[the official `autonomy-toolkit` documentation](https://projects.sbel.org/autonomy-toolkit) for more details regarding how `atk` works.

 ## `atk.env`

 This file contains environment variables that are evaluated at runtime in the `atk.yml`.
-The values defined here can be thought of as variables that are substituted into the
-defined locations in `atk.yml` (e.g. `${VARIABLE_NAME}`). See
+You can think of these values as variables that are substituted into the `atk.yml`
+placeholders (like `${VARIABLE_NAME}`). See
 [the official docker documentation](https://docs.docker.com/compose/environment-variables/set-environment-variables)
 for a more detailed explanation.

 ## `requirements.txt`

 This file defines required pip packages needed to interact with this repository.
-Currently, `autonomy-toolkit` is the only requirement. Additional requirements should be
-put here.
+Currently, `autonomy-toolkit` and `pre-commit` are the only requirements. Additional requirements should be put here.
diff --git a/docs/design/ros_workspace.md b/docs/design/ros_workspace.md
index b2af6a24..c203961d 100644
--- a/docs/design/ros_workspace.md
+++ b/docs/design/ros_workspace.md
@@ -13,17 +13,14 @@ This page describes the underlying philosophy of the ROS workspace for the

 The general philosophy of the ROS workspace structure is inspired from
 [Autoware Universe](https://github.com/autowarefoundation/autoware.universe.git).
-Basically, the mentality can be split into three main principles.
+Basically, the philosophy can be split into three main principles.

 ### Principle 1: ROS packages are separated by function

 This principle serves two purposes: it defines how the package folders are organized
 and what should be implemented in a package.

-ROS packages should be organized in a hierarchy that separates the node directories by
-their overarching purpose. For instance, perception nodes should be placed in the
-[`perception/`](../workspace/src/perception/) subfolder. See
-[Workspace Structure](#workspace-structure) for a more detailed explanation of all the
+The ROS packages should be organized in a hierarchy that separates the node directories by their overarching purpose. For instance, perception nodes should be placed in the [`perception/`](../workspace/src/perception/) subfolder. See [Workspace Structure](#workspace-structure) for a more detailed explanation of all the
 subfolders.

 Additionally, this principle is meant to describe what goes in a package. Generally
@@ -32,24 +29,18 @@ like-nodes, or define shared utilities/helpers that are used by other packages. For
 instance, the [`launch_utils`](../workspace/src/common/launch/launch_utils/) package
 does not have a node, but implements utilities used by other launch files.

-### Principle 2: Meta-packages and launch files organize vehicle spin up/tear down
+### Principle 2: Metapackages and launch files organize vehicle spin up/tear down

 It is certainly possible that there exists multiple ART vehicles each with a different
 setup (i.e. different sensors, computational hardware, etc.). Therefore, this principle
 helps to define which nodes are created and/or built as it depends on the specific
 vehicle platform in use.

-First, meta-packages are a new-ish ROS construct which helps define the build
-dependencies for a specific package. Essentially, a meta-package has no nodes or code.
-It is an empty package except for a `package.xml` and `CMakeLists.txt` file which define
-build dependencies. These build dependencies can then be used to directly build
-nodes/packages for a specific vehicle platform by only using `colcon build` to build
-that package. For instance, if a certain vehicle requires the made up packages called
-`camera_driver`, `lidar_driver`, `perception`, `control`, and `actuation`, you can
-specify all these packages as `<exec_depend>`s in the meta-package. When
-`colcon build --packages-select <metapackage>` is run, the `<exec_depend>` packages
-are automatically built. **In summary, each vehicle platform should have a meta-package
-that defines it's nodes that are required to be built for it to run successfully.**
+First, [metapackages](https://wiki.ros.org/Metapackages) are a new-ish ROS construct which helps define the build dependencies for a specific package. Essentially, a metapackage has no nodes or code. It is an empty package except for a `package.xml` and `CMakeLists.txt` file which define build dependencies. These build dependencies can then be used to directly build nodes/packages for a specific vehicle platform with a single `colcon build` invocation.
+
+For instance, if a certain vehicle requires packages named `camera_driver`, `lidar_driver`, `perception`, `control`, and `actuation`, you can specify all these packages as `<exec_depend>`s in the metapackage. When `colcon build --packages-up-to <metapackage>` is run, the `<exec_depend>` packages are automatically built.
+
+**TL;DR: Each vehicle platform should have a metapackage that defines the nodes that must be built for the vehicle to run successfully.**
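+
+As a sketch of the intended workflow (the package and launch file names here are hypothetical):
+
+```bash
+# Build a vehicle's metapackage plus everything it depends on.
+colcon build --packages-up-to art1_meta
+
+# Source the workspace and start the vehicle via its main launch file.
+source install/setup.bash
+ros2 launch art1_launch art1.launch.py
+```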

 In a similar vein, individual vehicle platforms should have a launch file which is the
 primary entrypoint for which the vehicle nodes can be launched. This main launch file
@@ -84,23 +75,19 @@ workspace/src/

 ### `workspace/src/common`

 Included in this subfolder is common utilities, interfaces, launch files, and
-meta-packages.
+metapackages.

 #### `workspace/src/common/interfaces`

-An interface in ROS is defined as a schema file that defines either a message (`.msg`),
-action (`.action`), or service (`.srv`). Custom internal messages should be defined
-here.
+An interface in ROS is a schema file that defines either a message (`.msg`), an action (`.action`), or a service (`.srv`). Custom internal messages should be defined here.

 #### `workspace/src/common/launch`

-Launch files for spinning up the vehicle platforms should be implemented here. For a
-more detailed explanation about the launch system, please refer to
-[this page](./launch_system.md).
+Launch files for spinning up the vehicle platforms should be implemented here. For a more detailed explanation of the launch system, please refer to [the Launch System page](./launch_system.md).

 #### `workspace/src/common/meta`

-Vehicle platform meta packages are placed here.
+Vehicle platform metapackages are placed here.

 ### `workspace/src/external`

 External packages that are used for debug should be placed here. For instance,
@@ -110,7 +97,7 @@

 ### `workspace/src/sensing`

 Packages placed here are responsible for interfacing with sensors (i.e. drivers).
-These are usually submodules.
+These are usually submodules and not written by us.

 ### `workspace/src/simulation`