diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 259bcbe2..8fe10e64 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -19,3 +19,9 @@ repos: rev: v17.0.2 hooks: - id: clang-format + - repo: https://github.com/tcort/markdown-link-check + rev: v3.11.2 + hooks: + - id: markdown-link-check + args: [-q, -a, "200,202"] + files: \.md$ diff --git a/atk.yml b/atk.yml index 58421408..d0c7ee9b 100644 --- a/atk.yml +++ b/atk.yml @@ -27,8 +27,6 @@ x-optionals: name: art services: common: - env_file: - - atk.env build: context: "./" network: "host" diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 00000000..70132142 --- /dev/null +++ b/docs/README.md @@ -0,0 +1,102 @@ +# Autonomy Research Testbed Documentation + +These docs are meant to be a _succinct_ reference to commands, packages, and any other +information that may be useful to document as it relates to the +`autonomy-research-testbed` platform. + +## Table of Contents + +1. Design + 1. [Repository Structure](./design/repository_structure.md) + 2. [`atk.yml`](./design/atk.md) + 3. [Dockerfiles](./design/dockerfiles.md) + 4. [ROS Workspace](./design/ros_workspace.md) + 5. [Launch System](./design/launch_system.md) +2. Usage + 1. [Development Workflow](./usage/development_workflow.md) + 2. [How to Run](./usage/how_to_run.md) +3. [Frequently Asked Questions](./misc/faq.md) + +## Quick Start + +This section provides the main commands necessary to launch various components. Please ensure you understand the previous topics in the [Table of Contents](#table-of-contents) before continuing. + +### Install dependencies + +The primary dependency for `autonomy-research-testbed` is `autonomy-toolkit`. See the [official documentation](https://projects.sbel.org/autonomy-toolkit/) for more details. It can be installed with `pip`; see below.
+ +Python dependencies are listed in the `requirements.txt` file and can be installed with the following command: + +```bash +pip install -r requirements.txt +``` + +In addition, you will need to install docker and docker compose. Please refer to the [official documentation](https://www.docker.com/get-started/) for installation details. + +#### Download Optix + +To build the chrono image, you'll need to download the OptiX 7.7 build script from NVIDIA's website and place it in [`docker/data`](./../docker/data). You can find the download link [here](https://developer.nvidia.com/designworks/optix/download). See the [FAQs](./misc/faq.md#optix-install) for more details. + +### Start up vnc + +You'll probably want to visualize GUI windows, so start up vnc first. The first time around, the image will need to be built, so this may take a little while. + +```bash +$ atk dev -u -s vnc +WARNING | logger.set_verbosity :: Verbosity has been set to WARNING +[+] Running 1/1 + ✔ Container art-vnc Started +``` + +If you see the following warning, **ignore it**. This simply says that substituting the `$DISPLAY` in the `atk.yml` file fails because `$DISPLAY` is unset, which is expected. By passing `vnc` as an optional later, this will override the variable. +```bash +WARN[0000] The "DISPLAY" variable is not set. Defaulting to a blank string. +``` + +> [!NOTE] +> You can also use x11 if you're _not_ ssh'd to the host. Replace all `vnc` flags in the `--optionals` with `x11` to do this. + +### Launch the simulation + +The first time you start up the chrono service, it will need to build the image. This may take a while. + +```bash +$ atk dev -ua -s chrono --optionals gpus vnc +WARNING | logger.set_verbosity :: Verbosity has been set to WARNING +[+] Running 1/1 + ✔ Container art-chrono Started +art@art-chrono:~/art/sim$ cd python +art@art-chrono:~/art/sim/python$ python3 demo_ART_cone.py --track +Running demo_ART_cone.py...
+Loaded JSON: /home/art/art/sim/data/art-1/sensors/camera.json +Loaded JSON: /home/art/art/sim/data/art-1/sensors/accelerometer.json +Loaded JSON: /home/art/art/sim/data/art-1/sensors/gyroscope.json +Loaded JSON: /home/art/art/sim/data/art-1/sensors/magnetometer.json +Loaded JSON: /home/art/art/sim/data/art-1/sensors/gps.json +Shader compile time: 5.04626 +Initializing rclcpp. +Initialized ChROSInterface: chrono_ros_node. +``` + +### Build and run the autonomy stack + +The first time you start up the dev service, it will need to build the image. This may take a while. + +> [!NOTE] +> The very first time you run `colcon build`, you may need to install the `bluespace_ai_xsens_ros_mti_driver` library. To do that, run the following: +> ```bash +> $ atk dev -ua -s dev --optionals gpus vnc +> WARNING | logger.set_verbosity :: Verbosity has been set to WARNING +> [+] Running 1/1 +> ✔ Container art-dev Started +> art@art-dev:~/art/workspace$ pushd src/sensing/bluespace_ai_xsens_ros_mti_driver/lib/xspublic && make && popd +> ``` + +```bash +$ atk dev -ua -s dev --optionals gpus vnc +WARNING | logger.set_verbosity :: Verbosity has been set to WARNING +[+] Running 1/1 + ✔ Container art-dev Started +art@art-dev:~/art/workspace$ colcon build --symlink-install +art@art-dev:~/art/workspace$ ros2 launch art_launch art.launch.py use_sim:=True +``` diff --git a/docs/design/atk.md b/docs/design/atk.md new file mode 100644 index 00000000..97f1a879 --- /dev/null +++ b/docs/design/atk.md @@ -0,0 +1,49 @@ +# `atk.yml` + +This file describes the `atk.yml` configuration file specific to this repository. For +a more general overview of `autonomy-toolkit` and its configuration parameters, please +refer to the [official documentation](https://projects.sbel.org/autonomy-toolkit). + +`autonomy-toolkit` is simply a wrapper of `docker compose`. As such, the `atk.yml` +is fully compatible with `docker compose`. The main feature of `autonomy-toolkit` +is [Optionals](#optionals).
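Because the file is compose-compatible, its service entries follow the standard compose schema. As an illustration, the `common` service shown in this change's `atk.yml` diff reduces to roughly the following (a sketch, not the complete file):

```yaml
services:
  common:
    build:
      context: "./"      # build images from the repository root
      network: "host"    # use the host network during image builds
```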
+ +> [!NOTE] +> For information on how to actually run `atk` for this repo, refer to the +> [How to Run](./../usage/how_to_run.md) page. + +## Services + +For the `autonomy-research-testbed` repo specifically, there are three main service +types: `dev`/`<vehicle>`, `chrono`, and `vnc`. + +### `dev`/`<vehicle>` + +The `dev` and `<vehicle>` services help spin up images/containers that correspond to +development of the autonomy stack. `dev` should be used on non-vehicle platforms (i.e. lab workstations) for common development work. The `<vehicle>` service (where `<vehicle>` corresponds to an actual vehicle, such as `art-1`) is nearly identical to `dev` with vehicle-specific config (such as device exposure, etc.). + +### `chrono` + +The `chrono` service spins up a container that contains Chrono and is used to run the +simulation. The `chrono` service should really only ever be run on a powerful workstation and not on the vehicle computer. The autonomy stack can then communicate with the simulator using [Networks](#networks) (if on the same host) or over WiFi/Cellular/LAN. + +### `vnc` + +The `vnc` service spins up a container that allows visualizing GUI windows in a browser +while running commands in a container. It builds on top of NoVNC. Please see +[How to Run](./../usage/how_to_run.md#vnc) for a detailed usage explanation. + +## Optionals + +In addition to services, the `atk.yml` defines a few optional configurations. Optionals are useful configurations that are optionally included in the `docker compose` configuration file at runtime. + +An example use case is the following. If someone is developing on a Mac (which doesn't have an NVIDIA gpu), attaching a gpu to the container will throw an error since one doesn't exist. Optionals provide a helpful mechanism to only apply certain configurations when they are desired/supported. + +See [How to Run](./../usage/how_to_run.md#optionals) for a detailed usage explanation. + +## Networks + +Another useful tool in `docker compose` is networks.
Networks allow containers running on the same host to communicate with one another in a virtualized way (i.e. without communicating explicitly with the host). This means that if there are two containers running on the same host (e.g. `dev` and `chrono`), they can communicate with each other without needing to do any special networking. By default, all containers spawned in this repository are put on the same network. + +> [!NOTE] +> The `vnc` service requires all services to be on the same network to work. For instance, for `dev` to display a window in the `vnc` browser, the environment variable `DISPLAY` should be set to `vnc:0.0` and the `vnc` service should be spun up on the host. Using the default network, the windows will be displayed automatically. diff --git a/docs/design/dockerfiles.md b/docs/design/dockerfiles.md new file mode 100644 index 00000000..0e61ebed --- /dev/null +++ b/docs/design/dockerfiles.md @@ -0,0 +1,199 @@ +# Dockerfiles + +The `docker/` folder holds the dockerfiles and data files associated with docker +image/container creation. Background regarding docker, images/containers, and +dockerfiles is outside the scope of this document. For more information, please +refer to the [official documentation](https://docs.docker.com). + +This folder is structured as follows: + +``` +docker/ +├── data/ +├── common/ +│ ├── base.dockerfile +│ ├── common.dockerfile +│ └── final.dockerfile +├── snippets/ +│ ├── chrono.dockerfile +│ ├── ros.dockerfile +│ └── rosdep.dockerfile +├── chrono.dockerfile +├── dev.dockerfile +└── vnc.dockerfile +``` + +> [!NOTE] +> This repository was built to accommodate [autonomy-toolkit](https://projects.sbel.org/autonomy-toolkit). For more information regarding specific commands, please see [Workflow](./../usage/development_workflow.md). + +## `docker/data/` + +This folder holds data files that may be used by dockerfile snippets.
For example, +the [`docker/snippets/chrono.dockerfile`](../../docker/snippets/chrono.dockerfile) requires the OptiX build script; this file should go here. + +## `docker/common/` + +This subfolder of `docker/` holds common dockerfile code that is shared across _most_ +services. It currently contains three dockerfiles. + +### `docker/common/base.dockerfile` + +This dockerfile helps initialize the docker system as a whole. It defines global `ARGS`, +such as `USERNAME`, `PROJECT`, etc. Furthermore, it will create a user that has the +desired `uid` and `gid` (can be defined through the `USER_UID` and the `USER_GID` +`ARGS`), and will assign any user groups that the user should be a part of. + +**IMAGE_BASE**: Used in conjunction with **IMAGE_TAG**; defines the base image which +the custom docker image will be constructed from. The image is constructed using the +following base image: `${IMAGE_BASE}:${IMAGE_TAG}`. An **IMAGE_BASE** of `ubuntu` and an +**IMAGE_TAG** of `22.04` would then build the image from `ubuntu:22.04`. + +**IMAGE_TAG**: Used in conjunction with **IMAGE_BASE**. See above for details. An +**IMAGE_BASE** of `ubuntu` and an **IMAGE_TAG** of `22.04` would then build the image +from `ubuntu:22.04`. + +**PROJECT**: The name of the project. Synonymous with `project` in docker. + +**USERNAME** _(Default: `${PROJECT}`)_: The username to assign to the new user created +in the image. + +**USERHOME** _(Default: `/home/${USERNAME}`)_: The home directory for the new user. + +**USERSHELL** _(Default: `bash`)_: The shell to use in the container. Bash is +recommended. + +**USERSHELLPATH** _(Default: `/bin/${USERSHELL}`)_: The path to the new user's shell. + +**USERSHELLPROFILE** _(Default: `${USERHOME}/.${USERSHELL}rc`)_: The path to the new +user's shell profile. + +**USER_UID** _(Default: 1000)_: The user id (UID) that the created user is +assigned. On Linux, this must match the host user that you launch `atk` as.
+If it's not assigned correctly, you will have permission issues when trying to edit +files from the host and/or the container. See the [FAQs](./../misc/faq.md#file-permissions) +for more information. + +**USER_GID** _(Default: 1000)_: See **USER_UID** above. + +**USER_GROUPS** _(Default: "")_: User groups to add to the new user. + +### `docker/common/common.dockerfile` + +This dockerfile runs commands that we can assume most services want, like package +installation. + +**APT_DEPENDENCIES** _(Default: "")_: A space separated list of apt dependencies to +install in the image. Installed with `apt install`. + +**PIP_REQUIREMENTS** _(Default: "")_: A space separated list of pip dependencies to +install in the image. Installed with `pip install`. + +**USER_SHELL_ADD_ONS** _(Default: "")_: Profile shell addons that are directly echoed +into the user shell profile. For instance, +`USER_SHELL_ADD_ONS: "source /opt/ros/${ROS_DISTRO}/setup.bash"` will run +`echo "source /opt/ros/${ROS_DISTRO}/setup.bash" >> ${USERSHELLPROFILE}`. + +### `docker/common/final.dockerfile` + +This dockerfile runs commands that are expected to be run after all main installation +snippets are run. It will set the `USER` to our new user, set environment variables, and +set the `CMD` to be `${USERSHELLPATH}`. + +## `docker/snippets` + +This folder contains dockerfile "snippets", or small scripts that are included in +service dockerfiles to build specific packages, such as Chrono or ROS. + +### `docker/snippets/chrono.dockerfile` + +This file builds Chrono from source. It currently builds a non-configurable list of +Chrono modules, listed below: + +- `PyChrono` +- `Chrono::VSG` +- `Chrono::Irrlicht` +- `Chrono::Vehicle` +- `Chrono::Sensor` +- `Chrono::Parsers` +- `Chrono::ROS` + +Furthermore, it also builds [`chrono_ros_interfaces`](https://github.com/projectchrono/chrono_ros_interfaces). This is required to build `Chrono::ROS`.
+ +**OPTIX_SCRIPT**: The location _on the host_ of the OptiX build script. This +script can be found on NVIDIA's OptiX downloads page. For more information, see the +[FAQs](./../misc/faq.md#optix-install). + +**ROS_DISTRO**: The ROS distro to use. + +**ROS_WORKSPACE_DIR** _(Default: `${USERHOME}/ros_workspace`)_: The directory in which to build +`chrono_ros_interfaces`. Helpful so that you can add custom messages after building +the image. Ensure you copy the changes to the host before tearing down the container +as this is _not_ a volume. + +**CHRONO_ROS_INTERFACES_DIR** _(Default: `${ROS_WORKSPACE_DIR}/src/chrono_ros_interfaces`)_: The folder where the `chrono_ros_interfaces` package is actually cloned. + +**CHRONO_BRANCH** _(Default: `main`)_: The Chrono branch to build from. + +**CHRONO_REPO** _(Default: `https://github.com/projectchrono/chrono.git`)_: The URL of +the Chrono repo to clone and build from. + +**CHRONO_DIR** _(Default: `${USERHOME}/chrono`)_: The directory to clone Chrono to. The +clone is _not_ deleted to allow people to make changes to the build from within the +container. Ensure you copy the changes to the host before tearing down the container +as this is _not_ a volume. + +**CHRONO_INSTALL_DIR** _(Default: `/opt/chrono`)_: The path where Chrono is installed. +The user profile is updated to add the python binary directory to `PYTHONPATH` and +the lib directory is appended to `LD_LIBRARY_PATH`. + +### `docker/snippets/ros.dockerfile` + +To decrease image size and allow easy customization, ROS is installed separately (as +opposed to the usual method of building _on top_ of an official ROS image). This +snippet will install ROS here. + +**ROS_DISTRO**: The ROS distro to use. + +### `docker/snippets/rosdep.dockerfile` + +`rosdep` is a useful tool in ROS that parses nested packages, looks inside each +`package.xml` for build dependencies (through `<depend>` tags), and installs the +packages through the best means (e.g. `apt`, `pip`, etc.).
This file will run `rosdep` on +the ROS workspace located within the `autonomy-research-testbed` repository. + +**ROS_DISTRO**: The ROS distro to use. + +**ROS_WORKSPACE** _(Default: `./workspace`)_: The directory location _on the host_ of +the ROS workspace to run `rosdep` on. + +## `docker/chrono.dockerfile` + +The dockerfile for the `chrono` service. It will do the following: + +1. Run `base.dockerfile` +2. Install ROS +3. Install Chrono +4. Run `common.dockerfile` +5. Run `final.dockerfile` + +## `docker/dev.dockerfile` + +The dockerfile for the `dev` service. It will do the following: + +1. Run `base.dockerfile` +2. Install ROS +3. Run `rosdep` +4. Run `common.dockerfile` +5. Run `final.dockerfile` + +## `docker/vnc.dockerfile` + +The dockerfile for the `vnc` service. + +## More Information + +Below is some additional information for people interested in the underlying workings of the docker implementation. + +### `dockerfile-x` + +In order to be more extensible and general purpose, the dockerfiles mentioned above were built around `dockerfile-x`. [`dockerfile-x`](https://github.com/devthefuture-org/dockerfile-x) is a docker plugin that supports importing other dockerfiles through the `INCLUDE` docker build action. Using `INCLUDE`, we can construct service dockerfiles that mix and match different [snippets](#dockersnippets) that we implement. diff --git a/docs/design/launch_system.md b/docs/design/launch_system.md new file mode 100644 index 00000000..39a8106e --- /dev/null +++ b/docs/design/launch_system.md @@ -0,0 +1,45 @@ +# Launch System + +The launch system is used to help spin up all the nodes associated with a given experiment (e.g. simulation, reality). This page describes the file structure and how files are designed. + +> [!NOTE] +> The term "orchestrator" is going to be used to describe a launch file that includes other launch files or does housekeeping (defines `LaunchConfigurations`, etc.).
+ +## File Structure + +The file structure is as follows: + +``` +autonomy-research-testbed/ +├── art_launch/ +├── launch_utils/ +└── art_<component>_launch/ +``` + +`<component>` here represents the general component that is being launched (e.g. `control`, `perception`, `simulation`, etc.). + +Each folder contains a `launch/` folder where all the launch files should be placed. + +### File Naming Convention + +All launch files end in `.launch.py`. Furthermore, all launch files specific to a vehicle platform, as well as orchestrator launch files, are prefixed with `art_`. + +## `art_launch/` + +This is where the main launch file is held: [`art.launch.py`](../../workspace/src/common/launch/art_launch/launch/art.launch.py). This file will do a few things. + +1. It will first define system-wide parameters (e.g. `LaunchConfigurations`, `LaunchDescriptions`, etc.). +2. It will create a [composable node container](https://docs.ros.org/en/galactic/How-To-Guides/Launching-composable-nodes.html). +3. It will include all other orchestration launch files. + +## `launch_utils/` + +The `launch_utils` folder contains helper functions for creating launch files. These helpers should be used throughout the launch system. + +## `art_<component>_launch/` + +These folders deal directly with the subfolders defined in the [ROS Workspace](./ros_workspace.md) page. For instance, the [`art_control_launch`](../../workspace/src/common/launch/art_control_launch/) folder contains launch files for the control nodes. + +Each folder will have an orchestrator launch file: `art_<component>.launch.py`. This file is responsible for including the other launch files in the folder, which are responsible for individual components or nodes. + +For instance, the [`art_sensing_launch`](../../workspace/src/common/launch/art_sensing_launch/) folder contains the [`art_sensing.launch.py`](../../workspace/src/common/launch/art_sensing_launch/launch/art_sensing.launch.py) orchestrator launch file.
This file includes the [`usb_cam.launch.py`](../../workspace/src/common/launch/art_sensing_launch/launch/usb_cam.launch.py) and the [`xsens.launch.py`](../../workspace/src/common/launch/art_sensing_launch/launch/xsens.launch.py) launch files, which are responsible for launching the camera and IMU nodes, respectively. diff --git a/docs/design/repository_structure.md b/docs/design/repository_structure.md new file mode 100644 index 00000000..531b2b2a --- /dev/null +++ b/docs/design/repository_structure.md @@ -0,0 +1,110 @@ +# Repository Structure + +This page describes how this repository's directories are organized. + +This repo is structured as follows: +``` +autonomy-research-testbed/ +├── docker/ +├── docs/ +├── sim/ +├── workspace/ +├── .pre-commit-config.yaml +├── atk.yml +├── atk.env +└── requirements.txt +``` + +> [!NOTE] +> Some files are excluded from the list above for brevity. + +## `docker/` + +See [this page](./dockerfiles.md) for more information. + +## `docs/` + +This folder holds the documentation pages for the `autonomy-research-testbed` repo. + +## `sim/` + +This folder holds simulation files. + +### `sim/cpp/` + +C++ demos are contained here. To add a new demo, place the `.cpp` file in this directory +and add the demo to the `DEMOS` list in +[`CMakeLists.txt`](../../sim/cpp/CMakeLists.txt). + +### `sim/python/` + +Python demos are contained here. + +### `sim/data/` + +Data folders for the simulation are put here. + +The [`chrono`](./dockerfiles.md#dockersnippetschronodockerfile) service contains the +Chrono data folder, so there is no need to include that folder again here. Instead, +include demo-specific data files. + +Ensure, when writing demos, that you set the Chrono data directories correctly. +```python +# demo.py +chrono.SetChronoDataPath("/opt/chrono/share/chrono/data/") +``` +```cpp +// demo.cpp +SetChronoDataPath("/opt/chrono/share/chrono/data/"); +``` + +And then to access data files in `sim/data/`, you just pass the string directly.
It will +probably be relative to the `sim/python` or `sim/cpp` folders, respectively. +```python +# demo.py +path_to_data_file = "../data/data_file.txt" +``` +```cpp +// demo.cpp +std::string path_to_data_file = "../data/data_file.txt"; +``` + +## `workspace/` + +See [this page](./ros_workspace.md) for more information. + +## `.pre-commit-config.yaml` + +[pre-commit](https://pre-commit.com) is a tool that works along with git to run +specific commands just prior to committing. We basically use it as a glorified code +formatter. On each commit, `pre-commit` should be run such that the commands defined +in the `.pre-commit-config.yaml` file are run. + +In addition to running on commits, `pre-commit` is required to be run in order for PRs to be +merged. This ensures all code in the main branch is formatted. + +To automatically run `pre-commit` on _every_ commit, run the following: +```bash +pre-commit install +``` + +Please see [the official documentation](https://pre-commit.com) for more detailed +information. + +## `atk.yml` + +This is the `atk` configuration file. See [the ART/ATK documentation](./atk.md) for +detailed information about how the `atk` file is configured. Additionally, please see +[the official `autonomy-toolkit` documentation](https://projects.sbel.org/autonomy-toolkit) for more details regarding how `atk` works. + +## `atk.env` + +This file contains environment variables that are evaluated at runtime in the `atk.yml`. +You can think of these values as variables that are substituted into the `atk.yml` +placeholders (like `${VARIABLE_NAME}`). See +[the official docker documentation](https://docs.docker.com/compose/environment-variables/set-environment-variables) for a more detailed explanation. + +## `requirements.txt` + +This file defines required pip packages needed to interact with this repository. +Currently, `autonomy-toolkit` and `pre-commit` are the only requirements. Additional requirements should be put here.
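The `${VARIABLE_NAME}` substitution described in the `atk.env` section above can be sketched with Python's `string.Template`, which happens to use the same `${VAR}` syntax. This is a simplified illustration of what docker compose does at runtime, not `atk`'s actual implementation:

```python
from string import Template

def substitute(text: str, env: dict) -> str:
    # Replace ${VAR} placeholders with values from the env file.
    # safe_substitute leaves unknown placeholders untouched (docker compose
    # instead warns and substitutes an empty string).
    return Template(text).safe_substitute(env)

env = {"USER_UID": "1001", "USER_GID": "1001"}
print(substitute('user: "${USER_UID}:${USER_GID}"', env))  # → user: "1001:1001"
```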
diff --git a/docs/design/ros_workspace.md b/docs/design/ros_workspace.md new file mode 100644 index 00000000..1fb99f30 --- /dev/null +++ b/docs/design/ros_workspace.md @@ -0,0 +1,110 @@ +# ROS Workspace + +This page describes the underlying philosophy of the ROS workspace for the +`autonomy-research-testbed`. For details on how to spin up the ROS nodes, see the +[How to Run](./../usage/how_to_run.md) page. + +> [!NOTE] +> This page assumes some underlying experience with ROS. Please ask a more experienced +> lab member or refer to the [official documentation](https://docs.ros.org) for more +> information. + +## Philosophy + +The general philosophy of the ROS workspace structure is inspired by +[Autoware Universe](https://github.com/autowarefoundation/autoware.universe.git). +Basically, the philosophy can be split into three main principles. + +### Principle 1: ROS packages are separated by function + +This principle serves two purposes: it defines how the package folders are organized and +what should be implemented in a package. + +The ROS packages should be organized in a hierarchy that separates the node directories by their overarching purpose. For instance, perception nodes should be placed in the [`perception/`](./../../workspace/src/perception/) subfolder. See [Workspace Structure](#workspace-structure) for a more detailed explanation of all the +subfolders. + +Additionally, this principle is meant to describe what goes in a package. Generally +speaking, a package should implement either a single ROS node, a collection of +like-nodes, or define shared utilities/helpers that are used by other packages. For +instance, the [`launch_utils`](./../../workspace/src/common/launch/launch_utils/) package +does not have a node, but implements utilities used by other launch files. + +### Principle 2: Metapackages and launch files organize vehicle spin up/tear down + +It is certainly possible that there exist multiple ART vehicles, each with a different +setup (i.e.
different sensors, computational hardware, etc.). Therefore, this principle +helps define which nodes are built and launched, since that depends on the specific +vehicle platform in use. + +First, [metapackages](https://wiki.ros.org/Metapackages) are a new-ish ROS construct that helps define the build dependencies for a specific package. Essentially, a metapackage has no nodes or code. It is an empty package except for `package.xml` and `CMakeLists.txt` files, which define build dependencies. These build dependencies can then be used to directly build nodes/packages for a specific vehicle platform by only using `colcon build` to build that package. + +For instance, if a certain vehicle requires packages named `camera_driver`, `lidar_driver`, `perception`, `control`, and `actuation`, you can specify all these packages as `<depend>` entries in the metapackage's `package.xml`. When `colcon build --packages-up-to <metapackage>` is run, the `<depend>` packages are automatically built as well. + +**TL;DR: Each vehicle platform should have a metapackage that defines the packages that must be built for it to run successfully.** + +In a similar vein, individual vehicle platforms should have a launch file that is the +primary entrypoint from which the vehicle nodes can be launched. This main launch file +should include other launch files which are shared between vehicle platforms, as well +as launch files specific to this platform. + +### Principle 3: + +## Workspace Structure + +This subsection describes how the workspace is structured and what should be placed +in each subfolder. + +``` +workspace/src/ +├── common/ +├── control/ +├── external/ +├── localization/ +├── path_planning/ +├── perception/ +├── sensing/ +├── simulation/ +└── vehicle/ +``` + +> [!NOTE] +> Only non-obvious folders are described below. For instance, it's fairly clear what +> type of package should be placed in `perception/`. For node-specific documentation, +> please refer to the package folder readme.
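The metapackage idea from Principle 2 above can be sketched with a `package.xml` that is little more than a dependency list. The package and vehicle names below are the hypothetical ones from the example above, not this repo's actual metapackages:

```xml
<?xml version="1.0"?>
<package format="3">
  <name>example_vehicle_meta</name>
  <version>0.0.1</version>
  <description>Metapackage listing everything the example vehicle needs built.</description>
  <maintainer email="maintainer@example.com">SBEL</maintainer>
  <license>BSD</license>

  <buildtool_depend>ament_cmake</buildtool_depend>

  <!-- Building up to this package pulls in each dependency below -->
  <depend>camera_driver</depend>
  <depend>lidar_driver</depend>
  <depend>perception</depend>
  <depend>control</depend>
  <depend>actuation</depend>
</package>
```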
+ +### `workspace/src/common` + +Included in this subfolder are common utilities, interfaces, launch files, and +metapackages. + +#### `workspace/src/common/interfaces` + +An interface in ROS is a schema file that defines either a message (`.msg`), action (`.action`), or service (`.srv`). Custom internal messages should be defined here. + +#### `workspace/src/common/launch` + +Launch files for spinning up the vehicle platforms should be implemented here. For a more detailed explanation about the launch system, please refer to [the Launch System page](./launch_system.md). + +#### `workspace/src/common/meta` + +Vehicle platform metapackages are placed here. + +### `workspace/src/external` + +External packages that are used for debugging should be placed here. For instance, +`foxglove` or `rosboard` packages should be placed here. Usually, these are submodules. + +### `workspace/src/sensing` + +Packages placed here are responsible for interfacing with sensors (i.e. drivers). +These are usually submodules and not written by us. + +### `workspace/src/simulation` + +These packages are used to interface with a simulation platform. + +### `workspace/src/vehicle` + +This subfolder is similar to `sensing/`, but the packages here interface with the vehicle itself +rather than its sensors. For instance, actuation drivers should be +defined here. diff --git a/docs/misc/faq.md b/docs/misc/faq.md new file mode 100644 index 00000000..b0e410f2 --- /dev/null +++ b/docs/misc/faq.md @@ -0,0 +1,43 @@ +# Frequently Asked Questions + +## File Permissions + +For Linux users, you may run into issues regarding file permissions when using docker. +By properly setting the user id (UID) and group id (GID) of the created user in the +image, these issues can usually be avoided. By default, in the `base.dockerfile` file, +the UID and GID are both set to 1000 (the default UID/GID for new users in most Linux +distributions).
+ +If you are running into file permission issues, you may want to try the following. + +First, check the user id and group id with the following commands: + +```bash +$ id -u +1001 + +$ id -g +1001 +``` + +If you see something similar to the above, where the output ids are _not_ 1000, you +will need to update the `USER_UID` and `USER_GID` environment variables. It is +recommended this is done either through your host's profile file (e.g. `~/.bashrc` or +`~/.zshrc`) or by assigning the variables in the `.env` file in the root of the +`autonomy-research-testbed` repo. + +```bash +# In your ~/.bashrc or ~/.zshrc file +export USER_UID=1001 +export USER_GID=1001 +``` + +```bash +# In /autonomy-research-testbed/.env +USER_UID=1001 +USER_GID=1001 +``` + +## OptiX Install + +Chrono currently builds against OptiX 7.7. In order to install OptiX in the container, you need to download the OptiX build script from [their website](https://developer.nvidia.com/designworks/optix/downloads/legacy). Then place the script in the `docker/data/` directory. Files in this folder are ignored by git, so no worries there. diff --git a/docs/misc/vehicle_computers.md b/docs/misc/vehicle_computers.md new file mode 100644 index 00000000..28ea7cc0 --- /dev/null +++ b/docs/misc/vehicle_computers.md @@ -0,0 +1,84 @@ +# Vehicle Computers + +This page describes how we organize the vehicle computers. **Please** review this document fully before making any changes to the vehicle computers. + +## Overview + +Each vehicle has one computer, which runs the vehicle service responsible for all vehicle-related tasks. This includes: +- Reading sensor data +- Controlling actuators +- Running autonomy algorithms + +### Folder Structure + +``` +~/ +└── sbel/ + └── autonomy-research-testbed/ +``` + +### Branches + +Each vehicle should have its own branch. The branch name should be the name of the vehicle. For example, the branch for the `art-5` vehicle is `art-5`.
+ +## Setup + +This section outlines the general setup for each vehicle computer. Each vehicle computer should follow this setup so new users can have a consistent experience across vehicles. + +### Run the setup script + +A setup script was written to automate the setup process. To run the script, run the following command: + +```bash +wget -O - https://raw.githubusercontent.com/uwsbel/autonomy-research-testbed/master/vehicles/setup.sh | bash +``` + +> [!NOTE] +> The script will prompt you for your password. This is required to install the necessary packages. You should review the script to ensure it is safe to run (it obviously should be, but it's good practice). + +This script does the following: +- Installs the necessary packages +- Sets the jetson power and fan modes (if it's a jetson) +- Sets up docker permissions +- Installs miniconda +- Adds some configurations to the `.bashrc` file + +### Checkout the vehicle branch + +Next, we need to create the branch for the vehicle we are working on. For example, if we are working on the `art-5` vehicle, we would run: + +```bash +$ git checkout -b art-5 +``` + +### Pushing to the remote repository + +Pushing from one of these computers is somewhat complicated. The computer is shared between people, but there is a single user. Therefore, when we push to the remote repository, we need to make sure we attach the correct github user to the commit. Fortunately, in the [`bashrc`](../../vehicles/bashrc) script, there are some helper functions to make this easier. + +First, you need to create an ssh key. To do this, run the following: + +```bash +$ sbel-ssh-keygen +``` + +This will prompt you for your NetID (to save the ssh key to), your name and email (for github), and a passphrase (so others can't push as you). All arguments are required. + +Next, you need to add the ssh key to your github account.
To do this, run the following (the key filename includes the NetID you entered above, shown here as `<netid>`):

```bash
$ cat ~/.ssh/id_rsa_<netid>.pub
```

That output should be copied into your github ssh keys page.

> [!NOTE]
> The above steps (`sbel-ssh-keygen` and adding the key to github) only need to be run once. The following command must be run every time before you commit.

Finally, you need to set the git user. To do this, run the following:

```bash
$ sbel-ssh-add
```

You will be prompted for your NetID. This command adds your ssh key to the ssh agent and sets your name and email in git. Both are temporary and will be reset when you exit the shell. diff --git a/docs/usage/development_workflow.md new file mode 100644 index 00000000..8d0e11e9 --- /dev/null +++ b/docs/usage/development_workflow.md @@ -0,0 +1,3 @@

# Development Workflow

This page describes the development workflow. diff --git a/docs/usage/how_to_run.md new file mode 100644 index 00000000..9a6f51b5 --- /dev/null +++ b/docs/usage/how_to_run.md @@ -0,0 +1,136 @@

# How to Run

This page describes how to run the system. Please review all pages in the [design folder](../design/) before continuing; it is assumed you have read those pages and understand the system design. It is also assumed you have reviewed the [`atk` documentation](https://projects.sbel.org/autonomy-toolkit) and are familiar with the `atk` commands.

## Installing the dependencies

A few dependencies need to be installed before running the system. These are listed in the `requirements.txt` file and can be installed with the following command:

```bash
$ pip install -r requirements.txt
```

In addition, you will need to install docker and docker compose. Please refer to the [official documentation](https://www.docker.com/get-started/) for installation details.
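Before moving on, you can sanity-check that the core tools are reachable from your shell. The loop below is just a convenience sketch; the tool names are the ones mentioned above (`docker compose` is a subcommand of `docker`, so checking `docker` covers that entry point, though not the compose plugin itself).

```bash
# Report whether each required tool is on the PATH.
# This makes no changes; it only prints one status line per tool.
for tool in pip docker; do
    if command -v "$tool" > /dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: MISSING"
    fi
done
```

Any `MISSING` line means the corresponding install step above still needs to be done.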
## Using the services

This is a quick review of some `atk` concepts and how to do some basic operations.

### Building the services

Explicitly building the services is not actually required; when you run the `up` command for the first time, the images will also be built.

At the moment, there are three services: `vnc`, `chrono`, and `dev`. We can build all of these in one go:

```bash
$ atk dev -b -s vnc chrono dev
```

> [!NOTE]
> This may take a long time to complete considering Chrono has to be built with multiple modules enabled.

### Starting the services

To start each service, we can use the `--up` command in `atk`.

```bash
$ atk dev -u -s vnc chrono dev
```

#### Optionals

Note that at this point we also want to specify any optionals we want to use. Some optionals may be changed at attach time (e.g. ones that only set an environment variable), but if an optional requests specific resources (e.g. an NVIDIA GPU), it _must_ be specified at up time.

You can specify optionals through the `--optionals` flag.

```bash
$ atk dev -s --optionals ...
```

### Attaching to the services

To attach to a service (i.e. get a shell inside it), we can use the `--attach` command in `atk`.

```bash
$ atk dev -a
```

You can also combine the `-a` flag with the `-u` flag to start and attach to a service in one go:

```bash
$ atk dev -s -ua
```

> [!NOTE]
> The `-a` flag can only take one service.

## Running the system

This section describes how to actually run some experiments.

### Starting a simulation

To start a simulation, we use the `chrono` service. We'll start it up, attach to it, and run the simulation. We'll need gpus for the demo, so we'll pass that in as an optional.
```bash
$ atk dev -ua -s chrono --optionals gpus vnc
WARNING | logger.set_verbosity :: Verbosity has been set to WARNING
[+] Running 1/1
 ✔ Container art-chrono Started
art@art-chrono:~/art/sim$ cd python
art@art-chrono:~/art/sim/python$ python3 demo_ART_cone.py --track
Running demo_ART_cone.py...
Loaded JSON: /home/art/art/sim/data/art-1/sensors/camera.json
Loaded JSON: /home/art/art/sim/data/art-1/sensors/accelerometer.json
Loaded JSON: /home/art/art/sim/data/art-1/sensors/gyroscope.json
Loaded JSON: /home/art/art/sim/data/art-1/sensors/magnetometer.json
Loaded JSON: /home/art/art/sim/data/art-1/sensors/gps.json
Shader compile time: 5.04626
Initializing rclcpp.
Initialized ChROSInterface: chrono_ros_node.
```

### Run the autonomy stack

To run the autonomy stack, we'll need to spin up and attach to the `dev` container. We'll then launch the system using the `art.launch.py` orchestrator (see the [Launch System doc](../design/launch_system.md) for more details). We'll also pass `use_sim:=True` to the launch command to disable the hardware drivers.

```bash
$ atk dev -ua -s dev --optionals gpus vnc
WARNING | logger.set_verbosity :: Verbosity has been set to WARNING
[+] Running 1/1
 ✔ Container art-dev Started
art@art-dev:~/art/workspace$ ros2 launch art_launch art.launch.py use_sim:=True
```

### Visualizing the output

When debugging and tracking the progress of the simulation, it's important to be able to visualize its output.

#### VNC

We've already started up the `vnc` service and specified `vnc` as an optional for the other services, so we can simply connect to the vnc window through the browser to view any GUIs we want.

The `vnc` service will attempt to deploy a vnc server on a port in the range `8080-8099` on the host.
If the port mappings were not a range, `docker compose` would throw an error if that port was already in use (like if you had another vnc server running on the host or were using it for another application). You will need to find the port that the `vnc` service is running on. You can do this by running the following command:

```bash
$ atk dev -s vnc -c ps
WARNING | logger.set_verbosity :: Verbosity has been set to WARNING
NAME IMAGE COMMAND
art-vnc atk/art:vnc "sh -c 'set -ex; exec supervisord -c /opt/supervisord.conf'" vnc 1 min ago Up 1 min 127.0.0.1:5900->5900/tcp, 127.0.0.1:8085->8080/tcp
```

In this example, you can see the `vnc` service is being mapped from port `8080` in the container to port `8085` on the host (i.e. `HOST:CONTAINER`).

To connect to this, simply navigate to `localhost:8085` in your browser.

> [!NOTE]
> As you can see from the previous command's output, a port in the range `5900-5999` may also be mapped. These ports can be used with a vnc client (e.g. [TigerVNC](https://tigervnc.org/)) to connect to the container. This is useful if you want to use a vnc client instead of the browser.

#### X11

Another option for viewing GUI windows is x11. You may need to do some configuring on your system to get X11 running properly (like installing [XQuartz](https://www.xquartz.org/) on macOS).

You can use x11 by replacing all `vnc` flags in the `--optionals` with `x11`.

> [!NOTE]
> This only works if you're on the same host the container is running on. If you're ssh'd into the host, you'll need to use vnc.
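A quick way to check whether x11 is even plausible in the current shell is to look at `$DISPLAY`; this check is only a heuristic sketch (a set `DISPLAY` does not guarantee a reachable X server).

```bash
# Heuristic check: x11 GUI forwarding needs DISPLAY to point at an X server.
# Prints a suggestion based on whether DISPLAY is set.
if [ -z "${DISPLAY:-}" ]; then
    echo "DISPLAY is unset: use the vnc optional instead of x11"
else
    echo "DISPLAY is set to $DISPLAY: the x11 optional may work"
fi
```

This also explains the `WARN[0000] The "DISPLAY" variable is not set` message mentioned earlier: with `DISPLAY` unset, compose substitutes a blank string, which is expected when using vnc.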
diff --git a/vehicles/bashrc new file mode 100644 index 00000000..f7369b54 --- /dev/null +++ b/vehicles/bashrc @@ -0,0 +1,13 @@
+#!/bin/bash
+
+# This file should be sourced by the vehicle's .bashrc file
+# echo "source ~/sbel/autonomy-research-testbed/vehicles/bashrc" >> ~/.bashrc
+
+# Activate the virtual environment
+source ~/sbel/autonomy-research-testbed/venv/bin/activate
+
+# Attach to (or create) the shared tmux session in interactive, non-tmux shells
+if command -v tmux &> /dev/null && [ -n "$PS1" ] && [[ ! "$TERM" =~ screen ]] && [[ ! "$TERM" =~ tmux ]] && [ -z "$TMUX" ]; then
+    tmux has-session -t sbel 2> /dev/null || tmux new-session -d -s sbel
+    exec tmux attach -t sbel
+fi