This is a sample project for F1Tenth challenges, the most recent being F1Tenth ICRA 2022.
- Python 3.8/3.9
- Pip 22.0.3 (or greater)
There can be issues with installation when using older pip versions.
To check pip version and upgrade pip:
```bash
pip --version
python -m pip install --upgrade pip
```
- Click on the green "Code" button and then "Download ZIP".
- Unzip the ZIP file. Open it in VSCode using "Open Folder".
- In VSCode, press Ctrl-Shift-P to open the Command Palette and enter `Terminal: Create New Terminal`.
- Run the upgrade pip command (above).
- Run these commands:
```bash
pip install --user -e gym
cd pkg/src
python -m pkg.main
```
After about 20-30 seconds you should see a window pop up with a moving purple rectangle: this is the car!
Clone this repository and install required packages:
(run the git clone command)
```bash
cd f1tenth_simulator_code
pip install --user -e gym
```
Finally, check if the repo is working properly:
```bash
cd pkg/src
python -m pkg.main
```
To develop your driver you can work in the folder pkg/src/pkg.
Let's take a look at the most basic Driver, which is in the file `drivers.py`:

```python
class SimpleDriver:

    def process_observation(self, ranges=None, ego_odom=None):
        speed = 5.0
        steering_angle = 0.0
        return speed, steering_angle
```
A Driver is just a class that has a `process_observation` function, which takes in LiDAR data and odometry data and returns a speed to drive at along with a steering angle.
- `ranges`: an array of 1080 distances (ranges) detected by the LiDAR scanner. As the LiDAR scanner takes readings for the full 360°, the angle between each range is 2π/1080 (in radians).
- `ego_odom`: a dict with the following keys:

```python
{
    'pose_x': float,         # the car's position in x
    'pose_y': float,         # the car's position in y
    'pose_theta': float,     # the car's current orientation
    'linear_vel_x': float,   # the x component of the car's velocity
    'linear_vel_y': float,   # the y component of the car's velocity
    'angular_vel_z': float,  # the rate of rotation of the car
}
```
- `steering_angle`: an angle in radians in the range [-π/2, π/2] (i.e. [-90°, 90°]), with 0 meaning straight ahead.
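To make these inputs and outputs concrete, here is a minimal illustrative driver. It is not part of the repo: the class name, speed values, and the assumption that the middle beam points straight ahead are all ours. It simply steers toward the farthest LiDAR reading and slows down for sharp turns:

```python
import numpy as np

class FarthestPointDriver:
    """Illustrative example only: steer toward the farthest LiDAR reading."""

    def process_observation(self, ranges=None, ego_odom=None):
        scan = np.asarray(ranges, dtype=float)
        # Assume the middle beam faces forward and neighbouring beams are
        # 2*pi/1080 radians apart, as described above.
        angle_per_beam = 2 * np.pi / len(scan)
        best_idx = int(np.argmax(scan))
        steering_angle = (best_idx - len(scan) // 2) * angle_per_beam
        # Clamp to the allowed range and slow down when turning sharply.
        steering_angle = float(np.clip(steering_angle, -np.pi / 2, np.pi / 2))
        speed = 5.0 if abs(steering_angle) < 0.3 else 2.0
        return speed, steering_angle
```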
Let's look at the main.py file. The section shown below is all we need to worry about.
```python
...
# import your drivers here
from pkg.drivers import DisparityExtender

# choose your drivers here (1-4)
drivers = [DisparityExtender()]

# choose your racetrack here (SOCHI, SOCHI_OBS)
RACETRACK = 'SOCHI'
...
```
As shown in the comments above, we can import Drivers and then choose which ones we want to use. Let's import our SimpleDriver and choose it:

```python
...
# import your drivers here
from pkg.drivers import DisparityExtender, SimpleDriver

# choose your drivers here (1-4)
drivers = [SimpleDriver()]
...
```
Now if you run the main.py file again (from `pkg/src`, as before), it uses our SimpleDriver:

```bash
python -m pkg.main
```
To see some more complex processing, take a look at the GapFollower Driver, which implements the Follow the Gap method! Notice that it still has a `process_lidar` function which takes in LiDAR data and returns a speed and steering angle. That's all we'll ever need.
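For intuition, here is a heavily simplified, hypothetical sketch of the follow-the-gap idea; it is not the repo's GapFollower, and the bubble size and clearance threshold are made-up values. The idea: blank out the beams around the closest obstacle, then steer toward the middle of the widest remaining gap.

```python
import numpy as np

class NaiveGapFollower:
    """Simplified sketch of the follow-the-gap idea (not the repo's GapFollower)."""

    def process_lidar(self, ranges):
        scan = np.array(ranges, dtype=float)
        angle_per_beam = 2 * np.pi / len(scan)
        # Blank out a "bubble" of beams around the closest obstacle.
        closest = int(np.argmin(scan))
        bubble = 100
        scan[max(0, closest - bubble):closest + bubble] = 0.0
        # Group the remaining free beams into consecutive runs; pick the widest run (gap).
        free = np.flatnonzero(scan > 0.5)  # beams with at least 0.5 m of clearance
        gaps = np.split(free, np.where(np.diff(free) > 1)[0] + 1)
        widest = max(gaps, key=len)
        # Steer toward the middle of that gap (middle beam assumed to face forward).
        target = int(widest[len(widest) // 2])
        steering_angle = (target - len(scan) // 2) * angle_per_beam
        return 5.0, steering_angle
```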
To practice racing multiple Drivers against each other, simply choose multiple Drivers! You may choose up to 4 drivers, but in practice the simulator will usually run very slowly if you choose more than 2. You may race the same Driver against itself by choosing it twice. If you try racing GapFollower against itself, you will find that it is not good at multi-agent racing!
Here's how we would race GapFollower against SimpleDriver:
```python
# import your drivers here
from pkg.drivers import GapFollower, SimpleDriver

# choose your drivers here (1-4)
drivers = [GapFollower(), SimpleDriver()]

# choose your racetrack here (SOCHI, SOCHI_OBS)
RACETRACK = 'SOCHI'
```
You may choose between using the ordinary Sochi map or the Sochi Obstacles map. These are the two maps that will be used in the competition. To switch between them, simply change the name of the selected RACETRACK:

```python
...
# choose your racetrack here (SOCHI, SOCHI_OBS)
RACETRACK = 'SOCHI_OBS'
...
```
The baseline solution for this competition is the DisparityExtender, which is included in the drivers.py file. This Driver is an implementation of the Disparity Extender Algorithm, which has proved successful in previous competitions.
This baseline should already pass the obstacle avoidance track as-is, but it's not very fast! Speeding it up will introduce new challenges, which can be handled with some thinking. Each function in this baseline also has tips on ways it can be improved.
You don't need to use the baseline solution but if you're not sure where to start this is a good place!
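Very roughly, the disparity-extender idea is to find large jumps ("disparities") between adjacent LiDAR readings and overwrite the beams near each jump with the closer distance, so the car never steers toward a gap it cannot physically fit through. Below is a hypothetical sketch of just that core step; it is not the repo's implementation, and the threshold and beam-count values are invented.

```python
import numpy as np

def extend_disparities(ranges, threshold=0.5, extend_beams=30):
    """Illustrative sketch: pad each disparity with the closer of its two readings."""
    original = np.array(ranges, dtype=float)
    scan = original.copy()
    for i in range(len(original) - 1):
        # A disparity is a large jump between two neighbouring beams.
        if abs(original[i + 1] - original[i]) > threshold:
            nearer = min(original[i], original[i + 1])
            lo = max(0, i - extend_beams)
            hi = min(len(original), i + 1 + extend_beams)
            # Never report more clearance than the nearer obstacle allows.
            scan[lo:hi] = np.minimum(scan[lo:hi], nearer)
    return scan  # a driver would then steer toward the farthest remaining beam
```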
Prerequisite installs for submitting are Docker and jq.
Use the file pkg/nodes/f1tenth_ros_agent.py to choose the driver you are submitting, as shown below, where we choose the DisparityExtender driver:
```python
...
from pkg.drivers import DisparityExtender as Driver
...
```
If you're using additional dependencies, make sure they are provided in the pkg/requirements.txt file (or update your Docker image accordingly, if you know the Dockerfile format).
Create a .env file at the root of the project with the following contents:
```
RACE_MAP_PATH=/catkin_ws/src/f1tenth_gym_ros/maps/SOCHI.yaml
RACE_MAP_IMG_EXT=.png
F1TENTH_AGENT_NAME=a1
F1TENTH_AGENT_IMAGE=a1
RIDERS_CHALLENGE_ID=47
RIDERS_API_HOST=https://api.riders.ai
```
NOTE: If you're on Linux, replace the `ROS_MASTER_URI=http://host.docker.internal:11311` lines in docker-compose.yml with `ROS_MASTER_URI=http://172.17.0.1:11311`. Otherwise your separate Docker instances won't be able to find each other.
Then, from the root of the project, build your submission:
docker-compose build agent
The submission platform uses ROS to run the cars. Your car should race almost exactly the same in ROS as it did in the environment used in Developing your Driver, but it is a good idea to double-check occasionally by running ROS locally. This section will show you how to test your submission (if you want to) before you upload it.
Note: choose between SOCHI.yaml and SOCHI_OBS.yaml in the .env file shown above to choose which map to test on (this will not have an effect on what map is used when you submit).
Start ROSCore & F1Tenth ROS Bridge:
docker-compose up --force-recreate roscore-dev bridge-dev
Go to http://localhost:6080. If everything has worked properly until now, you should see the simulator window.
Finally, from another terminal, launch the Driver agent:
docker-compose up --force-recreate agent-dev
You should see your agent start driving along the track.
Requirements for running the submission script are the prerequisites listed above (Docker and jq).
Move into the scripts directory and run the submission file:
```bash
cd scripts
sh submit.sh
```
Follow the instructions displayed by the script.
Once it's finished, check the competition page to see how you did! (it may take up to 15 minutes to process your new submission)
If you already have Docker, just run `submit-with-docker.sh` from the project root:

```bash
bash scripts/submit-with-docker.sh
```
If you don't have Docker or you get an error, follow these instructions.
- Visit https://www.docker.com/products/docker-desktop and install Docker Desktop.
- To be sure, start Docker Desktop from the Windows Start menu. Then, from the Docker menu, select Settings > General. If you have installed Docker Desktop successfully, the "Use the WSL 2 based engine" check box will be checked by default. If not, check it and click Apply & Restart.
- After that, visit https://docs.microsoft.com/en-us/windows/wsl/install-win10#step-4---download-the-linux-kernel-update-package
- Perform steps 4, 5, and 6 described at the above address.
- Then, to check the WSL mode, run:
wsl.exe -l -v
- Here you can see the Linux distro you installed in step 6. To set your chosen distro as the default, run:
wsl --set-default <distro name>
For example, to set Ubuntu as your default WSL distro, run:
wsl --set-default ubuntu
- Installation is done! Finally, run `submit-with-docker.sh` from the project root:
bash scripts/submit-with-docker.sh
If the previous options don't work, you can try to upload with the Python script:
```bash
pip install docker six
python scripts/submit-with-docker.py
```
Enter your credentials, and this script should start a Docker container in the background that completes the submission. After submitting, make sure you visit the Submissions page to validate the newly created submission. After about 15 minutes, visit the Results page to view your agent's results.
- If you run the `pip install ...` command above and then later change your file structure in some way, you may get errors with `gym`, such as `module 'gym' has no attribute 'make'`. The solution is to re-run the command `pip install --user -e gym/`.
- On macOS Big Sur and above, when rendering is turned on, you might encounter the error `ImportError: Can't find framework /System/Library/Frameworks/OpenGL.framework.` You can fix the error by installing a newer version of pyglet with `pip3 install pyglet==1.5.11`. You might then see an error similar to `gym 0.17.3 requires pyglet<=1.5.0,>=1.4.0, but you'll have pyglet 1.5.11 which is incompatible.`, which can be ignored; the environment should still work without error.
- How can I view the state of my submission?
Go to the Submissions page on Riders.ai, then click View Status for the relevant submission.
If your agent has any issues (such as a syntax error), you will only see a single file, "F1Tenth Bridge & Agent Log". By looking at this file you can understand why your agent hasn't started (generally it's either a typo or an import issue).
If your agent starts successfully, you'll see two other logs (one for the timed trial and one for obstacle avoidance).
- How can I replay my submission?
Download the log for the Timed Trial or Obstacle Avoidance results; these logs should be in .jsonl format.
Clone the F1Tenth Log Player repo, update the path of the log file in main.py, and run the Python file as described in the repo's README.
- How can I add requirements to my agent?
If these are Python requirements, you can add them to the pkg/requirements.txt file; they will be installed automatically by the Agent Docker image. If you want to add an arbitrary dependency, you'll need to update compose/agent/Dockerfile. Since these images are based on Ubuntu 20.04, you can use any dependency that's available through apt, using `apt-get install -y {package-name}`.
If you find this Gym environment useful, please consider citing:
```bibtex
@inproceedings{okelly2020f1tenth,
  title={F1TENTH: An Open-source Evaluation Environment for Continuous Control and Reinforcement Learning},
  author={O'Kelly, Matthew and Zheng, Hongrui and Karthik, Dhruv and Mangharam, Rahul},
  booktitle={NeurIPS 2019 Competition and Demonstration Track},
  pages={77--89},
  year={2020},
  organization={PMLR}
}
```