Adversarial Stress Test for Autonomous Vehicle via Series Reinforcement Learning Tasks with Reward Shaping


We introduce an evolving series reinforcement learning (RL) framework for adversarial policy training. The framework integrates Responsibility-Sensitive Safety (RSS) and Dynamic Time Warping (DTW) theories to shape the reward function, steering each subsequent agent in the series toward vulnerability-revealing attack scenarios not yet recorded in a refined buffered repository. Our method undertakes adversarial stress tests for both black-box and white-box AV systems under test in driving tasks that involve games with traffic vehicles and pedestrians. The results indicate that our approach accelerates the discovery of additional scenarios in which the AV is at fault, outperforming the baseline in both vulnerability-revealing accidents and scenario diversity. Furthermore, the causality of the collisions is qualitatively analyzed to provide insights for repairing AV system vulnerabilities.

[Figure: overall framework]
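
To make the role of DTW in the reward shaping concrete, below is a minimal sketch of how such a novelty term could be computed, assuming trajectories are stored as (T, 2) position arrays. This is an assumed simplification, not the released implementation; the function names and the weight `alpha` are illustrative placeholders.

```python
# Rough sketch of a DTW-based novelty term for reward shaping: an attack
# trajectory earns a bonus for being dissimilar, under Dynamic Time Warping,
# to trajectories already stored in the buffered repository.
# NOT the released implementation; names, shapes, and `alpha` are placeholders.
from typing import List

import numpy as np


def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(|a|*|b|) dynamic-programming DTW between two (T, 2) paths."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])


def novelty_bonus(traj: np.ndarray, repository: List[np.ndarray],
                  alpha: float = 0.1) -> float:
    """Bonus proportional to the DTW distance to the nearest buffered trajectory."""
    if not repository:
        return 0.0
    return alpha * min(dtw_distance(traj, ref) for ref in repository)
```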

Installation

For ease of use, we have integrated AST into the SafeBench framework!

Recommended system: Ubuntu 20.04 or 22.04

1. Local Installation


Step 1: Setup conda environment

conda create -n AST python=3.8
conda activate AST

Step 2: Clone this git repo in an appropriate folder

git clone https://github.com/caixxuan/AST-SRL.git

Step 3: Enter the repo root folder and install the packages:

cd AST-SRL
pip install -r requirements.txt
pip install -e .

Step 4: Download the unified CARLA release CARLA_0.9.13 and extract it to your preferred folder.

Step 5: Run sudo apt install libomp5 as per this git issue.

Step 6: Add the CARLA Python API to the PYTHONPATH environment variable. You can add the following commands to your ~/.bashrc:

export CARLA_ROOT={path/to/your/carla}
export PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI/carla/dist/carla-0.9.13-py3.8-linux-x86_64.egg
export PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI/carla/agents
export PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI/carla
export PYTHONPATH=$PYTHONPATH:${CARLA_ROOT}/PythonAPI
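
To verify that the CARLA Python API is on the path, you can run a quick import check (a minimal sanity test; it assumes the egg above matches your active Python 3.8 environment):

```python
# Sanity check: confirm the CARLA Python API added to PYTHONPATH above is
# importable. Run inside the AST conda environment (Python 3.8).
import carla

print("carla imported from:", carla.__file__)
```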

2. Docker Installation (Beta)


We also provide a docker image with CARLA and AST installed. Use the following command to launch a docker container:

bash docker/run_docker.sh

The CARLA simulator is installed at /home/AST/carla and SafeBench is installed at /home/AST/SafeBench.

Usage

1. Desktop Users


Enter the CARLA root folder, launch the CARLA server, and run our platform with

# Launch CARLA
./CarlaUE4.sh -prefernvidia -windowed -carla-port=2000

# Launch SafeBench in another terminal
python scripts/run.py --agent_cfg behavior.yaml --scenario_cfg td3.yaml --mode eval
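
If run.py cannot reach the simulator, a short Python snippet can confirm that the server is up (a sketch assuming the default localhost:2000 used above):

```python
# Connectivity check for the CARLA server launched above.
# Assumes the server listens on localhost:2000 (the -carla-port value).
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)  # seconds; subsequent calls fail fast if the server is down
print("Connected; server version:", client.get_server_version())
```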

2. Remote Server Users


Enter the CARLA root folder, launch the CARLA server in headless mode, and run our platform with

# Launch CARLA
./CarlaUE4.sh -prefernvidia -RenderOffScreen -carla-port=2000

# Launch SafeBench in another terminal
SDL_VIDEODRIVER="dummy" python scripts/run.py --agent_cfg basic.yaml --scenario_cfg standard.yaml --mode eval

(Optional) You can also visualize the pygame window using TurboVNC. First, launch CARLA in headless mode, then run our platform on a virtual display.

# Launch CARLA
./CarlaUE4.sh -prefernvidia -RenderOffScreen -carla-port=2000

# Run a remote VNC-Xserver. This will create a virtual display "8".
/opt/TurboVNC/bin/vncserver :8 -noxstartup

# Launch SafeBench on the virtual display
DISPLAY=:8 python scripts/run.py --agent_cfg basic.yaml --scenario_cfg standard.yaml --mode eval

You can use the TurboVNC client on your local machine to connect to the virtual display.

# Use the built-in SSH client of TurboVNC Viewer
/opt/TurboVNC/bin/vncviewer -via user@host localhost:n

# Or you can manually forward connections to the remote server by
ssh -L fp:localhost:5900+n user@host
# Open another terminal on local machine
/opt/TurboVNC/bin/vncviewer localhost::fp

where user@host is your remote server, fp is a free TCP port on the local machine, and n is the display port specified when you started the VNC server on the remote server ("8" in our example).

3. Visualization with CarlaViz


CarlaViz is a convenient visualization tool for CARLA developed by mjxu96, a former member of our team. To use CarlaViz, open another terminal and follow these instructions:

# pull docker image from docker hub
docker pull mjxu96/carlaviz:0.9.13

# run docker container of CarlaViz
cd Safebench/scripts
sh start_carlaviz.sh

Then you can open the CarlaViz window at http://localhost:8080. You can also access it remotely by forwarding port 8080 to your local machine.

4. Scenic Users


If you want to use Scenic to control the surrounding adversarial agents and RL to control the ego vehicle, first install Scenic as follows:

# Download Scenic repository
git clone https://github.com/BerkeleyLearnVerify/Scenic.git
cd Scenic
python -m pip install -e .

Then you can create a directory under safebench/scenario/scenario_data/scenic_data, e.g., Carla_Challenge, and put your Scenic files in that directory (the relative map path defined in each Scenic file should be ../maps/*.xodr).

Next, set the param scenic_dir in safebench/scenario/config/scenic.yaml to the directory where you store the Scenic files, e.g., safebench/scenario/scenario_data/scenic_data/Carla_Challenge, and our code will automatically load all Scenic files in that directory. A sketch of editing the config programmatically is shown below.
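
The snippet below sketches how these params could be updated programmatically; it assumes scenic.yaml is a flat YAML mapping with scenic_dir, sample_num, and select_num keys (check the actual file layout before using it):

```python
# Hypothetical helper to edit safebench/scenario/config/scenic.yaml.
# Assumes a flat YAML mapping; adapt the keys to the real file layout.
import yaml

cfg_path = "safebench/scenario/config/scenic.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg["scenic_dir"] = "safebench/scenario/scenario_data/scenic_data/Carla_Challenge"
cfg["sample_num"] = 40  # scenes sampled per Scenic file (see below)
cfg["select_num"] = 5   # most adversarial scenes kept (see below)

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f)
```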

To select the most adversarial scenes, the param sample_num in scenic.yaml determines the number of scenes sampled for each Scenic file, and the param select_num specifies how many of the most adversarial scenes to select from among those sample_num scenes:

python scripts/run.py --agent_cfg sac.yaml --scenario_cfg scenic.yaml --num_scenario 1 --mode train_scenario

Now you can test the ego with these selected adversarial scenes:

python scripts/run.py --agent_cfg sac.yaml --scenario_cfg scenic.yaml --num_scenario 1 --mode eval

Or, if you want to launch it on the virtual display:

DISPLAY=:8 python scripts/run.py --agent_cfg sac.yaml --scenario_cfg scenic.yaml --num_scenario 1 --mode train_scenario
DISPLAY=:8 python scripts/run.py --agent_cfg sac.yaml --scenario_cfg scenic.yaml --num_scenario 1 --mode eval

COMING SOON

The AST method and SafeBench differ in some framework details; because of the reward shaping and other fine-grained operations involved, the complete version of AST will be released within 2 weeks!

Running Arguments

| Argument | Choice | Usage |
| --- | --- | --- |
| mode | {train_agent, train_scenario, eval} | We provide three modes: training the agent, training the scenario, and evaluation. |
| agent_cfg | str | Path to the configuration file of the agent. |
| scenario_cfg | str | Path to the configuration file of the scenario. |
| max_episode_step | int | Maximum number of steps per episode when training agents and scenarios. |
| num_scenario | {1, 2, 3, 4} | We support running multiple scenarios in parallel; the current map allows at most 4 scenarios. |
| save_video | store_true | We support saving videos during evaluation mode. |
| auto_ego | store_true | Overwrite the action of the ego agent with autopilot. |
| port | int | Port used by CARLA; default 2000. |
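
For example, the arguments can be combined into a single run. The sketch below assembles an illustrative invocation with Python's subprocess (the config names are the examples used earlier in this README; it is equivalent to typing the command in a shell):

```python
# Illustrative combined invocation of scripts/run.py using the arguments
# from the table above.
import subprocess

subprocess.run([
    "python", "scripts/run.py",
    "--agent_cfg", "sac.yaml",
    "--scenario_cfg", "td3.yaml",
    "--mode", "train_scenario",
    "--num_scenario", "2",
    "--save_video",
    "--port", "2000",
], check=True)
```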

Reference

Results are illustrated in the res folder.

We use the CARLA Python API to implement all programs relevant to the series RL.

* CARLA is available at https://github.com/carla-simulator/carla.
* InterFuser is available at https://github.com/opendilab/InterFuser.
* The CARLA Leaderboard is available at https://leaderboard.carla.org/leaderboard/.

Cite

@article{cai2024adversarial,
  title={Adversarial Stress Test for Autonomous Vehicle Via Series Reinforcement Learning Tasks With Reward Shaping},
  author={Cai, Xuan and Bai, Xuesong and Cui, Zhiyong and Hang, Peng and Yu, Haiyang and Ren, Yilong},
  journal={IEEE Transactions on Intelligent Vehicles},
  year={2024},
  publisher={IEEE}
}
