Commit
edit readme to have image
colourfulspring committed Apr 28, 2024
1 parent bfed7e6 commit 3d7fcd2
Showing 2 changed files with 5 additions and 4 deletions.
9 changes: 5 additions & 4 deletions docs/Readme.md
@@ -4,13 +4,14 @@
* Install Unity 2022.3.4f1 in Unity Hub via [this link](https://unity.com/releases/editor/archive). Find the correct version in the list and click the blue button with the text "Unity Hub".
* Add the path '/Project' as a new project in the Unity Hub. Then open the project.
* In the explorer below the Project panel, navigate to "Assets/ML-Agents/Examples/CarCatching/Scenes". Then drag the CarCatching.unity file into the Hierarchy panel at the top left.
- * Click the Play button at top middle place. It's icon is "▷" . Then you will see three agent robots(blue) are catching one enemy robot(purple).
+ * Click the Play button ("▷") at the top middle. You will then see three agent robots (blue) chasing one enemy robot (purple), as the following image shows:
![An example of trajectory](images/Fig8.png)

# Training
* Install MLAgents Python and MLAgents Unity extension based on the steps at [this link](https://gitee.com/capacito/ml-agents-carcatching/blob/main/docs/Installation.md).
- * Click "File->Build Settings". In the opening window, we choose the scene we want to build in "Scenes in build". Then we choose the target platform. Finally, we click the Build button at bottom right.
- * Open a terminal and change directory to the root path of this repo. Activate the Python environment with MLAgents.
- * Run this commands
+ * Click "File->Build Settings". In the window that opens, check the scene 'ML-Agents/Examples/CarCatching/Scenes/CarCatching' under "Scenes In Build". Then choose the target platform. Finally, click the Build button at the bottom right.
+ * Open a terminal and change directory to the root path of this repo. If you use conda, activate the Python environment with MLAgents installed.
+ * Run this command to start training:
```
mlagents-learn config/ppo/Catching.yaml --env=./Builds/CarCatching/CarCatching.x86_64 --run-id=v8.0.0 --width=2000 --height=1000
```
Binary file added docs/images/Fig8.png
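The `config/ppo/Catching.yaml` file passed to `mlagents-learn` is not shown in this diff. As a rough illustration only, an ML-Agents PPO trainer configuration typically has the following shape; the behavior name `Catching` and every hyperparameter value below are assumptions for the sketch, not the repository's actual settings:

```yaml
behaviors:
  Catching:               # must match the Behavior Parameters name in the Unity scene (assumed here)
    trainer_type: ppo
    hyperparameters:
      batch_size: 1024
      buffer_size: 10240
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 128
      num_layers: 2
    reward_signals:
      extrinsic:          # reward emitted by the environment itself
        gamma: 0.99
        strength: 1.0
    max_steps: 5000000
    time_horizon: 64
    summary_freq: 10000
```

Training results are written under `results/<run-id>` (here `results/v8.0.0`) and progress can be monitored with `tensorboard --logdir results`.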
