(If you have any questions, you can post them on https://devtalk.nvidia.com/default/topic/1064362/jetson-projects/rock-paper-scissors-with-jetson-nano/.)
You can try to play the Rock-Paper-Scissors game with a Jetson Nano!
- Download the expandable JetBot SD card image `jetbot_image_xxxxxx.zip`. For the JetBot SD card image, see https://github.com/NVIDIA-AI-IOT/jetbot/wiki/software-setup
- Insert an SD card into your desktop machine.
- Using Etcher, select the `jetbot_image_xxxxxx.zip` image and flash it onto the SD card.
- Remove the SD card from your desktop machine.
- Do not plug in an HDMI monitor, USB keyboard, or mouse for the Jetson Nano.
- Put the SD card prepared in the last section into the Jetson Nano.
- Plug the camera module into the Jetson Nano.
- Power the JetBot by plugging in the power supply.
- Wait a bit for the JetBot to boot. After booting, the IP addresses will be as follows:
  - JetBot: `192.168.55.1`
  - PC: `192.168.55.100`
- Navigate to http://192.168.55.1:8888 from your desktop's web browser.
- Enter `jetbot` as the password to log into Jupyter.
- Open a new terminal in Jupyter.
- Show the list of available networks: `nmcli device wifi list`
- Connect to the network: `sudo nmcli device wifi connect '<SSID>' password '<PASSWORD>' ifname wlan0`
- Check the IP address of the Jetson Nano: `ifconfig -a`
Install the jetcam library:

git clone https://github.com/NVIDIA-AI-IOT/jetcam.git
cd jetcam
sudo python3 setup.py install

Install colorama:

pip3 install colorama
Alternatively, you can install from source code:
git clone https://github.com/tartley/colorama.git
cd colorama
sudo python3 setup.py install
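After installing the dependencies, you may want to confirm that jetcam can read frames from the camera before going further. The snippet below is a minimal sketch based on the jetcam README example; the capture resolution and frame rate are assumptions you can adjust for your camera module.

```python
# Minimal camera sanity check with jetcam (capture settings are assumptions).
from jetcam.csi_camera import CSICamera

camera = CSICamera(width=224, height=224, capture_width=1080, capture_height=720, capture_fps=30)
frame = camera.read()   # returns a numpy array of shape (height, width, 3)
print(frame.shape)      # expect (224, 224, 3) if the camera is working
```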
Clone the project:
cd ~
git clone https://github.com/mokpi/Rock-Paper-Scissors-with-Jetson-Nano.git
- Open `train_model.ipynb` and run all the cells in order.
  - You will obtain the file `best_model.pth` after the notebook has run completely; this file will be used as the model to recognize hand gestures. (A hedged sketch of reloading this file appears after this list.)
  - If the last cell produces 30 lines of output, then the cell has run completely.
  - The first time you run cell #5, the notebook may download the pretrained AlexNet model from PyTorch (around 233 MB).
  - You don't need to repeat this step once you have already obtained the model.
- Open `test_game.ipynb` and run all the cells in order. You can play the Rock-Paper-Scissors game after you run the last cell.
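For reference, the sketch below shows one way the saved `best_model.pth` could be reloaded and used to classify a single camera frame outside the notebooks. It assumes an AlexNet backbone with a 3-way head, 224x224 BGR input, ImageNet normalization, and a rock/paper/scissors class order; the notebooks remain the authoritative version of this logic.

```python
# A hedged sketch of reloading best_model.pth for inference.
# Assumptions: AlexNet backbone with a 3-way head, 224x224 BGR frames from
# jetcam, ImageNet normalization, and class order ['rock', 'paper', 'scissors'].
import torch
import torch.nn.functional as F
import torchvision

device = torch.device('cuda')

# Re-create the architecture, then load the weights saved by train_model.ipynb.
model = torchvision.models.alexnet(pretrained=False)
model.classifier[6] = torch.nn.Linear(model.classifier[6].in_features, 3)
model.load_state_dict(torch.load('best_model.pth'))
model = model.to(device).eval()

mean = torch.Tensor([0.485, 0.456, 0.406]).view(3, 1, 1).to(device)
std = torch.Tensor([0.229, 0.224, 0.225]).view(3, 1, 1).to(device)
labels = ['rock', 'paper', 'scissors']   # assumed class order

def classify(frame):
    """Classify one 224x224 BGR frame (numpy array) from the camera."""
    x = torch.from_numpy(frame[:, :, ::-1].copy()).float()   # BGR -> RGB
    x = x.permute(2, 0, 1).to(device) / 255.0                # HWC -> CHW, scale to [0, 1]
    x = (x - mean) / std
    probs = F.softmax(model(x.unsqueeze(0)), dim=1)[0]
    return labels[int(probs.argmax())]
```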
Data collection for the rock-paper-scissors hand gestures.
This is the first stage of the project.
Train the model with the data collected.
This is the second stage of the project.
Code for testing the gameplay process. This is a simplified version of `game.ipynb`: the GPIO part indicating the device's choice of Rock/Paper/Scissors has been removed.
Try running this file before `game.ipynb` when you clone this project.
Code for the actual gameplay process, including GPIO lights and buttons.
During gameplay, the trained model is used to recognize the player's hand gesture.
This is the final stage of the project.
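As an illustration of the GPIO side, the sketch below lights one LED per device choice using the Jetson.GPIO package. The pin numbers and the `show_choice` helper are hypothetical placeholders, not the wiring or code actually used in `game.ipynb`.

```python
# Illustrative only: the pin numbers and helper below are hypothetical,
# not the wiring used in game.ipynb.
import Jetson.GPIO as GPIO

LED_PINS = {'rock': 11, 'paper': 13, 'scissors': 15}   # assumed BOARD pin numbers

GPIO.setmode(GPIO.BOARD)
for pin in LED_PINS.values():
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def show_choice(choice):
    """Light the LED for the device's choice and turn the others off."""
    for name, pin in LED_PINS.items():
        GPIO.output(pin, GPIO.HIGH if name == choice else GPIO.LOW)

try:
    show_choice('rock')
finally:
    GPIO.cleanup()   # release the pins when done
```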
Category (Folder Name) | Meaning |
---|---|
one | Rock |
two | Paper |
three | Scissors |

These folders contain the images used to train the network (a loading sketch follows below).
This folder contains test code written during the development phase.
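If you want to inspect the training images outside the notebooks, the one/two/three folders follow the standard layout expected by torchvision's `ImageFolder`. The sketch below assumes the folders live in a directory named `dataset` and uses generic transforms; `train_model.ipynb` may use different settings.

```python
# A minimal sketch for loading the one/two/three folders with torchvision.
# The 'dataset' path and the transforms are assumptions.
import torchvision
import torchvision.transforms as transforms

dataset = torchvision.datasets.ImageFolder(
    'dataset',
    transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
)
print(dataset.class_to_idx)   # shows how the folder names map to class indices
```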
If you want to get good hand gesture recognition results, the following poses are recommended.
Gesture | Poses |
---|---|
Rock | |
Paper | |
Scissors | |
- Collecting more data to train the network would improve its accuracy, for example:
  - different environments and lighting
  - different colors
  - a better camera
- We use AlexNet for the recognition. Choosing a better-suited network may help (see the sketch after this list).
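As a concrete example of trying a different network, the sketch below swaps the AlexNet backbone for ResNet-18 and replaces its final layer with a 3-way head. This is only an illustration of the idea; the training code in `train_model.ipynb` would need the matching change.

```python
# Illustrative sketch: ResNet-18 backbone with a 3-way head for rock/paper/scissors.
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)   # downloads ImageNet weights on first use
model.fc = torch.nn.Linear(model.fc.in_features, 3)    # replace the final layer with 3 outputs
model = model.to(torch.device('cuda'))
```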