Demos
First, ensure that you clone Formula1Epoch into your home directory so that compilation works properly.
SteerNet runs two sub-networks, one for evaluating image data and one for evaluating LIDAR data, then combines their results to output a joystick value that steers the car.
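The trained SteerNet architecture lives in this repo's model files; purely as an illustration of the two-branch idea, a fusion network of this kind might look like the sketch below (the class name, layer sizes, input shapes, and the LIDAR vector length are all illustrative assumptions, not the actual trained network):

```python
# Illustrative sketch only -- layer sizes and input shapes are assumptions,
# not the actual trained SteerNet architecture.
import torch
import torch.nn as nn

class SteerNetSketch(nn.Module):
    def __init__(self, lidar_dim=10):  # lidar_dim is a placeholder
        super().__init__()
        # Image branch: small CNN over an assumed 3x64x64 camera frame
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(64), nn.ReLU(),
        )
        # LIDAR branch: MLP over a flat vector of range readings
        self.lidar_branch = nn.Sequential(
            nn.Linear(lidar_dim, 32), nn.ReLU(),
        )
        # Fusion head: combine both branches into one joystick value
        self.head = nn.Sequential(
            nn.Linear(64 + 32, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Tanh(),  # steering value in [-1, 1]
        )

    def forward(self, image, lidar):
        fused = torch.cat(
            [self.image_branch(image), self.lidar_branch(lidar)], dim=1)
        return self.head(fused)
```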
Included in the Demo is a dataset with the following structure:
- Data/
  - Images/
    - 0.png
    - 1.png
    - ...
  - LidarData.txt
  - timestamps.txt
Each image has a corresponding joystick value and LIDAR equivalent.
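The exact formats of LidarData.txt and timestamps.txt aren't documented here; assuming one whitespace-separated line per frame, where line N corresponds to Images/N.png, a minimal loading sketch might pair them up like this:

```python
# Minimal sketch: pairs Images/N.png with line N of LidarData.txt and
# timestamps.txt. Assumes one whitespace-separated line per frame; the
# actual file formats may differ.
from pathlib import Path

def load_demo_data(root="Data"):
    root = Path(root)
    lidar_lines = (root / "LidarData.txt").read_text().splitlines()
    timestamps = (root / "timestamps.txt").read_text().splitlines()
    samples = []
    for i, (lidar, ts) in enumerate(zip(lidar_lines, timestamps)):
        image_path = root / "Images" / f"{i}.png"
        if image_path.exists():
            samples.append({"image": image_path,
                            "lidar": [float(v) for v in lidar.split()],
                            "timestamp": ts})
    return samples
```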
Note: See dependencies for system requirements before running this project.
PeopleNet uses the DetectNet script from JetsonInference. PeopleNet has been fine-tuned on top of GoogLeNet to detect both people and segments of people, and it creates bounding boxes around them. Within Formula1Epoch/jetson-inference/build/aarch64/bin, you can test this model either with a camera in real time using ./detectnet-camera, or against a single image at location demo.jpg using ./detectnet-console.
Within Formula1Epoch/jetson-inference, you can edit the C++ files in the detectnet-camera and detectnet-console directories.
To build these files, go to Formula1Epoch/jetson-inference/build and run cmake ../ followed by make to compile any edits you make. Then, to properly run detectnet-console, take the compiled detectnet-console binary from Formula1Epoch/jetson-inference/build/aarch64/bin, move it to Formula1Epoch/jetson-inference/detectnet-console, and replace the current file. You can then run it from this directory using detectNetRun.py.
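The contents of detectNetRun.py aren't reproduced here; conceptually, a wrapper that drives the compiled binary on a buffered frame could be as simple as the sketch below (the argument layout of detectnet-console is an assumption based on the usage described above):

```python
# Hypothetical sketch of driving the compiled detectnet-console binary on a
# buffered frame; the real detectNetRun.py may work differently.
import subprocess

def run_detectnet(image_path="demo.jpg"):
    # detectnet-console is assumed to take the input image path as its
    # first argument, as in the single-image usage described above.
    subprocess.run(["./detectnet-console", image_path], check=True)

if __name__ == "__main__":
    run_detectnet()
```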
Within the detectnet-console.cpp file, you will see a lot of interfacing with CUDA and a header file called detectnet.h. This header is important because it lets us interface with the loaded definition of our DetectNet caffemodel, implemented in detectnet.cpp. That file holds the paths to the caffemodel and prototxt, so you can run inference on images using what you've trained.
If you want to run ./detectnet-console against the current camera (without running ./detectnet-camera, which has a bug that displays images upside down), run camera.py in the same directory; it takes images and saves them to a buffer. Note: this is not part of the demo, just an added option in case you want to see real-time detection.
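camera.py itself isn't shown here; assuming an OpenCV capture loop that overwrites a fixed image path acting as the shared buffer, it might look like the following sketch (the buffer path and mechanism are assumptions):

```python
# Hypothetical sketch of camera.py: grab frames from the default camera and
# overwrite a fixed image path that the other scripts read as a buffer.
# The real script may use a different buffer mechanism.
import cv2

def capture_to_buffer(buffer_path="demo.jpg"):
    cap = cv2.VideoCapture(0)  # default camera
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imwrite(buffer_path, frame)  # overwrite the shared buffer
    finally:
        cap.release()

if __name__ == "__main__":
    capture_to_buffer()
```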
To run the PeopleNet part of the demo, run showvid.py to show the demo frames, then detectNetRun.py, and finally showPeep.py to display the image of the detected people.
The files span different languages and run as separate processes at the same time, so it is advisable to append " &" when launching each file sequentially in a single terminal. Preferably, assuming your system has power to spare, run each file in a separate terminal so they are easy to monitor. Each file saves and loads images from the same memory buffer, creating an efficient system that avoids exchanging images through ROS or protobuf.
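If you prefer a single entry point over typing each command with " &", a small launcher along these lines would start the three scripts as concurrent processes (a convenience sketch, not part of the repo):

```python
# Hypothetical convenience launcher: starts the three demo scripts as
# concurrent processes, mirroring the " &" approach described above.
import subprocess

scripts = ["showvid.py", "detectNetRun.py", "showPeep.py"]
procs = [subprocess.Popen(["python", s]) for s in scripts]
for p in procs:
    p.wait()  # keep the launcher alive until every script exits
```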