Training SteerNet
Go to the binTrain branch, or to the GPU server you have set up with the appropriate files.
Download and untar the archive to get the array binaries for running training on our previously formatted dataset: https://drive.google.com/file/d/0BykgWqid-lY8WFlIZnNVN3RTYlk/view?usp=sharing
Move these files directly into the binTrain branch; you can then use our dataset by modifying the training files as described below. Training data consists of:
- A folder of images called `images`
- A text file called `timestamps.txt`, used to associate the lidar data and joystick data with the appropriate image
- A text file called `joydata.txt` that contains joystick output data
- A text file called `lidardata.txt` that contains lidar input data
After collecting training data, put it on a USB drive and plug it into your server for training. Open format.sh and edit the filepaths to point at your image timestamps file, images folder, and joydata and lidardata text files. Then, below those, edit the np.save lines to save the array of data returned by each helper function. This saves your formatted data as numpy array binaries that can be easily loaded later, giving you a fresh dataset of files from which to create a brand-new model; see the sketch below.
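For reference, the formatting step might look roughly like the following sketch. The helper-function names and file paths here are placeholders, not the repo's actual identifiers:

```python
import numpy as np

# Placeholder helper names -- substitute the actual formatting
# functions called from format.sh.
images = format_images('/path/to/images', '/path/to/timestamps.txt')
joy = format_joystick('/path/to/joydata.txt', '/path/to/timestamps.txt')
lidar = format_lidar('/path/to/lidardata.txt', '/path/to/timestamps.txt')

# Save each formatted array as a .npy binary so it can be reloaded
# instantly with np.load() at training time.
np.save('imagedata.npy', np.array(images))
np.save('joydata.npy', np.array(joy))
np.save('lidardata.npy', np.array(lidar))
```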
First, we'll train the image network and the lidar network on the fresh dataset. In each training file, edit the image, joystick, and lidar binary filepaths to point at the binaries you saved earlier; this lets you use the fresh dataset to create a new model. In the trainmodel() function you will see a save_model command loaded in by Keras. Above this command there is a modelname variable, which you can change to save your own custom model for both your image model and your lidar model (see the sketch below). Then, in the concatNet.py file, change the filepaths to the locations of your previously saved models.
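As a rough illustration, the modelname/save_model pattern could look like this. The architecture, paths, and hyperparameters here are placeholders standing in for the repo's actual network:

```python
import numpy as np
from tensorflow.keras.models import Sequential, save_model
from tensorflow.keras.layers import Conv2D, Flatten, Dense

def trainmodel():
    # Load the formatted binaries saved earlier (paths are placeholders).
    images = np.load('imagedata.npy')
    joy = np.load('joydata.npy')

    # Stand-in architecture; the real layers live in the repo's model code.
    model = Sequential([
        Conv2D(24, (5, 5), strides=2, activation='relu',
               input_shape=images.shape[1:]),
        Flatten(),
        Dense(64, activation='relu'),
        Dense(1),  # predicted joystick (steering) value
    ])
    model.compile(optimizer='adam', loss='mse')
    model.fit(images, joy, epochs=10, batch_size=32)

    # Change modelname to save your own custom model file.
    modelname = 'my_image_model.h5'
    save_model(model, modelname)

trainmodel()
```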
Now you will have to take a break to collect some new data, the same way as before. Since concatNet is simpler than the other networks, less data is required. After the data is collected and transferred like last time, set the filepaths in concatNet to these files, as for the other networks. Make sure the np.save line is uncommented and annotated the same way as for the other networks, in case you need to retrain. This file takes the new data, processes it through your trained image and lidar models, and sends the resulting outputs through the concatenation network. The joystick values from the lidar and image networks are essentially 'weighted' to get a final output, and you are training these weights on your data. Make sure you save the original image and lidar outputs as binaries in case you need to retrain for any reason. This final concatNet, for which you should edit the modelname variable so it is saved to a custom '.h5' file, will be the network you use for inference! A sketch of the whole step follows.
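A minimal sketch of this step, assuming one-dimensional joystick outputs from each network and using placeholder filenames:

```python
import numpy as np
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.layers import Input, Concatenate, Dense

# Load the image and lidar models trained earlier (paths are placeholders).
image_model = load_model('my_image_model.h5')
lidar_model = load_model('my_lidar_model.h5')

# Run the new data through both networks and save the outputs as binaries,
# so concatNet can be retrained later without recomputing the predictions.
image_out = image_model.predict(np.load('new_imagedata.npy'))
lidar_out = lidar_model.predict(np.load('new_lidardata.npy'))
np.save('image_out.npy', image_out)
np.save('lidar_out.npy', lidar_out)

# A small network that learns to weight the two joystick predictions.
img_in = Input(shape=(1,))
lid_in = Input(shape=(1,))
merged = Concatenate()([img_in, lid_in])
weighted = Dense(1)(merged)  # learned weighting of the two outputs
concat_model = Model(inputs=[img_in, lid_in], outputs=weighted)
concat_model.compile(optimizer='adam', loss='mse')
concat_model.fit([image_out, lidar_out], np.load('new_joydata.npy'), epochs=10)

modelname = 'my_concat_model.h5'
concat_model.save(modelname)  # the .h5 file used for inference
```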
You can run each Python file in the terminal with the python command, or you can edit and use train.sh, a script that does the same task.
Transfer learning in Keras is easy. First we have to transfer-learn on the image and lidar models. Before training, save the binaries of the new data you collected for transfer learning. Then, in the image and lidar model files, you should see a model() function being called to create a new model. Comment that call out and uncomment the line that uses Keras's load_model function; this lets you load a previous model of your choice and train on top of it. Make sure you specify the filepath to the previous model appropriate to the type of model you want to train (see the sketch below).
Finally, for concatNet, load the image and lidar models as before, and also do the same uncommenting to load your previous concat network. Once you run concatNet.py, you should have your new model, assuming you changed the modelnames as specified before!
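A minimal transfer-learning sketch, with placeholder paths and the fresh-model call left commented out as described above:

```python
import numpy as np
from tensorflow.keras.models import load_model

# New data collected for transfer learning, already saved as binaries.
new_images = np.load('new_imagedata.npy')
new_joy = np.load('new_joydata.npy')

# Fresh training: comment this out when transfer learning.
# model = model()   # the repo's function that builds a brand-new network

# Transfer learning: load a previous model and train on top of it.
model = load_model('my_image_model.h5')  # path to the model to build on
model.fit(new_images, new_joy, epochs=5, batch_size=32)

modelname = 'my_image_model_v2.h5'
model.save(modelname)
```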