Neural Net Frameworks: Keras and Tensorflow

Keras

Keras is a user-friendly neural network API and framework built on top of Google's Tensorflow or, alternatively, the Theano library. We found it really easy to create networks, debug them, and deploy them at quite a high level. Transfer learning is also made easy by the ability to save and load pre-trained models as HDF5 ('.h5') files. At a lower level, however, and especially for advanced programmers, Tensorflow might be better for fine-tuning.

Changing Keras Backend

You can find the Keras configuration file at $HOME/.keras/keras.json. Note: If this file does not exist, you can create it. A default configuration file looks like:

{
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"
}

By changing the "backend" field, you can switch between using Tensorflow and Theano.
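
If you would rather not edit the file, Keras also honors the KERAS_BACKEND environment variable. A minimal sketch; note that the variable must be set before Keras is first imported:

import os

# Override keras.json for this process only; must run before importing keras
os.environ['KERAS_BACKEND'] = 'theano'

import keras  # prints "Using Theano backend." on import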

Installation + Dependencies

Finally, a library that's easy to install on the Jetson! No, but seriously, a simple 'pip install keras' does the trick. Installing Keras also pulls in the Theano library and Scipy as dependencies, which makes it easy to start coding in Keras right off the bat. With the Tensorflow backend, using the Jetson's GPU is as easy as just running your code. With Theano, however, you have to pass in a few simple flags to properly run on the GPU, as sketched below.
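
Theano reads those flags from the THEANO_FLAGS environment variable. A minimal sketch; the exact values depend on your Theano and CUDA versions (older Theano releases use device=gpu, newer ones device=cuda):

import os

# Ask Theano for the GPU with 32-bit floats; set before importing theano
os.environ['THEANO_FLAGS'] = 'device=gpu,floatX=float32'

import theano  # should report that it is running on the GPU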

Issues with Keras

Sometimes, Scipy version 0.19.0 will give you a compile error during installation. If so, delete the installation cache for Scipy and install an older version first, such as 0.17.0 or 0.14.0.
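
For example, something like the following should work, assuming pip's cached build is the problem:

# Skip the cached build and pin an older release
pip install --no-cache-dir scipy==0.17.0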

Theano shouldn't give any errors unless you pass the flags incorrectly. You may see a warning along the lines of 'this version of cuda is deprecated', but you can safely ignore it.

Installation of Tensorflow

We recommend following this tutorial for the TX2: https://syed-ahmed.gitbooks.io/nvidia-jetson-tx2-recipes/content/first-question.html and this one for the TX1: https://github.com/jetsonhacks/installTensorFlowTX1. A lot of random bugs can come up depending on how the Jetson was configured beforehand; usually these involve missing symbolic links or files absent from the proper root directory on the Jetson. If the installation doesn't work, Theano is always a great option! In fact, Epoch uses a Theano backend due to bugs in the Tensorflow installation.

Our Network in Keras

Our network is inspired by the steering autopilot from the DonkeyCar DIY racecar project, which uses a simple convolutional neural network. We heavily changed the concept for our own work, adding different layers with different outputs to create our final network. In the trainNew branch, you will find all the different networks we created and tested.

SteerNet

SteerNet combines a simple convolutional neural network (CNN) that takes in image data with a smaller fully connected network that takes in LiDAR data. The two subnetworks are joined by a sort of 'compilation network' whose inputs are the outputs of the subnetworks. The output of this combined network is a joystick turn value from -1 to 1.

The Image Subnetwork

We ended up using a CNN with 6 layers and 64 kernels per layer, followed by a fully-connected dense layer and a hyperbolic-tangent (tanh) output neuron that corresponds to a turn value between -1 and 1 on the RACECAR. We trained on about 20,000 images with associated joystick values, and experimented with different CNNs to arrive at our final network.
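
A minimal sketch of that architecture in Keras. The input shape, kernel size, and dense-layer width here are assumptions, not our exact code:

from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

def build_image_subnet(input_shape=(120, 160, 3)):  # camera resolution assumed
    model = Sequential()
    # Six convolutional layers with 64 kernels each, as described above
    model.add(Conv2D(64, (3, 3), activation='relu', input_shape=input_shape))
    for _ in range(5):
        model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))  # fully-connected layer, width assumed
    model.add(Dense(1, activation='tanh'))    # turn value in [-1, 1]
    model.compile(optimizer='adam', loss='mse')
    return model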

The LiDAR Subnetwork

Much like the image subnetwork, we matched many thousands of LiDAR data instances with joystick values, and again the output was the joystick value. The network we chose is mainly a 3-layer one-dimensional CNN followed by a dense layer that converges to a single joystick output.
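
A rough sketch of that subnetwork; the scan length, filter counts, and kernel size are assumptions:

from keras.models import Sequential
from keras.layers import Conv1D, Flatten, Dense

def build_lidar_subnet(scan_length=1081):  # number of range readings assumed
    model = Sequential()
    # Three 1-D convolutional layers over the raw scan
    model.add(Conv1D(32, 5, activation='relu', input_shape=(scan_length, 1)))
    model.add(Conv1D(32, 5, activation='relu'))
    model.add(Conv1D(32, 5, activation='relu'))
    model.add(Flatten())
    model.add(Dense(64, activation='relu'))
    model.add(Dense(1, activation='tanh'))  # joystick turn value in [-1, 1]
    model.compile(optimizer='adam', loss='mse')
    return model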

Concatenation

Using a separate dataset, we trained a final network to combine the joystick outputs of the LiDAR and image subnetworks. We knew each subnetwork would be slightly off, and it's otherwise hard to use both types of input together in one large network. So we created a small network that essentially learns how to weight the two predictions: the separate training set is run through the subnetworks to produce candidate joystick values from LiDAR and image, and the final network maps those to a single joystick output. The network is simple, comprising just a few dense layers.
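
A hedged sketch of that combiner using the Keras functional API; the hidden-layer sizes are assumptions:

from keras.models import Model
from keras.layers import Input, Dense, concatenate

# One scalar joystick prediction from each subnetwork
image_pred = Input(shape=(1,))
lidar_pred = Input(shape=(1,))

merged = concatenate([image_pred, lidar_pred])
x = Dense(8, activation='relu')(merged)   # a few small dense layers
x = Dense(8, activation='relu')(x)
output = Dense(1, activation='tanh')(x)   # fused turn command in [-1, 1]

combiner = Model(inputs=[image_pred, lidar_pred], outputs=output)
combiner.compile(optimizer='adam', loss='mse')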

Getting Data

In the data collection branch, you'll see that we have a tool that captures images and records each image's timestamp in a text file, alongside joystick data that is also timestamped. The LiDAR data is captured in much the same way. This way, you can collect thousands of instances of data just by driving around!
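
The timestamps are what let you pair each image or scan with a joystick reading. An illustrative sketch, not our actual tooling (the function and its arguments are made up for this example):

import bisect

def nearest_joystick(image_ts, joy_times, joy_values):
    """Return the joystick value recorded closest to image_ts.
    joy_times must be sorted in ascending order."""
    i = bisect.bisect_left(joy_times, image_ts)
    # Compare the neighbors on either side of the insertion point
    candidates = [j for j in (i - 1, i) if 0 <= j < len(joy_times)]
    best = min(candidates, key=lambda j: abs(joy_times[j] - image_ts))
    return joy_values[best]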

Issues

Issues are usually related to the GPU not being accessible. Make sure you have the appropriate NVIDIA drivers installed. Also, ensure that the .h5 files save properly.
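
A quick sanity check for both problems, assuming a Tensorflow backend: list the devices Tensorflow can see, then round-trip a throwaway model through an .h5 file:

from tensorflow.python.client import device_lib
from keras.models import Sequential, load_model
from keras.layers import Dense

print(device_lib.list_local_devices())  # a GPU entry should appear here

# Round-trip a trivial model through HDF5 to confirm .h5 saving works
model = Sequential([Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')
model.save('check.h5')  # 'check.h5' is a placeholder filename
restored = load_model('check.h5')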