The first iteration (SignLanguageRecognition.py) was inspired by code from https://github.com/anujshah1003/Transfer-Learning-in-keras---custom-data/blob/master/transfer_learning_vgg16_custom_data.py and keras.io. It predicts which sign language letter (excluding J and Z, which involve movement) is present in an image.
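As a rough illustration of the transfer-learning approach that file is based on, the sketch below fine-tunes a pretrained VGG16 in Keras for static letter classification. The input size, classifier head, and 24-class output (A-Z minus J and Z) are assumptions for illustration, not the exact configuration in SignLanguageRecognition.py.

```python
# Minimal VGG16 transfer-learning sketch, assuming 224x224 RGB inputs
# and 24 static letter classes; head sizes are illustrative only.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained convolutional layers

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(24, activation="softmax"),  # 24 static ASL letters
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=..., validation_data=...)
```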
The second iteration (all other files) used OpenPose (https://github.com/CMU-Perceptual-Computing-Lab/openpose) to track body positions so that we could predict 10 dynamic ASL signs from videos.
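The sketch below shows one way the OpenPose output could feed such a model: per-frame keypoint JSON files are stacked into a sequence and passed to a recurrent classifier. The `*_keypoints.json` naming, the use of only the first detected person, and the LSTM architecture are assumptions for illustration, not necessarily what the other files in this repository do.

```python
# Minimal sketch, assuming OpenPose was run with JSON output so each frame
# produces a file whose "people" entries contain "pose_keypoints_2d"
# (x, y, confidence triples). The LSTM classifier is illustrative only.
import glob
import json
import numpy as np
from tensorflow.keras import layers, models

def load_keypoint_sequence(json_dir):
    """Stack per-frame OpenPose body keypoints into a (frames, features) array."""
    frames = []
    for path in sorted(glob.glob(f"{json_dir}/*_keypoints.json")):
        with open(path) as f:
            data = json.load(f)
        if data["people"]:
            frames.append(data["people"][0]["pose_keypoints_2d"])
    return np.array(frames)

# Hypothetical sequence classifier over keypoint features for 10 dynamic signs.
num_features = 75  # 25 body keypoints x (x, y, confidence) for the BODY_25 model
model = models.Sequential([
    layers.Masking(mask_value=0.0, input_shape=(None, num_features)),
    layers.LSTM(64),
    layers.Dense(10, activation="softmax"),  # 10 dynamic ASL signs
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```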