# tensorflow-fcwta

TensorFlow implementation of a fully-connected winner-take-all (FC-WTA) autoencoder, as described in "Winner-Take-All Autoencoders" (2015) by Alireza Makhzani and Brendan Frey at the University of Toronto.
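The core of the model is the winner-take-all sparsity step described in the paper ("lifetime sparsity"): after computing the hidden activations for a minibatch, each hidden unit keeps only its top k% of activations across the batch and zeroes the rest, and only the surviving activations are used for reconstruction. Below is a minimal sketch of that step, assuming a fixed batch size; the function name is illustrative, not this repo's API:

```python
import tensorflow as tf

def lifetime_sparsity(h, sparsity=0.05):
    """Illustrative winner-take-all step: for each hidden unit, keep
    the top k activations across the minibatch and zero out the rest,
    where k = sparsity * batch_size.

    h: float tensor of shape [batch_size, num_hidden] (static batch size).
    """
    batch_size = int(h.shape[0])
    k = max(1, int(sparsity * batch_size))
    ht = tf.transpose(h)                      # [num_hidden, batch_size]
    values, _ = tf.nn.top_k(ht, k=k)          # top k per hidden unit
    threshold = values[:, -1:]                # k-th largest activation
    mask = tf.cast(ht >= threshold, h.dtype)  # zero out the losers
    return tf.transpose(ht * mask)
```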

See `train_digits.py` and `train_mnist.py` for example code.

## Example images

The following images are created by `train_mnist.py`, which trains an FC-WTA autoencoder on the MNIST digits dataset with 5% sparsity and 2000 hidden units.

This plot compares the original images (top row) to the autoencoder's reconstructions (bottom row):

*(image: digit reconstruction visualization)*

This one shows the autoencoder's learned code dictionary:

*(image: code dictionary visualization)*

Finally, here are t-SNE plots of the original data (left) and the featurized data (right):

*(image: t-SNE visualizations of original and featurized images)*
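Plots of this kind can be produced along these lines, a sketch assuming scikit-learn and matplotlib; the random arrays stand in for the MNIST pixels and the FC-WTA features:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Stand-ins for raw MNIST pixels and the corresponding FC-WTA features.
raw = np.random.rand(500, 784)
feats = np.random.rand(500, 2000)
labels = np.random.randint(10, size=500)

fig, axes = plt.subplots(1, 2)
for ax, data, title in [(axes[0], raw, "original"), (axes[1], feats, "featurized")]:
    emb = TSNE(n_components=2).fit_transform(data)  # embed into 2-D
    ax.scatter(emb[:, 0], emb[:, 1], c=labels, s=5)
    ax.set_title(title)
plt.show()
```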

A linear SVM trained on the featurized data achieves ~98.6% classification accuracy, close to the 98.8% reported in the original paper by Makhzani and Frey.
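A sketch of that evaluation step, assuming scikit-learn; the random arrays below are stand-ins for the actual featurized MNIST train/test splits:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Stand-ins for the FC-WTA features of the MNIST train/test splits.
train_feats = np.random.rand(1000, 2000)
train_labels = np.random.randint(10, size=1000)
test_feats = np.random.rand(200, 2000)
test_labels = np.random.randint(10, size=200)

clf = LinearSVC()
clf.fit(train_feats, train_labels)  # fit a linear SVM on the features
print("accuracy:", accuracy_score(test_labels, clf.predict(test_feats)))
```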