
# tensorflow-fcwta

TensorFlow implementation of a fully-connected winner-take-all (FC-WTA) autoencoder, as described in "Winner-Take-All Autoencoders" (2015) by Alireza Makhzani and Brendan Frey at the University of Toronto.
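
The key mechanism from the paper is lifetime sparsity: after the encoder's ReLU layers, each hidden unit keeps only its largest k% of activations across the mini-batch and zeroes the rest, and the decoder must reconstruct the input from that sparse code. Below is a minimal TensorFlow 2 sketch of just that sparsity step (the function name `wta_sparsify` is illustrative, not part of this repo's API):

```python
import tensorflow as tf

def wta_sparsify(h, sparsity=0.05):
    """Keep, for each hidden unit, only its top `sparsity` fraction of
    activations across the mini-batch; zero out everything else.
    h: float tensor of shape (batch_size, num_hidden_units)."""
    batch_size = tf.cast(tf.shape(h)[0], tf.float32)
    k = tf.maximum(tf.cast(sparsity * batch_size, tf.int32), 1)
    # The k-th largest activation in each column serves as that unit's threshold.
    thresholds = tf.math.top_k(tf.transpose(h), k=k).values[:, -1]
    return h * tf.cast(h >= thresholds, h.dtype)  # broadcasts over the batch dim
```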

See `train_digits.py` and `train_mnist.py` for example code.

## Example images

The following images were generated by `train_mnist.py`, which trains an FC-WTA autoencoder on the MNIST digits dataset with 5% sparsity and 2000 hidden units.
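
For orientation, here is a hedged sketch of that configuration using modern TensorFlow 2 / Keras APIs (this repo predates TF2, so its actual code is structured differently); it wires a WTA step like the one sketched above between a dense ReLU encoder and a linear decoder:

```python
import tensorflow as tf

SPARSITY = 0.05       # fraction of batch activations each hidden unit keeps
HIDDEN_UNITS = 2000
IMAGE_SIZE = 28 * 28  # flattened MNIST digits

encoder = tf.keras.layers.Dense(HIDDEN_UNITS, activation="relu")
decoder = tf.keras.layers.Dense(IMAGE_SIZE)  # linear reconstruction layer
optimizer = tf.keras.optimizers.Adam(1e-3)

@tf.function
def train_step(x):
    with tf.GradientTape() as tape:
        h = encoder(x)
        # Winner-take-all step: per-unit threshold at the k-th largest activation.
        k = tf.maximum(
            tf.cast(SPARSITY * tf.cast(tf.shape(h)[0], tf.float32), tf.int32), 1)
        thresholds = tf.math.top_k(tf.transpose(h), k=k).values[:, -1]
        h_sparse = h * tf.cast(h >= thresholds, h.dtype)
        loss = tf.reduce_mean(tf.square(decoder(h_sparse) - x))
    variables = encoder.trainable_variables + decoder.trainable_variables
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss
```

As we read the paper, the WTA competition applies only during training; at feature-extraction time the plain ReLU activations of the encoder are used.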

This plot compares the original images (top row) to the autoencoder's reconstructions (bottom row):

*(figure: digit reconstruction visualization)*

This one shows the autoencoder's learned code dictionary:

*(figure: code dictionary visualization)*

Finally, here are t-SNE plots of the original data (left) and the featurized data (right):

*(figure: t-SNE visualizations of original and featurized images)*

A linear SVM trained on the featurized data achieves ~98.6% classification accuracy, close to the 98.8% reported by Makhzani and Frey in the original paper.
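
A hedged sketch of that evaluation step with scikit-learn (the feature arrays here are random stand-ins for the encoder's hidden activations; the actual pipeline in `train_mnist.py` may differ):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Stand-in features: in practice these would be the trained encoder's
# hidden activations, shape (n_samples, 2000), for the train/test images.
rng = np.random.default_rng(0)
feats_train, y_train = rng.random((1000, 2000)), rng.integers(0, 10, 1000)
feats_test, y_test = rng.random((200, 2000)), rng.integers(0, 10, 200)

clf = LinearSVC()
clf.fit(feats_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(feats_test)))
```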
