
CapsNet-Tensorflow


A Tensorflow implementation of CapsNet based on Geoffrey Hinton's paper Dynamic Routing Between Capsules

Status:

  1. The code runs; issue #8 is fixed.
  2. Some results for the v0.1 tag have been posted, but they are not as good as the results in the paper.

Daily task

  1. Adjust the margin.
  2. Improve the eval pipeline and integrate it into the training pipeline, so that all you need is git clone, cd and python main.py.
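The margin mentioned in task 1 refers to the margin loss from the paper (Eq. 4). A minimal NumPy sketch of that loss, using the paper's default values m+ = 0.9, m− = 0.1, λ = 0.5 (the function name and toy inputs are mine, not from the repository):

```python
import numpy as np

def margin_loss(v_norm, labels, m_plus=0.9, m_minus=0.1, lam=0.5):
    """Margin loss from Eq. 4 of the paper.

    v_norm: (batch, num_classes) lengths of the output capsule vectors.
    labels: (batch, num_classes) one-hot targets T_k.
    """
    present = labels * np.maximum(0.0, m_plus - v_norm) ** 2
    absent = lam * (1.0 - labels) * np.maximum(0.0, v_norm - m_minus) ** 2
    return np.sum(present + absent, axis=-1)  # per-example loss

# toy check: a confident, correct prediction gives zero loss
v = np.array([[0.95, 0.05]])
y = np.array([[1.0, 0.0]])
print(margin_loss(v, y))  # -> [0.]
```

Adjusting the margin means tuning m+ and m−, which control how long the correct capsule's vector must be and how short the others must stay.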

Others

  1. Here (on Zhihu, 知乎) is an answer explaining my understanding of Section 4 of the paper (the core part of CapsNet). It may be helpful for understanding the code.
  2. If you find any problems, please let me know. I will try my best to fix them ASAP.

In the day of waiting, be patient: Merry days will come, believe. ---- Alexander Pushkin 😊

Requirements

  • Python
  • NumPy
  • Tensorflow (I'm using 1.3.0; not yet tested with older versions)
  • tqdm (for displaying training progress info)
  • scipy (for saving images)

Usage

Training

Step 1. Clone this repository with git.

$ git clone https://github.com/naturomics/CapsNet-Tensorflow.git
$ cd CapsNet-Tensorflow

Step 2. Download the MNIST dataset, then move and extract it into the data/mnist directory. (If backslashes appear around the curly braces when you copy the wget command into your terminal, remove them.)

$ mkdir -p data/mnist
$ wget -c -P data/mnist http://yann.lecun.com/exdb/mnist/{train-images-idx3-ubyte.gz,train-labels-idx1-ubyte.gz,t10k-images-idx3-ubyte.gz,t10k-labels-idx1-ubyte.gz}
$ gunzip data/mnist/*.gz
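After extraction, the files are in the raw IDX format (big-endian header: magic number, then dimensions, then pixel bytes). A minimal sketch of a parser for the image files; the function name is my own, and the synthetic demo stands in for a real file:

```python
import struct
import numpy as np

def read_mnist_images(raw: bytes) -> np.ndarray:
    """Parse an IDX3 image file (e.g. train-images-idx3-ubyte) into (n, rows, cols)."""
    magic, n, rows, cols = struct.unpack(">IIII", raw[:16])  # big-endian header
    assert magic == 2051, "not an IDX3 image file"
    return np.frombuffer(raw, dtype=np.uint8, offset=16).reshape(n, rows, cols)

# real usage (path assumed from Step 2):
#   images = read_mnist_images(open("data/mnist/train-images-idx3-ubyte", "rb").read())
# synthetic demo: one 2x2 "image" with pixel values 0..3
fake = struct.pack(">IIII", 2051, 1, 2, 2) + bytes([0, 1, 2, 3])
print(read_mnist_images(fake).shape)  # -> (1, 2, 2)
```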

Step 3. Start the training:

$ pip install tqdm  # install it if you haven't already
$ python train.py

The tqdm package is not necessary; it's just an optional tool for displaying training progress. If you don't want it, change the for step in ... loop to for step in range(num_batch) in train.py.
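Alternatively, tqdm can be made genuinely optional with a fallback, so the same loop works either way (a sketch; num_batch is a placeholder for whatever train.py computes):

```python
try:
    from tqdm import tqdm  # progress bar, if installed
except ImportError:
    def tqdm(iterable, **kwargs):
        return iterable  # plain loop, no progress display

num_batch = 3  # placeholder value for illustration
for step in tqdm(range(num_batch), total=num_batch, ncols=70):
    pass  # the training step would go here
```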

Evaluation

$ python eval.py --is_training False

Results

Results for the 'wrong' version (details in issue #8):

  • training loss (figures: total_loss, margin_loss, reconstruction_loss)

  • test acc

| Epoch    | 49    | 51    |
|----------|-------|-------|
| test acc | 94.69 | 94.71 |

(figures: five reconstruction test images)


Results after fixing issue #8:

My simple comments for capsule

  1. A new kind of neural unit (vector in, vector out, instead of scalar in, scalar out)
  2. The routing algorithm is similar to an attention mechanism
  3. Anyway, it's work with great potential, with a lot to be built upon
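The routing-by-agreement loop from Section 4 of the paper can be sketched in NumPy as follows. Variable names b, c, s, v follow the paper's notation; this is an illustration of the algorithm under simplified shapes, not the repository's TensorFlow code:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Squash non-linearity: shrinks a vector to length < 1, preserving direction."""
    sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

def softmax(x, axis):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

def dynamic_routing(u_hat, num_iters=3):
    """u_hat: (num_in, num_out, dim) prediction vectors u_hat_{j|i}."""
    num_in, num_out, dim = u_hat.shape
    b = np.zeros((num_in, num_out))                  # routing logits, start uniform
    for _ in range(num_iters):
        c = softmax(b, axis=1)                       # coupling coefficients c_ij
        s = np.sum(c[..., None] * u_hat, axis=0)     # weighted sum per output capsule
        v = squash(s)                                # output capsule vectors
        b = b + np.sum(u_hat * v[None], axis=-1)     # agreement (dot product) update
    return v

v = dynamic_routing(np.random.randn(6, 2, 4))
print(np.linalg.norm(v, axis=-1))  # all lengths < 1, by construction of squash
```

The agreement step is where the similarity to attention shows up: each input capsule raises its coupling to the output capsules whose vectors it agrees with, much like attention weights are raised by query-key dot products.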

TODO:

  • Finish the MNIST version of CapsNet (progress: 90%)
  • Do some different experiments with CapsNet:
      • Try using other datasets
      • Adjust the model structure
  • There is another new paper about capsules (submitted to ICLR 2018), a follow-up to the CapsNet paper.

My WeChat:

(images: two QR codes, my_wechat and nb312_wechat)

  • Our WeChat group is growing fast, and @nb312 is helping me with WeChat requests. The QR code on the left is mine; if you just want to join our group, please contact @nb312 via the QR code on the right.
