Restricted Boltzmann Machines (RBMs)
- Contrastive Divergence
- Persistent Contrastive Divergence
- Parallel Tempering
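The training methods above all approximate the likelihood gradient by replacing the model expectation with samples from a short Gibbs chain. A minimal sketch of Contrastive Divergence with one Gibbs step (CD-1) for a Bernoulli-Bernoulli RBM is shown below; all names (`W`, `b_v`, `b_h`, `cd1_step`) and the toy data are illustrative, not from the notes. Persistent Contrastive Divergence differs only in that the negative-phase chain is kept alive across updates instead of being restarted from the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Bernoulli-Bernoulli RBM: 6 visible, 4 hidden units (illustrative sizes).
n_v, n_h = 6, 4
W = rng.normal(0, 0.01, size=(n_v, n_h))
b_v = np.zeros(n_v)
b_h = np.zeros(n_h)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_v, b_h, lr=0.1):
    """One CD-1 update: positive phase on the data, one Gibbs step for the negative phase."""
    # Positive phase: hidden probabilities given the data.
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: reconstruct visibles from sampled hiddens, recompute hidden probs.
    pv1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_h)
    # Gradient approximation: <v h>_data - <v h>_model (one-step sample).
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / batch
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)
    return W, b_v, b_h

# Train on a tiny repeated two-pattern dataset and check reconstruction error.
data = np.tile(np.array([[1, 1, 1, 0, 0, 0],
                         [0, 0, 0, 1, 1, 1]], dtype=float), (10, 1))
for _ in range(500):
    W, b_v, b_h = cd1_step(data, W, b_v, b_h)

recon = sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)
err = float(np.mean((data - recon) ** 2))
print(round(err, 3))  # mean squared reconstruction error after training
```

Reconstruction error is only a rough progress indicator, but on this toy data it should drop well below the ~0.25 expected of an untrained model.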
- Feature extraction (leverages the availability of unlabeled data)
- Data generation
- Data compression
- Image restoration
RBMs are most often used as feature extractors; as generative models they have largely been superseded by VAEs and GANs, which typically produce higher-quality samples.
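For the data-generation use case, sampling from an RBM is done by block Gibbs sampling: alternately sample the hidden units given the visibles and the visibles given the hiddens. A minimal sketch is below; the weights here are random stand-ins (in practice `W`, `b_v`, `b_h` would come from a trained model), so the "sample" is only from an untrained distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Untrained stand-in RBM parameters (6 visible, 4 hidden units).
n_v, n_h = 6, 4
W = rng.normal(0, 0.5, size=(n_v, n_h))
b_v = np.zeros(n_v)
b_h = np.zeros(n_h)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Block Gibbs sampling: start from a random visible vector and alternate
# h ~ p(h|v), v ~ p(v|h). Longer chains give samples closer to the model distribution.
v = (rng.random(n_v) < 0.5).astype(float)
for _ in range(1000):  # burn-in
    h = (rng.random(n_h) < sigmoid(v @ W + b_h)).astype(float)
    v = (rng.random(n_v) < sigmoid(h @ W.T + b_v)).astype(float)

print(v)  # one approximate sample of the visible units
```

Slow mixing of this chain is exactly the problem Parallel Tempering addresses, by running chains at several temperatures and swapping states between them.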
- The learned features correspond to the weights of RBM hidden units.
- Evolution of the learned features during training: random -> global -> local.
- The final weights are shown below.
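Since each hidden unit's feature is just its incoming weight vector (one column of the weight matrix), the learned features can be inspected by reshaping each column to the input image shape. A sketch for 28x28 inputs, using a random weight matrix as a stand-in for a trained one:

```python
import numpy as np

# Stand-in for a trained weight matrix on 28x28 images: shape (784, n_hidden).
rng = np.random.default_rng(1)
W = rng.normal(size=(784, 16))

# Column j holds hidden unit j's weights; reshaped, it is that unit's "filter".
filters = [W[:, j].reshape(28, 28) for j in range(W.shape[1])]
# Normalize each filter to [0, 1] for display (e.g. with matplotlib's imshow).
filters = [(f - f.min()) / (f.max() - f.min()) for f in filters]
print(len(filters), filters[0].shape)
```

Plotted over the course of training, these filter images show the random -> global -> local progression noted above.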
- Training Restricted Boltzmann Machines using Approximations to the Likelihood Gradient (Tieleman, 2008)
- Parallel Tempering for Training of Restricted Boltzmann Machines (Desjardins et al., 2010)
- Lecture 12: RBMs (Vineeth N Balasubramanian, 2016)
- Training Restricted Boltzmann Machines: An Introduction (Fischer & Igel)
- A Practical Guide to Training Restricted Boltzmann Machines (Hinton, 2010)
- Introduction to RBMs
- RBM tutorial (deeplearning.net)
- (Coursera) Lecture 12.3 — Restricted Boltzmann Machines [Neural Networks for Machine Learning, Hinton]
- (Coursera) Lecture 12.4 — An example of RBM learning [Neural Networks for Machine Learning, Hinton]
- Neural networks [5.1]: RBM - definition (Hugo Larochelle)
- Neural networks [5.2]: RBM - inference
- Neural networks [5.3]: RBM - free energy
- Neural networks [5.4]: RBM - contrastive divergence
- Neural networks [5.5]: RBM - contrastive divergence (parameter update)
- Neural networks [5.6]: RBM - persistent contrastive divergence
- Neural networks [5.7]: RBM - example
- Neural networks [5.8]: RBM - extensions
- RBMs for beginners