Wikipedia (https://en.wikipedia.org/wiki/Restricted_Boltzmann_machine) gives a brief description of the restricted Boltzmann machine (RBM), which rose to prominence after Geoffrey Hinton and collaborators invented the contrastive divergence algorithm. The deep belief network built on RBMs triggered the revival of deep learning and many breakthroughs over the last decade. RBMs are traditionally taught as probabilistic models; in this project, a data mapping is used instead to describe the relationship between the visible and hidden layers, and classical topics such as dimensionality reduction, feature extraction, data generation, classification, and regression are studied within this data-mapping framework. Three papers are planned on these subjects:
- Use a squared error as the cost function to avoid probabilities, train both directed and undirected graphs with finite-difference learning, and exercise polynomial-decay activations (https://arxiv.org/abs/1909.08210); see the first sketch after this list.
- Investigate hybrid cost functions that combine data representation and classification without a dedicated classification/regression layer; see the second sketch after this list.
- Extend the RBM architecture to perform some functionalities of modern deep networks such as CNNs, RNNs, and GANs.
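
The first paper's idea can be illustrated with a minimal sketch: treat the RBM as a deterministic visible-to-hidden-to-visible mapping, score it with a squared reconstruction error, and estimate gradients by finite differences instead of contrastive divergence. Everything below is an illustrative assumption, not the paper's code: the layer sizes, learning rate, and perturbation size are arbitrary, and a standard sigmoid stands in for the paper's polynomial-decay activations.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3
W = rng.normal(scale=0.1, size=(n_hidden, n_visible))  # shared weights
b = np.zeros(n_hidden)                                  # hidden bias
c = np.zeros(n_visible)                                 # visible bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reconstruct(v, W, b, c):
    """Deterministic visible -> hidden -> visible data mapping."""
    h = sigmoid(W @ v + b)        # hidden activations
    return sigmoid(W.T @ h + c)   # reconstructed visible units

def loss(v, W, b, c):
    """Squared reconstruction error; no probabilistic model involved."""
    return np.sum((reconstruct(v, W, b, c) - v) ** 2)

def fd_update(v, W, b, c, eps=1e-4, lr=0.5):
    """One step of finite-difference learning: estimate dL/dW_ij
    by central differences, then take a gradient-descent step."""
    grad = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            W[i, j] += eps
            up = loss(v, W, b, c)
            W[i, j] -= 2 * eps
            down = loss(v, W, b, c)
            W[i, j] += eps                 # restore the weight
            grad[i, j] = (up - down) / (2 * eps)
    W -= lr * grad
    return W

v = rng.integers(0, 2, size=n_visible).astype(float)  # one binary sample
for step in range(200):
    W = fd_update(v, W, b, c)
print("final reconstruction error:", loss(v, W, b, c))
```

Because the cost is an ordinary scalar function of the weights, the same finite-difference update applies unchanged whether the graph is directed (separate up and down weights) or undirected (the shared `W` used here).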
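For the second paper, one plausible form of a hybrid cost (an assumption for illustration, not necessarily the paper's construction) is to append a one-hot label to the visible layer, so classification falls out of the same reconstruction mapping with no dedicated classifier layer, and to weight the label term with a hypothetical trade-off parameter `lam`. The mapping helpers are restated from the sketch above so the block stands alone.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reconstruct(v, W, b, c):
    """Deterministic visible -> hidden -> visible mapping, as above."""
    h = sigmoid(W @ v + b)
    return sigmoid(W.T @ h + c)

def hybrid_loss(v_feat, y_onehot, W, b, c, lam=1.0):
    """Squared-error cost split into a representation term and a label term."""
    v = np.concatenate([v_feat, y_onehot])     # features and label share the visible layer
    r = reconstruct(v, W, b, c)
    n = v_feat.size
    rep_err = np.sum((r[:n] - v_feat) ** 2)    # data-representation cost
    cls_err = np.sum((r[n:] - y_onehot) ** 2)  # classification cost
    return rep_err + lam * cls_err

# Example: 4 feature units plus a 2-class one-hot label -> 6 visible units.
rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(3, 6))
print(hybrid_loss(rng.random(4), np.array([1.0, 0.0]), W, np.zeros(3), np.zeros(6)))
```

This scalar hybrid cost can be minimized with the same finite-difference update as before, with `lam` trading off representation quality against classification accuracy.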