[Work in progress]
In this notebook we perform supervised learning using only two ingredients:
- matrix multiplication,
- simple for loops.
In particular, we do not use calculus (i.e. no gradient descent) to learn. Instead, we learn with a naive, gradient-free algorithm; a sketch is given below.
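As a minimal sketch of this idea, here is one possible gradient-free learner: a single linear layer trained by random hill-climbing (perturb the weights, keep the perturbation only if the loss drops). The data, architecture, and perturbation scale below are illustrative assumptions; the notebook's actual naive algorithm may differ, but like it, this sketch uses nothing beyond matrix multiplication and a for loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 100 samples, 4 features, 3 classes (one-hot targets).
X = rng.normal(size=(100, 4))
true_W = rng.normal(size=(4, 3))
Y = np.eye(3)[np.argmax(X @ true_W, axis=1)]

def loss(W):
    # Mean squared error between predictions and one-hot targets,
    # computed with matrix multiplication only.
    return np.mean((X @ W - Y) ** 2)

W = rng.normal(size=(4, 3))
best = loss(W)
for step in range(2000):                    # a simple for loop, no gradients
    candidate = W + 0.1 * rng.normal(size=W.shape)
    candidate_loss = loss(candidate)
    if candidate_loss < best:               # keep the change only if it helps
        W, best = candidate, candidate_loss

print(f"final loss: {best:.4f}")
```

This trades efficiency for transparency: every step is a matrix product and a comparison, so the whole learning process can be inspected directly.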
We believe the resulting loss in performance and depth is compensated by the network's transparency and concreteness, which illuminate basic features and classic issues of neural networks, such as overfitting.
You can run this notebook with Binder by clicking on this badge:
Alternatively, you can install the environment locally using Anaconda.
To do:
- JAX implementation.