# Minimal artificial neural network with backpropagation

In machine learning and cognitive science, artificial neural networks (ANNs) are a family of models inspired by biological neural networks (the central nervous systems of animals, in particular the brain) that are used to estimate or approximate functions which can depend on a large number of inputs and are generally unknown. Artificial neural networks are typically presented as systems of interconnected "neurons" that exchange messages with one another. The connections have numeric weights that can be tuned based on experience, making neural networks adaptive to inputs and capable of learning.
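As a rough illustration of that idea (a minimal sketch, not this module's API; the names `sigmoid` and `neuron_output` are hypothetical), a single artificial neuron computes a weighted sum of its inputs and passes the result through an activation function:

```python
import math

def sigmoid(x):
    # Logistic activation: squashes any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    # One "neuron": weighted sum of the inputs, plus a bias,
    # passed through the activation function.
    total = sum(w * v for w, v in zip(weights, inputs)) + bias
    return sigmoid(total)

# Example: a neuron with two inputs and hand-picked weights.
print(neuron_output([0.5, 0.9], [0.4, -0.2], 0.1))  # ~0.53
```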

Backpropagation, an abbreviation for "backward propagation of errors", is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of a loss function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the loss function.
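The sketch below shows backpropagation and gradient descent end to end on a tiny 2-2-1 network learning XOR. It is illustrative only: the network shape, the sigmoid activation, the squared-error loss, and all function names are assumptions made for this example, not necessarily how this module is organized internally.

```python
import math
import random

def sigmoid(x):
    # Logistic activation.
    return 1.0 / (1.0 + math.exp(-x))

def dsigmoid(y):
    # Sigmoid derivative, written in terms of the sigmoid's output y.
    return y * (1.0 - y)

random.seed(1)
# A tiny 2-2-1 network; each unit carries an extra bias weight.
w_ih = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # 2 inputs + bias
w_ho = [random.uniform(-1, 1) for _ in range(3)]                      # 2 hidden + bias

def forward(x):
    xb = x + [1.0]  # append constant 1 so the last weight acts as a bias
    hidden = [sigmoid(sum(w * v for w, v in zip(ws, xb))) for ws in w_ih]
    hb = hidden + [1.0]
    output = sigmoid(sum(w * v for w, v in zip(w_ho, hb)))
    return hidden, output

def train_step(x, target, lr=0.5):
    # One backpropagation pass followed by a gradient-descent update.
    hidden, output = forward(x)
    # Gradient of the squared error at the output, chained through the sigmoid.
    delta_out = (output - target) * dsigmoid(output)
    # Propagate the error backwards through the hidden-to-output weights.
    delta_hidden = [delta_out * w_ho[j] * dsigmoid(hidden[j]) for j in range(2)]
    hb = hidden + [1.0]
    xb = x + [1.0]
    # Gradient descent: move every weight against its gradient.
    for j in range(3):
        w_ho[j] -= lr * delta_out * hb[j]
    for j in range(2):
        for i in range(3):
            w_ih[j][i] -= lr * delta_hidden[j] * xb[i]

# XOR: the classic test case for a small network with one hidden layer.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
for _ in range(10000):
    for x, t in data:
        train_step(x, t)
for x, t in data:
    # After training, the outputs should approach the targets.
    print(x, "->", round(forward(x)[1], 3), "(target", t, ")")
```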

This module is based on Toby Segaran's model published in *Programming Collective Intelligence* (O'Reilly, 2007).


J. A. Corbal, 2019-2020.