Guide Neuron (Back)Propagation
"the sum of parts is greater than the whole"
Summary
Neural networks help computers "think" like people; backpropagation is what lets neural networks learn to do so.
Backpropagation is an algorithm that helps many small parts (neurons) learn, through "supervised learning", to work together to perform tasks more complicated than any of them could achieve individually (i.e. brains).
Though backpropagation mimics the way our brains process information to learn from teachers, parents, and books (i.e. "supervised learning"), it can also be applied to non-neural networks - try to see where you could apply it in your day-to-day life.
Introduction
Conceptually, backpropagation is a rather simple algorithm that uses previous information and experience to teach someone - or something 😜 - how to perform a task.
Using some "broad strokes" here, it looks something like this:
- Given a situation, do something.
- Given the action taken, a "supervisor" will mention what should have been done.
- Given the "right answer", figure out what could have been interpreted - or done - differently to achieve something closer to the "right answer" given the same situation again.
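The "broad strokes" above can be sketched as a tiny training loop. This is a minimal illustration, not the library's implementation; `model.predict` and `model.adjust` are hypothetical names standing in for "do something" and "interpret things differently".

```javascript
// A sketch of the three steps above. `predict` and `adjust` are
// hypothetical stand-ins for whatever the learner actually does.
function trainStep(model, situation, rightAnswer, learningRate = 0.1) {
  // 1. Given a situation, do something.
  const action = model.predict(situation);

  // 2. The "supervisor" says what should have been done; measure the gap.
  const error = rightAnswer - action;

  // 3. Nudge the model so the same situation yields something closer
  //    to the "right answer" next time.
  model.adjust(situation, error * learningRate);

  return error;
}
```

Run the same situation through a few times and the error shrinks - that's the whole game.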
Though it's not the best algorithm, it's a pretty good start - it was one of the first algorithms that allowed us to train computers to do things with data instead of code - i.e. "I need you to [INSERT SUPER BORING, MONOTONOUS, & TIME-CONSUMING JOB HERE] before this Friday's meeting; everything you need to know is in that 34-story building of back-to-back file cabinets; figure it out."
The beauty of backpropagation is that it:

a) could be run on a computer, which is ~1000x faster than a person
b) could - in theory - perform better than any person (not just faster)
c) frees up a person's time - in theory - to focus on harder or newer problems
However, many drawbacks have since come to the surface.
Drawbacks
Neuron
Property | Type | Default | Description |
---|---|---|---|
`bias` | number | `Math.random() * 2 - 1` | |
`output` | number | | Last output from `neuron.activate()` |
`derivative` | number | | Derivative of `output` |
`weights` | Map | `Math.random() * 2 - 1` | Connection weights to/from `incoming` neurons |
`outgoing` | Map | `Map {}` | All target neurons (i.e. outgoing connections) - Map search is O(1) vs. Array search, which is O(n) |
`incoming` | Map | `Map {}` | All targeting neurons (i.e. incoming connections) - Map search is O(1) vs. Array search, which is O(n) |
Functions | Description |
---|---|
`update(target_id)` | Updates the weight of an incoming neuron |
`learn([target])` | Learns from feedback or critiques from outgoing neurons |
`activate([input])` | Fires a signal from input stimulus or incoming neurons |
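Putting the two tables together, a neuron's shape might look something like this. This is a sketch implied by the tables, not the library's actual code; the sigmoid squash and keying the `weights` Map by neuron are assumptions.

```javascript
// A sketch of the neuron shape implied by the property table.
// Assumption: a sigmoid activation, and `weights` keyed by the
// incoming neuron itself (Maps accept object keys).
const sigmoid = x => 1 / (1 + Math.exp(-x));

class Neuron {
  constructor() {
    this.bias = Math.random() * 2 - 1; // default per the table
    this.output = 0;                   // last output from activate()
    this.derivative = 0;               // derivative of output
    this.weights = new Map();          // connection weights
    this.outgoing = new Map();         // target neurons - O(1) lookup
    this.incoming = new Map();         // targeting neurons - O(1) lookup
  }

  // Fires a signal from input stimulus or incoming neurons
  activate(input) {
    const sum = input !== undefined
      ? input // raw environmental stimulus
      : [...this.incoming.values()]
          .reduce((s, n) => s + n.output * this.weights.get(n), this.bias);
    this.output = sigmoid(sum);
    this.derivative = this.output * (1 - this.output);
    return this.output;
  }
}
```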
The following is a proposal for a backpropagation architecture (i.e. execution order and state life-cycle throughout propagation).
Step 1 - Stasis

Without any external stimulus, a neuron should remain exclusively an *in-memory* object; in other words, no functions should execute without (in)direct external stimulus (e.g. a function call).
Step 2 - Incoming Critiques

All external entities - whether other neurons or the environment - have generated critiques of the central neuron and have forwarded, or are ready to forward, those critiques backwards through the network.
Step 3 - Internalizing Critiques

Having received critiques from all external variables, the central neuron can begin to calculate its own blame for the network's inaccuracy in the previous state.
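In standard backpropagation, this "blame" is the weighted sum of the critiques arriving from outgoing neurons, scaled by the neuron's own derivative. A minimal sketch, assuming the neuron shape from the tables above; `target.error` (the outgoing neuron's accepted blame) is a hypothetical field:

```javascript
// A sketch of Step 3: each outgoing neuron's critique is its own error
// scaled by the weight it assigned to this neuron's output.
function internalizeCritiques(neuron) {
  let blame = 0;
  for (const target of neuron.outgoing.values()) {
    // `target.error` is assumed to hold the blame the target accepted
    blame += target.error * target.weights.get(neuron);
  }
  // Scale by how sensitive this neuron's output was; Step 4 then
  // accepts the result as the neuron's own error.
  return blame * neuron.derivative;
}
```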
Step 4 - Accepting Blame

The central neuron accepts the blame from the critiques and establishes it as its own error.
Step 5 - Forwarding Error & Triggering Weight Updates

Using its own error, the central neuron can now allow previous neurons to update their weights and forward its error backwards through the network.
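Tying this step to the function table above, it might look like the following sketch. How `update(target_id)` and `learn([target])` divide the work is my reading of the table, not confirmed by the source:

```javascript
// A sketch of Step 5: with its error fixed, the central neuron triggers
// its own weight updates (Step 6) and passes the critique backwards.
function forwardError(neuron) {
  for (const source of neuron.incoming.values()) {
    neuron.update(source.id); // update this neuron's weight for `source`
    source.learn(neuron);     // `source` begins its own Step 2
  }
}
```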
Step 6 - Updating Weights

As neurons finish interpreting the central neuron's error, its weights get updated.
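The usual rule for this update (assumed here - the source doesn't state one) moves each weight proportionally to the neuron's error and the incoming neuron's last output:

```javascript
// A sketch of Step 6 under the standard delta rule. `learningRate`
// and the `updateWeight` name are assumptions for illustration.
function updateWeight(neuron, source, learningRate = 0.3) {
  const current = neuron.weights.get(source);
  // Descend the error: blame * what the source actually contributed.
  neuron.weights.set(source, current - learningRate * neuron.error * source.output);
}
```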
Step 7 - Completion

At this point the central neuron is stable and updated - now ready for another round.