The purpose of this project is to demystify the internal CNN mechanisms of forward and backward propagation.
To this end, the CNN model is implemented with plain NumPy functions and trained on the Keras MNIST dataset.
The idea was to (approximately) replicate the following TensorFlow/Keras pipeline:
model = keras.Sequential()
model.add(
    keras.layers.Conv2D(filters=32, kernel_size=(3, 3),
                        activation="relu", input_shape=(28, 28, 1))
)
model.add(
    keras.layers.Conv2D(filters=64, kernel_size=(3, 3),
                        activation="relu")
)
model.add(keras.layers.MaxPool2D(pool_size=(2, 2)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(128, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(
    loss='categorical_crossentropy',
    optimizer=keras.optimizers.Adadelta(learning_rate=1),
    metrics=['accuracy']
)
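For comparison, here is a minimal NumPy sketch of the forward pass for this architecture, following the same no-bias, single-image setup as the implementation. The helper names (`conv2d`, `maxpool2x2`, `forward`) and the random weight initialization are illustrative, not the exact functions used in this repository:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def softmax(x):
    e = np.exp(x - x.max())          # shift by max for numerical stability
    return e / e.sum()

def conv2d(image, kernels):
    # image: (H, W, C_in), kernels: (kh, kw, C_in, C_out) -> "valid" convolution
    H, W, _ = image.shape
    kh, kw, _, c_out = kernels.shape
    out = np.zeros((H - kh + 1, W - kw + 1, c_out))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw, :]                 # (kh, kw, C_in)
            out[i, j, :] = np.tensordot(patch, kernels, axes=3)  # (C_out,)
    return out

def maxpool2x2(x):
    # non-overlapping 2x2 max pooling; assumes even spatial dimensions
    H, W, C = x.shape
    return x.reshape(H // 2, 2, W // 2, 2, C).max(axis=(1, 3))

def forward(image, params):
    k1, k2, w1, w2 = params
    x = relu(conv2d(image, k1))   # (26, 26, 32)
    x = relu(conv2d(x, k2))       # (24, 24, 64)
    x = maxpool2x2(x)             # (12, 12, 64)
    x = x.reshape(-1)             # (9216,)
    x = relu(x @ w1)              # (128,)
    return softmax(x @ w2)        # (10,) class probabilities

rng = np.random.default_rng(0)
params = (
    rng.normal(0, 0.1, (3, 3, 1, 32)),        # conv1 kernels
    rng.normal(0, 0.1, (3, 3, 32, 64)),       # conv2 kernels
    rng.normal(0, 0.1, (12 * 12 * 64, 128)),  # dense1 weights (no bias)
    rng.normal(0, 0.1, (128, 10)),            # dense2 weights (softmax output)
)
probs = forward(rng.normal(0, 1, (28, 28, 1)), params)  # sums to 1
```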
- The model uses a plain gradient descent optimizer, i.e. a constant learning rate (see the update sketch after this list)
- For now the model architecture is static, so reordering layers is only possible to a limited extent
- Handles one image at a time (batch size of 1)
- Does not include bias terms
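Below is a minimal sketch of the constant-learning-rate update mentioned above, shown for the output (softmax) layer only and assuming one-hot labels with categorical cross-entropy; the function name `sgd_step_output_layer` and the learning rate are illustrative, not this repository's exact code:

```python
import numpy as np

def sgd_step_output_layer(w2, hidden, probs, y_true, lr=0.01):
    # For softmax + cross-entropy the gradient w.r.t. the pre-activation
    # simplifies to (probs - y_true).
    delta = probs - y_true              # (10,)  gradient w.r.t. logits
    grad_w2 = np.outer(hidden, delta)   # (128, 10)  gradient w.r.t. weights
    return w2 - lr * grad_w2            # constant-learning-rate update

# Toy usage with dummy values
hidden = np.random.rand(128)            # last hidden-layer activations
probs = np.full(10, 0.1)                # softmax output
y_true = np.eye(10)[3]                  # one-hot label for class 3
w2_new = sgd_step_output_layer(np.zeros((128, 10)), hidden, probs, y_true)
```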
The model was initially run on 625 training samples (500 for actual training and 125 for validation) for 10 epochs. The plots below show the loss and accuracy history for this approach. The model clearly overfits due to the small sample size, but the loss visibly decreases from epoch to epoch.
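For reference, the two quantities tracked in those plots can be computed as follows (a minimal sketch assuming one-hot labels and softmax outputs, not the repository's exact code):

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-12):
    # y_true: one-hot labels, y_pred: softmax probabilities, both (n_samples, 10)
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1))

def accuracy(y_true, y_pred):
    # fraction of samples where the most probable class matches the label
    return np.mean(y_true.argmax(axis=1) == y_pred.argmax(axis=1))
```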
- Optimization by compiling to machine code (Numba / Cython), see the sketch after this list
- Adding Dropout layers
- Making one universal convolution function with adjustable parameters
- Including bias terms
- Supporting batch sizes greater than 1
- Easier modification of parameters such as the number of hidden layers or the number of convolution / max-pool blocks
- Further optimization of the existing functions
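As an illustration of the first point, the explicit convolution loops could be compiled to machine code with Numba's `@njit` decorator. This is a sketch under the assumption of a valid (no-padding) convolution, not code from this repository:

```python
import numpy as np
from numba import njit

@njit(cache=True)
def conv2d_valid(image, kernels):
    # Explicit loops are compiled by Numba, removing Python-level overhead.
    H, W, c_in = image.shape
    kh, kw, _, c_out = kernels.shape
    out = np.zeros((H - kh + 1, W - kw + 1, c_out))
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            for co in range(c_out):
                acc = 0.0
                for di in range(kh):
                    for dj in range(kw):
                        for ci in range(c_in):
                            acc += image[i + di, j + dj, ci] * kernels[di, dj, ci, co]
                out[i, j, co] = acc
    return out

out = conv2d_valid(np.random.rand(28, 28, 1), np.random.rand(3, 3, 1, 32))
```

Cython could achieve a similar speed-up through static typing and ahead-of-time compilation of the same loops.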
- Mathematics behind backpropagation in CNN [medium]
- Convolution derivation [medium]
- Max Pool derivation [medium]
- Block shaped matrices [stack]
MIT License | Copyright (c) 2021 Jan Androsiuk