This project is an implementation of the paper "Automatically Designing CNN Architectures Using Genetic Algorithm for Image Classification".
It is an algorithm that is able to fully automatically find an optimal CNN (Convolutional Neural Network) architecture.
There are two main building blocks to this algorithm: the Skip Layer and the Pooling Layer.

Each Skip Layer comprises two paths.
The input first passes through:
- A convolution layer
- A batch normalization layer
- A ReLU activation layer
- Another convolution layer
- Another batch normalization layer
(all of the convolution layers have a kernel size of 3x3 and a stride of 1x1; the number of filters is randomly chosen as a power of 2)
The input also passes through a convolution with a 1x1 kernel and a 1x1 stride, whose number of filters matches the last convolution. This "reshapes" the input in order to allow for element-wise addition.
The two outputs are combined in an add operation and then passed through a ReLU activation function
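Putting the pieces above together, a skip layer could look roughly like the following Keras sketch (the `skip_layer` helper name and the `padding='same'` setting are illustrative assumptions, not taken from the package; 'same' padding keeps the two branches the same spatial shape so the add is valid):

```python
import tensorflow as tf

def skip_layer(inputs, filters):
    # Main path: conv -> batch norm -> ReLU -> conv -> batch norm
    x = tf.keras.layers.Conv2D(filters, kernel_size=(3, 3), strides=(1, 1), padding='same')(inputs)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Activation('relu')(x)
    x = tf.keras.layers.Conv2D(filters, kernel_size=(3, 3), strides=(1, 1), padding='same')(x)
    x = tf.keras.layers.BatchNormalization()(x)

    # Shortcut path: a 1x1 convolution "reshapes" the input so that
    # both branches end up with the same number of filters
    shortcut = tf.keras.layers.Conv2D(filters, kernel_size=(1, 1), strides=(1, 1))(inputs)

    # Element-wise add, then a final ReLU
    x = tf.keras.layers.add([x, shortcut])
    return tf.keras.layers.Activation('relu')(x)
```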
The Pooling Layer is either a Max Pooling or an Average Pooling layer; both the kernel size and the stride are 2x2.
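In Keras terms, picking one of the two pooling variants at random might look like this (a sketch, not the package's actual code):

```python
import random
import tensorflow as tf

# Both variants use a 2x2 kernel and a 2x2 stride, as described above
pooling_layer = random.choice([
    tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)),
    tf.keras.layers.AveragePooling2D(pool_size=(2, 2), strides=(2, 2)),
])
```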
These layer types map to the following TensorFlow implementations:

Layer Type | Layer Documentation
---|---
Convolution | tf.keras.layers.Conv2D |
MaxPooling | tf.keras.layers.MaxPool2D |
AveragePooling | tf.keras.layers.AveragePooling2D |
Activation | tf.keras.layers.Activation |
Add | tf.keras.layers.add |
BatchNormalization | tf.keras.layers.BatchNormalization |
To do this, the algorithm follows these steps:

1. Create a random initial population
2. Evaluate the fitness of the population by training each CNN
3. Generate offspring:
    - Two different CNNs in the population are selected using binary tournament selection (see the sketch after this list)
    - With a certain probability, a crossover between the two parents may happen: each of the two CNNs is split in two, and two new CNNs are created by mixing the parents' "genes"
    - After all the new offspring are created, each one is mutated with a certain probability; a mutation is one of:
        - add a skip layer: increases the complexity and depth of the network
        - add a pooling layer: increases the depth but might decrease the complexity due to the nature of pooling
        - remove a layer: reduces complexity and depth
        - randomize a layer: changes the parameters of a layer (e.g. filter size, max or average pooling)
4. Evaluate the fitness of the offspring
5. From the offspring and the parent population, generate a new population:
    - Until N CNNs have been selected: randomly select two CNNs and add the one with the higher fitness to the new population
    - Check whether the best CNN from the offspring and parent populations was placed in the new population; if it was not, replace the worst CNN in the new population with it
6. Repeat from step 2 until the maximal generation number is reached
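The two selection steps above (parent selection in step 3 and environmental selection in step 5) could be sketched like this; `binary_tournament`, `environmental_selection`, and the `fitness` callable are hypothetical names for illustration, not the package's actual API:

```python
import random

def binary_tournament(population, fitness):
    # Pick two distinct CNNs at random and keep the fitter one
    a, b = random.sample(population, 2)
    return a if fitness(a) > fitness(b) else b

def environmental_selection(parents, offspring, fitness, n):
    combined = parents + offspring
    new_population = []

    # Until N CNNs have been selected, run binary tournaments on the combined pool
    while len(new_population) < n:
        new_population.append(binary_tournament(combined, fitness))

    # Elitism: if the best CNN overall was not selected, it replaces the worst
    best = max(combined, key=fitness)
    if best not in new_population:
        worst = min(new_population, key=fitness)
        new_population[new_population.index(worst)] = best

    return new_population
```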
To install the package:

```bash
pip install auto-cnn
```
A short example that runs the search on MNIST:

```python
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # Removes TensorFlow debugging outputs

import tensorflow as tf

tf.get_logger().setLevel('INFO')  # Removes TensorFlow debugging outputs

from auto_cnn.gan import AutoCNN

import random

# Sets the random seeds to make testing more consistent
random.seed(42)
tf.random.set_seed(42)


def mnist_test():
    # Loads the data as train and test sets
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

    # Puts the data in a dictionary for the algorithm to use
    data = {'x_train': x_train, 'y_train': y_train, 'x_test': x_test, 'y_test': y_test}

    # Sets the wanted parameters
    a = AutoCNN(population_size=5, maximal_generation_number=4, dataset=data, epoch_number=5)

    # Runs the algorithm until the maximal_generation_number has been reached
    best_cnn = a.run()
    print(best_cnn)


if __name__ == '__main__':
    mnist_test()
```
Given these parameters, this is the structure that the algorithm chose:

```
CNN: 128-64
Score (Test Accuracy): 0.9799000024795532
```
If you have any ideas for improving the performance, adding more customization options, or correcting mistakes, please make a pull request or create an issue. I'd be happy to accept any contributions!