
CIFAR-10-Dataset-using-Flask-app

Implemented a CIFAR-10 image classifier that achieves 90.7% accuracy on the test set, and built a Flask app to serve its predictions.

CIFAR-10 is an established computer-vision dataset used for object recognition. It is a subset of the 80 Million Tiny Images dataset and consists of 60,000 32x32 color images, each labeled with one of 10 object classes (6,000 images per class).
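As a rough sketch of how the Flask app mentioned above might serve predictions (the route name, model filename, and preprocessing are assumptions for illustration, not the repository's exact code):

```python
# A minimal, hypothetical Flask endpoint for serving predictions;
# the route, model path, and preprocessing are assumed, not this repo's code.
import numpy as np
from flask import Flask, request, jsonify
from tensorflow.keras.models import load_model
from PIL import Image

app = Flask(__name__)
model = load_model("cifar10_model.h5")  # assumed filename
CLASSES = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]

@app.route("/predict", methods=["POST"])
def predict():
    # Expect an uploaded image file; resize to CIFAR-10's 32x32 input size.
    img = Image.open(request.files["image"]).convert("RGB").resize((32, 32))
    x = np.asarray(img, dtype="float32")[None] / 255.0
    probs = model.predict(x)[0]
    return jsonify({"label": CLASSES[int(probs.argmax())],
                    "confidence": float(probs.max())})

if __name__ == "__main__":
    app.run(debug=True)
```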

CONCEPTS INVOLVED:

DATA AUGMENTATION: Data augmentation is a strategy that enables practitioners to significantly increase the diversity of data available for training models, without collecting new data. Data augmentation techniques such as cropping, padding, and horizontal flipping are commonly used to train large neural networks.
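A minimal sketch of applying such augmentations on the fly, assuming a Keras/TensorFlow setup (the exact transforms used in this project may differ):

```python
# On-the-fly augmentation with Keras; shifts approximate crop-and-pad,
# and horizontal_flip mirrors the image left-right.
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.preprocessing.image import ImageDataGenerator

(x_train, y_train), _ = cifar10.load_data()
x_train = x_train.astype("float32") / 255.0

datagen = ImageDataGenerator(
    width_shift_range=0.1,   # horizontal shift up to 10% of width
    height_shift_range=0.1,  # vertical shift up to 10% of height
    horizontal_flip=True,    # random left-right flip
)

# Each call to the generator yields a freshly augmented batch.
batch_x, batch_y = next(datagen.flow(x_train, y_train, batch_size=32))
print(batch_x.shape)  # (32, 32, 32, 3)
```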

NEURAL NETWORKS: Neural networks generally do not need to be programmed with specific rules that define what to expect from the input. Instead, the learning algorithm learns from processing many labeled examples (i.e. data with "answers") supplied during training, using this answer key to discover which characteristics of the input are needed to construct the correct output. Once enough examples have been processed, the network can process new, unseen inputs and return accurate results. The more examples, and the more varied the inputs, the more accurate the results typically become, because the network learns with experience.
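Concretely, learning from labeled examples amounts to fitting a model on (input, label) pairs. A tiny illustrative sketch with synthetic data (not the network used in this project):

```python
# Supervised training on labeled examples, illustrated on synthetic data.
import numpy as np
from tensorflow.keras import layers, models

# 100 random "images" with labels 0-9 acting as the answer key.
x = np.random.rand(100, 32, 32, 3).astype("float32")
y = np.random.randint(0, 10, size=(100,))

model = models.Sequential([
    layers.Flatten(input_shape=(32, 32, 3)),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),  # one output per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=1, verbose=0)  # weights are adjusted from the labels
```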

CONVOLUTIONS IN NEURAL NETWORK: Convolution layers are the major building blocks used in convolutional neural networks. A convolution is the simple application of a filter to an input that results in an activation. Repeated application of the same filter to an input results in a map of activations called a feature map, indicating the locations and strength of a detected feature in an input, such as an image. The innovation of convolutional neural networks is the ability to automatically learn many features in parallel specific to a training dataset under the constraints of a specific predictive modeling problem, such as image classification. The result is highly specific features that can be detected anywhere on input images.
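As a sketch, applying a single convolution layer with 32 filters to one CIFAR-sized image yields 32 feature maps (the filter count and kernel size here are assumed for illustration):

```python
# One Conv2D layer produces one feature map per filter; hyperparameters
# here are assumptions, not this project's settings.
import numpy as np
from tensorflow.keras import layers

image = np.random.rand(1, 32, 32, 3).astype("float32")  # batch of one image

conv = layers.Conv2D(filters=32, kernel_size=(3, 3),
                     padding="same", activation="relu")
feature_maps = conv(image)

# 32 feature maps, each the same height/width as the input ("same" padding).
print(feature_maps.shape)  # (1, 32, 32, 32)
```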

MAX POOLING: A pooling layer is another building block of a CNN. Its function is to progressively reduce the spatial size of the representation, which reduces the number of parameters and the amount of computation in the network. A pooling layer operates on each feature map independently. The most common approach is max pooling.
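For intuition, a 2x2 max pool keeps only the largest value in each non-overlapping window, halving the height and width. A plain-NumPy sketch:

```python
# 2x2 max pooling on a single-channel feature map, in plain NumPy.
import numpy as np

fmap = np.array([[1, 3, 2, 4],
                 [5, 6, 1, 2],
                 [7, 2, 9, 1],
                 [3, 4, 1, 8]], dtype=np.float32)

h, w = fmap.shape
# Group into non-overlapping 2x2 windows, then take the max of each window.
pooled = fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

print(pooled)
# [[6. 4.]
#  [7. 9.]]
```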

BATCH-NORMALIZATION: Batch normalization (also known as batch norm) is a technique for improving the speed, performance, and stability of artificial neural networks. It normalizes a layer's inputs by re-centering and re-scaling them over each mini-batch.
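The core per-mini-batch operation, sketched in NumPy (gamma and beta are learned scale and shift parameters in a real network; fixed constants here):

```python
# The re-centering and re-scaling at the heart of batch norm,
# applied to one mini-batch of activations.
import numpy as np

x = np.random.rand(32, 64).astype(np.float32)  # batch of 32, 64 features
gamma, beta, eps = 1.0, 0.0, 1e-5              # gamma/beta are learned in practice

mean = x.mean(axis=0)                    # per-feature mean over the batch
var = x.var(axis=0)                      # per-feature variance over the batch
x_hat = (x - mean) / np.sqrt(var + eps)  # re-center and re-scale
out = gamma * x_hat + beta               # learned affine transform

print(out.mean(axis=0)[:3], out.std(axis=0)[:3])  # ~0 means, ~1 stds
```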

DROPOUTS: Dropout is a technique where randomly selected neurons are ignored during training: they are "dropped out" at random. This means their contribution to the activation of downstream neurons is temporarily removed on the forward pass, and no weight updates are applied to those neurons on the backward pass.
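A sketch of the training-time mechanics in NumPy, using the "inverted dropout" convention most frameworks follow (surviving activations are rescaled so the expected value is unchanged):

```python
# Dropout at training time: zero out random activations and rescale
# the survivors ("inverted dropout").
import numpy as np

rng = np.random.default_rng(0)
activations = np.ones((8,), dtype=np.float32)
rate = 0.5  # probability of dropping each neuron

mask = rng.random(activations.shape) >= rate  # True for surviving neurons
dropped = activations * mask / (1.0 - rate)   # rescale to keep expected value

print(dropped)  # roughly half the entries are zeroed
```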

DEEP LEARNING ARCHITECTURE

[Architecture diagram: see the screenshot in the repository.]
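The exact architecture is shown in the repository's screenshot. As a hedged sketch only, a CIFAR-10 CNN combining the concepts above (convolution blocks, batch normalization, max pooling, dropout) typically looks like the following; every layer size here is an assumption, not the network this project trained:

```python
# A hypothetical CIFAR-10 CNN combining the concepts above; layer sizes
# are assumptions, not the exact architecture used in this repository.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),

    # Convolution block 1: learn low-level features, normalize, downsample.
    layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),   # 32x32 -> 16x16
    layers.Dropout(0.25),

    # Convolution block 2: higher-level features.
    layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),   # 16x16 -> 8x8
    layers.Dropout(0.25),

    # Classifier head.
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),  # 10 CIFAR-10 classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```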
