guptakhil/Deep-Learning-UIUC



Course Information:

Overview:

This course is an introduction to deep learning. Topics include convolutional neural networks, recurrent neural networks, and deep reinforcement learning. Homeworks cover image classification, video recognition, and deep reinforcement learning, with deep learning models trained using PyTorch. A large amount of GPU resources is provided to the class. Mathematical analysis of neural networks, reinforcement learning, and stochastic gradient descent algorithms will also be covered in lectures.

Topics:

  • Fully-connected and feedforward networks
  • Convolution networks
  • Backpropagation
  • Stochastic Gradient Descent
  • Hyperparameter selection and parameter initialization
  • Optimization algorithms (RMSprop, ADAM, momentum, etc.)
  • Second-order optimization (e.g., Hessian-free optimization)
  • TensorFlow, PyTorch, automatic differentiation, static versus dynamic graphs, define-by-run
  • Regularization (L2 penalty, dropout, ensembles, data augmentation techniques)
  • Batch normalization
  • Residual neural networks
  • Recurrent neural networks (LSTM and GRU networks)
  • Video recognition (two-stream convolution network, 3D convolution networks, convolution networks combined with LSTM, optical flow)
  • Generative Adversarial Networks
  • Deep reinforcement learning (Q-learning, actor-critic, policy gradient, experience replay, double Q-learning, deep bootstrap networks, generalized advantage estimation, dueling network, continuous control, Atari games, AlphaGo)
  • Distributed training of deep learning models (e.g., asynchronous stochastic gradient descent)
  • Theory of deep learning (universal approximation theorem, convergence rate, and recent mathematical results)
  • Convergence analysis of stochastic gradient descent, policy gradient, tabular Q-learning
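Two of the optimization updates listed above (plain gradient descent and the momentum / heavy-ball variant) can be sketched in a few lines of NumPy. This is an illustrative toy on a least-squares objective, not course material; the matrix sizes, step size, and momentum coefficient are arbitrary choices.

```python
import numpy as np

# Toy objective: f(w) = ||A w - b||^2, with gradient 2 A^T (A w - b).
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

def loss(w):
    r = A @ w - b
    return r @ r

def grad(w):
    return 2 * A.T @ (A @ w - b)

lr = 0.01

# Plain gradient descent: w <- w - lr * grad(w)
w = np.zeros(5)
for _ in range(200):
    w -= lr * grad(w)

# Momentum (heavy-ball): v <- beta * v + grad(w); w <- w - lr * v
w_m, v, beta = np.zeros(5), np.zeros(5), 0.9
for _ in range(200):
    v = beta * v + grad(w_m)
    w_m -= lr * v

print(loss(np.zeros(5)), loss(w), loss(w_m))
```

On this well-conditioned toy problem both runs drive the loss well below its starting value; with the same step size, momentum typically makes more progress in the same number of iterations.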

Prerequisites

CS 446 (or equivalent). Python. Basic statistics, probability, and optimization. Basic knowledge of Bash/Linux is recommended.

Grading

  • 35% Homeworks
  • 35% Midterm
  • 30% Final Project

Instructors

TAs:

All copyrights reserved © IE534 Instructors & TAs

Homeworks

  • HW 1 : Train a single-layer neural network from scratch in Python using NumPy on the MNIST dataset.
  • HW 2 : Train a convolutional neural network with multiple channels in Python using NumPy on the MNIST dataset.
  • HW 3 : Train a deep convolutional network on a GPU with PyTorch on the CIFAR10 dataset.
  • HW 4 : Implement a deep residual neural network for CIFAR100.
  • HW 5 : Natural Language Processing A.
  • HW 6 : Natural Language Processing B.
  • HW 7 : Generative Adversarial Networks (GANs).
  • HW 8 : Deep reinforcement learning on Atari games.
  • HW 9 : Video Recognition I.
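To give a flavor of the "from scratch" style of the first homeworks, here is a minimal NumPy sketch of a one-hidden-layer network trained with backpropagation and gradient descent. It uses a tiny synthetic two-class dataset in place of MNIST, and the layer sizes, learning rate, and iteration count are illustrative choices, not the actual assignment.

```python
import numpy as np

# Synthetic, linearly separable two-class data standing in for MNIST.
rng = np.random.default_rng(1)
n, d, h, k = 200, 4, 16, 2
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy labels, not real digits

W1 = rng.normal(scale=0.5, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.5, size=(h, k)); b2 = np.zeros(k)
lr = 1.0

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(1000):
    # forward pass: ReLU hidden layer, softmax output
    a = np.maximum(X @ W1 + b1, 0)
    p = softmax(a @ W2 + b2)
    # backward pass: gradient of mean cross-entropy w.r.t. the logits
    dz = p.copy(); dz[np.arange(n), y] -= 1; dz /= n
    dW2 = a.T @ dz; db2 = dz.sum(axis=0)
    da = dz @ W2.T * (a > 0)              # backprop through ReLU
    dW1 = X.T @ da; db1 = da.sum(axis=0)
    # gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

acc = (np.argmax(np.maximum(X @ W1 + b1, 0) @ W2 + b2, axis=1) == y).mean()
print(f"toy training accuracy: {acc:.2f}")
```

The actual homeworks add the pieces this sketch omits: loading real data, mini-batching, and (for HW 2 onward) convolutional layers.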

Project Details

Choose a research paper from the given list and implement it, making a substantial contribution.
To be done in groups of 4-6 members.

The report for the Show and Tell paper is available here.

Additional Resources