Shitao-zz/machine-learning-notes

Video tutorials for these notes

  • I recorded about 20% of these notes as videos in 2015, in Mandarin (all my notes and writings are in English). You can find them on YouTube and Youku (优酷).

  • I am always looking for high-quality PhD students in machine learning, covering both probabilistic models and deep learning models. Contact me at YiDa.Xu@uts.edu.au

Data Science

Three perspectives on machine learning and data science: supervised vs. unsupervised learning, and classification accuracy

Classification: logistic and softmax regression; Regression: linear and polynomial; Mixed-effects models
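
To make the classification side concrete, here is a minimal NumPy sketch of softmax regression trained by gradient descent on made-up toy data (an illustration, not code from the notes):

```python
# Minimal softmax (multinomial logistic) regression by gradient descent.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy data: three Gaussian blobs in 2-D, one per class.
X = np.vstack([rng.normal(m, 0.5, size=(50, 2)) for m in ([0, 0], [3, 0], [0, 3])])
y = np.repeat([0, 1, 2], 50)
Y = np.eye(3)[y]                            # one-hot labels

W = np.zeros((2, 3))
b = np.zeros(3)
for _ in range(500):
    P = softmax(X @ W + b)                  # predicted class probabilities
    G = (P - Y) / len(X)                    # gradient of cross-entropy w.r.t. logits
    W -= 0.5 * (X.T @ G)
    b -= 0.5 * G.sum(axis=0)

print("accuracy:", (softmax(X @ W + b).argmax(axis=1) == y).mean())
```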

Collaborative filtering, factorization machines, non-negative matrix factorisation (NMF) and the multiplicative update rule
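
For a flavour of the multiplicative update rule, a minimal NumPy sketch of Lee and Seung's updates for NMF under the squared-error objective (illustrative only, with random toy data):

```python
# Non-negative matrix factorisation via multiplicative updates:
# V ≈ W @ H with W, H >= 0, minimising the squared reconstruction error.
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((20, 30))           # data matrix, must be non-negative
k = 5                              # latent rank (assumed here)
W = rng.random((20, k))
H = rng.random((k, 30))

eps = 1e-9                         # avoid division by zero
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print("reconstruction error:", np.linalg.norm(V - W @ H))
```

The updates never subtract, so non-negativity is preserved automatically; this is what makes the multiplicative rule attractive compared with projected gradient steps.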

These are the slides I use when talking to industry about AI, ML and DL

Classic PCA and t-SNE
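
A minimal sketch of classic PCA via the covariance eigendecomposition (with made-up toy data; not code from the notes):

```python
# Classic PCA: centre the data, eigendecompose the sample covariance,
# project onto the top principal directions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # toy data: 200 samples, 5 features

Xc = X - X.mean(axis=0)                       # centre the data
C = (Xc.T @ Xc) / (len(X) - 1)                # sample covariance
vals, vecs = np.linalg.eigh(C)                # eigh since C is symmetric
order = np.argsort(vals)[::-1]                # sort by decreasing variance
components = vecs[:, order[:2]]               # top-2 principal directions

Z = Xc @ components                           # 2-D projection of the data
print("explained variance ratio:", vals[order[:2]] / vals.sum())
```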

Deep Learning (Jupyter-style notes coming in 2018)

Optimisation methods in general, not limited to deep learning
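
As a concrete example of one such method, a small sketch of gradient descent with momentum on a toy ill-conditioned quadratic (step size and momentum values assumed purely for illustration):

```python
# Gradient descent with momentum on f(x) = x^T A x / 2.
import numpy as np

A = np.diag([1.0, 10.0])                 # ill-conditioned quadratic
grad = lambda x: A @ x

x, v = np.array([5.0, 5.0]), np.zeros(2)
lr, beta = 0.05, 0.9                     # assumed hyperparameters
for _ in range(200):
    v = beta * v + grad(x)               # velocity accumulates past gradients
    x = x - lr * v                       # momentum step

print("minimiser estimate:", x.round(4)) # true minimiser is the origin
```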

Basic neural networks and the multilayer perceptron

A detailed explanation of CNNs, various loss functions (centre loss, contrastive loss), residual networks, YOLO and SSD

Other deep learning models, including RNNs, GANs and RBMs

Basics of reinforcement learning: Markov decision processes and the Bellman equation, moving on to deep Q-learning
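
As a rough illustration of the Bellman update, a tabular Q-learning sketch on a toy 5-state chain (the environment and hyperparameters are invented for this example):

```python
# Tabular Q-learning on a 5-state chain: actions move left/right,
# reward 1 for reaching the rightmost state.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for _ in range(500):
    s = 0
    while s != n_states - 1:          # episode ends at the rightmost state
        explore = rng.random() < epsilon or Q[s, 0] == Q[s, 1]
        a = rng.integers(n_actions) if explore else int(Q[s].argmax())
        s2 = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Bellman update: move Q(s,a) towards r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1))   # learned policy: go right (1) in every non-terminal state
```

Deep Q-learning replaces the table Q with a neural network and fits the same target r + gamma * max Q(s', a') by regression.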

Probability and Statistics Background

A revision of Bayesian models, including the Bayesian predictive distribution and conditional expectation

Some useful distributions, conjugacy, MLE, MAP, the exponential family and natural parameters
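
To see conjugacy, MLE and MAP side by side, a tiny Beta-Bernoulli sketch (the Beta(2, 2) prior and the coin data are assumed purely for illustration):

```python
# Conjugacy in action: Beta prior + Bernoulli likelihood -> Beta posterior,
# comparing the MLE with the MAP estimate of the coin bias.
import numpy as np

data = np.array([1, 1, 1, 0, 1, 1, 0, 1])          # 6 heads, 2 tails
a, b = 2.0, 2.0                                     # assumed Beta(2, 2) prior

heads, tails = data.sum(), len(data) - data.sum()
mle = heads / len(data)                             # argmax of the likelihood
map_ = (heads + a - 1) / (len(data) + a + b - 2)    # mode of Beta(a+heads, b+tails)

print(f"MLE = {mle:.3f}, MAP = {map_:.3f}")         # the prior shrinks MAP towards 0.5
```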

Useful statistical properties to help us prove things, including the Chebyshev and Markov inequalities
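
A quick empirical sanity check of Markov's inequality, P(X >= a) <= E[X]/a for non-negative X, on simulated data (illustrative only):

```python
# Markov's inequality checked against Monte Carlo estimates.
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)    # non-negative X with E[X] = 2
for a in (1.0, 2.0, 5.0, 10.0):
    print(f"a={a}: P(X>=a) = {(x >= a).mean():.4f} <= E[X]/a = {x.mean() / a:.4f}")
```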

Probabilistic Model

A proof of convergence for EM, with examples of EM via the Gaussian mixture model
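
A compact EM sketch for a two-component 1-D Gaussian mixture (toy data; not code from the notes):

```python
# EM for a two-component 1-D Gaussian mixture model.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

pi, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: responsibilities r[n, k] proportional to pi_k * N(x_n | mu_k, var_k)
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = pi * dens
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances from the responsibilities
    Nk = r.sum(axis=0)
    pi = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk

print(pi.round(2), mu.round(2), var.round(2))
```

Each iteration provably does not decrease the log-likelihood, which is the heart of the convergence proof.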

A detailed explanation of the Kalman filter and the hidden Markov model
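
A minimal 1-D Kalman filter sketch, assuming a known constant velocity and made-up noise variances (illustrative only):

```python
# 1-D Kalman filter: track a constant-velocity position from noisy readings.
import numpy as np

rng = np.random.default_rng(0)
true_x = np.cumsum(np.full(50, 1.0))          # object moves +1 per step
z = true_x + rng.normal(0, 2.0, 50)           # noisy measurements

x, P = 0.0, 1.0                                # state estimate and its variance
Q, R = 0.01, 4.0                               # process and measurement noise (assumed)
for zt in z:
    x, P = x + 1.0, P + Q                      # predict: known velocity of +1
    K = P / (P + R)                            # Kalman gain
    x, P = x + K * (zt - x), (1 - K) * P       # update with the measurement

print("final estimate:", round(x, 2), "truth:", true_x[-1])
```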

Inference

Variational Bayes for both non-exponential and exponential family distributions, plus stochastic variational inference

Stochastic matrices, the power method convergence theorem, detailed balance and the PageRank algorithm
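
To tie the power method to PageRank, a small sketch on an invented 4-page link graph (damping factor 0.85 assumed):

```python
# PageRank by power iteration on a tiny link graph.
import numpy as np

# links[i] = pages that page i links to
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, d = 4, 0.85                     # number of pages, damping factor

# Column-stochastic transition matrix: P[j, i] = 1/outdegree(i) if i -> j
P = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        P[j, i] = 1.0 / len(outs)

G = d * P + (1 - d) / n            # "Google matrix": damped random surfer
r = np.full(n, 1.0 / n)
for _ in range(100):               # power method: r converges to the
    r = G @ r                      # dominant eigenvector (eigenvalue 1)

print(r.round(3), "sum =", r.sum().round(3))
```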

Inverse-CDF sampling, rejection sampling, adaptive rejection sampling and importance sampling
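
A minimal sketch of inverse-CDF sampling, using the exponential distribution since its CDF inverts in closed form:

```python
# Inverse-CDF sampling: draw Exponential(lam) samples from uniforms.
# If U ~ Uniform(0,1) and F(x) = 1 - exp(-lam * x), then F^{-1}(U) = -log(1-U)/lam.
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
u = rng.random(100_000)
x = -np.log(1 - u) / lam           # apply the inverse CDF to each uniform draw

print("sample mean:", x.mean().round(3), "(theory: 1/lam =", 1 / lam, ")")
```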

Metropolis-Hastings, Gibbs sampling, slice sampling, elliptical slice sampling and Swendsen-Wang, with collapsed Gibbs demonstrated using LDA
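
As a flavour of M-H, a random-walk Metropolis-Hastings sketch targeting a standard normal (the proposal scale is assumed for illustration):

```python
# Random-walk Metropolis-Hastings targeting a standard normal density.
import numpy as np

rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * x * x      # log N(0,1) up to an additive constant

x, samples = 0.0, []
for _ in range(50_000):
    prop = x + rng.normal(0, 1.0)        # symmetric random-walk proposal
    # accept with probability min(1, target(prop) / target(x))
    if np.log(rng.random()) < log_target(prop) - log_target(x):
        x = prop
    samples.append(x)

samples = np.array(samples[5_000:])      # discard burn-in
print("mean:", samples.mean().round(3), "std:", samples.std().round(3))
```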

Sequential Monte Carlo, the Condensation filter algorithm and the auxiliary particle filter
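
A minimal bootstrap (sequential importance resampling) particle filter sketch on a toy 1-D model (all noise levels invented for illustration):

```python
# Bootstrap particle filter for a 1-D random-walk state observed in Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 1000
x_true = np.cumsum(rng.normal(0, 1, T))          # latent random walk
z = x_true + rng.normal(0, 2.0, T)               # noisy observations

particles = rng.normal(0, 1, N)
for zt in z:
    particles += rng.normal(0, 1, N)             # propagate through the dynamics
    w = np.exp(-0.5 * (zt - particles) ** 2 / 4.0)   # observation likelihood (var 4)
    w /= w.sum()
    # resample: particles with high weight survive, others die out
    particles = particles[rng.choice(N, size=N, p=w)]

print("filtered mean:", particles.mean().round(2), "truth:", x_true[-1].round(2))
```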

Advanced Probabilistic Model

Dirichlet Process (DP), hierarchical DP, HDP-HMM and slice sampling for the DP
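
A quick sketch of the DP through its Chinese restaurant process representation (concentration alpha = 1 assumed):

```python
# Chinese restaurant process: customer n+1 joins an existing table k with
# probability n_k / (n + alpha), or opens a new table with alpha / (n + alpha).
import numpy as np

rng = np.random.default_rng(0)
alpha, n = 1.0, 500
counts = []                                   # customers per table
for i in range(n):
    probs = np.array(counts + [alpha]) / (i + alpha)
    k = rng.choice(len(probs), p=probs)
    if k == len(counts):
        counts.append(1)                      # open a new table
    else:
        counts[k] += 1

print(len(counts), "tables for", n, "customers; largest:",
      sorted(counts, reverse=True)[:5])
```

The "rich get richer" table counts are exactly the cluster sizes a DP mixture induces, which is why the CRP is the standard entry point for DP samplers.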

The details of the determinantal point process (DPP): its marginal distribution, the L-ensemble, its sampling strategy, and our work on time-varying DPPs

About

This repository contains my past machine learning notes
