Extra materials for the ml-mipt course

Prerequisites

  1. [en] Stanford lectures on Probability Theory: link
  2. [en] Matrix calculus notes from Stanford: link (a short worked identity follows this list)
  3. [en] Derivatives notes from Stanford: link
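
The matrix-calculus and derivatives notes above pair well with one worked identity as a self-check. The example below (our own illustration, not taken from the linked notes) derives the gradient of a quadratic form:

```latex
% For x \in \mathbb{R}^n and A \in \mathbb{R}^{n \times n}:
% f(x) = x^\top A x = \sum_{i,j} A_{ij} x_i x_j .
% Differentiating with respect to x_k keeps only the terms containing x_k:
\frac{\partial f}{\partial x_k} = \sum_j A_{kj} x_j + \sum_i A_{ik} x_i
\quad\Longrightarrow\quad
\nabla_x \, x^\top A x = (A + A^\top)\, x ,
% which reduces to 2Ax when A is symmetric.
```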

Basic Machine Learning

  1. [en] The Hundred-Page Machine Learning Book: link
  2. [ru] Excellent lectures by Evgeny Sokolov. Read the PDFs, ideally the most recent year's version: link
  3. [en] Naive Bayesian classifier explained: link (a minimal sketch follows this list)
  4. [en] Stanford notes on linear models: link
  5. [ru] The "handwritten textbook" written by students of our course at FIVT: link
  6. [ru] Vorontsov's lecture notes: link
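
As a companion to the Naive Bayes explanation linked above, here is a minimal Gaussian Naive Bayes sketch in NumPy. The class and method names are our own illustration (mirroring the common fit/predict convention), not code from any of the linked materials:

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes: features are assumed conditionally
    independent given the class and normally distributed per class."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # Per-class prior, feature means, and feature variances.
        self.priors_ = np.array([np.mean(y == c) for c in self.classes_])
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.vars_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        return self

    def predict(self, X):
        # Work in log space: log P(c) + sum_j log N(x_j; mu_cj, var_cj).
        log_prior = np.log(self.priors_)                  # (C,)
        diff = X[:, None, :] - self.means_[None, :, :]    # (N, C, D)
        log_lik = -0.5 * (np.log(2 * np.pi * self.vars_) + diff ** 2 / self.vars_).sum(axis=2)
        return self.classes_[np.argmax(log_prior + log_lik, axis=1)]
```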

Bootstrap and bias-variance decomposition

  1. [en] Detailed description of the bootstrap procedure: link (a minimal resampling sketch follows this list)
  2. [en] Bias-variance tradeoff in a more general case: "A Unified Bias-Variance Decomposition and its Applications", link
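
To make the bootstrap procedure from the first link concrete, here is a minimal sketch of the nonparametric bootstrap for estimating a standard error. The function name, the median statistic, and the exponential toy sample are our own illustrative choices:

```python
import numpy as np

def bootstrap_se(sample, statistic, n_resamples=10_000, seed=0):
    """Estimate the standard error of `statistic` by repeatedly
    resampling the data with replacement (nonparametric bootstrap)."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    estimates = np.empty(n_resamples)
    for b in range(n_resamples):
        # Draw n points with replacement from the original sample.
        estimates[b] = statistic(sample[rng.integers(0, n, size=n)])
    return estimates.std(ddof=1)

# Usage: standard error of the median of a small skewed sample.
data = np.random.default_rng(42).exponential(size=100)
print(bootstrap_se(data, np.median))
```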

Gradient Boosting and Feature importances

  1. [en] Great interactive blog post by Alex Rogozhnikov on Gradient Boosting: http://arogozhnikov.github.io/2016/06/24/gradient_boosting_explained.html (a minimal boosting sketch follows this list)
  2. [en] A great gradient boosted trees playground, also by Alex Rogozhnikov: http://arogozhnikov.github.io/2016/07/05/gradient_boosting_playground.html
  3. [en] SHAP values repo and explanation: https://github.com/slundberg/shap
  4. [en] Kaggle tutorial on feature importances: https://www.kaggle.com/learn/machine-learning-explainability
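
To accompany the interactive posts above, a minimal sketch of gradient boosting for squared loss: each new tree is fit to the current residuals, i.e. the negative gradient of the loss. Function names and hyperparameters are our own illustrative choices:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gbm(X, y, n_trees=100, lr=0.1, max_depth=3):
    """Gradient boosting for squared loss: for this loss the negative
    gradient is simply the residual y - F(x), so each tree fits it."""
    base = y.mean()                  # start from a constant prediction
    F = np.full(len(y), base)
    trees = []
    for _ in range(n_trees):
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, y - F)
        F += lr * tree.predict(X)    # take a small step along the new tree
        trees.append(tree)
    return base, lr, trees

def predict_gbm(model, X):
    base, lr, trees = model
    return base + lr * sum(tree.predict(X) for tree in trees)
```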

Deep Learning

  1. [en] Deep Learning book.
    A classic; it delivers a comprehensive overview of almost all the vital topics in ML and DL. Available online at https://www.deeplearningbook.org
  2. [en] Notes on vector and matrix derivatives: http://cs231n.stanford.edu/vecDerivs.pdf
  3. [en] More notes on matrix derivatives from Stanford: link
  4. [en] Stanford notes on backpropagation: http://cs231n.github.io/optimization-2/
  5. [en] Stanford notes on different activation functions (and just intuition): http://cs231n.github.io/neural-networks-1/
  6. [en] Great post on Medium by Andrej Karpathy: https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b
  7. [en] CS231n notes on data preparation (batch normalization over there): http://cs231n.github.io/neural-networks-2/
  8. [en] CS231n notes on gradient methods: http://cs231n.github.io/neural-networks-3/
  9. [en] Original paper introducing Batch Normalization: https://arxiv.org/pdf/1502.03167.pdf (a minimal forward-pass sketch follows this list)
  10. [en] What Every Computer Scientist Should Know About Floating-Point Arithmetic: https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
  11. [en] The Unreasonable Effectiveness of Recurrent Neural Networks blog post by Andrej Karpathy: http://karpathy.github.io/2015/05/21/rnn-effectiveness/
  12. [en] Understanding LSTM Networks: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
  13. [en] Convolutional Neural Networks: Architectures, Convolution / Pooling Layers: http://cs231n.github.io/convolutional-networks/
  14. [en] Understanding and Visualizing Convolutional Neural Networks: http://cs231n.github.io/understanding-cnn/
  15. [en] Learning-rate (LR) warm-up and other useful tricks: article
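
Since several of the links above discuss batch normalization, here is a minimal NumPy sketch of the training-time forward pass from the Batch Normalization paper (item 9). The learnable scale/shift are passed in explicitly, and the running statistics used at inference time are omitted:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Training-time batch norm over a (batch, features) array: normalize
    each feature to zero mean / unit variance across the batch, then apply
    the learned scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)              # per-feature batch mean
    var = x.var(axis=0)              # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Usage on a random mini-batch with 4 features.
x = np.random.default_rng(0).normal(loc=3.0, scale=2.0, size=(32, 4))
out = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0), out.std(axis=0))  # ~0 and ~1 per feature
```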

Natural Language Processing

  1. [en] Great resource by Lena Voita (direct link to Word Embeddings explanation): https://lena-voita.github.io/nlp_course/word_embeddings.html
  2. [en] Word2vec tutorial: http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/
  3. [en] Beautiful post by Jay Alammar on word2vec: http://jalammar.github.io/illustrated-word2vec/
  4. [en] Blog post about text classification with RNNs and CNNs: https://medium.com/jatana/report-on-text-classification-using-cnn-rnn-han-f0e887214d5f
  5. [en] Convolutional Neural Networks for Sentence Classification: https://arxiv.org/abs/1408.5882
  6. [en] Great blog post by Jay Alammar on the Transformer: https://jalammar.github.io/illustrated-transformer/ (a minimal positional-encoding sketch follows this list)
  7. [en] Great Annotated Transformer article with code and comments by the Harvard NLP group: https://nlp.seas.harvard.edu/2018/04/03/attention.html
  8. [en] Harvard NLP full Transformer implementation in PyTorch
  9. [en] OpenAI blog post Better Language Models and Their Implications (GPT-2)
  10. [en] Paper describing positional encoding "Convolutional Sequence to Sequence Learning"
  11. [en] Paper presenting Layer Normalization
  12. [en] The Illustrated BERT blog post
  13. [en] DistilBERT overview (distillation will be covered later in our course) blog post
  14. [en] Google AI Blog post about open sourcing BERT
  15. [en] One more blog post explaining BERT
  16. [en] Post about GPT-2 on the OpenAI blog (as of 04.10.2019)
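
To make the positional-encoding discussion above concrete, a minimal NumPy sketch of the sinusoidal encoding from "Attention Is All You Need" (note that the Convolutional Sequence to Sequence Learning paper in this list uses learned position embeddings instead):

```python
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    """PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
       PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))"""
    positions = np.arange(max_len)[:, None]            # (max_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]           # (1, d_model // 2)
    angles = positions / (10000 ** (dims / d_model))   # (max_len, d_model // 2)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# Each row is added to the token embedding at that position.
print(sinusoidal_positional_encoding(max_len=50, d_model=8).shape)  # (50, 8)
```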

Graph Neural Networks

  1. [en] Introduction to Graph Neural Networks (a minimal message-passing sketch follows this list)
  2. [en] Great repo with must-read papers on GNN
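
To give the GNN introduction above something runnable, a minimal NumPy sketch of one graph-convolutional (message-passing) layer in the spirit of Kipf & Welling's GCN; the toy graph and random weights are our own illustration:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).
    Each node averages its neighbours' (and its own) features,
    then applies a shared linear map and a nonlinearity."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Usage: a 4-node path graph, 3 input features, 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
print(gcn_layer(A, rng.normal(size=(4, 3)), rng.normal(size=(3, 2))))
```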

Reinforcement Learning

  1. [en] Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto: link