<!DOCTYPE html>
<html>
<head>
<meta charset='utf-8' />
<meta http-equiv="X-UA-Compatible" content="chrome=1" />
<meta name="description" content="" />
<link rel="stylesheet" type="text/css" media="screen" href="stylesheets/stylesheet.css">
<title>Foundations of Machine Learning and AI</title>
</head>
<body>
<!-- HEADER -->
<div id="header_wrap" class="outer">
<header class="inner">
<a id="forkme_banner" href="https://github.com/tevgeniou/FoundationsML">View on GitHub</a>
<h2 id="project_title">INSEAD PhD Course:
<br>
Foundations of Machine Learning and AI </h2>
<h2 id="project_tagline"> </h2>
<p style="text-align: justify;">
<h4 id="project_tagline"> <a href="http://faculty.insead.edu/theodoros-evgeniou/" target="_blank"> T. Evgeniou </a>
<br>
Professor of Decision Sciences and Technology Management, INSEAD
</h4>
<p style="text-align: justify;">
<h4 id="project_tagline"> <a href="http://nvayatis.perso.math.cnrs.fr/" target="_blank">N. Vayatis</a>
<br>
Professor, Ecole Normale Superieure Paris-Saclay
</h4>
</header>
</div>
<!-- MAIN CONTENT -->
<div id="main_content_wrap" class="outer">
<section id="main_content" class="inner">
<br>
<hr>
<br>
<p style="text-align: justify;"> <em>"Another thing I must point out is that you cannot prove a vague theory wrong.
[...] Also, if the process of computing the consequences is indefinite, then with a
little skill any experimental result can be made to look like the expected
consequences."</em></p>
<p style="text-align: justify; padding-left: 120px;">Richard Feynman </p>
<p style="text-align: justify; "> <em>"I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk."</em></p>
<p style="text-align: justify; padding-left: 120px;">Enrico Fermi </p>
<br>
<h3><strong>Course Description</strong></h3>
<p style="text-align: justify;">AI and Machine Learning have become central topics of discussion in the popular press after being developed for over 50 years in Academia - by computer scientists and, in more recent years, by mathematicians and statisticians. These fields are expected to have a major impact in potentially every aspect of research as well as business: from basic science fields such as life sciences, to Decision Sciences, Finance, but also areas like Sociology, Economics, and other Social Sciences.
</p>
<p style="text-align: justify;">However, while one can be a "reasonable" user of some popular machine learning and AI methods, gaining an edge in terms of innovation in research and practice but also taking full advantage of the capabilities offered by these technologies requires a more fundamental understanding of the principles behind these booming fields.
</p>
<p style="text-align: justify;">The goal of this course is to: </p>
<ul>
<li>Provide the foundations of Machine Learning and AI, so that students can better understand these methods, use them, and potentially develop custom methods of their own that they can use to advance their respective fields;
<li>Provide an overview of some of the most important machine learning methods used in research and practice;
<li>Provide students not only with a historical perspective on these fields, but also with a view of state-of-the-art methodologies and research advances, as well as views on future directions;
<li>Help students use machine learning methods appropriately in their research fields, with the aim of developing insights that are only feasible through the use of these new "microscopes".
</ul>
<p style="text-align: justify;"> The course will be run as a combination of lectures, discussions of important papers, exercises, coding (in R or Python), and a class project. Participants are required to have knowledge of the core Probability and Statistics (I and II) courses.
</p>
<hr>
<p style="text-align: justify;"></p><h3>Recommended books</h3>
<p style="text-align: justify;"> While we will not follow any specific book, the following books are some of the "classics"" in the field. We will also use a few chapters from them.
</p>
<br>
V. N. Vapnik, <a href="http://read.pudn.com/downloads161/ebook/733192/Statistical-Learning-Theory.pdf" target="_blank"> Statistical Learning Theory</a>, Wiley, 1998.
<br>
L. Devroye, L. Gyorfi, G. Lugosi, <a href="http://www.szit.bme.hu/~gyorfi/pbook.pdf" target="_blank"> A Probabilistic Theory of Pattern Recognition</a>, Springer, 1996.
<br>
T. Hastie, R. Tibshirani and J. Friedman, <a href="https://web.stanford.edu/~hastie/ElemStatLearn/" target="_blank"> The Elements of Statistical Learning</a>, 2nd Ed., Springer, 2009.
<br>
V. K. Ivanov, V. V. Vasin, V. P. Tanana, <a href="https://books.google.fr/books?id=wQEgAAAAQBAJ" target="_blank"> Theory of Linear Ill-posed Problems and Its Applications</a>, 1978 (revised version 2002).
<br>
T. Cover and J. Thomas, <a href="http://staff.ustc.edu.cn/~cgong821/Wiley.Interscience.Elements.of.Information.Theory.Jul.2006.eBook-DDU.pdf" target="_blank"> Elements of Information Theory</a>, Wiley, 1991 (2nd Ed., 2006).
<br>
<p style="text-align: justify;">These are some other, more recent books:
</p>
C. E. Rasmussen and C. K. I. Williams, <a href="http://www.gaussianprocess.org/gpml/" target="_blank"> Gaussian Processes for Machine Learning</a>, The MIT Press, 2006 (another approach to Machine Learning).
<br>
I. Goodfellow and Y. Bengio and A. Courville, <a href="http://www.deeplearningbook.org/" target="_blank"> Deep Learning</a>, The MIT Press, 2016.
<hr>
<p style="text-align: justify;"><h3>Grading</h3>
<p style="text-align: justify;">20% Class Participation and Paper Presentation </p>
<p style="text-align: justify;"> 30% Exercises: two exercise sets, combining mathematical and hands-on application exercises
</p>
<p style="text-align: justify;"> 50% Class Project: "Develop Your Own Machine Learning Method and Share the Code on Github". </p>
<p style="text-align: justify;"></p>
<strong>Class Project:</strong> You will need to work either alone or at most with one more colleague on the following project:
<ul>
<li> Define a research question that involves, or can be framed as, a problem that can be approached using machine learning methods and principles. For example, it can be part of a research project you are interested in, for which you need to solve a prediction, estimation, or data representation problem.
<li> Describe some specific (idiosyncratic) characteristics of the problem, for example regarding the data generation process or the structure of the problem and the data.
<li> Develop a machine learning method that captures the "structure" of the problem you identified. Set up an optimization problem and explain how it fits with machine learning theory principles.
<li> Formulate at least two alternative optimization formulations (hence possibly also two different machine learning "methods") for your proposed approach. Discuss how one could approach the optimization.
<li> Write pseudocode for your method.
<li> (Extra credit, but recommended) Code your method (using Jupyter notebooks with R, Python, or any other language you are most familiar with) and share it in a GitHub repository (see www.github.com). Use your method on data you have and/or on simulated data. For simulated data, explore how the method behaves as the data generation process "fits" the structure your method assumes more (or less) closely. Also explore the trade-off between (over)fitting and complexity control, devising an (out-of-sample) test process as well; see the sketch after this list.
<li> Prepare a report (it can be in the same Jupyter notebook/document as your code) and be ready to discuss your work in Sessions 13-14.
</ul>
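<p style="text-align: justify;">For the simulated-data part of the project, the following is a minimal sketch of such an out-of-sample test process (in Python with scikit-learn, one of the suggested languages; the data generator, the choice of ridge regression, and all parameter values are illustrative assumptions, not a prescribed design):</p>
<pre><code>
# Illustrative scaffold: simulate data whose "structure" (here, sparsity)
# the method may or may not match, tune a complexity-control knob, and
# measure error on a held-out (out-of-sample) set.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

def simulate(n=200, p=20, sparsity=3, noise=0.5, seed=0):
    """Linear data in which only `sparsity` coefficients are active."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[:sparsity] = 1.0
    return X, X @ beta + noise * rng.normal(size=n)

X, y = simulate()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for alpha in (0.01, 1.0, 100.0):                  # complexity-control knob
    model = Ridge(alpha=alpha).fit(X_tr, y_tr)    # fit on training data only
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"alpha={alpha}: out-of-sample MSE {mse:.3f}")
</code></pre>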
<br>
<hr>
<h2><strong>Course Sessions </strong></h2>
<h4>Sessions 1-2: Introduction and Set Up: AI and the Machine Learning Problem</h4>
<p style="text-align: justify;">In this session we will first give a brief history of AI and Machine Learning and outline the fundamental problems these fields aim to solve. We will then shift to the theoretical foundations of Machine Learning and provide an overview of the field, of some popular machine learning methods, and of applications of Machine Learning and AI, as well as a summary of this course.</p>
<p style="text-align: justify;">Main concepts: Symbolic AI, Connectionism, Statistical Learning, Approximation Theory, Bias-Variance, Empirical Risk Minimization, Hypothesis Spaces, Loss Functions, Generalization Error, Learnability, Consistency Properties.</p>
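<p style="text-align: justify;">As a concrete taste of empirical risk minimization and the bias-variance trade-off, here is a minimal sketch (in Python with numpy and scikit-learn; the sine target and polynomial hypothesis spaces are arbitrary illustrative choices): richer hypothesis spaces drive the empirical (training) risk down, while the test error, a proxy for generalization error, eventually grows.</p>
<pre><code>
# Empirical risk minimization over hypothesis spaces of growing complexity:
# training error falls monotonically with the polynomial degree, while test
# (generalization) error eventually turns back up.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
def sample(n):                                   # noisy sine target
    x = rng.uniform(0, 1, size=(n, 1))
    return x, np.sin(2 * np.pi * x).ravel() + rng.normal(0, 0.3, size=n)

x_train, y_train = sample(30)
x_test, y_test = sample(500)
for degree in (1, 3, 9, 15):                     # hypothesis space complexity
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)                  # empirical risk minimization
    tr = mean_squared_error(y_train, model.predict(x_train))
    te = mean_squared_error(y_test, model.predict(x_test))
    print(f"degree {degree:2d}: train MSE {tr:.3f}, test MSE {te:.3f}")
</code></pre>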
Background Readings:
<ul>
<li>Chapter 0 ("Introduction") and Section 3.10 ("Kant's Problem of Demarcation and Popper's Theory of Non-Falsifiability") of V. N. Vapnik, <a href="http://read.pudn.com/downloads161/ebook/733192/Statistical-Learning-Theory.pdf" target="_blank"> Statistical Learning Theory</a>, Wiley, 1998
<li>T. Poggio and F. Girosi, <a href="http://cbcl.mit.edu/people/poggio/journals/poggio-girosi-science-1990.pdf" target="_blank"> Regularization Algorithms for Learning that are Equivalent to Multilayer Networks</a>, Science, 247, 978-982, 1990.
<li>Chapter 2 of L. Devroye, L. Gyorfi, G. Lugosi, <a href="http://www.szit.bme.hu/~gyorfi/pbook.pdf" target="_blank"> A Probabilistic Theory of Pattern Recognition</a>, Springer, 1996.
<li>Nature Insights, <a href="https://www.nature.com/collections/jztndjgvxz" target="_blank"> Machine Intelligence</a>, Nature, Vol. 521 No. 7553, pp. 435-482, 2015 (a collection for reference to skim through).
<li>D. Donoho, <a href="https://pdfs.semanticscholar.org/63c6/8278418b69f60b4814fae8dd15b1b1854295.pdf" target="_blank"> High-Dimensional Data Analysis: The Curses and Blessings of Dimensionality</a>, Stanford University, 2000.
<li>C. E. Shannon, <a href="http://math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf" target="_blank"> A Mathematical Theory of Communication</a>, The Bell System Technical Journal, 1948.
</ul>
<br>
<a href="https://github.com/tevgeniou/FoundationsML/blob/master/INSEAD%20FMLAI%20General%20Intro.pdf" target="_blank"> FMLAI General Introduction Handouts</a>
<br>
<a href="https://github.com/tevgeniou/FoundationsML/blob/master/INSEAD%20FMLAI%20Sessions%201-2.pdf" target="_blank"> Sessions 1-2 Handouts</a>
<br>
<a href="https://github.com/tevgeniou/FoundationsML/blob/master/FMLAI%20Set%201.pdf" target="_blank"> Exercise Set 1 (to prepare before Sessions 5-6)</a>
<br>
<a href="https://www.youtube.com/watch?v=STFcvzoxVw4" target="_blank"> An Interview with Vladimir Vapnik</a>
<br>
<h4>Sessions 3-4: From Classical Statistics to Machine Learning </h4>
<p style="text-align: justify;">In this session we will develop and analyze some of the most common machine learning methods, which are also the closest to classical statistical/econometric methods. We will also discuss the relations between Machine Learning and other important fields such as optimization theory, regularization theory for ill-posed problems, and signal processing.</p>
<p style="text-align: justify;">Main concepts: Regularization Theory, Ridge Regression, Lasso, Support Vector Machines, Kernels, Sparsity, Model Selection, Cross-Validation, Matrix Completion, Recommender Systems.</p>
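<p style="text-align: justify;">A brief sketch of regularization and cross-validated model selection (in Python with scikit-learn; the simulated data and parameter grids are illustrative assumptions, and the R packages in the readings below offer analogous functionality):</p>
<pre><code>
# Ridge (L2) shrinks all coefficients toward zero; the lasso (L1) induces
# sparsity by setting some exactly to zero. Cross-validation selects the
# regularization strength in both cases.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV, LassoCV

X, y = make_regression(n_samples=100, n_features=30, n_informative=5,
                       noise=10.0, random_state=0)
ridge = RidgeCV(alphas=np.logspace(-3, 3, 25)).fit(X, y)   # L2 penalty
lasso = LassoCV(cv=5, random_state=0).fit(X, y)            # L1 penalty
print(f"ridge: alpha {ridge.alpha_:.3g}, "
      f"nonzero coefficients {np.sum(ridge.coef_ != 0)} of 30")
print(f"lasso: alpha {lasso.alpha_:.3g}, "
      f"nonzero coefficients {np.sum(lasso.coef_ != 0)} of 30")
</code></pre>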
Background Readings:
<ul>
<li> <a href="http://www.mit.edu/~9.520/scribe-notes/class02_scrb_sara.pdf" target="_blank"> The Learning Problem and Regularization</a>, Lecture Notes, MIT course 9.520 on Statistical Learning Theory and Applications.
<li>T. Evgeniou, M. Pontil and T. Poggio, <a href="http://cbcl.mit.edu/publications/ps/evgeniou-reviewall.pdf" target="_blank"> Regularization networks and support vector machines</a>, Advances in Computational Mathematics, 2000.
<li>Sections 1.7 and 1.8 of V. N. Vapnik, <a href="http://read.pudn.com/downloads161/ebook/733192/Statistical-Learning-Theory.pdf" target="_blank"> Statistical Learning Theory</a>, Wiley, 1998.
<li>R Packages: <a href="https://cran.r-project.org/web/packages/elasticnet/elasticnet.pdf" target="_blank"> ElasticNet</a>, <a href="https://cran.r-project.org/web/packages/glmnet/glmnet.pdf" target="_blank"> glmnet</a>.
</ul>
<br>
<a href="https://github.com/tevgeniou/FoundationsML/blob/master/INSEAD%20FMLAI%20Sessions%203-4.pdf" target="_blank"> Sessions 3-4 Handouts</a>
<br>
<h4>Sessions 5-6: Data Representations, Feature Learning, and Applications </h4>
<p style="text-align: justify;">In this session we will revisit the problem of machine learning, this time from the point of view of finding good data ("world") representations. We will revisit and discuss topics like sparse representations, kernels, and learning data representations using deep learning methods. We will then discuss a number of applications of machine learning, ranging from text mining to time series prediction and analysis of network and graph data.</p>
<p style="text-align: justify;">Main concepts: Sparsity, Variable Selection, Feature Learning, Kernels, Sparse PCA, Low Rank Representations, Dictionary Learning, Text Mining, Time Series, Network Data.</p>
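<p style="text-align: justify;">A small sketch contrasting dense and sparse representations (in Python with scikit-learn; the simulated latent-factor data is an illustrative assumption): sparse PCA, in the spirit of the Sparse Principal Component Analysis paper in the readings below, concentrates each component's loadings on a few variables, while ordinary PCA loads on all of them.</p>
<pre><code>
# Plant one latent factor shared by the first three variables, then compare
# the loadings found by PCA and by (L1-penalized) sparse PCA.
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
X[:, :3] += 2.0 * rng.normal(size=(100, 1))    # shared latent factor

pca = PCA(n_components=2).fit(X)
spca = SparsePCA(n_components=2, alpha=2.0, random_state=0).fit(X)
print("PCA loadings (component 1):       ", np.round(pca.components_[0], 2))
print("Sparse PCA loadings (component 1):", np.round(spca.components_[0], 2))
</code></pre>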
Background Readings:
<ul>
<li>Chapter 1 and Sections 2.1-2.2 of T. Hastie, R. Tibshirani and M. Wainwright, <a href="https://web.stanford.edu/~hastie/StatLearnSparsity/" target="_blank"> Statistical Learning with Sparsity: The Lasso and Generalizations</a>, CRC Press, 2015.
<li>B. Olshausen and D. Field, <a href="https://courses.cs.washington.edu/courses/cse528/11sp/Olshausen-nature-paper.pdf" target="_blank"> Emergence of simple-cell receptive field properties by learning a sparse code for natural images</a>, Nature, Vol. 381, pp. 607-609, 1996.
<li>H. Zou, T. Hastie, and R. Tibshirani, <a href="https://web.stanford.edu/~hastie/Papers/spc_jcgs.pdf" target="_blank"> Sparse Principal Component Analysis</a>, Journal of Computational and Graphical Statistics, 2006.
</ul>
<br>
<a href="https://github.com/tevgeniou/FoundationsML/blob/master/INSEAD%20FMLAI%20Sessions%205-6.pdf" target="_blank"> Sessions 5-6 Handouts</a>
<br>
<br>
<h4>Sessions 7-8: Deep Learning and Recent Mysteries in AI</h4>
<p style="text-align: justify;">In this session we will discuss some of the most common Deep Learning methods, and also touch upon some current open problems in Machine Learning and AI. A more general framework of machine learning and AI will also be discussed, and some recent applications of these tools will be presented.</p>
<p style="text-align: justify;">Main concepts: Perceptron, Feed-forward Neural Networks, Convolutional Neural Networks, Stochastic Gradient Descent, Back-propagation, Hierarchical Learning, Feature Learning.</p>
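<p style="text-align: justify;">To make the core mechanics concrete, here is a minimal numpy sketch (the toy target, network size, learning rate, and number of epochs are all illustrative assumptions) of a one-hidden-layer feed-forward network trained by stochastic gradient descent, with back-propagation written out by hand:</p>
<pre><code>
# One-hidden-layer network, squared loss, tanh hidden units, plain SGD.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)      # XOR-like, not linearly separable

h, lr = 8, 0.1                                 # hidden units, learning rate
W1 = rng.normal(scale=0.5, size=(2, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.5, size=(h, 1)); b2 = np.zeros(1)

for epoch in range(200):
    for i in rng.permutation(len(X)):          # stochastic: one example at a time
        x_i = X[i:i+1]
        z = np.tanh(x_i @ W1 + b1)             # forward pass, hidden layer
        out = (z @ W2 + b2)[0, 0]              # linear output unit
        err = out - y[i]                       # d(squared loss)/d(out)
        gW2 = z.T * err                        # back-propagation: output layer
        dz = err * W2.ravel() * (1.0 - z.ravel() ** 2)   # through tanh
        gW1 = x_i.T @ dz[None, :]              # back-propagation: hidden layer
        W2 -= lr * gW2; b2 -= lr * err         # SGD updates
        W1 -= lr * gW1; b1 -= lr * dz

pred = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel() > 0.5
print("training accuracy:", (pred == (y > 0.5)).mean())
</code></pre>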
Background Readings:
<ul>
<li>I. Goodfellow, Y. Bengio and A. Courville, <a href="http://www.deeplearningbook.org/" target="_blank"> Deep Learning book</a>, MIT Press, 2016. Glance through the book for a general idea.
<li>H. Mhaskar, Q. Liao, T. Poggio, <a href="https://arxiv.org/abs/1603.00988" target="_blank"> Learning Functions: When Is Deep Better Than Shallow</a>, 2016 (Skim through).
<li>L. Bottou, <a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2012/01/tricks-2012.pdf" target="_blank"> Stochastic Gradient Descent Tricks</a>, in Neural Networks: Tricks of the Trade, pp. 421-436, 2012.
</ul>
<br>
<a href="https://github.com/tevgeniou/FoundationsML/blob/master/INSEAD%20FMLAI%20Sessions%207-8.pdf" target="_blank"> Sessions 7-8 Handouts</a>
<br>
<a href="https://youtu.be/4yLCuZnhkdI" target="_blank"> A lecture on Theories of Deep Learning, by Tomaso Poggio</a>
<br>
<strong>Exercise Set 2 (Due Sessions 13-14): </strong> Explore the website of the course <a href="http://inseaddataanalytics.github.io/INSEADAnalytics/home.html" target="_blank"> Data Science for Business</a> and work on assignment 2 in that course (under Sessions 5-6), called <strong>Credit Card Default</strong>, starting from the <a href="http://inseaddataanalytics.github.io/INSEADAnalytics/CourseSessions/ClassificationProcessCreditCardDefault.html" target="_blank"> analysis under Sessions 7-8</a>.
<br>
<h4>Sessions 9-10: Ensemble Methods and Other Algorithms</h4>
<p style="text-align: justify;">In this session we will discuss some well-known approaches to combining machine learning methods. Combinations of methods, much like combinations of diverse expert opinions, are known to improve the accuracy of models/groups. We will discuss some theoretical underpinnings of ensemble methods as well as some further machine learning methods such as Classification and Regression Trees, Random Forests, Bagging and Boosting, and Neural Networks. We will also start exploring machine learning software packages.</p>
<p style="text-align: justify;">Main concepts: Bagging, Boosting, Random Forests, Boosted Trees, Neural Networks.</p>
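<p style="text-align: justify;">A short sketch comparing a single tree with two classic ensembles (in Python with scikit-learn on simulated data, an illustrative assumption; the R packages randomForest and rpart in the readings below play the same roles):</p>
<pre><code>
# Compare one decision tree with a bagging-style ensemble (random forest)
# and a boosting ensemble (gradient boosted trees) by cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
models = {
    "single tree":   DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "boosted trees": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)        # 5-fold CV accuracy
    print(f"{name}: accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
</code></pre>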
Background Readings:
<ul>
<li>Sections 9.2 and 10.1-10.9, and glance through the rest of Chapter 10, of T. Hastie, R. Tibshirani and J. Friedman, <a href="https://web.stanford.edu/~hastie/ElemStatLearn//" target="_blank"> The Elements of Statistical Learning</a>, 2nd Ed., Springer, 2009.
<li>M. Hibon, T. Evgeniou, <a href="https://faculty.insead.edu/theodoros-evgeniou/documents/to_combine_or_not_to_combine.pdf" target="_blank"> To Combine or Not to Combine: Selecting among Forecasts and their Combinations</a>, International Journal of Forecasting, 2005.
<li>R Packages: <a href="https://cran.r-project.org/web/packages/randomForest/randomForest.pdf" target="_blank"> randomForest</a>, <a href="https://cran.r-project.org/web/packages/rpart/rpart.pdf" target="_blank"> rpart</a>.
</ul>
<br>
<a href="https://github.com/tevgeniou/FoundationsML/blob/master/INSEAD%20FMLAI%20Sessions%2011-12.pdf" target="_blank"> Sessions 9-10 Handouts</a>
<br>
<h4>Sessions 11-12: Theoretical Foundations of Machine Learning</h4>
<p style="text-align: justify;">In this session we will introduce the main mathematical tools and intuitions that can help us better understand why and when machine learning methods work. We will also discuss some of the main theorems that explain the predictive performance of machine learning methods. It is these theorems, together with advances in computing power and storage and the availability of (big) data, that led to the recent important breakthroughs of AI and Machine Learning across scientific and business areas.</p>
<p style="text-align: justify;">Main concepts: Concentration Inequalities, Complexity Measures, Learning Rates and Bounds, VC-Dimension, Structural Risk Minimization, Stability, Rademacher Complexity, Estimation and Generalization/Prediction Error, Approximation Theory.</p>
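<p style="text-align: justify;">As a taste of the concentration phenomena these theorems build on, here is a small simulation sketch (in Python with numpy; the uniform distribution and parameter values are arbitrary choices) comparing the observed frequency with which an empirical mean deviates from its expectation against the exponential bound from Hoeffding's inequality:</p>
<pre><code>
# For i.i.d. variables bounded in [0, 1], Hoeffding's inequality bounds
# P(|empirical mean - true mean| at least eps) by 2 * exp(-2 * n * eps^2).
import numpy as np

rng = np.random.default_rng(0)
eps, trials = 0.1, 5000
for n in (10, 50, 200, 800):
    samples = rng.uniform(0, 1, size=(trials, n))      # true mean is 0.5
    deviation = np.abs(samples.mean(axis=1) - 0.5) > eps
    bound = min(2 * np.exp(-2 * n * eps ** 2), 1.0)    # Hoeffding bound
    print(f"n={n:4d}: observed frequency {deviation.mean():.4f}, "
          f"bound {bound:.4f}")
</code></pre>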
Background Readings:
<ul>
<li>T. Poggio, R. Rifkin, S. Mukherjee, and P. Niyogi, <a href="http://people.cs.uchicago.edu/~niyogi/papersps/nature-predictivity.pdf" target="_blank"> General conditions for predictivity in learning theory</a>, Nature, Vol. 428, 419-422, 2004.
<li>F. Cucker and S. Smale, <a href="http://www.mit.edu/~9.520/Papers/cuckersmale.pdf" target="_blank"> On the mathematical foundations of learning</a>, Bulletin of the American Mathematical Society, 2001.
<li>S. Boucheron, O. Bousquet, G. Lugosi, <a href="http://www.econ.upf.edu/~lugosi/esaimsurvey.pdf" target="_blank"> Theory of Classification: A Survey of Some Recent Advances</a>, ESAIM, Probability and Statistics, 2005.
</ul>
<br>
<a href="https://github.com/tevgeniou/FoundationsML/blob/master/INSEAD%20FMLAI%20Sessions%2011-12.pdf" target="_blank"> Sessions 11-12 Handouts</a>
<br>
<a href="https://github.com/tevgeniou/FoundationsML/blob/master/FMLAI_Set2.pdf" target="_blank"> Exercise Set 3 (Optional)</a>
<br>
<h4>Sessions 13-14: Other Topics and Paper Presentations</h4>
<p style="text-align: justify;">In this session participants will present a number of papers selected during the course. We will also discuss other topics not covered in this course. More online resources will be shared during the course, and participants are also expected to contribute some of these resources to the course website throughout the course.</p>
<p style="text-align: justify;">Example concepts: Deep Reinforcement Learning, Fairness in AI, Independent Component Analysis, Generative Adversarial Networks, Compressed Sensing, Random Matrix Theory, Wavelets, High Dimensional Statistics, Information Theory, Compression, Gaussian Processes, Graphical Models, Approximation Theory, Splines, Reproducing Kernel Hilbert Spaces, Bootstrap, Clustering, Matrix Estimation, Matrix Completion, Low Rank, Active Learning, Experimental Design, Change Point Detection, Natural Language Processing, Text Mining, etc.</p>
Example papers:
<ul>
<li>A. Argyriou, T. Evgeniou, M. Pontil, <a href="https://link.springer.com/content/pdf/10.1007/s10994-007-5040-8.pdf" target="_blank"> Convex Multi-task Feature Learning</a>, Machine Learning 73 (3), 243-272, 2008 (will be discussed).
<li>J. R. Hauser, O. Toubia, T. Evgeniou, R. Befurt, D. Dzyabura, <a href="https://faculty.insead.edu/theodoros-evgeniou/documents/Disjunctions_of_Conjunctions.pdf" target="_blank"> Disjunctions of conjunctions, cognitive simplicity, and consideration sets</a>, Journal of Marketing Research, Vol. 47, No. 3, pp. 485-496, June 2010 (will be discussed).
<li>O. Toubia, E. Johnson, T. Evgeniou, P. Delquie, <a href="https://faculty.insead.edu/theodoros-evgeniou/documents/Dynamic%20Experiments%20for%20Estimating%20Preferences_web.pdf" target="_blank"> Dynamic Experiments for Estimating Preferences: An Adaptive Method of Eliciting Time and Risk Parameters</a>, Management Science, March 2013 (will be discussed).
<li>S. Clemencon, G. Lugosi, N. Vayatis, <a href="https://projecteuclid.org/euclid.aos/1205420521" target="_blank"> Ranking and Empirical Minimization of U-statistics</a>, The Annals of Statistics 36 (2), 844-874, 2008 (will be discussed).
<li>J. Li, P. Rusmevichientong, D. Simester, J. N. Tsitsiklis, S. I. Zoumpoulis, <a href="https://www.insead.edu/faculty-research/publications/journal-articles/the-value-of-field-experiments--33535" target="_blank"> The value of field experiments</a>, Management Science, Vol. 61(7), pp. 1722-1740, 2015.
<li>S. Gu, B. Kelly, D. Xiu, <a href="http://dachxiu.chicagobooth.edu/download/ML.pdf" target="_blank"> Empirical Asset Pricing via Machine Learning</a>, 2018.
<li>M. Hardt, E. Price, N. Srebro, <a href="http://papers.nips.cc/paper/6374-equality-of-opportunity-in-supervised-learning.pdf" target="_blank"> Equality of Opportunity in Supervised Learning</a>, NIPS 2016.
<li><a href="https://nips.cc/" target="_blank"> Neural Information Processing Systems (NIPS)</a>, Conference.
<li><a href="https://www.kdd.org/conferences" target="_blank"> Knowledge Discovery and Data Mining (KDD)</a>, Conference.
<li><a href="http://www.fatml.org/" target="_blank"> Fairness, Accountability, and Transparency in Machine Leaning (FAT/ML)</a>, Conference.
</ul>
<br>
<h4>Some other articles on the broader topic of "Humans and Machines"</h4>
<ul>
<li> <a href="https://github.com/tevgeniou/FoundationsML/blob/master/Human%20Decisions%20and%20Machine%20Predictions.pdf" target="_blank"> Human Decisions and Machine Predictions</a>.
<li> <a href="https://github.com/tevgeniou/FoundationsML/blob/master/Bias%20and%20Productivity%20in%20Humans%20and%20Algorithms.pdf" target="_blank"> Bias and Productivity in Humans and Algorithms</a>.
<li> <a href="https://github.com/tevgeniou/FoundationsML/blob/master/Psychological%20roadblocks%20to%20the%20adoption%20of%20self%20driving%20vehicles.pdf" target="_blank"> Psychological roadblocks to the adoption of self driving vehicles</a>.
<li> Some <a href="https://www.linkedin.com/feed/update/urn:li:activity:6531264358541074433/" target="_blank"> general slides on AI, Machine Learning, and various broader topics/issues</a>.
</ul>
</section>
</div>
<!-- FOOTER -->
<div id="footer_wrap" class="outer">
<footer class="inner">
</footer>
</div>
</body>
</html>