- Course begins: October 19th, 2017.
- Course time: 10:00-12:00 Thursdays and Fridays
- Course location: Thursday M101, Friday M102 (Exercises and live coding)
- I will also be available by appointment (office M206).
- I have created a course blog containing detailed notes and references for the course.
- Datacamp has generously agreed to offer all course participants free access to their minicourses (details will be given on the first day). We will use their online materials to supplement the course and to help participants get familiar with the available computing tools (especially R and Python).
- Registration is now closed. If you would like to be added, please email me.
Machine learning lies at the center of many recent and ongoing technological advancements. The applications are numerous and include image recognition, voice recognition, language translation, data analysis, and self-driving cars. During this course we will look at some of the fundamental tools and techniques in this field, as well as the mathematics that underlies it all.
Machine learning involves the development of algorithms that learn from data to make predictions or decisions. These algorithms improve as more data becomes available. One reason for the recent surge of interest in machine learning is the sheer abundance of data now available to us, together with some notable landmark achievements in neural networks.
These techniques rest on some elementary but powerful mathematics, more than a fair amount of statistics, some computer science, and a lot of hand-waving. This being a mathematics course, we will focus on the mathematics and statistics, touch on the important issues from computer science, and try to identify the weaknesses in the foundations. Since this is a course in an applied area, we will also spend some time seeing how to actually apply these techniques to real data.
Prerequisites: A solid understanding of linear algebra and calculus, and some experience with probability and statistics (at least some measure theory). On Fridays, I will demonstrate how to apply these methods in the exercise session, using a mixture of mathematics, R, and Python (the Anaconda environment). The exercises will require some fluency with a computer, including use of the command line.
- Overview: Supervised, unsupervised, and reinforcement learning.
- Supervised learning: Linear and polynomial regression, least squares, bias-variance tradeoff. Validation and regularization techniques.
- Supervised learning: Logistic regression, linear discriminant analysis, K-nearest neighbors, and the curse of dimensionality.
- Supervised learning: Bayesian methods. Naive Bayes classifiers.
- Supervised learning: Decision trees, bagging, and boosting.
- Supervised learning: Support vector machines and kernel methods.
- Supervised learning: Perceptrons, neural networks, cross-entropy.
- Supervised learning: Optimization techniques: gradient descent, stochastic gradient descent, momentum.
- Supervised learning: Convolutional and recurrent neural networks. Transfer learning.
- Supervised learning: Word embeddings and applications.
- Unsupervised learning: Dimensionality reduction: Principal components analysis, independent component analysis, autoencoders.
- Unsupervised learning: Dimensionality reduction: t-SNE.
- Unsupervised learning: Clustering: K-means, hierarchical clustering, Gaussian mixture models.
- Unsupervised learning: Topological data analysis I: Čech and Vietoris–Rips complexes, homology, barcodes.
- Unsupervised learning: Topological data analysis II: Properties and structure theorem.
- Reinforcement learning: Finding policies. The Bellman equation.
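As a small taste of the very first supervised-learning session (linear regression via least squares), here is a minimal sketch in Python with NumPy. This is not part of the official course materials; the synthetic data and the true coefficients (intercept 1, slope 2) are made up purely for illustration.

```python
import numpy as np

# Synthetic data: y = 2x + 1 plus a little Gaussian noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.01, size=x.shape)

# Design matrix with an intercept column; solve min_w ||Xw - y||^2.
X = np.column_stack([np.ones_like(x), x])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

print(w)  # approximately [1.0, 2.0]: fitted intercept, then slope
```

In the Friday sessions we will build up examples like this one, and see how the same least-squares computation sits inside the bias-variance and regularization story from the lectures.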