Machine learning is one of the most promising approaches to difficult decision and regression problems under uncertainty. The general idea is simple: instead of modeling a solution explicitly, a domain expert provides example data that demonstrate the desired behavior on representative problem instances. A suitable machine learning algorithm is then trained on these examples to reproduce the expert's solutions as well as possible and to generalize them to new, unseen data. The last two decades have seen tremendous progress towards ever more powerful algorithms. This course covers the essential methods, from linear classifiers and robust regression to neural networks and reinforcement learning. In short, it is a one-semester "best of" version of the popular two-semester course sequence "Fundamentals of Machine Learning" and "Advanced Machine Learning".
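To make this workflow concrete, here is a minimal sketch of "train on expert-labeled examples, then generalize to unseen data". It uses scikit-learn with a synthetic dataset; the dataset, model choice, and variable names are purely illustrative and not part of the course material.

```python
# Minimal "learning from examples" sketch (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for expert-provided example data: features X, labels y.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Hold out part of the data to measure generalization to unseen instances.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)             # learn to reproduce the given labels
print(clf.score(X_test, y_test))      # accuracy on new, unseen data
```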
The lecture is part of the Master of Data and Computer Science program, but is also recommended for students in the Master of Physics (specialization Computational Physics) and the Master of Scientific Computing, as well as anyone else interested.
Solid knowledge of linear algebra, analysis (multi-dimensional differentiation and integration), and probability theory is required.
Dates:

| Event     | Day (start)                  | Time        | Location                                  |
|-----------|------------------------------|-------------|-------------------------------------------|
| Lecture   | Tuesdays (start: 18 April)   | 14:15-15:45 | Hörsaal Ost, Chemie, INF 252              |
| Lecture   | Fridays (start: 21 April)    | 11:15-12:45 | Großer Hörsaal Geowissenschaften, INF 235 |
| Tutorials | Wednesdays (start: 26 April) | 14:15-15:45 | Seminarräume A+B, Mathematikon (INF 205)  |

Please sign up for the lecture via Muesli.
Homework assignments and other course material will be published on MaMPF.
Contents
- Intro (learning from data, features and response, one-hot encoding, supervised/unsupervised/weakly supervised learning, notation, centered data; see the one-hot encoding sketch below this list)
- Simple classifiers (threshold, perceptron & linear decision boundary, nearest neighbor and its Voronoi decision boundary)
- Evaluation (training vs. test set, cross validation, confusion matrix, error rate, false positive/negative rate, precision/recall, ROC; see the evaluation sketch below this list)
- Bayes' theorem (prior, likelihood, posterior; worked example below this list), generative and discriminative classifiers
- QDA, LDA (clustered data, multivariate Gaussian distribution, covariance matrix, precision matrix, generative model, maximum-likelihood estimation, i.i.d.)
- Support vector machines (SVM), logistic regression (LR), unification via loss functions
- Non-linear classification: nearest neighbor and QDA recap; strategies (non-linear boundaries vs. augmented features); decision trees and forests; hand-crafted mappings
- Neural networks: hand-crafted example, neurons, layers, architecture, activation functions, loss functions
- Backprop, training tricks
- Convolution, ConvNets
- Famous CNNs and ResNets
- U-nets and semantic segmentation
- Ordinary least squares (normal equations, pseudo-inverse, Cholesky, QR, singular value decomposition, LSQR; see the sketch below this list)
- Weighted LSQ, heteroscedastic loss, alternating optimization, IRLS
- Bias-variance trade-off, ridge regression, LASSO, orthogonal matching pursuit
- Non-linear regression: non-linear LSQ (short), regression trees/forests, regression neural networks
- Gaussian processes
- GP kernels, Bayesian hyper-parameter optimization
- Robust regression: robust loss functions (median, Huber, logcosh), RANSAC algorithm
- Linear dimension reduction: PCA, ICA, NMF
- Non-linear dimension reduction: LLE, t-SNE, UMAP
- Non-linear dimension reduction: (variational) auto-encoders
- Generative modeling: GANs, normalizing flows
- Clustering: hierarchical, k-means, k-means++, k-medoids, GMM, EM algorithm (see the k-means sketch below this list)
- Reinforcement learning, model-free RL, deep Q-learning
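As promised in the intro item above, here is a tiny one-hot encoding sketch (pure NumPy; the labels are made up for illustration):

```python
import numpy as np

# One-hot encoding: represent each categorical label as a unit vector.
labels = np.array([0, 2, 1, 2])        # four samples, three classes
n_classes = labels.max() + 1
one_hot = np.eye(n_classes)[labels]    # shape (4, 3)
print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]
#  [0. 0. 1.]]
```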
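The evaluation item mentions cross validation and the confusion matrix; here is a hedged sketch using scikit-learn (the dataset and classifier are arbitrary choices for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

# k-fold cross validation: average test score over 5 train/test splits.
print(cross_val_score(clf, X, y, cv=5).mean())

# Confusion matrix on a single held-out test set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
print(confusion_matrix(y_te, clf.fit(X_tr, y_tr).predict(X_te)))
```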
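For the Bayes' theorem item, a worked numerical example (all probabilities are invented for illustration): posterior = likelihood * prior / evidence.

```python
# Diagnostic-test example with made-up numbers.
prior = 0.01                   # P(D): prior probability of the condition
like_pos = 0.95                # P(+|D): likelihood of a positive test given D
like_pos_healthy = 0.05        # P(+|not D): false positive rate

evidence = like_pos * prior + like_pos_healthy * (1 - prior)   # P(+)
posterior = like_pos * prior / evidence                        # P(D|+)
print(round(posterior, 3))     # 0.161: still small despite a positive test
```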
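For the ordinary least squares item, a short NumPy sketch comparing the normal equations with the pseudo-inverse (the data and coefficients are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # design matrix
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=100)

# Normal equations: solve (X^T X) beta = X^T y.
beta_ne = np.linalg.solve(X.T @ X, X.T @ y)

# Pseudo-inverse via SVD: numerically more robust when X is ill conditioned.
beta_pinv = np.linalg.pinv(X) @ y

print(np.allclose(beta_ne, beta_pinv))         # True up to floating-point error
```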
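Finally, for the clustering item, a plain Lloyd's-algorithm sketch of k-means (no k-means++ seeding, empty clusters not handled; data and names are illustrative):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    # Plain Lloyd's algorithm: alternate assignment and center updates.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random data points
    for _ in range(n_iter):
        # Assignment step: each point joins its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center moves to the mean of its assigned points.
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=m, size=(50, 2)) for m in (0.0, 5.0)])
labels, centers = kmeans(X, k=2)
print(centers)   # two centers near (0, 0) and (5, 5)
```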
Course material
Current course on MaMPF.
Recordings of last year's two-semester version are available (same content, more details, additional topics).
Textbooks:
- Efron & Hastie: Computer Age Statistical Inference
- Goodfellow, Bengio & Courville: Deep Learning
- Murphy: Probabilistic Machine Learning (Book 1: An Introduction; Book 2: Advanced Topics)
- Bishop: Pattern Recognition and Machine Learning
- Burkov: The Hundred-Page Machine Learning Book
- Zhang, Lipton, Li & Smola: Dive into Deep Learning (interactive book with Jupyter notebooks)
- Sutton & Barto: Reinforcement Learning: An Introduction (the classic for the final project)
- Deisenroth, Faisal & Ong: Mathematics for Machine Learning (mathematical prerequisites)