The following topics will be relevant for the exam:
- Representation Learning (Autoencoders)
- Basics of Bayes, Gaussian Random Processes, Kullback-Leibler Divergence, Fisher Information, Mutual Information
- Learning probability distributions (RBM, normalizing flows, VAE, GAN)
- Graph neural networks
- Attention mechanisms
- Basics of Implicit Layers
- Model-free RL
- Active Learning and Bayesian Optimal Experimental Design
Not tested in the exam:
- Introductory chapter on the basics of Neural Networks (you should know this material, but we will not test it)
- Recurrent Neural Networks
- Details of Transformers
- Model-based RL
- Section 10.3 (advanced active learning)
- Algorithmic Information Theory
The exam will be similar in spirit to the more conceptual/mathematical homework problems, not to the occasional programming exercises.
We will try to produce some example questions by next Monday and distribute them on the website, so you can get a feeling for what the exam will look like.
Best regards,
Florian