Free course review — Reliable and Interpretable Artificial Intelligence (ETH Zurich)
post by Jan Czechowski (przemyslaw-czechowski) · 2021-08-10T16:36:39.737Z
Crossposted from my Medium blog
Recently, I became interested in the topic of Explainable Artificial Intelligence. I was mostly inspired by the Machine Learning Street Talk episode with Christoph Molnar. I decided I needed to learn the basics of the field, and I found this 13-lecture course available for free on YouTube: Reliable and Interpretable Artificial Intelligence, originally held in the Autumn Semester 2020 at ETH Zurich by Martin Vechev.
In this review I will try to answer the following questions:
- Is this course for you?
- What math/machine learning background knowledge do you need to understand the contents?
- What are the contents of the course? Which lectures can be skipped?
General impressions
I really enjoyed the course and consider it a good investment in my situation (working full time but wanting to transition to AI research with a focus on AI safety). Each lecture was insightful, and every point was very thoroughly explained with multiple rephrasings and examples. I was able to watch some parts of it at increased playback speed, occasionally slowing down when I felt I was losing the thread. In my personal time-tracking app, I logged a total of 15 hours for watching the course content and around 3 hours for creating and reviewing Anki cards.
Background knowledge required
I expected operations like propagating generalized convex polyhedra through multiple layers of a neural net to require quite sophisticated mathematics; it turns out that basic linear algebra was enough to understand the concept.
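To give a flavor of what I mean, here is a minimal sketch (my own, not from the course materials) of how a box (interval) relaxation can be pushed through one affine layer and a ReLU using nothing more than matrix arithmetic; the function names and NumPy setup are illustrative assumptions:

```python
import numpy as np

def box_affine(lo, hi, W, b):
    """Propagate an interval box [lo, hi] through the affine map x -> W @ x + b."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst-case spread along each output coordinate
    return new_center - new_radius, new_center + new_radius

def box_relu(lo, hi):
    """ReLU is monotone, so applying it to both bounds keeps the box sound."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)
```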
Although the course assumes the student knows basic ML concepts, a quick recap is usually provided when a given concept is needed, for example for the ReLU activation or the CNN architecture.
To sum up, the course is perfectly approachable for a machine learning enthusiast who is still working on a good overview of the basics.
Course contents
The course covers the popular topics of adversarial examples and robustness, as well as smoothing classifiers and visualization techniques for computer vision systems.
I think certification methods based on various convex relaxation techniques might be overrepresented in the curriculum, possibly because this is the particular research interest of the lecturer, Martin Vechev. If your goal is to get a good overview of machine learning interpretability in a time-effective manner, you might want to consider skipping lectures 5–7.
However, the important topics of geometric robustness (i.e. defense against adversarial examples based on geometric operations on the input, such as rotation or translation, rather than L2 perturbations) and certified defense (training the network not only to be robust but to be provably robust) might have been difficult to grasp without a proper, comprehensive introduction of more complicated convex relaxation techniques like DeepPoly, so entirely skipping those lectures might not be a smart move after all.
For reference, below I provide a more detailed syllabus:
- Adversarial Examples: (lectures 1–3) how to generate a small, imperceptible perturbation of the input that will cause the model to make a mistake? The FGSM and PGD methods are introduced; see the FGSM sketch after this list.
- Adversarial Defenses: (lecture 4a) how to train a model to make it immune to adversarial attacks (make it robust)?
- Certification: (lectures 4b–7) how to prove a neural network is immune to adversarial attacks? If you decide to skip these, here’s a summary: multiple certification methods are presented, in order of increasing computational complexity: Box and Zonotope relaxation, DeepPoly, and Mixed Integer Linear Programming (MILP). Most of them are sound but incomplete (they answer either “the network is safe” or “I don’t know”; using a more complex method increases the chance of the first answer). Only MILP is complete in theory, but it cannot be solved in polynomial time.
- Certified Defense: (lecture 8) how to train a network so it’s provably robust?
- Geometric Robustness: (lecture 9) a generalization of adversarial attacks, where the input is perturbed by some geometric operation (rotation, translation, scaling). Certification is also possible in this case, but it is more complicated.
- Visualization: (lecture 10) various techniques for visualizing how computer vision systems make decisions. How to visualize what a given neuron is responsible for? Analyzing which parts and properties of the input image contribute to the output label.
- Enforcing logic in the training process: (lecture 11) a generalization of adversarial defense. How to enforce an additional logical property, beyond accuracy, during training? The topic of semi-supervised learning is introduced in this context.
- Randomized Smoothing: (lecture 12) a framework for increasing the robustness of a given black-box model. It is based on taking multiple perturbed samples of the input at inference time and choosing the most popular label as the output; see the sketch after this list.
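Since FGSM from the first lectures is the most compact example of the adversarial-examples theme, here is a minimal PyTorch sketch of the one-step version; the function name and the epsilon value are my own illustrative assumptions, not the course’s reference implementation:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """One-step FGSM: nudge each input coordinate in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # sign of the gradient, scaled by the budget eps
    return x_adv.clamp(0.0, 1.0).detach()  # keep the perturbed image in a valid pixel range
```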
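And here is an equally rough sketch of the randomized-smoothing idea from lecture 12: classification by majority vote over Gaussian-perturbed copies of the input. Again, the function name, sigma, and sample count are assumptions for illustration, and the certified-radius computation from the lecture is omitted:

```python
import torch

def smoothed_predict(model, x, sigma=0.25, n_samples=100):
    """Classify a single input x by majority vote over Gaussian-perturbed copies of it."""
    with torch.no_grad():
        noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
        votes = model(noisy).argmax(dim=1)            # predicted label for each noisy copy
        return torch.bincount(votes).argmax().item()  # the most popular label wins
```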
Conclusion
Watching the whole lecture series was definitely a good investment in getting started with Interpretable Machine Learning. The knowledge I gained allows me to keep up with seminar talks like this one. I feel confident that I could pick up a random new article in the field and more or less understand the content, though I haven’t really tried that yet.
As a next step to get more familiar with the field, I’m thinking of systematically going through the very promising book by Christoph Molnar (including all the practical examples, if I have time).
I didn’t find any interesting courses on Coursera on this topic, but I’m considering taking the one covering GANs, mostly because it has graded practical exercises in PyTorch, if I understand the description correctly.
Do you have any positive experiences with AI-related courses or books available publicly (both paid and free)?