Announcing Human-aligned AI Summer School
post by Jan_Kulveit, Tomáš Gavenčiak (tomas-gavenciak) · 2024-05-22T08:55:10.839Z · LW · GW
This is a link post for https://humanaligned.ai/
The fourth Human-aligned AI Summer School will be held in Prague from 17th to 20th July 2024. We will meet for four intensive days of talks, workshops, and discussions covering the latest trends in AI alignment research as well as broader framings of the field. Apply now; applications are evaluated on a rolling basis.
The school is intended for people interested in learning more about AI alignment, including PhD students, researchers working in ML/AI outside academia, and talented students.
Format of the school
The school focuses on teaching and exploring approaches and frameworks, and less on presenting the latest research results. The content of the school is mostly technical: it is assumed that attendees understand current ML approaches and some of the underlying theoretical frameworks.
This year, the school will cover these main topics:
- Overview of the alignment problem and current approaches.
- Alignment of large language models: RLHF, DPO, and beyond. Methods used to align current large language models, and their shortcomings.
- Evaluating and measuring AI systems: How to understand and oversee current AI systems on the behavioral level.
- Interpretability and the science of deep learning: What's going on inside of the models?
- AI alignment theory: While 'prosaic' approaches to alignment focus on current systems, theory aims for deeper understanding and better generalizability.
- Alignment in the context of complex systems and multi-agent settings: What should the AI be aligned to? In most realistic settings, we can expect multiple stakeholders and many interacting AI systems; any solution to the alignment problem needs to handle multi-agent settings.
The school consists of lectures and topical series, focused smaller-group workshops and discussions, expert panels, and opportunities for networking, project brainstorming, and informal conversations.
The detailed program will be announced shortly before the event. See below for an outline, and see e.g. the program of the previous school for an illustration of the content and structure.
Confirmed speakers
- Stephen Casper – Algorithmic Alignment Group, MIT.
- Stanislav Fort – Google DeepMind.
- Jesse Hoogland – Timaeus.
- Jan Kulveit – Alignment of Complex Systems, Charles University.
- Mary Phuong – Google DeepMind.
- Deger Turan – AI Objectives Institute and Metaculus.
- Vikrant Varma – Google DeepMind.
- Neel Nanda – Google DeepMind.
(more to be announced later)