Announcing ILIAD — Theoretical AI Alignment Conference
post by Nora_Ammann, Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2024-06-05T09:37:39.546Z
We are pleased to announce ILIAD — a 5-day conference bringing together 100+ researchers to build strong scientific foundations for AI alignment.
***Apply to attend by June 30!***
- When: Aug 28 - Sep 3, 2024
- Where: Lighthaven (Berkeley, US)
- What: A mix of topic-specific tracks and unconference-style programming, with 100+ attendees. Topics will include Singular Learning Theory, Agent Foundations, Causal Incentives, Computational Mechanics, and more to be announced.
- Who: Currently confirmed speakers include Daniel Murfet, Jesse Hoogland, Adam Shai, Lucius Bushnaq, Tom Everitt, Paul Riechers, Scott Garrabrant, John Wentworth, Vanessa Kosoy, Fernando Rosas and James Crutchfield.
- Costs: Tickets are free. Financial support is available on a needs basis.
See our website for details. For any questions, email iliadconference@gmail.com.
About ILIAD
ILIAD is a 100+ person conference about alignment with a mathematical focus. The theme is ecumenical, yet the goal is nothing less than finding the True Names of AI alignment.
Participants may be interested in all tracks, only one or two, or none at all. The unconference format gives participants maximum freedom to direct their own time and energy.
Program and Unconference Format
ILIAD will feature an unconference format, meaning that participants can propose and lead their own sessions. We believe this is the best way to release the latent creative energies of everyone attending.
That said, freedom can be scary! If taking charge of your own learning sounds terrifying, rest assured there will be plenty of organized sessions as well. We will also run topic-specific workshop tracks such as:
- Computational Mechanics is a framework for understanding complex systems by focusing on their intrinsic computation and information processing capabilities. Pioneered by J. Crutchfield, it has recently found its way into AI safety. This workshop is led by Paul Riechers.
- Singular learning theory, developed by S. Watanabe, is the modern theory of Bayesian learning. SLT studies the loss landscape of neural networks, using ideas from statistical mechanics, Bayesian statistics and algebraic geometry. The track lead is Jesse Hoogland.
- Agent Foundations uses tools from theoretical economics, decision theory, Bayesian epistemology, logic, game theory and more to deeply understand agents: how they reason, cooperate, believe and desire. The track lead is Daniel Hermann.
- Causal Incentives is a collection of researchers interested in using causal models to understand agents and their incentives. The track lead is Tom Everitt.
- “How It All Fits Together” turns its attention to the bigger picture — where are we coming from, and where are we going? — under the direction of John Wentworth.
Financial Support
Financial support for accommodation and travel is available on a needs basis. Lighthaven has capacity to accommodate % of participants on-site; note that these rooms are shared.
Comments
comment by Chris_Leong · 2024-06-05T13:47:51.732Z
How are applications processed? Sometimes applications are processed on a rolling basis, so it's important to submit as soon as possible. Other times, you just need to apply by the deadline, so if you're about to post something big, it can make sense to hold off on your application.
↑ comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2024-06-05T16:44:41.437Z
We intend to review applications after the June 30th deadline, but I wouldn't hold off on your application.
comment by jacobjacob · 2024-06-05T10:15:24.724Z
Sidenote: I'm a bit confused by the name. The all caps makes it seem like an acronym. But it seems to not be?
↑ comment by gw · 2024-06-05T11:06:37.899Z
I
Love
Interesting
Alignment
Donferences
↑ comment by jacobjacob · 2024-06-05T14:25:38.033Z
ah that makes sense thanks
↑ comment by TsviBT · 2024-06-05T20:02:40.516Z
honestly i prefer undonferences
↑ comment by Alex_Altair · 2024-06-05T20:43:05.166Z
How about deconferences?
↑ comment by TsviBT · 2024-06-05T21:08:29.995Z
idk, sounds dangerously close to deferences
↑ comment by Mateusz Bagiński (mateusz-baginski) · 2024-06-06T06:33:48.906Z
Intentional
Lure for
Improvised
Acronym
Derivation
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2024-06-05T18:06:13.285Z
International League of Intelligent Agent Deconfusion
↑ comment by Alex_Altair · 2024-06-05T18:14:49.435Z
Interactively Learning the Ideal Agent Design
comment by Lorxus · 2024-08-25T06:02:04.883Z
> https://www.lesswrong.com/posts/r7nBaKy5Ry3JWhnJT/announcing-iliad-theoretical-ai-alignment-conference#whqf4oJoYbz5szxWc
you didn't invite me so you don't get to have all the nice things, but I did leave several good artifacts and books I recommend lying around. I invite you to make good use of them!
↑ comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2024-08-25T07:34:18.452Z
Thank you Lorxus, that's appreciated. I'm sure we can make good use of them.
Unfortunately, we get many more applications than we have spots so we have to make some tough choices. Better luck next time!
comment by Lorxus · 2024-06-05T23:30:08.149Z
https://manifold.markets/Lorxus/will-lorxus-attend-the-iliad-unconf?r=TG9yeHVz
↑ comment by Lorxus · 2024-06-10T16:27:56.834Z
Also: if I get accepted to come to ILIAD I am going to make delicious citrus sodas.[1] Maybe I could even run a pair of panels about that?[2] That seemed extremely out of scope though so I didn't put it in the application.
[1] Better than you've had before. Like, ever. Yes I am serious, I've got lost lore. Also, no limit on the flavor as long as it's a citrus fruit we can go and physically acquire on-site. Also, no need at all for a stove or heating element.
[2] There is a crucially important time-dependent step on the scale of hours, so a matched pair of panels would be the best format.
comment by Review Bot · 2024-06-05T19:12:57.282Z
The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?