Podcast with Divia Eden and Ronny Fernandez on the strong orthogonality thesis
post by DanielFilan · 2023-04-28T01:30:45.681Z · LW · GW · 1 comment
This is a link post for https://youtu.be/D5rEMNyfIWw
On my side podcast, "The Filan Cabinet", I invited Ronny Fernandez and Divia Eden to talk about the strong orthogonality thesis, and whether it's true. Seems like people here might also be interested. Podcast description below, and you can listen here.
In this episode, Divia Eden and Ronny Fernandez talk about the (strong) orthogonality thesis - that arbitrarily smart intelligences can be paired with arbitrary goals, without additional complication beyond that of specifying the goal - with light prompting from me. Topics they touch on include:
- Why aren't bees brilliant scientists?
- Can you efficiently make an AGI out of one part that predicts the future conditioned on some plans, and another that evaluates whether plans are good?
- If minds are made of smaller sub-agents with more primitive beliefs and desires, does that shape their terminal goals?
- Also, how would that even work?
- Which is cooler: rockets, or butterflies?
- What processes would make AIs terminally value integrity?
- Why do beavers build dams?
- Would these questions be easier to answer if we made octopuses really smart?
1 comment
comment by Adrien Sicart · 2023-05-05T18:53:51.498Z · LW(p) · GW(p)
My first post should be validated soon; it is a proof that the strong form does not hold: in some games, terminal alignment performs worse than an equivalent non-terminal alignment.
A hypothesis is that most goals, if they become "terminal" (held "in themselves", impervious to change), prevent evolution and mutualistic relationships with other agents.