Otherness and Control in the Age of AGI
post by jenn (pixx) · 2024-10-21T23:02:58.854Z · 2 comments
Meet inside The Shops at Waterloo Town Square - we will congregate in the indoor seating area next to the Your Independent Grocer with the trees sticking out in the middle of the benches (pic) at 7:00 pm for 15 minutes, and then head over to my nearby apartment's amenity room. If you've been around a few times, feel free to meet up at the front door of the apartment at 7:30 instead.
Discussion
(Sorry for 3 brainy meetups in a row! After this one I swear we can all turn our brains off for a bit.)
It's spooky season, so this week we'll tackle Situational Awareness's slightly more woo cousin, Joe Carlsmith's Otherness and Control in the Age of AGI sequence, which was published from January to June this year. You can kind of think of this sequence as Meditations on Moloch... 2!
Carlsmith is a senior research analyst at Open Philanthropy, one of the biggest sources of funding for the emerging field of AI safety (though the views in the sequence are his own).
Here's what Carlsmith says about the sequence as a whole:
the goal of the series as a whole, it’s less of a linear argument and it’s more about trying to encourage a certain kind of attunement to the philosophical structure at work in some of these discussions. … As we talk about AI alignment and the future of humanity and a bunch of the issues that crop up in the context of AI, I think moving underneath that are a bunch of abstractions and philosophical views. And there’s a bunch of stuff that I want us to be able to excavate and see clearly and understand in a way that allows us to choose consciously between the different options available, as opposed to just being structured unconsciously by the ideas we’re using.
So in some sense, the series isn’t about: here’s my specific thesis. It’s more about: let’s become sensitized to the structure of a conversation, so that we can respond to it wisely.
Here's what Raemon (who you might know as one of the overlords of LessWrong, the guy who ~invented the secular solstice ceremony as rationalists practice it, or just some reply guy on LW with absurd amounts of Karma) says about it:
The sequence is very long, and each post deliberately meanders through its subject matter in a marinating, guided-meditation-y sort of way. I can't tell you a simple takeaway from the sequence, because the takeaway is something like "subtly orienting to a kind of wisdom." The marinating meditation is the point.
...
What stands out to me is that this sequence is a reflection of things I was thinking through 10 years ago – the first Winter Solstice ceremony I ran began with the quote from Lovecraft ("We live on a placid island of ignorance in the midst of black seas of infinity...") and tried to grapple with that spiritually. I feel like I have some deeper understanding of that now. I like the concept of the Lovecraft-Sagan spectrum, and the question of "okay, so, we do sure seem to live on an island of ignorance amid black seas of infinity... but, how do we wanna feel about that? What do we want to do about it?"
Format
We're doing this one partitioned style again, but instead of moving from summaries into general discussion during the meetup, I'll ask you to focus on 4 key themes while you're doing the reading, and we'll work through them as they come up during the meetup:
- Deep atheism, the fundamental worldview: a mistrust towards both Nature and "bare intelligence", the universe as inherently indifferent or even hostile, lacking any intrinsic meaning or benevolence.
- Yin/yang, the balance of action and acceptance: while the AI risk discourse often leans heavily towards yang (control, intervention), there may be value in incorporating more yin-like approaches, such as gentleness, tolerance, and a willingness to coexist with difference.
- Human values, what we're protecting: The nature of morality, the possibility of value alignment, and the ethical considerations in shaping the values of future generations (both human and artificial).
- Attunement and wisdom, how we approach these challenges: How we might cultivate a deeper, more holistic understanding of our relationship with technology and the universe, per Carlsmith.
Readings
Read the intro first, AND THEN choose an essay to read — in that order, so you know what you're getting into before you commit to an essay! (There is no obligation to do any of this; feel free to come to the meetup and hang out even if you haven't done the readings.)
Link to the intro: https://www.lesswrong.com/s/BbAvHtorCZqp97X9W/p/TtkfjskkAurvEN8Fa
Link to the text sequence: https://www.lesswrong.com/s/BbAvHtorCZqp97X9W
Link to the audio version of the sequence (scroll down past the omnibus part 1 and part 2): https://joecarlsmithaudio.buzzsprout.com/2034731/episodes
If this is your first event, read the essay in the sequence that seems the most interesting to you. If this isn't, put your name down for a chapter in the spreadsheet.
2 comments
comment by Harold (harold-1) · 2024-11-10T05:07:50.848Z
I had a twenty hour drive by myself recently, and binged Otherness in the Age of AGI. It was tremendous. Ambitious to take on the entire thing in one session!
↑ comment by jenn (pixx) · 2024-11-11T12:03:10.831Z
I have a fun crowd where half the people who showed up had already read the entire thing in their own time as it came out; that was helpful :p