What does Eliezer Yudkowsky think of the meaning of life now?

post by metaqualia · 2024-04-11T18:36:25.153Z

This is a question post.


https://docs.google.com/document/d/12xxhvL34i7AcjXtJ9phwelZ7IzHZ_xiz-8lGwpWxucI/edit

In this Google doc, if you go to section 2.5, you'll find a heading titled "What is the meaning of life?"

At the time, Yudkowsky concluded that the meaning of life is the singularity. The argument looks very logical to me, but the singularity has turned out to be an extreme existential threat to humanity, and he's clearly terrified of it now. So I'm curious what his new idea of meaning is.

Answers

answer by mako yass · 2024-04-11T19:05:48.886Z

If that's really the only thing he drew meaning from, and if he truly thinks that failure is inevitable, today, then I guess he must be getting his meaning from striving to fail in the most dignified possible way.

But I'd guess that like most humans, he probably also draws meaning from love, and joy. You know, living well. The point of surviving was that a future where humans survive would have a lot of that in it.
If failure were truly inevitable (though I don't personally think it is[1]), I'd recommend setting the work aside and making it your duty to just generate as much love and joy as you can with the time you have available. That's how we lived for most of history, and how most people still live today. We can learn to live that way.

  1. ^

    Reasons I don't understand how anyone could have a P(Doom) higher than 75%: Governments are showing indications of taking the problem seriously. Inspectability techniques are getting pretty good, so misalignment is likely to be detectable before deployment and a sufficiently energetic government response could be possible; sub-AGI tech is sufficient for controlling the supply chain and buying additional time, and China isn't suicidal. Major inner misalignment might just not really happen. Self-correction from natural-language instructions to "be good, you know" could be enough. And there are very deep principled reasons to expect that having two opposing AGIs debate and check each other's arguments works well.

1 comment


comment by Dalcy (Darcy) · 2024-04-11T19:02:14.534Z

If after all that it still sounds completely wack, check the date. Anything from before like 2003 or so is me as a kid, where "kid" is defined as "didn't find out about heuristics and biases yet", and sure at that age I was young enough to proclaim AI timelines or whatevs.

https://twitter.com/ESYudkowsky/status/1650180666951352320