Nick Bostrom’s new book, “Deep Utopia”, is out today

post by PeterH · 2024-03-27T11:24:01.401Z · LW · GW · 5 comments

This is a link post for https://nickbostrom.com/deep-utopia/


5 comments

Comments sorted by top scores.

comment by Cookiecarver · 2024-03-28T14:28:39.592Z · LW(p) · GW(p)

I'm wondering what Nick Bostrom's p(doom) currently is, given the subject of this book. Nine years ago, in his lecture on his book Superintelligence, he said there was "less than 50% risk of doom". In an interview four months ago, he said it's good that the risks have received more focus recently, though still slightly less focus than would be optimal; even so, he wants to concentrate on the upsides, because he fears we might "overshoot" and not build AGI at all, which would be tragic in his opinion. So it seems he thinks the risk is lower than it used to be because of this public awareness of the risks.

Replies from: Mo Nastri, teradimich, Shiroe
comment by Mo Putera (Mo Nastri) · 2024-04-01T05:11:49.421Z · LW(p) · GW(p)

In Bostrom's recent interview with Liv Boeree, he said (I'm paraphrasing; you're probably better off listening to what he actually said)

  • p(doom)-related
    • it's actually gone up for him, not down (contra your guess, unless I misinterpreted you), at least when broadening the scope beyond AI (cf. vulnerable world hypothesis, 34:50 in video)
    • re: AI, his prob. dist. has 'narrowed towards the shorter end of the timeline - not a huge surprise, but a bit faster I think' (30:24 in video)
    • also re: AI, 'slow and medium-speed takeoffs have gained credibility compared to fast takeoffs'
    • he wouldn't overstate any of this
  • contrary to people's impression of him, he's always been writing about 'both sides' (doom and utopia) 
  • in the past it just seemed more pressing to him to call attention to 'various things that could go wrong so we could avoid these pitfalls and then we'd have plenty of time to think about what to do with this big future'
    • this reminded me of this illustration from his old paper introducing the idea of x-risk prevention as global priority: 
comment by teradimich · 2024-03-30T20:11:14.503Z · LW(p) · GW(p)

It seems that in 2014 he believed that p(doom) was less than 20%.

comment by Shiroe · 2024-03-30T10:33:51.208Z · LW(p) · GW(p)

It's very surprising to me that he would think there's a real chance of all humans collectively deciding to not build AGI, and successfully enforcing the ban indefinitely.

comment by Richard_Ngo (ricraz) · 2024-04-03T17:58:14.865Z · LW(p) · GW(p)

Just read this (though not too carefully). The book is structured with about half being transcripts of fictional lectures given by Bostrom at Oxford, about a quarter being stories about various woodland creatures striving to build a utopia, and another quarter being various other vignettes and framing stories.

Overall, I was a bit disappointed. The lecture transcripts touch on some interesting ideas, but Bostrom's style generally tries to classify and taxonomize rather than characterize (e.g. he has a long section analyzing the nature of boredom). I think this doesn't work very well when describing possible utopias, both because they'll be so different from today that it's hard to extrapolate many of our concepts that far, and because the hard part is making them viscerally compelling.

The stories and vignettes are somewhat esoteric; it's hard to extract straightforward lessons from them. My favorite was a story called The Exaltation of ThermoRex, about an industrialist who left his fortune to the benefit of his portable room heater, leading to a group of trustees spending many millions of dollars trying to figure out (and implement) what it means to "benefit" a room heater.