Max Tegmark's new Time article on how we're in a Don't Look Up scenario [Linkpost]

post by Jonas Hallgren · 2023-04-25T15:41:16.050Z · LW · GW · 9 comments

This is a link post for https://time.com/6273743/thinking-that-could-doom-us-with-ai/


Max Tegmark has posted a Time article on AI safety, arguing that we're in a "Don't Look Up" scenario.

In a similar manner to Yudkowsky, Max went on the Lex Fridman podcast and has now posted a Time article on AI safety. (I propose we get some more people into this pipeline.)

Max, however, presents a view that is more palatable by societal standards. His reference to Don't Look Up makes this one of my favourite pieces to send to people new to AI risk, as it describes everything that your average Joe needs to know quite well. (An asteroid with a 10% chance of killing humanity is bad.)

In terms of general memetics, it will be a lot harder for someone like LeCun to come up with a genius equivalence between asteroid safety and airplane safety under this framing. (Which might be a shame, since it's one of the dumber counter-arguments I've heard.)

But who knows? He might just claim that scientists know and have always known how to use nuclear bombs to shoot the asteroid away or something.

My point with the above is that I think Max does a great job of framing the problem, and given the respect he has earned from his earlier career as a physicist, I think it would be good to use his articles more in public discussions. I also quite enjoyed how he described alignment on the Lex Fridman podcast, and even though I don't agree with everything he says, it's good enough.

9 comments

Comments sorted by top scores.

comment by 1a3orn · 2023-04-25T19:55:41.371Z · LW(p) · GW(p)

He repeats the "A recent survey showed that half of AI researchers give AI at least 10% chance of causing human extinction" claim, but the survey in question finds that this is true of respondents, who were 17% of those the survey was sent to. So this claim is misleading as phrased.

Replies from: habryka4, None, Vladimir_Nesov
comment by habryka (habryka4) · 2023-04-25T21:12:12.383Z · LW(p) · GW(p)

The survey seems to have taken reasonable steps to account for responder bias, and IIRC I at least couldn't tell any obvious direction in which respondents were biased. Katja has written some about this here: https://twitter.com/KatjaGrace/status/1643342692905254912

Response rates still seem worth mentioning alongside the survey, but I don't currently believe that a survey with a higher response rate would change the results. Might be worth a bet?

Replies from: 1a3orn
comment by 1a3orn · 2023-04-25T22:57:17.195Z · LW(p) · GW(p)

Fair enough, didn't know about those steps. That does update me towards this being representative.

comment by [deleted] · 2023-04-26T03:52:08.404Z · LW(p) · GW(p)

For reference, https://aiguide.substack.com/p/do-half-of-ai-researchers-believe is a recent blog post about the same claim. After fact-checking, the author is "not convinced" by the survey.

Replies from: None
comment by [deleted] · 2023-04-26T05:35:23.235Z · LW(p) · GW(p)

Additionally, according to one survey participant (https://twitter.com/tdietterich/status/1651096428935254016), "it was obvious from question formulation that they were not interested in an unbiased answer."

comment by Vladimir_Nesov · 2023-04-26T01:05:02.383Z · LW(p) · GW(p)

but the survey in question finds that this is true of respondents, who were 17% of those the survey was sent to

Only 20% of the respondents gave a response to that particular question (thanks to Denreik for drawing my attention to that fact [LW(p) · GW(p)], which I verified). Of the initially contacted 4271 researchers, 738 gave responses (17% of 4271), and 149 (20% of 738) gave a probability for the "extremely bad" outcome on the non-trick version of the question (without the "human inability to control" part).
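
For reference, a minimal Python sketch of the arithmetic behind those percentages, using only the figures quoted in this comment (the ~3.5% overall share is derived here, not stated above):

```python
# Figures quoted in the comment above (survey discussed in this thread):
contacted = 4271   # researchers initially contacted
responded = 738    # returned the survey at all
answered = 149     # gave a probability for the "extremely bad" outcome (non-"control" framing)

print(f"response rate: {responded / contacted:.0%}")          # ~17% of those contacted
print(f"answered this question: {answered / responded:.0%}")  # ~20% of respondents
print(f"share of all contacted: {answered / contacted:.1%}")  # ~3.5% (derived figure)
```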

comment by Algon · 2023-04-25T16:57:21.125Z · LW(p) · GW(p)

Max believes in all sorts of crazy things. I agree with a lot of them, but nevertheless, I think that makes him less of an eminently respectable scientist than he would otherwise be, given his intelligence and track record. But then again, he has published a lot of crazy stuff, and it has gained traction. So maybe he knows what he's doing when it comes to communicating to the public.

Honestly, I want stats on these articles to see what their reception is like: engagement metrics, surveys of readership, Google Trends analytics, etc. I'm tempted to go out and interview professionals on their reception of the FT article on AI.

comment by RHollerith (rhollerith_dot_com) · 2023-04-27T15:35:18.978Z · LW(p) · GW(p)

Very nice! I would've liked to have seen either a call to action (e.g., "ban all training of models larger than GPT-4") or an exploration of the emotional implications (e.g., "don't put your hope in the future, because there probably isn't much future left", which Eliezer said during his interview with Lex Fridman) but overall very helpful.

comment by awg · 2023-04-25T16:51:34.635Z · LW(p) · GW(p)

Totally agreed. This is probably the most accessible framing of the issue I've ever read up to this point!

comment by Algon · 2023-04-25T17:03:53.694Z · LW(p) · GW(p)