Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures

post by Dan H (dan-hendrycks) · 2023-05-30T09:05:25.986Z · LW · GW · 77 comments

This is a link post for https://www.safe.ai/statement-on-ai-risk

Today, the AI Extinction Statement was released by the Center for AI Safety, a one-sentence statement jointly signed by a historic coalition of AI experts, professors, and tech leaders.

Geoffrey Hinton and Yoshua Bengio have signed, as have the CEOs of the major AGI labs–Sam Altman, Demis Hassabis, and Dario Amodei–as well as executives from Microsoft and Google (but notably not Meta).

The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

We hope this statement will bring AI x-risk further into the Overton window and open up discussion around AI's most severe risks. Given the growing number of experts and public figures who take risks from advanced AI seriously, we hope to improve epistemics by encouraging discussion and focusing public and international attention on this issue.

77 comments

Comments sorted by top scores.

comment by trevor (TrevorWiesinger) · 2023-05-30T21:26:41.641Z · LW(p) · GW(p)

For those who might not have noticed, this actually is historic; they're not just saying that: the top 350 people have effectively "come clean" about this, at once, in a Schelling-point **kind-of** way.

The long years of staying quiet about this and avoiding telling other people your thoughts about AI potentially ending the world, because you're worried that you're crazy or that you take science fiction too seriously: those days **might have** just ended.

This was a credible signal: none of these 350 high-level people can go back and say "no, I never actually said that AI could cause extinction and AI safety should be a top global priority", and from now on you and anyone else can cite this announcement to back up your views (instead of saying "Bill Gates, Elon Musk, and Stephen Hawking have all endorsed...") and go straight to AI timelines [? · GW] (I like sending people Epoch's Literature review).

EDIT: For the record, this might not be true, or it might not stick, and signatories retain ways of backing out or minimizing their past involvement. I do not endorse unilaterally turning this into more of a Schelling point than it was originally intended to be.

Replies from: mtrazzi
comment by Michaël Trazzi (mtrazzi) · 2023-06-01T03:56:15.104Z · LW(p) · GW(p)

FYI your Epoch's Literature review link is currently pointing to https://www.lesswrong.com/tag/ai-timelines

comment by Kaj_Sotala · 2023-05-30T13:36:58.149Z · LW(p) · GW(p)

Some notable/famous signatories that I noted: Geoffrey Hinton, Yoshua Bengio, Demis Hassabis (DeepMind CEO), Sam Altman (OpenAI CEO), Dario Amodei (Anthropic CEO), Stuart Russell, Peter Norvig, Eric Horvitz (Chief Scientific Officer at Microsoft), David Chalmers, Daniel Dennett, Bruce Schneier, Andy Clark (the guy who wrote Surfing Uncertainty), Emad Mostaque (Stability AI CEO), Lex Fridman, Sam Harris.

Edited to add: a more detailed listing from this post:

Signatories include notable philosophers, ethicists, legal scholars, economists, physicists, political scientists, pandemic scientists, nuclear scientists, and climate scientists. [...]

Signatories of the statement include:

  • The authors of the standard textbook on Artificial Intelligence (Stuart Russell and Peter Norvig)
  • Two authors of the standard textbook on Deep Learning (Ian Goodfellow and Yoshua Bengio)
  • An author of the standard textbook on Reinforcement Learning (Andrew Barto)
  • Three Turing Award winners (Geoffrey Hinton, Yoshua Bengio, and Martin Hellman)
  • CEOs of top AI labs: Sam Altman, Demis Hassabis, and Dario Amodei
  • Executives from Microsoft, OpenAI, Google, Google DeepMind, and Anthropic
  • AI professors from Chinese universities
  • The scientists behind famous AI systems such as AlphaGo and every version of GPT (David Silver, Ilya Sutskever)
  • The top two most cited computer scientists (Hinton and Bengio), and the most cited scholar in computer security and privacy (Dawn Song)
Replies from: Dweomite, joel-burget, leon-lang
comment by Dweomite · 2023-06-03T05:26:37.425Z · LW(p) · GW(p)

Bruce Schneier has posted something like a retraction on his blog, saying he focused on the comparisons to pandemics and nuclear war and not on the word "extinction".

Replies from: jeffreycaruso
comment by jeffreycaruso · 2024-03-19T03:11:02.759Z · LW(p) · GW(p)

That's a good example of my point. Instead of a petition, a more impactful document would be a survey of risks and their probability of occurring in the opinion of these notable public figures. 

In addition, there should be a disclaimer regarding who has accepted money from Open Philanthropy or any other EA-affiliated non-profit for research. 

comment by Joel Burget (joel-burget) · 2023-05-30T16:41:26.949Z · LW(p) · GW(p)

Though the statement doesn't say much, the list of signatories is impressively comprehensive. The only conspicuously missing names that immediately come to mind are Dean and LeCun (I don't know if they were asked to sign).

Replies from: Zack_M_Davis, Jayson_Virissimo, TrevorWiesinger, Sherrinford, Nathan Young
comment by Zack_M_Davis · 2023-05-31T01:36:50.370Z · LW(p) · GW(p)

The statement not saying much is essential for getting an impressively comprehensive list of signatories: the more you say, the more likely it is that someone whom you want to sign will disagree.

Replies from: Benito
comment by Ben Pace (Benito) · 2023-05-31T02:00:07.550Z · LW(p) · GW(p)

Relatedly, when we made DontDoxScottAlexander.com, we tried not to wade into a bigger fight about the NYT and other news sites, nor to make it an endorsement of Scott and everything he's ever written/done. It just focused on the issue of not deanonymizing bloggers when revealing their identity is a threat to their careers or personal safety and there isn't a strong ethical reason to do so. I know more high-profile people signed it because the wording was conservative in this manner.

comment by Jayson_Virissimo · 2023-05-31T05:05:40.788Z · LW(p) · GW(p)

IMO, Andrew Ng is the most important name that could have been there but isn't. Virtually everything I know about machine learning I learned from him, and I think there are many others for whom that is true.

Replies from: LosPolloFowler, zchuang
comment by Stephen Fowler (LosPolloFowler) · 2023-06-01T08:26:35.328Z · LW(p) · GW(p)

For anyone who wasn't aware, both Ng and LeCun have strongly indicated that they don't believe existential risks from AI are a priority. Summary here

You can also check out Yann's Twitter.

Ng believes the problem is "50 years" down the track, and Yann believes that many concerns AI Safety researchers have are not legitimate. Both of them view talk about existential risks as distracting and believe we should address problems that can be seen to harm people in today's world.
 

comment by zchuang · 2023-06-06T06:23:54.317Z · LW(p) · GW(p)

He posted a request on Twitter to talk to people who feel strongly here.

comment by trevor (TrevorWiesinger) · 2023-05-31T01:00:36.136Z · LW(p) · GW(p)

I'd say the absence of names from Facebook, Amazon, and Apple in general is worrying, as is the fact that there were only two from Microsoft. Apple's absence, in particular, is what keeps me up at night.

Replies from: MakoYass, dr_s, joel-burget
comment by mako yass (MakoYass) · 2023-06-01T05:40:13.611Z · LW(p) · GW(p)

Does anyone see any hardware names?

What is it about hardware? I've never seen anyone from there express concern.

I wonder if it's that, for anyone else in AI, their research is either fairly neutral - not accelerating towards AGI - or, if it is in AGI, it could be repurposed towards alignment. But if your identity is rooted in hardware, if you admit to any amount of extinction risk, there's no way for you to keep your job and stay sane?

comment by dr_s · 2023-06-01T07:38:22.938Z · LW(p) · GW(p)

Yann LeCun at least is very, very loudly and repeatedly open on Twitter about considering X-risk a bunch of doomerist nonsense, so we know where he (and thus, Facebook) stands.

comment by Joel Burget (joel-burget) · 2023-06-01T00:52:30.332Z · LW(p) · GW(p)

We don't hear much about Apple in AI -- curious why you rank them so important.

comment by Sherrinford · 2023-05-31T06:17:57.015Z · LW(p) · GW(p)

Here is the coverage on the "most frequently quoted online media product in Germany": Spiegel.de 

I mention this mainly to note that even if you get close to a consensus among experts, a newspaper website may still write a paragraph about it that gives the impression that the distribution of expert opinion is completely unclear: "However, there is also disagreement in the research community. Meta's AI chief scientist Yann LeCun, for example, who received the Turing Award together with Hinton and Bengio, has not wanted to sign any of the appeals so far. He sometimes describes the warnings as “AI doomism”" (linking to a twitter thread by LeCun).

 

To be clear, the statement and its coverage are very impressive.

comment by Nathan Young · 2023-06-04T08:10:39.872Z · LW(p) · GW(p)

Seems extremely likely (90%) that either someone asked them to sign or people thought it very unlikely they would. I'd guess the second. LeCun doesn't look like he'd want to sign something like this.

comment by Leon Lang (leon-lang) · 2023-06-02T03:57:51.049Z · LW(p) · GW(p)

https://twitter.com/ai_risks/status/1664323278796898306?s=46&t=umU0Z29c0UEkNxkJx-0kaQ

Apparently Bill Gates signed.

Stating the obvious: Do we expect that Bill Gates will donate money to prevent extinction from AI?

Replies from: Daniel_Eth
comment by Daniel_Eth · 2023-06-05T12:24:38.005Z · LW(p) · GW(p)

Gates has been publicly concerned about AI X-risk since at least 2015, and he hasn't yet funded anything to try to address it (at least that I'm aware of), so I think it's unlikely that he's going to start now (though who knows – this whole thing could add a sense of respectability to the endeavor that pushes him to do it).

comment by Wei Dai (Wei_Dai) · 2023-06-01T16:12:39.950Z · LW(p) · GW(p)

Is it just me, or is it nuts that a statement this obvious could have gone outside the Overton window, and is now worth celebrating when it finally (re?)enters?

How is it possible to build a superintelligence at acceptable risk while this kind of thing can happen? What if there are other truths important to safely building a superintelligence that nobody (or very few people) acknowledges because they are outside the Overton window?

Now that AI x-risk is finally in the Overton window, what's your vote for the most important and obviously true statement that is still outside it (i.e., that almost nobody is willing to say or is interested in saying)? Here are my top candidates:

  1. Dying of old age, and the physical and mental deterioration that comes with it, are bad and worth substantial coordinated effort to prevent.
  2. It's possible to make serious irreversible mistakes due to having incorrect answers to important philosophical questions. In fact, this is likely, considering how much confusion and disagreement there is on many philosophical questions that seem obviously important.
Replies from: daniel-kokotajlo, dr_s
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-06-04T15:09:52.086Z · LW(p) · GW(p)

Why is 1 important? It seems like something we can defer discussion of until after (if ever) alignment is solved, no?

2 is arguably in that category also, though idk.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2023-06-05T03:41:21.498Z · LW(p) · GW(p)

Why is 1 important? It seems like something we can defer discussion of until after (if ever) alignment is solved, no?

If aging were solved, or looked like it would be solved within the next few decades, it would make efforts to stop or slow down AI development less problematic, both practically and ethically. I think some AI accelerationists might be motivated directly by the prospect of dying/deterioration from old age, and/or view lack of interest/progress on that front as a sign of human inadequacy/stagnation (contributing to their antipathy towards humans). At the same time, the fact that pausing AI development has a large cost in lives of current people means that you have to have a high p(doom) or credence in utilitarianism/longtermism to support it (and risk committing a kind of moral atrocity if you turn out to be wrong).

2 is arguably in that category also, though idk.

2 is important because as tech/AI capabilities increase, the possibilities to "make serious irreversible mistakes due to having incorrect answers to important philosophical questions" seem to open up exponentially. Some examples:

  • premature value lock-in
  • value drift
  • handing over too much control/resources to alien/unaligned agents due to negotiation mistakes
  • mistakes related to commitment races
  • the process of creating/aligning AI might be unethical or create a costly obligation
  • failure to prevent mindcrime inside AIs
  • intentionally doing horrible things at astronomical scale due to having wrong values/philosophies

If your point is that we could delegate solving these problems to aligned AI once we have them, my worry is that AI, including aligned AI, will be much better at creating new philosophical problems (opportunities to make mistakes) than at solving them. The task of reducing this risk (e.g., by solving metaphilosophy or otherwise making sure AIs' philosophical abilities keep up with or outpace their other intellectual abilities) seems super neglected, in part because very few people seem to acknowledge the importance of avoiding errors like the ones listed above.

(BTW I was surprised to see your skepticism about 2, since it feels like I've been talking about it on LW like a broken record, and I don't recall seeing any objections from you before. Would be curious to know if anything I said above is new to you, or you've seen me say similar things before but weren't convinced.)

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-06-05T04:02:01.431Z · LW(p) · GW(p)

Something like 2% of people die every year, right? So even if we ignore the value of future people and all sorts of other concerns and just focus on whether currently living people get to live or die, it would be worth delaying a year if we could thereby decrease p(doom) by 2 percentage points. My p(doom) is currently 70% so it is very easy to achieve that. Even at 10% p(doom), which I consider to be unreasonably low, it would probably be worth delaying a few years.
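Spelling out the implicit back-of-the-envelope model (illustrative notation, not from the comment): if a fraction $f$ of the $N$ currently living people dies during a one-year delay, the delay costs roughly $f \cdot N$ of those lives, while a reduction $\Delta p$ in p(doom) saves roughly $\Delta p \cdot N$ of them in expectation, so on this narrow count the delay breaks even when

$$\Delta p \gtrsim f \approx 0.02,$$

i.e. roughly two percentage points of p(doom) per year of delay, which is the figure used above.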

Re: 2: Yeah I basically agree. I'm just not as confident as you are I guess. Like, maybe the answers to the problems you describe are fairly objective, fairly easy for smart AIs to see, and so all we need to do is make smart AIs that are honest and then proceed cautiously and ask them the right questions. I'm not confident in this skepticism and could imagine becoming much more convinced simply by thinking or hearing about the topic more.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2023-06-05T22:03:31.923Z · LW(p) · GW(p)

Even at 10% p(doom), which I consider to be unreasonably low, it would probably be worth delaying a few years.

Someone with 10% p(doom) may worry that if they got into a coalition with others to delay AI, they couldn't control the delay precisely, and it could easily become more than a few years. Maybe it would be better not to take that risk, from their perspective.

And lots of people have p(doom)<10%. Scott Aaronson just gave 2% for example, and he's probably taken AI risk more seriously than most (currently working on AI safety at OpenAI), so probably the median p(doom) (or effective p(doom) for people who haven't thought about it explicitly) among the whole population is even lower.

I’m just not as confident as you are I guess. Like, maybe the answers to the problems you describe are fairly objective, fairly easy for smart AIs to see, and so all we need to do is make smart AIs that are honest and then proceed cautiously and ask them the right questions.

I think I've tried to take into account uncertainties like this. It seems that in order for my position (that the topic is important and too neglected) to be wrong, one has to reach high confidence that these kinds of problems will be easy for AIs (or humans or AI-human teams) to solve, and I don't see how that kind of conclusion could be reached today. I do have some specific arguments [LW · GW] for why the AIs we'll build may be bad at philosophy, but I think those are not very strong arguments so I'm mostly relying on a prior that says we should be worried about and thinking about this until we see good reasons not to. (It seems hard to have strong arguments either way today, given our current state of knowledge about metaphilosophy and future AIs.)

Another argument for my position is that humans have already created a bunch of opportunities for ourselves to make serious philosophical mistakes, like around nuclear weapons, farmed animals, AI, and we can't solve those problems by just asking smart honest humans the right questions, as there is a lot of disagreement between philosophers on many important questions.

I’m not confident in this skepticism and could imagine becoming much more convinced simply by thinking or hearing about the topic more.

What's stopping you from doing this, if anything? (BTW, beyond the general societal level of neglect, I'm especially puzzled by the lack of interest/engagement on this topic from the many people in EA with formal philosophy backgrounds. If you're already interested in AI and x-risks and philosophy, how is this not an obvious topic to work on or think about?)

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-06-05T23:14:57.079Z · LW(p) · GW(p)

I guess I just think it's pretty unreasonable to have p(doom) of 10% or less at this point, if you are familiar with the field, timelines, etc. 

I totally agree the topic is important and neglected. I only said "arguably" deferrable; I have less than 50% credence that it is deferrable. As for why I'm not working on it myself, well, aaaah I'm busy idk what to do aaaaaaah! There's a lot going on that seems important. I think I've gotten wrapped up in more OAI-specific things since coming to OpenAI, and maybe that's bad & I should be stepping back and trying to go where I'm most needed even if that means leaving OpenAI. But yeah. I'm open to being convinced!

Replies from: Wei_Dai, Wei_Dai, Wei_Dai, Wei_Dai
comment by Wei Dai (Wei_Dai) · 2023-06-06T00:11:43.669Z · LW(p) · GW(p)

I guess part of the problem is that the people who are currently most receptive to my message are already deeply enmeshed in other x-risk work, and I don't know how to reach others for whom the message might be helpful (such as academic philosophers just starting to think about AI?). If on reflection you think it would be worth spending some of your time on this, one particularly useful thing might be to do some sort of outreach/field-building, like writing a post or paper describing the problem, presenting it at conferences, and otherwise attracting more attention to it.

(One worry I have about this is, if someone is just starting to think about AI at this late stage, maybe their thinking process just isn't very good, and I don't want them to be working on this topic! But then again maybe there's a bunch of philosophers who have been worried about AI for a while, but have stayed away due to the Overton window thing?)

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-06-06T00:33:11.570Z · LW(p) · GW(p)

Somehow there are 4 copies of this post

comment by dr_s · 2023-06-01T18:17:25.302Z · LW(p) · GW(p)

1 is an obvious one that many would deny out of sheer copium. Though of course "not dying" has to go hand in hand with "not aging" or it would rightly be seen as torture.

2 seems vague enough that I don't think people would vehemently disagree. If you get more specific, for example by suggesting that there are absolutely correct or wrong answers to ethical questions, then you'll get disagreement (including mine, for that matter, on that specific hypothetical claim).

comment by Algon · 2023-05-30T13:55:10.297Z · LW(p) · GW(p)

Disclaimer: I've never been to an academic conference
EDIT: Also, I'm just thinking out loud here. Not stating my desire to start a conference, just thinking about what can make academics feel like researching alignment is normal.

Those are some big names. I wonder if arranging a big AI safety conference w/ these people would make worrying about alignment feel more socially acceptable to a lot of researchers. It feels to me like a big part of making thinking about alignment socially acceptable is to visibly think about alignment in socially acceptable ways. In my imagination, you have conferences on important problems in academia. 

You talk about the topic there with your colleagues and impressive people. You also go there to catch up with friends, and have a good time. You network. You listen to big names talking about X, and you wonder which of your other colleagues will also talk about X in the open. Dismissing it no longer feels like it will go uncontested. Maybe you should take care when talking about X? Maybe even wonder if it could be true. 

Or on the flip side, you wonder if you can talk about X without your colleagues laughing at you. Maybe other people will back you up when you say X is important. At least, you can imply the big names will. Oh look, a big name X-thinker is coming round the corner. Maybe you can start up a conversation with them in the open. 

comment by Soroush Pour (soroush-pour) · 2023-06-01T13:23:37.666Z · LW(p) · GW(p)

There have been some strong criticisms of this statement, notably by Jeremy Howard et al. here. I've written a detailed response to the criticisms here:

https://www.soroushjp.com/2023/06/01/yes-avoiding-extinction-from-ai-is-an-urgent-priority-a-response-to-seth-lazar-jeremy-howard-and-arvind-narayanan/

Please feel free to share with others who may find it valuable (e.g. skeptics of AGI x-risk).

comment by Jan_Kulveit · 2023-05-31T20:19:58.831Z · LW(p) · GW(p)

I feel somewhat frustrated by the execution of this initiative. As far as I can tell, no new signatures have been published since at least one day before the public announcement. This means that even if I asked someone famous (at least in some subfield or circles) to sign, and the person signed, their name is not on the list, leading to understandable frustration on their part. (I already got a piece of feedback in the direction of "the signatories are impressive, but the organization running it seems untrustworthy".)

Also, if the statement is intended to serve as a beacon, allowing people who have previously been quiet about AI risk to connect with each other, it's essential for signatures to be published. It's nice that Hinton et al. signed, but for many people in academia it would be practically useful to know who from their institution signed - it's unlikely that most people will find collaborators in Hinton, Russell or Hassabis.

I feel even more frustrated because this is the second time a similar effort has been executed by the x-risk community while lacking the basic operational competence to accept and verify signatures. So, I make this humble appeal and offer to the organizers of any future public statements collecting signatures: if you are able to write a good statement and secure the endorsement of some initial high-profile signatories, but lack the ability to accept, verify and publish more than a few hundred names, please reach out to me - it's not that difficult to find volunteers for this work.

 

Replies from: ThomasWoodside
comment by ThomasW (ThomasWoodside) · 2023-06-01T01:50:43.258Z · LW(p) · GW(p)

Hi Jan, I appreciate your feedback.

I've been helping out with this and I can say that the organizers are working as quickly as possible to verify and publish new signatures. New signatures have been published since the launch, and additional signatures will continue to be published as they are verified. There is a team of people working on it right now, and there has been since launch.

The main obstacles to extremely swift publication are:

  • First, determining who meets our bar for name publication. We think the letter will have greater authority (and coordination value) if all names are above a certain bar, and so some effort needs to be put into determining whether signatories meet that bar.
  • Second, as you mention, verification. Prior to launch, CAIS built an email verification system that ensures that signatories must verify their work emails in order for their signature to be valid. However, this has required some tweaks, such as making the emails more attention-grabbing and adding some language on the form itself that makes clear that people should expect an email (before these tweaks, some people weren't verifying their emails).
  • Lastly, even with verification, some submissions are still possibly fake (from email addresses that we aren't sure belong to the real person) and need to be further assessed.

These are all obstacles that simply require time to address, and the team is working around the clock. In fact, I'm writing this comment on their behalf so that they can focus on the work they're doing.  We will publish all noteworthy signatures as quickly as we can, which should be within a matter of days (as I said above, some have already been published and this is ongoing). We do take your feedback that perhaps we should have hired more people so that verification could be swifter. 

In response to your feedback, we have just added language in the form and email that makes clear signatures won't show up immediately so that we can verify them. This might seem very obvious, but when you are running something with so many moving parts as this entire process has been, it is easy to miss things.

Thank you again for your feedback.

Replies from: Jan_Kulveit
comment by Jan_Kulveit · 2023-06-01T07:41:45.121Z · LW(p) · GW(p)

Thanks for the reply, and for the work - it's great that signatures are being added. Before, I had checked the bottom of the list and it seemed to be either the same or with very few additions.

I do understand that verification of signatures requires some amount of work. In my view, having more people (they could be volunteers) to quickly process the expected initial surge of signatures would have been better; attention spent on this will drop fast.
 

comment by Vishrut Arya (vishrut-arya) · 2023-05-30T17:57:31.015Z · LW(p) · GW(p)

Any explanations for why Nick Bostrom has been absent, arguably notably, from recent public alignment conversations (particularly since ChatGPT)?

He's not on this list (yet other FHI members, like Toby Ord, are). He wasn't on the FLI open letter either, but I could understand why he might've avoided endorsing that letter given its much wider scope.

Replies from: habryka4
comment by habryka (habryka4) · 2023-05-30T19:09:43.908Z · LW(p) · GW(p)

Almost certainly related to that email controversy from a few months ago. My sense is people have told him (or he has himself decided) to take a step back from public engagement. 

I think I disagree with this, but it's not a totally crazy call, IMO.

Replies from: scipio , dr_s, vishrut-arya
comment by ROM (scipio ) · 2023-06-01T13:04:06.347Z · LW(p) · GW(p)

I think this explains his absence from this + the FLI letter. 

He still seems to be doing public outreach though: see his interview with the NY Times, interview with RTE, Big Think video, and interview with Analytics India Magazine.

None of these interviews have discussed the email. 

comment by dr_s · 2023-06-01T18:14:00.914Z · LW(p) · GW(p)

Yeah, beyond that honestly I would worry that his politics in general might do even more to polarize the issue in an undesirable way. I think it's not necessarily a bad call in the current atmosphere.

comment by Vishrut Arya (vishrut-arya) · 2023-05-31T18:20:02.484Z · LW(p) · GW(p)

Aha. Ugh, what an unfortunate sequence of events.

comment by Vladimir_Nesov · 2023-05-30T23:29:03.365Z · LW(p) · GW(p)

It's a step, likely one that couldn't be skipped. Still, it falls just short of actually acknowledging a nontrivial probability of AI-caused human extinction, and the distinction between extinction and lesser global risks, which leave the availability of second chances at doing better next time. Nuclear war can't cause extinction, so it's not properly alongside AI x-risk. Engineered pandemics might eventually get extinction-worthy, but even that real risk is less urgent.

Replies from: dr_s
comment by dr_s · 2023-06-01T18:11:34.489Z · LW(p) · GW(p)

Eh, I think this is really splitting hairs. I have already seen multiple people using the lack of reference to climate change to dismiss the whole thing. Not every system of values places extinction on its own special pedestal (though I think in this case "biological omnicide" might be more accurate: unlike pandemics, AI could also kill the rest of non-human life). But in terms of expected loss of life AI could be even with those other things if you consider them more likely.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-06-01T20:07:34.495Z · LW(p) · GW(p)

Not every system of values places extinction on its own special pedestal [...] in terms of expected loss of life AI could be even with those other things

Well this is wrong, and I'm not feeling any sympathy for a view that it's not. An eternity of posthuman growth after recovering from a civilization-spanning catastrophe really is much better than lights out, for everyone, forever.

I agree that there are a lot of people who don't see this, and will dismiss a claim that expresses this kind of thing clearly. In mainstream comments to the statement, I've seen frequent claims that this is about controlling the narrative and ensuring regulatory lock-in for the big players. From the worldview where AI x-risk is undoubtedly pure fiction, the statement sounds like Very Serious People expressing Concern for the Children. Whereas if object level claims were to be stated more plainly, this interpretation would crumble, and the same worldview would be forced to admit that the people signing the claim are either insane, or have a reason for saying these things that is not Controlling the Narrative. It's the same thing as with AI NotKillEveryoneism vs. AI Safety.

Replies from: dr_s
comment by dr_s · 2023-06-01T20:59:36.588Z · LW(p) · GW(p)

Well this is wrong, and I'm not feeling any sympathy for a view that it's not. An eternity of posthuman growth after recovering from a civilization-spanning catastrophe really is much better than lights out, for everyone, forever.

You can't really say anything is objectively wrong when it comes to morals, but also, I generally think that evaluating the well-being of potential entities-to-be leads to completely nonsensical moral imperatives like the Repugnant Conclusion. Since no one experiences all of the utility at the same time, I think "expected utility probability distribution" is a much more sensible metric (as in, suppose you were born as a random sentient in a given time and place: would you be willing to take the bet?).
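For concreteness, the contrast here can be written roughly as follows (illustrative notation, not from the comment): total utilitarianism ranks outcomes by

$$U_{\text{total}}=\sum_{i=1}^{N}u_i,$$

which can always be increased by adding enough lives barely worth living (hence the Repugnant Conclusion), whereas the "random sentient" criterion ranks them by the expected utility of a randomly chosen person,

$$U_{\text{avg}}=\frac{1}{N}\sum_{i=1}^{N}u_i,$$

which does not grow just by adding more people.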

That said, I do think extinction is worse than just a lot of death, but that's as a function of the people who are about to witness it and know they are the last. In addition, I think omnicide is worse than human extinction alone because I think animals and the rest of life have moral worth too. But I wouldn't blame people for simply considering extinction as 8 billion deaths, which is still A LOT of deaths anyway. It's a small point that's not worth arguing. We have wide enough uncertainties on the probability of these risks anyway that we can't really put fixed numbers to the expected harms, just vague orders of magnitude. While we may describe them as if they were numerical formulas, these evaluations really are mostly qualitative; enough uncertainty makes numbers almost pointless. Suffice to say, if someone considers, say, a 5% chance of nuclear war a bigger worry than a 1% chance of AI catastrophe, I don't think I can make a strong argument for them being dead wrong.

In mainstream comments to the statement, I've seen frequent claims that this is about controlling the narrative and ensuring regulatory lock-in for the big players. From the worldview where AI x-risk is undoubtedly pure fiction, the statement sounds like Very Serious People expressing Concern for the Children.

I agree this makes no sense, but it's a completely different issue. That said, I think the biggest uncertainty re: X-risk remains whether AGI is really as close as some estimate. But this aspect is IMO irrelevant when judging the advisability of actively trying to build AGI. Either it's possible, and then it's dangerous, or it's still way far off, and then it's a waste of money and precious resources and ingenuity.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-06-01T21:30:16.271Z · LW(p) · GW(p)

It's not that complicated [LW · GW]. There is a sense in which these claims are objective [LW · GW] (even as the words we use to make them are 2-place words [LW · GW]), to the same extent as factual claims; both are seen through my own mind and reified as platonic models. Though morality is an entity that wouldn't be channeled in the physical world without people, it actually is channeled, the same as the Moon actually is occasionally visible in the sky.

as a function of the people who are about to witness it and know they are the last

My point is not about anyone's near term subjective experience, but about what actually happens in the distant future.

But this aspect is IMO irrelevant when judging the advisability of actively trying to build AGI. Either it's possible, and then it's dangerous, or it's still way far off, and then it's a waste of money and precious resources and ingenuity.

It's their resources and ingenuity. If there is no risk, it's not our business to tell them not to waste them.

Replies from: dr_s
comment by dr_s · 2023-06-01T22:14:58.822Z · LW(p) · GW(p)

My point is not about anyone's near term subjective experience, but about what actually happens in the distant future.

I really, really don't care about what happens in the distant future compared to what happens now, to humans that actually exist and feel. I especially don't care about there being an arbitrarily high number of humans. I don't think a trillion humans is any better than a million as long as:

  1. they are happy

  2. whatever trajectory led to those numbers didn't include any violent or otherwise painful mass death event, or other torturous state.

There really is nothing objective about total sum utilitarianism; and in fact, as far as moral intuitions go, it's not what most people follow at all. With things like "actually death is bad" you can make a very cogent case: people, day to day, usually don't want to die, therefore there never is a "right moment" in which death is not a violation; if there were, people could still commit suicide anyway, thus death by old age or whatever else is just bad. That's a case where you can invoke the "it's not that complicated" argument, IMO. Total sum utilitarianism is not; I find it a fairly absurd ethical system, ripe for exploits so ridiculous and consequences so blatantly repugnant that it really isn't very useful at all.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-06-01T23:22:40.698Z · LW(p) · GW(p)

I agree, the number of humans and a lot of other utilitarian aims are Goodharting on bad proxies. The distinction I was gesturing at is not about the amount of what happens, but about perception vs. reality. And a million humans is very different from zero anyone, even if the end was not anticipated nor perceived.

Replies from: dr_s
comment by dr_s · 2023-06-02T05:38:42.655Z · LW(p) · GW(p)

Ok, let's consider two scenarios:

  1. humanity goes extinct gradually and voluntarily via a last generation that simply doesn't want to reproduce and is cared for by robots to the end, so no one suffers particularly in the process;

  2. humanity is locked into a future of trillions living in inescapable torture, until the heat death of the universe.

Which is better? I would say 1 is. There are things worse than extinction (and some of them are on the table with AI too, theoretically). And anyway you should consider that with how many "low hanging fruit" resources we've used, there's fair odds that if we're knocked back into the millions by a pandemic or nuclear war now we may never pick ourselves back again. Stasis is better than immediate extinction but if you care about the long term future it's also bad (and implies a lot more suffering because it's a return to the past).

Replies from: Lichdar, Vladimir_Nesov
comment by Lichdar · 2023-06-02T11:45:54.243Z · LW(p) · GW(p)

As a normie, I would say 1 is. Depending on how some people see things, 2 is the past - which I disagree with and at any rate would say was the generator of an immense quantity of joy, love and courage along with the opposite qualities of pain, mourning and so on.

So for me, I would indeed say that my morality puts extinction on a higher pedestal than anything else (and I am also fully against mind uploading or leaving humans with nothing to do).

Just a perspective from a small brained normie.

Replies from: dr_s
comment by dr_s · 2023-06-02T13:02:40.748Z · LW(p) · GW(p)

I mean, 2 is not the past in a purely numerical sense (I wouldn't say we ever hit trillions of total humans). But the problem is also the inescapable part, which assumes e.g. permanent disempowerment. That's not a feature of the past.

I'm not sure what your "I would say 1" meant - I asked which was better, but then you said you think extinction is its own special thing. Anyway I don't disagree that extinction is a special kind of bad, but it is IMO still in relation to people living today. I'd rather not die, but if I have to die, I'd rather die knowing the world goes on and the things I did still have purpose for someone. Extinction puts an end to that. I want to root that morality in the feelings of present people because I feel like assigning moral worth to not-yet-existing ones completely breaks any theory. For example, however many actual people exist, there's always an infinity of potential people that don't. In addition, it allows for justifying making existing people suffer for the sake of creating more people later (e.g. intensive factory farming of humans until we reach population 1 trillion ASAP, or however many is needed to justify the suffering inflicted via created utility), which is just absurd.

Replies from: Lichdar
comment by Lichdar · 2023-06-02T14:33:03.011Z · LW(p) · GW(p)

I would just say, as a normie, that these extensive thought experiments about factory humans mostly don't concern me - though I could see a lot of justification of suffering to allow humanity to exist for, say, another 200 billion years. People have always suffered to some extent to do anything; and certainly having children entails some trade-offs, but existence itself is worth it.

But mostly the idea of a future without humanity, or even one without our biology, just strikes me with such abject horror that it can't be countenanced.

I have children myself and I do wonder if this is a major difference. To imagine a world where they have no purpose leaves me quite aghast, and I feel this would reflect the thinking of the majority of humans.

And as such, hopefully drive policy which will, in my best futures, drive humanity forward. I see a good end as humanity spreading out into the stars and becoming inexhaustible, perhaps turning into multiple different species but ultimately, still with the struggles, suffering and triumphs of who we are.

I've seen arguments here and there about how the value drift from, say, a hunter-gatherer to us would horrify us, but I don't see that. I see a hunter-gatherer and relate to him on a basic level. He wants food, he will compete for a mate, and one day he will die and his family will seek comfort from each other. My work will be different from his but I comprehend him, and as writings like Still A Pygmy show, they comprehend us.

The descriptions of things like mind uploading or accepting the extinction of humanity strike me with such wildness that it's akin to a vast, terrifying revulsion. It's Lovecraftian horror and, I think, very far from any moral goodness to inflict upon the majority of humanity.

Replies from: dr_s
comment by dr_s · 2023-06-03T07:49:38.812Z · LW(p) · GW(p)

My point isn't that extinction is a-ok, but rather that you could "price it" as the total sum of all human deaths (which is the lower bound, really) and there would still be a case for that. It still remains something very much to avoid! I think it's worse than that, but I also don't think it's worse than everything else. If the choice were between going extinct now or condemning future generations to lives of torture, I'd pick extinction as the lesser evil. And conversely I am also very sceptical of extremely long term reasoning, especially if used to justify present suffering. You bring up children, but those are still very much real and present. You wouldn't want them to suffer for the sake of hypothetical 40th century humans, I assume.

Replies from: Lichdar
comment by Lichdar · 2023-06-03T13:57:16.643Z · LW(p) · GW(p)

Depends on the degree of suffering, to be totally honest - obviously I'm fine with them suffering to some extent, which is why we drive them to behave, etc., so they can have better futures, and sometimes conjoin them to have children so that we can continue the family line.

I think my answer actually is yes: if hypothetically their suffering allows the existence of 40th century humans, it's pretty noble and yes, I'd be fine with it.

Replies from: dr_s
comment by dr_s · 2023-06-03T14:12:25.740Z · LW(p) · GW(p)

if hypothetically their suffering allows the existence of 40th century humans, it's pretty noble and yes, I'd be fine with it

So supposing everything goes all right, for every additional human born today there might be millions of descendants in the far future. Does that mean we have a moral duty to procreate as much as possible? I mean, the increased stress or financial toll surely doesn't hold a candle to the increased future utility experienced by so many more humans!

To me it seems this sort of reasoning is bunk. Extinction is an extreme of course but every generation must look first and foremost after the people under its own direct care, and their values and interests. Potential future humans are for now just that, potential. They make no sense as moral subjects of any kind. I think this extends to extinction, which is only worse than the cumulative death of all humans insofar as current humans wish for there to be a future. Not because of the opportunity cost of how non-existing humans will not get to experience non-existing pleasures.

Replies from: Lichdar
comment by Lichdar · 2023-06-03T19:25:59.987Z · LW(p) · GW(p)

I apologize for being a normie, but I can't accept anything that involves the non-existence of humanity, and would indeed accept an enormous amount of suffering if those were the options.

comment by Vladimir_Nesov · 2023-06-02T06:08:02.186Z · LW(p) · GW(p)

there's fair odds that if we're knocked back into the millions by a pandemic or nuclear war now we may never pick ourselves back again

Humanity went from Göbekli Tepe to today in 11K years. I doubt that, even after forgetting all modern learning, it would take even a million years to generate knowledge and technologies for new circumstances. I hear the biosphere can last about a billion years more. (One specific path is to use low-tech animal husbandry to produce smarter humans. This might even solve AI x-risk by making humanity saner.)

Replies from: dr_s
comment by dr_s · 2023-06-02T06:30:59.441Z · LW(p) · GW(p)

I disagree it's that easy. It's not a long trajectory of inevitability; like with evolution, there are constraints. Each step generally has to be aligned, on its own, with the economic incentives of the time. See how, for example, steam power was first developed to drive pumps removing water from coal mines; the engines were so inefficient that it was only cost effective if you didn't also need to transport the coal. Now that we've used up all the surface coal and oil, not to mention screwed up the climate quite a bit for the next few millennia, conditions are different. I think technology is less a uniform progression and more a mix of "easy" and "hard" events (as in the grabby aliens paper, if you've read it), and by exhausting those resources we've made things harder. I don't think climbing back up would be guaranteed.

(One specific path is to use low-tech animal husbandry to produce smarter humans. This might even solve AI x-risk by making humanity saner.)

This, IMO, even if it were possible, would solve nothing while potentially causing an inordinate amount of suffering. And it's also one of those super long term investments that don't align with almost any short-term incentive. I say it solves nothing because intelligence wouldn't be the bottleneck; if they had any books left lying around they'd have a road map to tech, and I really don't think we've missed some obvious low-tech trick that would be relevant to them. The problem is having the materials to do those things and having immediate returns.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-06-02T06:49:15.528Z · LW(p) · GW(p)

Intelligence is also a thing that enables perceiving returns that are not immediate, as well as maintenance of more complicated institutions that align current incentives towards long term goals.

Replies from: dr_s
comment by dr_s · 2023-06-02T13:05:31.916Z · LW(p) · GW(p)

This isn't a simple marshmallow challenge scenario. If you have a society that has needs and limited resources, it's not inherently "smart" to sacrifice those significantly for the sake of a long term project that might e.g. not benefit anyone who's currently living. It's a difference in values at that point; even if you're smart enough, you can still not believe it to be right.

For example, suppose in 1860 everyone knew and accepted global warming as a risk. Should they, or would they, have stopped using coal and natural gas in order to save us this problem? Even if it meant lesser living standards for themselves, and possibly more death?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-06-02T13:16:56.413Z · LW(p) · GW(p)

it's not inherently "smart" to sacrifice those significantly for the sake of a long term project

Your argument was that this hopeless trap might happen after a catastrophe and it's so terrible that maybe it's as bad as or worse than everyone dying quickly. If it's so terrible, in any decision-relevant sense, then it's also smart to plot towards projects that dig humanity out of the trap.

Replies from: dr_s
comment by dr_s · 2023-06-03T07:54:08.098Z · LW(p) · GW(p)

No, sorry, I may have conveyed that wrong and mixed up two arguments. I don't think stasis is straight up worse than extinction. For good or bad, people lived in the Middle Ages too. My point was more that if your guiding principle is "can we recover", then there are more things than extinction to worry about. If you aspire to some kind of future in which humans grow exponentially, then you won't get it if we're knocked back to preindustrial levels and can't recover.

I don't personally think that's a great metric or goal to adopt; I'm just following the logic to its endpoint. And I also expect that many smart people in the stasis wouldn't plot with only that sort of long term benefit in mind. They'd seek relatively short term returns.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-06-03T08:03:29.383Z · LW(p) · GW(p)

I see. Referring back to your argument was more an illustration that this motivation exists. If a society forms around the motivation, at any one time in the billion years, and selects for intelligence to enable nontrivial long term institution design, that seems sufficient to escape stasis.

comment by Alan E Dunne · 2023-05-31T21:25:37.305Z · LW(p) · GW(p)

A skeptical reaction with one expression of support: https://statmodeling.stat.columbia.edu/2023/05/31/jurassic-ai-extinction/

comment by Richard_Kennaway · 2023-06-01T12:05:30.856Z · LW(p) · GW(p)

I have to wonder what people — both the signatories and all the people suddenly taking this seriously — have in mind by "risk of extinction". The discussions I've seen have mentioned things like deepfakes, autonomous weapons, designer pathogens, AI leaving us nothing to do, and algorithmic bias. No-one I have heard is talking about Eliezer's "you are made of atoms that the AI wants to use for something else".

Replies from: dr_s
comment by dr_s · 2023-06-01T18:19:38.153Z · LW(p) · GW(p)

I honestly think that's for the best because I don't believe super fast takeoff FOOM scenarios are actually realistic. And in any slower takeoff existential risk looks more like a muddled mix of human and AI driven processes than "nanomachines, everyone dies".

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-06-01T20:20:09.451Z · LW(p) · GW(p)

The discussions I've seen have mentioned things like deepfakes, autonomous weapons, designer pathogens, AI leaving us nothing to do, and algorithmic bias.

I honestly think that's for the best because I don't believe super fast takeoff FOOM scenarios are actually realistic.

When a claim is wrong, ignoring its wrongness and replacing it in your own perception with a corrected steelman of completely different literal meaning [LW · GW] is not for the best [LW · GW]. The sane thing would be to call out the signatories for saying something wildly incorrect, not pretending that they are saying something they aren't. The sane thing for the signatories would be to mean what they sign, not something others would hear when they read it, despite it contradicting the literal meaning of the words in the statement.

Replies from: dr_s
comment by dr_s · 2023-06-01T21:11:25.067Z · LW(p) · GW(p)

My point is I don't think they're incorrect. All those things are ALSO problems, and many are paths to X-risk even, which I'd consider more likely (in a slow takeoff scenario) than FOOM. A few possible scenarios:

  1. designer pathogens are the obvious example because there are so many ways they can cause targeted human extinction without risking the AI's integrity in any way, so they're obvious candidates both for a misuse of AI by malicious humans and for a rogue AI that is actively trying to kill us for whatever reason. Plus there's also the related risk of designer microorganisms that alter the Earth's biosphere as a whole, which would be even deadlier (think something like the cyanobacteria that caused the Great Oxidation Event, except now it'd be something that grows out of control and makes the atmosphere deadly)

  2. autonomous weapons are obviously dangerous in the hands of an AI because that's really handing it our defenses on a silver platter and makes its turning against us much more likely to succeed (and thus a potentially attractive strategy for seeking power)

  3. deepfakes, or even more sophisticated forms of deception, could be exploited by an AI to manipulate humans towards carrying out actions that benefit it or allow it to escape confinement and so on. Being able to see through deepfakes and defend against them would be key to the security of important strategic resources

  4. we don't know what happens socially and economically if AGI really takes everyone's job. We go into unexplored territory, and more so, we go there from a place where the AGI will probably be owned by a few to begin with. We might hope for glorious post-scarcity abundance but that's not the only road nor, I fear, the most likely. If the transition goes badly it can weaken our species enough that all AGI needs to do to get rid of us for good is give a little push.

All these things of course are only relevant with relatively weak-ish AGI. If it is possible to have a self-improving AGI that FOOMs in a matter of hours and kills everyone with nanobots, then we're screwed either way. But we don't know that's possible; the slower scenarios are also possible, and we can at least defend more against them. Other strategies, like "find ways to align the AGI" or "just don't build the damn thing", prevent all the scenarios anyway. So I don't think this statement is particularly missing the mark. Remember that the narrower the statement, the fewer people can endorse it. This is the Minimum Common Denominator: the things that all the signatories, even with probably very different individual viewpoints, can agree for sure are an issue. Which they are. There are probably more issues, but consensus on those is not as solid.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-06-01T21:42:08.999Z · LW(p) · GW(p)

My point is I don't think they're incorrect.

Misconstruing an incorrect statement as a correct steelman is incorrect. If I say "I've discovered a truly marvelous proof that 2+2=3000 that this margin is too small to contain," and you reply, "Ah, so you are saying 2+2=4, quite right," then the fact of your inexplicable discussion of a different and correct statement doesn't make your interpretation of my original incorrect statement correct.

Replies from: dr_s
comment by dr_s · 2023-06-01T22:09:00.562Z · LW(p) · GW(p)

I explained in the rest of the comment why I don't think they're incorrect, literally. The signed statement anyway is just:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

So that seems to me like it's succinct enough to include all categories discussed here. I don't see the issue.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-06-01T23:25:16.418Z · LW(p) · GW(p)

"Extinction from AI" really doesn't refer to deepfakes, AI leaving us nothing to do, and algorithmic bias. It doesn't include any of these categories. There is nothing correct about interpreting "extinction from AI" as referring to either of those things. This holds even if extinction from AI is absolutely impossible, and those other things are both real/imminent and extremely important. Words have meaning even when they refer to fictional ideas.

Replies from: dr_s
comment by dr_s · 2023-06-02T05:32:26.526Z · LW(p) · GW(p)

Deepfakes as in "hey, my uncle posted an AI-generated video of Biden eating a baby on FB", no (though that doesn't help our readiness as a species). The general ability of AI to deceive, impersonate, and pretty much break any assumption about who we may believe we are talking to, though, is a prominent detail that often features in extinction scenarios (e.g. how the AI starts making its own money or manipulating humans into producing things it needs).

I would say "extinction scenarios" include everything that features extinction and AI in the event chain, which doesn't even strictly need to be a takeover by agentic AI. Anyway the actual signed statement is very general. I can guess that some of these people don't worry specifically about the "you are made of atoms" scenario, but that's just arguing against something that isn't in the statement.

comment by jeffreycaruso · 2024-03-19T02:29:30.218Z · LW(p) · GW(p)

What aspect of AI risk is deemed existential by these signatories? I doubt that they all agree on that point. Your publication "An Overview of Catastrophic AI Risks" lists quite a few but doesn't differentiate between theoretical and actual. 

Perhaps if you were to create a spreadsheet with a list of each of the risks mentioned in your paper but with the further identification of each as actual or theoretical, and ask each of those 300 luminaries to rate them in terms of probability, then you'd have something a lot more useful. 

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2024-03-19T02:40:29.518Z · LW(p) · GW(p)

The statement does not mention existential risk, but rather "the risk of extinction from AI".

Replies from: jeffreycaruso
comment by jeffreycaruso · 2024-03-19T03:04:46.429Z · LW(p) · GW(p)

Which makes it an existential risk. 

"An existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, kill large swaths of the global population." - FLI

comment by Joseph Van Name (joseph-van-name) · 2023-05-30T22:55:02.048Z · LW(p) · GW(p)

Is this why I cannot get anyone to fund my research on AI interpretability even though I am the only person using spectral radii of tensor products to produce more interpretable and mathematical graph embeddings, word embeddings, and dimensionality reductions? I should have no problems with funding my own research on AI interpretability since I am a cryptocurrency creator, but in reality, most people are quite anti-innovative so they would not invest in anything unless the media tells them to.

Replies from: GeneSmith, Algon
comment by GeneSmith · 2023-05-31T00:46:48.339Z · LW(p) · GW(p)

I don't understand what this statement has to do with your research funding. Can you explain?

Replies from: joseph-van-name
comment by Joseph Van Name (joseph-van-name) · 2023-07-01T14:23:25.785Z · LW(p) · GW(p)

There are reasons why I said what I did, but I am unwilling to assume that you are ready or willing to have a decent discussion. It is best if we ceased talking about this.

comment by Algon · 2023-05-31T14:40:09.114Z · LW(p) · GW(p)

The number of downvotes on this comment is silly. 

Replies from: joseph-van-name
comment by Joseph Van Name (joseph-van-name) · 2023-07-04T15:58:31.986Z · LW(p) · GW(p)

Yes. The LW community is very silly. They do not even know what I have been doing. They just hate because they are anti-intellectuals. They hate me because I have a Ph.D. in Mathematics and they don't. Not that that matters, because universities are extremely unprofessional and still refuse to apologize for promoting violence against me, and I am going to mention this every time I have the opportunity to do so until they apologize. But I would like to be enlightened by one of the downvoters if they have anything to say (unlike Gene Smith). If $A_1,\dots,A_r$ are complex matrices and $d\geq 1$, then define the $L_{2,d}$-spectral radius of $(A_1,\dots,A_r)$ to be a certain maximum of a ratio of spectral radii of tensor products.

Can you at least tell me why we have a MAX there and not a SUP? And once you tell me that, can you tell me why I should not have spent any time at all using the $L_{2,d}$-spectral radius to solve problems in cryptography and AI? Because it seems like the people here are just anti-intellectual and hate AI safety for some reason.

Added later: I see that people are downvoting this because they are anti-intellectual and hate mathematics. WHAT A JOKE!