Nature: "Stop talking about tomorrow’s AI doomsday when AI poses risks today"

post by Ben Smith (ben-smith) · 2023-06-28T05:59:49.015Z · LW · GW · 8 comments

This is a link post for https://www.nature.com/articles/d41586-023-02094-7


Overall, a headline that seems counterproductive and needlessly divisive.

I worry very much that coverage like this could bring political polarization to AI risk. It would be extremely damaging to the prospects for regulation if one side of the US Congress or Senate decided that AI risk was something only their outgroup cares about, and for nefarious reasons.

But in the spirit of charity, here are perhaps the strongest points of a weak article:

the spectre of AI as an all-powerful machine fuels competition between nations to develop AI so that they can benefit from and control it. This works to the advantage of tech firms: it encourages investment and weakens arguments for regulating the industry. An actual arms race to produce next-generation AI-powered military technology is already under way, increasing the risk of catastrophic conflict

and

governments must establish appropriate legal and regulatory frameworks, as well as applying laws that already exist

and

Researchers must play their part by building a culture of responsible AI from the bottom up. In April, the big machine-learning meeting NeurIPS (Neural Information Processing Systems) announced its adoption of a code of ethics for meeting submissions. This includes an expectation that research involving human participants has been approved by an ethical or institutional review board (IRB)

This would be great if ethical or institutional review boards were willing to restrict research that might be dangerous, but it would require a substantial change in their approach to regulating AI research.

All researchers and institutions should follow this approach, and also ensure that IRBs — or peer-review panels in cases in which no IRB exists — have the expertise to examine potentially risky AI research.

Should people worried about AI existential risk be trying to create resources for IRBs to recognize harmful AI research?


Some ominous commentary from Tyler Cowen:

Many of you focused on AGI existential risk do not much like or agree with my criticisms of that position, or perhaps you do not understand my stance, as I have seen stated a few times on Twitter.  But I am telling you -- I take you far more seriously than does most of the mainstream.  I keep on saying -- publish, publish, peer review, peer review -- a high mark of respect....

As it stands, contra that earlier tweet from Rob Wiblin (does anyone have a cite?), you have utterly and completely lost the mainstream debate, whether you admit it or not, whether you see this or not.  (Given the large number of rationality community types who do not like to travel, it is no surprise this point is not better known internally.)  You have lost the debate within scientific communities, within policymaker circles, and in international diplomacy, if it is not too much of an oxymoron to call it that.

I don't really know what he is talking about, because it does not seem like we're losing the debate right now.

8 comments

Comments sorted by top scores.

comment by Razied · 2023-06-28T15:02:13.746Z · LW(p) · GW(p)

Overall, a headline that seems counterproductive and needlessly divisive.

Probably the understatement of the decade: this article is literally an "order" from Official Authority to stop talking about what I believe is literally the most important thing in the world. I guess this is not literally the headline that would maximally make me lose respect for Nature... but it's pretty close.

This article is a pure appeal to authority. It contains no arguments at all, it only exists as a social signal that Respectable Scientists should steer away from talk of AI existential risk. 

The AI risk debate is no longer about any actual arguments; it's now about slinging around political capital and scientific prestige. It has become political in nature.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2023-06-29T20:36:26.699Z · LW(p) · GW(p)

Yep, that's the biggest issue I have with my own side of the AI risk debate: quite often they don't even try to state why it isn't a risk, and instead appeal to social authority. And while social authority is evidence, it's too easily filtered to be of much use.

To be frank, I don't blame a lot of the AI risk people for not being convinced that we aren't doomed. Even though reality doesn't grade on a curve, the unsoundness of the current arguments against doom doesn't help, and it is in fact bad that my side keeps doing this.

comment by the gears to ascension (lahwran) · 2023-06-28T14:57:05.305Z · LW(p) · GW(p)

Yeah I travel and ask randos on the street and they agree AI is about to kill us all. Does he travel?

Replies from: frontier64
comment by frontier64 · 2023-06-28T23:00:10.801Z · LW(p) · GW(p)

I strongly agree. The basic argument Yud laid out is very convincing to randos who listen. Too convincing, honestly. A rando doesn't need an in-depth mathematical explanation to understand how incredibly likely it is that AI will turn the world into glass.

My go-to is:

  • A rough explanation of intelligence and goal orthogonality.
  • Picture just how inconceivable our level of intelligence is to chimps.
  • Picture a thousand immortal Einsteins living in a tiny box, where a year for them is a couple of days for us. How much smarter than us are those Einsteins?
  • One example of how a superintelligence could take over the world: mechanical self-replicating nanobots, novel protein synthesis, brainwashing people.

That's really it. I also know a couple of basic counters to the most common arguments people bring up: government regulation, friendly AI being made first, AI wouldn't necessarily want to hurt us, etc. Most people are convinced and, unfortunately, look disheartened.

Replies from: FinalFormal2
comment by FinalFormal2 · 2023-06-30T00:20:28.823Z · LW(p) · GW(p)

Is there reason to believe 1000 Einsteins in a box is possible?

comment by mako yass (MakoYass) · 2023-06-28T21:14:48.616Z · LW(p) · GW(p)

I actually like the headline. "Stop talking about tomorrow’s AI doomsday" sort of admits that the doomsday is happening tomorrow, and comes out and says "yes, tomorrow is too far in the future for me to want to think about it, regardless of severity. I am a type of addict."

comment by mako yass (MakoYass) · 2023-06-28T21:03:08.233Z · LW(p) · GW(p)

I think you missed one of the valid points being made here (possibly tacitly): roughly, that the general public should focus on the issues of infra-intelligent AI, because that's the part of the discussion that public engagement could actually benefit. I don't know whether the infra-issues people know why this is the case, but they might have a genuinely good sense for where the good discourse is or isn't, and I think alignment strategists would tend to agree with them about that. I'm starting to wonder if we're kind of idiots for talking about it.

Like, AGI researchers: "Look at how good and important the alignment research I'm doing is".

Non AGI researchers, just trying to live their lives: "I don't understand your research or why it's important, and I'm not hearing a case as to why it would even help anyone for me to understand it, so no, I don't want to look at it."

AGI researchers: ">:|"

Replies from: ben-smith
comment by Ben Smith (ben-smith) · 2023-06-29T05:47:23.167Z · LW(p) · GW(p)

I think your point is interesting and I agree with it, but I don't think Nature are only addressing the general public. To me, it seems like they're addressing researchers and policymakers and telling them what they ought to focus on as well.