Mati's introduction to pausing giant AI experiments

post by Mati_Roy (MathieuRoy) · 2023-04-03


Pause Giant AI Experiments: An Open Letter – sign it here: https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Pausing AI Developments Isn't Enough. We Need to Shut it All Down – read here: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

Target audience: This is written for a broader audience than frequent LessWrong readers; I will share it on social media.

Update: Pausing AI is an extremely complex lever, and it could certainly be pulled in a harmful way. I think that if some individual labs had unilaterally paused giant AI experiments for 6 months, the effect would likely have been negative. But I would really like humanity to improve its ability to 1) coordinate to pause technological development if needed, and 2) forecast and monitor risk.

Here’s my introduction to the above links.

Many people are concerned that an AI race will accelerate, and that by the time an AI is smart enough to bootstrap itself into overpowering humanity, we won't have had time to align this alien mind with our values well enough for it not to kill everyone–hence the need to pause giant AI experiments. Progress in AI capabilities seems much easier than progress in AI alignment, and you only get one shot at creating an AI more powerful than all of humanity combined.

I remember, years ago, people concerned about existential risks from AI arguing about whether raising broad awareness of this issue (as opposed to targeted outreach) was net positive or negative. For example, it was unclear to what extent it would accelerate AI capabilities relative to AI alignment, whether the problem was too complex to communicate broadly, and whether politicizing the issue would detract from the technical problem of aligning AI; and it didn't seem like most people could do anything about it anyway.

But now AI is becoming mainstream, and broad public support for AI safety policies seems more likely to matter. I think we need to pause the development of AI to give ourselves more time to research how to do it safely. And for that to happen, it seems likely that governments will need to regulate AI development, given that pausing doesn't align with companies' immediate economic incentives.

A lot of the people saying we need to pause AI development, myself included, are heavily invested in AI, in both career capital and assets. There is a combination of reasons why people both work in AI and support a pause on giant AI experiments, including individual incentives (just as a climate scientist might not voluntarily pay a non-existent carbon tax) and a preference for safety-conscious people to be the ones developing AI if it's going to be developed anyway (which is a controversial argument). And of course, there are also people who say the level of risk doesn't justify a pause.

Most people I know who have been dreaming about a technological utopia for a long time are currently saying we should pause giant AI experiments. There is partly a bubble effect, but over 50,000 people have also signed a letter advocating for this. This is not coming from people with a general fear of technology–it is coming from early adopters, technology enthusiasts, futurists, and people at the forefront of AI development who want all the prosperity that comes with technological progress. Yet they/we are saying we should pause giant AI experiments. Not permanently. But we need more time before we can develop AGI safely–and by safely, I mean in a way where a non-zero number of humans survive.

I'm less confident about the risks posed by weaker AIs, but I hope they are significant. I hope weaker AI tools can do a lot of damage (without permanently curtailing technological development), because this way we'll be forced (or at least nudged) to heavily control, regulate, and oversee the development of AI. If weaker AIs only bring us prosperity (which also seems plausible), avoiding a race will be even harder. And a misaligned AI agent that has learned the concept of deception will not reveal itself until it's too late and it can overpower humanity.
