If AGI were coming in a year, what should we do?

post by MichaelStJules · 2022-04-01T00:41:42.200Z · LW · GW · No comments

This is a question post.

Contents

  Answers
    13 Razied
    5 Chris_Leong
    4 AprilSR
    4 nem
    0 shminux
No comments

Suppose AGI were very likely to first arrive around a year from now, with multiple projects close to it but one a few months ahead of the others, and suppose the AI safety community agreed that this was the case. What should our community do?

How would you answer differently for AGI arriving 4 months from now? 2 years from now? 5 years from now? 10 years from now? 20 years from now?

Some potential subquestions:

  1. What technical research should we try to get the project closest to AGI to rely on? How would we get them to use it? Or, if we could build AGI first, what would we do to reduce risks?
  2. What infrastructure, defences or other technology should we try to have ready or use?
  3. How should we reach out to governments, and what should we try to convince them to do?
  4. What research should we work on?
  5. What else?

My motivations for this question are to get people to generate options for very short timelines and to get an idea of our progress so far.

Answers

answer by Razied · 2022-04-01T01:10:19.754Z · LW(p) · GW(p)

Honestly? If that is the case, just act as if you have some painless terminal disease and one or two years to live. Do a bunch of things you've always wanted to do, maybe try a bunch of LSD to see what that's about, skydive a bit, make peace with your parents, etc.

At the one-year mark I don't think the Overton window contains any actions that would actually be effective at preventing AGI. Which leaves us with morally odious solutions like:

  1. A coordinated strike against all AI labs in the US and Canada, but even that doesn't actually buy that much time if all the key insights have been published; it'll just mean that China gets there shortly after.
  2. Provoking nuclear war between China, Russia, and the US while keeping some secret bunker base with enough survivors to outlast whatever fallout there is. Maybe relocate all AI safety researchers to New Zealand first?
  3. Release a deadly pandemic? Maybe use CRISPR to have it target people based on DNA, which you've collected by giving out free lemonade at AI conferences?

As you see, the intersection between the set of pleasant solutions and the set of effective solutions is empty if we're at the T-minus-one-year mark. All the effective solutions that I can see involve killing lots of people for the belief that AI will be dangerous, which means that you darn better have an unshakeable degree of confidence in the idea, which I don't have.

comment by Donald Hobson (donald-hobson) · 2022-04-02T19:58:21.917Z · LW(p) · GW(p)

I think there are pleasant and potentially effective measures.

Offer a free vacation to some top AI experts.

Label decaf coffee as normal and give it to the lab.

DDoS Stack Overflow.

answer by Chris_Leong · 2022-04-01T09:08:42.610Z · LW(p) · GW(p)

Try to figure out the best possible defense-in-depth strategy.

In other words, there's a whole bunch of different proposals for safety. I think it'd be worth thinking about what would be the optimal set to stack on top of each other without them interfering or requiring unrealistic amounts of processing. It'd also be worth thinking about a more minimalistic set that would be more likely to be implemented.

Maybe create a Slack or Discord focused specifically on the imminent threat, as it'd seem valuable to be able to have these discussions without being distracted by discussions that wouldn't be useful for the immediate crisis.

answer by AprilSR · 2022-04-01T16:16:31.091Z · LW(p) · GW(p)

At a certain point the best strategy becomes physically preventing AI organizations from developing AI, somehow. We could do this by appealing to governments, or by pouring lots of money into the new EA cause area of taking over the computing power supply chain.

answer by nem · 2022-04-01T13:24:04.650Z · LW(p) · GW(p)

I am not an AI safety researcher; more of a terrified spectator monitoring LessWrong for updates about the existential risk of unaligned AGI (thanks a bunch, HPMOR). That said, if it were a year away, I would jump into action. My initial thought would be to put almost all my net worth into a public awareness campaign. If we can cause enough trepidation in the general public, it's possible we could delay the emergence of AGI by a few weeks or months. My goal would not be to solve alignment, but rather to prod AI researchers into implementing basic safety measures that might reduce S-risk by 1 or 2 percent. Then... think deeply about whether I want to be alive for the most Interesting Time in human history.

answer by Shmi (shminux) · 2022-04-01T02:40:46.970Z · LW(p) · GW(p)

I'd try to survive the year, on the off chance that the AGI will be able to solve aging/uploads and let humans live for as long as they want. Sadly, this is not an option for a 2+ year horizon, as the pandemic demonstrated so amply.

comment by mukashi (adrian-arellano-davin) · 2022-04-01T04:27:43.544Z · LW(p) · GW(p)

Why the downvotes?

Replies from: superads91
comment by superads91 · 2022-04-01T07:18:59.649Z · LW(p) · GW(p)

Because they seem to be more afraid of a bad flu with a 99% survival rate than of an unaligned transformative technology.

Replies from: P.
comment by P. · 2022-04-01T07:55:09.098Z · LW(p) · GW(p)

I interpreted what they said as "As the pandemic clearly demonstrated, given even a minor crisis people will act selfishly and turn against each other; if the singularity were 2 years away, it would be a bloodbath" instead of "The pandemic will kill most of us in less than 2 years".

Replies from: shminux
comment by Shmi (shminux) · 2022-04-01T08:24:14.541Z · LW(p) · GW(p)

What I meant was that people do not want to inconvenience themselves for a long time. After a while they will accept an elevated risk of death now rather than spending more time safe but mildly inconvenienced.

Replies from: superads91
comment by superads91 · 2022-04-02T08:09:03.765Z · LW(p) · GW(p)

I misunderstood you then, sorry. But the pandemic analogy still feels a bit off. So you basically mean that, after living with the pandemic for a while, we just accepted it, because we'd rather have a higher risk of death than keep spending more time inconvenienced?

First, well, I haven't looked at the data, but it would be interesting to know how many more people are dying in total because of COVID. With a 99% survival rate this is not exactly tuberculosis in the 19th century. So I'd guess that the difference is pretty small.

Second, people are doing what they can about it. Maybe not as efficiently as possible, but society is fully aware and committed to it. Very unfortunately, the situation regarding AI safety is quite the opposite.

And worst of all, AI safety is a thousand times more concerning than COVID.

It seems to me that either you're deeply underestimating the former, or overestimating the latter.

I mean, someone tells you "AGI is coming in a year" and all you wanna do is survive that year so that maybe you can get uploaded? That's like saying that the exam is next week and all you wanna do in the meantime is make sure you don't forget your pen so you can maybe score an A.

Replies from: shminux
comment by Shmi (shminux) · 2022-04-02T19:30:44.529Z · LW(p) · GW(p)

Basically, in the near term, given that "AGI is coming in a year" and there is nothing you can do about it (assuming you are not a professional AI/ML researcher), there is a small but non-zero chance of benefiting from the AI's superhuman capabilities instead of being used for paperclips... so you may want to pay with some moderate daily inconvenience in order to increase your chances of survival until that moment.

AGI coming in 30 years makes this strategy unworkable. COVID showed us that the actual timeframe of voluntary inconvenience for most people is months, not decades. A year is already stretching it.

Replies from: superads91
comment by superads91 · 2022-04-02T22:11:45.251Z · LW(p) · GW(p)

I'm sorry, but doing nothing seems unacceptable to me. There are some in this forum who have some influence on AI companies, so those could definitely do something. As for the public in general, I believe that if a good number of people took AI safety seriously, so that we could make our politicians take it seriously, things would change.

So there would definitely be a need to do something. Especially because, unfortunately, this is not a friendly AI / paperclipper dichotomy, as most people here present it by only considering x-risk and not worse outcomes. I can imagine someone accepting death because we've always had to accept it, but not something worse than it.

Replies from: shminux
comment by Shmi (shminux) · 2022-04-02T23:13:28.845Z · LW(p) · GW(p)

Some people could definitely do something. I would not delude myself into thinking that I am one of those people, no matter how seriously I take AI safety.

Replies from: superads91
comment by superads91 · 2022-04-02T23:29:16.488Z · LW(p) · GW(p)

Could be. I'll concede that the probability that the average person couldn't effectively do anything is much higher than the opposite. But imo some of the probable outcomes are so nefarious that doing nothing is just not an option, regardless. After all, if plenty of average people effectively decided to do something, something could get done. A bit like voting - one vote achieves nothing, many can achieve something.

Replies from: shminux
comment by Shmi (shminux) · 2022-04-03T02:39:07.708Z · LW(p) · GW(p)

If only it were a bit like voting, where everyone's vote would add equally or at least close to it. Right now there is basically nothing you can do to help alignment research, unless you are a researcher. They have money, they have talent... It's not even like voting blue in a deep red state, or vice versa. There you are at least adding to the statistics of votes, something that might some day change the outcome of another election down the road. Here you are past the event horizon, given the setup, and there will be no reprieve. You may die in the singularity, or emerge into another universe, but there is no going back.
