Should we postpone AGI until we reach safety?

post by otto.barten (otto-barten) · 2020-11-18T15:43:51.846Z · LW · GW · 36 comments


Should we postpone AGI until its risks have fallen below a certain level, thereby applying the precautionary principle? And if so, would setting up policy be a promising way to achieve this?

As I argued here in the comments [LW · GW], I think calling for policy based on the precautionary principle, notably directed at political decision makers, would be a good idea. I've had a great discussion with Daniel Kokotajlo about this, on LW and by telephone; he disagrees, for the reasons below. I think it is valuable to make our lines of reasoning explicit and sharpen them through discussion, which is why I'm promoting them to a post.

Assuming AGI arrives within a relevant time span, there are two ways in which humanity can decrease x-risk to acceptable levels: 1) AI alignment, consisting of both technical AI safety and reasonable values alignment, and 2) AGI postponement until 1 can be achieved with sufficient existential safety (which may be anywhere between soon and never).

Since it is uncertain whether we can achieve 1 at all, and whether we can achieve it in time even if we can achieve it in principle, we should aim to achieve 2 as well. The main reason is that this could lead to a significant reduction of total existential risk if successful, and we don't know much about how hard it is, so it could well be worthwhile. Neither companies nor academics are incentivized to postpone or limit AGI development in accordance with the precautionary principle. Therefore, I think we need a different body calling for this, and I think states make sense. As a positive side effect, the pressure of possible postponement on companies and academia would incentivize them to invest significantly more in alignment, thereby again reducing existential risk.

Daniel disagreed with me mainly for three reasons. I obtained two more counterarguments from the discussions below and elsewhere. The following appears, so far, to be a reasonably complete list of counterarguments.

  1. Calling for AGI postponement until safety has been achieved, he thinks, would alienate the AI community from the AI safety community and that would hurt the chances of achieving AI safety.
  2. It would be rather difficult (or extremely difficult, or impossible) to get useful regulations passed, because influencing governments is generally hard and influencing them in the right ways is even harder.
  3. Restricting AGI research while allowing computer hardware progress, other AI progress, etc. to continue should mean that we are making the eventual takeoff faster, by increasing hardware overhang and other kinds of overhang. A faster takeoff is probably more dangerous.
  4. Efficiency: we have a certain amount of labor available for reducing existential risk. Working on AI safety is more efficient than AGI postponement, so let's focus on AI safety and discard AGI postponement.
  5. If we manage to delay only the most safety-abiding parties but not the others, we could hurt safety instead of helping it.

On the first argument, I replied that I think a non-AGI-safety group could do this, and therefore not hurt the largely unrelated AGI safety efforts. Such a group could even call for reduction of existential risk in general, further decoupling the two efforts. Also, even if there were a small adverse effect, I think it would be outweighed by the positive effect of incentivizing corporations and academia to fund more AI safety (since this would now also be stimulated by regulation). Daniel said that if this were really true, which we could establish for example by researching respondents' behaviour, it could change his mind.

I agree with the second counterargument, but if the gain from success would be large (which I think is true) and the required effort is uncertain (which I think is also true), then exploring the option makes sense.

The third counterargument would be less relevant if we are already heading for a fast take-off (which I really don't know). I think this argument requires more thought.

For the fourth counterargument, I would say that the vast majority of people are unable to contribute meaningfully to AI safety research. Of course, all these people could theoretically do whatever makes the most money and then donate to AI safety research. But most will not do that in practice. I think many of these people could be used for the much more generic task of arguing for postponement.

Finally, regarding the fifth counterargument, I would say we should argue for a global postponement treaty, or perhaps a treaty among the most likely parties (e.g. nation states). If all parties can be affected equally, this argument would lose its weight.

I'm curious about others' opinions on the matter. Do you also think postponing AGI until we reach safety would be a good idea? How could this be achieved? If you disagree, could you explicitly point out which parts of the reasoning you agree with (if any), and where your opinion differs?

36 comments


comment by Vaniver · 2020-11-18T18:54:13.871Z · LW(p) · GW(p)

I think it's obviously a bad idea to deploy AGI that has an unacceptably high chance of causing irreparable harm. I think the questions of "what chance is unacceptably high?" and "what is the chance of causing irreparable harm for this proposed AGI?" are both complicated technical questions that I am not optimistic will be answered well by policy-makers or government bodies. I currently expect it'll take serious effort to have answers at all when we need them, let alone answers that could persuade Congress. 

This makes me especially worried about attempts to shift policy that aren't in touch with the growing science of AI Alignment, but then there's something of a double bind: if the policy efforts are close to the safety research efforts, then you're giving the best available advice to the policymakers, but you pay the price of backlash from AI researchers if they think regulation-by-policy is a mistake. If the two are distant, then the safety researchers can say their hands are clean, but the regulation is even more likely to be a mistake.

Replies from: otto-barten
comment by otto.barten (otto-barten) · 2020-11-18T20:21:12.011Z · LW(p) · GW(p)

Thanks for your comments, these are interesting points. I agree that these are hard questions and that it's not clear that policymakers will be good at answering them. However, I don't think AI researchers themselves are any better, which you seem to imply. I've worked as an engineer myself, and I've seen that when engineers or scientists are close to their own topic, their judgement of the risks and downsides of that topic does not become more reliable, but less. AGI safety researchers will be convinced about AGI risk, but I'm afraid their judgement of their own remedies will also not be the best judgment available. You're right that these risk estimates may be technical and that politicians will not have the opportunity to look into the details. What I have in mind is more of a governmental body. In the Netherlands, for example, we have an environmental planning agency that helps politicians with technical climate questions. Something like that for AGI, staffed with knowledgeable people who are not themselves tied to AI research, is about as close as you can come to a good risk estimate, I think.

You might also say that any x-risk above a certain threshold, say 1%, is too high. Then perhaps it doesn't even matter whether it's 10% or 15%. Although I still think it's important that impartial experts in service of the public find out.

comment by Steven Byrnes (steve2152) · 2020-11-19T01:41:28.871Z · LW(p) · GW(p)

I think that trying to slow down research towards AGI through regulation would fail, because everyone (politicians, voters, lobbyists, business, etc.) likes scientific research and technological development: it creates jobs, it cures diseases, etc., and you're saying we should have less of that. So I think the effort would fail, and also be massively counterproductive by making the community of AI researchers see the community of AGI safety / alignment people as their enemies, morons, weirdos, Luddites, whatever.

Also, I'm not sure how you can stop "AGI research" without also stopping "AI research" (and for that matter some fraction of neuroscience research too), because we don't know what research direction will lead to AGI.

If anti-AGI regulations / treaties were the right thing to do (for the sake of argument), the first step would be to get the larger AI community and scientific community thinking more about planning for AGI, and gradually get them on board and caring about the issue. Only then would you have a prayer of succeeding at the second step, i.e. advocating for such a regulation / treaty. But when you think about it, after you've taken the first step, do you really need the second step? :-P

Oh, and even if such a law / treaty passed, it seems like it might be unenforceable. There will always be a large absolute number of AI researchers who think the rule is stupid, and they would all move to the one random country that didn't ratify the treaty. Or maybe AGI would be invented in a secret military lab or whatever.

Just my off-the-cuff opinions :-P

comment by DirectedEvolution (AllAmericanBreakfast) · 2020-11-18T18:32:06.022Z · LW(p) · GW(p)

I think there are two more general questions that need to be answered first.

  1. How would we find out for sure whether there are any tractable methods to put the brakes on a particular arm of technological progress?
  2. What would be the tradeoffs of such a potent investigation into our civilizational capacity?

We clearly do have the capacity to do (1) to some extent:

  • Religious activists have managed to slow progress in stem cell research, though the advent of iPSCs has created a way to bypass this issue to some extent.
  • The anti-nuclear movement has probably helped slow down progress in nuclear power research, though ironically they don't seem to have slowed down research on nuclear bombs (I could be wrong here).
  • Some people argue that the current structure of using cost-benefit analysis to allocate research funding does more harm than good, and thus it could be considered a decelerating force. But I'm not sure that's true, and even if it is, I'm not sure that applies to a field like AI with so many commercial purposes.

But these are clearly not the durable, carefully calibrated brakes we're talking about.

Replies from: otto-barten
comment by otto.barten (otto-barten) · 2020-11-18T21:00:36.648Z · LW(p) · GW(p)

Why do you think we need to find out 1 before trying? I would say, if it is indeed a good idea to postpone, then we can just start trying to postpone. Why would we need to know beforehand how effective that will be? Can't we find that out by trial and error if needed? Worst case, we would be postponing less. That is, of course, as long as the chosen flavor of postponement does not have serious negative side effects.

Or rephrased, why do these brakes need to be carefully calibrated?

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2020-11-19T10:05:34.695Z · LW(p) · GW(p)

Presuming that this is a serious topic, then we need to understand what the world would look like if we could put the brakes on technology. Right now, we can’t. What would it look like if we as a civilization were really trying hard to stop a certain branch of research? Would we like that state of affairs?

Replies from: otto-barten
comment by otto.barten (otto-barten) · 2020-11-19T17:47:44.362Z · LW(p) · GW(p)

I'm imagining an international treaty, national laws, and enforcement from police. That's a serious proposal.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2020-11-20T03:39:25.489Z · LW(p) · GW(p)

So we’d have all major military nations agreeing to a ban on artificial intelligence research, while all of them simultaneously acknowledge that AI research is key to their military edge? And then trusting each other not to carry out such research in secret? While policing anybody who crosses some undefinable line about what constitutes banned AI research?

That sounds both intractable and like a policing nightmare to me - one that would have no end in sight. If poorly executed, it could be both repressive and ineffective.

So I would like to know what a plan to permanently and effectively repress a whole wing of scientific inquiry on a global scale would look like.

The most tractable way seems like it would be to treat it like an illegal biological weapons program. That might be a model.

The difference is that people generally interested in the study of bacteria and viruses still have many other outlets. Also, bioweapons haven’t been a crucial element in any nation’s arsenal. They don’t have a positive purpose.

None of this applies to AI. So I see it as having some important differences from a bioweapons program.

Would we be willing to launch such an intrusive program of global policing, with all the attendant risks of permanent infringement of human rights, and risk setting up a system that both fails to achieve its intended purpose and sucks to live under?

Would such a system actually reduce the chance of unsafe AGI long term? Or, as you've pointed out, would it risk creating a climate of urgency, secrecy, and distrust among nations and among scientists?

I’d welcome work to investigate such plans, but it doesn’t seem on its face to be an obviously great solution.

comment by Davidmanheim · 2020-11-22T20:34:21.145Z · LW(p) · GW(p)

On the first argument, I replied that I think a non-AGI safety group could do this, and therefore not hurt the principally unrelated AGI safety efforts. Such a group could even call for reduction of existential risk in general, further decoupling the two efforts.

 

It sounds like you are suggesting that someone somewhere should do this. Who, and how? Because until there is a specific idea being put forward, I can say that pausing AGI would be good, since misaligned AGI would be bad. I don't know how you'd do it, but if choosing between two worlds, the one without misaligned AGI seems likely to be better.

But in my mind, the proposal falls apart as soon as you ask who this group is, and whether this hypothetical group has any leverage or any arguments that would convince people who are not already convinced. If the answer is yes, why do we need this new group to do this, and would we be better off using this leverage to increase the resources and effort put into AI safety?

Replies from: otto-barten
comment by otto.barten (otto-barten) · 2020-11-23T20:09:36.837Z · LW(p) · GW(p)

Interesting line of thought. I don't know who and how, but I still think we should already think about whether it would be a good idea in principle.

Can I restate your idea as 'we have a certain amount of convinced manpower, we should use it for the best purpose, which is AI safety'? I like the way of thinking, but I still think we should use some of them for looking into postponement. Arguments:

- The vast majority of people are unable to contribute meaningfully to AI safety research. Of course, all these people could theoretically do whatever makes the most money and then donate to AI safety research. But most will not do that in practice. I think many of these people could be used for the much more generic task of convincing others about AI risks, and also arguing for postponement. As an example, I once saw a project with the goal of teaching children about AI safety that claimed it could not continue for lack of $5,000 of funding. I think there's a vast sea of resource-constrained possibility out there once we make the decision that telling everyone about AI risk is officially a good idea.

- Postponement weirdly seems to be a neglected topic within the AI safety community (out of a dislike of regulation, I guess), but also outside the community (for lack of insight into AI risk). I think it's a lot more neglected at this point than technical AI safety, which is perhaps also niche, but does have its own institutes already looking at it. Since it looks important and neglected, I think an hour spent on postponement is probably better spent than an hour on AI safety, unless perhaps you're a talented AI safety researcher.

Replies from: Davidmanheim
comment by Davidmanheim · 2020-11-24T21:20:20.837Z · LW(p) · GW(p)

The idea that most people who can't do technical AI alignment are therefore able to do effective work in public policy or motivating public change seems unsupported by anything you've said. And a key problem with "raising awareness" as a method of risk reduction is that it's rife with infohazard concerns. For example, if we're really worried about a country seizing a decisive strategic advantage via AGI, then publicizing that worry signals to countries that they should be much more motivated to pursue AGI.

And within the realm of international agreements and the pursuit of AI regulation, I don't think postponement is neglected, at least relative to its tractability; policy for AI regulation is certainly an area of active research.

comment by AnthonyC · 2020-11-19T20:47:43.067Z · LW(p) · GW(p)

My feelings on this are analogous to my opinions on, say, legalization of drugs and prostitution. It's much easier to regulate how and where something is done, than whether it is done. There's a lot of room between "full speed ahead!" and "Stop everything!" Off the cuff, there's "Subsidize safety research and narrow AI research, and set strict guidelines for AGI research with monitoring and ethics rules akin to what we do for nuclear or medical research."

Replies from: otto-barten
comment by otto.barten (otto-barten) · 2020-11-20T05:18:41.718Z · LW(p) · GW(p)

I agree, and thanks for bringing some nuance into the debate. I think that would be a useful path to explore.

comment by adamShimi · 2020-11-19T19:48:02.643Z · LW(p) · GW(p)

I don't think the title of this post captures the actual question studied in it. Like, I don't think any safety researcher is not, in some sense, for halting AGI research. So the question in the title is trivial (at least in AI Safety). But the actual question debated between you and Daniel (as far as I can see from your post) is whether it's possible to implement it completely without too many adversarial consequences.

About that question, I agree completely with Daniel's position that it's just not a viable option. I see no vaguely probable scenario where the means to act on such a restriction exists.

Replies from: otto-barten
comment by otto.barten (otto-barten) · 2020-11-20T19:30:22.986Z · LW(p) · GW(p)

Thanks for your insights, Adam. If every AGI researcher is in some sense for halting AGI research, I'd like to get more confirmation on that. What are their arguments? Would they also work for non-AGI researchers?

I can imagine the combination of Daniel's points 1 and 2 stops AGI researchers from speaking out on this. But for non-AGI researchers, why not explore something that looks difficult, but may have existential benefits?

comment by avturchin · 2020-11-18T18:08:23.930Z · LW(p) · GW(p)

One possible way to postpone AGI is for one party to reach world domination using powerful Narrow AI and then use this Narrow AI to implement strict control over AGI development. What such Narrow AI could be: some combination of AI-powered drones and Palantir-like targeting, plus effective control of minds via memes and social network surveillance. It doesn't sound nice or sexy, but we are halfway there.

An alternative way of preventing AGI is a nuclear war against chip manufacturers and AI labs, which is obviously worse. To clarify, I am against it; I mention it only as a bad alternative.

Replies from: otto-barten
comment by otto.barten (otto-barten) · 2020-11-18T21:05:39.859Z · LW(p) · GW(p)

I'm not in favour of nuclear war either :)

comment by Dagon · 2020-11-18T15:51:49.723Z · LW(p) · GW(p)

It's certainly desirable to postpone any ascension to power of entities (including humans) who will harm us.  The problem (for AI and for human power-mongers) is implementation.  I think I'm with Daniel that the attempt to interfere would fail, and would do additional harm along the way.

Replies from: otto-barten
comment by otto.barten (otto-barten) · 2020-11-18T15:59:24.945Z · LW(p) · GW(p)

Thanks for your thoughts. Of course we don't know whether AGI will harm or help us. However, I'm making the judgement that the harm could plausibly be so big (existential) that it outweighs the help (reduction in suffering in the time until safe AGI, and perhaps reduction of other existential risks). You seem to be on board with this, is that right?

Why exactly do you think interference would fail? How certain are you? I acknowledge it would be hard, but I'm not sure how optimistic or pessimistic to be about this.

Replies from: Dagon
comment by Dagon · 2020-11-18T17:53:51.682Z · LW(p) · GW(p)

I think it's misleading to use terms like "us".  AGI will harm some humans, almost no matter what.  AGI MAY harm all humans, current and future.  It may also vastly increase flourishing of humans (and other mind types, including AGI itself).  This is true of most significant technologies, but even more so with AGI, which is likely to have much broader impact than most things.

I think that, exactly because of breadth of impact, it's going to be pursued by many people/organizations, with different goals and different kinds of influence that you or I might exert over them.  That diversity makes the pursuit of AGI very resilient, and unlikely to be significantly slowed by our actions.   

Replies from: otto-barten
comment by otto.barten (otto-barten) · 2020-11-18T21:07:23.813Z · LW(p) · GW(p)

That goes under Daniel's point 2 I guess?

Replies from: jbash
comment by jbash · 2020-11-18T22:39:20.545Z · LW(p) · GW(p)

Not to speak for Dagon, but I think point 2 as you write it is way, way too narrow and optimistic. Saying "it would be rather difficult to get useful regulation" is sort of like saying "it would be rather difficult to invent time travel".

I mean, yes, it would be incredibly hard, way beyond "rather difficult", and maybe into "flat-out impossible", to get any given government to put useful regulations in place... assuming anybody could present a workable approach to begin with.

It's not a matter of going to a government and making an argument. For one thing, a government isn't really a unitary thing. You go to some *part* of a government, fight to even get your issue noticed. Then you compete with all the other people who have opinions. Some of them will directly oppose your objectives. Others will suggest different approaches, leading to delays in hashing out those differences, and possibly to compromises that are far less effective than any of the sides' "pure" proposals.

Then you get to take whatever you hashed out with the part of the government you've started dealing with, and sell it in all of the other parts of that government and the people who've been lobbying them. In the process, you find out about a lot of oxen you propose to gore that you didn't even imagine existed.

In big countries, people often spend whole careers in politics, arguing, fighting, building relationships, doing deals... to get even compromised, watered-down versions of the policies they came in looking for.

But that's just the start. You have to get many governments, possibly almost all governments, to put in similar or at least compatible regulations... bearing in mind that they don't trust each other, and are often trying either to compete with each other, or to advantage their citizens in competing with each other. Even that formulation is badly oversimplified, because governments aren't the only players.

You also have to get them to apply those regulations to themselves, which is hard because they will basically all believe that the other governments are cheating, and probably that the private sector is also cheating... and they will probably be right about that. And of course it's very easy for any kind of leader to kid themselves that their experts are too smart to blow it, whereas the other guys will probably destroy the world if they get there first.

Which brings you to compliance, whether voluntary or coerced, inside and outside of governments. People break laws and regulations all the time. It's relatively easy to enforce compliance if what you're trying to stamp out is necessarily large-scale and conspicuous... but not all dangerous AI activity necessarily has to be that way. And nowadays you can coordinate a pretty large project in a way that's awfully hard to shut down.

Then there's the blowback. There's a risk of provoking arms races. If there are restrictions, players have incentives to move faster if they think the other players are cheating and getting ahead... but they also have incentives to move if they think the other players are not cheating, and can therefore be attacked and dominated. If a lot of the work is driven into secrecy, or even if people just think there might be secret work, then there are lots of chances for people to think both of those things... with uncertainty to make them nervous.

... and, by the way, by creating secrecy, you've reduced the chance of somebody saying "Ahem, old chaps, have you happened to notice that this seemingly innocuous part of your plan will destroy the world"? Of course, the more risk-averse players may think of things like that themselves, but that just means that the least risk-averse players become more likely first movers. Probably not what you wanted.

Meanwhile, resources you could be using to win hearts and minds, or to come up with technical approaches, end up tied up arguing for regulation, enforcing regulation, and complying with regulation.

... and the substance of the rules isn't easy, either. Even getting a rough, vague consensus on what's "safe enough" would be really hard, especially if the consensus had to be close enough to "right" to actually be useful. And you might not be able to make much progress on safety without simultaneously getting closer to AGI. For that matter, you may not be able to define "AGI" as well as you might like... nor know when you're about to create it by accident, perhaps as a side effect of your safety research. So it's not as simple as "We won't do this until we know how to do it safely". How can you formulate rules to deal with that?

I don't mean to say that laws or regulations have no place, and still less do I mean to say that not-doing-bloody-stupid-things has no place. They do have a place.

But it's very easy, and very seductive, to oversimplify the problem, and think of regulation as a magic wand. It's nice to dream that you can just pass a law, and this or that will go away, but you don't often get that lucky.

"Relinquish this until it's safe" is a nice slogan, but hard to actually pin down into a real, implementable set of rules. Still more seductive, and probably more dangerous, is the idea that, once you do come up with some optimal set of rules, there's actually some "we" out there that can easily adopt them, or effectively enforce them. You can do that with some rules in some circumstances, but you can't do it with just any rules under just any circumstances. And complete relinquishment is probably not one you can do.

In fact, I've been in or near this particular debate since the 1990s, and I have found that the question "Should we do X" is a pretty reliable danger flag. Phrasing things that way invites the mind to think of the whole world, or at least some mythical set of "good guys", as some kind of unit with a single will, and that's just not how people work. There is no "we" or "us", so it's dangerous to think about "us" doing anything. It can be dangerous to talk about any large entity, even a government or corporation, as though it had a coordinated will... and still more so for an undefined "we".

The word "safe" is also a scary word.

Replies from: AllAmericanBreakfast, otto-barten
comment by DirectedEvolution (AllAmericanBreakfast) · 2020-11-20T03:45:14.283Z · LW(p) · GW(p)

This is a much more thorough and eloquent statement echoing what I was articulating in my comment above. I fully endorse it.

comment by otto.barten (otto-barten) · 2020-11-19T17:31:38.731Z · LW(p) · GW(p)

I appreciate the effort you took in writing a detailed response. There's one thing you say in which I'm particularly interested, for personal reasons. You say 'I've been in or near this debate since the 1990s'. That suggests there are many people with my opinion. Who? I would honestly love to know, because frankly it feels lonely. All people I've met, so far without a single exception, are either not afraid of AI existential risk at all, or believe in a tech fix and are against regulation. I don't believe in the tech fix, because as an engineer, I've seen how much of engineering is trial and error (and science even more so). People have ideas, try them, it goes boom, and then they try something else. Until they get there. If we do that with AGI, I think it's sure to go wrong. That's why I think at least some kind of policy intervention is mandatory, not optional. And yes, it will be hard. But no argument I've heard so far has convinced me that it's impossible. Or that it's counterproductive.

I think we should first answer the question: would postponement until safety be a good idea if it were implementable? What's your opinion on that one?

Also, I'm serious: who else is on my side of this debate? You would really help me personally to let me talk to them, if they exist.

Replies from: jbash, AllAmericanBreakfast
comment by jbash · 2020-11-21T16:45:05.048Z · LW(p) · GW(p)

You say 'I've been in or near this debate since the 1990s'. That suggests there are many people with my opinion. Who?

Nick Bostrom comes to mind as at least having a similar approach. And it's not like he's without allies, even in places like Less Wrong.

... and, Jeez, back when I was paying more attention, it seemed like some kind of regulation, or at least some kind of organized restriction, was the first thing a lot of people would suggest when they learned about the risks. Especially people who weren't "into" the technology itself.

I was hanging around the Foresight Institute. People in that orbit were split about 50-50 between worrying most about AI and worrying most about nanotech... but the two issues weren't all that different when it came to broad precautionary strategies. The prevailing theory was roughly that the two came as a package anyway; if you got hardcore AI, it would invent nanotech, and if you got nanotech, it would give you enough computing power to brute-force AI. Sometimes "nanotech" was even taken as shorthand for "AI, nanotech, and anything else that could get really hairy"... vaguely what people would now follow Bostrom and call "X-risk". So you might find some kindred spirits by looking in old "nanotech" discussions.

There always seemed to be plenty of people who'd take various regulate-and-delay positions in bull sessions like this one, both online and offline, with differing degrees of consistency or commitment. I can't remember names; it's been ages.

The whole "outside" world also seemed very pro-regulation. It felt like about every 15 minutes, you'd see an op-ed in the "outside world" press, or even a book, advocating for a "simple precautionary approach", where "we" would hold off either as you propose, until some safety criteria were met, or even permanently. There were, and I think still are, people who think you can just permanently outlaw something like AGI ,and that will somehow actually make it never happen. This really scared me.

I think the word "relinquishment" came from Bill McKibben, who I as I recall was, and for all I know may still be, a permanent relinquishist, at least for nanotech. Somebody else had a small organization and phrased things in terms of the "precautionary principle". I don't remember who that was. I do remember that their particular formulation of the precautionary principle was really sloppy and would have amounted to nobody ever being allowed to do anything at all under any circumstances.

There were, of course, plenty of serious risk-ignorers and risk-glosser-overs in that Foresight community. They probably dominated in many ways, even though Foresight itself definitely had a major anti-risk mission component. For example, an early, less debugged version of Eliezer Yudkowsky was around. I think, at least when I first showed up, he still held just-blast-ahead opinions that he has, shall we say, repudiated rather strongly nowadays. Even then, though, he was cautious and level-headed compared to a lot of the people you'd run into. I don't want to make it sound like everybody was trying to stomp on the brakes or even touch the brakes.

The most precautionary types in that community probably felt pretty beleaguered, and the most "regulatory" types even more so. But you could definitely still find regulation proponents, even among the formal leadership.

However, it still seems to me that ideas vaguely like yours, while not uncommon, were often "loyal opposition", or brought in by newcomers... or they were things you might hear from the "barbarians at the gates". A lot of them seemed to come from environmentalist discourse. On the bull-session level, I remember spending days arguing about it on some Greenpeace forum.

So maybe your problem is that your personal "bubble" is more anti-regulation than you are? I mean, you're hanging out on Less Wrong, and people on Less Wrong, like the people around Foresight, definitely tend to have certain viewpoints... including a general pro-technology bias, an urge to shake up the world, and often extremely, even dogmatically anti-regulation political views. If you looked outside, you might find more people who think the way you do. You could look at environmentalism generally, or even at "mainstream" politics.

Replies from: otto-barten
comment by otto.barten (otto-barten) · 2020-11-22T13:39:59.198Z · LW(p) · GW(p)

Thanks for that comment! I didn't know Bill McKibben, but I read up on his 2019 book 'Falter: Has the Human Game Begun to Play Itself Out?' I'll post a review as a post later. I appreciate your description of what the scene was like back in the 90s or so, that's really insightful. Also interesting to read about nanotech, I never knew these concerns were historically so coupled.

But having read McKibben's book, I still can't find others on my side of the debate. McKibben is indeed the first one I know of who both recognizes AGI danger and does not believe in a tech fix, or at least does not consider it a good outcome. However, I would expect him to cite others on his side of the debate. Instead, in the sections on AGI, he cites people like Bostrom and Omohundro, who are not postponists in any way. Therefore I'm still guessing at this moment that a 'postponement side' of this debate is now absent, and it's just that McKibben happened to know Kurzweil, who got him personally concerned about AGI risk. If that's not true and there are more voices out there exploring AGI postponement options, I'd still be happy to hear about them. Also, if you could find links to old discussions, I'm interested!

comment by DirectedEvolution (AllAmericanBreakfast) · 2020-11-20T03:49:36.218Z · LW(p) · GW(p)

The key point that I think you’re missing here is that evaluating whether such a policy “should” be implemented necessarily depends on how it would be implemented.

We could in theory try to kill all AI researchers (or just go whole hog and try to kill all software engineers, better safe than sorry /s). But then of course we need to think about the side effects of such a program, ya know, like them running and hiding in other countries and dedicating their lives to fighting back against the countries that are hunting them. Or whatever.

That’s just one example, and I use it because it might be the only tractable way to stop this form of tech progress: literally wiping out the knowledge base.

I do not endorse this idea, by the way.

I’m just trying to show that your reaction to “should we” depends hugely on “how.”

Replies from: kevin-lacker
comment by Kevin Lacker (kevin-lacker) · 2020-11-20T07:04:28.297Z · LW(p) · GW(p)

We could in theory try to kill all AI researchers (or just go whole hog and try to kill all software engineers, better safe than sorry /s).

I think this is a good way of putting it. Many people in the debate refer to "regulation". But in practice, regulation is not very effective for weaponry. If you look at how the international community handles dangerous weapons like nuclear weapons, there are many cases of assassinations, bombings, and war used to prevent their spread. This is what it would look like if the world were convinced that AI research was an existential threat - a world where work on AI happens in secret, in private military programs, with governments making the decisions, and participants risking their lives. Probably the US and China would race to be the first to achieve AGI dominance, gambling that they would be able to control the software they produced.

comment by Richard Lucas · 2020-12-10T07:04:56.767Z · LW(p) · GW(p)

It's not clear to me that we have a choice on postponement. If company A refrains, then company B has an edge. If as a nation we constrain companies A and B, then country X will not, and gain an edge. And if country X and we sign a treaty, then country Y will have an edge. And if all countries refrain, then a criminal or terrorist group outside of all civilized nations will have an edge. 

The longer the postponement lasts, the easier it will become for an entity that is outside of the agreements to create AGI on its own. The bonus value (power) gained by breaking the treaty grows greater as others refrain, reaching its maximum when everyone except the rulebreaker refrains.

All participants are in a dollar auction.
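
To make the analogy concrete, here is a minimal toy simulation of the classic dollar auction (an illustrative sketch only; the bidders, budgets, bid increment, and myopic decision rule are assumptions for illustration, not anything proposed in this thread). A prize worth 1 goes to the highest bidder, but both the winner and the runner-up pay their final bids, so whoever is trailing is always myopically better off raising a little than walking away from a sunk bid, and bids escalate far past the prize's value until someone hits a hard budget limit.

```python
# Toy dollar-auction simulation (hypothetical parameters, chosen for illustration).
# Rules: the prize goes to the highest bidder, but BOTH bidders pay their bids.
# Decision rule: the trailing bidder raises by one increment whenever the best case
# of raising (winning the prize at the new bid) beats the certain loss of quitting.

def dollar_auction(prize=1.0, increment=0.05, budgets=(5.0, 5.0)):
    bids = [0.0, 0.0]   # committed bids; both are paid no matter who wins
    trailing = 0        # the bidder currently losing decides whether to escalate
    while True:
        leader = 1 - trailing
        raise_to = bids[leader] + increment
        best_case_if_raise = prize - raise_to   # win the prize at the new, higher bid
        loss_if_quit = -bids[trailing]          # forfeit the sunk bid
        if best_case_if_raise <= loss_if_quit or raise_to > budgets[trailing]:
            return bids, leader                 # trailing bidder finally gives up
        bids[trailing] = raise_to
        trailing = leader                       # roles swap; the other side now trails

bids, winner = dollar_auction()
print(f"winner: bidder {winner}, final bids: {bids}")
# With these numbers, both bids end up far above the 1.0 prize before anyone quits.
```

The only point of the toy model is that, once committed, no party profits from stopping first; the escalation is bounded here only by the arbitrary budgets, which is the analogy to a race where falling behind looks worse than continuing.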

Replies from: otto-barten
comment by otto.barten (otto-barten) · 2020-12-11T08:55:30.112Z · LW(p) · GW(p)

Richard, thanks for your reply. Just for reference, I think this goes under argument 5, right?

It's a powerful argument, but I think it's not watertight. I would counter it as follows:

  1. As stated above, I think the aim should be an ideally global treaty where no country is allowed to go beyond a certain point of research. The countries should then enforce the treaty on all research institutes/companies within their borders. You're right that in this case, a criminal or terrorist group would have an edge. But seeing how hard it currently is for legally operating and indeed heavily funded groups to develop AGI, I'm not convinced that terrorist or criminal groups could easily do this. For reference, I read this paper by a lawyer this week on an actual way to implement such a treaty. I think signing such a treaty will not affect countries without effective AGI research capabilities, so they won't have a reason not to sign it, and they will benefit from the increased existential safety. The ones likely least inclined to sign up will be the countries that are trying to develop AGI now. So effectively, I think a global treaty and a US/China deal would amount to roughly the same thing.
  2. You could make the same argument for tax, (unprofitable) climate action, R&D, defense spending against a common enemy, and probably many other issues. Does that mean we have zero tax, climate action, R&D, or defense? No, because at some point countries realize it's better not to be the relative winner than for everyone to lose. In many cases this is then formalized in treaties, with varying but nonzero success. I think that could work in this case as well. Your argument is indeed a problem in all of the fields I mention, so you have a point. But I think, fortunately, it's not a decisive one.
comment by otto.barten (otto-barten) · 2020-11-24T20:41:01.055Z · LW(p) · GW(p)

My response to counterargument 3 is summarized in this plot, for reference: https://ibb.co/250Qgc9

Basically, this would only be an issue if postponement cannot be done until risks are sufficiently low, and if take-off would be slow without postponement intervention.

comment by George3d6 · 2020-11-20T11:42:05.734Z · LW(p) · GW(p)

Maybe one could just run the assembly lines in the AI factories at half speed, or even better, reduce the assembly quota of AI workers to half.

This will give workers more holidays, maybe even 3-4 day weekends, gets AI labor unions off your back and the AI factories are still cranking, so it's not like progress stops entirely.

Then again, putting a higher VAT on AI sales might be more practical.

The issue is coming up with a coherent policy proposal, but I'm sure brilliant regulatory minds like those that wrote legislation defending us from the evils of UDP sockets and openssl would rise to the task of ironing out the edges.

Overall though, your idea is brilliant, you're on the right path.

Replies from: Davidmanheim
comment by Davidmanheim · 2020-11-22T20:37:49.413Z · LW(p) · GW(p)

I don't think this type of comment is appropriate or needed. (It was funny, but still not a good thing to post.)

comment by Kevin Lacker (kevin-lacker) · 2020-11-19T07:45:40.535Z · LW(p) · GW(p)

It seems like there is very little chance that you could politically stop work on AGI. Who would you appeal to - China, the U.S., and the United Nations would all have to agree? There just isn't anywhere near the political consensus necessary for it. The general consensus is that any risk is very small, that it's the sort of thing only a few weirdos on the internet worry about. That consensus would have to change before it makes any sense to ask questions like "should we postpone all AGI research indefinitely?" I think we need to accept that worry about AGI is a fringe belief, and therefore we should pursue strategies that can have an impact despite it being a fringe belief.

Replies from: otto-barten
comment by otto.barten (otto-barten) · 2020-11-19T17:43:10.314Z · LW(p) · GW(p)

I think a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has.

Replies from: kevin-lacker
comment by Kevin Lacker (kevin-lacker) · 2020-11-20T00:43:13.502Z · LW(p) · GW(p)

You can change the world, sure, but not by making a heartfelt appeal to the United Nations. You have to be thoughtful, which means you pick tactics with some chance of success. Appealing to stop AI work is out of the political realm of possibility right now.