Factorio, Accelerando, Empathizing with Empires and Moderate Takeoffs

post by Raemon · 2018-02-04T02:33:42.863Z · LW · GW · 19 comments

Contents

  i. Factorio
  ii. Accelerando
  iii. Moderate Takeoffs

I started planning this post before cousin_it's post on a similar subject. This is a somewhat more poetic take on moderate, peaceful AI takeoffs.

Spoilers for the game Factorio and the book Accelerando. They are both quite good. But if you're not going to get around to playing/reading them for a while, I'd just go ahead and read this post (I think the game and story are still good if you go in knowing some plot elements).

i. Factorio

Factorio is a computer game about automation.

It begins with you crash landing on a planet. Your goal is to go home. To go home, you need to build a rocket. To build a rocket powerful enough to get back to your home solar system, you will need advanced metallurgy, combustion engines, electronics, etc. To get those things, you'll need to bootstrap yourself from the stone age to the nuclear age.

To do this all by yourself, you must automate as much of the work as you can.

To do this efficiently, you'll need to build strip mines, power plants, etc. (and, later, automated tools to build strip mines and power plants).

One wrinkle you may run into is that there are indigenous creatures on the planet.

They look like weird creepy bugs. It is left ambiguous how sentient the natives are, and how they should factor into your moral calculus. But regardless, it becomes clear that the more you pollute, the more annoyed they will be, and they will begin to attack your base.

If you're like me, this might make you feel bad.

During my last playthrough, I tried hard not to kill things I didn't have to, and to pollute as little as possible. I built defenses in case the aliens attacked, but when I ran out of iron, I looked for new mineral deposits that didn't have nearby native colonies. I bootstrapped my way to solar power as quickly as possible, replacing my smog-belching furnaces with electric ones.

I needed oil, though.

And the only oil fields I could find were right in the middle of an alien colony.

I stared at the oil field for a few minutes, thinking about how convenient it would be if that alien colony weren't there. I stayed true to my principles. "I'll find another way," I said. And eventually, at considerable cost in time, I found another oil field.

But around this time, I realized that one of my iron mines was near some native encampments. And those natives started attacking me on a regular basis. I built defenses, but they started attacking harder.

Turns out, just because someone doesn't literally live in a place doesn't mean they're happy with you moving into their territory. The attacks grew more frequent.

Eventually I discovered the alien encampment was... pretty small. It would not be that difficult for me to destroy it. And, holy hell, would it be so much easier if that encampment didn't exist. There was even a sympathetic narrative I could paint for myself: so many creatures were dying every day attacking my base that it would in fact be merciful to just put the colony down quickly.

I didn't do that. (Instead, I actually got distracted and died). But this gave me a weird felt sense, perhaps skill, of empathizing with the British Empire. (Or, most industrial empires, modern or ancient).

Like, I was trying really hard not to be a jerk. I was just trying to go home. And it was still difficult not to just move in and take stuff when I wanted it. And although this was a video game, I think in real life it might have been harder, if anything, since I'd be risking not just losing the game but losing my life or the livelihoods of people I cared about.

So when I imagine industrial empires that weren't raised by hippy-ish parents who believed colonialism and pollution were bad... well, what would you realistically expect to happen when they interface with less powerful cultures?

ii. Accelerando

Accelerando is a book about a fairly moderate takeoff of AGI.

Each chapter takes place ten years after the previous one. There's a couple-decade transition from "complex systems of narrow AIs start being relevant," through "the first uploads and human/machine interfaces," to "true AGI is a major player on Earth and in the solar system."

The outcome here is... reasonably good, as things go. The various posthuman actors adopt a "leave Earth alone" policy - there are plenty of other atoms in the solar system. They start building modular chunks of a Dyson sphere, using Mercury and the other hard-surface planets as raw material. (A conceit of the book is that gas giants are harder to work with, so Jupiter et al. remain more or less in their present form.)

The modular Dyson sphere is solar powered, and it's advantageous to move your computronium as close to the Sun as possible. Agents running on hardware closer to the Sun get to think faster, which lets them outcompete those farther away.

There are biological humans who don't do any kind of uploading or neural interfacing. There are biological-esque humans who use various augmentations but don't focus all their attention on competing on the fastest timescales with the most advanced posthumans.

The posthumans eventually disassemble all the terrestrial planets and asteroids. What's left are the gas giants (hard to disassemble) and Earth.

Eventually (where by "eventually" I mean, "in a couple decades"), they go to great lengths to transport the surface of Earth in such a way that the denizens get to retain something like their ancestral home, but the core of the Earth can be used to build more computronium.

And then, in another decade or two (many generations from the perspective of posthumans running close to the Sun), our posthuman offspring take another look at this last chunk of atoms...

...and sort of shrug apologetically and wring their metaphorical hands and then consume the last atoms in the solar system.

(By this point, old-school humans have seen the writing on the wall and departed the solar system.)

iii. Moderate Takeoffs

I periodically converse with people who argue something like: "a moderate takeoff of AGI is most likely, and there'll be time to figure out what to do about it (in particular if humans get to augment themselves or use increasingly powerful tools to improve their strategizing)."

And... this just doesn't seem that comforting to me. In the most optimistic worlds I can imagine (where we don't get alignment exactly right, but a moderate takeoff makes it easier to handle), human-level AI takes several decades to arrive, the people who don't upload are outcompeted, and the final hand-wringing posthuman shrug takes... maybe a few centuries, max?

And maybe this isn't the worst outcome. Maybe the result is still some kind of sentient posthuman society, engaged in creativity and various positive experiences that I'd endorse, which then goes on to colonize the universe. And it's sad that humans and non-committed transhumans got outcompeted but at least there's still some kind of light in the universe.

But still, this doesn't seem like an outcome I'm enthusiastic about. It's at least not something I'd want to happen by default without reflecting upon whether we could do better. Even if you're expecting a moderate takeoff, it still seems really important to get things right on the first try.

19 comments

Comments sorted by top scores.

comment by alkjash · 2018-02-04T17:47:31.239Z · LW(p) · GW(p)

As someone who deliberately tried to wipe out the alien nests in Factorio as quickly as possible and thought empathizing with the British Empire was the default human position, my interest is piqued. I guess I'm looking for the felt sense of empathizing with the creepy antagonistic bug things that respawn way too frequently - thoughts?

Replies from: Raemon
comment by Raemon · 2018-02-04T19:04:49.135Z · LW(p) · GW(p)

I mean, I got that from Raqre'f Tnzr. (This time the spoiler matters and I'm not sure how to communicate enough for you to know whether you've already read the thing)

Replies from: alkjash
comment by alkjash · 2018-02-04T20:34:46.729Z · LW(p) · GW(p)

Huh. I see it. I guess my alternative model was the Zerg from SC and the sense that they would be pleased aesthetically by being conquered by a stronger enemy.

Replies from: Raemon
comment by Raemon · 2018-02-04T20:45:06.803Z · LW(p) · GW(p)

That may or may not be true for the Zerg, but I hardly think this is a reasonable assumption to make about random bugs you've just run into and started exterminating because it was convenient. :P

comment by zulupineapple · 2018-02-04T11:44:38.841Z · LW(p) · GW(p)
But this gave me a weird felt sense, perhaps skill, of empathizing with the British Empire.

This is a great point. I never really thought of it that way. And it's a great illustration of why power imbalances might be inherently bad.

Replies from: Benito
comment by Ben Pace (Benito) · 2018-02-07T20:11:35.460Z · LW(p) · GW(p)

Was the weirdest sentence I've read in a while.

comment by whpearson · 2018-02-04T10:46:26.118Z · LW(p) · GW(p)

I have to say I am a little puzzled. I'm not sure who you and cousin_it are talking to with these moderate takeoff posts. I don't see anyone arguing that a moderate takeoff would be okay by default.

Even more mainstream places like MIT seem to be saying it is too early to focus on AI safety, rather than that we should never focus on AI safety. I hope there would be a conversation around when to focus on AI safety. While there is no default fire alarm, that doesn't mean you can't construct one. Get people working on AGI science to say what they expect their creations to be capable of, and formulate a plan for what to do if it is vastly more capable than they expect.

Replies from: Raemon, Raemon, Kaj_Sotala
comment by Raemon · 2018-02-04T19:23:00.833Z · LW(p) · GW(p)

This was primarily a response to an in-person conversation, but it was also (in part) an answer to Calvin Ho on the "Taking AI Seriously" thread. They said:

I read the sequences and superintelligence, but I still don't see how an AI would proliferate and advance faster than our ability to kill it - a year to get from baby to einstein level intelligence is plenty long enough to react.

And I guess I'll take this slot to answer them directly:

This post isn't precisely an answer to this question, but points at how you could get an AI who looked pretty safe, and that honestly was pretty safe – as safe as an empathetic human who makes a reasonable effort to avoid killing bugs – and so during the year when you could have done something, it didn't look like you needed to. And then a couple decades later you find that everything is computronium and only minds that are optimized for controlling the solar system get to control the solar system.

Replies from: whpearson, calvin-ho
comment by whpearson · 2018-02-04T22:00:23.948Z · LW(p) · GW(p)

Ah, makes sense. I saw something on Facebook by Robert Wiblin arguing against unnamed people in the "evidence-based optimist" group, and thought I was missing something important going on for both you and cousin_it to react to, since you have not been vocal on takeoff scenarios before. But it seems it is just coincidence.

Thanks for the explanation.

comment by t3tsubo (calvin-ho) · 2018-02-06T16:31:15.130Z · LW(p) · GW(p)

Thanks for the post. It was a viewpoint I hadn't closely considered (that a friendly but "technically unsafe" AI would be the singularity, and that its lack of safety would not be addressed in time because of its benefits), and it's worth thinking about more.

comment by Raemon · 2018-02-04T19:27:31.270Z · LW(p) · GW(p)

Also, cousin_it is specifically talking to Robin Hanson, so as far as "who are we talking to" goes: anyone who takes him seriously. (Although Robin Hanson also has some additional things going on, like "the Accelerando scenario seems good" [or something?]. I'm not sure I understand that; it's just my vague impression.)

comment by Kaj_Sotala · 2018-02-05T20:20:25.319Z · LW(p) · GW(p)

Like Raemon's comment suggests, people don't necessarily say it explicitly, but it's implicit whenever someone just says something along the lines of "I don't think superintelligence is worth worrying about because I don't think that a hard takeoff is realistic", and just leaves it at that. And I at least have seen a lot of people say something like that.

Replies from: whpearson
comment by whpearson · 2018-02-05T21:53:37.211Z · LW(p) · GW(p)

I feel that this post is straw-manning "I don't think superintelligence is worth worrying about because I don't think that a hard takeoff is realistic" a bit.

A steelman might be:

I don't feel superintelligence is worth worrying about at this point, because in a soft takeoff scenario we will have lots of small AGI-related accidents (people wireheading themselves with AI). This will give companies financial incentives to concentrate on safety, both to avoid getting sued and, if they are using the AI themselves, to avoid the damage it causes them. It will also give governments incentives, from political pressure, to introduce regulation to make AGI safe. Scientists on the cusp of creating AGI have incentives not to be associated with the bad consequences of AGI, and they are also in the best position to understand what safeguards are needed.

Also, there will be a general selective pressure towards safe AGI, as we would destroy the unaligned ones with the safer/more alignable ones. There is no reason to expect a treacherous turn once the machines get to a decisive strategic advantage, because we will have seen treacherous behaviour in AGIs that are not super rational, or not good at hiding their treachery, and will have designed against it.

It is only if there is a chance of foom that we, the current generation, need to worry about superintelligence right now.

As such, it would be better to save money now and use the compounded interest to buy safer AGI from safety-focused AGI companies, to distribute to needy people. The safety-focused companies will have greater knowledge of AGI and be able to get a lot more AGI safety for the dollar than we currently can with our knowledge.

If you want to make AGI, then worrying about the superintelligence case is probably a good exercise for seeing where the cracks are in your system, to avoid the small accidents.

I'm not sure I believe it. But it is worth seeing that incentives for safety are there.

Replies from: Raemon
comment by Raemon · 2018-02-05T23:08:27.998Z · LW(p) · GW(p)

I think there's a difference between "Steelmanning something to learn the most you can from it" (for your own benefit), and accurately engaging with what people actually think and mean.

(For example, I think it's common for consequentialists to "steelman" a deontological argument into consequentialism... but the actual reasons for a deontologist's beliefs just have nothing to do with consequentialism.)

In the case of people saying "I don't think superintelligence is worth worrying about because I don't think that a hard takeoff is realistic" and then leaving it at that, I honestly just don't think they're thinking about it that hard, and are rounding things off to vague plausibilities without a model. (Sometimes they don't just leave it at that – I think Robin Hanson generally has some kind of model, maybe closer to what you're saying here. But in that case you can engage with whatever they're actually saying, without as much need to steelman it yourself.)

(I also think my OP here is roughly as good an answer to your steelman – the issue remains that there doesn't have to be a sharp treacherous turn for things to eventually snowball, much the way powerful empires snowball, long after it's too late to do anything about it.)

Replies from: whpearson
comment by whpearson · 2018-02-06T00:04:13.853Z · LW(p) · GW(p)

I like arguing with myself, so it is fun to make the best case. But yup, I was going beyond what people might actually say. I think I find arguments against naive views less interesting, so I spice them up some.

In Accelerando, the participants in Economy 2.0 had a treacherous turn because they were under the pressure of a sharply competitive, resource-hungry environment. This could have happened if they were EMs, or even AGIs aligned to a subset of humanity, if they didn't solve coordination problems.

This kind of evolutionary problem has not been talked about for a bit (everyone seems focused on corrigibility etc.), so maybe people have forgotten? I think it's worth making explicit that this is what you need to worry about. But the question then becomes: should we worry about it now, or when we have cheaper intelligence and a greater understanding of how intelligences might coordinate?

Edit: One might even make the case that we should focus our thought on short-term existential risks, like avoiding nuclear war during the start of AGI, because if we don't pass that test we won't get to worry about superintelligence. And you can't use the cheaper later intelligence to solve that problem.

comment by romeostevensit · 2019-11-27T18:02:34.838Z · LW(p) · GW(p)

It's hard to be enthusiastic about, largely because appending a string of symbols like 'trillions' to the concept of 'worthwhile posthuman lives' is a type error for your System 1 along more than one dimension.

comment by zulupineapple · 2018-02-04T11:18:07.373Z · LW(p) · GW(p)
Maybe the result is still some kind of sentient posthuman society, engaged in creativity and various positive experiences that I'd endorse, which then goes on to colonize the universe. And it's sad that humans and non-committed transhumans got outcompeted but at least there's still some kind of light in the universe.
But still, this doesn't seem like an outcome I'm enthusiastic about.

Your descendants a few million years from now are going to be "posthuman", even if AI and genetic engineering never happen. What's wrong with the future coming a little sooner?

Replies from: dxu
comment by dxu · 2018-02-04T19:12:13.855Z · LW(p) · GW(p)

The argument is never about how soon the future will come, always about how good the future will be. There is nothing "wrong" with any given outcome, but if we can do better, then it's worth dedicating thought to that.

Replies from: Raemon
comment by Raemon · 2018-02-04T21:22:22.832Z · LW(p) · GW(p)

Yeah. It's also about "how much do you want to bet on a given future?"

The Accelerando scenario is extremely optimistic, in that the minds powerful enough to control the solar system end up caring about human value at all. Imagine this scenario going slightly awry, Disneyland with No Children style, in the direction of Scott's ascended economy, where the posthuman actors end up totally non-conscious.