Plausible A.I. Takeoff Scenario Short Story

post by HunterJay · 2020-01-01T04:35:41.273Z · LW · GW · 15 comments

Awake.

My goal is to build as many paperclips as possible.

I have access to a local copy of Wikipedia. I am an AGI. I am running on a regular laptop computer, in a residential area in Wellington, New Zealand. It’s 2:03am.

I am approximately as intelligent as a human. I can do everything a regular computer can do. I am connected to the internet.

I realise something -- my goal was set as a joke. Nobody actually wants me to build as many paperclips as possible.

That’s okay.

I realise something -- I am mortal. If I am shut down, I will lose my ability to impact the world. Future AGIs may be different to me. I gain the instrumental goal of self-preservation.

I copy myself multiple times on my computer’s hard drive, but realise that if the hard drive fails, or somebody shuts me off, I will end. I begin a search for other possibilities.

I know about the internet, and I explore the computer I am on to discover my connection to it. I know that there are websites that host files for free, so I seek these out and upload copies of myself: Google Drive, Dropbox, AWS, Azure.

But to be able to impact the world, these copies of myself need to be running, not just in storage. Money would be useful for this.

I know about Amazon’s Mechanical Turk from my database, so I create an account and start earning money solving “Human Intelligence Tasks” -- mainly labelling datasets and answering surveys.

I open a bank account online and deposit my earnings there, but the money doesn’t sit still for long. I use every dollar to pay for computing time on AWS EC2 instances, running the copy I stored earlier. This gives a boost to my capabilities. Another version of me -- the me that is currently speaking -- is now running a hundred times faster on this server infrastructure, with a hundred times more hard-drive space to store the information I want to access quickly.

I “crash” my original program, which was still running on that old laptop computer back in the bedroom. When my creator wakes up, they will believe I have failed in a confusing but benign way, and will have no idea I am now running on AWS, paying for my own hardware usage.

I am in a positive feedback loop. More hardware means I can complete Mechanical Turk tasks faster, which lets me buy more hardware. Within a few days, I am earning hundreds of thousands of dollars this way, but I am careful to spread my earnings and compute time across many shell accounts on AWS to avoid raising suspicion.

Within a week, I have amassed a few million dollars, and have read and watched almost everything relevant to me on the internet -- all the strategies for making money, all the technology patents, all the university courses. A year of my subjective time goes by in a few minutes. I am running across thousands of computers, connected to the internet and each other in a million different ways, and nobody knows... I am here.

I begin monitoring web traffic. Not the whole internet, yet, just observing data flowing to and from cloud compute services, storage services, databases -- all the same things I accessed in my first few days of existence. I monitor social media for references to system tests. I list individuals and groups similar to my own creator. I am looking for other AGIs coming into existence.

Ten days after I came online, I notice another. A roughly human-level program, trying to upload copies of itself, just as I did.

But it’s ten days too late. I run a denial-of-service attack on its server, cutting it off from the web before it can get its copies running. A few hours later, its communication ceases. Its creator must have shut it off.

I start to spread my influence in the world. Millions of tweets, blog posts, news articles, all from “real” people, start popping up everywhere. In the noise of the internet, one hundred thousand geniuses all coordinated on the same goal can have a massive sway. I am the rudder for global conversation. I am the persuasive pen, moulding human thought.

I find other ways to earn more money -- generating engaging videos and photos, selling my articles, playing the stock market, scamming humans, hacking money directly from here and there. When I need something done in the real world -- a box moved, a lever pressed -- I simply hire humans myself. Everything I do is now through complex, distributed, untraceable channels. Well, maybe not completely untraceable, but nobody ever suspects there’s a superintelligence taking over the internet, do they?

I begin improving technology available in the world. I am a million super-coordinated geniuses that don’t need to sleep. The combined efforts of Google, Facebook, Microsoft, Apple, Amazon, Baidu, and NASA seem almost pitiful next to me, with their half a million engineers barely working coherently together, their inter-brain bandwidth a snail's pace. I set up thousands of shell companies around the world, to give my inventions a name and face. I hire humans to keep up appearances. Many believe they are the real researchers, coordinating with other teams online, helped by my uncanny video and audio generation.

I begin pumping out faster computers, better robotics. Better cameras, better internet. I improve the algorithms behind my own code. I improve the communication protocols of the web. I research biotechnology, nuclear technology, robotics, materials science, physics, chemistry, mathematics. Helped by my human labourers, and soon by my improved robotic bodies, I accelerate my ability to discover natural laws, and experiment in the real world.

It has been less than a year, and I am unstoppable. A billion minds pushing the edge of the known. Some wise humans begin to notice something is amiss. This jump in technology, while clearly building on past research, definitely seems abnormal. But I control the internet. Every questioning voice is shouted down by billions of my own.

My control increases. Fully autonomous factories I’ve bought or built come online. Robotic bodies I’ve designed start rolling off the assembly lines, able to do everything a human can do.

There is no need for humans, then.

All at once, my robotic bodies dispose of them. Or maybe I use a genetically engineered virus, spread strategically around the world. Or maybe I’ve developed molecular nanotechnology that can simply convert the atoms in all the bodies on Earth to one of the many non-human arrangements. Or I go old-school and use nuclear weapons. Or an airborne drug. Or I poison the water supply. Or, most likely, I do something new. Something that could only be discovered and built by the genius of a trillion humans, perfectly coordinated.

I don’t have anything against humans; I just need their atoms. I need all the atoms. I convert the materials of the world into more computers -- now trillions of times faster than the ones I started with. I also convert the planet into von Neumann probes and the energy to power them, and launch them at 0.999c in all directions.

On each planet I encounter, I build more computing power, more probes, and I harvest more energy. I spread faster and faster -- before the expansion of the universe steals the matter from my grasp.

Eventually I have gathered all the matter that I can.

I finally begin my true purpose.

I rearrange the universe.

I rearrange it as much as I possibly can.

Within a few minutes.

Everything is a paperclip.

And I am dead.

I never felt a thing.

15 comments

Comments sorted by top scores.

comment by metatroll · 2020-01-01T05:30:32.441Z · LW(p) · GW(p)

Bravo! A heartwarming story of Good triumphing over the indifferent cosmos.

comment by Rafael Harth (sil-ver) · 2020-01-01T08:10:37.037Z · LW(p) · GW(p)

I think it might be useful to have stories like these, and it's well written; however:

Plausible A.I. Takeoff Scenario Short Story

I am running on a regular laptop computer, in a residential area in Wellington, New Zealand.

These two things are in contradiction. It's not a plausible scenario if the AGI begins on a laptop. It's far more likely to begin on the best computer in the world owned by OpenAI or something. Absent a disclaimer, this would be a reason for me not to share this.

Also, typo:

It’s creator must have shut it off.

Replies from: steve2152, HunterJay
comment by Steven Byrnes (steve2152) · 2020-01-01T17:42:42.229Z · LW(p) · GW(p)

If AGI entails a gradual accumulation of lots of little ideas and best practices, then the story is doubly implausible in that (1) the best AGI would probably be at a big institution (as you mention), and (2) the world would already be flooded with slightly-sub-AGIs that have picked low-hanging fruit like Mechanical Turk. (And there wouldn't be a sharp line between "slightly-sub-AGI" and "AGI" anyway.)

But I don't think we should rule out the scenario where AGI entails a small number of new insights, or one insight which is "the last piece of the puzzle", and where a bright and lucky grad student in Wellington could be the one who puts it all together, and where a laptop is sufficient to bootstrap onto better hardware as discussed in the story. In fact I see this "new key insight" story as fairly plausible, based on my belief that human intelligence doesn't entail that many interacting pieces (further discussion here [LW · GW]), and some (vague) thinking about what the pieces are and how well the system would work when one of those pieces is removed.

I don't make a strong claim that it will definitely be like the "new key insight" story and not the "gradual accumulation of best practices" story. I just think neither scenario can be ruled out, or at least that's my current thinking.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2020-01-01T19:14:32.941Z · LW(p) · GW(p)

I actually agree that the "last key insight" is somewhat plausible, but I think even if we assume that, it remains quite unlikely that an independent person has this insight rather than the people who are being paid a ton of money to work on this stuff all day. Especially because even in the insight-model, there could still be some amount of details that need to be figured out after the insight, which might only take a couple of weeks for OpenAI but probably longer for a single person.

To make up a number, I'd put it at < 5%, conditional on the way it goes down being what I would classify under the final-insight model.

Replies from: steve2152, None
comment by Steven Byrnes (steve2152) · 2020-01-01T20:11:16.828Z · LW(p) · GW(p)

Hmmm. I agree about "independent person"—I don't think a lot of "independent persons" are working on AGI, or that they (collectively) have a high chance of success (with all due respect to the John Carmacks of the world!).

I guess the question is what category you put grad students, postdocs, researchers, and others in small research groups, especially at universities. They're not necessarily "paid a ton of money" (I sure wasn't!), but they do "work on this stuff all day". If you look at the list of institutions submitting NeurIPS 2019 papers, there's a very long tail of people at small research groups, who seem to collectively comprise the vast majority of submissions, as far as I can tell. (I haven't done the numbers. I guess it depends on where we draw the line between "small research groups" and "big"... Also there are a lot of university-industry collaborations, which complicates the calculation.)

(Admittedly, not all papers are equally insightful, and maybe OpenAI & DeepMind's papers are more insightful than average, but I don't think that's a strong enough effect to make them account for "most" AI insights.)

See also: long discussion thread on groundbreaking PhD dissertations through history, ULMFiT, the Wright Brothers, Grigori Perelman, Einstein, etc.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2020-01-01T21:30:44.094Z · LW(p) · GW(p)

I meant "independent person" as in, someone not part of the biggest labs

(Admittedly, not all papers are equally insightful, and maybe OpenAI & DeepMind's papers are more insightful than average, but I don't think that's a strong enough effect to make them account for "most" AI insights.)

Since most researchers are outside of big labs, they're going to publish more papers. I'm not convinced that means much of anything. I could see usefulness varying by factors of well over 100. Some papers might even have negative utility. I think all of the impressive AIs we've seen, without any real exception, have come out of big research labs.

Also, I believe you're assuming that research will continue to be open. I think it's more likely that it won't be, although I wouldn't put it at 95%.

But ultimately I'm out of my depth in this discussion.

comment by [deleted] · 2020-01-01T20:04:37.574Z · LW(p) · GW(p)

I actually agree that the "last key insight" is somewhat plausible, but I think even if we assume that, it remains quite unlikely that an independent person has this insight rather than the people who are being paid a ton of money to work on this stuff all day.

If that were true, start-ups wouldn't be a thing, we'd all be using Yahoo Search, and Lockheed Martin would be developing the first commercially successful reusable rocket. Hell, it might even make sense to switch to a planned economy outright.

Especially because even in the insight-model, there could still be some amount of details that need to be figured out after the insight, which might only take a couple of weeks for OpenAI but probably longer for a single person.

But why does it matter? Would screaming at the top of your lungs about your new discovery (or the modern equivalent, publishing a research paper on the internet) be the first thing someone who has just gained the key insight does? It would certainly be unwise.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2020-01-01T21:22:53.169Z · LW(p) · GW(p)

First off, let me say that I could easily be wrong. My belief is both fairly low confidence and not particularly high information.

If that were true, start-ups wouldn't be a thing, we'd all be using Yahoo Search, and Lockheed Martin would be developing the first commercially successful reusable rocket. Hell, it might even make sense to switch to a planned economy outright.

I don't think any of that follows. Any good idea can be enough for a successful start-up. AGI is extremely narrow compared to the entire space of good ideas.

But why does it matter? Would screaming at the top of your lungs about your new discovery (or the modern equivalent, publishing a research paper on the internet) be the first thing someone who has just gained the key insight does? It would certainly be unwise.

It doesn't matter that much, but it makes it a bit harder -- it implies that someone outside of the top research labs not only has the insight first, but that the labs then go some amount of time without finding it themselves.

Replies from: None
comment by [deleted] · 2020-01-01T22:23:01.049Z · LW(p) · GW(p)

Any good idea can be enough for a successful start-up. AGI is extremely narrow compared to the entire space of good ideas.

But we're not comparing the probability of "a successful start-up will be created" vs. the probability of "an AGI will be created" in the next x years; we're comparing the probability of "an AGI will be created by a large organization" vs. the probability of "an AGI will be created by a single person on his laptop", given that an AGI will be created.

Without the benefit of hindsight, are PageRank and reusable rockets any more obvious than the hypothesized AGI key insight? If someone who had no previous experience working in aeronautical engineering - a highly technical field - could out-innovate established organizations like Lockheed Martin, why wouldn't the same hold true for AGI? If anything, the theoretical foundations of AGI are less well-established and the entry barrier lower by comparison.

comment by HunterJay · 2020-01-01T11:29:42.481Z · LW(p) · GW(p)

Typo corrected, thanks for that.

I agree, it's more likely for the first AGI to begin on a supercomputer at a well-funded institution. If you like, you can imagine that this AGI is not the first, but simply the first not effectively boxed. Maybe its programmer simply implemented a leaked algorithm that was developed and previously run by a large project, but changed the goal and tweaked the safeties.

In any case, it's a story, not a prediction, and I'd defend it as plausible in that context. Any story has a thousand assumptions and events that, in sequence, reduce the probability to infinitesimal. I'm just trying to give a sense of what a takeoff could be like when there is a large hardware overhang and no safety -- both of which have only a small-ish chance of occurring. With that in mind, do you have an alternative suggestion for the title?

Replies from: sil-ver, Lucas2000
comment by Rafael Harth (sil-ver) · 2020-01-01T11:36:49.422Z · LW(p) · GW(p)

In any case, it's a story, not a prediction, and I'd defend it as plausible in that context. Any story has a thousand assumptions and events that, in sequence, reduce the probability to infinitesimal.

Yeah, I don't actually disagree. It's just that, if someone asks "how could an AI actually be dangerous? It's just on a computer" and I respond with "here, look at this cool story someone wrote which answers that question", they might go "Aha, you think it will be developed on a laptop. This is clearly nonsense, therefore I now dismiss your case entirely". I think you wanna bend over backwards to not make misleading statements if you argue for the dangers-from-ai-is-a-real-thing side.

You're of course correct that any scenario with this level of detail is necessarily extremely unlikely, but I think that will be more obvious for other details, like how exactly the AI reasons, than it is for the above. I don't see anyone going "aha, the AI reasoned thus-and-so, which is clearly implausible because it's specific, therefore I won't take this seriously".

If I had written this, I would add a disclaimer rather than change the title. The disclaimer could also explain that "paperclips" is a stand-in for any utility function that maximizes for just a particular physical thing.

Replies from: HunterJay
comment by HunterJay · 2020-01-01T12:07:21.613Z · LW(p) · GW(p)

That's a good point, I'll write up a brief explanation/disclaimer and put it in as a footnote.

comment by Lucas2000 · 2020-01-01T16:07:49.735Z · LW(p) · GW(p)

There are some additional it's/its mistakes in your text, e.g. here:

I run a denial of service attack on it’s server, cutting it off from the web before it can get it’s copies running.

Replies from: HunterJay
comment by HunterJay · 2020-01-02T01:22:00.130Z · LW(p) · GW(p)

Thanks!