Heading off a near-term AGI arms race

post by lincolnquirk · 2012-08-22T14:23:58.382Z · LW · GW · Legacy · 70 comments

I know people have talked about this in the past, but now seems like an important time for some practical brainstorming. Hypothetical: the recent $15mm Series A funding of Vicarious by Good Ventures and Founders Fund sets off a wave of $450mm in funded AGI projects of approximately the same scope over the next ten years. Let's estimate that a third of that goes to paying for man-years of actual, low-level, basic AGI capabilities research. That's about 1500 man-years. Anything that can show something resembling progress can easily secure another few hundred man-years to continue making progress.
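
For concreteness, here's that arithmetic spelled out as a minimal sketch (the roughly $100k fully loaded cost per researcher-year is an assumption, implied by rather than stated in the hypothetical):

```python
# Back-of-the-envelope version of the estimate above.
total_funding = 450e6        # $450mm of follow-on AGI funding over ten years
research_fraction = 1 / 3    # share assumed to pay for low-level capabilities work
cost_per_man_year = 100e3    # assumed fully loaded cost of one researcher-year

man_years = total_funding * research_fraction / cost_per_man_year
print(f"{man_years:.0f} man-years")  # -> 1500 man-years
```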

Now, if this scenario comes to pass, it seems like one of the worst-case scenarios -- if AGI is possible today, that's a lot of highly incentivized, funded research to make it happen, without strong safety incentives. It seems to depend on VCs realizing the high potential impact of an AGI project, and on the companies having access to good researchers.

The Hacker News thread suggests that some people (VCs included) probably already realize the high potential impact, without much consideration for safety:

...I think this [is] exactly the sort of innovation timeline real venture capitalists should be considering - funding real R&D that could have a revolutionary impact even if the odds are against it.

The company to get all of this right will be the first two trillion dollar company.

Is there any way to reverse this trend in public perception? Is there any way to reduce the number of capable researchers? Are there any other angles of attack for this problem?

I'll admit to being very scared.

70 comments

comment by Daniel_Burfoot · 2012-08-22T17:28:42.831Z · LW(p) · GW(p)

That's about 1500 man-years.

1.5e3 is not large compared to the total number of man-years spent on AI, which is probably more like 1.5e5. There are probably 1e4 researchers in AI-related fields, so we're producing at least 1e4 man-years of effort per year. It may be that private sector projects are more promising/threatening than academic projects, but it seems implausible that this would be a 100x effect.
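
Putting rough numbers on that comparison (the per-year split is just the hypothetical's ten-year window divided evenly):

```python
# Rough comparison of the hypothetical's funded effort with the field's output.
field_man_years_total = 1.5e5      # ballpark total man-years already spent on AI
field_man_years_per_year = 1e4     # ~1e4 researchers, ~1e4 man-years per year

scenario_man_years = 1.5e3         # the hypothetical's funded AGI man-years
scenario_per_year = scenario_man_years / 10   # spread over ten years

print(scenario_man_years / field_man_years_total)    # 0.01 of all effort so far
print(scenario_per_year / field_man_years_per_year)  # 0.015 of one year's output
# For the scenario to dominate, a private-sector man-year would have to be
# worth something like 100x an ordinary AI-research man-year.
```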

Replies from: atucker
comment by atucker · 2012-08-23T03:41:48.527Z · LW(p) · GW(p)

AI-related fields and AGI-related fields are very different in terms of P(uFAI).

For the most part, narrow AI and machine learning don't overlap that much with AGI theory in the way that, say, AIXI does.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-08-23T07:22:29.591Z · LW(p) · GW(p)

For the most part, narrow AI and machine learning don't overlap that much with AGI theory in the way that, say, AIXI does.

Interesting characterization - my hunch would have been that AIXI is an interesting thought experiment but ultimately of little to no practical value, while machine learning research seems to be uncovering all kinds of domain-general ways of learning and reasoning.

Replies from: Nornagest
comment by Nornagest · 2012-08-23T08:31:49.853Z · LW(p) · GW(p)

My experience with applied machine learning is strictly undergraduate-level, modulo a little tinkering and a little industry experience, so these impressions might be quite unlike those of an actual specialist. But my sense is that while the field comes up with a lot of interesting stuff that might potentially be useful in making a hypothetical AGI, it ultimately isn't that interested in generalizing outside domain-specific approaches, and that limits its bandwidth to a large extent.

Machine-learning algorithms are treated as -- not exactly a black box, but pretty well distinguished from the task-level inputs and outputs. For example, you might have a pretty clever neural-network variation that no one's ever used before, but most of the actual work in the associated project is probably going to go into highly specialized preprocessing to render down inputs into an easily digestible form. And that's going to do you exactly no good at all if you want to use the same technique on a different class of inputs.

(This can be a little irritating for non-AI people too, by the way. An old coworker of mine has a lengthy rant about how all the dominant algorithms for a particular application permute the inputs in all kinds of fantastically clever ways but then end with "and then we feed it into a neural network".)
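
To make that concrete, here's a minimal sketch of the shape such a project usually takes (a hypothetical spam-filtering task; the features and the stub learner are purely illustrative, not any particular system's code):

```python
# Sketch of the pattern described above: nearly all the code is domain-specific
# preprocessing, and the learner at the end is an interchangeable black box.
import math
from collections import Counter

def extract_features(email_text):
    """Highly task-specific: render one email down into a fixed-length vector.
    None of this transfers to, say, images or audio."""
    raw_words = email_text.split()
    counts = Counter(w.lower() for w in raw_words)
    return [
        len(raw_words),                            # message length
        counts["free"] + counts["winner"],         # spammy keywords
        sum(1 for w in raw_words if w.isupper()),  # SHOUTING
        math.log1p(email_text.count("!")),         # exclamation marks
    ]

def train_black_box(feature_vectors, labels):
    """Stand-in for "and then we feed it into a neural network": any
    off-the-shelf classifier could be dropped in here, e.g.
    sklearn.neural_network.MLPClassifier().fit(feature_vectors, labels)."""
    return list(zip(feature_vectors, labels))  # placeholder "model"

emails = ["FREE WINNER click now!!!", "Lunch tomorrow?"]
labels = [1, 0]  # 1 = spam, 0 = not spam
model = train_black_box([extract_features(e) for e in emails], labels)
```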

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-08-23T10:40:40.456Z · LW(p) · GW(p)

I would agree that the specific applications that machine learning generally pursues are useless for general AI, but the general theory that the field develops and uses (e.g. probabilistic networks, support vector machines, various clustering techniques, etc.) seems like something that AGI would eventually be built on. Of course, the narrow applications get more funding than the general theory, but that's how it always is. My knowledge/experience of ML is probably even less than yours, though.

I have this (OpenCog-influenced) mental image of a superintelligent AGI equipped with a huge arsenal of various reasoning and analysis techniques, and when it encounters a novel problem which it doesn't know how to solve, it'll just throw everything it has at it (prioritizing techniques that have worked on similar problems before) until it starts making progress. (For an "artistic" depiction of the same, see "AI thought process visualization, part II" here.) The hard part of such an AGI would then be mostly in finding a good data format that could efficiently represent the outputs of all those different thought mechanisms, and in balancing the interactions of the various modules. (I have no idea how realistic this vision is, and even less of an idea about how to make such an AGI Friendly.)
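
A toy sketch of that picture (hypothetical technique names, nothing OpenCog-specific; just a dispatcher that prefers techniques with a good track record on similar problems):

```python
# Keep a pool of techniques, try them on a novel problem in order of how often
# they've worked on similar problems before.
from collections import defaultdict

class Arsenal:
    def __init__(self, techniques):
        self.techniques = techniques                             # name -> callable
        self.successes = defaultdict(lambda: defaultdict(int))   # kind -> name -> wins

    def solve(self, problem_kind, problem):
        # Prioritize techniques with the best track record on this kind of problem.
        ranked = sorted(self.techniques,
                        key=lambda name: self.successes[problem_kind][name],
                        reverse=True)
        for name in ranked:
            result = self.techniques[name](problem)
            if result is not None:                               # "made progress"
                self.successes[problem_kind][name] += 1
                return name, result
        return None, None

# Illustrative stand-ins for real reasoning modules.
def brute_force(p):   return sum(p) if isinstance(p, list) else None
def pattern_match(p): return p.upper() if isinstance(p, str) else None

arsenal = Arsenal({"brute_force": brute_force, "pattern_match": pattern_match})
print(arsenal.solve("arithmetic", [1, 2, 3]))   # ('brute_force', 6)
print(arsenal.solve("text", "hello"))           # ('pattern_match', 'HELLO')
```

The hard parts mentioned above - a shared data format and balancing the modules - are exactly what the "made progress" check hand-waves over here.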

Replies from: latanius
comment by latanius · 2012-08-24T01:55:48.468Z · LW(p) · GW(p)

I think it's largely true: the narrow-AI "arsenal" currently being developed often comes up with results that seem to be transferable between fields. For example, there is a recent paper that applies the same novel strategy to both image understanding and natural language sentence parsing, with success in each. Although you often need lots of tinkering to get state-of-the-art results, producing the same quality using just a general method without any parameters seems to make a good paper.

And while the problem of how to build an AGI is not directly solved by these, we certainly get closer to it using them. (You still need a module to recognize/imagine/process visual data, unless the solution is something really abstract like AIXI...)

comment by lsparrish · 2012-08-22T23:07:40.452Z · LW(p) · GW(p)

By "heading off" I think we should be clear that we are referring to go stones, not some form of sabotage. How can we ensure there will be better safety incentives over the next few decades? That sort of thing.

comment by David_Gerard · 2012-08-22T14:39:40.050Z · LW(p) · GW(p)

It's possible that, if the feasibility just isn't there yet no matter the funding, it'll turn out like nanotechnology - funding for molecule-sized robots that gets spent on chemistry instead. (I wonder what the "instead" would be in this case.)

Replies from: Eudoxia, roystgnr, Luke_A_Somers
comment by Eudoxia · 2012-08-22T15:06:03.859Z · LW(p) · GW(p)

Narrow AI and machine learning?

Replies from: David_Gerard
comment by David_Gerard · 2012-08-22T15:28:22.745Z · LW(p) · GW(p)

Sounds about right. With the occasional driverless car, which is really pretty amazing.

Replies from: billswift
comment by billswift · 2012-08-22T16:02:20.215Z · LW(p) · GW(p)

I think a working AGI is more likely to result from expanding or generalizing a working driverless car than from an academic program somewhere. A program to improve the "judgement" of a working narrow AI strikes me as a much more plausible route to AGI.

Replies from: Kaj_Sotala, Douglas_Knight, Eliezer_Yudkowsky, atucker, jmmcd
comment by Kaj_Sotala · 2012-08-23T07:27:12.310Z · LW(p) · GW(p)

Our evolutionary history would seem to support this view - to a first approximation, general intelligence seems to have evolved by stacking one narrow-intelligence module on top of another.

Spiders are pretty narrow intelligence, rats considerably less so.

Replies from: JulianMorrison
comment by JulianMorrison · 2012-08-24T22:14:28.496Z · LW(p) · GW(p)

And Legoland is built by stacking bricks. But try deriving Legoland by generalizing a 2x2 blue square.

comment by Douglas_Knight · 2012-08-22T21:36:05.998Z · LW(p) · GW(p)

Note that the driverless car itself came from "an academic program somewhere."

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-23T21:39:54.510Z · LW(p) · GW(p)

There are proverbs about how trying to generalize your code will never get to AGI. These proverbs are true, and they're still true when generalizing a driverless car. I might worry to some degree about free-form machine learning algorithms at hedge funds, but not about generalizing driverless cars.

Replies from: MugaSofer, latanius
comment by MugaSofer · 2012-09-17T13:24:13.645Z · LW(p) · GW(p)

There go my wild theories about Cars' backstory.

Replies from: bogus
comment by bogus · 2012-09-17T14:58:00.393Z · LW(p) · GW(p)

Fear not. There is actual research being done on making self-driving cars more anthropomorphic, in order to enable better communication with pedestrians.

comment by latanius · 2012-08-24T02:00:12.629Z · LW(p) · GW(p)

Current narrow AIs are unlikely to generalize into AGI, but they contain parts that can be used to build one :)

comment by atucker · 2012-08-23T03:45:01.830Z · LW(p) · GW(p)

Narrow-AI driverless cars will probably not decide that they need to take over the world in order to get to their destination in the most efficient way. Even if doing so would serve that goal better, I would be very surprised if they decided to model the world that generally for the purposes of driving.

There's only so much modeling of the world/general capability you need in order to solve very domain-specific problems.

Replies from: billswift
comment by billswift · 2012-08-23T13:32:12.093Z · LW(p) · GW(p)

The reason for expanding a narrow AI is the same as the reason a tool agent won't stay restricted: the narrow domain it is designed to function in is embedded in the complexity of the real world. Eventually someone is going to realize that the agent/AI could provide better service if it understood more about how its job fits into the broader concerns of its passengers/users/customers, and decide to do something about it.

Replies from: atucker
comment by atucker · 2012-08-23T15:33:04.330Z · LW(p) · GW(p)

AIXI is widely applicable because it tries to model every possible program that the universe could be running, and eventually it starts finding programs that fit.
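
As a toy illustration of what "model every possible program" means in miniature (this is not AIXI itself, which is incomputable; the hypothesis set here is a tiny hand-picked stand-in for "all programs"):

```python
# Solomonoff-flavored toy: weight every candidate "program" by simplicity, keep
# the ones consistent with observations so far, and predict with the mixture.
HYPOTHESES = {
    # name: (description length in bits, predictor: history -> next bit)
    "always_0":    (2, lambda h: 0),
    "always_1":    (2, lambda h: 1),
    "alternate":   (4, lambda h: 1 - h[-1] if h else 0),
    "repeat_last": (4, lambda h: h[-1] if h else 0),
}

def consistent(predict, history):
    """Does this program reproduce the observed history?"""
    return all(predict(history[:i]) == history[i] for i in range(len(history)))

def predict_next(history):
    weights = {}
    for name, (length, predict) in HYPOTHESES.items():
        if consistent(predict, history):
            weights[name] = 2.0 ** -length      # simplicity prior
    total = sum(weights.values())
    p_one = sum(w for name, w in weights.items()
                if HYPOTHESES[name][1](history) == 1) / total
    return p_one

print(round(predict_next([0]), 3))           # several programs still fit -> P(1) ~ 0.167
print(round(predict_next([0, 1, 0, 1]), 3))  # only "alternate" fits -> P(1) = 0.0
```

The driverless-car case below is the opposite extreme: a fixed, narrow hypothesis class ("track user habits") rather than anything like the space of all programs.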

Driverless cars may start modeling things other than driving, and may even start trying to predict where their users are going to be, but I suspect they would just track user habits or their smartphones, rather than trying to figure out their owners' economic and psychological incentives for going to different places.

Trying to build a car that's generally capable of driving and figuring out new things about driving might be dangerous, but there are plenty of useful features to give people before they get there.

Just wondering, is your intuition coming from the tighter tie to reality that a driverless car would have?

Replies from: Kawoomba
comment by Kawoomba · 2012-08-23T17:14:15.035Z · LW(p) · GW(p)

"It was terrible, officer ... my mother, she was so happy with her new automatic car! It seemed to anticipate her every need! Even when she forgot where she wanted to go, in her old age, the car would remember and take her there ... she had been so lonely ever since da' passed. I can't even fathom how the car got into her bedroom, or what it was, oh god, what it was ... doing to her! The car, it still ... it didn't know she was already ... all that blood ..."

comment by jmmcd · 2012-08-22T17:26:45.585Z · LW(p) · GW(p)

Has LW, or some other forum, held any useful previous discussion on this topic?

Replies from: Manfred
comment by Manfred · 2012-08-22T18:21:43.617Z · LW(p) · GW(p)

Not that I know of, but I'm pretty sure billswift's position does not represent that of most LWers.

Replies from: Dolores1984
comment by Dolores1984 · 2012-08-22T20:18:07.960Z · LW(p) · GW(p)

It certainly doesn't represent mine. The architectural shortcomings of narrow AI do not lend themselves to gradual improvement. At some point, you're hamstrung by your inability to solve certain crucial mathematical issues.

Replies from: billswift, jmmcd
comment by billswift · 2012-08-23T13:42:13.831Z · LW(p) · GW(p)

You add a parallel module to solve the new issue and a supervisory module to arbitrate between them. There are more elaborate systems that could likely work better for many particular situations, but even this simple system suggests there is little substance to your criticism. See Minsky's Society of Mind, or some papers on modularity in evolutionary psych, for more details.
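
A minimal sketch of that pattern (module and signal names are made up; the "arbitration" here is just picking the most confident proposal):

```python
# Specialist modules run in parallel; a supervisory module arbitrates.
class Module:
    def __init__(self, name, competence):
        self.name = name
        self.competence = competence   # situation tags this module handles well

    def propose(self, situation):
        # Confidence is just tag overlap here; a real module would do actual work.
        confidence = len(self.competence & situation["tags"]) / max(len(situation["tags"]), 1)
        return {"module": self.name, "action": f"{self.name}:respond", "confidence": confidence}

class Supervisor:
    def __init__(self, modules):
        self.modules = modules

    def decide(self, situation):
        proposals = [m.propose(situation) for m in self.modules]
        return max(proposals, key=lambda p: p["confidence"])   # simple arbitration

agent = Supervisor([
    Module("lane_keeping", {"road", "lane_markings"}),
    Module("pedestrian_avoidance", {"road", "pedestrian"}),
])
print(agent.decide({"tags": {"road", "pedestrian"}}))
# -> pedestrian_avoidance wins the arbitration for this situation
```

Whether arbitration like this scales past a handful of well-separated domains is exactly what the rest of this exchange disputes.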

Replies from: Dolores1984
comment by Dolores1984 · 2012-08-23T15:30:04.124Z · LW(p) · GW(p)

Sure, you can add more modules. Except that then you've got a car-driving module, and a walking module, and a stacking-small-objects module, and a guitar-playing module, and that's all fine until somebody needs to talk to it. Then you've got to write a conversation module capable of passing a Turing test, and (as it turns out) having a self-driving car really doesn't make that any easier.

Replies from: V_V
comment by V_V · 2012-08-23T23:04:49.310Z · LW(p) · GW(p)

Do you realize that human intelligence evolved exactly that way? A self-swimming fish brain with lots of modules haphazardly attached.

Replies from: Dolores1984
comment by Dolores1984 · 2012-08-23T23:23:24.377Z · LW(p) · GW(p)

Evolution and human engineers don't work in the same ways. It also took evolution three million years.

Replies from: V_V
comment by V_V · 2012-08-24T00:07:11.076Z · LW(p) · GW(p)

True enough, but there is no evidence that general intelligence is anything more than a large collection of specialized modules.

comment by jmmcd · 2012-08-23T12:16:32.380Z · LW(p) · GW(p)

I believe you, but intuitively the first objection that comes to my mind is that a car-driving AI doesn't have the same type of "agent-ness" and introspection that an AGI would surely need. I'd love to read more about it.

comment by roystgnr · 2012-08-22T15:59:43.310Z · LW(p) · GW(p)

Best case scenario, it'll turn out like space travel: something that we "did already" but that wasn't nearly as interesting as all those wild-eyed dreamers hoped.

I don't see that happening in this context, though; with space travel, we "cheated" our way to a spectacular short-term goal by using politically motivated blank checks while ignoring longer-term economics. Competing venture capitalists are less likely to ignore long-term economics, and any "cheating" is likely to mean shortcuts with regards to safety, not sustainability.

comment by Luke_A_Somers · 2012-08-22T18:07:47.550Z · LW(p) · GW(p)

if the feasibility just isn't there yet no matter the funding

That's a heck of a condition, and this condition failing seems like our best hope for survival, if the 'spirit' of the original hypothetical holds - that this work ends up really taking off, in practical systems.

comment by palladias · 2012-08-22T15:32:28.348Z · LW(p) · GW(p)

I'm curious at what likelihood of AGI imminence SI or LessWrong readers would think it was a good idea to switch over to an ecoterrorist strategy. The day before the badly vetted machine is turned on is probably a good day to set the charges to blow during the night shift. The funding stage of this project is probably too early.

Do people think SI should be devoting more of its time and resources to corporate espionage and/or sabotage if unfriendly AI is the most pressing existential threat?

Replies from: None, David_Gerard, Epiphany
comment by [deleted] · 2012-08-22T15:50:19.540Z · LW(p) · GW(p)

I vote that criminal activity shouldn't be endorsed in general.

Replies from: Vaniver, army1987, palladias, palladias
comment by Vaniver · 2012-08-23T04:39:59.174Z · LW(p) · GW(p)

On first reading, I read your name as "jailbot," which seemed pretty appropriate for this comment.

comment by A1987dM (army1987) · 2012-08-22T22:32:45.133Z · LW(p) · GW(p)

Discussions of illicit drugs or ways of getting copyrighted material without the consent of the copyright holder aren't unprecedented on LW.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2012-08-23T01:42:30.439Z · LW(p) · GW(p)

With the difference that many people think it may have been a mistake to make those things illegal to begin with. People considering industrial sabotage to stop UFAI probably don't think that industrial sabotage should be legal in general.

comment by palladias · 2012-08-22T16:11:37.994Z · LW(p) · GW(p)

I see this question as analogous to the discussion in the Brain Preservation Foundation thread about whether not donating reveals preferences or exposes belief in belief. Why is asking about non-lethal sabotage so qualitatively different that it can't get at the same question?

comment by palladias · 2012-08-22T16:07:02.636Z · LW(p) · GW(p)

Because it's bad tactics to endorse it in the open, or because sabotaging unfriendly AI research is a case of "not even if it's the right thing to do"?

I assume you'd slow down or kibosh a not-proved-to-be-friendly AGI project if you had the authority to do so. But you wouldn't interfere if you didn't have legitimate authority over the project? There are plenty of legal but still ethical-norm-breaking opportunities for sabotage (deny the people on the project tenure if you're at a university, hire the best researchers away, etc.).

Do you think this shouldn't be discussed out of respect for the law, out of respect for the autonomy of researchers, or a mix of both?

Replies from: Xachariah, jmmcd, Bruno_Coelho, IlyaShpitser
comment by Xachariah · 2012-08-22T20:11:41.888Z · LW(p) · GW(p)

If we were certain a uFAI were going online in a matter of days, it would be everyone's responsibility to stop it by any means possible. Imminent threat to humanity and all that.

However, it's a very low probability that it'll ever get to that point. Talking about and endorsing (hypothetical) unethical activity will impose social costs in the meantime. So it's a net negative to discuss it.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-08-22T23:49:40.845Z · LW(p) · GW(p)

However, it's a very low probability that it'll ever get to that point.

What specifically do you consider low probability? That an uFAI will ever be launched, or that there will be an advance high credibility warning?

Replies from: gwern, Xachariah
comment by gwern · 2012-08-22T23:59:00.832Z · LW(p) · GW(p)

I'd argue the latter. It's hard to imagine how you could know in advance that a uFAI has a high chance of working, rather than being one of thousands of ambitious AGI projects that simply fail.

(Douglas Lenat comes to you, saying that he's finished a powerful fully general self-modifying AI program called Eurisko, which has done very impressive things in its early trials, so he's about to run it on some real-world problems on a supercomputer with Internet access; and by the way, he'll be alone all tomorrow fiddling with it, would you like to come over...)

comment by Xachariah · 2012-08-23T01:47:25.395Z · LW(p) · GW(p)

Sorry, I was imprecise. I consider it likely that eventually we'll be able to make uFAI, but unlikely that any particular project will make uFAI. Moreover, we probably won't get appreciable warning for uFAI because if researchers knew they were making a uFAI then they wouldn't make one.

Thus, we have to adopt a general strategy that can't target any specific research group. Sabotage does not scale well, and would only drive research underground while imposing social costs on us in the meantime. The best bet, then, is to promote awareness of uFAI risks and try to have Friendliness theory completed by the time the first AGI goes online. Not surprisingly, this seems to be what SIAI is already doing. Discussion of sabotage just harms that strategy.

comment by jmmcd · 2012-08-22T17:42:48.901Z · LW(p) · GW(p)

It might damage LW's credibility among decision-makers and public opinion. Of course, it might improve LW's credibility among certain other groupings. PR is a tricky balancing act. PETA is a good example in both cases.

Replies from: V_V
comment by V_V · 2012-08-23T22:36:31.855Z · LW(p) · GW(p)

Indeed. Comments like those increase my belief that LW is home to some crazy, dangerous doomsday cult that stores weapons caches and prepares terror attacks.

(I still don't assign a high probability to that belief, but it's still higher than for most communities I know.)

I came here from OB, and I lurked a bit before posting precisely because I didn't like this kind of undertone. If that attitude becomes more prevalent I will probably go away to avoid any association.

Replies from: enoonsti
comment by enoonsti · 2012-08-26T23:30:31.108Z · LW(p) · GW(p)

If that attitude becomes more prevalent I will probably go away to avoid any association.

I was going to say: "Well, on the bright side, at least your username is not really Googleable." Then I Googled it just for fun, and found you on the first page of the results (゜レ゜)

comment by Bruno_Coelho · 2012-08-22T22:10:40.963Z · LW(p) · GW(p)

Intelligence does not imply benevolence. Surely there are already people who will try to sabotage unFriendly projects.

comment by IlyaShpitser · 2012-08-22T17:02:38.375Z · LW(p) · GW(p)

I don't think you quite understand the hammer that will come down if anything comes of your questions. Nothing of what you've built will be left. I don't think many non-illegal sabotage avenues are open to this community. You can't easily influence the tenure process, and hiring the best researchers away is notoriously difficult, even for very good universities/labs.


Re: OP, I think you are worried over nothing.

Replies from: palladias, Epiphany
comment by palladias · 2012-08-22T17:27:35.229Z · LW(p) · GW(p)

That's why I asked whether LessWrongers would prefer SI to devote more of its time to slowing down other people's unfriendly AI relative to how much time it spends constructing FAI. I agree, SI staff shouldn't answer.

Replies from: IlyaShpitser, Epiphany
comment by IlyaShpitser · 2012-08-22T17:34:36.900Z · LW(p) · GW(p)

I think any sequence of events that leads to anyone at all in any way associated with either LessWrong or SI doing anything to hinder any research would be a catastrophe for this community. At best, you will get a crank label (more than now, that is); at worst, the FBI will get involved.

Replies from: David_Gerard, Xachariah
comment by Xachariah · 2012-08-22T19:55:59.642Z · LW(p) · GW(p)

Yes. It's much better to tile the universe with paperclips than to have this community looked on poorly. How ever could he have gotten his priorities so crossed?

comment by Epiphany · 2012-08-25T02:02:10.940Z · LW(p) · GW(p)

If there is a big enough AI project out there, especially if it will be released as freeware, others won't work on competing projects. That would be high-risk and result in a low return on investment.

Three ideas to prevent unfriendly AGI (Scroll to "Help good guys beat the arms race")

Also, I think my other two risky-AGI-deterring ideas are doable simultaneously. Not sure how many people it would take to get those moving on a large enough scale, but it's probably nowhere near as many as making a friendly AGI would take.

comment by David_Gerard · 2012-08-22T19:57:19.606Z · LW(p) · GW(p)

This question has already been raised on LW.

Replies from: palladias
comment by palladias · 2012-08-22T21:07:59.877Z · LW(p) · GW(p)

Merci!

comment by Epiphany · 2012-08-23T04:27:25.331Z · LW(p) · GW(p)

Sabotage would not work. Several reasons:

  • If the people working on AGI projects fear sabotage, they'll just start working on them in private. Then you'd be lulled into a sense of complacency, thinking they're not working on it. You would fail to take action, even legal action. Then, one day, the AGI will be released, and it will be too late.

  • Anybody who sets out to make an AGI without first putting a lot of thought into safety is either really risk-taking, really stupid, or really crazy. People like that do not respond to threats the way normal people do, so sabotage would be really, really ineffective on them. You can look up the recidivism rate for people who are put into jail and see that there are a great many people who are not stopped by punishment. If you do some research on the kinds of risks business people take, you'll see the same risk-taking attitude, only in a more constructive form.

  • People who previously didn't know anything about the AGI project would view the AGI company as the good guys and the saboteurs as the bad guys. Whenever violence gets involved, opinions polarize, permanently. The saboteurs would find it 10,000 times harder just to exist; they'd lose a huge amount of support, making it 10,000 times harder again. MUCH WORSE would be the biasing effect of "stimulus generalization" - the psychological effect that causes people to feel prejudiced against, say, Middle Easterners in general because they fear terrorists. If those who want to protect the world from risky AGI begin to sabotage projects, public opinion may just round off and lump everybody warning against risky AGI under the "terrorist" label. They might feel such a strong bias that they assume anyone warning about unfriendly AGI is a psycho and ignore the danger of unfriendly AGI completely.

  • There will be numerous projects in numerous places at numerous times. It wouldn't be a matter of stopping just one project, or projects in just one country. The challenge here is to make ALL the projects in the entire WORLD safe - not just the ones we have now, but ALL the ones that might exist forever into the future. Whatever we do to increase the safety of this research would have to affect ALL of the projects. There's no way in hell anybody could be THAT effective at sabotage. All three of the ideas I came up with to prevent risky AGI have the potential to scale to the rest of the world and last through that indefinite time period.

  • Most people are subject to optimism bias; it's a common bias. People think "Oh look, something dangerous happened over there. It won't happen to me." I have observed that upper-class people often have especially strong optimism bias. I think this is because many of them have led lives that were very sheltered and privileged, and since that type of life frequently results in a dearth of the sort of "wake-up calls" that cause one to challenge optimism bias, they frequently act as if nothing can go wrong. I can see them easily ignoring the risk of sabotage, or paying more for security and assuming that means they are safe.

    Sabotage would not prevent risky AGIs from being created. Period. And it could make things worse. If you really want to do something about dangerous AGI, put all that energy into spreading the word about the dangers.

See Also Three Possible Solutions

Replies from: MugaSofer
comment by MugaSofer · 2012-11-16T11:29:26.749Z · LW(p) · GW(p)

If the people working on AGI projects fear sabotage, they'll just start working on them in private. Then you'd be lulled into a sense of complacency, thinking they're not working on it. You would fail to take action, even legal action. Then, one day, the AGI will be released, and it will be too late.

Anybody who sets out to make an AGI without first putting a lot of thought into safety is either really risk-taking, really stupid, or really crazy. People like that do not respond to threats the way normal people do, so sabotage would be really, really ineffective on them. You can look up the recidivism rate for people who are put into jail and see that there are a great many people who are not stopped by punishment. If you do some research on the kinds of risks business people take, you'll see the same risk-taking attitude, only in a more constructive form.

It's possible you've partially misunderstood the purpose of this idea; such sabotage would not be a deterrent to be publicised, but a tactic to permanently derail any uFAI that nears completion.

comment by Epiphany · 2012-08-23T04:21:09.605Z · LW(p) · GW(p)

Convince programmers to refuse to work on risky AGI projects:

Please provide constructive criticism.

We're in an era where the people required to make AGI happen are in such demand that if they refused to work on an AGI that wasn't safe, they'd still have plenty of jobs left to choose from. You could convince programmers to adopt a policy of refusing to work on unsafe AGI. These specifics would be required:

  • Make sure that programmers at all levels have a good way to determine whether the AGI they're working on has proper safety mechanisms in place. Sometimes employees get such a small view of their job, and are told such confident fluff by management, that they have no idea what is going on. I am not qualified to do this, but if someone reading this post is, it might be very important to write some guidelines for how programmers can tell, from within their position, whether the AGI they're working on might be unsafe. It may be more effective to give them a confidential hotline. Things can get complicated, both in programming and in corporate culture, and employees may need help sorting out what's going on.

  • You could create resources to help programmers organize a strike or walkout. Things like: an anonymous web interface where people interested in striking can post their intent - this would help momentum build - and a place for people to post stories about how they took action against unsafe AI projects. They might not know how to organize otherwise (especially in large projects), or might need the inspiration to get moving.

  • If a union is formed around technological safety, the union could demand that outside agencies be allowed to check on the project, and that the company be forthcoming with all safety-related information.

On the feasibility of getting through to the programmers

See Also "Sabotage would not work"

Replies from: Epiphany
comment by Epiphany · 2012-10-26T05:08:26.985Z · LW(p) · GW(p)

Gwern responded to my comment in his Moore's Law thread. I don't know why he responded over there instead of over here, but I decided that it was more organized to relocate the conversation to the comment it is about, so I put my response to him here.

Herding programmers is like herding cats, so this works only in proportion to how many key coders there are - if you need to convince more than, say, 100,000, I don't think it would work.

Do you have evidence one way or the other of what proportion of programmers get the existential risk posed by AGI? In any case, I don't know how to tell whether you're too pessimistic or whether I am too optimistic here.

(researches figures for this project)

There are between 1,200,000 and 1,450,000 programmers in the USA, according to the 2010 US Bureau of Labor Statistics, depending on whether you want to count web people (who have been lumped together). That's not the entire world, but getting the American programmers on board would be major progress, and researching the figures for all 200 countries in the world is outside the scope of this comment, so I will stick to that for right now.

LessWrong has over 13,000 users and over 10,000,000 visits. It isn't clear what percentage of the American programmer population has been exposed to AI and existential risk this way (a bit over half the visits are from Americans), but since LessWrong has lots of programmers and has eight times as many visits as there are programmers in America, it's possible that a majority of American programmers have at least heard of existential risk or SI. This is just the beginning, though, because LessWrong is growing pretty fast, and it could grow even faster if I (or someone) were to get involved in web marketing, such as improving the SEO or improving the site's presentation. (I may do both of these, though I want to address the risk of endless September first, and I'm letting that one cool off for a while, at Luke's advice, so that people don't explode because I made too many meta threads.)

I don't see any research on what percentage of programmers believe that AI poses significant risks... I doubt there is any right now, but maybe you know of some?

In either case, if someone can create a testable method of getting through to them that works, then this is not a straight-up "x percent of relevant programmers get it" sort of problem. Teachers and salespeople do that for a living, so it's not like there isn't a whole bunch of knowledge about how to get through to people that could be used. Eliezer is very successful at certain super-important teaching skills that would be necessary to do this. For instance, the sequences are rapidly gaining popularity, and he's highly regarded by a lot of people who have read them. Whether readers understand everything they read is questionable, but that he is able to build rapport and motivate people to learn is pretty well supported. In addition, I became a salesperson for a while after the IT bubble burst and was pretty good at it. I would be completely willing to assist in figuring out a way to use consultative / question-selling techniques (these work without dark tactics, by encouraging a person to consider each aspect of a decision and providing the information needed for their final decision) to convince programmers that AI poses existential risks.

I think this is worth formally researching. If, say, 50% of American programmers already know about it and get it, which is possible considering the figures above, then my idea is still plausible and it's just a matter of organizing them. If not, Eliezer or somebody (me, maybe?) can figure out a method of convincing programmers and test it; then we'd know there was a viable solution. After that it's just a matter of having a way to scale it to the rest of the programmer population -- but that's what the field of marketing is for, so it's not like there's a need to despair there.

That would mean getting the word out to programmers around the world. This wouldn't be a trivial effort, but if they were getting it, and most American programmers had been convinced, it would be worth investing in. Considering that programmers are well off pretty much everywhere and that technology-oriented folks tend to want internet access, communicating a message to all the programmers in the world is probably not anywhere near as hard as it would at first seem. Especially since LW is already growing so fast and there is a web professional here who is willing to help it grow (me).

You know the people at SI better than I do. Do you think SI would have an interest in finding out what percentage of programmers get it, testing methods of getting through to them, and determining what web marketing strategies work for getting the message out?

comment by timtyler · 2012-08-24T00:58:57.821Z · LW(p) · GW(p)

The company to get all of this right will be the first two trillion dollar company.

Is there any way to reverse this trend in public perception?

You don't supply a counter-argument. Do you disagree - or are you looking for a way to create a mass delusion?

comment by Epiphany · 2012-10-26T05:29:58.898Z · LW(p) · GW(p)

Help good guys beat the arms race:

Please provide constructive criticism.

An open source project might prevent this problem, not because having an open source AGI is safe, but because 1) open source projects are open, so anybody can influence them, including people who are knowledgeable about risks, and 2) the people involved in open source projects probably tend to have a pretty strong philanthropic streak, and they're more likely to listen to warnings about the dangers than a risk-taking capitalist is. The reason it may stop the companies is this: if an open source project gets there first, it won't be seen as a juicy target by capitalists anymore. It will be a niche that's already filled, for free. If they wanted to make an AGI, they'd have to make one so much better than the existing one that it makes sense to charge for it, or fail at business.

Making an open source AGI in order to compete with a business might cause the open source programmers to rush. However, imagine what would happen if customers got the following message around the time that the closed source AGI was going to be released: "If you wait a while longer, an AGI will come out for free; plus, the open source AGI is going to be thoroughly tested to discover dangers before you run it. The closed source AGI is very risky." That would deter a lot of people from buying, which would at least reduce the exposure to the closed source AGI - and the open source group would not have to release their AGI until they had tested it thoroughly. If, during the course of their tests, they discovered hideous risks, these could serve as warnings about AGI in general, make those risks feel real, and prevent people from running risky AGIs - assuming that the open source project had good PR and advertising / public education campaigns.

Why open source might have a competitive advantage:

  • Open source people may be more willing to merge their efforts, especially if our future depends on it, whereas companies tend to behave in self-interested ways and work separately for the most part. They're already divided, so open source could conquer them.

  • I was told by a Microsoft employee that he thought Linux would eventually win. Considering the influence that corporate culture can have on software design (the rushing to make deadlines that results in code debt), I don't disagree with him one bit. One concept here that could turn out to be really important is that any company working on AGI that does not put safety first may also have a short-term culture, which means it might actually take much, much longer to release its project - or suffer recalls that force it to start over - than an organization of programmers that is allowed to do things the right way. An open source project has that potential benefit on its side.

  • People who work on open source projects are probably more altruistic. They may be able to be persuaded that working on this AI is so much more important to the future of humanity than their current open source project that they jump ship and get involved.

    For those three reasons, I think an open source project has a good chance of getting there first.

    The obvious argument against this would be "An open source AGI!!! Won't bad people write their own versions?" My counter-argument is: in a world where pirates routinely crack software within days of it coming out, and corporate espionage is a real possibility for a target this juicy, what makes you think the code won't get stolen THE VERY NEXT DAY? In that event, the best tool to save us from rogue AGIs would be for every open source programmer to have access to editable copies of a friendly AGI, don't you think?

    An even faster solution: How just the threat of having to compete with a massive open source project may stop them.

    See Also "Sabotage would not work"

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2012-10-26T09:50:59.170Z · LW(p) · GW(p)

Analogizing AGI mainly to existing software projects probably isn't a good starting point for a useful contribution. The big problems are mostly tied to the unique features an actual AGI would have, not to making a generic software project with some security implications work out right.

For a different analogy, think about a piece of software that fits on a floppy disk and somehow turns any laptop into an explosive device with a nuclear-bomb-level yield (maybe it turns out you can set up a very specific oscillation pattern in a multicore CPU's silicon that will trigger a localized false vacuum collapse). I'm not sure I'd be happy to settle for "the code gets stolen anyway, so let's make sure everyone gets access to it". An actual working AGI could be weaponized very cheaply into something much more dangerous than any software engineering analogy gives reason to suppose, and it would be significantly less useful as a defensive measure than as an offensive one.

Replies from: Epiphany
comment by Epiphany · 2012-10-26T19:18:24.532Z · LW(p) · GW(p)

For a different analogy, think about a piece of software that fits on a floppy disk and somehow turns any laptop into an explosive device with a nuclear-bomb-level yield.

Okay. I get that AGI would be this powerful. What I don't get is that the code for it would fit onto a floppy disk. When you say I am making a mistake analogizing AGI to existing software projects, what precisely do you mean to say? Is it that it really wouldn't need very many programmers? Is it that problems with sloppy, rushed coding would be irrelevant? I'm not sure exactly how this counters my point.

I'm not sure I'd be happy to settle for "the code gets stolen anyway, so let's make sure everyone gets access to it".

I'm not happy with it. I think it's better than the alternative. See next point.

An actual working AGI could be weaponized very cheaply into something much more dangerous than any software engineering analogy gives reason to suppose, and it would be significantly less useful as a defensive measure than as an offensive one.

Agreed. That is precisely why everyone should have it - because it's "the one ring". They say "absolute power corrupts absolutely" because there are a billion examples of humans abusing power throughout history. You can't trust anybody with that much power. It will ruin the checks and balances between governments and the people they're supposed to serve, it will ruin the checks and balances between branches of government, and it will turn hackers, spies, and any criminal or criminal organization capable of stealing the software (terrorists, the mafia, gangs, corrupt government leaders, cult leaders, etc.) into superpowers.

To check and balance that power, there needs to be a mutually-assured-destruction-type threat between the following:

The people and the governments they serve.

Each branch of a government and the other branches of that government.

The pirates, hackers, spies and criminals and the good people in the world.

The reason the US government was set up the way it was - with the right to bear arms and with checks and balances between branches of government - is that power corrupts, and mutually assured destruction keeps humans accountable; this type of accountability is necessary to keep the system healthy. In a world where AGI exists, the right to bear arms needs to include AGI, or power imbalances will probably ruin everything.

We can't assume the AGIs will all be friendly. Even if we succeed in the incredibly hard task of making sure every AGI released is initially friendly, this won't guarantee they won't be hacked or fooled into being unfriendly. To think that there's a way to ensure that they won't be hacked is foolish.

What would solve the problem of the power of AGI corrupting people if not checks and balances?

comment by Epiphany · 2012-10-26T05:29:46.961Z · LW(p) · GW(p)

Create public relations nightmare for anyone producing risky AGI:

Please provide constructive criticism.

One powerful way to get people thinking about safety is to invent clever ways to shout from the rooftops that this could be dangerous, and to present the message in a way that most people will grok. If everybody is familiar enough with how dangerous it could be, then funding an AGI project without a safety plan in place would be a PR disaster for the companies doing it. That would put a lot of pressure on them to put safeties into place. This wouldn't need to cost a lot. Sometimes small, clever acts can get a huge amount of attention - for instance, Improv Everywhere. Imagine what would happen if a few hundred volunteers gathered in every major city on the same day and did this:

Dressed in business suits and covered in happy-face stickers, they start asking strangers, "Have you seen my robot anywhere? It was trying to make me happy and it covered me in these stickers. What do you think it will do next?" The stranger will probably say "I don't know," and the volunteer could respond, "Yeah, the companies currently building robots like this one have no idea what they're going to do either... you might need this" (handing them a pamphlet on AGI safety) "...I've got to keep an eye out for my robot." (Then they go away and do it again.) The suits are important because they would contrast with the happy-face stickers, so that the message is more likely to be interpreted as "being covered in happy-face stickers is very embarrassing and therefore really bad" rather than just seeming silly; they also give the person an upper-class look that supports the idea that they own a robot, and make strangers more likely to interpret somebody covered in happy-face stickers as sane.

Another idea: if the internet "wears" black for a time, as it did for SOPA, that could work to get attention. Surely intelligent people in tech companies would grok the threat that unfriendly AGI poses to humanity, and they're in a unique position to warn about it. It's not like the people on this forum have no connections to the technology world...

Even if millions of dollars aren't available to spend on a public advertising campaign, if enough people try enough clever ways to get everyone's attention (internet memes on YouTube, for instance) then something will probably get through. When large masses of people fear something, they tend to push for regulation and tend to scare investors off. They can also be given specific actions to take.

See Also "Sabotage would not work"

comment by Epiphany · 2012-08-24T20:00:50.071Z · LW(p) · GW(p)

Even Faster Solution:

Survey a bunch of open source people, asking them if they'd switch to working on friendly AGI in the event that an AGI project started without enough safety, or get their signatures. Surely the thousands of programmers now working on projects like Firefox and OpenOffice, who clearly have an altruistic bent since they are working for free, will see that it is more important to prevent unfriendly AGI than to make sure the next version of these smaller projects is released on time.

If we can honestly say to these companies, "If you try to start an AGI project without thorough safety precautions, 100,000 programmers have said they'll rise up against you and make a FREE AGI to compete with yours that's safer," what they will hear is "We'll be put out of business!" Assuming they believe the survey results are accurate, and that the plan for the project is feasible, they will be forced to take safety precautions in order to protect their investments.

Just that ONE piece of information, if communicated right, could transform a risky AGI arms race into a much safer one.

Here's a multiplier effect: If you're asking a bunch of programmers anyway, you may as well ask them if they'd be willing to make a monetary contribution toward the friendly AGI project for x, y, or z strategies/prerequisites. Programmers tend to make a lot of money.

How this could postpone an arms race:

If the bar is set high enough (which can be done by asking the programmers all the conditions the AGI would have to meet, without which they'd deem it "risky" and get involved), you may postpone the arms race quite some time while companies regroup and try to figure out a strategy to compete with these guys. This assumes, also, that it becomes common knowledge among the people who would start a risky AGI project that this pact among open source programmers exists.

Other pieces that would be required to make this idea work:

  • The open source programmers would have to be given a message about the company that has started an AGI project, one that gets them to understand the gravity of the problem. They're probably more likely to grok it just because they're programmers and they're the right sort to have already thought about this sort of thing, but we'd want to make sure the message is really clear. This could be a little tricky due to laws about libel.

  • Companies may not believe the open source programmers are serious about switching. This is easily resolved by creating a wall where they can put remarks about why they think competing with unsafe AGI is an important project. Surely they will put convincing things like "I love using Linux, but if a risky AGI destroys enough, that won't matter anymore."

  • Have a way to contact the companies that are starting risky AGI projects in order to send them the message that they're risking the loss of their investment. Asking the volunteer programmers to email them a threat to compete with them (the way a lot of activist organizations ask their members to tell companies they won't put up with them destroying the environment) would be one way. This requires previously having collected the email addresses of the programmers so that they can be asked to email the company. It also requires getting the email addresses of important people at the company, but that's not hard if you know how to look up who owns a website.

  • Ensure the open source programmers in question are knowledgeable enough about the dangers of AGI to want high standards for safety. They may need to be educated about this in order to make informed decisions. Providing compelling examples and a clearly written list of safety standards are both important; otherwise not everyone will be on the same page, and there won't be something solid causing them to consciously confront their biases and doubts.

  • Have a way to ask all (or a significant number of) the people interested in doing open source programming whether they'd switch. This does two things: 1. You get your survey results / signatures. 2. You get them thinking about it as a cause. Getting them thinking about it and discussing it, if they aren't already, would catalyze more of them to decide to work on it, assuming they use rational thought processes. After all, what's more important? Failing to upgrade Firefox, or having to live with unfriendly AGI? Just asking enough people the question would start a snowball effect that would attract people to the cause.

  • Retain some contact method that allows you to inform them when a risky AGI project starts. Note: Sending out mass-mailings is really tricky because spam filters are set to "paranoid" - it might take a person experienced with this to get the email campaign to go through.

  • When it is time for them to switch to AGI, they'll need to be convinced that it ACTUALLY is, in fact, that time. There will be inertia to overcome, so you'd need to present compelling reasons to believe that they should change over immediately.

  • The idea for the open source AGI must sound feasible in order for it to be convincing to companies that are starting unsafe AGI projects.

    Please critique. I'd be happy to get more involved in problem-solving.

    Other Ideas: Three possible solutions: (Which also explains why I think open source might have a competitive advantage.)

comment by Epiphany · 2012-10-26T00:37:07.481Z · LW(p) · GW(p)

New Idea:

Note: A main reason I expounded on this idea is that if somebody else has the idea but doesn't realize that there are pitfalls to it, that could turn out badly. If you guys vote this down till it's hidden, then people searching to see whether anyone has had this idea before won't be able to find my description of the pitfalls (I use ctrl-F on pages this long, myself, and it doesn't find things in hidden comments).

Patent every conceivable method of making an AGI in every country where it is possible to hold a patent, and refuse to allow anyone to produce an AGI similar to your patented designs until safety is assured. This idea has various flaws and could potentially backfire (a TL;DR is in the conclusion section), but I am posting it anyway for a variety of reasons, explained in the last section. I also posted several ideas that I think are better than this one.

This idea has the following flaws:

  1. Patents expire after 20 years from filing (in the US), so their protection is temporary.

  2. Patenting something doesn't keep people from building it; it just gives you the right to sue them if they do. At best, it would be a deterrent, not a guarantee.

This idea could backfire in the following ways:

  1. Patenting an idea has the effect of releasing some information about it to the public. If someone were to decide to learn from the ideas described in your patent and ignore the fact that you patented it - say, a government that wants to use AI for more military power, or China, which doesn't view intellectual property the same way - you'd only be giving them an information advantage, not slowing them down.

  2. Patenting an AGI plan under your name would be an effective way of notifying advanced hackers who want to steal your data that you are an interesting target to them.

  3. If somebody goes out of their way to patent the best AGI ideas they can think of, anyone who wants to make an AGI will have to top them. If there aren't people capable and motivated enough to top the ideas being produced, that would offer some protection for the best ideas. If there are, and some AGI researcher(s) are both capable and motivated enough to top them, this could just contribute to and/or expedite an arms race rather than slowing it down. However, if you got all of the best AGI researchers to form a group that files patents for the purpose of preventing unsafe AGIs from being used, this is likely to have an influence on whether other researchers are legally allowed to use those ideas.

  4. I've been told (though am not a lawyer) that only the best design out of the options is patentable. If this is true, it means that even if you did manage to come up with a design better than all your competitors which is therefore patentable, you haven't stopped anyone from making an inferior AGI. In fact, all you've done is to ensure that all the inferior variations on the design are now legal for everybody else to use. The effect may be more that you've essentially made the safest one forbidden while encouraging people to use less safe ideas. If I am incorrect about this and any idea can be patented regardless of whether it is the best, then this flaw does not apply.

  5. Because patenting AGI plans could result in an environment in which the AGIs that get built are much more likely to either be inferior designs or be built by people who don't care about laws, it may result in AGI being more dangerous rather than in AGI being delayed.

Conclusion:

Patenting AGI ideas has a chance to slow down AGI development, but it also has a chance to make the following things more likely, which are pretty bad separately but would be worse in combination:

  1. People that don't heed laws will still be able to make AGI.

  2. People that don't heed laws may glean information from the patents filed and be more likely to make an AGI.

  3. Law-abiding people will be less likely to make AGI. This means a lower chance that AGIs created by law-abiding people will counteract the effects of bad AGI.

  4. AGIs with inferior designs are more likely.

Why am I posting this idea if I can see its flaws?

  1. What if someone else has the same idea but does not know the ways in which it might backfire? If they check the internet to see whether someone had previously posted the idea, this may prevent them from triggering the consequences.

  2. Often a flawed idea is very useful as part of a comprehensive strategy to solve a problem. Some big problems require several solutions, not just one.

  3. Often, ideas are inspired by other ideas. The more ideas are posted, the more opportunity there is for this type of inspiration.

  4. Sometimes an idea that's risky can be made safer in a way not obvious to the idea's originator.

  5. Some people have a hard time imagining how difficult problems such as this one could be solved. Simply seeing that there are a bunch of ideas that could influence things or that somebody is working on it may help the person avoid feeling that the problem is hopeless. This is important because people do not try to solve problems they think are hopeless.

Replies from: drethelin