Peter Thiel's AGI discussion in his startups class @ Stanford [link]
post by Dr_Manhattan · 2012-06-07T12:27:48.075Z
http://blakemasters.tumblr.com/post/24464587112/peter-thiels-cs183-startup-class-17-deep-thought
Some perspectives on AI risk that might be interesting. Peter Thiel is (the primary?) donor to SI (the Singularity Institute) and an investor in the AGI startup Vicarious Systems.
17 comments
comment by [deleted] · 2012-06-07T16:47:33.717Z
The other class notes are worth checking out too.
comment by jimrandomh · 2012-06-07T21:48:53.397Z
Peter Thiel: So you’re both thinking it will all fundamentally work out.
Scott Brown: Yes, but not in a wishful thinking way. We need to treat our work with the reverence you’d give to building bombs or super-viruses. At the same time, I don’t think hard takeoff scenarios like Skynet are likely. We’ll start with big gains in a few areas, society will adjust, and the process will repeat.
I don't believe anyone working on AI is actually treating it that way. I do hope, however, that whenever there are signs of a possible breakthrough, researchers will stop, assess what they have very carefully, and build a lot of safety features before doing any more development. Most important of all, I hope that whoever makes the key discoveries does not publish their results in a way that would enable more reckless groups to copy them.
↑ comment by timtyler · 2012-06-08T10:51:27.399Z
I expect the military and megacorps will be the biggest advocates of closed source machine intelligence software.
That's what happened when the government tried to keep cryptography out of citizens' hands, anyway.
Such efforts are ultimately futile, but they do sometimes act to slow progress down - thereby helping those with early access attain their own goals.
↑ comment by jimrandomh · 2012-06-08T13:55:21.612Z
I didn't say that researchers should publish binaries without source code, I said they should hold off on publishing at all. This isn't about open vs. closed source.
↑ comment by timtyler · 2012-06-09T00:40:02.288Z
Open source is about publishing the code (and allowing it to be reused). You're talking about not publishing the code. Plenty of software companies don't publish binaries (e.g. Google, Facebook). Binaries or no, it's not open source if you don't even publish the code.
↑ comment by loup-vaillant · 2012-06-12T12:50:15.726Z
Nevertheless, when you have the binary, you stand a chance of reverse-engineering it. If you broadcast such a binary, you have a guaranteed leak. If you don't publish at all, you at least stand a chance of actual secrecy. (Pretty unlikely, though, if too many people are involved.)
comment by stcredzero · 2012-06-07T20:22:49.704Z
So, {Possible AI} > {Evolved Intelligence} > {Human Intelligence}.
What about {AI practically discoverable/inventable by humans}? This could be an even smaller set than {Human Intelligence}. The usual argument is that if an AI in this set were of a much higher order of intelligence than {Human Intelligence}, it would build smarter and smarter AIs. But how do we know it's likely to be of a higher order in the first place?
I guess I'd like to know more about {AI practically discoverable/inventable by humans + non-sentient computers in the current generation}. Is there a compelling reason to believe this set is quite large, or quite small?
In particular, there is this quote:
The community and class of algorithms we’re using is fairly well defined, so we think we have a good sense of the competitive and technological landscape. There are probably something like 200—so, to be conservative, let’s say 2000—people out there with the skills and enthusiasm to be able to execute what we’re going after. But are they all tackling the exact same problems we are, and in the same way? That seems really unlikely.
Somehow the diversity that could be generated by 2000, 20000, or even 200000 researchers, presumably working in project teams of a few or a dozen, seems much smaller than the evolutionary diversity generated by a population of 10 billion Homo sapiens. (Though it may well span a much larger "volume" of design space, only relatively few points within that volume would be represented.)
↑ comment by Dolores1984 · 2012-06-08T00:59:57.140Z
Keep in mind, successful designs will expand in mindspace as they are easy to copy, modify, and improve upon. Think mold colonies growing rapidly from a small handful of spores. Also, remember bootstrapping. It's not just the AIs that can be built by a few thousand humans (plus all the disparate fields they draw on); it's all the AIs that can be built by those AIs, and so on, ad infinitum.
↑ comment by stcredzero · 2012-06-09T15:15:14.533Z
Keep in mind, successful designs will expand in mindspace as they are easy to copy, modify, and improve upon.
You are essentially defining "successful designs" as such. And what we know about evolution is strong supporting evidence for this.
What makes you think the first generation of AI will have all of those qualities? What makes you think the first gen AIs will be useful for building more AIs?
The first biological replicators on earth would be considered non-viable clunky disasters by today's standards.
The only way we can have "Friendly AI" beyond the first generation is if such entities are part of a larger "ecosystem" and face economic, group-dynamic, and evolutionary pressures that motivate (and "motivate") them to be this way.
Perhaps the key to "Friendly AI" is going to be competitive augmentation of Human Intelligence.
comment by Dr_Manhattan · 2012-06-07T13:22:09.847Z
In light of the discussion, he seems to be hedging his bets in this area. I'm not sure that's the right strategy from the x-risk perspective. At the very least it seems inconsistent.
↑ comment by reup · 2012-06-07T21:32:29.547Z
I think it could be consistent if you treat his efforts as designed to gather information.
↑ comment by Dr_Manhattan · 2012-06-08T01:43:05.686Z
Only if he thinks he can only weakly affect outcomes, or that he can exert a large amount of control as the evidence starts coming in.
↑ comment by reup · 2012-06-08T03:27:33.042Z
Remember, he's playing an iterated game. So, if we assume that right now he has very little information about which area is most important to invest in, or which areas are most likely to produce the best return, then playing a wider distribution in order to gain information, and so maximize the utility of later rounds of donations/investments, seems rational.
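This explore-then-concentrate logic can be made concrete with a toy two-round model. The sketch below is only an illustration of the argument; the number of causes, budgets, noise level, and prior are made-up assumptions, not anything from the thread or from Thiel's actual giving.

```python
# Toy model of "spread small donations early to learn, then concentrate".
# Every number here is an illustrative assumption.
import random
import statistics

random.seed(0)

N_CAUSES = 4        # candidate causes/projects
EARLY_BUDGET = 1.0  # small first-round budget (mostly buys information)
LATE_BUDGET = 10.0  # much larger second-round budget
NOISE = 0.5         # noise in the feedback a funded cause produces
PRIOR_MEAN = 1.0    # assumed effectiveness of an unfunded cause
TRIALS = 20_000

def run_trial(spread_early: bool) -> float:
    # True (unknown) effectiveness per dollar of each cause.
    true_eff = [random.gauss(PRIOR_MEAN, 1.0) for _ in range(N_CAUSES)]

    if spread_early:
        # Fund every cause a little; each grant returns a noisy signal.
        grants = [EARLY_BUDGET / N_CAUSES] * N_CAUSES
        signals = [e + random.gauss(0.0, NOISE) for e in true_eff]
    else:
        # Concentrate the early budget on one cause; learn only about it.
        grants = [EARLY_BUDGET] + [0.0] * (N_CAUSES - 1)
        signals = [true_eff[0] + random.gauss(0.0, NOISE)] + [None] * (N_CAUSES - 1)

    # Second round: give everything to whichever cause now looks best
    # (unobserved causes keep the prior estimate).
    estimates = [s if s is not None else PRIOR_MEAN for s in signals]
    best = max(range(N_CAUSES), key=lambda i: estimates[i])

    impact = sum(g * e for g, e in zip(grants, true_eff))
    impact += LATE_BUDGET * true_eff[best]
    return impact

for spread in (False, True):
    avg = statistics.mean(run_trial(spread) for _ in range(TRIALS))
    label = "spread early" if spread else "concentrate early"
    print(f"{label:>17}: average total impact {avg:.2f}")
```

Under these assumptions the spread-early donor comes out ahead on average, because the information from funding several causes improves how the much larger later budget is allocated; with a much smaller second round the advantage shrinks, since the information is only worth what the later budget can do with it.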
↑ comment by radical_negative_one · 2012-06-08T05:32:16.547Z
I remember reading, on the topic of optimal charity, that it's only rational to select a single cause to donate to... at least until you give enough money to noticeably change the marginal utility of each additional dollar. (Thiel has that much money, of course.) This information-gathering strategy could be a new reason for donors operating at that scale to spread their donations, if it hasn't been discussed before.
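The argument being recalled has a compact standard form; the restatement below is my own illustrative formalization (the notation u_i, m_i, B is mine, not from the article being remembered).

```latex
% Illustrative restatement of the "one cause until diminishing returns" claim.
% A donor splits budget B across causes with diminishing-returns utilities u_i:
\[
  \max_{x_1,\dots,x_n \ge 0} \; \sum_{i=1}^{n} u_i(x_i)
  \qquad \text{subject to} \qquad \sum_{i=1}^{n} x_i = B .
\]
% If B is small relative to each cause's scale, each u_i(x_i) is approximately
% linear, u_i(x_i) \approx m_i x_i, and the optimum is a corner solution: give
% all of B to the cause with the largest marginal return m_i. Splitting only
% becomes optimal once the gift is large enough that the best cause's marginal
% utility u_1'(x_1) falls to the level of the runner-up's.
```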
↑ comment by reup · 2012-06-08T06:41:46.962Z
I remember reading and enjoying that article (this one, I think).
I would think the same argument applies regardless of the scale of the donations (assuming there are no fixed transaction costs, which might not hold). My read is that it comes down to the question of risk versus uncertainty. If there is genuine uncertainty, investing widely might make sense if you believe those investments will provide useful information that clarifies the actual problem structure, so that you can accurately target future giving.
↑ comment by Douglas_Knight · 2012-06-08T18:09:40.510Z
It's the same reason. You expect the marginal value of information to decline rapidly.