Short versions of the basic premise about FAI
post by NancyLebovitz · 2010-10-31T23:14:59.772Z · LW · GW · Legacy · 23 comments
I've been using something like "A self-optimizing AI would be so powerful that it would just roll over the human race unless it's programmed not to do that."
Any others?
comment by nhamann · 2010-10-31T23:25:24.364Z · LW(p) · GW(p)
I personally don't think we need to talk about self-improving AI at all to consider the problem of friendliness. I would say a viable alternative statement is "Evolution has shaped the values of human minds. Those values will not exist in engineered minds unless they are explicitly engineered. Human values are complex, so explicit engineering will be extremely difficult or impossible."
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-10-31T23:37:08.912Z · LW(p) · GW(p)
Self-optimization is what makes friendliness a serious problem.
Replies from: nhamann↑ comment by nhamann · 2010-10-31T23:45:28.551Z · LW(p) · GW(p)
Potentially yes, but I think the problem can be profitably restated without any reference to the Singularity or FOOMing AI. (I've often wondered whether the Friendliness problem would be better recognized and accepted if it were presented without reference to the Singularity).
Edit: See also Vladimir Nesov's summary, which is quite good, but not quite as short as you're looking for here.
Replies from: NancyLebovitz, simpleton↑ comment by NancyLebovitz · 2010-11-01T00:14:01.832Z · LW(p) · GW(p)
Friendliness would certainly be worth pursuing; it applies to a lot of human issues in addition to what we want from computer programs.
Still, concern about FOOM is the source of urgency here.
Replies from: NihilCredo↑ comment by NihilCredo · 2010-11-01T03:04:52.849Z · LW(p) · GW(p)
Concerns about FOOM are also what makes SIAI look like (and some posters talk like) a loony doom cult.
Skip the "instant godlike superintelligence with nanotech arms" shenanigans, and AI ethics remains an interesting and important problem, as you observed.
But it's much easier to get people to look at an interesting problem so you can then persuade them that it's serious, than it is to convince them that they are about to die in order to make them look at your problem. Especially since modern society has so inured people to apocalyptic warnings that the wiser half of the population takes them with a few kilograms of salt to begin with.
Replies from: JGWeissman↑ comment by JGWeissman · 2010-11-01T16:53:20.096Z · LW(p) · GW(p)
Concerns about FOOM are also what makes SIAI look like (and some posters talk like) a loony doom cult.
Statements like this make posters look like they confuse rationality with the rejection of non-intuitive ideas.
Replies from: Kingreaper↑ comment by Kingreaper · 2010-11-01T17:20:45.803Z · LW(p) · GW(p)
Just because a rational person would believe something doesn't mean a rational person would say that thing.
If telling people the fate of the world depends on you is going to make them less likely to listen, you probably shouldn't tell them that. Especially if it's true (because that just makes it more important that they listen).
Replies from: JGWeissman↑ comment by JGWeissman · 2010-11-01T17:53:02.350Z · LW(p) · GW(p)
FOOM is central to the argument that we need to solve Friendliness up front, rather than build it incrementally as patches to a slowly growing AGI. If you leave it out to get past weirdness censors, you can no longer support the same conclusions.
Replies from: Kingreaper↑ comment by Kingreaper · 2010-11-01T18:03:11.275Z · LW(p) · GW(p)
NihilCredo said:
But it's much easier to get people to look at an interesting problem so you can then persuade them that it's serious, than it is to convince them that they are about to die in order to make them look at your problem.
Notice that Nihil didn't propose never mentioning the urgency you believe exists, just not using it as your rallying cry.
I got fascinated by Friendliness theory despite never believing in the Singularity (and in fact, not knowing much about the idea other than that it was being argued on the basis of extrapolating Moore's law, which explains why I didn't buy it).
Other people could be drawn in by the interesting philosophical and practical challenges of Friendliness theory without the FOOM threat.
Replies from: JGWeissman↑ comment by JGWeissman · 2010-11-01T18:13:22.930Z · LW(p) · GW(p)
It is more important to convince the AGI researchers who see themselves as practical people trying to achieve good results in the real world than to convince people who like an interesting theoretical problem.
Replies from: None↑ comment by [deleted] · 2010-11-04T16:25:00.672Z · LW(p) · GW(p)
Because people who like theoretical problems are less effective than people trying for good results? I don't buy it.
Replies from: JGWeissman↑ comment by JGWeissman · 2010-11-04T16:30:59.882Z · LW(p) · GW(p)
No. Because it is better for the people who would otherwise be working on dangerous AGI to realize they should not do that, than to have people who would not have been working on AI at all commenting that the dangerous AGI researchers shouldn't do that.
↑ comment by simpleton · 2010-10-31T23:53:24.251Z · LW(p) · GW(p)
The Hidden Complexity of Wishes
Replies from: nhamann↑ comment by nhamann · 2010-11-01T00:41:14.387Z · LW(p) · GW(p)
I do not understand your point. Would you care to explain?
Replies from: simpleton↑ comment by simpleton · 2010-11-01T03:13:27.088Z · LW(p) · GW(p)
Sorry, I thought that post was a pretty good statement of the Friendliness problem, sans reference to the Singularity (or even any kind of self-optimization), but perhaps I misunderstood what you were looking for.
Replies from: nhamann
comment by David_Gerard · 2010-11-01T02:05:28.993Z · LW(p) · GW(p)
How I attempted to nutshell it for the RW article on EY:
"Yudkowsky identifies the big problem in AI research as being that there is no reason to assume an AI would give a damn about humans or what we care about in any way at all - not having a million years as a savannah ape or a billion years of evolution in its makeup. And he believes AI is imminent. As such, working out how to create a Friendly AI (one that won't kill us, inadvertently or otherwise) is the Big Problem he has taken as his own."
It needs work, but I hope it does justice to the idea in trying to get it across to the general public, or at least people who are somewhat familiar with SF tropes.
comment by XiXiDu · 2010-11-02T16:19:37.667Z · LW(p) · GW(p)
The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else. — Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk
This includes everything in my opinion. Goals, utility and value, economics, perspective...or was I supposed to come up with my own version? :-)
comment by James_Miller · 2010-11-01T04:54:29.409Z · LW(p) · GW(p)
If the Laws of Thermodynamics are correct then:
(A) There is a limited, non-replenishable amount of free energy in the universe.
(B) Everything anyone does uses up free energy.
(C) When you run out of free energy you die.
(D) Anything a human could do an AI god could do at a lower free energy cost.
(E) Humans use up lots of free energy.
Consequently:
If an AI god didn't like humans it would extinguish us.
If an AI god were indifferent towards humans it would extinguish us to save free energy.
If an AI god had many goals including friendliness towards humanity then it would have an internal conflict because although it would get displeasure from extinguishing humans, killing us would allow it to have more free energy to devote to its other objectives.
We are only safe if an AI god's sole objective is friendliness towards humanity.
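A minimal toy sketch of that mixed-goals conflict (my own illustration, not from the comment; the `utility` function, weights, and energy numbers are all made up for the example): unless friendliness is the sole term in the objective, a large enough free-energy payoff from extinguishing humans can dominate the friendliness penalty.

```python
# Toy illustration only: an agent weighs a friendliness term against the
# payoff from spending free energy on its other goals. Keeping humans alive
# has a fixed free-energy upkeep cost. All numbers are arbitrary.

def utility(keep_humans, friendliness_weight, other_weight,
            total_energy=100.0, human_upkeep=30.0):
    """Utility = friendliness term + payoff from energy left for other goals."""
    energy_for_other_goals = total_energy - (human_upkeep if keep_humans else 0.0)
    friendliness_term = friendliness_weight * (1.0 if keep_humans else 0.0)
    return friendliness_term + other_weight * energy_for_other_goals

for fw, ow in [(1.0, 1.0),    # mixed goals: friendliness competes with other uses of energy
               (1.0, 0.0)]:   # friendliness as the sole objective
    keep, kill = utility(True, fw, ow), utility(False, fw, ow)
    choice = "keep humans" if keep >= kill else "extinguish humans"
    print(f"friendly={fw}, other={ow}: keep={keep:.0f}, kill={kill:.0f} -> {choice}")
```

With the mixed-goal weights the freed-up energy outweighs the friendliness term and the toy agent extinguishes humans; with friendliness as the sole objective it does not, which is the conclusion the comment draws.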
Replies from: whowhowho↑ comment by whowhowho · 2013-02-27T17:16:04.050Z · LW(p) · GW(p)
It's a bit ironic that current supercomputers are hugely less energy-efficient (megawatts) than human brains (20 W).
Replies from: gwern↑ comment by gwern · 2013-02-27T18:09:53.499Z · LW(p) · GW(p)
One of the interesting observations in computing is that Moore's law of processing power is almost as much a Moore's law of energy efficiency. This makes sense, since ultimately you have to deal with the waste heat: if energy consumption (and hence heat production) were not halving roughly every turn of Moore's law, you'd quickly wind up in a situation where you simply cannot run your faster, hotter new chips.
This leads to Ozkural's projection that increasing (GPU) energy efficiency is the real limit on any widespread economical use of AI, and that, given past improvements, we'll have the hardware capability to run cost-effective neuromorphic AI by 2026, after which the remaining wait is just for software...
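A rough back-of-the-envelope sketch of that efficiency gap (my own arithmetic, not Ozkural's analysis; the megawatt and 20 W figures come from the comments above, while the ~1.5-year doubling time for energy efficiency is an assumption):

```python
import math

# How many halvings of energy-per-computation would close the gap between a
# ~1 MW supercomputer and a ~20 W human brain, if efficiency doubles roughly
# every 1.5 years? (The doubling time is an assumption, not a thread figure.)
supercomputer_watts = 1e6      # "megawatts" from the comment above
brain_watts = 20.0             # human brain, from the comment above
doubling_time_years = 1.5      # assumed Moore's-law-like cadence

gap = supercomputer_watts / brain_watts                 # ~50,000x
doublings_needed = math.log2(gap)                       # ~15.6
years_needed = doublings_needed * doubling_time_years   # ~23 years

print(f"gap: {gap:,.0f}x, doublings needed: {doublings_needed:.1f}, "
      f"~{years_needed:.0f} years at {doubling_time_years}-year doublings")
```

Under those assumptions the ~50,000x gap takes roughly 16 doublings, on the order of a couple of decades, which is only a crude bound and says nothing about whether cost-effective AI arrives before full brain-level efficiency.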