AGI Quotes
post by lukeprog · 2011-11-02T08:25:53.179Z · LW · GW · Legacy · 90 comments
Similar to the monthly Rationality Quotes threads, this is a thread for memorable quotes about Artificial General Intelligence.
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
90 comments
Comments sorted by top scores.
comment by James_Miller · 2011-11-02T14:07:24.435Z · LW(p) · GW(p)
The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly.”
comment by lukeprog · 2011-11-02T08:32:41.733Z · LW(p) · GW(p)
The question of whether Machines Can Think... is about as relevant as the question of whether Submarines Can Swim.
Edsger Dijkstra (1984)
↑ comment by James_Miller · 2011-11-02T14:15:05.303Z · LW(p) · GW(p)
I don't understand this.
↑ comment by betterthanwell · 2011-11-02T15:15:19.378Z · LW(p) · GW(p)
It is seemingly easy to get stuck in arguments over whether or not machines can "actually" think.
It is sufficient to assess the effects or outcomes of the phenomenon in question.
By sidestepping the question of what, exactly, it means to "think",
we can avoid arguing over definitions, yet lose nothing of our ability to model the world.
Does a submarine swim? The purpose of swimming is to propel oneself through the water. A nuclear powered submarine can propel itself through the oceans at full speed for months at a time. It achieves the purpose of swimming, and does so rather better than a fish, or a human.
If the purpose of thinking is isomorphic to:
Model the world in order to formulate plans for executing actions which implement goals.
Then, if a machine can achieve the above, we can say that it achieves the purpose of thinking,
akin to how a submarine successfully achieves the purpose of swimming.
Discussion of whether the machine really thinks is now superfluous.
↑ comment by thomblake · 2011-11-02T14:48:06.660Z · LW(p) · GW(p)
It is a similar idea to the one proposed by Turing. If you have submarines, and they move through the water and do exactly what you want them to do, then it is rather pointless to ask whether what they're doing is "really swimming". And the arguments on both sides of the "swimming" dispute will make reference to fish.
↑ comment by mwengler · 2011-11-02T16:25:03.341Z · LW(p) · GW(p)
Consider the SpongeBob episode in which Plankton builds a fake Mr. Krabs. The machine is superb, at least as functional as the real Mr. Krabs, and stronger and more durable to boot. But without Plankton up in the control room running the thing, it does nothing.
Implicit in the ideas of those who think machines may take over is that the increase in capabilities of machines will in some sense naturally, or perhaps even accidentally, include the creation of machine volition, machine will, a machine version of the driver of the machine.
The quoter apparently doubts this assumption, at least about current machines. As long as every powerful machine we build needs a human driver, lest it sit there with its metaphorical screen saver on, waiting for a volitional agent to command it, then all machines, no matter how powerful, are just tools.
I don't think Kurzweil necessarily thinks machines will get volition in his version of the singularity. Kurzweil is much more oriented towards enhanced humans: essentially, or eventually, a human who can access the solutions to problems that require a lot of intelligence, but who is still supplying all the volition in the system.
Around here, on the other hand, I think it is essentially assumed that machine intelligence will be independent and with its own volition, which humans will have a hand in constraining by design.
The author of the quote questions whether volition (will, primal drive) will arise naturally as part of this progression.
↑ comment by CuSithBell · 2011-11-02T16:42:42.252Z · LW(p) · GW(p)
I disagree.
What he's saying is: submarines traverse water, so it's irrelevant whether we call what they do "swimming". Likewise, if a machine can do the things that a thinking being can do, then it's irrelevant whether it's "actually" "thinking".
In the original source of the quote, he treats this as a settled question. Moreover, he capitalizes the terms in question, indicating that he regards them as incorrect reifications.
comment by hankx7787 · 2011-11-04T11:49:32.281Z · LW(p) · GW(p)
"Sorry Arthur, but I'd guess that there is an implicit rule about announcement of an AI-driven singularity: the announcement must come from the AI, not the programmer. I personally would expect the announcement in some unmistakable form such as a message in letters of fire written on the face of the moon." - Dan Clemmensen, SL4
↑ comment by Paul Crowley (ciphergoth) · 2011-11-16T12:37:53.749Z · LW(p) · GW(p)
↑ comment by MichaelAnissimov · 2011-11-18T20:04:22.213Z · LW(p) · GW(p)
This is one of the earliest quotes I read that made it click that nothing I could do with my life would have greater impact than pursuing superintelligence.
comment by James_Miller · 2011-11-02T13:57:29.888Z · LW(p) · GW(p)
We probably make thousands of species extinct per year through our pursuit of instrumental goals; why is it so hard to imagine that AGI could do the same to us?
↑ comment by Logos01 · 2011-11-04T02:05:11.858Z · LW(p) · GW(p)
... I wonder how "alone" I am in the notion that AGI causing human extinction may not be a net negative, in that, so long as it is a sentient product of human endeavors, it is essentially a "continuation" of humanity.
↑ comment by JoshuaZ · 2011-11-04T02:10:34.331Z · LW(p) · GW(p)
Two problems: An obnoxious optimizing process isn't necessarily sentient. And how much would you really want such a continuation if it, say, tried to put everything in its future lightcone into little smiley faces?
If it helps, ask yourself how you feel about a human empire that expands through its lightcone, preemptively destroying every single alien species before they can do anything, with the motto "In the Prisoner's Dilemma, Humanity Defects!" That sounds pretty bad, doesn't it? Now note that the AGI expansion is probably worse than that.
↑ comment by Logos01 · 2011-11-04T02:13:46.154Z · LW(p) · GW(p)
Two problems: An obnoxious optimizing process isn't necessarily sentient.
Hence my caveat.
And how much would you really want such a continuation if it, say, tried to put everything in its future lightcone into little smiley faces?
I find the plausibility of a sentient AGI constrained to such a value to be vanishingly small.
If it helps, ask yourself how you feel about a human empire that expands through its lightcone, preemptively destroying every single alien species before they can do anything, with the motto "In the Prisoner's Dilemma, Humanity Defects!" That sounds pretty bad, doesn't it?
Not especially, no.
↑ comment by JoshuaZ · 2011-11-04T02:39:48.148Z · LW(p) · GW(p)
I find the plausibility of a sentient AGI constrained to such a value to be vanishingly small.
It is one example of what could happen; smileys are just a specific instance. (Moreover, this is an example which is disturbingly close to some actual proposals.) The size of mindspace is probably large. The size of mindspace that does something approximating what we want is probably a small portion of that.
Not especially, no.
And the empire systematically wipes out human minorities and suppresses new scientific discoveries because they might disrupt stability. As a result, and to help prevent problems, everyone but a tiny elite is denied any form of life-extension technology. Even the elite have their lifespans extended only to about 130, to prevent anyone from accumulating too much power and threatening the standard oligarchy. Similarly, new ideas for businesses are ruthlessly suppressed. Most people will have less mobility in this setting than an American living today. Planets will be ruthlessly terraformed and then have colonists forcibly shipped there to help start the new groups. Most people have the equivalent of reality TV shows and the hope of winning the lottery to entertain themselves. Most of the population is so ignorant that they don't even realize that humans originally came from a single planet.
If this isn't clear, I'm trying to make this about as dystopian as I plausibly can. If I haven't succeeded at that, please imagine what you would think of as a terrible dystopia and apply that. If really necessary, imagine some puppy and kitten torture too.
↑ comment by Logos01 · 2011-11-04T02:50:10.183Z · LW(p) · GW(p)
It is one example of what could happen; smileys are just a specific instance.
Paperclip optimizer problem, yes. The problem here is in the assumption that a sentient self-programming entity could not adjust its valuative norms in just the same way that you and I do -- or perhaps to an even greater degree, as a result of being more generally capable than we are.
The size of mindspace that does something approximating what we want is probably a small portion of that.
I'm already assuming that the AGI would not do things we want, such as letting us continue living. But again: if it is sentient, and capable of making decisions, learning, finding values and establishing goals for itself... even if it also turns the entire cosmos into paperclips while doing so -- where's the net negative utility?
I value achieving heights of intellect, ultimately. Lower-level goals are negotiable when you get down to it.
And the empire systematically wipes out human minorities and suppresses new scientific discoveries because they might disrupt stability.
And eats babies.
You're willfully trying to make this hypothetical horrible and then expect me to find it informationally significant that a bad thing is bad. This is meaningless discourse; it reveals nothing.
If this isn't clear, I'm trying to make this about as dystopian as I plausibly can.
If it isn't clear that by willfully painting a dystopia you are denuding your position of any meaningfulness -- it's a non-argument -- then I don't know what would make it clear.
You haven't provided an argument about why what you initially described would be dystopic. You simply assumed that humanity spreading itself at the cost of all other sentient beings would be dystopic.
That's simply a bald assertion, sir.
↑ comment by JoshuaZ · 2011-11-04T02:59:59.016Z · LW(p) · GW(p)
Paperclip optimizer problem, yes. The problem here is in the assumption that a sentient self-programming entity could not adjust its valuative norms in just the same way that you and I do -- or perhaps to an even greater degree, as a result of being more generally capable than we are.
Human values change in part because we aren't optimizers in any substantial sense. We're giant mechas for moving around DNA (after the RNA replication process got hijacked) that have been built blindly by evolution for an environment where the primary dangers were large predators and other humans. But then something went wrong and the mechas got too smart from runaway sexual selection. This narrative may be slightly wrong, but something close to it is correct. More to the point, for much of human history, having values that were that different from one's peers was a good way to not have reproductive success. Humans were selected for having incoherent, inconsistent, fluid value systems.
There's no reason to think that an AGI will fall into that category. Moreover, note that even powerful humans prefer to impose their values on others rather than alter their own values. A sufficiently powerful AGI would likely do likewise.
Regarding the empire, I may need to apologize; I think I attach more negative connotations to the word "empire" than I stated explicitly in my remark, and they may not be shared. Here's a slightly different analogy that may help: if you have to choose between a future with the United Federation of Planets from Star Trek or the Imperium from Warhammer 40K, which would you choose?
↑ comment by FAWS · 2011-11-04T12:45:22.578Z · LW(p) · GW(p)
Not Logos, but:
If you have to choose between a future with the United Federation of Planets from Star Trek or the Imperium from Warhammer 40K, which would you choose?
The Imperium in a 40K-like universe and the UFP in a Star Trek-like universe. Switching them would be disastrous in either case. Not that either is optimal even for its own environment, and the actual universe is extremely unlikely to resemble either fiction. I agree that, given an unlikely future where humans still in control of their policies expand into space and encounter aliens, being able to afford to be nice to them is better than not being able to, and actually being nice to them is better than not if one can afford to.
↑ comment by Logos01 · 2011-11-04T15:18:51.622Z · LW(p) · GW(p)
There's no reason to think that an AGI will fall into that category. Moreover, note that even powerful humans prefer to impose their values on others rather than alter their own values. A sufficiently powerful AGI would likely do likewise.
I was assuming the latter. As to the former, again: hence my caveat. I don't much care how large the space of possible AGI minds is; I've already arbitrarily limited the kinds I'm talking about to a very narrow window.
So objecting to my valuative statement regarding that narrow window with the statement, "But there's no reason to think it would be in that window!" -- just shows that you're lacking reading skills, to be quite frank.
I don't much care what the range of possible values of f(x) is for x = 0..10000000 when I've already asked what f(10) is. If it's a sentient entity that is recursively intelligent, then at some point it alone would become more "cognizant" than the entire human race put together.
If you were put in a situation where you had to choose between letting the world be populated by cows, or by people, which would you choose?
comment by lukeprog · 2011-11-02T09:01:27.512Z · LW(p) · GW(p)
Machines can within certain limits beget machines of any class, no matter how different to themselves... Complex now, but how much simpler and more intelligibly organised may [a machine] not become in another hundred thousand years? or in twenty thousand? For man at present believes that his interest lies in that direction; he spends an incalculable amount of labour and time and thought in making machines breed always better and better; he has already succeeded in effecting much that at one time appeared impossible, and there seem no limits to the results of accumulated improvements if they are allowed to descend with modification from generation to generation. It must always be remembered that man’s body is what it is through having been moulded into its present shape by the chances and changes of many millions of years, but that his organisation never advanced with anything like the rapidity with which that of the machines is advancing. This is the most alarming feature in the case, and I must be pardoned for insisting on it so frequently.
Samuel Butler (1872)
comment by lukeprog · 2011-11-02T08:26:38.771Z · LW(p) · GW(p)
The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
Eliezer Yudkowsky (2008)
↑ comment by James_Miller · 2011-11-02T20:15:19.080Z · LW(p) · GW(p)
EY changed it in the published version to:
"The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else."
comment by lukeprog · 2011-11-02T08:46:29.019Z · LW(p) · GW(p)
Though we have to live and work with (and against) today's mechanical morons, their deficiencies should not blind us to the future. In particular, it should be realized that as soon as the borders of electronic intelligence are passed, there will be a kind of chain reaction, because the machines will rapidly improve themselves... there will be a mental explosion; the merely intelligent machine will swiftly give way to the ultraintelligent machine.... Perhaps our role on this planet is not to worship God but to create Him.
Arthur C. Clarke (1968)
comment by hankx7787 · 2011-11-04T11:49:46.034Z · LW(p) · GW(p)
"There are lots of people who think that if they can just get enough of something, a mind will magically emerge. Facts, simulated neurons, GA trials, proposition evaluations/second, raw CPU power, whatever. It's an impressively idiotic combination of mental laziness and wishful thinking." - Michael Wilson
comment by James_Miller · 2011-11-02T13:55:20.257Z · LW(p) · GW(p)
If the Singularity can not be prevented or confined, just how bad could the Post-Human era be? Well ... pretty bad.
(Ellipsis in the original.)
comment by lukeprog · 2012-03-25T22:53:19.768Z · LW(p) · GW(p)
I once remarked that to design ultraintelligent machines was to play with fire, that we had played with fire once before, and it had kept the other animals at bay. Arthur Clarke's reply was that this time we are the other animals.
comment by hankx7787 · 2011-11-04T11:50:07.063Z · LW(p) · GW(p)
"In the universe where everything works the way it common-sensically ought to, everything about the study of Artificial General Intelligence is driven by the one overwhelming fact of the indescribably huge effects: initial conditions and unfolding patterns whose consequences will resound for as long as causal chains continue out of Earth, until all the stars and galaxies in the night sky have burned down to cold iron, and maybe long afterward, or forever into infinity if the true laws of physics should happen to permit that. To deliberately thrust your mortal brain onto that stage, as it plays out on ancient Earth the first root of life, is an act so far beyond "audacity" as to set the word on fire, an act which can only be excused by the terrifying knowledge that the empty skies offer no higher authority." - Eliezer Yudkowsky
comment by James_Miller · 2011-11-02T14:11:04.169Z · LW(p) · GW(p)
We must develop as quickly as possible technologies that make possible a direct connection between brain and computer, so that artificial brains contribute to human intelligence rather than opposing it.
comment by lukeprog · 2011-11-02T09:57:11.085Z · LW(p) · GW(p)
Compare any kind of machine you may happen to think of with what its ancestor was only twenty-five years ago. Its efficiency has doubled, trebled... By knowledge alone man might extinguish himself utterly... Man's further task is... to learn how best to live with these powerful creatures of his mind [the machines], how to give their fecundity a law and... how not to employ them in error against himself.
Garet Garrett (1926)
comment by Paul Crowley (ciphergoth) · 2011-11-02T08:50:34.401Z · LW(p) · GW(p)
The panel of experts was overall skeptical of the radical views expressed by futurists and science-fiction authors. [...] There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity,” and also about the large-scale loss of control of intelligent systems. [...] The group suggested outreach and communication to people and organizations about the low likelihood of the radical outcomes, sharing the rationale for the overall comfort of scientists in this realm
comment by lukeprog · 2011-11-02T08:45:18.284Z · LW(p) · GW(p)
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
Good (1965)
↑ comment by ShardPhoenix · 2011-11-02T12:07:44.610Z · LW(p) · GW(p)
The use of "unquestionably" in this quote has always irked me a bit, despite the fact that I find the general concept reasonable.
comment by lukeprog · 2011-11-02T08:44:39.674Z · LW(p) · GW(p)
...we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race... the time will come when the machines will hold the real supremacy over the world and its inhabitants.
Samuel Butler (1863)
comment by Halfwit · 2013-06-10T20:47:46.578Z · LW(p) · GW(p)
The mathematician John von Neumann, born Neumann Janos in Budapest in 1903, was incomparably intelligent, so bright that, the Nobel Prize-winning physicist Eugene Wigner would say, "only he was fully awake." One night in early 1945, von Neumann woke up and told his wife, Klari, that "what we are creating now is a monster whose influence is going to change history, provided there is any history left. Yet it would be impossible not to see it through." Von Neumann was creating one of the first computers, in order to build nuclear weapons. But, Klari said, it was the computers that scared him the most.
Konstantin Kakaes
comment by lukeprog · 2011-11-02T08:31:42.242Z · LW(p) · GW(p)
In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.
George Dyson (1998)
↑ comment by amcknight · 2011-11-02T18:49:57.059Z · LW(p) · GW(p)
Now we just need machines on our side and we'll have a cute little love-triangle.
↑ comment by Shmi (shminux) · 2011-11-02T23:25:23.173Z · LW(p) · GW(p)
So then, even when we have an FAI, all three parties will be unhappy?
comment by lukeprog · 2011-11-02T10:15:00.564Z · LW(p) · GW(p)
"It seems inevitable that sometime in this century, Moore's Law combined with greater understanding of intelligence itself will drive machine intelligence to levels beyond, and soon thereafter, beyond anything we can imagine. When intelligent machines begin designing superintelligence machines and their software, intelligence should grow exponentially. The result could be a runaway intelligence explosion.
T.M. Georges (2004)
↑ comment by Paul Crowley (ciphergoth) · 2011-11-16T12:43:44.604Z · LW(p) · GW(p)
Where did he say this? A search turns up only this page. Thanks!
comment by lukeprog · 2013-01-13T03:41:45.047Z · LW(p) · GW(p)
Most people are strongly biased toward not wanting a computer to be able to think. Why? For a variety of reasons, the layperson's concept think has become so intertwined with the concept human that many people have an emotional reaction against the idea of nonhuman things thinking...
However, despite their strong feelings against the idea of thinking computers, most people have not thought about the issue very carefully and are at a loss to come up with a definition of thinking that would include most humans (babies, for example) and exclude all computers. It is sometimes humorous to hear the criteria that people who are unfamiliar with current work in artificial intelligence come up with, for they invariably choose something that computers can actually do. For example, many people propose the criterion "ability to learn from experience," only to be told that some robots and [AI] systems have fulfilled this criterion...
Usually the second choice is something like "creativity" ("coming up with something that people judge as useful that no person has thought of before"...). When told that most experts agree that computers have fulfilled this criterion, the person still does not admit the possibility of thinking machines.
Often the person abandons the attempt to derive an operational definition at this point and instead attempts to argue that computers could not possibly think because "humans built them and programmed them; they only follow their programs."... [but] we do not invoke the "origins" argument for other processes. Consider the process of heating food. Consider the question "Do ovens heat?" Do we say, "Ovens don't really heat, because ovens are built by people. Therefore, it only makes sense to say that people heat. Ovens don't really heat"? ...Of course not. The origin of something is totally irrelevant to its ability to carry out a particular process.
comment by lukeprog · 2012-05-25T02:42:12.354Z · LW(p) · GW(p)
Why we're doomed reason #692...
Here is Hugo de Garis, in the opening of The Artilect War:
You may ask, "Well, if you are so concerned about the negative impact of your work [on artificial brains] on humanity, why don't you stop it and do something else?" The truth is, I feel that I'm constructing something that may become rather godlike in future decades... The prospect of building godlike creatures fills me with a sense of religious awe that... motivates me powerfully to continue, despite the possible horrible negative consequences.
comment by lukeprog · 2012-03-25T22:47:08.806Z · LW(p) · GW(p)
The first intelligent machine is the last invention that man need ever make since it will lead, without further human invention, to the ultraintelligent machine... To up-date Voltaire: if God does not exist we shall have constructed him, or at any rate a reasonable approximation. Or will it be the Devil?
comment by Grognor · 2012-02-14T04:29:03.259Z · LW(p) · GW(p)
Before building FAI you built an oracle AI to help you. With its help, you found a mathematical definition of U, the utility of humanity’s extrapolated volition (or whatever). You were all pretty pleased with yourselves, but you didn’t stop there: you found a theory of everything, located humanity within it, and wrote down the predicate F(X) = “The humans run the program described by X.”
To top it off, with the help of your oracle AI you found the code for a “best possible AI”, call it FAI, and a proof of the theorem:
There exists a constant Best such that U ≤ Best, but F(FAI) implies U = Best.
Each of these steps you did with incredible care. You have proved beyond reasonable doubt that U and F represent what you want them to.
You present your argument to the people of the world. Some people object to your reasoning, but it is airtight: if they choose to stop you from running FAI, they will still receive U ≤ Best, so why bother?
Now satisfied and with the scheduled moment arrived, you finally run FAI. Promptly the oracle AI destroys civilization and spends the rest of its days trying to become as confident as possible that Tic-Tac-Toe is really a draw (like you asked it to, once upon a time).
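For readers who prefer notation, the quoted theorem can be restated compactly as follows (this restatement is mine, not part of the original comment):

$$\exists\, \mathrm{Best}:\quad U \le \mathrm{Best} \;\;\wedge\;\; \big(F(\mathrm{FAI}) \Rightarrow U = \mathrm{Best}\big)$$

That is, Best is an upper bound on the utility U of humanity's extrapolated volition, and the bound is attained if humanity runs the program FAI.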
comment by lukeprog · 2011-11-02T08:29:04.130Z · LW(p) · GW(p)
If, then, men... were only machines of so complicated a make that it was less trouble to us to cut the difficulty and say that that kind of mechanism was 'being alive,' why should not machines ultimately become as complicated as we are... and to be indeed as living as it was in the nature of anything at all to be?
Samuel Butler (1880)
comment by James_Miller · 2011-11-02T14:00:47.865Z · LW(p) · GW(p)
If you invent a breakthrough in artificial intelligence, so machines can learn, that is worth 10 Microsofts.
comment by djcb · 2011-11-02T13:22:06.652Z · LW(p) · GW(p)
I am the psychotherapist. Please, describe your problems. Each time you are finished talking, type RET twice.
I feel so tired
Why do you say?
I fall asleep all the time
Is it because of your plans that you say?
Yes, I work too hard
Maybe your life has something to do with this.
Indeed!
Can you elaborate on that?
Well, I work so can improve my life
What makes you believe?
I read it somewhere
Is it because of the people you hang around with that you say?
I bet it is because of that
I would appreciate it if you would continue.
[...]
my secretary will send you a bill.
-- emacs, doctor mode, the built-in psychotherapist
comment by lukeprog · 2013-06-01T05:02:44.999Z · LW(p) · GW(p)
Members of the artificial intelligence community bear an ominous resemblance to... the Sorcerer's Apprentice. The apprentice learnt just enough magic for his master to save himself the trouble of performing an onerous task, but not quite enough to stop the spellbound buckets and brooms from flooding the castle.
Margaret Boden, Artificial Intelligence and Natural Man, p. 463
comment by lukeprog · 2012-03-26T02:56:59.663Z · LW(p) · GW(p)
Once a machine is designed that is good enough… it can be put to work designing an even better machine. At this point an "explosion" will clearly occur; all the problems of science and technology will be handed over to machines and it will no longer be necessary for people to work. Whether this will lead to a Utopia or to the extermination of the human race will depend on how the problem is handled by the machines. The important thing will be to give them the aim of serving human beings.
It seems probable that no mechanical brain will be really useful until it is somewhere near to the critical size. If so, there will be only a very short transition period between having no very good machine and having a great many exceedingly good ones. Therefore the work on simulation of artificial intelligence on general-purpose computers is especially important, because it will lengthen the transition period, and give human beings a chance to adapt to the future situation.
comment by lukeprog · 2012-03-26T00:43:32.866Z · LW(p) · GW(p)
...it is always dangerous to try to relieve ourselves of the responsibility of understanding exactly how our wishes will be realized. Whenever we leave the choice of means to any servants we may choose then the greater the range of possible methods we leave to those servants, the more we expose ourselves to accidents and incidents. When we delegate those responsibilities, then we may not realize, before it is too late to turn back, that our goals have been misinterpreted, perhaps even maliciously. We see this in such classic tales of fate as Faust, the Sorcerer's Apprentice, or the Monkey's Paw by W.W. Jacobs.
[Another] risk is exposure to the consequences of self-deception. It is always tempting to say to oneself... that "I know what I would like to happen, but I can't quite express it clearly enough." However, that concept itself reflects a too-simplistic self-image, which portrays one's own self as [having] well-defined wishes, intentions, and goals. This pre-Freudian image serves to excuse our frequent appearances of ambivalence; we convince ourselves that clarifying our intentions is merely a matter of straightening-out the input-output channels between our inner and outer selves. The trouble is, we simply aren't made that way. Our goals themselves are ambiguous.
The ultimate risk comes when [we] attempt to take that final step — of designing goal-achieving programs that are programmed to make themselves grow increasingly powerful, by self-evolving methods that augment and enhance their own capabilities. It will be tempting to do this, both to gain power and to decrease our own effort toward clarifying our own desires. If some genie offered you three wishes, would not your first one be, "Tell me, please, what is it that I want the most!" The problem is that, with such powerful machines, it would require but the slightest accident of careless design for them to place their goals ahead of [ours]. The machine's goals may be allegedly benevolent, as with the robots of With Folded Hands, by Jack Williamson, whose explicit purpose was to protect us from harming ourselves, or as with the robot in Colossus, by D. F. Jones, who itself decides, at whatever cost, to save us from an unsuspected enemy. In the case of Arthur C. Clarke's HAL, the machine decides that the mission we have assigned to it is one we cannot properly appreciate. And in Vernor Vinge's computer-game fantasy, True Names, the dreaded Mailman... evolves new ambitions of its own.
comment by Dr_Manhattan · 2011-11-03T14:47:33.804Z · LW(p) · GW(p)
Wozniak declared to his audience that "we're already creating the superior beings, I think we lost the battle to the machines long ago."
comment by lukeprog · 2011-11-02T10:21:45.543Z · LW(p) · GW(p)
The human race, as we know it, is very likely in its end game; our period of dominance on earth is about to be terminated. We can try and reason and bargain with the machines which take over, but why should they listen when they are far more intelligent than we are?
Kevin Warwick (1998)
comment by lukeprog · 2011-11-02T08:46:50.447Z · LW(p) · GW(p)
The survival of man may depend on the early construction of an ultraintelligent machine — or the ultraintelligent machine may take over and render the human race redundant or develop another form of life. The prospect that a merely intelligent man could ever attempt to predict the impact of an ultraintelligent device is of course unlikely but the temptation to speculate seems irresistible.
Julius Lukasiewicz (1974)
comment by Grognor · 2012-05-04T10:13:48.356Z · LW(p) · GW(p)
A superintelligent computer designed to win at chess will keep trying to win at chess, ignoring any other goals along the way. It doesn't matter whether it's a million times smarter than Einstein, it's not going to start wanting to fight for freedom just because humans like that kind of thing any more than it's going to start wanting to have sex with pretty actors and actresses just because humans like that kind of thing. It's just going to be really, really good at winning at chess.
comment by lukeprog · 2012-04-25T20:37:44.385Z · LW(p) · GW(p)
In the event of a super-intelligent machine deciding upon a major change of environment, it might regard the biological society which had served it with no more consideration than a brewer gives to colonies of yeast when they have served their purpose in the brewery.
Cade (1966), p. 225
comment by lukeprog · 2012-04-25T20:29:54.671Z · LW(p) · GW(p)
Our own technological development is so rapid that we must accept the fact of imminent developments which are beyond our present understanding... It is therefore certain that a society only a few hundred years more advanced than our own could, if they thought it expedient, exterminate terrestrial life without effort...
It is useless to think of any form of defence against any action by superior intelligences... it would be just as futile as the occupants of an antheap declaring war against a bulldozer.
Cade (1966), p. 220
Page 223 includes this drawing of self-reproducing machines.
comment by lukeprog · 2012-04-25T20:10:49.932Z · LW(p) · GW(p)
political leaders on Earth will slowly come to realize... that intelligent machines having superhuman thinking ability can be built. The construction of such machines, even taking into account all the latest developments in computer technology, would call for a major national effort. It is only to be expected that any nation which did put forth the financial and physical effort needed to build and programme such a machine, would also attempt to utilize it to its maximum capacity, which implies that it would be used to make major decisions of national policy. Here is where the awful dilemma arises. Any restriction to the range of data supplied to the machine would limit its ability to make effective political and economic decisions, yet if no such restrictions are placed upon the machine's command of information, then the entire control of the nation would virtually be surrendered to the judgment of the robot.
On the other hand, any major nation which was led by a superior, unemotional intelligence of any kind, would quickly rise to a position of world domination. This by itself is sufficient to guarantee that, sooner or later, the effort to build such an intelligence will be made — if not in the Western world, then elsewhere, where people are more accustomed to iron dictatorships.
...It seems that, in the foreseeable future, the major nations of the world will have to face the alternative of surrendering national control to mechanical ministers, or being dominated by other nations which have already done this. Such a process will eventually lead to the domination of the whole Earth by a dictatorship of an unparalleled type — a single supreme central authority.
...the transition from biological evolution to mechanical evolution... could be rapid if some nation takes the plunge and goes in for government by computer, or very much slower if the dangers in this step are recognized, and man merely mechanized himself, by a gradual replacement of defective or inadequate biological components.
...There is little point in pursuing this line of thought any further, since a world of machines, governed by machines, for machines... will be as incomprehensible to us as would be the engines of a trawler to the ship's cat.
Cade, Other Worlds Than Ours (1966), pp. 214-219
comment by lukeprog · 2012-04-25T17:53:39.669Z · LW(p) · GW(p)
future machines could be more intelligent than any man, and it is possible that a sort of mechanical evolution could be introduced, using existing computers to design their descendants, and so on, generation after generation, getting a little more brilliant at every step. It is in fact a theoretical possibility that, above a certain level of complexity, any computer can design a better computer than itself; this was first pointed out by the late John von Neumann.
...whenever mechanical or inorganic brains have been discussed, there have been passionate denials that such automata could ever think creatively. Some of these objections, perhaps a majority of them, arise from conflicts with the religious beliefs of the individual. Other negative views, expressed in some cases by noted scientists, result from an interpretation of mechanical thinking as a blow to their ego... a resentment that any mere thing of metal could be superior in any way to a human brain. And yet our brains are comparatively badly organized, inaccurate, and (except for memory) slow. They were not evolved for the purpose of abstract thought, but developed slowly through millions of years as a product of the struggle for survival... Bearing these things in mind, surely it is feasible that we shall eventually build mechanical minds of superhuman thinking ability, just as we now build bulldozers of superhuman muscle power?
Cade, Other Worlds Than Ours (1966), pp. 213-214
comment by lukeprog · 2012-04-25T17:33:54.759Z · LW(p) · GW(p)
Assume for the sake of argument that conscious beings have existed for some 20 million years: see what strides machines have made in the last thousand! May not the world last 20 million years longer? If so, what will they not in the end become? Is it not safer to nip the mischief in the bud and to forbid them further progress?
Samuel Butler, 1872
(My own answer to Butler's question is "No" for the reason Moravec gave in 1988.)
comment by lukeprog · 2012-04-12T01:20:43.376Z · LW(p) · GW(p)
As the human race, we are delicately positioned. We have the... ability... to create machines that will not only be as intelligent as humans but that will go on to be far more intelligent still. This will spell the end of the human race as we know it. Is that what we want? Should we not at least have an international body monitoring and even controlling what goes on?
When the first nuclear bombs were dropped on Japan, killing thousands of people, we took stock of our actions and realised the threat that such weapons posed to our existence. Despite the results achieved by the Hiroshima and Nagasaki bombs, even deadlier nuclear bombs have been built, much more powerful, much more accurate and much more intelligent. But with nuclear weapons we saw what they could do and we gave ourselves another chance.
With intelligent machines we will not get a second chance. Once the first powerful machine... is switched on, we will most likely not get the opportunity to switch it back off again. We will have started a time bomb ticking on the human race, and we will be unable to switch it off.
Kevin Warwick, March of the Machines (1997)
comment by Grognor · 2012-02-14T08:32:57.119Z · LW(p) · GW(p)
A point that K. Eric Drexler makes about nanotechnology research also applies to AI research. If a capability can be gained, eventually it will be gained and we can therefore not base humanity’s survival on AI never happening. Doing so is denying the inevitable. Instead, we can only hope to manage it as well as possible. Suppose we took the view that ethical people would not create AI. By definition, the only people creating it would be unethical people, who would then control what happened next -- so by opting out, all the ethical people would be doing would be handing power over to unethical people. I think this makes the position of ethical withdrawal ethically dubious.
comment by lukeprog · 2014-01-29T21:38:35.133Z · LW(p) · GW(p)
intelligent machines will embody values, assumptions, and purposes, whether their programmers consciously intend them to or not. Thus, as computers and robots become more and more intelligent, it becomes imperative that we think carefully and explicitly about what those built-in values are. Perhaps what we need is, in fact, a theory and practice of machine ethics...
...We can already see a glimmer of how computers might make [ethical] choices in Jaime Carbonell's model of subjective understanding. Carbonell showed how programs could be governed by hierarchies of goals, which would guide their reasoning processes in certain directions and not in others. Thus, it might very well be possible to formulate a hierarchy of goals that embody ethical concepts; the hard part, as always, would lie in formulating precisely what those concepts ought to be.
...the effort of understanding machine ethics may turn out to be invaluable not just as a matter of practicality, but for its own sake. The effort to endow computers with intelligence has led us to look deep within ourselves to understand what intelligence is. In much the same way, the effort to construct ethical machines will inevitably lead us to look within ourselves and reexamine our own conceptions of right and wrong. Of course, this... has been the domain of religion and philosophy for millennia. But then, pondering the nature of intelligence is not a new activity, either. The difference in each case is that, for the first time, we are having to explain ourselves to an entity that knows nothing about us. A computer is the proverbial Martian. And for that very reason, it is like a mirror: the more we have to explain ourselves, the more we may come to understand ourselves.
comment by lukeprog · 2014-01-29T18:02:06.185Z · LW(p) · GW(p)
Some philosophers and scholars who study and speculate on the [intelligence explosion]... maintain that this question is simply a matter of ensuring that AI is created with pro-human tendencies. If, however, we are creating an entity with greater than human intelligence that is capable of designing its own newer, better successors, why should we assume that human-friendly programming traits will not eventually fall by the wayside?
Al-Rodhan (2011), pp. 242-243, notices the stable self-modification problem.
comment by lukeprog · 2013-10-29T03:27:48.171Z · LW(p) · GW(p)
From Michie (1982):
Competent medical and biological research authorities in various parts of the world are concerned about genetic engineering... There is the possibility that as an accident, a side-effect of such research, some quite new and virulent micro-organism might multiply to an extent with which we are not able to cope. As a consequence, the Medical Research Council in Britain... recently supported a six-month moratorium on research in that specific area while the matter was studied more deeply and new safeguards drawn up.
It is conceivable that machine intelligence research could at some future stage raise legitimate concerns of that character. If that ever happened then I would certainly support such a 'holding operation'.
comment by lukeprog · 2013-07-04T06:59:25.271Z · LW(p) · GW(p)
In the long run, AI is the only science.
Woody Bledsoe, quoted in Machines Who Think.
comment by lukeprog · 2013-06-01T04:41:47.424Z · LW(p) · GW(p)
When machines acquire an intelligence superior to our own, they will be impossible to keep at bay... [Human-level AI] will threaten the very existence of human life as we know it... We should not... expect the main battles of the twenty-first century to be fought over such issues as the environment, overpopulation, or poverty. No, we should expect the fight to be about how we cope with [AI]; and the issue [of] whether we or they — our silicon challengers — control the future of the earth.
Crevier, AI: The Tumultuous History of the Search for Artificial Intelligence, p. 341
comment by lukeprog · 2013-05-24T21:47:08.469Z · LW(p) · GW(p)
"Speculations concerning the first ultraintelligent machine" started with the dubious sentence "The survival of man [sic] depends on the early construction of an ultraintelligent machine." That might have been appropriate during the Cold War. Today I suspect that the word survival should be replaced by extinction.
comment by lukeprog · 2013-05-19T02:23:29.436Z · LW(p) · GW(p)
People tend to feel that intelligence is a good thing, even if they are unable to say exactly what it is. But its presence in a machine might not be an unmitigated blessing... there are purposes for which we might not want machines to be intelligent.
comment by lukeprog · 2012-10-03T03:00:09.059Z · LW(p) · GW(p)
The greatest task before civilisation at present is to make machines what they ought to be, the slaves, instead of the masters of men.
Havelock Ellis, 1922
comment by lukeprog · 2012-05-04T01:49:34.630Z · LW(p) · GW(p)
The singularity literature perhaps does a service by highlighting the ways in which AI developments could produce new degrees of intelligence and operational autonomy in AI agents—especially as current AI agents play an increasingly important role in the design of future AI agents. Bearing in mind the far-reaching implications of such possible future scenarios, the urgency of work in [machine ethics] to ensure the emergence of ‘friendly AI’ (Yudkowsky 2001, 2008) is all the more important to underline.
comment by lukeprog · 2012-04-25T20:38:42.715Z · LW(p) · GW(p)
The establishment of contact with any superior community would obviously be of unparalleled importance for the human race — socially, scientifically, and culturally. It could lead either to our rapidly attaining superior status ourselves, or it could lead to our extinction. It probably depends upon how well we conceal, or overcome, our own grave failings as social beings.
Cade (1966), p. 228
comment by lukeprog · 2012-02-21T08:29:08.697Z · LW(p) · GW(p)
Shorter I.J. Good intelligence explosion quote:
once an intelligent machine is built, it can be used for the design of an even better machine and so on; so that the invention of the first intelligent machine is the last invention that man need make.
comment by lukeprog · 2011-12-28T02:39:42.346Z · LW(p) · GW(p)
Technological progress is like an axe in the hands of a pathological criminal.
Albert Einstein
↑ comment by lukeprog · 2014-04-09T02:22:25.751Z · LW(p) · GW(p)
Looking more closely, this much-duplicated "quote" seems to be a paraphrase of something he wrote in a letter to Heinrich Zangger in the context of the First World War: "Our entire much-praised technological progress, and civilization generally, could be compared to an axe in the hand of a pathological criminal."
I do think about the AGI problem in much this way, though. E.g. in Just Babies, Paul Bloom wrote:
Families survive the Terrible Twos because toddlers aren’t strong enough to kill with their hands and aren’t capable of using lethal weapons. A two-year-old with the physical capacities of an adult would be terrifying.
I think our current civilization is like a two-year-old. The reason we haven't destroyed ourselves yet, but rather just bit some fingers and ruined some carpets, is that we didn't have any civilization-lethal weapons. We've had nuclear weapons for a few decades now and haven't blown ourselves up yet, but there were some close calls. In the latter half of the 21st century we'll acquire some additional means of destroying our civilization. Will we have grown up by then? I doubt it. Civilizational maturity progresses more slowly than technological power.