Did AI pioneers not worry much about AI risks?
post by lisperati · 2020-02-09T19:58:53.395Z · LW · GW · 8 comments
This is a question post.
It seems noteworthy just how little the AI pioneers from the 40s-80s seemed to care about AI risk. There is no obvious reason why a book like "Superintelligence" couldn't have been written in the 1950s, but for some reason that didn't happen... any thoughts on why this was the case?
I can think of three possible reasons for this:
1. They actually DID care and published extensively about AI risk, but I'm simply not well enough schooled on the history of AI research.
2. Deep down, people involved in early AI research knew that they were still a long long way from achieving significantly powerful AI, despite the optimistic public proclamations that were made at that time.
3. AI risks are highly counter-intuitive, and it simply took another 60 years of thinking to understand them.
Anyone have any thoughts on this question?
Answers
answer by Dagon
Would you say it's taken particularly seriously NOW? There are some books about it, and some researchers focusing on it. A very tiny portion of the total thought put into the topic of machine intelligence.
I think:
1) About the same percentage of publishing on the overall topic went to risks then as now. There's a ton more on AI risk now because there are 3 orders of magnitude more overall thought and writing on AI generally.
2) This may still be true. Humans aren't good at long-term risk analysis.
3) Perhaps more than 60 years of thinking will be required. We're beginning to ask the right questions (I hope).
8 comments
comment by Steven Byrnes (steve2152) · 2020-02-09T23:18:01.396Z · LW(p) · GW(p)
I know at least Alan Turing, IJ Good, Marvin Minsky, and Norbert Wiener discussed AI risks, but I can't offer quotes and references right at this moment. And I'm not sure how much they dove into it, versus talking about it in one speech or whatever.
ETA: I have the impression that AI risk is one of the major themes of Norbert Wiener's book The Human Use of Human Beings. I haven't read it myself, so I'm not really sure. For Alan Turing it may have just been the one speech excerpted by ofer below.
Replies from: ofer
↑ comment by Ofer (ofer) · 2020-02-10T12:57:59.281Z · LW(p) · GW(p)
The following quotes are from this [LW · GW] post by Scott Alexander:
Alan Turing:
Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. To do so would of course meet with great opposition, unless we have advanced greatly in religious tolerance since the days of Galileo. There would be great opposition from the intellectuals who were afraid of being put out of a job. It is probable though that the intellectuals would be mistaken about this. There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers…At some stage therefore we should have to expect the machines to take control.
[EDIT: a similar text, attributed to Alan Turing, appears here (from the last paragraph) - continued here.]
I. J. Good:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
[EDIT: I haven't managed to verify it yet, but it seems that the last quote is from a 58-page paper by I. J. Good, titled Speculations Concerning the First Ultraintelligent Machine; here is an archived version of the broken link in Scott's post.]
comment by Charlie Steiner · 2020-02-10T01:10:03.435Z · LW(p) · GW(p)
I think it's mostly (3). Not because AI safety is an outlier, but because of how much work people had to do to come to grips with Moravec's paradox.
If you take someone clever and throw them at the problem of GAI, the first thing they'll think of is something doing logical reasoning, able to follow natural language commands. Their intuition will be based on giving orders to a human. It takes a lot of work to supplant that intuition with something more mechanistic.
Like, it seems obvious to us now that building something that takes natural language commands and actually does what we mean is a very hard problem. But this is exactly a Moravec's paradox situation, because knowing what people mean is mostly effortless and unconscious to us.
comment by riceissa · 2020-02-12T21:20:37.314Z · LW(p) · GW(p)
Some other relevant links:
AI Risk & Opportunity: A Timeline of Early Ideas and Arguments [LW · GW]
AI Risk and Opportunity: Humanity's Efforts So Far [LW · GW]
comment by Alex Kudryavtsev (alex-kudryavtsev) · 2020-02-10T21:57:22.107Z · LW(p) · GW(p)
As for science-fiction books, Stanislaw Lem wrote a few excellent stories about AI taking over humanity back in the mid-20th century.
Two short but fascinating ones are
1) Lymphater's Formula
2) Friend.
Replies from: Vaniver, jmh
↑ comment by Vaniver · 2020-02-11T00:12:21.431Z · LW(p) · GW(p)
From The Investigation, written by Lem in 1959:
Once they begin to escalate their efforts, both sides are trapped in an arms race. There must be more and more improvements in weaponry, but after a certain point weapons reach their limit. What can be improved next? Brains. The brains that issue the commands. It isn’t possible to make the human brain perfect, so the only alternative is a transition to mechanization. The next stage will be a fully automated headquarters equipped with electronic strategy machines. And then a very interesting problem arises, actually two problems. McCatt called this to my attention. First, is there any limit on the development of these brains? Fundamentally they’re similar to computers that can play chess. A computer that anticipates an opponent’s strategy ten moves in advance will always defeat a computer that can think only eight or nine moves in advance. The more far-reaching a brain’s ability to think ahead, the bigger the brain must be. That’s one.”
…
“Strategic considerations dictate the construction of bigger and bigger machines, and, whether we like it or not, this inevitably means an increase in the amount of information stored in the brains. This in turn means that the brain will steadily increase its control over all of society’s collective processes. The brain will decide where to locate the infamous button. Or whether to change the style of the infantry uniforms. Or whether to increase production of a certain kind of steel, demanding appropriations to carry out its purposes. Once you create this kind of brain you have to listen to it. If a Parliament wastes time debating whether or not to grant the appropriations it demands, the other side may gain a lead, so after a while the abolition of parliamentary decisions becomes unavoidable. Human control over the brain’s decisions will decrease in proportion to the increase in its accumulated knowledge. Am I making myself clear? There will be two growing brains, one on each side of the ocean. What do you think a brain like this will demand first when it’s ready to take the next step in the perceptual race?”
“An increase in its capability.”
…
“No, first it demands its own expansion — that is to say, the brain becomes even bigger! Increased capability comes next.”
“In other words, you predict that the world is going to end up a chessboard, and all of us will be pawns manipulated in an eternal game by two mechanical players.”
↑ comment by jmh · 2020-02-12T01:50:29.471Z · LW(p) · GW(p)
I had also thought about the social/cultural environment of the time and how sci-fi tended to lean more towards the positive side. Asimov's Three Laws would protect us. The Twilight Zone episode about the machine nannies that provided loving care for unfortunate children who had lost their mothers. The robot in the original Lost in Space series.
Of course there was also Forbidden Planet, but that was not so much about AI.
Maybe the original Star Trek episode where they arrive at an advanced vacation planet whose computers create the world based on everyone's fantasy thoughts. Similar to the Forbidden Planet thesis, except there everything was clearly controllable; the AI just wasn't prepared for human minds with somewhat uncontrolled thoughts.
But still, there seems to be something of an element of the zeitgeist of the time -- was it more optimistic or pessimistic? -- and I get the impression that, on average, people are more pessimistic now about what the future will bring than they were 50 or 60 years ago.
comment by Kaj_Sotala · 2020-02-10T17:55:37.762Z · LW(p) · GW(p)
The notion of AI risk might also have seemed less compelling in a period before widely networked computers. Sometimes people say that you could just pull the plug on a computer that misbehaved, which seems a little silly in an era where physical installations can be damaged by hacking and where it can be impossible to get software that has been uploaded to the Internet removed... but it probably felt a lot more plausible in an era where networking was limited mostly to military systems at first, and university networks later.