After critical event W happens, they still won't believe you

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T21:59:09.515Z · LW · GW · Legacy · 107 comments

In general and across all instances I can think of so far, I do not agree with the part of your futurological forecast in which you reason, "After event W happens, everyone will see the truth of proposition X, leading them to endorse Y and agree with me about policy decision Z."

Example 1:  "After a 2-year-old mouse is rejuvenated to allow 3 years of additional life, society will realize that human rejuvenation is possible, turn against deathism as the prospect of lifespan / healthspan extension starts to seem real, and demand a huge Manhattan Project to get it done."  (EDIT:  This has not happened, and the hypothetical is mouse healthspan extension, not anything cryonic.  It's being cited because this is Aubrey de Grey's reasoning behind the Methuselah Mouse Prize.)

Alternative projection:  Some media brouhaha.  Lots of bioethicists acting concerned.  Discussion dies off after a week.  Nobody thinks about it afterward.  The rest of society does not reason the same way Aubrey de Grey does.

Example 2:  "As AI gets more sophisticated, everyone will realize that real AI is on the way and then they'll start taking Friendly AI development seriously."

Alternative projection:  As AI gets more sophisticated, the rest of society can't see any difference between the latest breakthrough reported in a press release and that business earlier with Watson beating Ken Jennings or Deep Blue beating Kasparov; it seems like the same sort of press release to them.  The same people who were talking about robot overlords earlier continue to talk about robot overlords.  The same people who were talking about human irreproducibility continue to talk about human specialness.  Concern is expressed over technological unemployment the same as today or Keynes in 1930, and this is used to fuel someone's previous ideological commitment to a basic income guarantee, inequality reduction, or whatever.  The same tiny segment of unusually consequentialist people are concerned about Friendly AI as before.  If anyone in the science community does start thinking that superintelligent AI is on the way, they exhibit the same distribution of performance as modern scientists who think it's on the way, e.g. Hugo de Garis, Ben Goertzel, etc.

Consider the situation in macroeconomics.  When the Federal Reserve dropped interest rates to nearly zero and started printing money via quantitative easing, we had some people loudly predicting hyperinflation just because the monetary base had, you know, gone up by a factor of 10 or whatever it was.  Which is kind of understandable.  But still, a lot of mainstream economists (such as the Fed) thought we would not get hyperinflation, the implied spread on inflation-protected Treasuries and numerous other indicators showed that the free market thought we were due for below-trend inflation, and then in actual reality we got below-trend inflation.  It's one thing to disagree with economists, another thing to disagree with implied market forecasts (why aren't you betting, if you really believe?) but you can still do it sometimes; but when conventional economics, market forecasts, and reality all agree on something, it's time to shut up and ask the economists how they knew.  I had some credence in inflationary worries before that experience, but not afterward...  So what about the rest of the world?  In the heavily scientific community you live in, or if you read econblogs, you will find that a number of people actually have started to worry less about inflation and more about sub-trend nominal GDP growth.  You will also find that right now these econblogs are having worry-fits about the Fed prematurely exiting QE and choking off the recovery because the elderly senior people with power have updated more slowly than the econblogs.  And in larger society, if you look at what happens when Congresscritters question Bernanke, you will find that they are all terribly, terribly concerned about inflation.  Still.  The same as before.  Some econblogs are very harsh on Bernanke because the Fed did not print enough money, but when I look at the kind of pressure Bernanke was getting from Congress, he starts to look to me like something of a hero just for following conventional macroeconomics as much as he did.

That issue is a hell of a lot more clear-cut than the medical science for human rejuvenation, which in turn is far more clear-cut ethically and policy-wise than issues in AI.

After event W happens, a few more relatively young scientists will see the truth of proposition X, and the larger society won't be able to tell a damn difference.  This won't change the situation very much; there are probably already some scientists who endorse X, since X is probably pretty predictable even today if you're unbiased.  The scientists who see the truth of X won't all rush to endorse Y, any more than current scientists who take X seriously all rush to endorse Y.  As for people in power lining up behind your preferred policy option Z, forget it, they're old and set in their ways and Z is relatively novel without a large existing constituency favoring it.  Expect W to be used as argument fodder to support conventional policy options that already have political force behind them, and for Z to not even be on the table.

107 comments

Comments sorted by top scores.

comment by CarlShulman · 2013-06-13T23:44:51.630Z · LW(p) · GW(p)

For big jumpy events, look at the reactions to nuclear chain reactions, Sputnik, ENIGMA, penicillin, the Wright brothers, polio vaccine...

Then consider the process of gradual change with respect to the Internet, solar power, crop yields...

Replies from: jkaufman
comment by jefftk (jkaufman) · 2013-06-14T15:41:14.531Z · LW(p) · GW(p)

Some amount of bias (selection? availability?) there, in that part of why we think of your first-paragraph examples is that they did make major news. There were probably others that were mostly ignored and so are much harder to think of. (Invention of the bifurcated needle, used for smallpox inoculations? What else has turned out to be really important in retrospect?)

comment by Halfwit · 2013-06-13T22:25:40.058Z · LW(p) · GW(p)

I do tend to think that Aubrey de Grey's argument holds some water. That is, it's not so much general society that will be influenced as wealthy elites. Elites seem more likely to update when they read about a 2x mouse. I suppose the Less Wrong response to this argument would be: how many of them are signed up for cryonics? But cryonics is a lot harder to believe than life extension. You need to buy pattern identity theory and nanotechnology and Hanson's value of life calculations. In the case of LE, all you have to believe is that the techniques that worked on the mouse will, likely, be useful in treating human senescence. And anyway, Aubrey hopes to first convince the gerontology community and then the public at large. This approach has worked for climate science and a similar approach may work for AI risk.

Replies from: CarlShulman
comment by CarlShulman · 2013-06-14T02:55:06.937Z · LW(p) · GW(p)

I suppose the Less Wrong response to this argument would be: how many of them are signed up for cryonics?

LessWrongers, and high-karma LessWrongers, on average seem to think cryonics won't work, with mean odds of 5:1 or more against cryonics (although the fact that they expect it to fail doesn't stop an inordinate proportion from trying it for the expected value).

On the other hand, if mice or human organs were cryopreserved and revived without brain damage or loss of viability, people would probably become a lot more (explicitly and emotionally) confident that there is no severe irreversible information loss. Much less impressive demonstrations have been enough to create huge demand to enlist in clinical trials before.

Replies from: ciphergoth, GeraldMonroe
comment by Paul Crowley (ciphergoth) · 2013-06-14T09:11:20.586Z · LW(p) · GW(p)

That number is the total probability of being revived, taking into account x-risk among other things. It would be interesting to know how many people think it's likely to be technically feasible to revive future cryo patients.

Replies from: Dentin
comment by Dentin · 2013-06-14T23:18:56.295Z · LW(p) · GW(p)

X-risk is a fairly unimportant factor in my survivability equation. Odds of dying due to accident and/or hardware failure trump it by a substantial margin. At my age, hardware failure is my most probable death mode.

That's why I have the Alcor paperwork in progress even as we speak, and why I'm donating a substantial fraction of my income to SENS and not CFAR.

It's not that X-risk is unimportant. It's that it's not of primary importance to me, and I suspect that a lot of LW people hold the same view.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-06-15T13:10:21.480Z · LW(p) · GW(p)

When you say "hardware failure", could you give an example of the sort of thing you have in mind?

Replies from: Leonhart
comment by Leonhart · 2013-06-15T19:10:41.505Z · LW(p) · GW(p)

I imagine he means cancer, heart disease, &c.

comment by GeraldMonroe · 2013-06-23T06:11:44.615Z · LW(p) · GW(p)

Alas, cryonics may be screwed with regards to this. It simply may not be physically possible to freeze something as large and delicate as a brain without so much damage that you can't thaw it and have it still work. This is, of course, no big deal if you just want the brain for the pattern it contains. You can computationally reverse the cracks and, to a lesser extent, some of the more severe damage, the same way we can computationally reconstruct a shredded document.

The point is, I think in terms of relative difficulty, the order is:

  1. Whole brain emulation
  2. Artificial biological brain/body
  3. Brain/body repaired via MNT
  4. Brain revivable with no repairs.

Note that even the "easiest" item on this list is extremely difficult.

comment by AlanCrowe · 2013-06-14T19:25:23.526Z · LW(p) · GW(p)

I don't find either example convincing about the general point. Since I'm stupid, I'll fail to spot that the mouse example uses fictional evidence and is best ignored.

We are all pretty sick of seeing a headline "Cure for Alzheimer's disease!!!" and clicking through to the article only to find that it is cured in mice, knock-out mice, with a missing gene, and therefore suffering from a disease a little like human Alzheimer's. The treatment turns out to be injecting them with the protein that the missing gene codes for. Relevance to human health: zero.

Mice are very short-lived. We expect big boosts in mouse life span to come from invoking mechanisms already present in humans and already working to provide humans with much longer life spans than mice. So we don't expect big boosts in the life span of mice to herald very much for human health. Cats would be different. If pet cats started living 34 years instead of 17, their owners would certainly be saying "I want what Felix is getting."

The sophistication of AI is a tricky thing to measure. I think that we are safe from unfriendly AI for a few years yet, not so much because humans suck at programming computers, but because they suck in a particular way. Some humans can sit at a keyboard typing in hundreds of thousands of lines of code specific to a particular challenge and achieve great things. We can call that sophistication if we like, but it isn't going to go foom. The next big challenge requires a repeat of the heroic efforts, and generates another big pile of worn-out keyboards. We suck at programming in the sense that we need to spend years typing in the code ourselves; we cannot write code that writes code.

Original visions of AI imagined a positronic brain in an anthropomorphic body. The robot could drive a car, play a violin, cook dinner, and beat you at chess. It was general purpose.

If one saw the distinction between special purpose and general purpose as the key issue, one might wonder: what would failure look like? I think the original vision would fail if one had separate robots, one for driving cars and flying airplanes, a second for playing musical instruments, a third to cook and clean, and fourth to play games such as chess, bridge, and baduk.

We have separate hand-crafted computer programs for chess and bridge and baduk. That is worse than failure.

Examples the other way.

After the Wright brothers, people did believe in powered, heavier-than-air flight. Aircraft really took off after that. One crappy little hop in the most favourable weather and suddenly everyone's a believer.

Sputnik. Dreamers had been building rockets since the 1930s, and being laughed at. The German V2 was no laughing matter, but it was designed to crash into the ground and destroy things, which put an ugh field around thinking about what it meant. Then comes 1957. Beep, beep, beep! Suddenly everyone's a believer and twelve years later Buzz Aldrin and the other guy are standing on the moon :-)

The Battle of Cambrai provides two examples of people "getting it". First, people understood before the end of 1914 that the day of the horse-mounted cavalry charge was over. The Hussites had war wagons in 1420, so there was a long history of rejecting that kind of technology. But after event W1 (machine guns and barbed wire defeating horses) it only took three years before the first tank-mounted cavalry charge. I think we tend to misunderstand this by measuring time in lives lost rather than in years. Yes, the adoption of armoured tanks was very slow if you count the delay in lives, but it couldn't have come much faster in months.

The second point is that First World War tanks were crap. The Cambrai salient was abandoned. The tanks were slow and always broke down, because they were too heavy and yet the armour was merely bulletproof. Their only protection against artillery was that the gun-laying techniques of the time were ill-suited to moving targets. The deployment of tanks in the First World War falls short of being the critical event W. One might expect the horrors of trench warfare to fade and military doctrine to go back to horses and charges in brightly coloured uniforms.

In reality the disappointing performance of the tanks didn't cause military thinkers to miss their significance. Governments did believe and developed doctrines of Blitzkrieg and cruiser tanks. Even a weak W can turn everyone into believers.

comment by paulfchristiano · 2013-06-14T09:13:43.764Z · LW(p) · GW(p)

As far as I know, people have predicted every single big economic impact from technology well in advance, in the strong sense of making appropriate plans, making indicative utterances, etc. (I was claiming a few years' warning in the piece you are responding to, which is pretty minimal). Do you think there are counterexamples? You are claiming that something completely unprecedented will happen with very high probability. If you don't think that requires strong arguments to justify then I am confused, and if you think you've provided strong arguments I'm confused too.

I agree that AI has the potential to develop extremely quickly, in a way that only a handful of other technologies did. As far as I can tell the best reason to suspect that AI might be a surprise is that it is possible that only theoretical insights are needed, and we do have empirical evidence that sometimes people will be blindsided by a new mathematical proof. But again, as far as I know that has never resulted in a surprising economic impact, not even a modest one (and even in the domain of proofs, most of them don't blindside people, and there are strong arguments that AI is a harder problem than the problems that one person solves in isolation from the community---for example, literally thousands of times more effort has been put into it). A priori you might say "well, writing better conceptual algorithms is basically the same as proofs---and also sometimes blindsides people---and the total economic value of algorithms is massive at this point, so surely we would sometimes see huge jumps" but as far as I know you would be wrong.

There seems to be a big gap between the sort of problem on which progress is rapid and surprising, and the sort of problem on which progress would have an economic impact. There are a number of reasons to suspect this a priori (lots of people work on economically relevant problems, lots of people try to pay attention to development in those areas because it actually matters, economically relevant problems tend to have lots of moving pieces and require lots of work to get right, lots of people create working intermediate versions because those tend to also have economic impact, etc. etc.) and it seems to be an extremely strong empirical trend.

Like I said, I agree that AI has the potential to develop surprisingly quickly. I would say that 10% is a reasonable probability for such a surprising development (we have seen only a few cases of tech developments which could plausibly have rapid scale-up in economic significance; we also have empirical evidence from the nature of the relationship between theoretical progress and practical progress on software performance). This is a huge deal and something that people don't take nearly seriously enough. But your position on this question seems perplexing, and it doesn't seem surprising to me that most AI researchers dismiss it (and most other serious observers follow their lead, since your claim appears to be resting on a detailed view about the nature of AI, and it seems reasonable to believe people who have done serious work on AI when trying to evaluate such claims).

Making clear arguments for more moderate and defensible conclusions seems like a good idea, and the sort of thing that would probably cause reasonable AI researchers to take the scenario more seriously.

Replies from: Eliezer_Yudkowsky, Wei_Dai, John_Maxwell_IV
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-14T16:16:29.805Z · LW(p) · GW(p)

As far as I know, people have predicted every single big economic impact from technology well in advance, in the strong sense of making appropriate plans, making indicative utterances, etc.

Is the thesis here that the surprisingness of atomic weapons does not count because there was still a 13-year delay from there until commercial nuclear power plants? It is not obvious to me that the key impact of AI is analogous to a commercial plant rather than an atomic weapon. I agree that broad economic impacts of somewhat-more-general tool-level AI may well be anticipated by some of the parties with a monetary stake in them, but this is not the same as anticipating a FOOM (X), endorsing the ideals of astronomical optimization (Y) and deploying the sort of policies we might consider wise for FOOM scenarios (Z).

Replies from: paulfchristiano
comment by paulfchristiano · 2013-06-15T00:09:02.165Z · LW(p) · GW(p)

Regarding atomic weapons:

  • Took many years and the prospect was widely understood amongst people who knew the field (I agree that massive wartime efforts to keep things secret are something of a special case, in terms of keeping knowledge from spreading from people who know what's up to other people).
  • Once you can make nuclear weapons you still have a continuous increase in destructive power; did it start from a level much higher than conventional bombing?

I do think this example is good for your case and unusually extreme, but if we are talking about a few years I think it still isn't surprising (except perhaps because of military secrecy).

but this is not the same as anticipating a FOOM (X), endorsing the ideals of astronomical optimization (Y) and deploying the sort of policies we might consider wise for FOOM scenarios (Z).

I don't think people will suspect a FOOM in particular, but I think they are open to the possibility to the extent that the arguments suggest it is plausible. I don't think you have argued against that much.

I don't think that people will become aggregative utilitarians when they think AI is imminent, but that seems like an odd suggestion at any rate. The policies we consider wise for a FOOM scenario are those that result in people basically remaining in control of the world rather than accidentally giving it up, which seems like a goal they basically share. Again, I agree that there is likely to be a gap between what I do and what others would do---e.g., I focus more on aggregate welfare, so am inclined to be more cautious. But that's a far cry from thinking that other people's plans don't matter, or even that my plans matter much more than everyone else's taken together.

comment by Wei Dai (Wei_Dai) · 2013-06-14T11:57:18.881Z · LW(p) · GW(p)

I think I may be missing a relevant part of the previous discussion between you and Eliezer.

As far as I know, people have predicted every single big economic impact from technology well in advance, in the strong sense of making appropriate plans, making indicative utterances, etc.

By "people" do you mean at least one person, at least a few people, most people, most elites, or something else? What are we arguing about here, and what's the strategic relevance of the question?

(I was claiming a few years' warning in the piece you are responding to, which is pretty minimal).

Which piece?

There seems to be a big gap between the sort of problem on which progress is rapid and surprising, and the sort of problem on which progress would have an economic impact.

Would you consider Bitcoin to be a counterexample, at least potentially, if its economic impact keeps growing? (Although in general I think you're probably right, as it's hard to think of another similar example. There was some discussion about this here.)

Replies from: paulfchristiano
comment by paulfchristiano · 2013-06-14T12:34:20.650Z · LW(p) · GW(p)

By "people" do you mean at least one person, at least a few people, most people, most elites, or something else? What are we arguing about here, and what's the strategic relevance of the question?

I mean if you suggested "Technology X will have a huge economic impact in the near future" to a smart person who knew something about the area, they would think that was plausible and have reasonable estimates for the plausible magnitude of that impact.

The question is whether AI researchers and other elites who take them seriously will basically predict that human-level AI is coming, so that there will be good-faith attempts to mitigate impacts. I think this is very likely, and that improving society's capability to handle problems they recognize (e.g. to reason about them effectively) has a big impact on improving the probability that they will handle a transition to AI well. Eliezer tends to think this doesn't much matter, and that if lone heroes don't resolve the problems then there isn't much hope.

Which piece?

On my blog I made some remarks about AI, in particular saying that in the mainline people expect human-level AI before it happens. But I think the discussion makes sense without that.

Would you consider Bitcoin to be a counterexample, at least potentially, if its economic impact keeps growing? (Although in general I think you're probably right, as it's hard to think of another similar example. There was some discussion about this here.)

  • The economic impact of bitcoin to date is modest, and I expect it to increase continuously over a scale of years rather than jumping surprisingly.
  • I don't think people would have confidently predicted no digital currency prior to bitcoin, nor that they would predict that now. So if e.g. the emergence of digital currency was associated with big policy issues which warranted a pre-emptive response, and this was actually an important issue, I would expect people arguing for that policy response would get traction.
  • Bitcoin is probably still unusually extreme.

If Bitcoin precipitated a surprising shift in the economic organization of the world, then that would count.

I guess this part does depend a bit on context, since "surprising" depends on timescale. But Eliezer was referring to predictions of "a few years" of warning (which I think is on the very short end, and he thinks is on the very long end).

Replies from: Wei_Dai, jsteinhardt
comment by Wei Dai (Wei_Dai) · 2013-06-16T00:37:15.227Z · LW(p) · GW(p)

But Eliezer was referring to predictions of "a few years" of warning (which I think is on the very short end, and he thinks is on the very long end).

My own range would be a few years to a decade, but I guess unlike you I don't think that is enough warning time for the default scenario to turn out well. Does Eliezer think that would be enough time?

comment by jsteinhardt · 2013-06-15T04:15:40.759Z · LW(p) · GW(p)

For what it's worth, I think that (some fraction of) AI researchers are already cognizant of the potential impacts of AI. I think a much smaller number believe in FOOM scenarios, and might reject Hansonian projections as too detailed relative to the amount of uncertainty, but would basically agree that human-level AI changes the game.

comment by John_Maxwell (John_Maxwell_IV) · 2013-06-15T18:48:53.836Z · LW(p) · GW(p)

(I was claiming a few years' warning in the piece you are responding to, which is pretty minimal).

Could we get a link to this? Maybe EY could add it to the post?

comment by Scott Alexander (Yvain) · 2013-06-13T23:31:16.837Z · LW(p) · GW(p)

You mention Deep Blue beating Kasparov. This sounds like a good test case. I know that there were times when it was very controversial whether computers would ever be able to beat humans in chess - Wikipedia gives the example of a 1960s MIT professor who claimed that "no computer program could defeat even a 10-year-old child at chess". And it seems to me that by the time Deep Blue beat Kasparov, most people in the know agreed it would happen someday even if they didn't think Deep Blue itself would be the winner. A quick Google search doesn't pull up enough data to allow me to craft a full narrative of "people gradually became more and more willing to believe computers could beat grand masters with each incremental advance in chess technology", but it seems like the sort of thing that probably happened.

I think the economics example is a poor analogy, because it's a question about laws and not a question of gradual creeping recognition of a new technology. It also ignores one of the most important factors at play here - the recategorization of genres from "science fiction nerdery" to "something that will happen eventually" to "something that might happen in my lifetime and I should prepare for it."

Replies from: rhollerith_dot_com, Eliezer_Yudkowsky
comment by RHollerith (rhollerith_dot_com) · 2013-06-14T02:23:46.512Z · LW(p) · GW(p)

I know that there were times when it was very controversial whether computers would ever be able to beat humans in chess

Douglas Hofstadter being one on the wrong side: well, to be exact, he predicted (in his book GEB) that any computer that could play superhuman chess would necessarily have certain human qualities, e.g., if you ask it to play chess, it might reply, "I'm bored of chess; let's talk about poetry!" which IMHO is just as wrong as predicting that computers would never beat the best human players.

Replies from: gwern, Houshalter
comment by gwern · 2013-06-14T03:25:59.420Z · LW(p) · GW(p)

I thought you were exaggerating there, but I looked it up in my copy and he really did say that: pp. 684-686:

To conclude this Chapter, I would like to present ten "Questions and Speculations" about AI. I would not make so bold as to call them "Answers" - these are my personal opinions. They may well change in some ways, as I learn more and as AI develops more...

Question: Will there be chess programs that can beat anyone?

Speculation: No. There may be programs which can beat anyone at chess, but they will not be exclusively chess players. They will be programs of general intelligence, and they will be just as temperamental as people. "Do you want to play chess?" "No, I'm bored with chess. Let's talk about poetry." That may be the kind of dialogue you could have with a program that could beat everyone. That is because real intelligence inevitably depends on a total overview capacity - that is, a programmed ability to "jump out of the system", so to speak - at least roughly to the extent that we have that ability. Once that is present, you can't contain the program; it's gone beyond that certain critical point, and you just have to face the facts of what you've wrought.

I wonder if he did change his opinion on computer chess before Deep Blue and how long before? I found two relevant bits by him, but they don't really answer the question except they sound largely like excuse-making to my ears and like he was still fairly surprised it happened even as it was happening; from February 1996:

Several cognitive scientists said Deep Blue's victory in the opening game of the recent match told more about chess than about intelligence. "It was a watershed event, but it doesn't have to do with computers becoming intelligent," said Douglas Hofstadter, a professor of computer science at Indiana University and author of several books about human intelligence, including "Godel, Escher, Bach," which won a Pulitzer Prize in 1980, with its witty argument about the connecting threads of intellect in various fields of expression. "They're just overtaking humans in certain intellectual activities that we thought required intelligence. My God, I used to think chess required thought. Now, I realize it doesn't. It doesn't mean Kasparov isn't a deep thinker, just that you can bypass deep thinking in playing chess, the way you can fly without flapping your wings."...In "Godel, Escher, Bach" he held chess-playing to be a creative endeavor with the unrestrained threshold of excellence that pertains to arts like musical composition or literature. Now, he says, the computer gains of the last decade have persuaded him that chess is not as lofty an intellectual endeavor as music and writing; they require a soul. "I think chess is cerebral and intellectual," he said, "but it doesn't have deep emotional qualities to it, mortality, resignation, joy, all the things that music deals with. I'd put poetry and literature up there, too. If music or literature were created at an artistic level by a computer, I would feel this is a terrible thing."

And from January 2007:

Kelly said to me, "Doug, why did you not talk about the singularity and things like that in your book?" And I said, "Frankly, because it sort of disgusts me, but also because I just don't want to deal with science-fiction scenarios." I'm not talking about what's going to happen someday in the future; I'm not talking about decades or thousands of years in the future...And I don't have any real predictions as to when or if this is going to come about. I think there's some chance that some of what these people are saying is going to come about. When, I don't know. I wouldn't have predicted myself that the world chess champion would be defeated by a rather boring kind of chess program architecture, but it doesn't matter, it still did it. Nor would I have expected that a car would drive itself across the Nevada desert using laser rangefinders and television cameras and GPS and fancy computer programs. I wouldn't have guessed that that was going to happen when it happened. It's happening a little faster than I would have thought, and it does suggest that there may be some truth to the idea that Moore's Law [predicting a steady increase in computing power per unit cost] and all these other things are allowing us to develop things that have some things in common with our minds. I don't see anything yet that really resembles a human mind whatsoever. The car driving across the Nevada desert still strikes me as being closer to the thermostat or the toilet that regulates itself than to a human mind, and certainly the computer program that plays chess doesn't have any intelligence or anything like human thoughts.

Replies from: Nominull
comment by Nominull · 2013-06-16T09:09:48.327Z · LW(p) · GW(p)

I suspect the thermostat is closer to the human mind than his conception of the human mind is.

comment by Houshalter · 2013-06-17T06:39:28.548Z · LW(p) · GW(p)

To be fair, people expected a chess-playing computer to play chess in the same way a human does, thinking about the board abstractly and learning from experience and all that. We still haven't accomplished that. Chess programs work by inefficiently computing every possible line of play out to some depth, which seemed impossible before computers got exponentially faster. And even then, Deep Blue was a specialized supercomputer and had to use a bunch of little tricks and optimizations to get it just barely past human grandmaster level.
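To make the contrast concrete, here is a minimal sketch of the brute-force game-tree search family (negamax with alpha-beta pruning) that engines in the Deep Blue lineage are built around. The game here is a deliberately trivial toy and the function names are invented for illustration; a real chess engine adds a hand-tuned evaluation function, move ordering, opening books and so on, but the core idea is the same exhaustive lookahead:

    # Toy game: a pile of stones; players alternate removing 1-3 stones,
    # and whoever takes the last stone wins.  negamax() searches every
    # line of play to the end, pruning branches that cannot change the result.

    def moves(pile):
        """Legal moves: remove 1, 2, or 3 stones (never more than remain)."""
        return [n for n in (1, 2, 3) if n <= pile]

    def negamax(pile, alpha=-1, beta=1):
        """Return +1 if the player to move wins with perfect play, else -1."""
        if pile == 0:
            return -1          # previous player took the last stone; we lose
        best = -1
        for n in moves(pile):
            score = -negamax(pile - n, -beta, -alpha)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:  # prune: the opponent will never allow this line
                break
        return best

    def best_move(pile):
        """Pick the move with the best score for the player to move."""
        return max(moves(pile), key=lambda n: -negamax(pile - n))

    for pile in range(1, 9):
        print(pile, "win" if negamax(pile) == 1 else "loss", best_move(pile))

A pile that is a multiple of 4 is a loss for the player to move, which the search rediscovers purely by brute force; Deep Blue did essentially this over chess positions, with a heuristic evaluation function standing in for searching to the end of the game.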

Replies from: BlueSun
comment by BlueSun · 2013-06-17T20:03:07.228Z · LW(p) · GW(p)

I was going to point that out too as I think it demonstrates an important lesson. They were still wrong.

Almost all of their thought processes were correct, but they still got to the wrong result because they looked at solutions too narrowly. It's quite possible that many of the objections to AI, rejuvenation, and cryonics are correct, but if there's another path they're not considering, we could still end up with the same result. Just like a chess program doesn't think like a human but can still beat one, and an airplane doesn't fly like a bird but can still fly.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T23:39:00.830Z · LW(p) · GW(p)

Yes, people now believe that computers can beat people at chess.

Replies from: CarlShulman, Thomas
comment by CarlShulman · 2013-06-13T23:55:45.877Z · LW(p) · GW(p)

I.e., they didn't update to expecting HAL immediately after, and they were right for solid reasons. But I think that the polls, and more so polls of experts, do respond to advancements in technology, e.g. on self-driving cars or solar power.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-14T00:05:42.902Z · LW(p) · GW(p)

Do we have any evidence that they updated to expecting HAL in the long run? Normatively, I agree that ideal forecasters shouldn't be doing their updating on press releases, but people sometimes argue that press release W will cause people to update to X when they didn't realize X earlier.

comment by Thomas · 2013-06-14T07:16:14.516Z · LW(p) · GW(p)

Yes, people now believe that computers can beat people at chess.

It was on our national television a few months ago. Kasparov was here; he opened some international chess center for young players in Maribor. He gave an interview and, among other things, told us how fishy the Deep Blue victory was and how it wasn't real, in fact.

At least half of the population believed him.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-14T14:09:09.153Z · LW(p) · GW(p)

I notice I am confused (he said politely). Kasparov is not stupid, and modern chess programs on a home computer (e.g. Deep Rybka 3.0) are overwhelmingly more powerful than Deep Blue; there should be no reasonable way for anyone to delude themselves that computer chess programs are not crushingly superior to unassisted humans.

Replies from: Vaniver, fezziwig, army1987, Qiaochu_Yuan, John_Maxwell_IV
comment by Vaniver · 2013-06-14T15:49:35.743Z · LW(p) · GW(p)

I seem to recall that there was some impoliteness surrounding the Deep Blue game specifically - basically, it knew every move Kasparov had ever played, but Kasparov was not given any record of Deep Blue's play to learn how it played (like he would have had against any other human chess player who moved up the chess ranks); that's the charitable interpretation of what Kasparov meant by the victory being fishy. (This hypothetical Kasparov would want to play many matches against Deep Rybka 3.0 before the official matches that determine which of them is better - but would probably anticipate losing at the end of his training anyway.)

comment by fezziwig · 2013-06-14T15:44:02.303Z · LW(p) · GW(p)

Nowadays, sure, but Deep Blue beat Kasparov in 1997. Kasparov has always claimed that IBM cheated during the rematch, supplementing Deep Blue with human insight. As far as I know there's no evidence that he's right, but he has suspected it very consistently for the last 15 years.

comment by A1987dM (army1987) · 2013-06-16T17:29:58.085Z · LW(p) · GW(p)

Well, for that matter he also believes this stuff.

comment by Qiaochu_Yuan · 2013-06-15T19:12:56.192Z · LW(p) · GW(p)

Request that Thomas be treated as a troll. I'm not sure if he's actually a troll, but he's close enough.

Edit: This isn't primarily based on the above comment, it's primarily based on this comment.

Replies from: Kawoomba, Eliezer_Yudkowsky, army1987
comment by Kawoomba · 2013-06-15T19:44:33.301Z · LW(p) · GW(p)

Actually, starting at and around the 30-minute mark in this video -- an interview with Kasparov done in Maribor, a couple of months ago, no less -- he whines about the whole human-versus-machine matchup a lot, suggests new winning conditions (the human just has to win one game of a series to show superiority, since the "endurance" aspect is the machine "cheating") which would redefine the result, etcetera.

Honi soit qui mal y pense.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-15T19:48:09.511Z · LW(p) · GW(p)

Honi soit qui mal y pense.

I looked this up but I don't understand what it was intended to mean in this context.

Replies from: Kawoomba
comment by Kawoomba · 2013-06-15T19:59:53.254Z · LW(p) · GW(p)

"Shame on him, who suspects illicit motivation" is given as one of the many possible translations. Don't take the "shame" part too literally, but there is some irony in pointing out someone as a troll when the one comment you use for doing so turns out to be true, and interesting to boot (Kasparov engaging in bad-loser-let's-warp-the-facts behavior).

I'm not taking a stance on the issue of whether Thomas is or isn't a troll; you were probably mostly looking for a good-seeming place to share your opinion about him.

(Like spotting a cereal thief in a supermarket, day after day. Then when you finally hold him and call the authorities, it turns out that single time he didn't steal.)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-15T21:00:24.019Z · LW(p) · GW(p)

Hm. A brief glance at Thomas's profile makes it hard to be sure. I will be on the lookout.

comment by A1987dM (army1987) · 2013-06-16T17:31:27.068Z · LW(p) · GW(p)

So why did you write that here rather than there?

Ah, right, the karma toll.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-06-16T18:57:18.185Z · LW(p) · GW(p)

I thought it would be more likely to be seen by Eliezer if I responded to Eliezer.

comment by John_Maxwell (John_Maxwell_IV) · 2013-06-15T18:45:06.157Z · LW(p) · GW(p)

Hm?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-15T18:58:24.067Z · LW(p) · GW(p)

Those were matches with Rybka handicapped (an odds match is a handicapped match) and Deep Rybka 3.0 is a substantial improvement over Rybka. The referenced "Zappa" which played Rybka evenly is another computer program. Read the reference carefully.

comment by Locaha · 2013-06-14T15:15:55.834Z · LW(p) · GW(p)

Example 1: "After a 2-year-old mouse is rejuvenated to allow 3 years of additional life, society will realize that human rejuvenation is possible, turn against deathism as the prospect of lifespan / healthspan extension starts to seem real, and demand a huge Manhattan Project to get it done."

A quick and dirty Google search reveals:

Cost of the Manhattan Project in 2012 dollars: $30 billion

Pharma R&D budget in 2012: $70 billion

http://www.fiercebiotech.com/special-reports/biopharmas-top-rd-spenders-2012

http://nuclearsecrecy.com/blog/2013/05/17/the-price-of-the-manhattan-project/

Replies from: CarlShulman
comment by CarlShulman · 2013-06-15T22:42:37.883Z · LW(p) · GW(p)

I think this is a good point, but I'd mention that real U.S. GDP is ~7 times higher now than then, aging as such isn't the focus of most pharma R&D (although if pharma companies thought they could actually make working drugs for it they would), and scientists' wages are higher now due to Baumol's cost disease.

Replies from: Locaha
comment by Locaha · 2013-06-16T08:47:11.408Z · LW(p) · GW(p)

The point was, pharma is spending enormous sums of money trying to fight individual diseases (and failing like 90% of the time), and people are proposing a Manhattan project to do something far more ambitious. But a Manhattan project is of the same order of magnitude as an individual drug. IOW, a Manhattan project won't be enough.

In any case, 3 years of additional life to a mouse won't be enough, because people can always claim that the intervention is not proportional to life span. What will do the trick is an immortalized mouse, as young at 15 years as it was at 0.5.

comment by Qiaochu_Yuan · 2013-06-14T00:06:59.519Z · LW(p) · GW(p)

My version of Example 2 sounds more like "at some point, Watson might badly misdiagnose a human patient, or a bunch of self-driving cars might cause a terrible accident, or more inscrutable algorithms will do more inscrutable things, and this sort of thing might cause public opinion to turn against AI entirely in the same way that it turned against nuclear power."

Replies from: CarlShulman
comment by CarlShulman · 2013-06-14T03:00:25.362Z · LW(p) · GW(p)

I think that people will react more negatively to harms than they react positively to benefits, but I would still expect the impacts of broadly infrahuman AI to be strongly skewed towards the positive. Accidents might lead to more investment in safety, but a "turn against AI entirely" situation seems unlikely to me.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-14T16:12:40.255Z · LW(p) · GW(p)

You could say the same about nuclear power. It's conceivable that with enough noise about "AI is costing jobs" the broad positive impacts could be viewed as ritually contaminated a la nuclear power. Hm, now I wonder if I should actually publish my "Why AI isn't the cause of modern unemployment" writeup.

Replies from: Yosarian2, Halfwit
comment by Yosarian2 · 2013-06-17T16:10:19.920Z · LW(p) · GW(p)

I don't know about that; I think that a lot of the people who think that AI is "costing jobs" view that as a positive thing.

comment by Halfwit · 2013-06-14T16:46:30.013Z · LW(p) · GW(p)

I don't think that's a good analogy. The Cold War had two generations of people living under the very real prospect of nuclear apocalypse. Grant Morrison wrote once about how, at like age five, he was concretely visualizing nuclear annihilation regularly. By his early twenties, pretty much everyone he knew figured civilization wasn't going to make it out of the Cold War--that's a lot of trauma, enough to power a massive ugh field. Vague complaints of "AI is costing jobs" just can't compare to the bone-deep terror that was pretty much universal during the Cold War.

comment by [deleted] · 2013-06-14T02:59:18.461Z · LW(p) · GW(p)

In general and across all instances I can think of so far, I do not agree with the part of your futurological forecast in which you reason, "After event W happens, everyone will see the truth of proposition X, leading them to endorse Y and agree with me about policy decision Z."

Sir Karl Popper came to the same conclusion in his 1963 book Conjectures and Refutations. So did Harold Walsby in his 1947 book The Domain of Ideologies. You're in good company.

comment by diegocaleiro · 2013-06-14T03:40:24.513Z · LW(p) · GW(p)

In the 2009 Edge question, 6 people spoke of immortality (de Grey not included), and 17 people spoke of superintelligence/humans 2.0.

This seems like evidence for Aubrey's point of view.

Of all the things that 151 top scientists could think they'd live to see, the fact that more than 10% converged on that stuff without previous communication is perplexing for anyone who was a transhumanist in 2005.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-06-15T04:37:02.250Z · LW(p) · GW(p)

without previous communication

No, these people all have long-term relationships with Brockman/Edge, which even holds parties bringing them together.

Replies from: gwern
comment by gwern · 2013-06-15T19:48:35.140Z · LW(p) · GW(p)

Indeed, when I was looking at Edge's tax filings, it seemed to me that the entire point of Edge was basically funding their parties.

comment by Daniel_Burfoot · 2013-06-14T01:32:33.361Z · LW(p) · GW(p)

I agree that almost no actual individual will change his or her mind. But humanity as a whole can change its mind, as young impressionable scientists look around, take in the available evidence, and then commit themselves to a position and career trajectory based on that evidence.

Replies from: jsteinhardt
comment by jsteinhardt · 2013-06-15T04:18:03.519Z · LW(p) · GW(p)

Note that this would be a pretty slow change and is likely not fast enough if we only get, say, 5 years of prior warning.

comment by Yosarian2 · 2013-06-17T16:42:20.233Z · LW(p) · GW(p)

I do think there is a lot of truth to that. It reminds me of the people who said in the 1990s, "Well, as soon as the Arctic ice cap starts to melt, then the climate deniers will admit that climate change is real", but of course that hasn't happened either.

I do wonder, though, if that's equally true for all of those fields. For example, in terms of anti-aging technology, it seems to me that the whole status quo is driven by a very deep and fundamental sense that aging is basically unchangeable, and that that's the only thing that makes it acceptable to people, and I would suspect that anything that starts to change or threaten that perception even a little bit could cause radical alterations to how people act.

AI, though, is a field where it's especially hard for lay people to get an idea of either how fast things are advancing or how much farther they have to go before a human-level GAI becomes possible; everyone is used to a constant stream of "cool new things" coming out of the tech fields, but it's much harder to get an idea of what the larger picture is. I also think that technology news generally isn't covered very well in the media, which makes it even harder. Fundamentally, I think that most people do understand that computers are a transformative technology; it's just that they think that the computer revolution has already mostly happened.

comment by Douglas_Knight · 2013-06-15T04:44:50.380Z · LW(p) · GW(p)

"When the author of the original data admits that he fabricated it, people will stop believing that vaccines cause autism."

Replies from: satt
comment by satt · 2013-06-15T15:03:06.496Z · LW(p) · GW(p)

Did Wakefield ever admit his MMR & autism paper was a fraud? I know he's acknowledged taking blood samples from children without complying with ethical guidelines, and failing to disclose a conflict of interest, but I don't recall him saying the paper's results were BS.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-06-15T15:41:44.410Z · LW(p) · GW(p)

No, I was completely mistaken about this case. For some reason I thought that the 12 children didn't even exist.

Replies from: satt
comment by satt · 2013-06-15T16:11:52.819Z · LW(p) · GW(p)

I should add that although you were mistaken about the details, I basically agree with your example. Plenty of people still reckon vaccines cause autism.

comment by Luke_A_Somers · 2013-06-14T14:18:18.952Z · LW(p) · GW(p)

Funny that I just hit this with my comment from yesterday:

http://lesswrong.com/lw/hoz/do_earths_with_slower_economic_growth_have_a/95hm

Idea: robots can be visibly unfriendly without being anywhere near a FOOM. This will help promote awareness.

I think this is different from your examples.

A mouse living to the ripe old age of 5? Well, everyone has precedent for animals living that long, and we also have precedent for medicine doing all sorts of amazing things for the health of mice that seem never to translate into treatments for people.

Economic meltdowns? Economics is so political that you expect the populace to update on evidence? You might as well expect them to update on evidence with respect to religion.

Meanwhile, we are so very primed to suspect the worst of computers.

comment by Sniffnoy · 2013-06-14T13:56:00.008Z · LW(p) · GW(p)

Huh, my first thought on seeing the title was that this would be about Richard Stallman.

comment by Zaine · 2013-06-14T05:34:24.713Z · LW(p) · GW(p)

I wonder how a prolonged event Y, something with enough marketability to capture the public's eye for some time, might change opinions on the truth of proposition X. Something along the lines of the calculus, Gutenberg's printing press, the advent of manoeuvrable cannons, flintlocks, quantum mechanics, electric stoves (harnessed electricity), the concept of a national debt, etcetera.

I'd be interested in what effects, if any, the American National Security Agency scandal, or a worldwide marketing campaign by Finland advertising its healthcare system, will or would have to this end.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-14T16:21:22.265Z · LW(p) · GW(p)

Prolonged events with no clearly defined moment of hitting the newspapers all at once seem to me to have smaller effects on public opinion. Contrast the long, gradual, steady climb of chess-playing power going on for decades earlier, vs. the Deep Blue moment.

comment by Carinthium · 2013-06-13T22:25:03.000Z · LW(p) · GW(p)

Presumably the best you can do solution-wise is to try and move policy options through a series of "middle stages" towards either optimal results, or more likely the best result you can realistically get?

EDIT: Also - how DID the economists figure it out anyway? I would have thought that although circumstances can increase or reduce it, inflationary effects would be inevitable if you increased the money supply that much.

Replies from: CronoDAS, Eliezer_Yudkowsky
comment by CronoDAS · 2013-06-13T23:55:34.107Z · LW(p) · GW(p)

EDIT: Also - how DID the economists figure it out anyway? I would have thought that although circumstances can increase or reduce it, inflationary effects would be inevitable if you increased the money supply that much.

When interest rates are virtually zero, cash and short-term debt become interchangeable. There's no incentive to lend your cash on a short-term basis, so people (and corporations) start holding cash as a store of value instead of lending it. (After all, you can spend cash - or, more accurately, checking account balances - directly, but you can't spend a short-term bond.) Prices don't go up if all the new money just ends up under Apple Computer's proverbial mattress instead of in the hands of someone who is going to spend it.
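One rough way to summarize this is the textbook equation of exchange (a standard accounting identity, not anything specific to the Fed's own models):

    MV = PY    (M: money stock, V: velocity of money, P: price level, Y: real output)

If M rises but V falls roughly in proportion -- because the new money sits idle as a store of value instead of being spent or lent on -- then nominal spending PY, and hence the price level, need not rise.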

See also.

Replies from: PeterDonis
comment by PeterDonis · 2014-03-02T02:18:30.239Z · LW(p) · GW(p)

Sorry for the late comment but I'm just running across this thread.

Prices don't go up if all the new money just ends up under Apple Computer's proverbial mattress instead of in the hands of someone who is going to spend it.

But as far as I know, mainstream economists, like those at the Fed, did not predict that this would happen; they thought quantitative easing would start banks (and others with large cash balances) lending again. If banks had started lending again, by your analysis (which I agree with), we would have seen significant inflation because of the growth in the money supply.

So it looks to me like the only reason the Fed got the inflation prediction right was that they got the lending prediction wrong. I don't think that counts as an instance of "we predicted critical event W".

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T22:30:07.948Z · LW(p) · GW(p)

Demand for extremely safe assets increased (people wanted to hold more money), the same reason Treasury bonds briefly went to negative returns; demand for loans decreased and this caused destruction of money via the logic of fractional reserve banking; the shadow banking sector contracted so financial entities had to use money instead of collateral; etc.

Replies from: PeterDonis
comment by PeterDonis · 2014-03-02T02:21:01.517Z · LW(p) · GW(p)

Sorry for the late comment but I'm just running across this thread.

demand for loans decreased and this caused destruction of money via the logic of fractional reserve banking

This is an interesting comment which I haven't seen talked about much on econblogs (or other sources of information about economics, for that matter). I understand the logic: fractional reserve banking is basically using loans as a money multiplier, so fewer loans means less multiplication, hence effectively less money supply. But it makes me wonder: what happens when the loan demand goes up again? Do you then have to reverse quantitative easing and effectively retire money to keep things in balance? Do any mainstream economists talk about that?

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2019-08-17T04:55:55.774Z · LW(p) · GW(p)

The two examples here seem to not have alarming/obvious enough Ws. It seems like you are arguing against a straw-man who makes bad predictions, based on something like a typical mind fallacy.




comment by CronoDAS · 2013-06-15T03:26:11.893Z · LW(p) · GW(p)

Making a mouse live for 5 years isn't going to get anyone's attention. When they can make a housecat live to be 60, then we'll talk.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-06-15T13:18:25.553Z · LW(p) · GW(p)

So we won't be talking for at least thirty years? That's quite a while to wait.

Replies from: CronoDAS
comment by CronoDAS · 2013-06-15T22:51:33.849Z · LW(p) · GW(p)

I know. It's a real problem with this kind of research. For example, it took a long time before we got the results of the caloric restriction study on primates.

Replies from: gwern
comment by gwern · 2013-06-16T01:16:38.059Z · LW(p) · GW(p)

For example, it took a long time before we got the results of the caloric restriction study on primates.

As far as I knew, the recent studies weren't even the final results; they were just interim reports about the primates which had died up to that point, hobbled by the small sample size that implies.

comment by Shmi (shminux) · 2013-06-13T22:07:01.313Z · LW(p) · GW(p)

What would be examples of a critical event in AI?

Replies from: AlanCrowe, lukeprog, Eliezer_Yudkowsky
comment by AlanCrowe · 2013-06-15T22:02:15.065Z · LW(p) · GW(p)

I'm not connected to the Singularity Institute or anything, so this is my idiosyncratic view.

Think about theorem provers such as Isabelle or ACL2. They are typically structured a bit like an expert system with a rule base and an inference engine. The axioms play the role of the rule base and the theorem prover plays the role of the inference engine. While it is easy to change the axioms, this implies a degree of interpretive overhead when it comes to trying to prove a theorem.

One way to reduce the interpretative overhead is to use a partial evaluator to specialize the prover to the particular set of axioms.

Indeed, if one has a self-applicable partial evaluator one could use the second Futamura projection and, specializing the partial evaluator to the theorem prover, produce a theorem prover compiler. Axioms go in, an efficient theorem prover for those axioms comes out.
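As a minimal hand-written sketch of what that specialization buys, here the effect of a partial evaluator is simulated by hand; the toy rule set and function names are invented for illustration, and real provers are vastly more involved:

    # A tiny forward-chaining "prover" that interprets a rule base at run time.
    RULES = [
        ({"p"}, "q"),       # from p, conclude q
        ({"q"}, "r"),       # from q, conclude r
        ({"p", "r"}, "s"),  # from p and r, conclude s
    ]

    def derive_generic(facts, rules):
        """Forward-chain over an arbitrary rule base; the rules are
        re-inspected on every pass (the interpretive overhead)."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    def derive_specialized(facts):
        """What specializing derive_generic to RULES could yield: the rule
        base has been folded into straight-line code, with nothing left
        to interpret at run time."""
        facts = set(facts)
        if "p" in facts:
            facts.update({"q", "r", "s"})  # p gives q, q gives r, p and r give s
        elif "q" in facts:
            facts.add("r")
        return facts

    assert derive_generic({"p"}, RULES) == derive_specialized({"p"}) == {"p", "q", "r", "s"}

The second Futamura projection is the same trick one level up: specialize the specializer to derive_generic itself, and what comes out is a compiler that turns any rule set into code shaped like derive_specialized.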

Self-applicable partial evaluators are bleeding-edge software technology, and current ambitions are limited to stripping out interpretive overhead. They only give linear speed-ups. In principle a partial evaluator could recognise algorithmic inefficiencies and, rewriting the code more aggressively, produce super-linear speed-ups.

This is my example of a critical event in AI: using a self-applicable partial evaluator and the second Futamura projection to obtain a theorem prover compiler with a super-linear speed-up compared to proving theorems in interpretive mode. This would convince me that there was progress on self-improving AI and that the clock had started counting down towards an intelligence explosion that changes everything.

How much time would be on the clock? A year? A decade? A century? Guessing wildly, I'd put my critical event at the halfway point. AI research started in 1960, so if the critical event happens in 2020 that puts the singularity at 2080.

Notice how I am both more optimistic and more pessimistic about the prospects for AI than most commentators.

I'm more pessimistic because I don't see the current crop of wonderful, hand-crafted AI achievements, such as playing chess and driving cars, as lying on the path towards recursively improving AI. These are the Fabergé eggs of AI. They will not hatch into chickens that lay even more fabulous eggs...

I'm more optimistic because I'm willing to accept a technical achievement, internal to AI research, as a critical event. It could show that things are really moving, and that we can start to expect earth-shattering consequences, even before we've seen real-world impacts from the internal technical developments.

Replies from: LEmma
comment by LEmma · 2013-06-20T13:30:30.016Z · LW(p) · GW(p)

Vampire uses specialisation, according to Wikipedia:

A number of efficient indexing techniques are used to implement all major operations on sets of terms and clauses. Run-time algorithm specialisation is used to accelerate forward matching.

comment by lukeprog · 2013-06-14T01:36:09.518Z · LW(p) · GW(p)

BTW, the term for this is AGI Sputnik moment.

Replies from: shminux, Halfwit
comment by Shmi (shminux) · 2013-06-14T01:46:21.196Z · LW(p) · GW(p)

Neat. I guess Eliezer's point is that there will not be one until it's too late.

comment by Halfwit · 2013-06-14T06:26:44.629Z · LW(p) · GW(p)

By "the term" do you mean something Ben Ben Goertzel said once on SL4, or is this really a thing?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-06-14T06:48:16.867Z · LW(p) · GW(p)

I don't know what you mean by "really a thing", but it has been used more than once, including in some academic papers.

Replies from: Halfwit
comment by Halfwit · 2013-06-14T06:54:47.570Z · LW(p) · GW(p)

I just found it amusing that lukeprog was designating it a technical term, as I believe I first read the phrase in the SL4 archives, and it seemed a casual thing.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T22:23:13.931Z · LW(p) · GW(p)

I don't know, actually. I'm not the one making these forecasts. It's usually described as some broad-based increase of AI competence but not cashed out any further than that. I'll remark that if there isn't a sharp sudden bit of headline news, chances of a significant public reaction drop even further.

Replies from: shminux
comment by Shmi (shminux) · 2013-06-13T22:49:51.467Z · LW(p) · GW(p)

Sorry, what I meant is: what would you consider an event that ought to be taken seriously but won't be? Eh, that's not right; presumably that's long past, like Deep Blue or maybe the first quine.

What would you consider an event that an AI researcher not sold on AI x-risks ought to take seriously but likely will not? A version of Watson which can write web apps from vague human instructions? A perfect simulation of C. elegans? A human mind upload?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-13T22:56:46.553Z · LW(p) · GW(p)

Even I think they'd take a mind upload seriously - that might really produce a huge public update though probably not in any sane direction - though I don't expect that to happen before a neuromorphic UFAI is produced from the same knowledge base. They normatively ought to take a spider upload seriously. Something passing a restricted version of a Turing test might make a big public brouhaha, but even with a restricted test I'm not sure I expect any genuinely significant version of that before the end of the world (unrestricted Turing test passing should be sufficient unto FOOM). I'm not sure what you 'ought' to take seriously if you didn't take computers seriously in the first place. Aubrey was very specific in the prediction I disagree with; people who forecast watershed opinion-changing events for AI are less specific, at least as far as I can recall.

Replies from: novalis, DanielVarga
comment by novalis · 2013-06-14T00:51:01.698Z · LW(p) · GW(p)

unrestricted Turing test passing should be sufficient unto FOOM

I don't think this is quite right. Most humans can pass a Turing test, even though they can't understand their own source code. FOOM requires that an AI have the ability to self-modify with enough stability to continue to (a) desire to self-modify, and (b) be able to do so. Most uploaded humans would have a very difficult time with this; just look at how people resist even modifying their beliefs, let alone their thinking machinery.

Replies from: Eliezer_Yudkowsky, ShardPhoenix
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-14T00:59:01.737Z · LW(p) · GW(p)

The problem is that an AI which passes the unrestricted Turing test must be strictly superior to a human; it would still have all the expected AI abilities like high-speed calculation and so on. A human who was augmented to the point of passing the Pocket Calculator Equivalence Test would be superhumanly fast and accurate at arithmetic on top of still having all the classical human abilities, they wouldn't be just as smart as a pocket calculator.

Replies from: novalis, Locaha
comment by novalis · 2013-06-14T01:12:31.150Z · LW(p) · GW(p)

High-speed calculation plus human-level intelligence is not sufficient for recursive self-improvement. An AI needs to be able to understand its own source code, and passing the Turing test (plus high-speed calculation) doesn't guarantee that ability.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-06-14T01:54:34.570Z · LW(p) · GW(p)

If I am confident that a human is capable of building human-level intelligence, my confidence that a human-level intelligence cannot build a slightly-higher-than-human intelligence, given sufficient trials, becomes pretty low. Ditto my confidence that a slightly-higher-than-human intelligence cannot build a slightly-smarter-than-that intelligence, and so forth.

But, sure, it's far from zero. As you say, it's not a guarantee.

comment by Locaha · 2013-06-14T08:59:12.270Z · LW(p) · GW(p)

A human who was augmented to the point of passing the Pocket Calculator Equivalence Test

I thought a human with a pocket calculator already is this augmented human. Unless you want to implant the calculator in your skull and control it with your thoughts, which will also soon be possible.

comment by ShardPhoenix · 2013-06-14T10:38:33.186Z · LW(p) · GW(p)

The biggest reason humans can't do this is that we don't implement .copy(). This is not a problem for AIs or uploads, even if they are otherwise only of human intelligence.

Replies from: novalis
comment by novalis · 2013-06-14T20:24:08.911Z · LW(p) · GW(p)

Sure, with a large enough number of copies of you to practice on, you would learn to do brain surgery well enough to improve the functioning of your brain. But it could easily take a few thousand years. The biggest problem with self-improving AI is understanding how the mind works in the first place.

comment by DanielVarga · 2013-06-14T09:37:24.277Z · LW(p) · GW(p)

unrestricted Turing test passing should be sufficient unto FOOM

I tend to agree, but I have to note the surface similarity with Hofstadter's disproved "No, I'm bored with chess. Let's talk about poetry." prediction.

Replies from: gjm
comment by gjm · 2013-06-14T13:48:08.502Z · LW(p) · GW(p)

Consider first of all a machine that can pass an "AI-focused Turing test", by which I mean convincing one of the AI team that built it that it's a human being with a comparable level of AI expertise.

I suggest that such a machine is almost certainly "sufficient unto FOOM", if the judge in the test is allowed to go into enough detail.

An ordinary Turing test doesn't require the machine to imitate an AI expert but merely a human being. So for a "merely" Turing-passing AI not to be "sufficient unto FOOM" (at least as I understand that term) what's needed is that there should be a big gap between making a machine that successfully imitates an ordinary human being, and making a machine that successfully imitates an AI expert.

It seems unlikely that there's a very big gap architecturally between human AI experts and ordinary humans. So, to get a machine that passes an ordinary Turing test but isn't close to being FOOM-ready, it seems like what's needed is a way of passing an ordinary Turing test that works very differently from actual human thinking, and doesn't "scale up" to harder problems like the ordinary human architecture apparently does.

Given that some machines have been quite successful in stupidly-crippled pseudo-Turing tests like the Loebner contest, I suppose this can't be entirely ruled out, but it feels much harder to believe in than a "narrow" chess-playing AI was, even at the time of Hofstadter's prediction.

Still, I think there might be room for the following definition: the strong Turing test consists of having your machine grilled by several judges, with different domains of expertise, each of whom gets to specify in broad terms (ahead of time) what sort of human being the machine is supposed to imitate. So then the machine might need to be able to convince competent physicists that it's a physicist, competent literary critics that it's a novelist, civil rights activists that it's a black person who's suffered from racial discrimination, etc.

comment by Unknowns · 2013-06-16T17:57:57.102Z · LW(p) · GW(p)

Exactly. This is part of the reason I will win the bet, i.e. it is the reason the first superintelligent AI will be programmed without attention to Friendliness.

Replies from: wedrifid
comment by wedrifid · 2013-06-16T19:30:40.891Z · LW(p) · GW(p)

Exactly. This is part of the reason I will win the bet, i.e. it is the reason the first superintelligent AI will be programmed without attention to Friendliness.

Unfortunately being right isn't sufficient for winning a bet. You also have to not have been torn apart and used as base minerals for achieving whatever goals a uFAI happens to have.

Replies from: Unknowns
comment by Unknowns · 2013-06-17T13:53:04.153Z · LW(p) · GW(p)

True, that's why it's only a part of the reason.

Replies from: wedrifid
comment by wedrifid · 2013-06-17T15:44:46.245Z · LW(p) · GW(p)

True, that's why it's only a part of the reason.

This implies that you have some other strategy for winning a bet despite strong UFAI existing. At the very least this requires that some entity with which you are able to make a bet now will still be around to execute behaviours that conditionally reward you after the event has occurred. This seems difficult to arrange.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2013-07-11T19:59:17.873Z · LW(p) · GW(p)

Actually, all he needs is someone who believes the first AI will be friendly and who thinks they'll have a use for money after an FAI exists. Then they could make an apocalypse bet where Unknowns gets paid now and then pays their opponent back if and when the first AI is built and it turns out to be friendly.

comment by John_Maxwell (John_Maxwell_IV) · 2013-06-15T18:35:52.777Z · LW(p) · GW(p)

if you look at what happens when Congresscritters question Bernanke, you will find that they are all terribly, terribly concerned about inflation

I imagine that Congress is full of busy people who don't have time to follow blogs that cover every topic they debate. Did they get any testimony from expert economists when questioning Bernanke?

comment by elharo · 2013-06-14T11:04:50.767Z · LW(p) · GW(p)

"After event W happens, everyone will see the truth of proposition X, leading them to endorse Y and agree with me about policy decision Z."

Isn't this just a standard application of Bayesianism? I.e. after event W happens, people will consider proposition X to be somewhat more likely, thereby making them more favorable to Y and Z. The stronger evidence event W is, the more people will update and the further they will update. But no one piece of evidence is likely to totally convince everyone immediately, nor should it.
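
As a toy version of that update (made-up numbers of mine, just to show the shape of the calculation):

```python
# Bayes' rule on a single piece of evidence W for proposition X.
prior = 0.05            # assumed prior belief in X
p_w_given_x = 0.60      # assumed: W is fairly likely if X is true
p_w_given_not_x = 0.10  # assumed: W is unlikely if X is false

posterior = (p_w_given_x * prior) / (
    p_w_given_x * prior + p_w_given_not_x * (1 - prior))
print(round(posterior, 2))  # 0.24: a real shift, but far from certainty
```

Stronger evidence (a larger likelihood ratio) moves the posterior further, which is exactly the pattern in the examples that follow.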

For instance, if "a 2-year-old mouse is rejuvenated to allow 3 years of additional life", that's some evidence for lifespan extension and I would update accordingly, but not all the way to concluding that indefinite life extension is obviously possible and worth reshaping the economy around. If a two-year-old mouse trained in a maze is rejuvenated and still remembers the maze it was trained in, that's much stronger evidence for useful lifespan extension, and I start to think maybe we should fund a Manhattan Project for human rejuvenation. If a 20-year-old signing chimpanzee is rejuvenated to allow 30 years of additional life, and still remembers how to sign, and perhaps can tell us facts from its previous life, that's really strong evidence. (Though that might actually require language skills beyond those of a non-rejuvenated signing chimpanzee.)

On the other hand, if the two-year-old mouse is rejuvenated but seems to have forgotten the maze, or the chimpanzee is rejuvenated but has not only forgotten how to sign but is demonstrably impaired relative to other chimpanzees, that's actually evidence against useful life extension.

The wheel of progress turns slowly, but it does turn. Expecting no one to update at all on new evidence is just as wrong as expecting everyone to update from 0 to 1 overnight. After event W happens, most scientists who know what W means, young and old, will increase their estimate of the truth of proposition X; and after events W2 and W3 happen, they'll update still more. I've seen this happen repeatedly over my lifetime. For instance, in the last 20 years pretty much all astrophysicists have come to believe in a positive cosmological constant, dark matter, and dark energy, as implausible as those ideas sounded 40 years ago, because the evidence has piled up one observation at a time. Certainly some astrophysicists have retired and died and been replaced in those 20 years, but the scientists who remain have absolutely changed their minds. The rest of society will follow along as soon as it becomes important for them to do so. In astrophysics that day of relevance may never come, but in more down-to-earth subjects it may take years, though not a lifetime's worth of years.

comment by Jonathan_Graehl · 2013-06-13T22:13:47.030Z · LW(p) · GW(p)

I'd missed the mouse "rejuvenation for 3 more years of life" result (did you mean cryo freeze -> revive, or something else?). Could you supply a cite?

Replies from: ChristianKl, Jonathan_Graehl
comment by ChristianKl · 2013-06-13T22:24:54.192Z · LW(p) · GW(p)

Aubrey de Grey thinks it's worthwhile to fund a big prize for the first group that achieves that result. That's one of the main strategies he advocates for convincing everyone to take aging seriously.

comment by Jonathan_Graehl · 2013-06-13T22:15:08.240Z · LW(p) · GW(p)

Oh. IRCers point out that I misread, and this is only a hypothetical :( Too bad, I was about to perform a massive update :)

comment by Mimosa · 2013-06-14T02:37:44.411Z · LW(p) · GW(p)

This is the very definition of the status quo bias.