Engelbart: Insufficiently Recursive

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-11-26T08:31:09.000Z · LW · GW · Legacy · 22 comments

Followup to: Cascades, Cycles, Insight, Recursion, Magic
Reply to: Engelbart As Ubertool?

When Robin originally suggested that Douglas Engelbart, best known as the inventor of the computer mouse, would have been a good candidate for taking over the world via compound interest on tools that make tools, my initial reaction was "What on Earth?  With a mouse?"

On reading the initial portions of Engelbart's "Augmenting Human Intellect: A Conceptual Framework", it became a lot clearer where Robin was coming from.

Sometimes it's hard to see through the eyes of the past.  Engelbart was a computer pioneer, and in the days when all these things were just getting started, he had a vision of using computers to systematically augment human intelligence.  That was what he thought computers were for.  That was the ideology lurking behind the mouse.  Something that makes its users smarter - now that sounds a bit more plausible as an UberTool.

Looking back at Engelbart's plans with benefit of hindsight, I see two major factors that stand out:

  1. Engelbart committed the Classic Mistake of AI: underestimating how much cognitive work gets done by hidden algorithms running beneath the surface of introspection, and overestimating what you can do by fiddling with the visible control levers.
  2. Engelbart anchored on the way that someone as intelligent as Engelbart would use computers, but there was only one of him - and due to point 1 above, he couldn't use computers to make other people as smart as him.

To start with point 2:  They had more reverence for computers back in the old days.  Engelbart visualized a system carefully designed to flow with every step of a human's work and thought, assisting every iota it could manage along the way.  And the human would be trained to work with the computer, the two together dancing a seamless dance.

And the problem with this was not just that computers got cheaper and that programmers wrote their software more hurriedly.

There's a now-legendary story about the Windows Vista shutdown menu, a simple little feature into which 43 different Microsoft people had input.  The debate carried on for over a year.  The final product ended up as the lowest common denominator - a couple of hundred lines of code and a very visually unimpressive menu.

So even when lots of people spent a tremendous amount of time thinking about a single feature of the system - it still didn't end up very impressive.  Jef Raskin could have done better than that, I bet.  But Raskins and Engelbarts are rare.

You see the same effect in Eric Drexler's chapter on hypertext in Engines of Creation:  Drexler imagines the power of the Web to... use two-way links and user annotations to promote informed criticism.  (As opposed to the way we actually use it.)  And if the average Web user were Eric Drexler, the Web probably would work that way by now.

But no piece of software that has yet been developed, by mouse or by Web, can turn an average human user into Engelbart or Raskin or Drexler.  You would very probably have to reach into the brain and rewire neural circuitry directly; I don't think any sense input or motor interaction would accomplish such a thing.

Which brings us to point 1.

It does look like Engelbart was under the spell of the "logical" paradigm that prevailed in AI at the time he made his plans.  (Should he even lose points for that?  He went with the mainstream of that science.)  He did not see it as an impossible problem to have computers help humans think - he seems to have underestimated the difficulty in much the same way that the field of AI once severely underestimated the work it would take to make computers themselves solve cerebral-seeming problems.  (Though I am saying this, reading heavily between the lines of one single paper that he wrote.)  He talked about how the core of thought is symbols, and speculated on how computers could help people manipulate those symbols.

I have already said much on why people tend to underestimate the amount of serious heavy lifting that gets done by cognitive algorithms hidden inside black boxes, running outside your introspective vision, and to overestimate what you can do by duplicating the easily visible introspective control levers.  The word "apple", for example, is a visible lever; you can say it or not say it, its presence or absence is salient.  The algorithms of a visual cortex that let you visualize what an apple would look like upside-down - we all have these in common, and they are not introspectively accessible.  Human beings knew about apples a long, long time before they knew there was even such a thing as the visual cortex, let alone beginning to unravel the algorithms by which it operated.

Robin Hanson asked me:

"You really think an office worker with modern computer tools is only 10% more productive than one with 1950-era non-computer tools?  Even at the task of creating better computer tools?"

But remember the parable of the optimizing compiler run on its own source code - maybe it makes itself 50% faster, but only once; the changes don't increase its ability to make future changes.  So indeed, we should not be too impressed by a 50% increase in office worker productivity - not for purposes of asking about FOOMs.  We should ask whether that increase in productivity translates into tools that create further increases in productivity.
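(A minimal sketch of that arithmetic, in Python, with purely illustrative numbers - the 50% boost and the values of k below are assumptions for the sake of the example, not measurements:)

```python
# One-shot boost vs. recursive cascade, with illustrative numbers.
# Each round of improvement yields k times as much further improvement
# as the round before it; the total gain is the sum of that series.

def total_gain(first_boost: float, k: float, rounds: int = 200) -> float:
    total, current = 0.0, first_boost
    for _ in range(rounds):
        total += current
        current *= k
    return total

# The optimizing compiler run on its own source: one 50% speedup, k = 0.
print(total_gain(0.50, k=0.0))   # 0.5 - and it stops there

# A tool whose gains feed strongly back into tool-making, k = 0.9:
print(total_gain(0.50, k=0.9))   # ~5.0 - compounding, but still converging

# With k > 1 the series diverges: each improvement begets more than one
# further improvement. That divergence is the FOOM; anything with k < 1
# is a one-time bonus, however large.
```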

And this is where the problem of underestimating hidden labor starts to bite.  Engelbart rhapsodizes (accurately!) on the wonders of being able to cut and paste text while writing, and how superior this should be compared to the typewriter.  But suppose that Engelbart overestimates, by a factor of 10, how much of the intellectual labor of writing goes into fighting the typewriter.  Then, because Engelbart can only help you cut and paste more easily, and cannot rewrite those hidden portions of your brain that labor to come up with good sentences and good arguments, the actual improvement he delivers is a tenth of what he thought it would be.  An anticipated 20% improvement becomes an actual 2% improvement.  k is way less than 1.

This will hit particularly hard if you think that computers, with some hard work on the user interface, and some careful training of the humans, ought to be able to help humans with the type of "creative insight" or "scientific labor" that goes into inventing new things to do with the computer.  If you thought that the surface symbols were where most of the intelligence resided, you would anticipate that computer improvements would feed back hard into this meta-level, and create people who were more scientifically creative and who could design even better computer systems.

But if you can really only help people type up their ideas, while all the hard creative labor happens in the shower thanks to very-poorly-understood cortical algorithms - then you are much less like neutrons cascading through uranium, and much more like an optimizing compiler that gets a single speed boost and no more.  It looks like the person is 20% more productive, but in the aspect of intelligence that potentially cascades to further improvements they're only 2% more productive, if that.
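(Running the post's own figures through the same series: only the meta-level share of the gain can compound, so treating that share as the cascade's multiplication factor k gives a total of k/(1-k) - next to nothing when k is far below 1. A sketch, reusing the illustrative numbers above:)

```python
# Only the meta-level part of a productivity gain can cascade.
visible_gain = 0.20   # what the office worker experiences directly
meta_gain    = 0.02   # the share that feeds back into making better tools

# Treat the meta-level share as the cascade's multiplication factor k;
# the summed series k + k^2 + k^3 + ... converges to k / (1 - k) for k < 1.
k = meta_gain
print(visible_gain)   # 0.20 - the visible, one-shot part of the gain
print(k / (1 - k))    # ~0.0204 - the cascade barely adds to the first 2%
```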

(Incidentally... I once met a science-fiction author of a previous generation, and mentioned to him that the part of my writing I most struggled with, was my tendency to revise and revise and revise things I had already written, instead of writing new things.  And he said, "Yes, that's why I went back to the typewriter.  The word processor made it too easy to revise things; I would do too much polishing, and writing stopped being fun for me."  It made me wonder if there'd be demand for an author's word processor that wouldn't let you revise anything until you finished your first draft.

But this could be chalked up to the humans not being trained as carefully, nor the software designed as carefully, as in the process Engelbart envisioned.)

Engelbart wasn't trying to take over the world in person, or with a small group.  Yet had he tried to go the UberTool route, we can reasonably expect he would have failed - that is, failed at advancing far beyond the outside world in internal computer technology, while selling only UberTool's services to outsiders.

Why?  Because it takes too much human labor to develop computer software and computer hardware, and this labor cannot be automated away as a one-time cost.  If the world outside your window has a thousand times as many brains, a 50% productivity boost that only cascades to a 10% and then a 1% additional productivity boost will not let you win against the world.  If your UberTool was itself a mind, if cascades of self-improvement could fully automate away more and more of the intellectual labor performed by the outside world - then it would be a different story.  But while the development path wends inexorably through thousands and millions of engineers, and you can't divert that path through an internal computer, you're not likely to pull far ahead of the world.  You can just choose between giving your own people a 10% boost, or selling your product on the market to give lots of people a 10% boost.

You can have trade secrets, and sell only your services or products - many companies follow that business plan; indeed, any company that doesn't sell its source code is doing just that.  But this is just keeping one small advantage to yourself, and adding that as a cherry on top of the technological progress handed you by the outside world.  It's not having more technological progress inside than outside.

If you're getting most of your technological progress handed to you - your resources not being sufficient to do it in-house - then you won't be able to apply your private productivity improvements to most of your actual velocity, since most of your actual velocity will come from outside your walls.  If you only create 1% of the progress that you use, then a 50% improvement becomes a 0.5% improvement.  The domain of potential recursion and potential cascades is much smaller, diminishing k.  As if only 1% of the uranium generating your neutrons were available to be fissioned further in chain reactions.
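(The dilution arithmetic of the preceding paragraph, spelled out - the 1% and 50% figures are the post's own illustrations:)

```python
# A private improvement only multiplies the slice of progress you
# produce yourself; the rest of your velocity comes from outside.
fraction_internal = 0.01   # you create 1% of the progress you use
internal_boost    = 0.50   # a 50% improvement to your in-house process

effective_boost = fraction_internal * internal_boost
print(effective_boost)     # 0.005 - a 0.5% gain in overall velocity
```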

We don't live in a world that cares intensely about milking every increment of velocity out of scientific progress.  A 0.5% improvement is easily lost in the noise.  Corporations and universities routinely put obstacles in front of their internal scientists that cost them more than 10% of their potential.  This is one of those problems where not everyone is Engelbart (and you can't just rewrite their source code either).

For completeness, I should mention that there are generic obstacles to pulling an UberTool.  Warren Buffett has gotten a sustained higher interest rate than the economy at large, and is widely believed to be capable of doing so indefinitely.  In principle, the economy could have invested hundreds of billions of dollars as soon as Berkshire Hathaway had a sufficiently long track record to rule out chance.  Instead, Berkshire has grown mostly by compound interest.  We could live in a world where asset allocations were ordinarily given as a mix of stocks, bonds, real estate, and Berkshire Hathaway.  We don't live in that world for a number of reasons: financial advisors not wanting to make themselves appear irrelevant, strange personal preferences on the part of Buffett...

The economy doesn't always do the obvious thing, like flow money into Buffett until his returns approach the average return of the economy.  Interest rate differences much higher than 0.5%, on matters that people care about far more intensely than Science, are ignored if they're not presented in exactly the right format to be seized.

And it's not easy for individual scientists or groups to capture the value created by scientific progress.  Did Einstein die with 0.1% of the value that he created?  Engelbart in particular doesn't seem to have tried to be Bill Gates, at least not as far as I know.

With that in mind - in one sense Engelbart succeeded at a good portion of what he actually set out to do: computer mice did take over the world.

But it was a broad slow cascade that mixed into the usual exponent of economic growth.  Not a concentrated fast FOOM.  To produce a concentrated FOOM, you've got to be able to swallow as much as possible of the processes driving the FOOM into the FOOM.  Otherwise you can't improve those processes and you can't cascade through them and your k goes down.  Then your interest rates won't even be as much higher than normal as, say, Warren Buffett's.  And there's no grail to be won, only profits to be made:  If you have no realistic hope of beating the world, you may as well join it.

22 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by luzr · 2008-11-26T10:10:01.000Z · LW(p) · GW(p)

I think you should try to consider one possible thing:

In your story, Engelbart failed to produce the UberTool.

Anyway, looking around and seeing the progress since 1970, I would say he PRETTY MUCH DID. He was not alone, and we should rather speak of the technology that succeeded - but what else is all the existing computing infrastructure, the internet, Google, etc., than the ultimate UberTool, augmenting human cognitive abilities?

Do you think we could keep Moore's law going without all this? Good luck placing those two billion transistors of a next-generation high-end CPU on silicon without using a current high-end CPU.

Hell, this blog would not even exist and you would not have any thoughts about the friendliness of AI. You certainly would not be able to produce a post each day and get comments from people all around the world within several hours after that - and many of those people came here using the Google ubertool because they share an interest in AI, never heard about you before, and came back only because all of this is way interesting :)

Actually, maybe the lesson to be learned is that we sort of expect a singularity moment as a single AI "going critical" - and everything will change after that point. But in fact, maybe we already are "critical" now; we just do not see the forest for the trees.

Now, when I say "we", I mean the whole human civilisation as a "singleton". Indeed, if you consider "us" as a single mind, this "I" (as in intelligence), composed of human minds and interconnected by the internet, is exploding right now...

comment by Tim_Tyler · 2008-11-26T11:51:47.000Z · LW(p) · GW(p)
Indeed, if you consider "us" as a single mind, this "I" (as in intelligence), composed of human minds and interconnected by the internet, is exploding right now

Uh huh. See my video-and-essay: http://www.alife.co.uk/essays/the_intelligence_explosion_is_happening_now/

comment by Johnicholas · 2008-11-26T13:03:13.000Z · LW(p) · GW(p)

Tim Tyler: Your essay is convincing. How do you suggest we should act to improve Friendliness?

comment by luzr · 2008-11-26T13:25:53.000Z · LW(p) · GW(p)

Tim:

Thanks for the link. I have added the website to my AI portfolio.

BTW, I guess you got it right :) I have come to similar conclusions, including the observation about exponential functions :)

Johnicholas:

I guess that in Tim's scenario, "friendliness" is not nearly as important a subject. Without a "foom" there is plenty of time for debugging...

comment by Johnicholas · 2008-11-26T13:45:50.000Z · LW(p) · GW(p)

luzr: I do not agree. The situation is, we have a complex self-modifying entity (the web of human society, including machines) which is already quite powerful and capable and is growing more powerful and capable more and more rapidly. The "foom" is now.

There are not many guarantees that we can make about the behavior of society as a whole. Does society act like it values human life? Does society act like it values human comfort?

comment by denis_bider · 2008-11-26T14:24:29.000Z · LW(p) · GW(p)

Eliezer: all these posts seem to take an awful lot of your time as well as your readers', and they seem to be providing diminishing utility. It seems to me that talking at great length about what the AI might look like, instead of working on the AI, just postpones the eventual arrival of the AI. I think you already understand what design criteria are important, and a part of your audience understands as well. It is not at all apparent that spending your time to change the minds of others (about friendliness etc) is a good investment or that it has any impact on when and whether they will change their minds.

I think your time would be better spent actually working, or writing about, the actual details of the problems that need to be solved. Alternately, instead of adding to the already enormous cumulative volume of your posts, perhaps you might try writing something clearer and shorter.

But just piling more on top of what's already been written doesn't seem like it will have an influence.

comment by Zubon · 2008-11-26T14:57:34.000Z · LW(p) · GW(p)

You would very probably have to reach into the brain and rewire neural circuitry directly; I don't think any sense input or motor interaction would accomplish such a thing.

You must admit: that would be a very impressive mouse.

comment by PK · 2008-11-26T15:38:04.000Z · LW(p) · GW(p)

"I think your [Eliezer's] time would be better spent actually working, or writing about, the actual details of the problems that need to be solved."

I used to think that but now I realize that Eliezer is a writer and a theorist but not necessarily a hacker so I don't expect him to necessarily be good at writing code. (I'm not trying to diss Eliezer here, just reasoning from the available evidence and the fact that becoming a good hacker requires a lot of practice). Perhaps Eliezer's greatest contribution will be inspiring others to write AI. We don't have to wait for Eliezer to do everything. Surely some of you talented hackers out there could give it a shot.

comment by Tim_Tyler · 2008-11-26T16:58:58.000Z · LW(p) · GW(p)

Thanks for your interest - but comments on someone else's blog post are not the place for my policy recommendations.

As far as risks go, the idea that the explosion has started probably makes little difference. Risks from going too fast would be about the same. Risks from suddenly changing speed might be slightly reduced.

The main implications of my essay on the topics discussed here are probably that it makes extrapolation from our recent history and not-so-recent evolutionary history seem like a more promising approach - and it makes the whole idea of some highly-localised significant future event that changes everything seem less likely.

comment by luzr · 2008-11-26T18:03:28.000Z · LW(p) · GW(p)

Johnicholas:

"The "foom" is now."

I like that. Maybe we can get some T-shirts? :)

"There are not many guarantees that we can make about the behavior of society as a whole. Does society act like it values human life? Does society act like it values human comfort?"

Good point. Anyway, it is questionable whether we can apply any of Eliezer's friendliness guidelines to the whole society instead of a single strong general AI entity.

comment by GenericThinker · 2008-11-26T21:15:20.000Z · LW(p) · GW(p)

PK you are absolutely right. We can even take things a step further and say that positive AI will happen regardless of Eliezer's involvement - and even go as far as to say that, not having the needed experience in both math and programming, his involvement will be as a cheerleader and not as someone who makes it happen.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-11-26T23:44:06.000Z · LW(p) · GW(p)

Humanity is in a FOOM relative to the rest of the biosphere but of course it doesn't seem ridiculously fast to us; the question from our standpoint is whether a brain in a box in a basement can go FOOM relative to human society. Anyone who thinks that because we're already growing at a high rate, the distinction between that and a nanotech-capable superintelligence must not be very important, is being just a little silly. It may not even be wise to call them by the same name, if it tempts you to such folly - and so I would suggest reserving "FOOM" for things that go very fast relative to you.

For the record, I've been a coder and judged myself a reasonable hacker - set out to design my own programming language at one point, which I say not as a mark of virtue but just to demonstrate that I was in the game. (Gave it up when I realized AI wasn't about programming languages.)

comment by Phil_Goetz6 · 2008-11-27T00:23:39.000Z · LW(p) · GW(p)

Eliezer: all these posts seem to take an awful lot of your time as well as your readers', and they seem to be providing diminishing utility. It seems to me that talking at great length about what the AI might look like, instead of working on the AI, just postpones the eventual arrival of the AI. I think you already understand what design criteria are important, and a part of your audience understands as well. It is not at all apparent that spending your time to change the minds of others (about friendliness etc) is a good investment or that it has any impact on when and whether they will change their minds.
As you may have guessed, I think just the opposite. The idea that Eliezer, on his own, can figure out
  1. how to build an AI
  2. how to make an AI stay within a specified range of behavior, and
  3. what an AI ought to do
suggests that somebody has read Ender's Game too many times. These are three gigantic research projects. I think he should work on #2 or #3.

Not doing #1 would mean that it actually matters that he convince other people of his ideas.

I think that #3 is really, really tricky. Far beyond the ability of any one person. This blog may be the best chance he'll have to take his ideas, lay them out, and get enough intelligent criticism to move from the beginnings he's made, to something that might be more useful than dangerous. Instead, he seems to think (and I could be wrong) that the collective intelligence of everyone else here on Overcoming Bias is negligible compared to his own. And that's why I get angry and sometimes rude.

Generalizing from observations of points at the extremes of distributions, we can say that when we find an effect many standard deviations away from the mean, its position is almost ALWAYS due more to random chance than to the properties underlying that point. So when we observe a Newton or an Einstein, the largest contributor to their accomplishments was not their intellect, but random chance. So if you think you're relying on someone's great intellect, you're really relying on chance.

comment by Tim_Tyler · 2008-11-27T03:17:37.000Z · LW(p) · GW(p)
I would suggest reserving "FOOM" for things that go very fast relative to you.

It sounds as though "FOOM" will always lie about a dozen doublings in the future - for anyone riding the curve. Like the end of the rainbow, it will recede as it is approached.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-11-26T13:45:50.961Z · LW(p) · GW(p)

This is true, but not particularly comforting. After all, we won't be the ones riding the curve if/when the FOOM happens.

comment by GenericThinker · 2008-11-27T04:38:57.000Z · LW(p) · GW(p)

"For the record, I've been a coder and judged myself a reasonable hacker - set out to design my own programming language at one point, which I say not as a mark of virtue but just to demonstrate that I was in the game. (Gave it up when I realized AI wasn't about programming languages.)"

AI is about programming languages since AI is about computers, and current "AI" languages really aren't that great. I would say that it would be of huge value if someone could design an AI-specific language that would be better than Lisp. Also, a programming language that better deals with massive parallelism would be of great value to AI. Devoting yourself to that goal would further AI, since the problem is one of theory and one of enabling technology.

Just an aside: being a good hacker in your own view isn't a good metric, due to the fact that people always think of themselves as being better at something than they really are.

comment by michael_vassar3 · 2008-11-27T05:13:43.000Z · LW(p) · GW(p)

Phil: It seems clear to me that Newton and Einstein were not universally brilliant relative to ordinary smart people like you in the same sense that ordinary smart people like you are universally brilliant relative to genuinely average people, but it seems equally clear that it was not a coincidence that the same person invented calculus, optics, AND universal gravitation, or general relativity, special relativity, the photoelectric effect, Brownian motion, etc. Newton and Einstein were obviously great scientists in a sense that very few other people have been. It likewise isn't chance that Tiger Woods or Michael Jordan or Kasparov dominated game after game, or that Picasso and Beethoven created many artistic styles.

That said, Eliezer doesn't have any accomplishments that strongly suggest that his abilities at your tasks 1-3 are comparable to the domain-specific abilities of the people mentioned above, and in the absence of actual accomplishments of a world-historical magnitude, the odds against any one person accomplishing goals of that magnitude seem to be hundreds to one (though uncertainty regarding the difficulty of the goals and the argument itself justifies a slightly higher estimate of the probabilities in question). In addition, we don't have strong arguments that tasks 1-3 are related enough to expect solutions to be highly correlated, furthering the argument that building a community is a better idea than trying to be a lone genius.

comment by mtraven · 2008-11-28T01:09:35.000Z · LW(p) · GW(p)

Google's internal facilities and processes seem to have something of the Ubertool about them. There's a famous quote going around: "Google uses Bayesian filtering the way Microsoft uses the if statement." Certainly they seem closer to taking over the world than anyone else.

comment by Aaron_Denney · 2008-11-29T08:48:26.000Z · LW(p) · GW(p)

PK: One big point that Eliezer is trying to make is that just "hacking away at code" without a much better understanding of intelligence is actually a terrible idea. You just aren't going to get very far. If, by some miracle, you do, the situation's even worse, as you most likely won't end up with a Friendly AI. And an UnFriendly AI could be very very bad news.

GenericThinker: A DSL might help, but you need to understand the domain extremely well to design a good DSL.

Replies from: techhistory
comment by techhistory · 2009-08-12T15:47:03.802Z · LW(p) · GW(p)

A new book "The Engelbart Hypothesis: Dialogs with Douglas Engelbart" by Valerie Landau and Eileen Clegg in conversation with Douglas Engelbart does seems to back up the original statement that Doug Engelbart's Augmentation Framework may just be the UBERsolution. http://engelbartbook.com

comment by kilobug · 2011-09-22T20:05:47.070Z · LW(p) · GW(p)

« You would very probably have to reach into the brain and rewire neural circuitry directly; I don't think any sense input or motor interaction would accomplish such a thing. » Are you really sure it's not possible? To me it's just a harder version of the AI Box problem: I'm pretty sure (> 95% probability) a sufficiently smart AI can convince anyone (who actually listens to it, at least) to let it out of the box. I'm not so sure about an AI being able to rewire my brain enough to make me an Einstein or Engelbart using only sensory inputs, but I would definitely give a significant probability (higher than 10%) that a Bayesian superintelligence could reverse-engineer the way my brain learns from stimulus enough to rewire enough parts of my brain without using any nanotechnology. Just from other humans, by reading things (like Less Wrong or many books), we can improve a lot. A superintelligence could probably do much, much more to help us improve ourselves with just normal communication.

comment by A1987dM (army1987) · 2013-10-20T18:08:16.041Z · LW(p) · GW(p)

Incidentally... I once met a science-fiction author of a previous generation, and mentioned to him that the part of my writing I most struggled with, was my tendency to revise and revise and revise things I had already written, instead of writing new things. And he said, "Yes, that's why I went back to the typewriter. The word processor made it too easy to revise things; I would do too much polishing, and writing stopped being fun for me." It made me wonder if there'd be demand for an author's word processor that wouldn't let you revise anything until you finished your first draft.

What's wrong with just using cat?