Moravec's Paradox Comes From The Availability Heuristic

post by james.lucassen · 2021-10-20T06:23:52.782Z · 2 comments

This is a link post for https://jlucassen.com/moravecs-paradox-comes-from-the-availability-heuristic/

Contents

  Setting Up The Paradox
  And Dissolving It
2 comments

Epistemic Status: very quick one-thought post, may very well be arguing against a position nobody actually holds, but I haven’t seen this said explicitly anywhere so I figured I would say it.

Setting Up The Paradox

According to Wikipedia:

Moravec’s paradox is the observation by artificial intelligence and robotics researchers that, contrary to traditional assumptions, reasoning requires very little computation, but sensorimotor and perception skills require enormous computational resources.

-https://en.wikipedia.org/wiki/Moravec%27s_paradox

I think this is probably close to what Hans Moravec originally meant to say in the 1980s, but not very close to how the term is used today. Here is my best attempt to specify the statement I think people generally point at when they use the term nowadays:

Moravec’s paradox is the observation that in general, tasks that are hard for humans are easy for computers, and tasks that are easy for humans are hard for computers.

-me

If you found yourself nodding along to that second one, that’s some evidence I’ve roughly captured the modern colloquial meaning. Even when it’s not attached to the name “Moravec’s Paradox”, I think this general sentiment is a very widespread meme nowadays. Some example uses of this version of the idea that led me to write up this post are here and here.

To be clear, from here on out I will be talking about the modern, popular-meme version of Moravec's Paradox.

And Dissolving It

I think Moravec’s Paradox is an illusion that comes from the availability heuristic, or something like it. The mechanism is very simple – it’s just not that memorable when we get results that match up with our expectations for what will be easy/hard.

If you try, you can pretty easily come up with exceptions to Moravec’s Paradox. Lots of them. Things like single digit arithmetic, repetitive labor, and drawing simple shapes are easy for both humans and computers. Things like protein folding, the traveling salesman problem, and geopolitical forecasting are difficult for both humans and computers. But these examples aren’t particularly interesting, because they feel obvious. We focus our attention on the non-obvious cases instead – the examples where human/computer strengths are opposite, counter to our expectations.

Now, this isn’t to say that the idea behind Moravec’s Paradox is wrong and bad and we should throw it out. It’s definitely a useful observation, I just think it needs some clearing up and re-phrasing. This is the key difference: it’s not that human and computer difficulty ratings for various tasks are opposites – they’re just not particularly aligned at all.

We expected “hard for humans” and “hard for computers” to be strongly correlated. In reality they aren’t. But when we focus only on the memorable cases, it looks as if every task is either easy for humans and hard for computers, or vice versa.
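To make this selection effect concrete, here's a quick simulation sketch (a toy model, not data about real tasks; the 0.5 cutoff for what counts as "memorable" is an arbitrary choice). We generate tasks whose human-difficulty and computer-difficulty ratings are independent, then measure the correlation both over all tasks and over just the memorable ones where the two ratings strongly disagree:

```python
import random

random.seed(0)
# 10,000 tasks whose human/computer difficulty ratings are independent
tasks = [(random.random(), random.random()) for _ in range(10_000)]

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

# "Memorable" tasks: easy for one, hard for the other
memorable = [(h, c) for h, c in tasks if abs(h - c) > 0.5]

print(f"all tasks:       {corr(tasks):+.3f}")      # ~ +0.00
print(f"memorable tasks: {corr(memorable):+.3f}")  # strongly negative
```

Filtering on disagreement manufactures a strong negative correlation out of data that has none, which is exactly the availability-heuristic story.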

The observations cited to support Moravec’s Paradox do still give us an update. They tell us that something easy for humans won’t necessarily be easy for computers. But we should remember that something easy for humans won’t necessarily be hard for computers either. The lesson from Moravec’s observations in the 1980s is that a task’s difficulty for humans tells us very little about what to expect from computers.

That’s definitely valuable to know. But the widespread meme that humans and computers have opposite strengths is mostly just misleading. It comes from updating too hard on the memorable cases. This produces a conclusion that’s the opposite of the original (negative correlation instead of positive correlation), but equally incorrect (there should be little to no correlation at all).

2 comments

Comments sorted by top scores.

comment by gwern · 2021-10-20T18:21:27.199Z

One of the harms of the modern version, I think (I'm not sure how widespread it is, but sure, a decent number of people believe the claim, whether or not they call it "Moravec's paradox") is that it creates complacency if you believe that "centaur" situations are the default expectation. They don't seem to be. Sometimes they happen, but if AIs reach human level at all (which they usually don't), then complete replacement or deskilling is what happens.

That is, nobody turned carriages into 1-horse+1-automobile-engine hybrids: you either kept using horses for various good reasons, or you used a car. For example, in chess, for all the energetic PR by Kasparov & Cowen, the 'centaur' era was short-lived and appears to be long over. In Go, it never existed (or the window was so brief as to have gone unmeasured): the best players at every point in time were either humans, or neural nets, and never humans+NNs. With protein-folding, AFAIK AlphaFold's mistakes are on cases where the folding is genuinely extremely hard or unknown, and there are few or no cases where even a grad student-level expert can glance at it and instantly say what the structure obviously has to be. With tools like machine translation (or audio transcription), the most skilled labor doesn't become centaur labor because of decent AI tooling, it just becomes a way to take the least-skilled translators and make them more productive by doing the easy work for them (and on the low end, substitutes entirely for human translators, like in e-commerce). And so on.

The regular Moravec's paradox continues to hold true, I think. As awesome as DL is for suddenly giving perception capabilities incomparably better than what was available even a decade ago, there's still a gap between computer vision and instantaneous human understanding of an image. It seems to be closing, but currently at too high a price for many tasks like self-driving cars.

comment by JBlack · 2021-10-21T12:55:40.395Z

I suspect that it's even worse: even the concept of correlated difficulty is irrelevant and misleading. Your illustrations show "difficulty for humans" and "difficulty for computers" ranging over roughly the same scale.

My thesis is that this is completely illusory. I suspect that problems are not 1-dimensional, and that their (computational) difficulties can be measured on multiple scales. I further expect that these scales cover many orders of magnitude, and that the range of difficulty that humans find "easy" to "very difficult" covers in most cases one order of magnitude or less. On a logarithmic scale from 0 to 10 for "visual object recognition" capability, for example, humans might find 8 easy and 9 difficult, while on a scale of 0 to 10 for "symbolic manipulation" humans might find 0.5 (3 items) easy and 1.5 (30 items) very difficult.

At any given point in the development of computing technology, the same narrow-range effect occurs, but unlike the human ranges, the computer ranges change substantially over time. We build generally faster computers, and all the points move down. We build GPUs, and problems on the "embarrassingly parallel" scales move down even more. We invent better algorithms, and some other set of graphs has its points move down.

Note that the problems (points) in each scale can still have a near-perfect correlation between difficulty for computers and difficulty for humans. In any one scale, problems may lie on a nearly perfectly straight line. Unlike the diagrams in the post though, at any given time there is essentially nothing in the middle. For any given scale, virtually all the data points are off the charts in three of the four quadrants.

The bottom left off-the-scale quadrant is the Boring quadrant, trivial for both humans and computers and of interest to nobody. The top right is the Impossible quadrant, of interest only to science fiction writers. The other two are the Moravec quadrants, trivial for one and essentially impossible for the other.

Over time, the downward motion of points due to technological progress means that now and then the line for some small subclass of problems briefly overlaps the "human capability" range. Then we get some Interesting problems that are not Moravec! But even in this class, the vast majority of problems are still Boring or Impossible and so invisible. The other classes still have lots of Moravec problems, so the "paradox" still holds.
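A rough numerical sketch of this model (the 0-12 intrinsic difficulty scale, the window positions, and the uniform task distribution are all arbitrary illustrative choices): a fixed ~1-order-of-magnitude human window sits on a log-difficulty scale, a computer window of similar width sweeps upward as technology improves, and each task is bucketed by where it falls relative to the two windows.

```python
import random
from collections import Counter

random.seed(0)
tasks = [random.uniform(0, 12) for _ in range(10_000)]  # intrinsic log10 difficulty

HUMAN = (8.0, 9.0)  # fixed human window: trivial below it, out of reach above it

def classify(d, comp):
    """Bucket one task given the current computer window (lo, hi)."""
    below_h, above_h = d < HUMAN[0], d > HUMAN[1]
    below_c, above_c = d < comp[0], d > comp[1]
    if below_h and below_c:
        return "Boring"        # trivial for both
    if above_h and above_c:
        return "Impossible"    # out of reach for both
    if (below_h and above_c) or (above_h and below_c):
        return "Moravec"       # trivial for one, out of reach for the other
    if not (below_h or above_h) and not (below_c or above_c):
        return "Interesting"   # inside both windows at once
    return "Borderline"        # inside exactly one window

# The computer window sweeps upward as hardware and algorithms improve
for year, lo in [(1990, 3.0), (2005, 5.5), (2020, 8.0)]:
    counts = Counter(classify(d, (lo, lo + 1.0)) for d in tasks)
    print(year, dict(counts))
```

The Moravec bucket dominates while the windows are far apart, and an Interesting bucket only appears in the brief era when the computer window overlaps the human one.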