Could a digital intelligence be bad at math?

post by leplen · 2016-01-20T02:38:23.147Z · LW · GW · Legacy · 22 comments


One of the enduring traits I see in most characterizations of artificial intelligences is the idea that an AI would have all of the skills that computers have. It's often taken for granted that a general artificial intelligence would be able to recall information perfectly, instantly multiply and divide five-digit numbers, and handily defeat Garry Kasparov at chess. For whatever reason, the capabilities of a digital intelligence are always assumed to encompass the entire current skill set of digital machines.

But this belief is profoundly strange. Consider how much humans struggle to learn arithmetic. Basic arithmetic is really simple: you can build a bare-bones electronic calculator's arithmetic logic unit on a breadboard in a weekend. Yet humans commonly spend years learning how to perform those same simple operations, and the mental arithmetic equipment we assemble at the end of all that practice is still relatively terrible: slow, labor-intensive, and prone to frequent mistakes.
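To get a sense of how little machinery this takes, here is a toy sketch of addition built entirely out of logic gates, the same idea as the breadboard adder; the Python translation, gate choices, and 8-bit width are illustrative rather than a description of any particular circuit.

```python
# Toy ripple-carry adder: addition built from nothing but logic gates.
# The gate choices and 8-bit width are illustrative, not tied to any real chip.

def full_adder(a, b, carry_in):
    """Add two bits plus a carry bit using only XOR/AND/OR."""
    s = a ^ b ^ carry_in                         # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))   # carry bit
    return s, carry_out

def ripple_add(x, y, bits=8):
    """Add two non-negative integers by chaining one-bit full adders."""
    result, carry = 0, 0
    for i in range(bits):
        a = (x >> i) & 1
        b = (y >> i) & 1
        s, carry = full_adder(a, b, carry)
        result |= s << i
    return result

print(ripple_add(37, 58))  # 95
```

Chaining one-bit adders like this is essentially all the dedicated hardware has to do; there is no deep trick behind fast, reliable arithmetic.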

It is not totally clear why humans are this bad at math. It is almost certainly unrelated to brains computing using neurons instead of transistors. Based on personal experience and a cursory literature review, counting seems to rely primarily on identifying repeated structure in a linked list, and that list seems to be stored as verbal memory. When we first learn the most basic arithmetic we rely on visual pattern matching, and as we do more math, the basic operations get stored in a look-up table in verbal memory. This is an absolutely bonkers way to implement arithmetic.
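As a caricature of that kludge, human-style addition looks less like the gate-level adder above and more like the sketch below: a memorized table of single-digit facts with slow counting as a fallback. The table and the fallback rule here are invented purely for illustration, not a claim about actual cognition.

```python
# Caricature of human-style addition: memorized single-digit "facts"
# stored as a look-up table, with slow counting as a fallback.
# Purely illustrative; not a cognitive model.

# The "memorized" table (built here with exact arithmetic for convenience).
ADDITION_FACTS = {(a, b): a + b for a in range(10) for b in range(10)}

def count_up(a, b):
    """Slow fallback: count upward b times starting from a."""
    total = a
    for _ in range(b):
        total += 1
    return total

def human_style_add(a, b):
    """Look the answer up if it's a memorized fact, otherwise count."""
    if (a, b) in ADDITION_FACTS:
        return ADDITION_FACTS[(a, b)]
    return count_up(a, b)

print(human_style_add(7, 5))    # recalled from the table
print(human_style_add(23, 4))   # falls back to counting
```

Compared with the handful of gates above, this is slow and scales badly, which is roughly the human experience.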

While humans may be generally intelligent, that general intelligence seems to be accomplished using some fairly inelegant kludges. We seem to have a preferred framework for understanding, built on our visual and verbal systems, and we tend to shoehorn everything else into that framework. But there's nothing uniquely human about that problem: it seems to be characteristic of learning algorithms in general. If an artificial learner started off by acquiring skills unrelated to math, it might learn arithmetic via a similarly convoluted process. Current digital machines do arithmetic very efficiently, but a digital mind that has to learn those patterns may arrive at a solution as slow and convoluted as the one humans rely on.

 

22 comments


comment by shminux · 2016-01-20T06:00:20.712Z · LW(p) · GW(p)

Humans are not bad at math. We are excellent at math. We can calculate the best trajectory to throw a ball into a hoop, the exact way to move our jiggly appendages to achieve it, accounting for a million little details, all in the blink of an eye. Few if any modern computers can do as well.

The problem is one of definition: we call "math" the part of math that is HARD FOR HUMANS. Because why bother giving a special name to something that does not require special learning techniques?

Replies from: Houshalter, leplen, Brillyant
comment by Houshalter · 2016-01-21T07:09:07.739Z · LW(p) · GW(p)

You are missing OP's point. OP is talking about arithmetic, and other things computers are really good at. There is a tendency, when talking about AI, to assume the AI will have all the abilities of modern computers. If computers can play chess really well, then so will AI. If computers can crunch numbers really well, then so will AI. That is what OP is arguing against.

If AIs are like human brains, then they likely won't be really good at those things. They will have all the advantages of humans of course, like being able to throw a ball or manage jiggly appendages. But they won't necessarily be any better than us at anything. If humans take ages to do arithmetic, so will AI.

There are some other comments saying that the AI can just interface with calculators and chess engines and gain those abilities. But so can humans. AI doesn't have any natural advantage there. The only advantage might be that it's easier to do brain-computer interfaces, which maybe gets you a bit more bandwidth in usable output. But I don't see many domains where that would be very useful versus humans with keyboards. Basically they would just be able to type faster or move a mouse faster.

And even your argument that humans are really good at analog math doesn't hold up. There have been experiments to see whether humans could learn to do arithmetic better if it's presented as an analog problem: for example, drawing a line the same length as two shorter lines added together, or drawing a shape with the same area as the rectangle two lines would form.

Not only does it take a ton of training, but you are still only accurate within a few percent. Memorizing multiplication tables is easier and more accurate.

Replies from: shminux, Lumifer
comment by shminux · 2016-01-21T15:29:41.011Z · LW(p) · GW(p)

My implied point is that the line between hard math and easy math for humans is rather arbitrary, drawn mostly by evolution. AI is designed, not evolved, so the line between hard and easy for an AI is based on algorithmic complexity and processing power, not on millions of years of trying to catch prey or reach fruit.

Replies from: Houshalter
comment by Houshalter · 2016-01-29T10:07:39.241Z · LW(p) · GW(p)

I'm not sure I agree with that. Currently most progress in AI is with neural networks, which are very similar to human brains. Not exactly the same, but they have very similar strengths and weaknesses.

We may not be bad at things because we didn't evolve to do them. They might just be limits of our type of intelligence. NNs are good at big messy analog pattern matching, and bad at other things like doing lots of addition or solving chess boards.

Replies from: shminux
comment by shminux · 2016-01-30T06:23:02.764Z · LW(p) · GW(p)

They might just be limits of our type of intelligence. NNs are good at big messy analog pattern matching, and bad at other things like doing lots of addition or solving chess boards.

That could be true; we don't know enough about the issue. But interfacing a regular computer with an NN should be a... how should I put it... no-brainer?

comment by Lumifer · 2016-01-21T15:35:53.542Z · LW(p) · GW(p)

If AIs are like human brains, then they likely won't be really good at those things ... they won't necessarily be any better than us at anything.

For how long?

One of the points of AIs is rapid change and evolution.

comment by leplen · 2016-01-20T16:44:11.341Z · LW(p) · GW(p)

This is a really broad definition of math. There is regular structure in kinetic tasks like throwing a ball through a hoop. There's also regular structure in tasks like natural language processing. One way to describe that regular structure is through a mathematical representation of it, but I don't know that I consider basketball ability to be reliant on mathematical ability. Would you describe all forms of pattern matching as mathematical in nature? Is the fact that you can read and understand this sentence also evidence that you are good at math?

comment by Brillyant · 2016-01-20T18:39:40.056Z · LW(p) · GW(p)

jiggly appendages

Do you have tentacles?

comment by Viliam · 2016-01-20T09:14:15.754Z · LW(p) · GW(p)

Could a natural intelligence be bad at biology?

We need to be more specific about what kinds of "artificial intelligences" we discuss. I can imagine an uploaded human who would be completely bad at math. And we could create an intelligent neural network, which would be bad at math for similar reasons.

The situation is different with computer programs that perceive themselves as computer programs, and which have the ability (and skill) to modify their own code. They are analogous to a human programmer equipped with a calculator and a programmable computer. Is there a reason to suspect that they would have problems multiplying and dividing numbers?

comment by gwern · 2016-01-20T16:59:48.107Z · LW(p) · GW(p)

It is not totally clear why humans are this bad at math. It is almost certainly unrelated to brains computing using neurons instead of transistors.

Why do you think that? Adding numbers is highly challenging for RNNs and is a standard benchmark in recent papers investigating various kinds of differentiable memory and attention mechanisms, precisely because RNNs do so badly at it (like humans).
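For concreteness, the task is usually posed as sequence prediction over digit characters, roughly like the toy setup below (details vary between papers; this sketch only shows the shape of the problem, not any specific benchmark):

```python
# Rough sketch of how an addition task is typically posed for a sequence
# model: the operands arrive as character sequences and the model must
# emit the digits of the sum. Details vary by paper; this is illustrative.
import random

def make_example(max_digits=5):
    """Generate one (input sequence, target sequence) training pair."""
    a = random.randint(0, 10 ** max_digits - 1)
    b = random.randint(0, 10 ** max_digits - 1)
    question = f"{a}+{b}"   # input character sequence
    answer = str(a + b)     # target character sequence
    return question, answer

random.seed(0)
for _ in range(3):
    q, ans = make_example()
    print(q, "->", ans)
```

A model trained on this format has to learn carrying across digit positions from examples alone, which is exactly where plain RNNs tend to stumble.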

Replies from: paulfchristiano
comment by paulfchristiano · 2016-01-20T17:38:47.549Z · LW(p) · GW(p)

It's a bit hard for RNNs to learn, but they can end up much better than humans. (Also, the reason it is being used as a challenge is that it is a bit tricky, but not very tricky.)

It is probably also easy to "teach" humans to be much better at math than we currently are (over evolutionary time); there's just no pressure for math performance. That seems like the most likely difference between humans and computers.

Replies from: gwern
comment by gwern · 2016-01-20T20:41:38.633Z · LW(p) · GW(p)

It's a bit hard for RNNs to learn, but they can end up much better than humans.

After some engineering effort. Researchers didn't just throw a random RNN at the problem in 1990 and find that it worked as well as transistors at arithmetic... Plus, if you want to pick extremes (the best RNNs now), are the best RNNs better at adding or multiplying extremely large numbers than human savants?

Replies from: leplen
comment by leplen · 2016-01-21T01:22:28.644Z · LW(p) · GW(p)

This raises a really interesting point that I wanted to include in the top-level post, but couldn't find a place for. It seems plausible/likely that human savants implement arithmetic using different, and much more efficient, algorithms than those used by neurotypical humans. This was actually one of the examples I considered in support of the argument that neurons can't be the underlying reason humans struggle so much with math.

Replies from: HungryHobo
comment by HungryHobo · 2016-01-21T16:42:59.538Z · LW(p) · GW(p)

It has only been in recent generations that arithmetic involving numbers of more than 2 or 3 digits has mattered to people's wellbeing and survival. I doubt our brains are terribly well wired up for large numbers.

comment by Oligopsony · 2016-01-20T03:55:41.921Z · LW(p) · GW(p)

If it's digitally embedded, even if the "base" module were bad at math in the same way we are, it would be trivial to cybernetically link it to a calculator program, just as we physical humans are cyborgs when we use physical calculators (albeit with a greater delay than a digital being would have to deal with).

comment by ChristianKl · 2016-01-20T11:52:51.275Z · LW(p) · GW(p)

An artificial intelligence can easily interface with other software in a way that humans can't.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-01-21T18:12:19.549Z · LW(p) · GW(p)

Yes, but many variants are also true, so it's unclear what you want to imply.

  • A human can easily interface with software.
  • A human can easily interface with other humans in a way that an artificial intelligence can't.

(for some plausible meanings of 'interface' and 'can't'.)

Replies from: ChristianKl, Dagon
comment by ChristianKl · 2016-01-21T21:08:26.282Z · LW(p) · GW(p)

A human can easily interface with software.

Not as easily, because there's no direct read and write access to neurons.

comment by Dagon · 2016-01-22T14:58:12.381Z · LW(p) · GW(p)

An AI can much more easily interface with software, more easily with other AIs (unless this is the singleton foom), and likely nearly as easily with humans.

This does bring up the point that the post is wrong: humans aren't bad at math. We built stuff to help us and are now good at it.

comment by kithpendragon · 2016-01-20T14:10:06.046Z · LW(p) · GW(p)

I honestly assumed that most AI would probably have hardware access to a math co-processor of some kind. After all, humans are pretty awesome at arithmetic if you interpret calculator use as an analog to that kind of setup. No need for the mind to even understand what is going on at the hardware level. As long as it understands what the output represents, it can just depend on the module provided to it.

comment by hairyfigment · 2016-01-22T18:21:24.633Z · LW(p) · GW(p)

To spell this out: if someone makes an AGI with poor arithmetical ability, and doesn't keep their research secret, someone else can just write a version without that flaw. (They might not even need to add a fundamentally different routine.) And that's if the AI itself has severely limited self-modifying ability.

comment by Richard_Kennaway · 2016-01-20T11:56:54.306Z · LW(p) · GW(p)

A human with a calculator is excellent at arithmetic. An AI with a calculator would also be. But the calculator would, for practical purposes, be part of the AI.

Why make the AI learn arithmetic by the same inefficient methods as humans? Just give it direct access to arithmetic.
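Concretely, that access could be as thin as routing anything that parses as an arithmetic expression to an exact evaluator and letting the learned part of the system handle everything else. The routing rule and the names in this sketch are invented for illustration:

```python
# Toy sketch of "direct access to arithmetic": anything that looks like an
# arithmetic expression is handed to an exact evaluator instead of being
# answered by the learned model. Names and routing rule are illustrative.
import operator
import re

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def exact_arithmetic(query):
    """Evaluate a simple 'a OP b' expression exactly, or return None."""
    match = re.fullmatch(r"\s*(-?\d+)\s*([+\-*/])\s*(-?\d+)\s*", query)
    if not match:
        return None
    a, op, b = match.groups()
    return OPS[op](int(a), int(b))

def answer(query, learned_model=lambda q: "best guess: " + q):
    """Route arithmetic to the calculator; everything else to the model."""
    result = exact_arithmetic(query)
    return result if result is not None else learned_model(query)

print(answer("48517 * 90233"))    # handled by the exact evaluator
print(answer("what is a hoop?"))  # handled by the (stub) learned model
```

The learned model never has to be good at arithmetic; it only has to recognize when to hand a question off.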