Deep learning - deeper flaws?

post by Richard_Ngo (ricraz) · 2018-09-24T18:40:00.705Z · LW · GW · 17 comments

This is a link post for http://thinkingcomplete.blogspot.com/2018/03/deep-learning-deeper-flaws.html

Contents

  Theoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution - Pearl, 2018
  Deep Learning: A Critical Appraisal - Marcus, 2018
  Generalisation without systematicity - Lake and Baroni, 2018
  Deep reinforcement learning doesn't work yet - Irpan, 2018

In this post I summarise four lines of argument for why we should be skeptical about the potential of deep learning in its current form. I am fairly confident that the next breakthroughs in AI will come from some variety of neural network, but I think several of the objections below are quite a long way from being overcome.

Theoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution - Pearl, 2018

Pearl describes three levels at which you can make inferences: association, intervention, and counterfactual. The first is statistical, identifying correlations - this is the level at which deep learning operates. The intervention level concerns changes to the present or future - it answers questions like "What will happen if I do y?" The counterfactual level answers questions like "What would have happened if y had occurred?" Each successive level is strictly more powerful than the previous one: you can't work out the effects of an action from associational data alone, without a causal model, because an action is an intervention which overrides existing causes. Unfortunately, current machine learning systems are largely model-free.
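To make the gap between the first two levels concrete, here is a minimal simulation - my own illustration, not an example from Pearl's paper - in which the associational quantities P(Y=1 | X=1) and P(Y=1 | X=0) look very different, yet intervening on X has no effect at all, because a hidden confounder drives both variables:

```python
import numpy as np

# Toy structural causal model (illustration only): a hidden confounder Z drives
# both the "treatment" X and the outcome Y, while X has no causal effect on Y.
rng = np.random.default_rng(0)
n = 1_000_000

z = rng.binomial(1, 0.5, n)            # hidden common cause
x = rng.binomial(1, 0.2 + 0.6 * z)     # X is much more likely when Z = 1
y = rng.binomial(1, 0.1 + 0.7 * z)     # Y depends only on Z, not on X

# Level 1, association: P(Y=1 | X=1) vs P(Y=1 | X=0) shows a large gap
# (roughly 0.66 vs 0.24), which a purely statistical learner would read
# as a strong relationship between X and Y.
print(y[x == 1].mean(), y[x == 0].mean())

# Level 2, intervention: do(X=x) cuts the Z -> X arrow, so Y's distribution is
# the same whichever value we force X to take -- the causal effect is zero.
y_do = rng.binomial(1, 0.1 + 0.7 * z)  # Y under do(X=1) and do(X=0) alike
print(y_do.mean())                     # ~0.45 either way
```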

Causal assumptions and conclusions can be encoded in the form of graphical models, where a directed arrow between two nodes represents a causal influence. Constraints on the structure of a graph can be determined by seeing which pairs of variables are independent when controlling for which other variables: sometimes controlling removes dependencies, but sometimes it introduces them. Pearl's main claim is that this sort of model-driven causal analysis is an essential step towards building human-level reasoning capabilities. He identifies several important concepts - such as counterfactuals, confounding, causation, and incomplete or biased data - which his framework is able to reason about, but which current approaches to ML cannot deal with.
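As a sketch of the second case - conditioning introducing a dependency - here is the classic collider structure (again my own illustration rather than an example from the paper): two independent causes of a common effect become negatively correlated once we condition on that effect, the "explaining away" phenomenon.

```python
import numpy as np

# Collider: A -> C <- B, with A and B independent. Marginally A and B are
# uncorrelated; conditioning on the common effect C induces a dependency.
rng = np.random.default_rng(0)
n = 1_000_000

a = rng.binomial(1, 0.5, n)
b = rng.binomial(1, 0.5, n)
c = ((a + b) >= 1).astype(int)             # C "fires" if either cause is present

print(np.corrcoef(a, b)[0, 1])             # ~0: independent before conditioning
sel = c == 1
print(np.corrcoef(a[sel], b[sel])[0, 1])   # ~ -0.5: dependent given C = 1
```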

Deep Learning: A Critical Appraisal - Marcus, 2018

Marcus identifies ten limitations of current deep learning systems, and argues that the whole field may be about to hit a wall. According to him, deep learning:

  1. Is data hungry - it can't learn abstractions through explicit verbal definition like humans can, but instead requires thousands of examples.
  2. Is shallow, with limited capacity for transfer. If a task is perturbed even in minor ways, deep learning breaks, demonstrating that it's not really learning the underlying concepts. Adversarial examples showcase this effect.
  3. Has no natural way to deal with hierarchical structure. Even recursive neural networks require fixed sentence trees to be precomputed. See my summary of 'Generalisation without systematicity' below.
  4. Struggles with open-ended inference, especially based on real-world knowledge.
  5. Isn't transparent, and remains essentially a "black box".
  6. Is not well-integrated with prior knowledge. We can't encode our understanding of physics into a neural network, for example.
  7. Cannot distinguish causation from correlation - see my summary of Pearl's paper above.
  8. Presumes a largely stable world, like a game, instead of one like our own in which there are large-scale changes.
  9. Is vulnerable to adversarial examples, which can be constructed quite easily.
  10. Isn't robust as a long-term engineering solution, especially on novel data.

Some of these problems seem like they can be overcome without novel insights, given enough engineering effort and compute, but others are more fundamental. One interpretation: deep learning can interpolate within the training space, but can't extrapolate outside it, even in ways which seem natural to humans. One of Marcus' examples: a neural network trained to learn the identity function on even numbers rounds down when given odd numbers (a minimal sketch of this failure appears after the list below). In this trivial case we can fix the problem by adding odd training examples or manually adjusting some weights, but in general, when there are many features, both fixes may be prohibitively difficult even for a conceptually simple adjustment. To address this and other problems, Marcus offers three alternatives to deep learning as currently practiced:

  1. Unsupervised learning, so that systems can constantly improve - for example by predicting the next time-step and updating afterwards, or by setting themselves challenges and learning from attempting them.
  2. Further development of symbolic AI. While this has in the past proved brittle, the idea of integrating symbolic representations into neural networks has great promise.
  3. Drawing inspiration from humans - in particular from cognitive and developmental psychology, from how we develop commonsense knowledge, and from our understanding of narrative.
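Below is a minimal sketch of the identity-function failure mentioned above. It is my own reconstruction in PyTorch rather than Marcus's exact setup; the bit width, architecture, and hyperparameters are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

# Train an MLP to copy 8-bit binary numbers, but only ever show it even numbers,
# so the lowest-order bit is always 0 during training.
def to_bits(n, width=8):
    return [(n >> i) & 1 for i in range(width)]

torch.manual_seed(0)
evens = torch.tensor([to_bits(n) for n in range(0, 256, 2)], dtype=torch.float32)
odds = torch.tensor([to_bits(n) for n in range(1, 256, 2)], dtype=torch.float32)

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 8))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(model(evens), evens)
    loss.backward()
    opt.step()

def exact_copy_rate(inputs):
    with torch.no_grad():
        return ((model(inputs) > 0) == inputs.bool()).all(dim=1).float().mean().item()

print("even inputs:", exact_copy_rate(evens))  # near-perfect on the training distribution
print("odd inputs:", exact_copy_rate(odds))    # typically near zero: the net has never
                                               # needed to output a 1 in the lowest bit
```

The specific numbers don't matter; the point is the pattern Marcus highlights - the network fits the training distribution essentially perfectly, yet fails on inputs that differ in a single, humanly trivial way.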

Generalisation without systematicity - Lake and Baroni, 2018

Lake and Baroni observe that human language and thought feature "systematic compositionality": we can combine known components in novel ways to produce arbitrarily many new ideas. To test neural networks on this, they introduce SCAN, a simple command language with instructions such as "jump around left twice and walk opposite right thrice". They found that RNNs generalise well to new strings similar in form to those seen in training, but performance drops sharply in other cases. For example, the best result fell from 99.9% to 20.8% when the test examples were longer than any training example, even though they were constructed using the same compositional rules. Likewise, when a command such as "jump" had only been seen in isolation during training, RNNs were almost entirely incapable of interpreting instructions such as "turn right and jump". The overall conclusion: neural networks can't extract systematic rules from training data, and so can't generalise compositionally anything like the way humans can. This is similar to the result of a project I recently carried out, in which I found that capsule networks trained to recognise transformed inputs such as rotated digits and colour-inverted digits still couldn't recognise rotated, colour-inverted digits: they were simply not learning general rules which could be composed together.
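To make Lake and Baroni's length-based split concrete, here is a toy version of the idea; the grammar, interpreter, and split threshold below are my own simplifications and are far smaller than the real SCAN benchmark.

```python
import itertools
import random

# A tiny command language in the spirit of SCAN (not the actual grammar):
# commands map compositionally to action sequences.
PRIMS = {"walk": ["WALK"], "jump": ["JUMP"], "run": ["RUN"]}
MODS = {"twice": 2, "thrice": 3}

def interpret(command):
    """e.g. 'jump twice and walk' -> ['JUMP', 'JUMP', 'WALK']"""
    actions = []
    for clause in command.split(" and "):
        words = clause.split()
        reps = MODS.get(words[-1], 1)
        actions.extend(PRIMS[words[0]] * reps)
    return actions

# Enumerate all commands with up to three clauses.
clauses = list(PRIMS) + [f"{p} {m}" for p in PRIMS for m in MODS]
commands = clauses \
    + [" and ".join(c) for c in itertools.product(clauses, repeat=2)] \
    + [" and ".join(c) for c in itertools.product(clauses, repeat=3)]

# Length-based split: train only on short action sequences, test only on longer
# ones, even though both sides are generated by exactly the same rules.
pairs = [(c, interpret(c)) for c in commands]
train = [(c, a) for c, a in pairs if len(a) <= 4]
test = [(c, a) for c, a in pairs if len(a) > 4]
print(len(train), len(test))
print(random.Random(0).choice(test))
```

A human who learned the rules from the training half would interpret the test half effortlessly; the seq2seq models Lake and Baroni evaluate largely do not.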

Deep reinforcement learning doesn't work yet - Irpan, 2018

Irpan runs through a number of reasons to be skeptical about using deep learning for RL problems. For one thing, deep RL is still very data-inefficient: DeepMind's Rainbow DQN takes around 83 hours of gameplay to reach human-level performance on an Atari game, whereas humans can usually pick up such a game within a minute or two. He also points out that other RL methods often work better than deep RL, particularly model-based ones which can utilise domain-specific knowledge.
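For reference, the 83-hour figure is just frame-count arithmetic - assuming the commonly cited ~18 million Atari frames for Rainbow to pass median human performance, at 60 frames per second of gameplay:

```python
# Back-of-the-envelope check of the 83-hour figure (assumes ~18M frames,
# the threshold reported for Rainbow DQN, at 60 frames per second).
frames = 18_000_000
hours = frames / 60 / 3600
print(hours)   # ~83.3 hours of real-time play
```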

Another issue with RL in general is that designing reward functions is difficult. This is a theme in AI safety - specifically when it comes to reward functions which encapsulate human values - but there are plenty of existing examples of reward hacking on much simpler tasks. One important consideration is the tradeoff between shaped and sparse rewards. Sparse rewards are given only at the goal state, so they can be specified fairly precisely, but the goal is usually too hard to reach by undirected exploration. Shaped rewards give feedback more frequently, but are easier to hack. And even when shaped rewards are designed carefully, RL agents often get stuck in local optima; this is particularly prevalent in multi-agent systems, where each agent can overfit to the behaviour of the others.
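Here is a minimal sketch of the two reward styles for a toy one-dimensional "move to the target" task; the task, thresholds, and scaling are made up for illustration and aren't taken from Irpan's post.

```python
import numpy as np

TARGET = 10.0   # the agent's position lives in [0, 10]; the goal is the right end

def sparse_reward(position):
    # Precise -- the reward exactly encodes "reach the goal" -- but random
    # exploration almost never triggers it, so learning signal is rare.
    return 1.0 if abs(position - TARGET) < 0.1 else 0.0

def shaped_reward(position):
    # Dense feedback at every step makes exploration far easier, but the
    # designer's proxy for progress becomes the real objective: an agent that
    # parks just short of the goal already collects almost-maximal reward.
    return -abs(position - TARGET)

positions = np.linspace(0.0, 10.0, 6)
print([sparse_reward(p) for p in positions])   # signal only at the very end
print([shaped_reward(p) for p in positions])   # graded signal everywhere
```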

Lastly, RL is unstable in a way that supervised learning isn't: even successful implementations can fail to find a decent solution 20-30% of the time, depending on the random seed they are initialised with. In fact, there are very few real-world success stories featuring RL. Yet I think achieving superhuman performance on a wide range of tasks is a matter of when, not if, and so Amara's law applies: we overestimate the effects RL will have in the short run, but underestimate its effects in the long run.

17 comments


comment by interstice · 2018-09-25T19:26:08.580Z · LW(p) · GW(p)

In the past, people have said that neural networks could not possibly scale up to solve problems of a certain type, due to inherent limitations of the method. Neural net solutions have then been found using minor tweaks to the algorithms and (most importantly) scaling up data and compute. Ilya Sutskever gives many examples of this in his talk here. Some people consider this scaling-up to be "cheating" and evidence against neural nets really working, but it's worth noting that the human brain uses compute on the scale of today's supercomputers or greater, so perhaps we should not be surprised if a working AI design requires a similar amount of power.

On a cursory reading, it seems like most of the problems given in the papers could plausibly be solved by meta-reinforcement learning on a general-enough set of environments, of course with massively scaled-up compute and data. It may be that we will need a few more non-trivial insights to get human-level AI, but it's also plausible that scaling up neural nets even further will just work.

Replies from: ilya-shpitser
comment by Ilya Shpitser (ilya-shpitser) · 2018-09-26T18:52:35.409Z · LW(p) · GW(p)

Can't use regression methods for problems that are not regression problems. Causal inference is generally not a regression problem. It's not an issue of scale, it's an issue of wrong tool for the job.

Replies from: interstice
comment by interstice · 2018-09-26T22:36:05.115Z · LW(p) · GW(p)

Okay, but (e.g.) deep RL methods can solve problems that apparently require quite complex causal thinking, such as playing DotA. I think what is happening here is that while there is no explicit causal modelling at the lowest level of the algorithm, the learned model ends up building something that serves the function of one, because that is the simplest way to solve a general class of problems. See the above meta-RL paper for good examples of this. There seems to be no obvious obstruction to scaling this sort of thing up to human-level causal modelling. Can you point to a particular task needing causal inference that you think these methods cannot solve?

Replies from: ilya-shpitser
comment by Ilya Shpitser (ilya-shpitser) · 2018-09-26T23:37:28.519Z · LW(p) · GW(p)

Sure, and RL is not a regression problem. The reason RL methods can do causality is they can perform an essentially infinite number of experiments in toy worlds. DL can help RL scale up to more complex toy worlds, and some worlds that are not so toy anymore. But there, it's not DL on its own -- it's DL+RL.

DL is very useful, indeed! In fact, one could use DL as a "subroutine" for causal analysis of the sort Pearl worries about. In fact, people do this now.

Point being it's no longer "DL", it's "DL-as-a-way-to-do-regression + other-methods-that-use-regressions-as-a-subroutine."

"Can you point to a particular task needing causal inference that you think these methods cannot solve?"

To answer this -- anything that's not a regression problem. At best, you can use DL as a subroutine in some other larger algorithm that needs its own insights, unrelated to DL, to work. So why would DL get all the credit for solving the problem?

---

Dota's far from solved, so far.

Replies from: interstice, ricraz
comment by interstice · 2018-09-27T16:11:31.441Z · LW(p) · GW(p)

I agree that you do need some sort of causal structure around the function-fitting deep net. The question is how complex this structure needs to be before we can get to HLAI. It seems plausible to me (at least a 10% chance, say) that it could be quite simple, maybe just consisting of modestly more sophisticated versions of the RL algorithms we have so far, combined with really big deep networks.

Replies from: ilya-shpitser
comment by Ilya Shpitser (ilya-shpitser) · 2018-09-27T16:28:45.022Z · LW(p) · GW(p)

I disagree. Why would it be simple? Even people who try to get self-driving cars to work (e.g. at Uber) are now using an "engineering stack" approach, rather than formal RL+DL.

Replies from: interstice
comment by interstice · 2018-09-27T19:49:47.219Z · LW(p) · GW(p)

Well, the DotA bot pretty much just used PPO. AlphaZero used MCTS + RL, and OpenAI recently got a robot hand to do object manipulation with PPO and a simulator (the simulator was hand-built, but in principle it could be produced by unsupervised learning like in this). Clearly it's possible to get sophisticated behaviors out of pretty simple RL algorithms. It could be the case that these approaches will "run out of steam" before getting to HLAI, but it's hard to tell at the moment, because our algorithms aren't running with the same amount of compute + data as humans (for humans, I am thinking of our entire lifetime experiences as data, which is used to build a cross-domain optimizer).

re: Uber, I agree that at least in the short term most applications in the real world will feature a fair amount of engineering by hand. But the need for this could decrease as more power becomes available, as has been the case in supervised learning.

Replies from: ilya-shpitser
comment by Ilya Shpitser (ilya-shpitser) · 2018-09-27T20:32:15.213Z · LW(p) · GW(p)

It's easy to tell -- they will run out of steam. Want to bet money on a concrete claim? I love money.

Replies from: interstice
comment by interstice · 2018-09-27T22:08:06.447Z · LW(p) · GW(p)

Have something in mind?

Replies from: ilya-shpitser
comment by Ilya Shpitser (ilya-shpitser) · 2018-09-28T12:57:26.177Z · LW(p) · GW(p)

Well, I am fairly sure DL+RL will not lead to HLAI, on any reasonable timescale that would matter to us. You are not sure. Seems to me, we could turn this into a bet. Any sort of bet where you say DL+RL -> HLAI after X years, I will probably take the negation of, gladly.

Replies from: interstice
comment by interstice · 2018-09-28T17:07:21.607Z · LW(p) · GW(p)

Hmmm...but if I win the bet then the world may be destroyed, or our environment could change so much the money will become worthless. Would you take 20:1 odds that there won't be DL+RL-based HLAI in 25 years?

Replies from: DanielFilan, ilya-shpitser
comment by DanielFilan · 2018-09-29T06:59:40.190Z · LW(p) · GW(p)

If you think money will be worth a lot now but not much in the future, Ilya could pay you money now in exchange for you paying him a lot of money in the future.

comment by Ilya Shpitser (ilya-shpitser) · 2018-09-29T02:17:29.589Z · LW(p) · GW(p)

I often hear this response: "I can't make bets on my beliefs about the Eschaton, because they are about the Eschaton."

My response to this response is: you have left the path of empiricism if you can't translate your insight into [topic] (in this case "AI progress") into taking money via {bets with empirically verifiable outcomes} from folks without your insight.

---

If you are worried the world will change too much in 25 years, can you formulate a nearer-term bet you would be happy with? For example, something non-toy DL+RL would do in 5 years.

Replies from: interstice
comment by interstice · 2018-09-29T19:19:46.254Z · LW(p) · GW(p)

"I can't make bets on my beliefs about the Eschaton, because they are about the Eschaton." -- Well, it makes sense. Besides, I did offer you a bet taking into account a) that the money may be worth less in my branch b) I don't think DL + RL AGI is more likely than not, just plausible. If you're more than 96% certain there will be no such AI, 20:1 odds are a good deal.

But anyways, I would be fine with betting on a nearer-term challenge. How about: in 5 years, a bipedal robot that can run on rough terrain, as in this video, using a policy learned from scratch by DL + RL (possibly including a simulated environment during training), at 1:1 odds?

Replies from: ilya-shpitser
comment by Ilya Shpitser (ilya-shpitser) · 2018-10-02T04:50:33.223Z · LW(p) · GW(p)

No, that wouldn't surprise me in 5 years. Nor would that count as "scary progress" to me. That's bipedalism, not strides towards general intelligence.

---

"Well, it makes sense."

That makes your beliefs a religion, my friend.

comment by Richard_Ngo (ricraz) · 2018-09-27T05:27:57.372Z · LW(p) · GW(p)

OpenAI Five is very close to being superhuman at Dota. Would you be surprised if it got there in the next few months, without any major changes?

Replies from: ilya-shpitser