Deep limitations? Examining expert disagreement over deep learning
post by Richard_Ngo (ricraz) · 2021-06-27T00:55:53.327Z · LW · GW · 6 comments
This is a link post for https://link.springer.com/article/10.1007/s13748-021-00239-1
A recent publication by Carla Zoe Cremer, who's working at the Future of Humanity Institute:
We conducted 25 expert interviews resulting in the identification of 40 limitations of the deep learning approach and 5 origins of expert disagreement. These origins are open scientific questions that partially explain different interpretations by experts and thereby elucidate central issues in AI research. They are: abstraction, generalisation, explanatory models, emergence of planning and intervention. We explore both optimistic and pessimistic arguments that are related to each of the five key questions. We explore common beliefs that underpin optimistic and pessimistic argumentation. Our data provide a basis upon which to construct a research agenda that addresses key deep learning limitations.
- Abstraction: Do current artificial neural networks (ANNs) form abstract representations effectively?
- Generalisation: Should ANNs’ ability to generalise inspire optimism about deep learning?
- Explanatory, causal models: Is it necessary, possible and feasible to construct compressed, causal, explanatory models of the environment as described in Lake et al. (2017) using deep learning?
- Emergence of planning: Will sufficiently complex environments enable deep learning algorithms to develop the capacity for hierarchical, long-term reasoning and planning?
- Intervention: Will deep learning support and require learning by intervening in a complex, real environment?
Personally, I'm fairly confident that the first three won't be major problems for deep learning. I'm much more uncertain about the fourth and fifth, since they correspond to types of data that seem quite difficult to obtain. (I'm happy to agree with the fourth in principle, but in practice the "sufficient complexity" might be well beyond the sorts of training environments AI researchers currently think about.)
6 comments
comment by gwern · 2021-06-27T02:05:22.718Z · LW(p) · GW(p)
Interviews were conducted in 2019 to early 2020
For context, this timing implies that all of these are pre-GPT-3/CLIP/DALL-E/MLP-Mixer & my scaling-hypothesis writeup, possibly pre-MuZero and much of the recent planning/model-based DRL work (eg MuZero Unplugged, Decision Transformer), and pre much of the Distill.pub Circuits work & the semi-supervised revolution. Already some of the quotes are endearingly obsolete:
“[For instance] using convolutional attention mechanisms and applying it to graphs structures and training to learn how to represent code by training it on GitHub corpora…that kind of incremental progress would carry us to [...] superintelligence.” (P21).
(Convolutions? OK grandpa. But he's right that the program synthesis Transformers trained on Github are pretty sweet.*) Unfortunately still contemporary are the pessimistic quotes:
“Those people who say that’s going to continue are saying it as more of a form of religion. It’s blind faith unsupported by facts. But if you have studied cognition, if you have studied the properties of language… [...] you recognise that there are many things that deep learning [...] right now isn’t doing.” (P23).
“My hunch is that deep learning isn’t going anywhere. It has very good solutions for problems where you have large amounts of labelled data, and fairly well-defined tasks and lots of compute thrown at problems. This doesn’t describe many tasks we care about.” (P10).
“If you think you can build the solution even if you don’t know what the problem is, you probably think you can do AI” (P2).
I assume if interviewed now, they'd say the same things but even more loudly and angrily - the typical pessimism masquerading as intellectual seriousness.
* I wrote this before OA/GH Copilot, but it makes the point even more strongly.
comment by carlazoe · 2021-06-30T09:56:38.105Z · LW(p) · GW(p)
I so appreciate your candid reaction.
Here's just a quick response. The intended point of the paper was to allow readers to engage with the position opposite to the one they hold at the time. If read with attention to detail and to the arguments that could change one's mind, it is unlikely to strengthen the reader's views; instead it should make the reader more uncertain about their position.
There's considerable fuzziness and speculation at each position along the spectrum from optimism to pessimism. No position depended on a few papers alone, so I disagree with the claim that progress within the last year will make the analysis completely irrelevant and tip the balance very clearly to one side. Worldviews which are non-falsifiable at this stage played a role in views on both sides.
I can confirm that the experts I interviewed were neither loud nor angry. We should probably not assume (no matter which side of the debate we support) that the views of anonymous experts who do not share our views aren't rooted in intellectual seriousness.
Thanks for reading my paper Gwern!
comment by Rohin Shah (rohinmshah) · 2021-06-29T14:11:14.329Z · LW(p) · GW(p)
Planned summary for the Alignment Newsletter:
This paper reports on the results of a qualitative survey of 25 experts conducted in 2019 and early 2020, on the possibility of deep learning leading to high-level machine intelligence (HLMI), defined here as an “algorithmic system that performs like average adults on cognitive tests that evaluate the cognitive abilities required to perform economically relevant tasks”. Experts disagreed strongly on whether deep learning could lead to HLMI. Optimists tended to focus on the importance of scale, while pessimists tended to emphasize the need for additional insights.
Based on the interviews, the paper gives a list of 40 limitations of deep learning that some expert pointed to, and a more specific list of five areas that both optimists and pessimists pointed to as in support of their views (and thus would likely be promising areas to resolve disagreements). The five areas are (1) abstraction, (2) generalization, (3) explanatory, causal models, (4) emergence of planning, and (5) intervention.
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-02-21T17:45:44.552Z · LW(p) · GW(p)
Came here from this thread about whether ML experts can readily provide examples of barriers between today and AGI. What a blast from the past. Seems like most of the alleged barriers that ML experts cited in this 2020 survey have already been smashed through in 2023. (I'm looking at Table 1, which contains things like "Insufficient ability to construct and decompose sentences according to grammatical rules.")
comment by delton137 · 2021-10-22T03:21:35.156Z · LW(p) · GW(p)
Here are my opinions on what deep learning can do, FWIW -
1. (abstraction) yes, but they aren't sample-efficient!
2. (generalization) eh, not if you define generalization as going out of distribution (note: that's not how it's normally defined in the ML literature). Deep learning systems can barely generalize outside their training distribution at all. The one exception I know of is how GPT-3 learned addition, but even then it broke down for large numbers. Some GPT-3 generalization failures can be seen here. (A toy version of this train-small, test-large evaluation is sketched at the end of this comment.)
3. (causality) maybe?
4. (long-term planning) - I think DL can do this, but not necessarily using the same kind of hierarchical planning framework that humans seem to use
5. (need for intervention) - is this getting at embodiment? I'm a bit unclear. In any case, it doesn't seem really critical to me.
So for me, the main issues with deep learning are the low sample efficiency and the lack of generalization ability. That's why I'm skeptical that just scaling up the deep learning methods we have now can lead to true AGI, although it might get to something which is, for many practical purposes, pretty close.
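To make the generalization point concrete, here is a minimal sketch (mine, not the original survey's) of the kind of out-of-distribution test described in point 2: fit a small network on small-number addition, then evaluate it far outside the training range. The model choice, number ranges, and hyperparameters are arbitrary illustrative assumptions.

```python
# Sketch of an out-of-distribution (OOD) generalization test: train on
# addition of small numbers, evaluate on much larger ones. Illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def make_pairs(low, high, n=5000):
    """Random integer pairs in [low, high) with their sums as targets."""
    X = rng.integers(low, high, size=(n, 2)).astype(float)
    return X, X.sum(axis=1)

X_train, y_train = make_pairs(0, 50)    # in-distribution: small numbers
X_test, y_test = make_pairs(0, 50)      # held-out, same distribution
X_ood, y_ood = make_pairs(500, 1000)    # far outside the training range

# Scaling is fit on training data only, as usual -- which is exactly why
# OOD inputs land in the network's saturated region.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                 max_iter=5000, random_state=0),
).fit(X_train, y_train)

print("in-distribution MAE:", np.abs(model.predict(X_test) - y_test).mean())
print("OOD MAE:            ", np.abs(model.predict(X_ood) - y_ood).mean())
# Expected pattern: small error in-distribution, large error OOD -- the
# network interpolates the training range rather than learning "addition".
```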
comment by IlyaShpitser · 2021-06-27T12:58:05.919Z · LW(p) · GW(p)
3: No, that will never work with DL by itself (e.g. as fancy regressions).
4: No, that will never work with DL by itself (e.g. as fancy regressions).
5: I don't understand this question, but people already use DL for RL, so the "support" part is already true. If the question is asking whether DL can substitute for doing interventions, then the answer is a very qualified "yes," but the secret sauce isn't DL, it's other things (e.g. causal inference) that use DL as a subroutine.
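For concreteness, here is a minimal sketch (not from the comment itself) of the "DL as a subroutine" pattern on point 5: backdoor adjustment (the g-formula) with a plug-in outcome model, on synthetic data. The small sklearn network is a stand-in for a deep net, and all variable names and data-generating assumptions are made up for illustration.

```python
# Sketch of causal inference using a flexible regressor as a subroutine:
# backdoor adjustment (g-formula) on synthetic confounded data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 10_000

# Synthetic data: confounder Z drives both treatment A and outcome Y.
Z = rng.normal(size=n)
A = (rng.random(n) < 1.0 / (1.0 + np.exp(-Z))).astype(float)  # confounded
Y = 2.0 * A + 3.0 * Z + rng.normal(size=n)                    # true ATE = 2.0

# Step 1: fit an outcome model E[Y | A, Z]. This regression is the slot
# where a deep net would go; a small MLP stands in here.
X = np.column_stack([A, Z])
mu = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                  random_state=0).fit(X, Y)

# Step 2: g-formula -- average the model's predictions over the observed
# confounders with treatment set to 1 and to 0, then take the difference.
mu1 = mu.predict(np.column_stack([np.ones(n), Z])).mean()
mu0 = mu.predict(np.column_stack([np.zeros(n), Z])).mean()

print("naive difference in means:", Y[A == 1].mean() - Y[A == 0].mean())  # biased
print("adjusted ATE estimate:   ", mu1 - mu0)                             # ~2.0
```

The causal content lives in step 2, not in the regressor: swapping the MLP for any other function approximator leaves the identification argument unchanged, which is the sense in which DL is "a subroutine" rather than the secret sauce.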
---
The problem is, most folks who aren't doing data science for a living themselves only view data science advances through the lens of hype, fashion trends, and press releases, and so get an entirely wrong sense of what is truly groundbreaking and important.