Why aren't more people in AIS familiar with PDP?
post by Prometheus · 2023-09-01T15:27:03.562Z · LW · GW · 6 comments
This is a question post.
Why haven't more people who work on alignment read Parallel Distributed Processing? Why do so few even seem familiar with Rumelhart's work? This is the fundamental model of cognition and behaviour that all of modern AI is built on, the work that Hinton used for most of his insights. The model has been repeatedly validated over the years by neural network performance and capabilities, yet most seem unfamiliar with it or uninterested in it. Am I missing something? Is there some grounded reason to think PDP and connectionism will fail at a certain point? It seems this should be required reading for anyone wanting to get into alignment research.
Answers
answer by cfoster0

Parallel distributed processing (as well as "connectionism") is just an early name for the line of work that was eventually rebranded as "deep learning". They're the same research program.
↑ comment by Prometheus · 2023-09-01T20:41:11.860Z · LW(p) · GW(p)
But it's also an entire school of thought in cognitive science. I feel like DL is the method, but it comes without the understanding that these systems are based on well-thought-out, mechanistic rules for how cognition fundamentally works, potentially building toward a unified theory of cognition and behaviour.
Replies from: Iknownothing
↑ comment by Iknownothing · 2023-12-23T02:11:12.819Z · LW(p) · GW(p)
What do you think someone who knows about PDP knows that someone with a good knowledge of DL doesn't?
And why would it be useful?
6 comments
comment by interstice · 2023-09-01T18:26:24.057Z · LW(p) · GW(p)
My uninformed impression was that it had essentially been superseded by deep learning, SGD, etc., so most of the parts relevant to modern AI are already contained in the standard deep learning curriculum. What's the most important PDP idea that you think is neglected by this curriculum?
Replies from: Prometheus
↑ comment by Prometheus · 2023-09-01T18:58:14.341Z · LW(p) · GW(p)
My answer is a bit vague, but I would say that the current DL curriculum tells you how these things work without going into the reasoning about cognition that allowed these ideas to exist in the first place.
comment by johnswentworth · 2023-09-01T16:51:16.828Z · LW(p) · GW(p)
Haven't read it, but my default guess based on approximately-zero exposure would be that it's one of those "theories" which basically say "hey, let's stick lots of simple units together and run simulations on them", while answering basically-zero questions which I actually care about. (For instance: does it tell me how internal structures or activation-patterns in a mental system will map to specific structures in the external environment? Or how to detect the formation of general-purpose search?) Or, a lower bar: does it make any universal quantitative predictions about neural systems at all? (No, "bigger system do better" does not count as quantitative. If it successfully predicted e.g. the parameters of the scaling curves of modern nets 20+ years before those nets were built, then I'd definitely pay attention.) I'd be surprised if there's any real model of cognition there at all, as opposed to somebody just vibing that neural-network-style stuff is vaguely good somehow.
So to answer your question: because I have those expectations, I haven't looked into the topic. If my expectations are wrong, then maybe that's a mistake on my part.
Replies from: Prometheus
↑ comment by Prometheus · 2023-09-01T19:06:48.057Z · LW(p) · GW(p)
I don't have an adequate answer for this, since these models are incomplete. But the way I see it, these people (Hinton, Rumelhart, McClelland, Smolensky) had a certain way of mathematically reasoning about cognition, and that reasoning created most of the breakthroughs we see today in AI (backprop, multi-layer models, etc.). It seems trying to utilize that model of cognition could give rise to new insights about the questions you're asking, attack the problem from a different angle, or help create a grounded paradigm for alignment research to build on.
comment by Hastings (hastings-greer) · 2023-09-01T17:00:25.164Z · LW(p) · GW(p)
For those who haven’t heard of PDP, what in your opinion was its most impressive advance prediction that was not predicted by other theories?
Replies from: Prometheus
↑ comment by Prometheus · 2023-09-01T18:49:57.417Z · LW(p) · GW(p)
You could say it "predicted" everything post-AlexNet, but it's more that it created the fundamental understanding for everything post-AlexNet to exist in the first place. It provided the mathematical models of cognition that all of modern AI is built on. This is how we got backpropagation, "hidden" layers, etc.
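For anyone who hasn't seen what a PDP-style model looks like concretely, here is a minimal sketch, written from scratch for illustration rather than taken from the book: a network of simple sigmoid units with one "hidden" layer, trained on XOR with the generalized delta rule (what we now call backpropagation). The layer sizes, learning rate, seed, and step count are arbitrary choices for the example, not anything from the PDP volumes.

```python
# Toy PDP-style network: layers of simple units, each computing a weighted sum
# of its inputs passed through a sigmoid, trained with the generalized delta
# rule (backpropagation). Illustrative sketch only; hyperparameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: a task a single layer of units cannot solve, but a hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a 2-4-1 network, small random initial values.
W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)

lr = 1.0
for step in range(5000):
    # Forward pass: activation flows through weighted sums and squashing functions.
    h = sigmoid(X @ W1 + b1)        # hidden-unit activations
    out = sigmoid(h @ W2 + b2)      # output-unit activation

    # Backward pass: error signals propagate back through the layers.
    delta_out = (out - y) * out * (1 - out)      # error term at the output unit
    delta_h = (delta_out @ W2.T) * h * (1 - h)   # error terms at the hidden units

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_h
    b1 -= lr * delta_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]; results depend on the seed
```

The point of the sketch is just the shape of the model the PDP volumes describe: activation flowing forward through networks of simple units, error signals flowing backward to adjust the connection weights, and everything the system "knows" stored in those weights.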