Don't expect AGI anytime soon
post by cveres · 2022-10-10T22:38:21.764Z · LW · GW · 6 comments
Comments sorted by top scores.
comment by SD Marlow (sd-marlow) · 2022-10-11T04:25:38.367Z · LW(p) · GW(p)
Python syntax is closer to natural language, which plays into what LLMs do best. I don't think the "symbolic" aspect plays into this in any way, and that kind of misses the argument on symbolic reasoning (that LLMs are still just doing correlation, and have no "grounding" of what the symbols mean, nor does any processing happen at that grounding level).
I'm still confused by your position. You say DL is capturing symbolic values (in this one case), but also that DL is going to fail (because...?).
Replies from: cveres, lahwran
↑ comment by cveres · 2022-10-11T10:54:48.631Z · LW(p) · GW(p)
So what I am saying is that Python is symbolic, which no one doubts, and that language is also symbolic, which neural network people doubt. That is how the symbolic argument becomes important: whatever LLMs do with Python, I suggest they do the same thing with natural language. And since whatever they are doing with Python is the wrong thing, I am suggesting that what they do with language is also "the wrong thing".
In other words, DL is not doing symbolic reasoning with Python or natural language, and it will fail in cases where Python or natural language require symbolic reasoning.
Replies from: lincolnquirk
↑ comment by lincolnquirk · 2022-10-11T13:15:35.364Z · LW(p) · GW(p)
I think your argument is wrong, but interestingly so. I think DL is probably doing symbolic reasoning of a sort, and it sounds like you think it is not (because it makes errors?).
Do you think humans do symbolic reasoning? If so, why do humans make errors? Why do you think a DL system won't be able to eventually correct its errors in the same way humans do?
My hypothesis is that DL systems are doing a sort of fuzzy finite-depth symbolic reasoning -- they have the capacity to understand the productions at a surface level and can apply them (subject to contextual clues, in an error-prone way) step by step, but once you ask for sufficient depth they get confused and fail. Unlike humans, feedforward neural nets can't yet think for longer and churn step by step; but if someone were to figure out a way to build a looping option into the architecture, I wouldn't be surprised to see DL systems go a lot further on symbolic reasoning than they currently do.
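Here is a toy sketch of the distinction I have in mind (plain Python, purely illustrative, not any real architecture): a rewrite rule that a fixed-depth pass can only apply a bounded number of times, versus the same rule wrapped in an outer loop that churns until it reaches a fixed point.

```python
import re

def one_step(expr: str) -> str:
    """Apply a single surface-level rewrite: evaluate one innermost (a+b)."""
    return re.sub(r"\((\d+)\+(\d+)\)",
                  lambda m: str(int(m.group(1)) + int(m.group(2))),
                  expr, count=1)

def fixed_depth(expr: str, depth: int = 3) -> str:
    """Analogue of a feedforward pass: a hard cap on how many steps it can take."""
    for _ in range(depth):
        expr = one_step(expr)
    return expr

def looped(expr: str, max_iters: int = 100) -> str:
    """Analogue of a looping architecture: keep stepping until nothing changes."""
    for _ in range(max_iters):
        new = one_step(expr)
        if new == expr:
            break
        expr = new
    return expr

deep = "((((1+2)+3)+4)+5)"
print(fixed_depth(deep))  # '(10+5)' -- runs out of depth one step short
print(looped(deep))       # '15' -- the loop handles arbitrary nesting
```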
Replies from: cveres
↑ comment by cveres · 2022-10-11T21:44:18.729Z · LW(p) · GW(p)
I think humans do symbolic as well as non-symbolic reasoning. This is what is often called "hybrid". I don't think DL is doing symbolic reasoning, but LeCun is advocating some sort of alternative symbolic system, as you suggest. Errors are a bit of a side issue because both symbolic and non-symbolic systems are error-prone.
The paradox that I point out is that Python is symbolic, yet DL can mimic its syntax to a very high degree. This shows that DL cannot be informative about the nature of the phenomenon it is mimicking. You could argue that Python is not symbolic; this would obviously be wrong. But people DO use the same argument to show that natural language and cognition are not symbolic. I am saying this could be wrong too. So DL is not uncovering some deep properties of cognition... it is merely doing some clever statistical mappings.
BUT it can only learn these mappings where the symbolic system produces lots of examples, as language does. Where the symbol system is used for planning, creativity, etc., DL struggles to learn.
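To make "clever statistical mappings" concrete, here is a toy sketch I am making up purely for illustration (a character-level n-gram model, nothing like a real LLM): given lots of examples, it reproduces Python-looking surface syntax with no grounding in what the symbols mean and no interpreter behind it.

```python
import random
from collections import defaultdict

snippets = [
    "def add(a, b):",
    "    return a + b",
    "def sub(a, b):",
    "    return a - b",
    "def mul(a, b):",
    "    return a * b",
]
corpus = "\n".join(snippets * 50)  # lots of examples for the model to correlate over

ORDER = 4
counts = defaultdict(list)
for i in range(len(corpus) - ORDER):
    # Record which character follows each 4-character context.
    counts[corpus[i:i + ORDER]].append(corpus[i + ORDER])

def generate(seed: str, length: int = 80) -> str:
    """Extend the seed one character at a time using only n-gram statistics."""
    out = seed
    for _ in range(length):
        nxt = counts.get(out[-ORDER:])
        if not nxt:
            break
        out += random.choice(nxt)
    return out

# Surface-plausible Python, produced with no notion of variables,
# types, or execution -- just character statistics.
print(generate("def "))
```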
↑ comment by the gears to ascension (lahwran) · 2022-10-11T04:31:21.034Z · LW(p) · GW(p)
my read was "we've already got models as strong as they're going to get, and they're not agi". I disagree that they're as strong as they're going to get.
Replies from: cveres
↑ comment by cveres · 2022-10-11T10:57:55.736Z · LW(p) · GW(p)
No, I didn't say they are as strong as they are going to get. But they are strong enough to do some Python, which shows that neural networks can make a symbolic language look as though it wasn't one. In other words, they have no value in revealing anything about the underlying nature of Python, or language (my claim).