David Deutsch on Universal Explainers and AI
post by alanf · 2020-09-24T07:50:23.615Z
This is a question post.
In The Beginning of Infinity, David Deutsch claims that the world is explicable and that human beings can explain anything that can be explained (Chapter 3):
The astrophysicist Martin Rees has speculated that somewhere in the universe ‘there could be life and intelligence out there in forms we can’t conceive. Just as a chimpanzee can’t understand quantum theory, it could be there are aspects of reality that are beyond the capacity of our brains.’ But that cannot be so. For if the ‘capacity’ in question is mere computational speed and amount of memory, then we can understand the aspects in question with the help of computers – just as we have understood the world for centuries with the help of pencil and paper. As Einstein remarked, ‘My pencil and I are more clever than I.’ In terms of computational repertoire, our computers – and brains – are already universal (see Chapter 6). But if the claim is that we may be qualitatively unable to understand what some other forms of intelligence can – if our disability cannot be remedied by mere automation – then this is just another claim that the world is not explicable. Indeed, it is tantamount to an appeal to the supernatural, with all the arbitrariness that is inherent in such appeals, for if we wanted to incorporate into our world view an imaginary realm explicable only to superhumans, we need never have bothered to abandon the myths of Persephone and her fellow deities.
So human reach is essentially the same as the reach of explanatory knowledge itself. An environment is within human reach if it is possible to create an open-ended stream of explanatory knowledge there. That means that if knowledge of a suitable kind were instantiated in such an environment in suitable physical objects, it would cause itself to survive and would then continue to increase indefinitely. Can there really be such an environment? This is essentially the question that I asked at the end of the last chapter – can this creativity continue indefinitely? – and it is the question to which the Spaceship Earth metaphor assumes a negative answer.
Deutsch claims that an AI would be a universal explainer (Chapter 7):
There is a deeper issue too. AI abilities must have some sort of universality: special-purpose thinking would not count as thinking in the sense Turing intended. My guess is that every AI is a person: a general-purpose explainer. It is conceivable that there are other levels of universality between AI and ‘universal explainer/constructor’, and perhaps separate levels for those associated attributes like consciousness. But those attributes all seem to have arrived in one jump to universality in humans, and, although we have little explanation of any of them, I know of no plausible argument that they are at different levels or can be achieved independently of each other. So I tentatively assume that they cannot. In any case, we should expect AI to be achieved in a jump to universality, starting from something much less powerful. In contrast, the ability to imitate a human imperfectly or in specialized functions is not a form of universality. It can exist in degrees. Hence, even if chatbots did at some point start becoming much better at imitating humans (or at fooling humans), that would still not be a path to AI. Becoming better at pretending to think is not the same as coming closer to being able to think.
So, according to Deutsch, there is no qualitative distinction between an AI and a human being. Comments?
Answers
answer by TAG

Quantitative limitations amount to qualitative limitations in this case.
The only truly universal Turing machine has infinite memory and is infinitely programmable. Neither is true of humans.
We can't completely wipe and reload our brains, so we might be forever constrained by some fundamental hardcoding, something like Chomskyan innate linguistic structures or Kantian perceptual categories.
And having quantitative limitations puts a ceiling on which concepts and theories we can entertain, which is effectively a qualitative limit.
AIs are also finite, although they might have less restrictive limits.
There's no jump to universality because there is no jump to infinity.
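To make that concrete (a back-of-the-envelope illustration, taking the commonly cited synapse count as a rough assumption): a machine with at most $N$ bits of state can occupy at most $2^N$ distinct configurations, so it is formally a finite-state machine, not a universal Turing machine. For a brain with on the order of $10^{14}$ synapses storing roughly one bit each,

$$\text{distinct states} \le 2^N \approx 2^{10^{14}},$$

which is astronomically large but still finite.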
answer by Charlie Steiner

Turing completeness misses some important qualitative properties of what it means for people to understand something. When I understand something I don't merely compute it, I form opinions about it, I fit it into a schema for thinking about the world, I have a representation of it in some latent space that allows it to be transformed in appropriate ways, etc.
I could, given a notebook of infinite size, infinite time, and lots of drugs, probably compute the Ackermann function A(5,5). But this has little to do with my ability to understand the result in the sense of being able to tell a story about the result to myself. In fact, there are things I can understand without actually computing them, so long as I can form opinions about them, fit them into a picture of the world, represent them in a way that allows for transformations, etc.
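For concreteness, here is a minimal Python sketch of the standard two-argument Ackermann recursion (an illustration, not code from the thread; even with memoization it fails long before A(5,5), since A(4,2) alone has 19,729 decimal digits):

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)  # the naive recursion nests deeply even for small inputs

@lru_cache(maxsize=None)
def ackermann(m: int, n: int) -> int:
    """Standard two-argument Ackermann-Peter function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(3, 3))  # 61 -- still easy
# ackermann(4, 2) already has 19,729 decimal digits;
# ackermann(5, 5) is far beyond any physically realizable computation.
```

Being able in principle to run this recursion is a very different thing from being able to tell a story to yourself about what the result means.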
comment by alanf · 2020-09-25T21:09:24.241Z
The quotes aren't about Turing completeness. What you wrote is irrelevant to the quoted material.
comment by Charlie Steiner · 2020-09-25T22:18:37.492Z
it could be there are aspects of reality that are beyond the capacity of our brains.’ But that cannot be so. For if the ‘capacity’ in question is mere computational speed and amount of memory, then we can understand the aspects in question with the help of computers
I'm disagreeing with the notion (equivalent to taking Turing completeness as understanding-universality) that the human capacity for understanding is the capacity for universal computation.
comment by alanf · 2020-09-26T07:31:07.093Z
If you read the quote carefully, you will find that it is incompatible with the position you are attributing to Deutsch. For example, he writes about
levels of universality between AI and ‘universal explainer/constructor’,
which would hardly be necessary if computational universality were equivalent to being a universal explainer.
comment by Charlie Steiner · 2020-09-26T20:39:20.387Z
That's a good point. It's still not clear to me that he's talking about precisely the same thing in both quotes. The point also remains that if you're not associating "understanding" with a class as broad as Turing-completeness, then you can construct things that humans can't understand, e.g. by hiding them in complex patterns or by using human blind spots.
comment by TAG · 2020-09-26T09:30:23.058Z
The quotes aren’t about Turing completeness
But that creates its own problem: there's no longer a strong reason to believe in Universal Explanation. We don't know that humans are universal explainers, because if there is something a human can't think of... well, a human can't think of it! All we can do is notice confusion.
answer by AnthonyC

For if the ‘capacity’ in question is mere computational speed and amount of memory, then we can understand the aspects in question with the help of computers – just as we have understood the world for centuries with the help of pencil and paper. As Einstein remarked, ‘My pencil and I are more clever than I.’
Sure, but then the understanding must lie in the combined human-pencil system, not the human brain alone, just as a human slowly following instructions in Searle's Chinese room (forgive my use of this thought experiment, but it is an extension of the same idea) doesn't understand Mandarin, even if the instructions they're executing do. An AI's CPU is not itself conscious, even if the AI is. The key in Einstein's case is that after writing everything down as a memory aid and an error-correcting mechanism, the important points the pencil made are stored and processed in his brain, and he can reason further with them. You could show me a Matrioshka brain simulating a human with Planck-scale precision, and prove to me that it did so, but even if I built the thing myself I still wouldn't understand it in the way I usually use the word "understand." As in thermodynamics, at some point more is qualitatively different.
Now, if you very slowly augmented my brain with better hardware (and/or wetware), such that my thoughts interfaced seamlessly across my evolved biological brain and any added components, then I would come to consider those components part of my mind rather than external tools. So in that sense, yes, future-me could come to understand anything.
That just doesn't mean he could come back in time and explain it in a way current-me could grasp, any more than I could meaningfully explain the implications of group theory for semiconductor physics to kindergarten-me (early-high-school-me could probably follow it with some extra effort, though). Kindergarten-me knew enough basic arithmetic and could have learned the symbol manipulations needed for Boolean logic (I think Scratch and Scratch Jr are proof enough that young kids are capable of this if it is presented correctly), so there's no computational operation he couldn't perform. He'd just have no idea why he'd be doing any of it, or how it related to anything else, and if he forgot it he couldn't re-derive it and might not even notice the loss. It would not be truly part of him.