Ask LW: ω-self-aware systems
post by robertzk (Technoguyrob) · 2012-12-16T22:18:52.514Z · LW · GW · Legacy · 10 comments
I was having a conversation with a religious friend of mine who argued that even if materialism were true, we would never be able to fully understand or replicate human intelligence, because a physical system cannot understand itself: it would need to use only the resources contained within it to perform that understanding, which excludes the possibility of full understanding.
I countered with the following argument. Assume you are what your neurons are doing, and suppose you wish to extend your consciousness to fully grasp yourself: to be aware of the systems-level functioning of your neuronal circuits, and possibly also of the smaller biochemical details and the larger conceptual maps. Since consciousness offers us gestalt, parallel information processing, and we will assume it can be extended to arbitrarily large concurrent information flow, one could create a (much larger) co-brain which consciously perceives all the functioning of your original brain. You can then identify with both your old consciousness and the newly added (much more expansive) co-consciousness.
The problem is that you now do not understand the full brain-and-co-brain system. But you can perform the process again, adding a co-co-brain which gives a real-time gestalt understanding of the co-brain's consciousness. Since this process may be repeated to arbitrarily deep nesting levels, we can say that any physical system that is like the brain is ω-self-aware, where n-self-aware refers to a system nested to level n. Since we do not expect the neural structures required to embed an n-self-aware system within an (n+1)-self-aware system to be functionally different at any level n, we can say we have satisfactorily produced a physical system with full understanding of itself. Denying this would be equivalent to claiming we do not understand the natural numbers because we have not written every one of them down.
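For concreteness, the nesting can be written out explicitly. This is only a rough sketch; the notation B_n is shorthand introduced here, not anything standard:

```latex
% Shorthand introduced for this sketch only.
\begin{align*}
  B_0     &= \text{the original brain},\\
  B_{n+1} &= B_n \text{ together with a co-brain consciously aware of all of } B_n,\\
  n\text{-self-aware} &:\ B_n \text{ contains, for each } k < n, \text{ a subsystem aware of } B_k,\\
  \omega\text{-self-aware} &:\ \text{the system is } n\text{-self-aware for every } n \in \mathbb{N}.
\end{align*}
```

The claim in the post is then that the construction step from B_n to B_{n+1} is the same at every level, so nothing new is needed beyond what builds B_1.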
Does anyone see any trouble with this argument?
10 comments
comment by evand · 2012-12-16T23:00:31.982Z · LW(p) · GW(p)
Has your friend ever written a quining program? Is his argument also an argument against the existence of such? What does he see as the difference between "understand" and "be capable of fully specifying"?
I suspect that, for anyone who has written (or at least studied in detail) a quining program, and has fully specified a definition of "understand" by which the program either does or does not understand itself, the question will be dissolved, and cease to hold much interest.
In other words, I don't believe you need to invoke arbitrarily deep recursion to make the argument. I think you just need to specify that the co-brain be a quining computer system, to whatever level of fidelity is required to make you happy.
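For readers who haven't seen one: below is a textbook quine in Python (a standard example, not something from evand's comment), i.e. a program whose output is exactly its own source, a small system that fully specifies itself using only its own resources.

```python
# The two lines below form a minimal quine: their output is exactly those
# two lines of source, so the program specifies itself from its own resources.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Whether self-reproduction of this kind counts as the program "understanding" itself is precisely the definitional question evand raises above.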
Replies from: Technoguyrob
↑ comment by robertzk (Technoguyrob) · 2012-12-16T23:20:44.734Z · LW(p) · GW(p)
Thanks, this should work!
comment by TimS · 2012-12-17T00:36:02.081Z · LW(p) · GW(p)
In addition to what others have said, the trivial case of replicating human intelligence is actually pretty easy - conceive a child :). This is true even though we don't currently understand human intelligence well enough to create it "artificially."
comment by Matt_Simpson · 2012-12-16T22:42:47.264Z · LW(p) · GW(p)
I would challenge your friend on two premises that are pretty clearly false:
1) You need to fully understand a human brain in order to create intelligence. This is no more true than the statement "you need to fully understand Microsoft Windows in order to create an operating system" or "you need to fully understand the internal combustion engine in order to create an automobile."
2) A human brain needs to fully understand some form of intelligence at all in order to create intelligence. This is false because the human brain very rarely fully understands something using just the resources within itself. How many times have you used pen and paper in order to solve a math problem? The human brain just needs to be able to understand the intelligence in pieces, well enough to write down equations that it can tackle one at a time or to write useful computer programs. This is related to your co-brain idea, though this is more general - in your argument, the co-brain is another conscious entity, while in my argument it could be a bunch of tools, or maybe tools plus more conscious entities.
Replies from: Technoguyrob
↑ comment by robertzk (Technoguyrob) · 2012-12-16T22:47:17.645Z · LW(p) · GW(p)
Thanks! I presented him with these arguments as well, but they are more familiar on LW and so I didn't see the utility of posting them here. The above argument felt more constructive in the mathematical sense. (Although my friend is still not convinced.)
comment by Dre · 2012-12-16T23:00:11.813Z · LW(p) · GW(p)
I think you need to start by cashing out "understand" better. Certainly no physical system can simulate itself with full resolution. But there are all sorts of things we can't simulate like this. Understanding (as I would say it's more commonly used) usually involves finding out which parts of the system are "important" to whatever function you're concerned with. For example, we don't have to simulate every particle in a gas because we have gas laws. And I think most people would say that gas laws show more understanding of thermodynamics than whatever you would get out of a complete simulation anyway.
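As a rough illustration of the gas-law point (a hypothetical sketch; the numbers and function name are mine, not Dre's):

```python
# "Law-level" understanding: the ideal gas law gives the pressure in one
# line of arithmetic, with no need to simulate ~10^23 individual particles.
R = 8.314  # universal gas constant, J/(mol*K)

def pressure_ideal(n_moles: float, temp_kelvin: float, volume_m3: float) -> float:
    """PV = nRT rearranged to P = nRT / V."""
    return n_moles * R * temp_kelvin / volume_m3

# One mole of gas at room temperature (298 K) in a one-litre box:
print(pressure_ideal(1.0, 298.0, 0.001))  # ~2.5e6 Pa

# The "complete simulation" alternative would track the positions and
# velocities of ~6e23 molecules: vastly more computation, and arguably
# no more understanding of why the pressure comes out as it does.
```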
Now the question is whether the brain actually does have any "laws" like this. IIRC, this is a relatively open question (though I do not follow neuroscience very closely) and in principle it could go either way.
I guess I don't really understand what the purpose of the argument is. Unless we can prove things about this stack of brains, what does it get us? And how far "down" the evolutionary ladder does this argument work? Are cats ω-self-aware? Computing clusters?
Replies from: Manfred, Technoguyrob
↑ comment by robertzk (Technoguyrob) · 2012-12-16T23:21:55.226Z · LW(p) · GW(p)
Good point. It might be that any 1-self-aware system is ω-self-aware.
comment by roystgnr · 2012-12-17T04:50:14.249Z · LW(p) · GW(p)
I would agree that it is impossible for a human being to losslessly emulate a copy of themselves with no external assistance. However, practical emulation may still be possible with the external assistance of enough exaflops of computer hardware.
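For scale, a common back-of-envelope estimate (rough, widely cited order-of-magnitude figures; none of this is from roystgnr's comment):

```python
# Very rough estimate of the raw event rate a whole-brain emulation might
# need to track (order-of-magnitude figures only, stated as assumptions).
neurons = 1e11        # roughly 10^11 neurons in a human brain
synapses_per = 1e4    # roughly 10^3-10^4 synapses per neuron
max_rate_hz = 100     # generous upper bound on average firing rate

events_per_second = neurons * synapses_per * max_rate_hz
print(f"~{events_per_second:.0e} synaptic events per second")  # ~1e+17
# On the order of 0.1 exa-events per second before accounting for the cost
# of modelling each event in any detail -- consistent with needing "enough
# exaflops" of hardware.
```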
comment by Vladimir_Nesov · 2012-12-24T09:33:08.441Z · LW(p) · GW(p)
"I countered with the following argument."
This sounds like a wrong thing to do, countering vague statements. Explain how something actually works instead of arguing with assertions that are essentially unrelated to how it works.