comment by XiXiDu · 2012-03-11T15:20:06.326Z · LW(p) · GW(p)
> One way in which people lose their sensitivity to such questions is that they train themselves to turn every problem into something that can be solved by their favorite formalized methods.
I tried to ask a question that best fits this community. The answer would be interesting even if it turns out to be a wrong question.
Besides, I am not joking when I say that I haven't thought about the whole issue. It simply does not have any priority at this point, because it is a very complex issue and I still have to learn other things first. I admit my ignorance here. Yet I used the chance to indirectly ask one of the leading experts in the field a question that I perceived to suit the Less Wrong community.
> So if it can't be turned into a program, or a Bayesian formula, or..., it's deemed to be meaningless.
I don't think that way at all. I think it is a fascinating possibility, and I am very glad that people like you take it seriously. I encourage you to keep it up and not let yourself be discouraged by negative reactions.
Yet I don't know what it would mean for a problem to be algorithmically or mathematically undefinable. I can only rely on my intuition here and admit that I feel that there are such problems. But that's it.
> But every formalism starts life as an ontology.
You pretty much lost me at ontology. All I really know about the term "ontology" is the Wikipedia abstract (I haven't read your LW posts on the topic either).
Please just stop here if I am wasting your time. I really didn't do the necessary reading yet.
To better fathom what you are talking about, here are three points you might or might not agree with:
1) "Experiences like "green" are ontologically basic, are not reducible to interacting nonmental parts."
Well, I don't know what "mental" denotes so I am unable to discern it from that which is "nonmental".
Isn't the reason that we perceive "green" instead of "particle interactions" the same reason that we don't perceive the earth to be a sphere? Due to a lack of information and our spatial relation to the earth, we perceive it as round from afar and as flat at close range.

If you view "green" as an approximation, then the fact that there are "subjects" that experience "green" can be deduced from fundamental limitations on the propagation and computation of information.

It is simply impossible for a human being to directly perceive the earth as a sphere; we can only deduce it. Does that mean that the flatness of the earth is ontologically basic, because it does not follow from physics? Well, it does follow.
2) "If consciousness is the spatial arrangement of particles, then you ought to be able to build a conscious machine in the Game of Life. It is Turing-complete after all, and most physicalists insist that consciousness is computable as well.
But I find that absurd. Sure you can get complex patterns, but how would these patterns share information? Individual cells aren't aware of whatever pattern they're in. If there is awareness, it should either cover only individual cells (no unified field of consciousness, only pixel-like awareness) or cover all cells - panpsychism, which isn't true either. It would be really weird to look at the board and say, this 10000x10000 section of the board is consciousness, but the 10x10 section to the left isn't, nor is this 10k x 10k section of noise.
Where is the information coming from that allows someone to make this judgment about consciousness? It doesn't seem to be in the board. So if you stick to the automaton, you have only two options."
The above has been written by user:muflax in a reply on Google+. I don't know enough about cellular automata, but it sure sounds like a convincing argument that Turing-completeness is insufficient.
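To make muflax's locality point concrete, here is a minimal sketch of a single Game of Life step (my own illustration, not from the thread): each cell's next state is computed from its eight neighbors alone, so no cell ever has access to the global pattern it happens to be part of.

```python
from collections import Counter

# Minimal Conway's Game of Life step (illustrative sketch).
# Each cell's update reads only its 8 neighbors -- strictly local
# information; no cell "sees" the pattern it belongs to.

def step(live):
    """live: set of (x, y) live cells -> set of live cells after one tick."""
    # Count, for every cell, how many live neighbors it has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker" oscillates with period 2, yet at no point did any cell
# consult more than its immediate neighborhood.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)) == blinker)  # True
```

Any global structure we read off the board (a glider, a blinker, a "conscious" region) is a description we impose from outside; the update rule itself only ever touches local state, which is exactly the asymmetry the quoted argument turns on.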
3) See my post here.
How is what I wrote there related to the overall problem, if at all, and do you agree?
> Before we "formalized" logic or arithmetic, we related to the ontological content of those topics: truth, reasoning, numbers...
I am not sure that truth, reasoning or numbers really exist in any meaningful sense because I don't know what it means for something to "exist". There are no borders out there.
> There was a time when there was no such thing as algebra, or calculus, or propositional logic. How were they invented? Look into that question...
I don't like the word "invented" very much. I think that everything is discovered. And the reason for the discovery is that on the level at which we reside, given that we all have similar computational limits, the world appears to have distinguishable properties that can be labeled by the use of shapes. But that's simply a result of our limitations rather than a hint at something more fundamental.
> The question is why it feels like something to be what we are: why is there any awareness.
Is the human mind a unity? Different parts seem to observe each other. What you call an "experience" might be the interactive inspection of conditioned data. A sort of hierarchical computation that decreases quickly. Your brain might have a strong experience of green but a weaker experience that you are experiencing green. That you have an experience of the experience of the experience of green is an induction that completely replaces the original experience of green and is observed instead.
I have this vision of consciousness as something similar to two cameras behind semi-transparent mirrors facing each other, fading as all computational resources are exhausted.
I don't know how this could possibly work out given a cellular automaton, though.
> Chalmers never asserts that you can't simulate consciousness, in the sense of making an abstract state-machine model that imitates the causal relations of consciousness with the world.
Wow, okay. I seem to have confused him with someone else.