It's not like anything to be a bat
post by Scott Alexander (Yvain) · 2010-03-27T14:32:52.050Z · LW · GW · Legacy · 192 comments
...at least not if you accept a certain line of anthropic argument.
Thomas Nagel famously challenged the philosophical world to come to terms with qualia in his essay "What is it Like to Be a Bat?". Bats, with sensory systems so completely different from those of humans, must have exotic bat qualia that we could never imagine. Even if we deduce all the physical principles behind echolocation, even if we could specify the movement of every atom in a bat's senses and nervous system that represents its knowledge of where an echolocated insect is, we still have no idea what it's like to feel a subjective echolocation quale.
Anthropic reasoning is the idea that you can reason by conditioning on your own existence. For example, the Doomsday Argument says that you would be more likely to exist in the present day if the overall number of future humans were medium-sized rather than humongous; therefore, since you do exist in the present day, there must be only a medium-sized number of future humans, and the apocalypse must be nigh, for values of nigh equal to "within a few hundred years or so".
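The arithmetic behind the Doomsday Argument can be sketched under the standard Gott-style assumption that your birth rank is uniform among all humans who will ever live; the ~60 billion figure for past births is a rough outside estimate, not from the post:

```python
# Doomsday-style bound: if my birth rank r is uniform in [1, N], where N is
# the total number of humans who will ever live, then with 95% confidence
# r falls in the last 95% of all births, i.e. N < r / (1 - 0.95) = 20 * r.

past_humans = 60e9   # rough estimate of humans born so far (assumption)
confidence = 0.95

upper_bound_total = past_humans / (1 - confidence)  # 1.2e12
future_humans = upper_bound_total - past_humans     # 1.14e12

print(f"95% upper bound on humans ever born: {upper_bound_total:.2e}")
print(f"Implied bound on future births:      {future_humans:.2e}")
```

Turning a bound on births into "a few hundred years" requires further assumptions about population growth; under continued exponential growth the remaining time shrinks dramatically.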
The Buddhists have a parable to motivate young seekers after enlightenment. They say - there are zillions upon zillions of insects, trillions upon trillions of lesser animals, and only a relative handful of human beings. For a reincarnating soul to be born as a human being, then, is a rare and precious gift, and an opportunity that should be seized with great enthusiasm, as it will be endless eons before it comes around again.
Whatever one thinks of reincarnation, the parable raises an interesting point. Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.
The phrase "for me to be an animal" may sound nonsensical, but "why am I me, rather than an animal?" is not obviously sillier than "why am I me, rather than a person from the far future?". If the doomsday argument is sufficient to prove that some catastrophe is preventing me from being one of a trillion spacefaring citizens of the colonized galaxy, this argument hints that something is preventing me from being one of a trillion bats or birds or insects.
And this could be that animals lack subjective experience. This would explain quite nicely why I'm not an animal: because you can't be an animal, any more than you can be a toaster. So Thomas Nagel can stop worrying about what it's like to be a bat, and the rest of us can eat veal and foie gras guilt-free.
But before we break out the dolphin sausages - this is a pretty weird conclusion. It suggests there's a qualitative and discontinuous difference between the nervous systems of other beings and our own, not just in what capacities they have but in the way they cause experience. It should make dualists a little bit happier and materialists a little bit more confused (though it's far from knockout proof of either).
The most significant objection I can think of is that it is significant not that we are beings with experiences, but that we know we are beings with experiences and can self-identify as conscious - a distinction that applies only to humans and maybe to some species like apes and dolphins who are rare enough not to throw off the numbers. But why can't we use the reference class of conscious beings if we want to? One might as well consider it significant only that we are beings who make anthropic arguments, and imagine there will be no Doomsday but that anthropic reasoning will fall out of favor in a few decades.
But I still don't fully accept this argument, and I'd be pretty happy if someone could find a more substantial flaw in it.
192 comments
Comments sorted by top scores.
comment by Psychohistorian · 2010-03-27T18:17:38.954Z · LW(p) · GW(p)
Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.
The anthropic principle creeps in again here, and methinks you missed it. The ability to make this argument is contingent upon being an entity capable of a certain level of formal introspection. Since you have enough introspection to make the argument, you can't be an animal. In your next million lives, so to speak, you won't be able to make this argument, though someone else out there will.
comment by [deleted] · 2010-03-28T08:08:19.686Z · LW(p) · GW(p)
I'm sorry, but I'm a bit shocked that people on this site can seriously entertain ideas like "why am I me?" or "why do I live in the present?" except as early April Fools' jokes. I am of course necessarily me, because I call whoever I am "me". And I necessarily live in the present, because I call the time I live in "the present". The question "Why am I not somebody else?" is nonsensical because, for almost anybody, I am somebody else. I think the confusion stems from treating your own consciousness as simultaneously something special and something not special.
Replies from: Kevin, Jack
↑ comment by Kevin · 2010-03-28T09:46:17.798Z · LW(p) · GW(p)
I'm a bit shocked how people on this site can seriously entertain ideas like "why am I me?" or "why do I live in the present?"
Out of all of the questions we can ask, "why am I me?" is one of the most interesting, especially if done with the goal of being able to concisely explain it to other people. Your post is confusing to me, because I think "why am I me?" is not a nonsense question but "Why am I not somebody else" is a nonsense question.
Does anyone here think that "why am I me?" is actually a really easy question? What's the answer then, or how do I dissolve the question? I do not claim to understand the mystery of subjective experience. Where I stop understanding is something mysterious connected to the Born probabilities.
Replies from: Jack
↑ comment by Jack · 2010-03-28T09:35:55.675Z · LW(p) · GW(p)
The question "Why am I not somebody else?" is nonsensical because for almost anybody I am somebody else.
More precisely: "I" refers to some numerically unique entity x. Thus "I is someone else" means that x is not x, which is an outright contradiction, and we shouldn't waste our time asking why contradictions aren't the case.
Replies from: Yvain
↑ comment by Scott Alexander (Yvain) · 2010-03-28T13:21:21.987Z · LW(p) · GW(p)
It only sounds nonsensical because of the words in which it's asked. The question raised by anthropic reasoning isn't "why do I live in a time I call the present" (to which, as you say, the answer is linguistic - of course we'd call our time the present) but rather "why do I live in the year 2010?" or, most precisely of all, "Given that I have special access to the subjective experience of one being, why would that be the experience of a being born in the late 20th century, as opposed to some other time?"
That may still sound tautological - after all, if it wasn't the 20th century, it'd be somewhen else and we'd be asking the same question - but in fact it isn't. Consider these two questions:
- Why am I made out of carbon, as opposed to helium?
- Why do I live in the 20th century, as opposed to the 30th?
The correct answer to the first is not to say, "Well, if you were made out of helium, you could just ask why you were made out of helium, so it's a dumb question"; it's to point out the special chemical properties of carbon. Anthropic reasoning suggests that we can try doing the same thing to point out certain special properties of the 20th century.
The big difference is that the first question can be easily rephrased to "why are people made out of carbon and not of helium", but the second can't. But that difference isn't enough to make the second tautological or meaningless.
Replies from: Jack, Clippy, Furcas
↑ comment by Jack · 2010-03-28T14:40:58.800Z · LW(p) · GW(p)
I think maybe some of this was meant for the comment above me.
That said I think the "I" really is the source of some if not all of these confusions and:
The big difference is that the first question can be easily rephrased to "why are people made out of carbon and not of helium", but the second can't. But that difference isn't enough to make the second tautological or meaningless.
I think the difference is exactly enough to make the second one tautological or meaningless. What you have to do is identify some characteristics of "I" and then ask: Why do entities of this type exist in the 20th century, as opposed to the 30th? If you have identified features that distinguish 20th century people from 30th century people you will have asked something interesting and meaningful.
↑ comment by Furcas · 2010-03-29T16:51:40.628Z · LW(p) · GW(p)
If 'you' lived in the 30th century you'd have different memories, at the very least, and thus 'you' would be a different person. That is to say, you wouldn't exist.
On the other hand, if the brain is reasonably substrate-independent, you could be exactly the same person if you were made out of helium.
Replies from: Jack
comment by Jonii · 2010-03-27T15:42:09.901Z · LW(p) · GW(p)
If you were any other animal on Earth, you wouldn't be considering what it would be like to be something else. The Doomsday argument and arguments like it are usually formulated in the form "Of all the persons who could reason like me, only this small percentage were ever wrong". When animals are prevented, by their neurological limitations, from reasoning as the argument requires, they're not part of this consideration.
This doesn't mean that they're not sentient; it just means that by thinking about anthropic problems you're part of a much narrower set of beings than just the sentient ones.
Replies from: Yvain
↑ comment by Scott Alexander (Yvain) · 2010-03-27T17:23:28.143Z · LW(p) · GW(p)
Why not limit the set of people who could reason like me to "people who are using anthropic reasoning" and just assume people will stop using anthropic reasoning in the next hundred years? Is this a reductio ad absurdum, or do you think it's a valid conclusion?
Replies from: Jack, Jordan, Jonii, Unknowns
↑ comment by Jack · 2010-03-28T02:30:00.734Z · LW(p) · GW(p)
Perhaps the fact that we are so confused by anthropic reasoning is a priori evidence that we are very early anthropic reasoners, and thus that the Doomsday argument is false. Further, not every human is an anthropic reasoner. If the growth rate of anthropic reasoners is less than the growth rate of humans, we should then extend the estimate of the lifespan of the human race with anthropic reasoners (and of course this says nothing about the lifespan of humanity without anthropic reasoners).
A handful of powerful anthropic reasoners could enforce a ban on anthropic reasoning: burning books, prohibiting its teaching, and silencing those who came to be anthropic reasoners on their own. If within two generations we could stabilize the anthropic reasoner population at around 35 (say 10 enforcing, 25 to account for enforcement failure) with life spans averaging 100 years, that would put us in the final 95% (I think; anyone have an educated estimate of how many anthropic reasoners there have been up to this point in time?) until a permanent solution was reached or humanity began spreading, at which point we would need at least one enforcer for every colony -- but given optimistic longevity scenarios we could still keep the anthropic reasoner population to a minimum. The permanent solution is probably obvious: a singleton could enforce the ban by itself and make itself the last, or at least close to last, anthropic reasoner in the galaxy.
The above strikes me as obviously insane so there has to be a mistake somewhere, right?
Replies from: JGWeissman, Strange7
↑ comment by JGWeissman · 2010-03-28T03:50:41.765Z · LW(p) · GW(p)
If within two generations we could stabilize the anthropic reasoner population at around 35 (say 10 enforcing, 25 to account for enforcement failure) with life spans averaging 100 years that would put us in the final 95% ...
That sounds like something Evidential Decision Theory would do, but not Timeless or Updateless Decision Theories. Unless you think that reaching a certain number of anthropic reasoners would cause human extinction.
Replies from: Jack
↑ comment by Jack · 2010-03-28T05:36:32.598Z · LW(p) · GW(p)
Hmmm. Yes, that's right, as far as I understand those theories at least. I guess my point is that something seems very wrong with an argument that makes predictions but offers nothing in the way of causal regularities whose variables could in principle be manipulated to alter the result. It isn't even like seeing a barometer indicate low pressure and then predicting a storm (while not understanding the variables that led to the correlation between barometers indicating low pressure and storms coming): there isn't any causal knowledge involved in the Doomsday argument at all, afaict. Note that this isn't the case with all anthropic reasoning; it is peculiar to this argument. The only way we know of predicting the future is by knowing earlier conditions and the rules governing those conditions over time: the Doomsday argument is thus an entirely new way of making predictions. This suggests to me that something has to be wrong with it.
Maybe the self-indication assumption is the way out, I can't tell if I would have the same problem with it.
↑ comment by Strange7 · 2011-01-14T12:56:23.378Z · LW(p) · GW(p)
Maybe somebody will just come up with an elegant explanation of the underlying probability theory some time in the next few years, it'll go viral among the sorts of people who would otherwise have attempted anthropic reasoning, and the whole thing will go the way of geocentrism, but with fewer religiously-motivated defenders.
↑ comment by Jonii · 2010-03-27T17:36:20.663Z · LW(p) · GW(p)
"Why not limit the set of people who could reason like me to "people who are using anthropic reasoning" and just assume people will stop using anthropic reasoning in the next hundred years?"
That's known as the Doomsday argument, as far as I can tell.
My point, to simplify a bit, is that anthropic reasoning is only applicable to beings that are capable of anthropic reasoning. If you know that there are a billion agents, of which one thousand are capable of anthropic reasoning, and you know that of the anthropic reasoners 950 are on island A and 50 are on island B, and all the non-anthropic reasoners are on island B, then you know, based on anthropic reasoning, that you're on island A with 95% certainty. The rest of the agents simply don't matter. You can't conclude anything about them beyond that they're most likely not capable of anthropic reasoning.
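The island example is a one-line conditional probability; a minimal sketch using the comment's own numbers:

```python
# 1,000,000,000 agents total; only 1,000 can do anthropic reasoning.
# Of the reasoners, 950 are on island A and 50 on island B; every
# non-reasoner is on island B.
reasoners_a, reasoners_b = 950, 50

# Conditioning on "I am an anthropic reasoner" shrinks the reference class
# to the 1,000 reasoners; the billion non-reasoners drop out entirely.
p_island_a = reasoners_a / (reasoners_a + reasoners_b)
print(p_island_a)  # 0.95
```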
Replies from: khafra
↑ comment by Unknowns · 2010-03-30T17:16:22.206Z · LW(p) · GW(p)
I argued before -- in the discussion of the Self-Indication Assumption -- that this is exactly the right anthropic reference class, namely people who make the sorts of considerations that I am engaging in. However, that doesn't show that people will just stop using anthropic reasoning. It shows that this is one possibility. On the other hand, it is still possible that people will stop using such reasoning because there will be no more people.
comment by wedrifid · 2010-03-27T16:44:56.738Z · LW(p) · GW(p)
The key point I will remember from reading this post is that the anthropic Doomsday argument can safely be put away in a box labelled 'muddled thinking about consciousness' alongside 'how can you get blue from not-blue?', 'if a tree falls in a forest with nobody there does it make a sound?' and 'why do quantum events collapse when someone observes them?'.
There are situations in which anthropic reasoning can be used but it is a mistake to think that this is because of the ability of a bunch of atoms to perform the class of processing we happen to describe as consciousness.
Replies from: fmgn
comment by JohannesDahlstrom · 2010-03-29T22:10:24.139Z · LW(p) · GW(p)
The probability of a randomly picked currently-living person having a Finnish nationality is less than 0.001. I observe myself being a Finn. What, if anything, should I deduce based on this piece of evidence?
The results of any line of anthropic reasoning are critically sensitive to which set of observers one chooses to use as the reference class, and it's not at all clear how to select a class that maximizes the accuracy of the results. It seems, then, that the usefulness of anthropic reasoning is limited.
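The headline figure checks out against rough 2010 population numbers (both figures are outside assumptions, not from the comment):

```python
finland_pop = 5.4e6  # Finland, circa 2010 (assumption)
world_pop = 6.9e9    # world, circa 2010 (assumption)

p_finn = finland_pop / world_pop
print(f"{p_finn:.5f}")  # ~0.00078, indeed below 0.001
```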
Replies from: Mallah, AlephNeil
↑ comment by Mallah · 2010-03-30T16:28:01.437Z · LW(p) · GW(p)
That kind of anthropic reasoning is only useful in the context of comparing hypotheses, Bayesian style. Conditional probabilities matter only if they are different given different models.
For most possible models of physics, e.g. X and Y, P(Finn|X) = P(Finn|Y). Thus, that particular piece of info is not very useful for distinguishing models for physics.
OTOH, P(21st century|X) may be >> P(21st century|Y). So anthropic reasoning is useful in that case.
As for the reference class, "people asking these kinds of questions" is probably the best choice. Thus I wouldn't put any stock in the idea that animals aren't conscious.
↑ comment by AlephNeil · 2010-05-15T04:27:46.610Z · LW(p) · GW(p)
Just think: In a universe that contains a countable infinity of conscious observers (but finite up to any given moment of time), people's heads would explode as they tried to cope with the not-even-well-defined probability of being born on or before their birth date.
comment by PhilGoetz · 2010-03-27T21:27:36.126Z · LW(p) · GW(p)
That's an interesting observation.
There's a problem in assuming that consciousness is a 0/1 property; that you're either conscious, or not.
There's another problem in assuming that YOU are a 0/1 property; that there is exactly one atomic "your consciousness".
Reflect on the discussion in the early chapters of Daniel Dennett's "Consciousness Explained", about how consciousness is not really a unitary thing, but the result of the interaction of many different processes.
An ant has fewer of these processes than you do. Instead of asking "What are the odds that 'I' ended up as me?", ask, "For one of these processes, what are the odds that it would end up in me, rather than in an ant?"
According to Wikipedia's entry on biomass, ants have 10-100 times the biomass of humans today.
According to Wikipedia's list of animals by neuron count, ants have 10,000 neurons.
According to that page, and this one, humans have 10^11 neurons.
Information is proportional not to the number of neurons, but to the number of patterns that can be stored in those neurons, which is likely somewhere between N and N^2. I'm gonna call it NlogN.
I weigh as much as 167,000 ants. Each of them has ~ 10,000 log(10,000) bits of info. I have ~ 10^11 log(10^11) bits of info. I contain as much information as 165 times my body-mass worth of ants.
So if we ignore how much longer ants have lived than humans, the odds are better that a random unit of consciousness today would turn up in a human, than in an ant.
(Also note that we can only take into account ants in the past, if reincarnation is false. If reincarnation is true, then you can't ask about the chances of you appearing in a different time. :) )
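The arithmetic above can be reproduced directly; the only inputs are the ones the comment already assumes (10^4 neurons per ant, 10^11 per human, information ~ N log N, and 167,000 ants per human body mass):

```python
from math import log10

def info_bits(n_neurons):
    # The comment's working assumption: storable information ~ N log N
    return n_neurons * log10(n_neurons)

ant_info = info_bits(1e4)      # 4.0e4
human_info = info_bits(1e11)   # 1.1e12

ants_per_human_mass = 167_000
ratio_per_mass = (human_info / ant_info) / ants_per_human_mass
print(round(ratio_per_mass))   # ~165
```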
If you're gonna then say, "But let's not just compare ourselves to ants; let's ask about turning up in a human vs. turning up in any other species", then you have the dice-labelling problem argued below: You're claiming humans are the 1 on the die.
Replies from: wnoise, SilasBarta, RobinHanson, Yvain
↑ comment by wnoise · 2010-03-30T18:38:19.068Z · LW(p) · GW(p)
Information is proportional not to the number of neurons, but to the number of patterns that can be stored in those neurons,
No, it's proportional to the log of the number of patterns that can be (semi-stably) stored. E.g. n bits can store 2^n patterns.
which is likely somewhere between N and N^2. I'm gonna call it NlogN.
I'd like to see a lot more justification for this. If each connection were binary (it's not), and connections were possible between all N neurons (they're not), then we would have N^2 bits.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-30T19:11:55.302Z · LW(p) · GW(p)
No, it's proportional to the log of the number of patterns that can be (semi-stably) stored. E.g. n bits can store 2^n patterns.
Oops! Correct. That's what I was thinking, which is why I said info NlogN for N neurons. N neurons => max N^2 connections, 1 bit per connection, max N^2 bits, simplest model.
The math trying to estimate the number of patterns that can be stored in different neural networks is horrendous. I've seen "proofs" for Hopfield network capacity ranging from, I think, N/logN to NlogN.
Anyway, it's more-than-proportional to N, if for no other reason than that the number of connections per neuron is related to the number of neurons. A human neuron has about 10,000 connections to other neurons. Ant neurons don't.
↑ comment by SilasBarta · 2010-03-30T18:10:43.646Z · LW(p) · GW(p)
Humans are more analogous to an ant colony than to an individual ant, so that's where you should make the comparison: to a number of ant colonies with ant mass equal to your mass. Within each colony, you should treat each ant as a neuron in a large network, meaning you multiply the ant information not by the number of ants Na, but by Na log Na.
Assume 1000 ants/colony. You weigh as much as 167 colonies. Letting N be the number of neurons in an ant (and measuring in Hartleys to make the math easier), each colony has
(N log N) (Na log Na)
= (1e4 log 1e4) (1e3 log 1e3)
= 1.2e8 H
Multiplying by the number of colonies (since they don't act like a mega-colony) gives
1.2e8 H * 167
=2e10 H
This compares with the value for humans:
1e11 log 1e11
= 1.1e12 H
So that means you have ~55 times as much information per unit body weight, not that far from your estimate of 165.
I don't know what implications this calculation has for the topic, even assuming it's correct, but there you go.
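A sketch of the colony arithmetic, using the parent comments' own inputs and base-10 logs (Hartleys):

```python
from math import log10

def info(n):
    # N log N, per the parent comment's working assumption
    return n * log10(n)

ant_neurons = 1e4
ants_per_colony = 1e3
colonies_per_human_mass = 167

# Treat each colony as a network whose nodes are ant-networks:
colony_info = info(ant_neurons) * info(ants_per_colony)  # 1.2e8 H
ant_total = colony_info * colonies_per_human_mass        # ~2.0e10 H

human_info = info(1e11)                                  # 1.1e12 H
print(round(human_info / ant_total))                     # ~55
```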
Replies from: PhilGoetz
↑ comment by RobinHanson · 2010-03-27T23:55:28.752Z · LW(p) · GW(p)
This is a very intriguing line of thought. I'm not sure it makes sense, but it seem worth pondering further.
↑ comment by Scott Alexander (Yvain) · 2010-03-28T13:32:44.326Z · LW(p) · GW(p)
I weigh as much as 167,000 ants. Each of them has ~ 10,000 log(10,000) bits of info. I have ~ 10^11 log(10^11) bits of info. I contain as much information as 165 ants.
I'm not following your math here, and I'm especially not following the part where if a person contains as much information as 165 ants and there are 1 quadrillion ants and ~ 10 billion people, a given unit of information is more likely to end up in a human than in an ant. And since we do believe reincarnation is false, it's much worse than that, since ants have been around longer than humans.
Also, I have a philosophical objection with basing it on units consciousness. If we're to weight the chances of being a certain animal with the number of bits information they have, doesn't that imply we're working from a theory where "I" am a single bit of information? I'd much sooner say that I am all the information in my head equally, or an algorithm that processes that information, or at least not just a single bit of it.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-28T16:04:12.038Z · LW(p) · GW(p)
Oops; that was supposed to say, "I contain as much information as 165 times my body-mass in ants".
I'm kinda disappointed that your objection was that the math didn't work, and not that I'm smarter than 165 ants. (I admit they are winning the battle over the kitchen counter. But that's gotta be, like, 2000 ants. Don't sell me short.)
If you want to say that you're all the information in your head equally, then you can't ask questions like "What are the odds I would have been an ant?"
comment by PhilGoetz · 2010-03-28T00:29:08.068Z · LW(p) · GW(p)
Can't I use the same reasoning to prove that non-Americans aren't conscious?
Replies from: JGWeissman, Yvain
↑ comment by JGWeissman · 2010-03-28T00:52:58.986Z · LW(p) · GW(p)
The anthropic principle only provides between 4 and 5 bits of evidence for this theory, not nearly enough to support the complexity of the same brain structures being conscious in Americans but not in non-Americans.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-28T00:58:09.188Z · LW(p) · GW(p)
All right, then. I got 33 bits that says everyone except me is unconscious!
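Both bit counts work out under rough 2010 populations (the population figures are outside assumptions, not from the thread):

```python
from math import log2

world_pop = 6.9e9  # world population, circa 2010 (assumption)
us_pop = 3.1e8     # US population, circa 2010 (assumption)

# Evidence that exactly the Americans are conscious: log2 of the odds
# against a random human being American.
bits_for_americans = log2(world_pop / us_pop)  # ~4.5 ("between 4 and 5")
# Evidence that only I am conscious: odds against being one specific human.
bits_for_just_me = log2(world_pop)             # ~32.7 ("33 bits")

print(f"{bits_for_americans:.1f} bits, {bits_for_just_me:.1f} bits")
```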
Replies from: bogus
↑ comment by bogus · 2010-03-28T15:38:25.483Z · LW(p) · GW(p)
This is actually a very good point. If the quantum mind hypothesis is false, then either subjective experience doesn't exist at all (which anyone who's reading this post ought to take as an empirically false statement) or solipsism is true and only a single subjective experience exists. 33 bits of info are just not nearly enough to explain how subjective experience is instantiated in billions of complex human brains each slightly different from all others, as opposed to a single brain.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2010-03-28T16:02:59.426Z · LW(p) · GW(p)
If the quantum mind hypothesis is false, then either subjective experience doesn't exist at all (which anyone who's reading this post ought to take as an empirically false statement) or solipsism is true and only a single subjective experience exists.
Why's that?
Replies from: bogus
↑ comment by bogus · 2010-03-28T16:16:50.229Z · LW(p) · GW(p)
Why's that?
Because "I am my brain" is actually an extremely complex hypothesis; you need to relate all of your inner subjective experience to brain states, action potentials, firing patterns and what not. Since all brains are actually slightly different from one another (at least from a purely physical point of view), the hypothesis that other brains also have subjective experience is untenable due to its sheer complexity.
Replies from: None, PhilGoetz, Morendil
↑ comment by [deleted] · 2010-03-28T19:12:55.193Z · LW(p) · GW(p)
That's like saying that "there is a prime number greater than 3^^^3" is an extremely complex and therefore untenable hypothesis, because such a number needs to be coprime to all of the natural numbers below it.
Every possible way to realize the hypothesis "I am my brain" is extremely unlikely, but there are extremely many ways to realize it. A disjunction of lots of unlikely things need not be unlikely.
Replies from: bogus
↑ comment by bogus · 2010-03-28T20:30:12.003Z · LW(p) · GW(p)
Every possible way to realize the hypothesis "I am my brain" is extremely unlikely, but there are extremely many ways to realize it.
No, there aren't. The physical state of your brain is known, and (assuming physicalism/epiphenomenalism/property dualism is true) the physical state must explain everything you might claim about your subjective experience. Either you're a p-zombie and do not actually have subjective experience, or this explanation must be evaluated for simplicity on Occam's razor/Solomonoff induction grounds.
Replies from: None
↑ comment by [deleted] · 2010-03-29T02:31:37.013Z · LW(p) · GW(p)
You've managed to confuse me. I suspect, though, that this analogy is relevant:
What is the probability that the text between the quotation marks in this paragraph is "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Fusce id velit urna, ac sollicitudin libero. Phasellus ac rutrum nisl. In volutpat scelerisque justo, non congue diam vestibulum sit amet. Donec."? The prior probability of this being true is minuscule, looking something like 10^-60; therefore, you might as well rule it out now.
On the other hand, I suspect that we don't actually disagree at all. After all, you seem to be arguing for a position I agree with; I'm simply not sure whether you're arguing correctly or not.
Replies from: wedrifid, bogus
↑ comment by wedrifid · 2010-03-29T11:40:27.233Z · LW(p) · GW(p)
What is the probability that the text between the quotation marks in this paragraph is "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Fusce id velit urna, ac sollicitudin libero. Phasellus ac rutrum nisl. In volutpat scelerisque justo, non congue diam vestibulum sit amet. Donec."? The prior probability of this being true is minuscule, looking something like 10^-60; therefore, you might as well rule it out now.
Prior to what, exactly? I do have a prior for "a randomly generated ASCII string of that particular length being the same as the string given". But I wouldn't know to use that as the prior unless I had already been given some information. Then there is all the knowledge of human languages and cultural idiosyncrasies I happen to have. Which of those am I allowed to consider? It's hard to tell since, well, you've already given me the answer. It's a bit post for any 'prior' except meta-uncertainty. I would need a specific counterfactual state of knowledge to be able to give a reasonable prior.
(All of which I believe supports your point.)
↑ comment by bogus · 2010-03-29T13:09:35.042Z · LW(p) · GW(p)
This seems to be a case of extraordinary claims are extraordinary evidence. It's like saying, "well yes, the fact that I have a brain is pretty extraordinary, but so what? I clearly have one". It doesn't distinguish between a Boltzmann brain and a brain arising normally via natural selection. So is your consciousness a Boltzmann consciousness?
↑ comment by PhilGoetz · 2010-03-28T16:45:51.812Z · LW(p) · GW(p)
Since all brains are actually slightly different from one another (at least from a purely physical point of view), the hypothesis that other brains also have subjective experience is untenable due to its sheer complexity.
I don't see any justification for the connecting "since".
Replies from: bogus
↑ comment by bogus · 2010-03-28T17:58:36.232Z · LW(p) · GW(p)
You believe that a single mapping between physical brains and subjective experiences can apply to all humans? What does this mapping look like? How many bits are needed to fully specify it?
Replies from: Nick_Tarleton
↑ comment by Nick_Tarleton · 2010-03-28T19:06:54.400Z · LW(p) · GW(p)
Is it reasonable to expect me to necessarily be able to answer that question if materialism is true?
(What does the mapping from entangled quantum states to experiences look like?)
Replies from: bogus
↑ comment by bogus · 2010-03-28T19:37:26.549Z · LW(p) · GW(p)
Is it reasonable to expect me to necessarily be able to answer that question if materialism is true?
It's about as reasonable as demanding an account of how the brain can maintain mesoscopic quantum superpositions long enough to influence neural processes.
(What does the mapping from entangled quantum states to experiences look like?)
We don't know, but it's going to be far simpler than a mapping from classical brains.
(For all we know, it could be trivial; perhaps each quale is a GENSYM which maps directly to a basis of the quantum system.)
↑ comment by Morendil · 2010-03-28T16:18:03.697Z · LW(p) · GW(p)
What does that have to do with the quantum mind hypothesis?
Replies from: bogus
↑ comment by bogus · 2010-03-28T16:29:44.597Z · LW(p) · GW(p)
That allows you to replace "I am my brain" with "I am a complex quantum state which is instantiated by my brain; and my inner experience maps directly to this quantum state." Other brains have evolved to maintain quantum states in the same way, hence they also have subjective experience.
Replies from: PhilGoetz
↑ comment by Scott Alexander (Yvain) · 2010-03-28T13:37:41.195Z · LW(p) · GW(p)
Not unless you have a strong reason to privilege the state of being an American as especially interesting. Otherwise, you're in the position Jordan mentioned of just knowing you're in one unexceptional condition out of many.
One thing you could say based on your being an American is that you have weak evidence that America is likely to be one of the more populous countries, and strong evidence that there's no country thousands or billions of times more populous than America. Both conclusions are correct.
And further, if a Luxembourgian posts a reply here saying "My Luxembourgian citizenship disproves the anthropic principle", that doesn't count, because you're not him and he's self-selected by posting here o_O
Replies from: rwallace↑ comment by rwallace · 2010-03-28T15:21:56.418Z · LW(p) · GW(p)
So we seem to have concluded that my Irish citizenship disproves the anthropic principle, and I can know this, but you cannot know it :-)
Replies from: Yvain↑ comment by Scott Alexander (Yvain) · 2010-03-28T16:08:48.950Z · LW(p) · GW(p)
As a matter of fact, I live in Ireland (although I'm a US citizen). That coincidence probably disproves some sort of important principle right there.
I think you've mentioned before that you live in Dublin; I live in Cork, so sadly we're a little too far to meet up for a chat one night.
Replies from: rwallace
comment by AllanCrossman · 2010-03-27T15:49:41.779Z · LW(p) · GW(p)
"why am I me, rather than an animal?" is not obviously sillier than "why am I me, rather than a person from the far future?".
Well, quite. Both are absurd.
Replies from: Rain
comment by [deleted] · 2010-03-28T22:12:13.011Z · LW(p) · GW(p)
I'm becoming more skeptical of anthropics every day.
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2010-04-09T14:13:01.554Z · LW(p) · GW(p)
I think that anthropics is a useless distraction, but until I've worked out why it's a useless distraction it still gets in the way of everything.
Replies from: Strange7↑ comment by Strange7 · 2011-01-14T12:42:19.281Z · LW(p) · GW(p)
People don't understand the difference between extreme improbability and actual impossibility. "I observe that I exist, therefore some mysterious 'great filter' will soon wipe out all humanity" is as innumerate a mistake as winning $10^6 with your first-ever lottery ticket and then immediately spending the entire sum on more tickets, because (based on that initial evidence) it's got the biggest, fastest ROI.
We have only what we observe, and what we observe is a green world in a mostly empty universe. Perhaps, in gaining that much, we were very, very lucky; perhaps not. Either way, does it change anything? We have only what we observe. Make the most of it.
comment by Mitchell_Porter · 2010-03-28T00:51:39.757Z · LW(p) · GW(p)
At one time I wondered, why am I not a particle? The anthropic "explanation" is that particles can't be conscious. But that doesn't remove the prior improbability of my existence in this form. Empirically I know I'm conscious, so being a particle (under the usual assumptions) has a posterior probability of zero. But if I think of myself as a random sample from the set of all entities - and why shouldn't I? - then my a priori probability of having been conscious is vanishingly small. (Unless I change my notion of reality rather radically.)
comment by Jordan · 2010-03-27T20:16:06.719Z · LW(p) · GW(p)
Let's look at examples where we know the 'right' answer:
Someone flips a coin. If it's heads they copy you a thousand times and put 1 of you in a green room and 999 of you in a red room. If it's tails they do the opposite.
You wake up in a green room and conclude that the coin was likely tails.
Now assume that in addition to copying you 1000 times, 999 of you were randomly selected to have the part of your brain that remembers to apply anthropic reasoning erased. You wake up in a green room and remember to apply the anthropic principle, but, knowing that, you conclude that the group of people like you contains only you. Nonetheless you should (I intuitively feel) still conclude the coin was likely tails.
Now assume that instead of random memory erasure, if the coin was heads the people in the red room forget about anthropics, and if the coin was tails the people in the green room forget about anthropics. You wake up in a green room and remember to apply the anthropic principle. Now it matters that you know to use the anthropic principle, and you should conclude with 100% certainty that the coin came up heads.
So, sometimes we need to consider the fact that the other people can apply the anthropic principle, and sometimes we don't need to consider it. I think I've confused myself.
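The first and third scenarios above can be sanity-checked with a quick Monte Carlo sketch (a sketch under the assumption that "you wake up" means a uniformly sampled clone; the function names are mine):

```python
import random

def scenario_basic(trials=200_000):
    """P(coin was tails | a random clone wakes in a green room).
    Heads: 1 green room, 999 red; tails: 999 green, 1 red."""
    tails_hits = greens = 0
    for _ in range(trials):
        tails = random.random() < 0.5
        n_green = 999 if tails else 1
        if random.randrange(1000) < n_green:  # sampled clone is in a green room
            greens += 1
            tails_hits += tails
    return tails_hits / greens

def scenario_correlated(trials=200_000):
    """Third scenario: heads -> red-roomers forget anthropics,
    tails -> green-roomers forget. Condition on green room AND remembering."""
    tails_hits = hits = 0
    for _ in range(trials):
        tails = random.random() < 0.5
        in_green = random.randrange(1000) < (999 if tails else 1)
        remembers = (in_green and not tails) or (not in_green and tails)
        if in_green and remembers:
            hits += 1
            tails_hits += tails
    return tails_hits / hits

print(scenario_basic())       # ~0.999
print(scenario_correlated())  # 0.0 (green room + remembering implies heads)
```

The second scenario (random erasure) gives the same answer as the first, since erasure is independent of room color; only the correlated erasure flips the conclusion.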
Replies from: AlephNeil, torekp↑ comment by AlephNeil · 2010-05-17T16:34:35.356Z · LW(p) · GW(p)
Nonetheless you should (I intuitively feel) still conclude the coin was likely tails.
I think your intuitions lead you astray at exactly this point.
Suppose that the 1000 of you are randomly 'tagged' with distinct id numbers from the set {1,...,1000}, and that a clone learns its id number upon waking. Suppose you wake in a green room and see id number 707.
If all the clones remember to apply anthropic reasoning (assuming for argument's sake that my current line of reasoning is 'anthropic') then you can easily work out that the probability of the observed event "number 707 is an anthropic reasoner in a green room" is 1/1000 if coin was heads or 999/1000 if coin was tails.
However, if 998 clones have their 'anthropic reasoning' capacity removed then both probabilities are 1/1000, and you should conclude that heads and tails are equally likely.
Replies from: NihilCredo↑ comment by NihilCredo · 2010-05-17T16:47:22.854Z · LW(p) · GW(p)
However, if 999 clones have their 'anthropic reasoning' capacity removed then both probabilities are 1/1001, and you should conclude that heads and tails are equally likely.
Are you sure? In the earlier model where memory erasure is random, remembering AR will be an independent event from the room placements and won't tell you anything extra about that.
Replies from: AlephNeil↑ comment by AlephNeil · 2010-05-17T16:57:28.990Z · LW(p) · GW(p)
Are you sure?
(Note: I got the numbers slightly wrong - the 1001s should have been 1000s etc.)
Yes: If the coin was heads then the probability of event "clone #707 is in a green room" is 1/1000. And since, in this case, the clone in the green room is sure to be an anthropic reasoner, the probability of "clone #707 is an anthropic reasoner in a green room" is still 1/1000.
On the other hand, if the coin was tails then the probability of "clone #707 is in a green room" is 999/1000. However, clone #707 also knows that "clone #707 is an AR", and P(#707 is AR | coin was tails and #707 is in a green room) is only 1/999.
Therefore, P(#707 is an AR in a green room | coin was tails) is (999/1000) * (1/999) = 1/1000.
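The arithmetic as stated checks out with exact fractions; note this only verifies the stated numbers, and whether the 1/999 conditional is the right model is exactly what the reply disputes:

```python
from fractions import Fraction

# Likelihood of "clone #707 is an anthropic reasoner in a green room",
# on AlephNeil's model (under tails, one AR among the 999 green clones):
p_given_heads = Fraction(1, 1000) * Fraction(1, 1)
p_given_tails = Fraction(999, 1000) * Fraction(1, 999)
print(p_given_heads, p_given_tails)  # 1/1000 1/1000, so the posterior is 50:50
```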
Replies from: NihilCredo↑ comment by NihilCredo · 2010-05-17T17:28:42.503Z · LW(p) · GW(p)
If the coin was heads then the probability of event "clone #707 is in a green room" is 1/1000. And since, in this case, the clone in the green room is sure to be an anthropic reasoner, the probability of "clone #707 is an anthropic reasoner in a green room" is still 1/1000.
But you know that you are AR in the exact same way that you know that you are in a green room. If you're taking P(BeingInGreenRoom|CoinIsHead)=1/1000, then you must equally take P(AR)=P(AR|CoinIsHead)=P(AR|BeingInGreenRoom)=1/1000.
and P(#707 is AR | coin was tails and #707 is in a green room) is only 1/999.
Why shouldn't it be 1/1000? The lucky clone who gets to retain AR is picked at random among the entire thousand, not just the ones in the more common type of room.
Replies from: AlephNeil↑ comment by torekp · 2010-04-03T00:27:18.251Z · LW(p) · GW(p)
I like this example because it has nice tidy prior probabilities. That's very much lacking in the Doomsday Argument - how do you distribute a prior over a value that has no obvious upper bound? For any finite number of people that will ever live, is there much greater than zero prior probability of that being the number? Even if I can identify something truly special about the reference class "among the first 100 billion people" as opposed to any other mathematically definable group - and thus push down the posterior probabilities of very large numbers of people eventually living - it doesn't seem to push down very far.
comment by komponisto · 2010-03-27T15:55:52.132Z · LW(p) · GW(p)
Following bogus, I could imagine endorsing a weaker form of the argument: not that it's like nothing to be a bat, but that it's like less to be a bat than to be a human.
In fact, if you've ever wondered why you happen to be the person you are, and not someone else, it may be that the reflectivity you are displaying by asking this question puts you in a more-strongly-anthropically-weighted reference class.
Replies from: Yvain↑ comment by Scott Alexander (Yvain) · 2010-03-28T13:47:28.116Z · LW(p) · GW(p)
Given 10 billion bats, bats having been around for 50 million years, bat generations taking (let's say) 5 years, and a population that has been stable over evolutionary history, we get a super rough estimate on the order of 10B * (50M/5) = 100 quadrillion historical bats. I think a lot of anthropic calculations assume there have been 100 billion historical humans, so the probability of being a human is one-millionth the probability of being a bat.
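The arithmetic above, as a sketch (all inputs are the rough figures assumed in the comment):

```python
bats_alive = 10 * 10**9            # ~10 billion living bats (rough assumption)
bat_years = 50 * 10**6             # bats have existed for ~50 million years
years_per_generation = 5           # assumed bat generation time
historical_bats = bats_alive * (bat_years // years_per_generation)
historical_humans = 100 * 10**9    # ~100 billion humans ever (common estimate)

print(historical_bats == 10**17)                 # True: 100 quadrillion bats
print(historical_humans / historical_bats)       # 1e-06
```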
I don't see a whole lot of difference between not having subjective experiences and having one one-millionth the subjective experience of a human. Once we expand this to all animals instead of just bats, the animals come out even worse.
Replies from: komponisto, Sticky, Jack↑ comment by komponisto · 2010-03-28T17:01:26.105Z · LW(p) · GW(p)
I'm not sure it follows that a bat has one one-millionth the subjective experience of a human. The problem is that you can't necessarily add a bunch of bat-experiences together to get something equivalent to a human experience; in fact, it seems to me that this sort of additivity only holds when the experiences are coherently connected to each other. (If someone hooked up a million bat-brains into a giant network, then it might make sense to ask "Why am I a human, rather than a million bats"?)
So it may be, for instance, that each bat has 10% the subjective experience of a human, but that that extra 90% makes it millions of times more probable that the experiencer will be pondering this question.
↑ comment by Sticky · 2010-03-31T14:20:48.557Z · LW(p) · GW(p)
Is there a difference between having no subjective experience and having one-millionth the subjective experience of a Tra'bilfin, which are advanced aliens with artificially augmented brains capable of a million times the processing of a current human?
comment by MBlume · 2010-03-28T01:08:42.261Z · LW(p) · GW(p)
and the rest of us can eat veal and foie gras guilt-free.
I don't think this works.
Obama can use the same argument to decide that, since if he could have been any person, it would be vanishingly likely that he'd be the president of the most powerful nation on earth. Thus, clearly, the rest of us (he would conclude) have no conscious experience, and he had better go ahead and be an egoist, and run the country in whatever way gives him the most personal gain.
I don't want Obama to do this, so I think I had better not do it either.
Replies from: JGWeissman↑ comment by JGWeissman · 2010-03-28T01:31:52.997Z · LW(p) · GW(p)
Obama can use the same argument to decide that, since if he could have been any person, it would be vanishingly likely that he'd be the president of the most powerful nation on earth. Thus, clearly, the rest of us (he would conclude) have no conscious experience, and he had better go ahead and be an egoist, and run the country in whatever way gives him the most personal gain.
Same argument as here, I don't think 33 bits is enough to support the complexity penalty of the prior.
This is kind of scary though, if I imagine the emperor of a multi-galactic civilization, eventually the population is large enough. It seems unlikely though, even discounting speed of light issues, that a civilization of that size would be united under one single most powerful person.
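The 33-bit figure is just the information needed to single out one person among everyone alive; a quick check (the ~7 billion world population is my assumption):

```python
import math

world_population = 7 * 10**9
print(math.log2(world_population))  # ~32.7 bits to pick out one person
```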
Replies from: MBlume↑ comment by MBlume · 2010-03-28T07:36:13.387Z · LW(p) · GW(p)
The argument still shouldn't work though. Every one of those bits of evidence that you're the only guy around is counterbalanced by a doubling of the negative consequences if you're wrong.
So yes, maybe Obama should assume he's probably the only guy on earth, but his actions matter so massively much more in the tiny branch where he's really the most powerful man in a world of billions of thinking living people, that he should still be working to optimize for it.
comment by ata · 2010-03-27T22:14:13.355Z · LW(p) · GW(p)
Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.
Only in the sense that it's impossible for you to be a rock, or a tree, or an alien, or another person, because you clearly aren't any of those things. All this tells you is that you should be nearly 100% certain that you are you, and that's no great insight.
comment by JGWeissman · 2010-03-27T20:19:35.551Z · LW(p) · GW(p)
The anthropic principle seems to imply that our subjective experiences take place in amazingly common ancestor simulations that don't simulate animals in sufficient detail to give them subjective experience. That I find myself experiencing being a human rather than being a bat, even though bats are in principle capable of subjective experience, is because there are vastly more detailed simulations of humans than of bats.
Replies from: PhilGoetz
comment by DanielLC · 2010-04-09T04:42:20.197Z · LW(p) · GW(p)
The fact that you are human is evidence that only humans are conscious, but it's far from proof. If you have no a priori reason to believe that only humans are conscious, that means it's just as likely that it's only humans as only bats. If the a priori probability of all animals being conscious is only the same as the probability that it's just a given species (I'd say it's much, much larger), and it's impossible for it to just be two species etc., then a posteriori, there would still be a 50:50 chance that all animals are conscious.
Of course, there is an a priori reason to believe humans are conscious. We are more intelligent than the vast majority of animals. We have bigger brains. That said, I still find it very unlikely that humans are sentient but dolphins aren't. Their brains are bigger, after all.
Replies from: wnoise↑ comment by wnoise · 2010-04-09T13:48:39.260Z · LW(p) · GW(p)
(Psst: almost all animals are sentient (have senses), you might be thinking of sapient (conscious, having thoughts)).
Replies from: DanielLC↑ comment by DanielLC · 2010-05-14T22:53:03.831Z · LW(p) · GW(p)
I thought sentient was having qualia and sapient was intelligent thought.
I just checked a few dictionaries (Wikipedia, Dict.org etc.). It looks like my usage is the more common one.
Replies from: Blueberry↑ comment by Blueberry · 2010-05-14T23:01:11.262Z · LW(p) · GW(p)
Qualia is a confused concept and doesn't really exist as such, so that may not be the best way to phrase it.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-05-15T03:27:14.356Z · LW(p) · GW(p)
"Qualia" is effectively a name for all those properties which constitute your experience of the world, but which do not exist in the current ontology of natural science (thus we have the spectacle of people on this site needing to talk about "how it feels" to be a brain or a computer program, an additional property instinctively tacked on to the physical description precisely to make up for this lack).
This is a problem that has been building in scientific culture for centuries, ever since a distinction between primary and secondary properties was introduced. Mathematical physics raised the description and analysis of the "primary" properties - space, quantity, causality - to a high art, while the "secondary" properties - all of sensation, to begin with, apart from the bare geometric form of things - were put to one side. And there have always been a few people so enraptured by the power of physics and related disciplines that they were prepared to simply deny the existence of the ontological remainder (just as there have been "irrationalists" who were really engaged in affirming the reality of what was being denied).
We are now at the stage of figuring out rather detailed correlations between parts and states of the brain, described in material terms, and aspects of conscious experience, as experienced and reported "subjectively" or "in the first person". But a correlation is not yet an identity (and the verifiable correlations are still mostly of the form "X has something to do with Y"). Mostly people are being property dualists without realizing it: they believe their experiences are real, they believe those experiences are identical with brain states, but out of sheer habit they haven't noticed that the two sides of the identity are actually quite different ontologically.
Dennett belongs to that minority of materialists, more logically consistent but also more in denial of reality, who really are trying to deny the existence of the secondary properties, now known as qualia. It's possible to read him otherwise, because he does talk about his own experience; but if you look at his checklist of properties to deny, you can see he's a sort of neo-behaviorist, focused on verbal behavior. Indeed, the only thing neo about his behaviorism is that he has a physical model of how this behavior is caused (connectionist neural networks). But he is careful to say quite explicitly that there is no "Cartesian theater", no phenomenal color, no inner life, just people talking about these things.
I cannot tell if you are truly in Dennett's camp, or if you're just rejecting the view that there's something especially problematic about explaining sensations. A lot of people who talk about qualia are trying to emphasize that a description of human beings in terms of causal interactions between pieces of matter is leaving something out. But the things being left out are not in any way elusive or ineffable.
Science seems to be telling us that your whole life, everything you have ever experienced, is nothing but changes of state occurring in a few trillion neurons which have been sitting inside the same small dark space (your skull) for a few decades. Now if that's the case, I may not be able to write an equation describing the dynamics, but I do know what that is physically. It's a large number of electrons and quarks suspended in space by electromagnetic fields. If we are to unconditionally accept this as a description of what our lives and experiences really are, then we have to be able to identify everything - everything - we have ever thought, known, or done, the whole of our subjective realities, as a process composed of nothing but changes of states of particles all occurring within a few cubic centimeters of space. And I have no hesitation at all in saying that this is impossible, at least if the basic ingredients, those particles and fields, are understood as we currently conceive them to be.
Quite apart from the peculiar difficulty involved in identifying complex subjective states like "going diving in the Caribbean on your 25th birthday" with the electrical state of a captive neuronal porridge, the basic raw ingredients of subjective experience, like color qualia, simply aren't there in an arrangement of pointlike objects in space. This is why materialists who aren't eliminativists like Dennett are instead dualists, whether they realize it or not - because they simultaneously assert the existence of both the world of atoms in space and the world of subjective experience. These two worlds may be correlated, that is being demonstrated every day by neuroscience, but they simply cannot be identified under the physical ontology we have.
In my opinion the future lies with a new monism. But "physics" will have to be reconceptualized, if that world of subjective experience really is going to be found somewhere inside the skull, because as things stand there is nothing like it in there. I would also say that doing this is going to require a leap as big as anything in human intellectual and cultural history. It won't just be a matter of identifying the "neural correlate of consciousness". Someone is going to have to go right back to the epistemic beginning, before the distinction between primary and secondary properties, and rethink the whole of natural science from Galileo through to molecular neuroscience, while keeping the secondary properties in view. You can always reduce science to subjectivity, if you're prepared to let go of your models and remember that everything that has ever happened to you has occurred within your own subjective experience, so that's the easy part. What we're aiming for is far more difficult, namely, an objective world-picture which really does contain subjective experience and is true to its nature while also encompassing everything else. Of course, all those people who are out there trying to "naturalize subjective experience" or "naturalize phenomenology" are trying to do this, but without exception they presuppose the current "naturalistic" ontology, and yet somehow that is where the change and the progress has to occur.
Replies from: TheOtherDave, AlephNeil, pjeby, Blueberry, Jack, PhilGoetz↑ comment by TheOtherDave · 2011-01-07T18:09:15.703Z · LW(p) · GW(p)
Suppose on Tuesday I perceive object O as red.
For labeling convenience, I'm going to start referring to my subjective experience of that perception as [red]. In other words, on Tuesday I experience O as [red].
If I've understood you, you claim the [red] is due in part to color qualia in some way associated with O, which are distinct from the set of things happening inside my skull.
So, OK, assuming that, some questions.
I assume we agree that if I suddenly become color-blind, I might suddenly stop experiencing [red]. Do you assert that in that case the [red]-causing qualia continue to exist, I just stop experiencing them? (I would say something analogous about photons and perception, for example, if I suddenly lose my eyes.) Or do you assert that they stop existing? Or something else?
Either way: is that assertion something someone has confirmed in some way, or is it a purely theoretical prediction?
I assume we agree that if I suddenly manifest synesthesia -- say, due to a stroke -- I might also start experiencing a honking car horn as [red]. I assume you would therefore say that there must be [red]-causing qualia present, since my brain is unable to construct [red] on its own. Do you assert that the [red]-causing qualia were always present, and I've only just become able to perceive them? Or that they became present when I had the stroke, but not previously? Or something else?
Again: is that assertion something confirmed or theoretical?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2011-01-08T03:34:45.698Z · LW(p) · GW(p)
If I've understood you, you claim the [red] is due in part to color qualia in some way associated with O, which are distinct from the set of things happening inside my skull.
No. I think that in reality, [red] is in the head. But our current physical ontology contains no such entity. That is why I say that if you accept our current physical ontology, you're either an eliminativist or a dualist.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-01-08T07:50:24.675Z · LW(p) · GW(p)
I'm not in the least bit interested in the labels. But yes, if we're agreed that [red] is constructed by my brain, rather than being a property of my environment, then I don't understand what grounds you have for believing that [red] isn't explicable by entities in our current physical ontology.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2011-01-08T09:19:29.522Z · LW(p) · GW(p)
Just imagine if you were having a discussion with someone who said that the world is made of numbers. And you picked up a rock and said, so, this rock is made of numbers? And they said, sure. And you said, that's absurd. How could a rock be equal to 1+1, for example? They're completely different kinds of things. And they went off on a riff about how science has shown that all is number, and whenever you tried to point out the non-numerical aspects of reality, they'd just subsume that back into the all-is-number reductionism, and they'd stubbornly insist that, even if the rock was not equal to 1+1, it might be equal to some other numbers, and besides, what other sort of things could there be, besides numbers?
For me, the idea that [red] is identical to some arrangement of particles in space is just like saying that 1+1 is a rock. The gulf between the nature of the allegedly identical entities is so great that the problem with the assertion ought to be obvious. In a sprinkling of point objects throughout space, where is the color? It's really that simple. It's just not there. It's not intrinsically there, anyway. You might propose that redness is a property of certain special configurations, but when you say that, you've embarked upon a form of dualism, property dualism. It's a dualism because on the one side, you have properties which are intrinsic to a geometrically defined situation, like distances and angles and shapes; and on the other side, you have properties which are logically independent of the geometric facts and have to be posited separately. For example, the existence of color experiences, or indeed any kind of experiences, in a brain.
In other words, the onus is on you to explain just what you think the connection is between arrangements of particles in space (e.g. a brain), and experiences of color. I have my own answer, but I want to hear yours first.
Replies from: TheOtherDave, Mass_Driver↑ comment by TheOtherDave · 2011-01-08T10:24:54.924Z · LW(p) · GW(p)
You won't find my answer interesting, but since you asked: I think experiences of color are among the states that particles in space can get into, just as the impulse to blink is a state particles in space can get into, just as a predisposition to generate meaningful English but not German sentences is a state that particles in space can get into, just as an appreciation for 17th-century Romanian literature is a state that particles in space can get into, just as a contagious head cold is a state that particles in space can get into. (Which is not to say that all of those are the same kinds of states.)
We can certainly populate our ontologies with additional entities related to those various things if we wish... color qualia and motor-impulse qualia and English qualia and German qualia and 17th-century Romanian literary qualia and contagious head cold qualia and so forth. I have no problem with that in and of itself, if positing these entities is useful for something.
But before I choose to do so, I want to understand what use those entities have to offer me. Populating my ontology with useless entities is silly.
I understand that this hesitation seems to you absurd, because you believe it ought to seem obvious to me that arrangements of matter simply aren't the kind of thing that can be an experience of color, just like it should seem obvious that numbers aren't the kind of thing that can be a rock, just as it seems obvious to Searle that formal rules aren't the kind of thing that can be an understanding of Chinese, just as it seemed obvious to generations of thinkers that arrangements of matter aren't the kind of thing that can be an infectious living cell.
These things aren't, in fact, obvious to me. If you have reasons for believing any of them other than their obviousness, I might find those reasons compelling, but repeated assertions of their obviousness are not.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2011-01-08T12:04:06.654Z · LW(p) · GW(p)
An arrangement of particles in space can embody a blink reflex with no problems, because blinking is motion, and so it just means they're changing position in space.
Generating meaningful sentences - here we begin to run into problems, though not so severe as the problem with color. If the sentences are understood to be physical objects, such as sequences of sound waves or sequences of letter-shapes, then they can fit into physical ontology. We might even be able to specify a formal grammar of allowed sentences, and a combinatorial process which only produces physical sentences from that grammar. But meaning per se, like color, is not a physical property as ordinarily understood. (I know I'll get into extra trouble here, because some people are with me on the color qualia being a problem, but believe that causal theories of reference can reduce meaning to a conjunction of known physical properties. However, so far as I can see, intrinsic meaning is a property only of certain constituents of mental states - the meaning of sentences and all other intersubjective signs is not intrinsic and derives from a shared interpretive code - and the correct ontology of meaning is going to be bound up with the correct ontology of consciousness in general.)
Anyway, you say it's not obvious to you that "arrangements of matter simply aren't the kind of thing that can be an experience of color". Okay. Let's suppose there is an arrangement of matter in space which is an experience of color. Maybe it's a trillion particles in a certain arrangement executing a certain type of motion. Now, we can think about progressively simpler arrangements and motions of particles - subtracting one particle at a time from the scenario, if necessary... progressively simpler until we get all the way back to empty space. Somewhere in that conceptual progression we stopped having an experience of color there. Can you give me the faintest, slightest hint of where the magic transition occurs - where we go from "arrangement of particles that's an experience of color" to "arrangement of particles that's not an experience of color"?
I could also simply ask for you to indicate where in the magic arrangement of particles the color is. That is, assuming that you agree that one aspect of the existence of an experience of color is that something somewhere actually is that color. If it turns out that, according to you, brain state X is an experience of [red] only because the brain in question outputs the word "red" when queried, or only because a neural network somewhere is making the categorization "red" - then that is eliminativism. There's no actual [red], no actual color, just color words or color categories.
The reason it is obvious that there is no color inherently inhabiting an arrangement of particles in space is because it's easy to see what the available ontological ingredients are, and it's easy to see what you can and cannot make by combining them. If we include dynamics and a notion of causality, then the ingredients are position, time, and causal dependence. What can you construct from such ingredients? You can make complicated structures; you can make complicated motions; you can make complicated causal dependencies among structures and motions. As you can see, it's no mystery that such an ontological scheme can encompass something like a blink reflex, which is a type of motion with a specified causal dependency.
With respect to the historical case of vitalism, it's interesting that what the vitalists posited was a "vital force". That's not an objection to the logical possibility of reducing life, and especially replication, to matter in motion. They just didn't believe that the known forces were capable of producing the right sort of motion, so they felt the need to postulate a new, complicated form of causal interaction, capable of producing the complexly orchestrated motion which must be occurring for living things to take shape. As it turned out, there was no need to postulate a special vital force to do that; the orchestration can be produced by the same forces which are at work in nonliving matter.
I'm emphasizing the way in which the case of vitalism differs from the case of qualia, because it is so often cited as a historical precedent. The vitalists - at least, the ones who talked about vital forces - were not saying that life is not material. They just postulated an extra force; in that respect, they were proposing only a conservative extension to the physical ontology of their time. But the observation that consciousness presents a basic ontological problem, in a universe consisting of nothing but matter in motion through space, has been around for a very long time. Democritus took note of this objection. I think Leibniz stated it in a recognizably modern form. It is an old insight, and it has not gone away just because the physical sciences have been so successful. Celia Green writes that this success actually sharpens the problem: the clearer our conception of material ontology and our causal account of the world becomes, the more obvious it becomes that this concept and this account do not contain the "secondary qualities" like your [red].
Even at the dawn of modern physical science, in the time of Galileo, there was some discussion as to how these qualities were being put aside, in favor of an exclusive focus on space, time, motion, extension. It's quite amazing that from humble beginnings like Kepler's laws, we've come as far as quantum mechanics, string theory, molecular biology, all the time maintaining that exclusion. Some new ontological factors did enter the set of ingredients that physical ontology can draw upon, especially probability, but those elementary sensory qualities remain absent from the physical conception of reality. The 20th-century revolution in thought regarding information, communication, and computation goes just a little way towards bringing them back, but in the end it's nowhere near enough, because when you ask, what are these information states really, you end up having to reduce them to statistical properties of particles in space, because that's still all that the physical ontology gives you to work with.
I'm probably an idiot for responding at such length on this topic, because all my experience to date suggests that doing so changes nothing fundamentally. Some people get that there's a problem, but don't know how to solve it and can only hope that the future does so, or they embrace a fuzzy idea like emergent dualism or panpsychism out of intellectual desperation. Some people don't get that there's a problem - don't perceive, for example, that "what it feels like to be a bat" is an extra new property on top of all the ordinary physical properties that make up a bat - and are happy with a philosophical formula like "thought is computation".
I believe there is a problem to be solved, a severe problem, a problem of the first order, whose solution will require a change of perspective as big as the one which introduced us to the problem. Once, we had naive realism. The full set of objects and properties which experience reveals to us were considered equally real. They all played a part in the makeup of reality, to which the human mind had a partial but mysteriously direct access. Now, we have physics; ontological atomism, plus calculus. Amazingly, it predicts the behavior of matter with incredible precision, so it's getting something right. But mind, and everything that is directly experienced, has vanished from the model of reality. It hasn't vanished in reality; everything we know still comes to us through our minds, and through that same multi-sensory experience which was once naively identified with the world itself, and which we now call conscious experience. The closest approximation within the physical ontology to all of that is computation within the nervous system. But when you ask what neural computations are, physically, they once again reduce to matter in motion through space, and the same mismatch between the apparent character of experience, and the physical character of the brain, recurs. Since denying that experience has this distinct character is false and therefore hopeless, the only way out must be to somehow reconceive physical ontology so that it contains, by construction, consciousness as it actually is, and so that it preserves the causal structural relations (between fundamental entities whose inner nature is opaque and therefore undetermined by the theory) responsible for the success of quantitative predictions.
I imagine my manifesto there is itself opaque, if you're one of those people who don't get the problem to begin with. Nonetheless, I believe that is the principle which has to be followed in order to solve the problem of consciousness. It's still only the barest of beginnings, you still have to step into darkness and guess which way to turn, many times over, in order to get anywhere, and if my private ideas about how to proceed are right, then you have to take some really big leaps in the darkness. But that's the kernel of my answer.
Replies from: Will_Sawin, TheOtherDave↑ comment by Will_Sawin · 2011-01-10T02:14:34.602Z · LW(p) · GW(p)
Your remove-an-atom argument also disproves the existence of many other things, such as heaps of sand.
Let's try to communicate through intuition pumps:
Suppose I built a machine that could perceive the world, and make inferences about the world, and talk. Then of course (or with some significant probability), the things it directly perceives about the world would seem fundamentally, inextricably different from the things it infers about the world. It would insist that the colors of pixels could not consist solely of electrical impulses - they had to be, in addition, the colors of pixels.
Stolen from Dennett: You are not aware of your qualia, only of relationships between your qualia. I could swap red and green in your conscious experience, and I could swap them in your memories of conscious experience, and you wouldn't be able to tell the difference - your behavior would be the same either way.
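The swap-invariance can be sketched in a few lines of Python. This is a toy model of my own; the agent, the internal labels, and the stimuli are all made up for illustration. The point it demonstrates: if behavior depends only on relations among internal labels, permuting the labels everywhere (in current percepts and in memory alike) leaves every externally observable response unchanged.

```python
# Toy model of the qualia-swap intuition pump: an agent whose internal
# color labels are arbitrary symbols. Swapping the labels consistently,
# in perception and in memory, changes no observable behavior.

def make_agent(label_map):
    # label_map assigns an arbitrary internal symbol to each wavelength band.
    memory = []

    def perceive(wavelength_band):
        internal = label_map[wavelength_band]
        memory.append(internal)
        # Behavior depends only on *relations* among internal labels,
        # e.g. "have I seen this color before?" - never on the labels themselves.
        return memory.count(internal) > 1

    return perceive

original = {"long": "R", "medium": "G", "short": "B"}   # crudely: red, green, blue
swapped  = {"long": "G", "medium": "R", "short": "B"}   # red/green labels swapped

a1, a2 = make_agent(original), make_agent(swapped)
stimuli = ["long", "medium", "long", "short", "medium"]

# Identical behavior under the swap:
assert [a1(s) for s in stimuli] == [a2(s) for s in stimuli]
```

Note that the invariance holds only because the swap is applied everywhere at once; swapping percepts but not memories would be detectable, which is why the thought experiment stipulates swapping both.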
Two meditations on an optical illusion: I heard, possibly on lesswrong, that in illusions like this one: http://www.2dorks.com/gallery/2007/1011-illusions/12-kanizsatriangle.jpg your edge-detecting neurons fire at both the real and the fake edges.
Doesn't that image look exactly like what neurons detecting edges between neurons detecting white and neurons detecting white should look like?
Doesn't the conflict between a physical universe and conscious experience feel sort of like the conflict between uniform whiteness and edgeness?
↑ comment by Mitchell_Porter · 2011-01-10T11:35:33.191Z · LW(p) · GW(p)
My latest comment might clarify a few things. Meanwhile,
Your remove-an-atom argument also disproves the existence of many other things, such as heaps of sand.
No-one's telling me that a heap of sand has an "inside". It's a fuzzy concept and the fuzziness doesn't cause any problems because it's just a loose categorization. But the individual consciousness actually exists and is actually distinct from things that aren't it, so in a physical ontology it has to correspond to a hard-edged concept.
Suppose I built a machine that could perceive the world, and make inferences about the world, and talk. Then of course (or with some significant probability), the things it directly perceives about the world would seem fundamentally, inextricably different from the things it infers about the world. It would insist that the colors of pixels could not consist solely of electrical impulses - they had to be, in addition, the colors of pixels.
Consider Cyc. Isn't one of the problems of Cyc that it can't distinguish itself from the world? It can distinguish the Cyc-symbol from other symbols, but only in the same way that it distinguishes any symbol from any other symbol. Any attempt to make it treat the Cyc-symbol really differently requires that the Cyc-symbol gets special treatment on the algorithmic level.
In other words, so long as we talk simply about computation, there is nothing at all to inherently make an AI insist that its "experience" can't be made of physical entities. It's just a matter of ontological presuppositions.
As I've attempted to clarify in the new comment, my problem is not with subsuming consciousness into physics per se, it is specifically with subsuming consciousness into a particular physical ontology, because that ontology does not contain something as basic as perceived color, either fundamentally or combinatorially. To consider that judgement credible, you must believe that there is an epistemic faculty whereby you can tell that color is actually there. Which leads me to your next remark--
Stolen from Dennett: You are not aware of your qualia, only of relationships between your qualia. I could swap red and green in your conscious experience, and I could swap them in your memories of conscious experience, and you wouldn't be able to tell the difference - your behavior would be the same either way.
--and so obviously I'm going to object to the assumption that I'm not aware of my qualia. If you performed the swap as described, I wouldn't know that it had occurred, but I'd still know that red and green are there and are real; and I would be able to tell the difference between an ontology in which they exist, and an ontology in which they don't.
Doesn't that image look exactly like what neurons detecting edges between neurons detecting white and neurons detecting white should look like?
A neuron is a glob of trillions of atoms doing inconceivably many things at once. You're focusing on a few of the simple differential sub-perceptions which make up the experience of looking at that image, associating them in your mind with certain gross changes of state in that glob of atoms, and proposing that the experience is identical to a set of several such simultaneous changes occurring in a few neurons. In doing so, you're neglecting both the bulk of the physical events occurring elsewhere in the neurons, and the fundamental dissimilarity between "staring at a few homogeneous patches of color" and "billions of ions cascading through a membrane".
Doesn't the conflict between a physical universe and conscious experience feel sort of like the conflict between uniform whiteness and edgeness?
It's more like the difference between night and day. It is possible to attain a higher perspective which unifies them, but you don't get there by saying that day is just night by another name.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-01-10T22:13:09.600Z · LW(p) · GW(p)
No-one's telling me that a heap of sand has an "inside". It's a fuzzy concept and the fuzziness doesn't cause any problems because it's just a loose categorization. But the individual consciousness actually exists and is actually distinct from things that aren't it, so in a physical ontology it has to correspond to a hard-edged concept.
Degree-of-existence seems likely to be well-defined and useful, and may play a part in, for example, quantum mechanics.
However, my new response to your argument is that, if you're not denying current physics, but just ontologically reorganizing it, then you're vulnerable to the same objection. You can declare something to be Ontologically Fundamental, but it will still mathematically be a heap of sand, and you can still physically remove a grain. We're all in the same boat.
Consider Cyc. Isn't one of the problems of Cyc that it can't distinguish itself from the world? It can distinguish the Cyc-symbol from other symbols, but only in the same way that it distinguishes any symbol from any other symbol. Any attempt to make it treat the Cyc-symbol really differently requires that the Cyc-symbol gets special treatment on the algorithmic level.
Do you think Cyc could not be programmed to treat itself differently from others without use of a quantum computer? If not, how can you make inferences about quantum entanglement from facts about our programming?
Does Cyc have sensors or something? If it does/did, it seems like it would algorithmically treat raw sensory data as separate from symbols and world-models.
In other words, so long as we talk simply about computation, there is nothing at all to inherently make an AI insist that its "experience" can't be made of physical entities. It's just a matter of ontological presuppositions.
Is there anything to inherently prevent it from insisting that? Should we accept our ontological presuppositions at face value?
I would be able to tell the difference between an ontology in which they exist, and an ontology in which they don't.
No you wouldn't. People can't tell the difference between ontologies any more than math changes if you print its theorems in a different color. People can tell the difference between different mathematical laws of physics, or different arrangements of stuff within those laws. What you notice is that you have a specific class of gensyms that can't have relations of reduction to other symbols, or something else computational. Facts about ontology are totally orthogonal to facts about things that influence what words you type.
A neuron is a glob of trillions of atoms doing inconceivably many things at once. You're focusing on a few of the simple differential sub-perceptions which make up the experience of looking at that image, associating them in your mind with certain gross changes of state in that glob of atoms, and proposing that the experience is identical to a set of several such simultaneous changes occurring in a few neurons. In doing so, you're neglecting both the bulk of the physical events occurring elsewhere in the neurons, and the fundamental dissimilarity between "staring at a few homogeneous patches of color" and "billions of ions cascading through a membrane".
My consciousness is a computation based mainly or entirely on regularities the size of a single neuron or bigger, much like the browser I'm typing in is based on regularities the size of a transistor. I wouldn't expect to notice if my images were, really, fundamentally, completely different. I wouldn't expect to notice if something physical changed - the number of ions was cut by a factor of a million and given the opposite charge - but the functions from impulses to impulses computed by neurons stayed the same.
It's more like the difference between night and day. It is possible to attain a higher perspective which unifies them, but you don't get there by saying that day is just night by another name.
Uniform color and edgeness are as different as night and day.
Replies from: Mitchell_Porter, Mitchell_Porter↑ comment by Mitchell_Porter · 2011-01-14T11:37:57.024Z · LW(p) · GW(p)
(part 1 of reply)
No-one's telling me that a heap of sand has an "inside". It's a fuzzy concept and the fuzziness doesn't cause any problems because it's just a loose categorization. But the individual consciousness actually exists and is actually distinct from things that aren't it, so in a physical ontology it has to correspond to a hard-edged concept.
Degree-of-existence seems likely to be well-defined and useful, and may play a part in, for example, quantum mechanics.
However, my new response to your argument is that, if you're not denying current physics, but just ontologically reorganizing it, then you're vulnerable to the same objection. You can declare something to be Ontologically Fundamental, but it will still mathematically be a heap of sand, and you can still physically remove a grain. We're all in the same boat.
This is why I said (to TheOtherDave, and not in these exact words) that the mental can be identified with the physical only if there is a one-to-one mapping between exact physical states and exact conscious states. If it is merely a many-to-one mapping - many exact physical states correspond to the same conscious state - then that's property dualism.
When you say, later on, that your consciousness "is a computation based mainly or entirely on regularities the size of a single neuron or bigger", that implies dualism or eliminativism, depending on whether you accept that qualia exist. Believe what I quoted, and that qualia exist, and you're a dualist; believe what I quoted, and deny that qualia exist (which amounts to saying that consciousness and the whole world of appearance doesn't really exist, even as appearance), and you're an eliminativist. This is because a many-to-one mapping isn't an identity.
"Degrees of existence", by the way, only makes sense insofar as it really means "degrees of something else". Existence, like truth, is absolute.
Consider Cyc. Isn't one of the problems of Cyc that it can't distinguish itself from the world? It can distinguish the Cyc-symbol from other symbols, but only in the same way that it distinguishes any symbol from any other symbol. Any attempt to make it treat the Cyc-symbol really differently requires that the Cyc-symbol gets special treatment on the algorithmic level.
Do you think Cyc could not be programmed to treat itself differently from others without use of a quantum computer? If not, how can you make inferences about quantum entanglement from facts about our programming?
Does Cyc have sensors or something? If it does/did, it seems like it would algorithmically treat raw sensory data as separate from symbols and world-models.
My guess that quantum entanglement matters for conscious cognition is an inference from facts about our phenomenology, not facts about our programming. Because I prefer the monistic alternative to the dualistic one, and because the program Cyc is definitely "based on regularities the size of a transistor", I would normally say that Cyc does not and cannot have thoughts, perceptions, beliefs, or other mental properties at all. All those things require consciousness, consciousness is only a property of a physical ontological unity, the computer running Cyc is a causal aggregate of many physical ontological unities, ergo it only has these mentalistic properties because of the imputations of its users, just as the words in a book only have their meanings by convention. When you introduced your original thought-experiment--
Suppose I built a machine that could perceive the world, and make inferences about the world, and talk. Then of course (or with some significant probability), the things it directly perceives about the world would seem fundamentally, inextricably different from the things it infers about the world. It would insist that the colors of pixels could not consist solely of electrical impulses - they had to be, in addition, the colors of pixels.
--maybe I should have gone right away to the question of whether these "perceptions" are actually perceptions, or whether they are just informational states with certain causal roles, and how this differs from true perception. My answer, by the way, is that being an informational state with a causal role is necessary but not sufficient for something to be a perceptual state. I would add that it also has to be "made of qualia" or "be a state of a physical ontological unity" - both these being turns of phrase which are a little imprecise, but which hopefully foreshadow the actual truth. It comes down to what ought to be a tautology: to actually be a perception of red, there has to be some red actually there. If there isn't, you just have a simulation.
Just for completeness, I'll say again that I prefer the monistic alternative, but it does seem to imply that consciousness is to be identified with something fundamental, like a set of quantum numbers, rather than something mesoscopic and semiclassical, like a coarse-grained charge distribution. If that isn't how it works, the fallback position is an informational property dualism, and what I just wrote would need to be modified accordingly.
Back to your questions about Cyc. Rather than say all that, I countered your original thought-experiment with an anecdote about Douglas Lenat's Cyc program. The anecdote (as conveyed, for example, in Eliezer's old essay "GISAI") is that, according to Lenat, Cyc knows about Cyc, but it doesn't know that it is Cyc. But then Lenat went and said to Wired that Cyc is self-aware. So I don't know the finer details of his philosophical position.
What I was trying to demonstrate was the indeterminate nature of machine experience, machine assertions about ontology as based upon experience, and so on. Computation is about behavior and about processes which produce behavior. Consciousness is indeed a process which produces behavior, but that doesn't define what it is. However, the typical discussion of the supposed thoughts, beliefs, and perceptions of an artificial intelligence breezes right past this point. Specific computational states in the program get dubbed "thoughts", "desires" and so on, on the basis of a loose structural isomorphism to the real thing, and then the discussion about what the AI feels or wants (and so on) proceeds from there. The loose basis on which these terms are used can easily lead to disagreements - it may even have led Lenat to disagree with himself.
In the absence of a rigorous theory of consciousness it may be impossible to have such discussions without some loose speculation. But my point is that if you take the existence of consciousness seriously, it renders very problematic a lot of the identifications which get made casually. The fact that there is no red in physical ontology (or at least in current physical ontology); the fact that from a fundamental perspective these are many-to-one mappings, and a many-to-one mapping can't be an identity - these facts are simple but they have major implications for theorizing about consciousness.
So, finally answering your questions: 1. yes, it could be programmed to treat itself as something special, and 2. sense data would surely be processed differently, but there's a difference between implicit and explicit categorizations (see remarks about ontology, below). But my meta-answer is that these are solely issues about computation, which have no implications for consciousness until we adopt a particular position about the relationship between computation and consciousness. And my argument is that the usual position - a casual version of identity theory - is not tenable. Either it's dualism, or it's a monism made possible by exotic neurophysics.
(continued)
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-01-14T15:38:52.063Z · LW(p) · GW(p)
This is why I said (to TheOtherDave, and not in these exact words) that the mental can be identified with the physical only if there is a one-to-one mapping between exact physical states and exact conscious states. If it is merely a many-to-one mapping - many exact physical states correspond to the same conscious state - then that's property dualism.
Since there's a many-to-one mapping between physical states and temperatures, am I a temperature dualist? Would it be any less dualist to define a one-to-one mapping between physical states of glasses of water and really long strings? (You can assume that I insist that temperature and really long strings are real.)
[this point has low relevance]
Believe what I quoted, and that qualia exist, and you're a dualist; believe what I quoted, and deny that qualia exist (which amounts to saying that consciousness and the whole world of appearance doesn't really exist, even as appearance), and you're an eliminativist.
It seems like we can cash out the statement "It appears to X that Y" as a fact about an agent X that builds models of the world which have the property Y. It appears to the brain I am talking to that qualia exist. It appears to the brain that is me that qualia exist. Yet this is not any evidence of the existence of qualia.
"Degrees of existence", by the way, only makes sense insofar as it really means "degrees of something else". Existence, like truth, is absolute.
Degrees of existence come from what is almost certainly a harder philosophical problem about which I am very confused.
My guess that quantum entanglement matters for conscious cognition is an inference from facts about our phenomenology, not facts about our programming.
Facts about your phenomenology are facts about your programming! If you can type them into a computer, they must have a physical cause tracing back through your fingers, up a nerve, and through your brain. There is no rule in science that says that large-scale quantum entanglement makes this behavior more or less likely, so there is no evidence for large-scale quantum entanglement.
But my meta-answer is that these are solely issues about computation, which have no implications for consciousness until we adopt a particular position about the relationship between computation and consciousness.
My point is that the evidence for consciousness, that various humans such as myself and you believe that they are conscious, can be cashed out as a statement about computation, and computation and consciousness are orthogonal, so we have no evidence for consciousness.
If someone tells me that the universe is made of nothing but love, and I observe that hate exists and that this falsifies their theory, then I've made a judgement about an ontology both at a logical and an empirical level. That's what I was talking about, when I said that if you swapped red and green, I couldn't detect the swap, but I'd still know empirically that color is real, and I'd still be able to make logical judgements about whether an ontology (like current physical ontology) contains such an entity.
A: "The universe is made out of nothing but love"
B: "What are the properties of ontologically fundamental love?"
A: "[The equations that define the standard model of quantum mechanics]"
B: "I have no evidence to falsify that theory."
A: "Or balloons. It could be balloons."
B: "What are the properties of ontologically fundamental balloons?"
A: "[the standard model of quantum theory expressed using different equations]"
B: "There is no evidence that can discriminate between those theories."
... if gensyms only exist on that scale, and if changes like those which you describe make no difference to experience, then you ought to be a dualist, because clearly the experience is not identical to the physical in this scenario. It is instead correlated with certain physical properties at the neuronal scale.
I'm a reductive materialist for statements - I don't see the problem with reading statements about consciousness as statements about quarks. Ontologically I suppose I'm an eliminative materialist.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2011-01-18T08:57:40.074Z · LW(p) · GW(p)
Since there's a many-to-one mapping between physical states and temperatures, am I a temperature dualist? Would it be any less dualist to define a one-to-one mapping between physical states of glasses of water and really long strings? (You can assume that I insist that temperature and really long strings are real.)
The ontological status of temperature can be investigated by examining a simple ontology where it can be defined exactly, like an ideal gas in a box where the "atoms" interact only through perfectly elastic collisions. In such a situation, the momentum of an individual atom is an exact property with causal relevance. We can construct all sorts of exact composite properties by algebraically combining the momenta, e.g. "the square of the momentum of atom A minus the square root of the momentum of atom B", which I'll call property Z. But probably we don't want to say that property Z exists, in the way that the momentum-property does. The facts about property Z are really just arithmetic facts, facts about the numbers which happen to be the momenta of atoms A and B, and the other numbers they give rise to when combined. Property Z isn't playing a causal role in the physics, but the momentum property does.
Now, what about temperature? It has an exact definition: the average kinetic energy of an atom. But is it like "property" Z, or like the property of momentum? I think one has to say it's like property Z - it is a quantitative construct without causal power. It is true that if we know the temperature, we can often make predictions about the gas. But this predictive power appears to arise from logical relations between constructed meta-properties, and not because "temperature" is a physical cause. It's conceptually much closer than property Z to the level of real causes, but when you say that the temperature caused something, it's ultimately always a shorthand for what really happened.
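The contrast between the per-atom momenta and derived quantities like temperature or "property Z" can be sketched numerically. This is a toy calculation of my own, assuming an ideal gas with unit masses in natural units; its only point is that both quantities are deterministic functions of the microstate, so neither adds a causal ingredient beyond the individual momenta. (Since momenta can be negative, the "square root of the momentum" in property Z is taken of the absolute value here.)

```python
import random

# Toy ideal gas: the microstate is just a list of per-atom momenta.
# Everything below is *computed from* that microstate; no dynamics
# would ever need to consult "temperature" or "property Z" directly.
random.seed(0)
momenta = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Temperature-like quantity: average kinetic energy per atom,
# KE = p^2 / 2m with m = 1, so the expected value here is ~0.5.
temperature = sum(p**2 / 2 for p in momenta) / len(momenta)

# "Property Z": an equally exact, but arbitrary, algebraic combination
# of two momenta - well-defined, predictively usable, causally idle.
z = momenta[0]**2 - abs(momenta[1])**0.5

print(temperature, z)
```

The asymmetry the comment describes shows up as a fact about the code: the momenta are the state, while `temperature` and `z` are bookkeeping on top of it, and deleting either derived variable changes nothing about how the gas would evolve.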
When we apply all this to coarse-grained computational states, and their identification with mental states, I actually find myself making, not the argument that I intended (about many-to-one mappings), but another one, an argument against the validity of such an identification, even if it is conceived dualistically. It's the familiar observation that the mental states become epiphenomenal and not actually causally responsible for anything. Unless one is willing to explicitly advocate epiphenomenalism, then mental states must be regarded as causes. But if they are just a shorthand for complicated physical details, like temperature, then they are not causes of anything.
So: if you were to insist that temperature is a fundamental physical cause and not just a shorthand for microphysical complexities, then you would not only be a dualist, you would be saying something in contradiction with the causal model of the world offered by physics. It would be a version of phlogiston theory.
As for the "one-to-one mapping between physical states of glasses of water and really long strings" - I assume those are symbol-strings, not super-strings? Anyway, the existence of a one-to-one mapping is a necessary but not a sufficient condition for a proposed identity statement to be plausible. If you're saying that a physical glass of water really is a string of symbols, you'd be bringing up a whole other class of ontological mistakes that we haven't touched on so far, but which is increasingly endemic in computer-science metaphysics, namely the attempt to treat signs and symbols as ontologically fundamental.
It seems like we can cash out the statement "It appears to X that Y" as a fact about an agent X that builds models of the world which have the property Y.
I actually disagree with this, but thanks for highlighting the idea. The proposed reduction of "appearance" to "modeling" is one of the most common ways in which consciousness is reduced to computation. As a symptom of ontological error, it really deserves a diagnosis more precise than I can provide. But essentially, in such an interpretation, the ontological problem of appearance is just being ignored or thrown out, and all attention directed towards a functionally defined notion of representation; and then this throwing-out of the problem is passed off as an account of what appearance is.
Every appearance has an existence. It's one of the intriguing pseudo-paradoxes of consciousness that you can see something which isn't there. That ought to be a contradiction, but what it really means is that there is an appearance in your consciousness which does not correspond to something existing outside of your consciousness. Appearances do exist even when what they indicate does not exist. This is the proof (if such were needed) that appearances do exist. And there is no account of their existential character in a discourse which just talks about an agent's modeling of the world.
It appears to the brain I am talking to that qualia exist. It appears to the brain that is me that qualia exist. Yet this is not any evidence of the existence of qualia.
You are just sabotaging your own ability to think about consciousness, by inventing reasons to ignore appearances.
Facts about your phenomenology are facts about your programming!
No...
If you can type them into a computer, they must have a physical cause tracing back through your fingers, up a nerve, and through your brain.
Those are facts about my ability to communicate my phenomenology.
What's more interesting to think about is the nature of reflective self-awareness. If I'm able to say that I'm seeing red, it's only because, a few steps back, I'm able to "see" that I'm seeing red; there's reflective awareness within consciousness of consciousness. There's a causal structure there, but there's also a non-causal ontological structure, some form of intentionality. It's this non-causal constitutive structure of consciousness which gets passed by in the computational account of reflection. The sequence of conscious states is a causally connected sequence of intentional states, and intentionality, like qualia, is one of the things that is missing in the standard physical ontology.
There is no rule in science that says that large-scale quantum entanglement makes this behavior more or less likely, so there is no evidence for large-scale quantum entanglement.
The appeal to quantum entanglement is meant to make possible an explanation of the ontology of mind revealed by phenomenology, it's not meant to explain how we are subsequently able to think about it and talk about it, though of course it all has to be connected.
My point is that the evidence for consciousness, that various humans such as myself and you believe that they are conscious, can be cashed out as a statement about computation, and computation and consciousness are orthogonal, so we have no evidence for consciousness.
Once again, appearance is being neglected in this passage, this time in favor of belief. To admit that something appears is necessarily to give it some kind of existential status.
B: "What are the properties of ontologically fundamental love?"
A: "[The equations that define the standard model of quantum mechanics]"
The word "love" already has a meaning, which is not exactly easy to map onto the proposed definition. But in any case, love also has a subjective appearance, which is different to the subjective appearance of hate, and this is why the experience of hate can falsify the theory that only love exists.
I'm a reductive materialist for statements - I don't see the problem with reading statements about consciousness as statements about quarks.
Intentionality, qualia, and the unity of consciousness; none of those things exist in the world of quarks as point particles in space.
Ontologically I suppose I'm an eliminative materialist.
The opposite sort of error to religion. In religion, you believe in something that doesn't exist. Here, you don't believe in something that does exist.
comment by Will_Sawin · 2011-01-19T00:08:46.561Z
The central point about the temperature example is that facts about which properties really exist and which are just combinations of others are mostly, if not entirely, epiphenomenal. For instance, we can store the momenta of the particles, or their masses and velocities. There are many invertible functions we could apply to phase space, some of which would keep the calculations simple and some of which would not, but it's very unclear, and for most purposes irrelevant, which is the real one.
So when you say that X is/isn't ontologically fundamental, you aren't doing so on the basis of evidence.
But if they are just a shorthand for complicated physical details, like temperature, then they are not causes of anything.
Causality, if we have the true model of physics, is defined by counterfactuals. If we hold everything else constant and change X to Y, what happens? So if we have a definition of "everything else constant" wrt mental states, we're done. We certainly can construct one wrt temperature (linearly scale the velocities.)
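Concretely, the "linearly scale the velocities" counterfactual can be sketched in a few lines (a toy ideal gas with unit masses and units where k_B = 1; these conventions are my own simplification):

```python
import random

# Toy ideal-gas microstate: 1000 particles with 1-D velocities, unit mass.
random.seed(0)
velocities = [random.gauss(0.0, 1.0) for _ in range(1000)]

def temperature(vs):
    """Temperature in units where k_B = 1: proportional to mean kinetic energy."""
    return sum(v * v for v in vs) / len(vs)

T0 = temperature(velocities)

# The counterfactual "set the temperature to 2*T0, everything else constant":
# rescale every velocity by sqrt(2), leaving positions and all other degrees
# of freedom untouched.
scaled = [v * (2.0 ** 0.5) for v in velocities]
T1 = temperature(scaled)

assert abs(T1 - 2.0 * T0) < 1e-9  # the intervention exactly doubled the temperature
```

So a well-defined "hold everything else constant" operation exists for temperature, even though temperature is not a fundamental property of any one particle.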
Anyway, the existence of a one-to-one mapping is a necessary but not a sufficient condition for a proposed identity statement to be plausible
What are the other conditions?
Appearances do exist even when what they indicate does not exist.
is a fact about complex arrangements of quarks.
Those are facts about my ability to communicate my phenomenology.
Your ability to communicate your phenomenology traces backwards along a clear causal path through a series of facts, each of which is totally orthogonal to facts about what is ontologically fundamental.
Since your phenomenology, you claim, is a fact about what is ontologically fundamental, it stretches my sense of plausibility that your phenomenology and your ability to communicate your phenomenology are causally unrelated.
What's more interesting to think about is the nature of reflective self-awareness. If I'm able to say that I'm seeing [red], it's only because, a few steps back, I'm able to "see" that I'm seeing [red]; there's reflective awareness within consciousness of consciousness. There's a causal structure there, but there's also a non-causal ontological structure, some form of intentionality. It's this non-causal constitutive structure of consciousness which gets passed by in the computational account of reflection. The sequence of conscious states is a causally connected sequence of intentional states, and intentionality, like qualia, is one of the things that is missing in the standard physical ontology.
Non-causal ontological structure is suspicious.
The appeal to quantum entanglement is meant to make possible an explanation of the ontology of mind revealed by phenomenology, it's not meant to explain how we are subsequently able to think about it and talk about it, though of course it all has to be connected.
but it's not connected! Quantum entanglement is totally disconnected from how we are able to think about and talk about it!
Either quantum entanglement is disconnected from consciousness, or consciousness is disconnected from thinking and talking about consciousness.
The word "love" already has a meaning, which is not exactly easy to map onto the proposed definition.
In your scenario, you are proposing a 1-to-1 mapping between the properties of ontologically fundamental experiences and standard quantum mechanics.
comment by Mitchell_Porter · 2011-01-26T07:38:34.110Z
(part 2)
Your ability to communicate your phenomenology traces backwards along a clear causal path through a series of facts, each of which is totally orthogonal to facts about what is ontologically fundamental.
Since your phenomenology, you claim, is a fact about what is ontologically fundamental, it stretches my sense of plausibility that your phenomenology and your ability to communicate your phenomenology are causally unrelated.
I'll quote myself: "The appeal to quantum entanglement is meant to make possible an explanation of the ontology of mind revealed by phenomenology, it's not meant to explain how we are subsequently able to think about it and talk about it, though of course it all has to be connected."
Earlier in this comment, I gave a very vague sketch of a quantum Cartesian theater which interacts with neighboring quantum systems in the brain, at the apex of the causal chains making up the sensorimotor pathways. The fact that we can talk about all this can be explained in that way.
The root of this disagreement is your statement that "Facts about your phenomenology are facts about your programming". Perhaps you're used to identifying phenomenology with talk about appearances, but it refers originally to the appearances themselves. My phenomenology is what I experience, not just what I say about it. It's not even just what I think about it; it's clear that the thought "I am seeing [red]" arises in response to a [red] that exists before and apart from the thought.
Non-causal ontological structure is suspicious.
This doesn't mean ontological structure that has no causal relations; it means ontological structure that isn't made of causality. A causal sequence is a structure that is made of causality. But if the individual elements of the sequence have internal structure, it's going to be ontologically non-causal. A data structure might serve as an example of a non-causal structure. So would a spatially extended arrangement of particles. It's a spatial structure, not a causal structure.
it's not connected! Quantum entanglement is totally disconnected from how we are able to think about and talk about it!
Either quantum entanglement is disconnected from consciousness, or consciousness is disconnected from thinking and talking about consciousness.
Could you revisit this point in the light of what I've now said? What sort of disconnection are you talking about?
The word "love" already has a meaning, which is not exactly easy to map onto the proposed definition.
In your scenario, you are proposing a 1-to-1 mapping between the properties of ontologically fundamental experiences and standard quantum mechanics.
Let's revisit what this branch of the conversation was about.
I was arguing that it's possible to make judgements about the truth of a proposed ontology, just on the basis of a description. I had in mind the judgement that there's no [red] in a world of colorless particles in space; reaching that conclusion should not be a problem. But, since you were insisting that "people can't tell the difference between ontologies", I tried to pull out a truly absurd example (though one that occasionally gets lip service from mystically minded people) - that only love exists. I would have thought that a moment's inspection of the world, or of one's memories of the world, would show that there are things other than love in existence, even if you adopt total Cartesian skepticism about anything beyond immediate experience.
Your riposte was to imagine an advocate of the all-is-love theory who, when asked to provide the details, says "quantum mechanics". I said it's rather hard to interpret QM that way, and you pointed out that I'm trying to get experience from QM. That's clever, except that I would have to be saying that the world of experience is nothing but love, and that QM is nothing but the world of experience. My actual thesis is that conscious experience is the state of some particular type of quantum system, so the emotions do have to be in the theory somewhere. But I don't think you can even reduce the other emotions to the emotion of love, let alone the non-emotional aspects of the mind, so the whole thing is just silly.
Then you had your advocate go on to speak in favor of the all-is-balloons theory, again with QM providing the details. I think you radically overestimate the freedom one has to interpret a mathematical formalism and still remain plausible or even coherent.
What we say using natural language is not just an irrelevant, interchangeable accessory to what we say using equations. Concepts can still have a meaning even if it's only expressed informally, and one of the underappreciated errors of 20th-century thought is the belief that formalism validates everything: that you can say anything about a topic and it's valid to do so, if you're saying it with a formalism. A very minor example is the idea of a "noncommutative probability". In quantum theory, we have complex numbers, called probability amplitudes, which appear as an intermediate stage prior to the calculation of numbers that are probabilities in the legitimate sense - lying between 0 and 1, expressing relative frequency of an outcome. There is a formalism of this classical notion of probability, due to Kolmogorov. You can generalize that formalism, so that it is about probability amplitudes, and some people call that a theory of "noncommutative probability". But it's not actually a theory of probability any more. A "noncommutative probability" is not a probability; that's why probability amplitudes are so vexatious to interpret. The designation, "noncommutative probability", sweeps the problem under the carpet. It tells us that these mysterious non-probabilities are not mysterious; they are probabilities - just ... different. There can be a fine line between "thinking like reality" and fooling yourself into thinking that you understand.
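A minimal numerical illustration of the point (assuming nothing beyond the standard Born rule, p = |amplitude|²):

```python
import cmath

# Two paths to the same outcome, each with a complex probability amplitude.
a1 = cmath.exp(1j * 0.0) / 2 ** 0.5       # amplitude via path 1
a2 = cmath.exp(1j * cmath.pi) / 2 ** 0.5  # amplitude via path 2, opposite phase

# A classical probability theory would simply add the two probabilities.
p_classical = abs(a1) ** 2 + abs(a2) ** 2  # = 1.0

# But amplitudes add *before* being squared, so they can interfere.
p_quantum = abs(a1 + a2) ** 2              # destructive interference: ~0.0

assert abs(p_classical - 1.0) < 1e-12
assert abs(p_quantum) < 1e-12  # the outcome never happens at all
```

An amplitude can be negative or complex, and two of them can sum to zero; no quantity behaving like that lies between 0 and 1 or expresses a relative frequency, which is why calling it a "noncommutative probability" relabels the mystery rather than resolving it.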
All that's a digression, but the idea that QM could be the formal theory of any informal concept you like, tastes of a similar disregard for the prior meanings of words.
comment by Will_Sawin · 2011-01-27T03:14:32.488Z
Temperature is an average. All individual information about the particles is lost, so you can't invert the mapping from exact microphysical state to thermodynamic state.
So divide the particle velocities by temperature or whatever.
Most of the invertible functions you mention would reduce to one of a handful of non-redundant functions, obfuscated by redundant complexity.
How do you tell what's redundant complexity and what's ontologically fundamental? Position or momentum model of quantum mechanics, for instance?
Now I'd add that the derived nature of macroscopic "causes" is also a problem, if you want to have the usual materialist ontology of mind and you also want to say that mental states are causes.
What bothers me about your viewpoint is that you are solving the problem that, in your view, some things are epiphenomenal by making an epiphenomenal declaration - the statement that they are not epiphenomenal, but rather, fundamental.
So I posit the existence of what Dennett calls a "Cartesian theater", a place where the seeing actually happens and where consciousness is located; it's the end of the sensory causal chain and the beginning of the motor causal chain. And I further posit that, in current physical language, this place is a "quantum system", not just a classically distributed neural network; because this would allow me to avoid the problems of many-to-one mappings and of derived macroscopic causality. That way, the individual conscious mind can have genuine causal relations with other objects in the world (the simpler quantum systems that are its causal neighbors in the brain).
Is there anything about your or anyone else's actions that provides evidence for this hypothesis?
"genuine" causal relations is much weaker than "ontologically fundamental" relations.
Do only pure qualia really exist? Do beliefs, desires, etc. also exist?
That's way too hard, so I'll just illustrate the original point: You can map a set of three donkeys onto a set of three dogs, one-to-one, but that doesn't let you deduce that a dog is a donkey.
You can map a set of three quantum states onto a set of {[red], [green], [blue]}
This doesn't mean ontological structure that has no causal relations; it means ontological structure that isn't made of causality. A causal sequence is a structure that is made of causality. But if the individual elements of the sequence have internal structure, it's going to be ontologically non-causal. A data structure might serve as an example of a non-causal structure. So would a spatially extended arrangement of particles. It's a spatial structure, not a causal structure.
No, it means ontological structure - not structures of things, but the structure of a thing's ontology - that doesn't say anything about the things themselves, just about their ontology.
Could you revisit this point in the light of what I've now said? What sort of disconnection are you talking about?
A logical/probabilistic one. There is no evidence for a correlation between the statements "These beings have large-scale quantum entanglement" and "These beings think and talk about consciousness"
That's clever, except that I would have to be saying that the world of experience is nothing but love, and that QM is nothing but the world of experience
You would have to be saying that to be exactly the same as your character. You're contrasting two views here. One thinks the world is made up of nothing but STUFF, which follows the laws of quantum mechanics. The other thinks the world is made up of nothing but STUFF and EXPERIENCES. If you show them a quantum state, and tell the first guy "the stuff is in this arrangement" and the second guy "the stuff is in this arrangement, and the experiences are in that arrangement", they agree exactly on what happens, except that the second guy thinks that some of the things that happen are not stuff, but experiences.
That doesn't seem at all suspicious to you?
All that's a digression, but the idea that QM could be the formal theory of any informal concept you like, tastes of a similar disregard for the prior meanings of words.
You are correct. "balloons" refers to balloons, not to quarks.
I guess what's going on is that the guy is saying that's what he believes balloons are.
But thinking about the meaning of words is clarifying.
It seems like the question is almost - "Is 'experience' a word like phlogiston or a word like elephant?"
More or less, whatever has been causing us to see all those elephants gets to be called an elephant. Elephants are reductionism-compatible. There are some extreme circumstances - images of elephants I have seen are fabrications, the people who claim to have seen elephants are lying to me - that break this rule. Phlogiston, on the other hand, is a word we give up on much more readily. Heat is particles bouncing around, but the absence of oxygen is not phlogiston - it's just the absence of oxygen.
You believe that "experience" is fundamentally incompatible with reduction. An experience, to exist at all, must be an ontologically fundamental experience. Thus saying "I see red" makes two claims - one, that the brain is in a certain class of its possible total configuration states, those in which the person is seeing red, and two, that the experience of seeing red is ontologically fundamental.
I see no way to ever get the physical event of people claiming that they experience color correlated with the ontological fundamentalness of their color experience, as we can investigate the phlogiston hypothesis and stop using it if and only if it turns out to be a bad model.
What is a claim when it's not correlated with its subject? The whole point of the words within it has been irrevocably lost. It is pure speculation.
I really, really don't think, that when I say I see red, I'm just speculating.
comment by Mitchell_Porter · 2011-02-04T05:55:53.528Z
It's almost a month since we started this discussion, and it's a bit of a struggle to remember what's important and what's incidental. So first, a back-to-basics statement from me.
Colors do exist, appearances do exist; that's nonnegotiable. That they do not exist in an ontology of "nothing but particles in space" is also, fundamentally, nonnegotiable. I will engage in debates as to whether this is so, but only because people are so amazingly reluctant to see it, and the implication that their favorite materialistic theories of mind actually involve property dualism, in which color (for example) is tied to a particular structure or behavior of particles in the brain, but can't be identified with it.
We aren't like the ancient atomists who only had an informal concept of the world as atoms in a void, we have mathematical theories of physics, so a logical further question is whether these mathematical theories can be interpreted so that some of the entities they posit can be identified with color, with "experiences", and so on.
Here I'd say there are two further important facts. First, an experience is a whole and has to be tackled as a whole. Patches of color are just a part of a multi-sensory whole, which in turn is just the sensory aspect of an experience which also has a conceptual element, temporal flow, a cognitive frame locating current events in a larger context, and so on. Any fundamental theory of reality which purports to include consciousness has to include this whole, it can't just talk about atomized sensory qualia.
Second, any theory which says that the elementary degrees of freedom in a conscious state correspond to averaged collective physical degrees of freedom will have to involve property dualism. That's because it's a many-to-one mapping (from physical states to conscious states), and a many-to-one mapping can't be an identity.
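A toy sketch of why a many-to-one mapping can't be an identity (the microstates and the coarse-graining here are illustrative, not a model of the brain):

```python
# Two distinct microstates: the same velocities in a different arrangement.
microstate_a = (1.2, -0.7, 0.3)
microstate_b = (-0.3, 0.7, 1.2)

def macrostate(ms):
    """Coarse-graining: keep only the mean square velocity, a temperature-like
    averaged collective degree of freedom."""
    return round(sum(v * v for v in ms) / len(ms), 6)

# The microstates differ, but they map to one and the same macrostate.
assert microstate_a != microstate_b
assert macrostate(microstate_a) == macrostate(microstate_b)
```

Since two different microstates land on one macrostate, the mapping has no inverse; the macrostate cannot be *identical* to either microstate, so pairing conscious states with such averaged quantities leaves you with correlation between two kinds of property, i.e. property dualism.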
All that is the starting point for my line of thought, which is an attempt to avoid property dualism. I want to have something in my mathematical theory of reality which simply is the bearer of conscious states, has the properties and structure of a conscious whole, and is appropriately located in the causal chain. Since the mathematics describing a configuration of particles in space seems very unpromising for such a reinterpretation; and since our physics is quantum mechanics anyway, and the formalism of quantum mechanics contains entangled wavefunctions that can't be factorized into localized wavefunctions, it's quite natural to look for these conscious wholes in some form of QM where entanglement is ontological. However, since consciousness is in the brain and causally relevant, this implies that there must be a functionally relevant brain subsystem that is in a quantum coherent state.
That is the argument which leads me from "consciousness is real" to "there's large-scale quantum entanglement in the brain". Given the physics we have, it's the only way I see to avoid property dualism, and it's still just a starting point, on every level: mathematically, ontologically, and of course neurobiologically. But that is the argument you should be scrutinizing. What's at stake in some of our specific exchanges may be a little obscure, so I wanted to set down the main argument in one piece, in one place, so you could see what you're dealing with.
comment by Will_Sawin · 2011-02-04T18:30:00.795Z
I will lay down the main argument convincing me that you're incorrect.
Consider the three statements:

1. "there's large-scale quantum entanglement in the brain"
2. "consciousness is real"
3. "Mitchell Porter says that consciousness is real."
Your inference requires that 1 and 2 are correlated. It is non-negotiable that 2 and 3 are correlated. There is no special connection between 1 and 3 that would make them uncorrelated.
However, 1 and 3 are both clearly-defined physical statements, and there is no physical mechanism for their correlation. We conclude that they are uncorrelated. We conclude that 1 and 2 are uncorrelated.
comment by Mitchell_Porter · 2011-01-26T07:38:10.031Z
(part 1)
The central point about the temperature example is that facts about which properties really exist and which are just combinations of others are mostly, if not entirely, epiphenomenal. For instance, we can store the momenta of the particles, or their masses and velocities. There are many invertible functions we could apply to phase space, some of which would keep the calculations simple and some of which would not, but it's very unclear, and for most purposes irrelevant, which is the real one.
So when you say that X is/isn't ontologically fundamental, you aren't doing so on the basis of evidence.
Temperature is an average. All individual information about the particles is lost, so you can't invert the mapping from exact microphysical state to thermodynamic state.
Most of the invertible functions you mention would reduce to one of a handful of non-redundant functions, obfuscated by redundant complexity.
Causality, if we have the true model of physics, is defined by counterfactuals. If we hold everything else constant and change X to Y, what happens? So if we have a definition of "everything else constant" wrt mental states, we're done. We certainly can construct one wrt temperature (linearly scale the velocities.)
Your model of physics has to have some microscopic or elementary non-counterfactual notion of causation for you to use it to calculate these complex macroscopic counterfactuals. Of course in the real world we have quantum mechanics, not the classical ideal gas we were discussing, and your notion of elementary causality in quantum mechanics will depend on your interpretation.
But I do insist there's a difference between an elementary, fundamental, microscopic causal relation and a complicated, fuzzy, macroscopic one. A fundamental causal connection, like the dependence of the infinitesimal time evolution of one basic field on the states of other basic fields, is the real thing. As with "existence", it can be hard to say what "causation" is. But whatever it is, and whether or not we can say something informative about its ontological character, if you're using a physical ontology, such fundamental causal relations are the place in your ontology where causality enters the picture and where it is directly instantiated.
Then we have composite causalities - dependencies among macroscopic circumstances, which follow logically from the fundamental causal model, and whose physical realization consists of a long chain of elementary causal connections. Elementary and composite causality do have something in common: in both cases, an initial condition A leads to a final condition B. But there is a difference, and we need some way to talk about it - the difference between the elementary situation, where A leads directly to B, and the composite situation, where A "causes" B because A leads directly to A' which leads directly to A'' ... and eventually this chain terminates in B.
Also - and this is germane to the earlier discussion about fuzzy properties and macroscopic states - in composite causality, A and B may be highly approximate descriptions; classes of states rather than individual states. Here it's even clearer that the relation between A and B is more a highly mediated logical implication than it is a matter of A causing B in the sense of "particle encounters force field causes change in particle's motion".
How does this pertain to consciousness? The standard neuro-materialist view of a mental state is that it's an aggregate of computational states in neurons, these computational states being, from a physical perspective, less than a sketch of the physical reality. The microscopic detail doesn't matter; all that matters is some gross property, like trans-membrane electrical potential, or something at an even higher level of physical organization.
I think I've argued two things so far. First, qualia and other features of consciousness aren't there in the physical ontology, so that's a problem. Second, a many-to-one mapping is not an identity relation, it's more suited to property dualism, so that's also a problem.
Now I'd add that the derived nature of macroscopic "causes" is also a problem, if you want to have the usual materialist ontology of mind and you also want to say that mental states are causes. And as with the first two problems, this third problem can potentially be cured in a theory of mind where consciousness resides in a structure made of ontologically fundamental properties and relations, rather than fuzzy, derived, approximate ones. This is because it's the fundamental properties which enter into the fundamental causal relations of a reductionist ontology.
In philosophy of mind, there's a "homunculus fallacy", where you explain (for example) the experience of seeing as due to a "homunculus" ("little human") in your brain, which is watching the sensory input from your eyes. This is held to be a fallacy that explains nothing and risks infinite regress. But something like this must actually be true; seeing is definitely real, and what you see directly is in your skull, even if it does resemble the world outside. So I posit the existence of what Dennett calls a "Cartesian theater", a place where the seeing actually happens and where consciousness is located; it's the end of the sensory causal chain and the beginning of the motor causal chain. And I further posit that, in current physical language, this place is a "quantum system", not just a classically distributed neural network; because this would allow me to avoid the problems of many-to-one mappings and of derived macroscopic causality. That way, the individual conscious mind can have genuine causal relations with other objects in the world (the simpler quantum systems that are its causal neighbors in the brain).
Anyway, the existence of a one-to-one mapping is a necessary but not a sufficient condition for a proposed identity statement to be plausible
What are the other conditions?
That's way too hard, so I'll just illustrate the original point: You can map a set of three donkeys onto a set of three dogs, one-to-one, but that doesn't let you deduce that a dog is a donkey.
comment by Mitchell_Porter · 2011-01-14T11:38:45.573Z
(part 2 of reply)
In other words, so long as we talk simply about computation, there is nothing at all to inherently make an AI insist that its "experience" can't be made of physical entities. It's just a matter of ontological presuppositions.
Is there anything to inherently prevent it from insisting that? Should we accept our ontological presuppositions at face value?
See next section.
I would be able to tell the difference between an ontology in which they exist, and an ontology in which they don't.
No you wouldn't. People can't tell the difference between ontologies any more than math changes if you print its theorems in a different color. People can tell the difference between different mathematical laws of physics, or different arrangements of stuff within those laws. What you notice is that you have a specific class of gensyms that can't have relations of reduction for other symbols, or something else computational. Facts about ontology are totally orthogonal to facts about things that influence what words you type.
We are talking at cross-purposes here. I am talking about an ontology which is presented explicitly to my conscious understanding. You seem to be talking about ontologies at the level of code - whatever that corresponds to, in a human being.
If someone tells me that the universe is made of nothing but love, and I observe that hate exists and that this falsifies their theory, then I've made a judgement about an ontology both at a logical and an empirical level. That's what I was talking about, when I said that if you swapped [red] and [green], I couldn't detect the swap, but I'd still know empirically that color is real, and I'd still be able to make logical judgements about whether an ontology (like current physical ontology) contains such an entity.
Your sentence about gensyms is interesting as a proposition about the computational side of consciousness, but...
A neuron is a glob of trillions of atoms doing inconceivably many things at once. You're focusing on a few of the simple differential sub-perceptions which make up the experience of looking at that image, associating them in your mind with certain gross changes of state in that glob of atoms, and proposing that the experience is identical to a set of several such simultaneous changes occurring in a few neurons. In doing so, you're neglecting both the bulk of the physical events occurring elsewhere in the neurons, and the fundamental dissimilarity between "staring at a few homogeneous patches of color" and "billions of ions cascading through a membrane".
My consciousness is a computation based mainly or entirely on regularities the size of a single neuron or bigger, much like the browser I'm typing in is based on regularities the size of a transistor. I wouldn't expect to notice if my images were, really, fundamentally, completely different. I wouldn't expect to notice if something physical happened - the number of ions was cut by a factor of a million and made the opposite charge - but the functions from impulses to impulses computed by neurons were the same.
... if gensyms only exist on that scale, and if changes like those which you describe make no difference to experience, then you ought to be a dualist, because clearly the experience is not identical to the physical in this scenario. It is instead correlated with certain physical properties at the neuronal scale.
It's more like the difference between night and day. It is possible to attain a higher perspective which unifies them, but you don't get there by saying that day is just night by another name.
Uniform color and edgeness are as different as night and day.
They are, but I was actually talking about the difference between colorness/edgeness and neuronness.
comment by TheOtherDave · 2011-01-09T06:33:30.866Z
A few thoughts in response:
I agree with you that if my experience of red can't be constructed of matter, then my understanding of a sentence also can't be. And I agree with you that we don't have a reliable account of how to construct such things out of matter, and without such an account we can't rule out the possibility that, as you suggest, such an account is simply not possible. I agree with you that this objection to physicalism has been around for a long time.
I agree with you that insofar as we understand vitalism to be an account of how particular arrangements of matter move around, it is a different sort of thing from the kind of "sentientism" you are talking about. That said, I think that's a misrepresentation of historical vitalism; I think when the vitalists talked about elan vital being the difference between living and unliving matter, they were also attributing sentience (though not sapience) to elan vital, as well as simple animation.
I don't equate the experience of red with the tendency to output the word "red" when queried, both in the sense that it's easy for me to imagine being unable to generate that output while continuing to experience red, and in the sense that it's easy for me to imagine a system that outputs the word "red" when queried without having an experience of red. Lexicalization is neither necessary nor sufficient for experience.
I don't equate the experience of red with categorization... it is easy to imagine categorization without experience. It's harder to imagine experience without categorization, though. Categorization might be necessary, but it certainly isn't sufficient, for experience.
Like you, I can't come up with a physical account of sentience. I have little faith in the power of my imagination, though. Put another way: it isn't easy for me to see what one can and can't make out of particles. But I agree with you that any such account would be surprising, and that there is a phenomenon there to explain. So I think I fall somewhere in between your two classes of people who are a waste of time to talk to: I get that there's a problem, but it isn't obvious to me that the properties that comprise what it feels like to be a bat must be ontologically basic and nonphysical. Which I think still means I'm wasting your time. (I did warn you in the grandparent comment that you won't find my answer interesting.)
If it turns out that a particular sensation is perfectly correlated with the presence of a particular physical structure, and that disrupting that structure always triggers a disruption of the sensation, and that disrupting the sensation always triggers a disruption of the structure... well, at that point, I'm pretty reluctant to posit a nonphysical sensation. Sure, it might be there, but if I posit it I need to account for why the sensation is so tightly synchronized with the physical structure, and it's not at all clear that that task is any simpler than identifying one with the other, counterintuitive as that may be.
At the other extreme, if the nonphysical structure makes a difference, demonstrating that difference would make me inclined to posit a nonphysical sensation. For example, if we can transmit sensation without transmitting any physical signal, I'd be strongly inclined to posit a nonphysical structure underlying the sensation. Looking for such a demonstrable difference might be a useful way to start getting somewhere.
↑ comment by Mitchell_Porter · 2011-01-10T11:08:02.096Z · LW(p) · GW(p)
Perhaps we are closer to mutual understanding than might have been imagined, then. A crucial point: I wouldn't talk about the mind as something "nonphysical". That's why I said that the problem is with our current physical ontology. The problem is not that we have a model of the world in which events outside our heads are causally connected to events inside our heads via a chain of intermediate events. The problem is that when we try to interpret physics ontologically (and not just operationally), the available frameworks are too sparse and pallid (those are metaphors of course) to produce anything like actual moment-to-moment experience. The dance of particles can produce something isomorphic to sensation and thought, but not identical. Therefore, what we might think of as a dance of particles actually needs to be thought of in some other way.
So I'm actually very close in spirit to the reductionist who wants to think of their experience in terms of neurons firing and so forth, except I say it's got to be the other way around. Taken literally, that would mean that we need to learn to think of what we now call neurons firing, as being fundamentally - this - moment-to-moment experience, as is happening to you right now. Except that I don't believe the physical nature of whole neurons plausibly allows such an ontological reinterpretation. If consciousness really is based on mesoscopic-level informational states in neurons, then I'd favor property dualism rather than the reverse monism I just advocated. But I'm going for the existence of a Cartesian theater somewhere in the brain whose physical implementation is based on exact quantum states rather than collective coarse-grained classical ones, quantum states which in our current understanding would look more algebraic than geometric. And the succession of abstract algebraic state transitions in that Cartesian theater is the deracinated mathematical description of what, in reality, is the flow of conscious experience.
If that is the true interior reality of one quantum island in the causal network of the world, it might be anticipated that every little causal nexus has its own inside too - its own subjectivity. The non-geometric, localized, algebraic side of physics would turn out to actually be a description of the local succession of conscious states, and the spatial, geometric aspect of physics would in fact describe the external causal interactions between these islands of consciousness. Except I suspect that the term consciousness is best reserved for a very rare and highly involuted type of state, and that most things count as islands of "being" but not as islands of "experiencing" (at least, not as islands of reflective experiencing).
I should also distinguish this philosophy from the sort which sees mind wherever there is distributed computation - so that the hierarchical structure of classical interaction in the world gets interpreted as a set of minds made of minds made of minds. I would say that the ontological glue of individual consciousness is not causal interaction - it's something much tighter. The dependence of elements of a state of consciousness on the whole state of consciousness is more like the way that the face of a cube is part of the cube, though even that analogy is nowhere near strong enough, because the face of a cube is a square and a square can have independent existence, though when it's independent it's no longer a face. However we end up expressing it, the world is fundamentally made of these logical ontological unities, most of which are very simple and correspond to something like particles, and a few of which have become highly complex - with waking states of consciousness being extremely complex examples of these - and all of these entities interact causally and quasi-locally. These interactions bind them into systems and into systems of systems, but systems themselves are not conscious, because ontologically they are multiplicities, and consciousness is always a property of one of those fundamental physical unities whose binding principle is more than just causal association.
An ontology of physics like that is one where the problem of consciousness might be solved in a nondualistic way. But its viability does seem to require that something like quantum entanglement is found to be relevant to conscious cognition. As I said, if that isn't borne out, I'll probably fall back on some form of property dualism, in which there's a many-to-one mapping between big physical states (like ion concentrations on opposite sides of axonal membranes) and distinct possible states of consciousness. But physical neuroscience has quite a way to go yet, so I'm very far from giving up on the monistic quantum theory of mind.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-01-10T15:12:49.846Z · LW(p) · GW(p)
So, getting back to my original question about what your alternate ontology has to offer...
If I'm understanding you (which is far from clear), while you are mostly concerned with being ontologically correct rather than operationally useful, you do make a falsifiable neurobiological prediction having something (which I didn't follow) to do with quantum entanglement.
Cool. I approve of falsifiable predictions; they are a useful thing that a way of thinking about the world can offer.
Anything else?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2011-01-14T11:43:02.116Z · LW(p) · GW(p)
I think you ought to be more interested in what this shows about the severity of the problem of consciousness. See my remarks to William Sawin, about color and about many-to-one mappings, and how they lead to a choice between this peculiar quantum monism (which is indeed difficult to understand at first encounter), and property dualism. While I like my own ideas (about quantum monads and so forth), the difficulties associated with the usual approaches to consciousness matter in their own right.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-01-14T16:43:24.932Z · LW(p) · GW(p)
(nods) I understand that you do; I have from the beginning of this exchange been trying to move forward from that bald assertion into a clarification of why I ought to be... that is, what benefits there are to be gained from channeling my interest as you recommend.
Put another way: let us suppose you're right that there are aspects of consciousness (e.g., subjective experience/qualia) that cannot be adequately explained by mainstream ontology.
Suppose further that tomorrow we encounter an entity (an isolated group of geniuses working productively on the problem, or an alien civilization with a different ontological tradition, or spirit beings from another dimension, or Omega, or whatever) that has worked out an ontology that does adequately explain it, using quantum monads or something else, to roughly the same level of refinement and practical implementation that we have worked out our own.
What kinds of things would you expect that entity to be capable of that we are incapable of due to the (posited) inability of our ontology to adequately account for subjective experience?
Or, to ask the question a different way: suppose we encounter an entity that claims to have worked out such an ontology, but won't show it to us. What properties ought we look for in that entity that provide evidence that their claim is legitimate?
The reason I ask is that you seem to concede that behavior can be entirely accounted for without reference to the missing ontological elements. (I may have misunderstood that, in which case I would appreciate clarification.) So I should not expect them to have a superior understanding of behavior that would manifest in various detectable ways. Nor should I expect them to have a superior understanding of physics.
I'm not really sure what I should expect them to have a superior understanding of, though, or what capabilities I should expect such an understanding to entail. Surely there ought to be something, if this branch of knowledge is, as you claim, worth pursuing.
Thus far, I've gotten that they ought to be able to make predictions about neurobiological structures that relate to certain kinds of quantum structures. I'm wondering what else.
Because if it's just about being right about ontology for the sake of being right about ontology when it entails no consequences, then I simply disagree with you that I ought to be more interested.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2011-01-18T09:15:04.312Z · LW(p) · GW(p)
What kinds of things would you expect that entity to be capable of that we are incapable of due to the (posited) inability of our ontology to adequately account for subjective experience?
I don't consider this inability to merely be posited. It's a matter of understanding what you can and can't do with the ontological ingredients provided. You have particles, you have non-positional properties of individual particles, you have the motions of particles, you have changes in the non-positional properties. You have causal relations. You have sets of these entities; you have causal chains built from them; you have higher-order quantitative and logical facts deriving from the elementary facts about configuration and causal relationships. That's basically all you have to work with. An ontology of fields, dynamical geometry, and probabilities adds a few twists to this picture, but nothing that changes it fundamentally. So I'm saying there is nothing in this ontology, either fundamental or composite (in a broad sense of composite), which can be identified with - not just correlated with, but identified with - consciousness and its elements. And color offers the clearest and bluntest proof of this.
We can keep going over this fact from different angles, but eventually it comes down to seeing that one thing is indeed different from another. 1 is not 0; redness is not any specific thing that can be found in the ontology of particles. It reduces to pairwise comparative judgments in which ontologically dissimilar basic entities are perceived to indeed be ontologically dissimilar.
The reason I ask is that you seem to concede that behavior can be entirely accounted for without reference to the missing ontological elements.
What are we trying to explain, ultimately? What even gives us something to be explained? It's conscious experience again; the appearance of a world. Our physical theories describe the behavior of a world which is structurally similar to the world of appearance, but which does not have all its properties. We are happy to say that the world of appearance is just causally connected, in a regularity-preserving way, to an external world, and that these problem properties only exist in the "world of appearance". That might permit us to regard the "external world" as explained by our physics. But then we have this thing, "world of appearance", where all the problems remain, and which we are nonetheless trying to assimilate to physics (via neuroscience). However, we know (if we care to think things through) that this assimilation is not possible with the current physical ontology.
So the claim that we can describe the behavior of things is not quite as powerful as it seems, because it turns out that the things we are describing can't actually be the "things" of direct experience, the appearances themselves. We can get isomorphism here, but not identity. It's an ontological problem: the things of physical theory need to be reconceived so that some of them can be identified with the things of consciousness, the appearances.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-01-18T14:28:13.598Z · LW(p) · GW(p)
I understand that you aren't "merely" positing the inability of a set of particles, positions and energy-states to be an experience.
I am.
I also understand that you consider this a foolish insistence on my part on rejecting the obvious facts of experience. As I've said several times now, repeatedly belaboring that point isn't going to progress this discussion further.
↑ comment by Mass_Driver · 2011-01-08T09:27:21.606Z · LW(p) · GW(p)
I find this argument irresistibly compelling, and would appreciate a post or a private message letting me know what your answer is. I don't have one; it's all I can do here to notice that I am confused.
↑ comment by AlephNeil · 2010-05-15T04:07:40.553Z · LW(p) · GW(p)
I think you need to be taken outside and shot...
...
...j/k.
It's just that over recent years I've spent quite a long time arguing with people educated principally in philosophy, who hate Dennett and think his version of materialism is absurd (or at least that it's manifestly wrong), and think it's absolutely essential to go around saying things like 'all we know about are correlations between body and mind'.
It's sort-of interesting/refreshing for me to arrive here, with a bunch of people who are (I assume) educated principally in computer science (with perhaps a few mathies and physicists), who are almost unanimously Dennett fans, think that functionalism is just blindingly obvious, that 'zombies' are blindingly obviously impossible, that it's blindingly obvious that the 'Systems Reply' is correct, that anything we build capable of passing the (full) Turing Test would have to be conscious etc.
The ones who don't 'get it' - that at the core of Dennett's view there's the difficult-to-swallow idea that there isn't a 'fact of the matter' as to whether a being is conscious and if so what it's conscious of - can at least fall back on a Greg Egan-style view of consciousness which is identical insofar as it agrees that the issues above are 'blindingly obvious'. (That's the other thing: the people here have actually read Greg Egan - woohoo.)
I can see you have more in common with the philosopher-types than the locals. And actually, in your interpretation of Dennett I think there's a mistake - one I've seen elsewhere:
You think that in abolishing the 'Cartesian theater' he is ipso facto abolishing phenomenal awareness, but this simply doesn't follow. What he's abolishing is the idea that all of the 'bits' of a person's awareness are present 'together' in a single sharply-defined 'moment', such that there are well-defined answers to questions like "am I seeing a moving dot or a static one?" which would resolve the "Orwellian/Stalinesque" dilemma.
Even after the Cartesian theater is abolished, you can still be a dualist as long as you're prepared to give ground on things like 'the unity of consciousness', and admit that the various parts of the mindscape are slightly removed from each other - not as far removed as the mind of a different person altogether, or even as far as the two hemiminds of a split-brain patient, but certainly not bundled together in a brilliant 'point' of 'inner light'.
Replies from: Mitchell_Porter, Blueberry↑ comment by Mitchell_Porter · 2010-05-16T09:41:39.800Z · LW(p) · GW(p)
I think you need to be taken outside and shot...
I'd just come back as a zombie.
the difficult-to-swallow idea that there isn't a 'fact of the matter' as to whether a being is conscious and if so what it's conscious of
That sums it up well. Next up, let's consider other startling possibilities, such as: there isn't a fact of the matter as to whether you're reading this sentence, there isn't a fact of the matter as to whether this planet exists, there isn't a fact of the matter as to whether there is a fact of the matter as to whether a being is conscious...
Replies from: AlephNeil↑ comment by AlephNeil · 2010-05-16T10:38:14.603Z · LW(p) · GW(p)
Yeah but come on... you always-a-fact-of-the-matter-ists have some startling things to think about too, like The Exact Moment When You First Became Conscious, and the Infinitely Precise Line one can draw across the phylogenetic tree demarcating species whose members are (or may be) conscious and those which never are.
(Afterthought: Or are you some kind of panpsychist? Then your startling possibilities incude the minds of rocks...)
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-05-16T11:10:12.394Z · LW(p) · GW(p)
you always-a-fact-of-the-matter-ists have some startling things to think about too, like The Exact Moment When You First Became Conscious, and the Infinitely Precise Line one can draw across the phylogenetic tree demarcating species whose members are (or may be) conscious and those which never are
See, it's not so hard! You just have to take the idea seriously, and stick with it. You might even have a talent for this. And here I was thinking that my labor here was in vain.
↑ comment by Blueberry · 2010-05-15T05:36:21.441Z · LW(p) · GW(p)
that anything we build capable of passing the (full) Turing Test would have to be conscious
I believe Eliezer doesn't agree with that last one, and has talked about building an AI who isn't conscious.
Also, consider the following hypothetical: I get really drunk and/or take Ambien and black out at 2 am. I have no conscious experience or memory of the time between 2 am and 3 am, but during that time you have a (loud and drunken) conversation with me. Or maybe in my drunken state I sit at my computer and manage to instant message without being conscious of it, and the person at the other end is convinced I'm human and not a computer program. Counterexample?
Replies from: AlephNeil↑ comment by AlephNeil · 2010-05-15T06:31:41.620Z · LW(p) · GW(p)
Well, I think we can all agree that it's possible for a non-conscious person (or program or whatever) to be mistaken for a conscious being.
However, there are several objections I can make to this scenario being considered a counterexample:
(1) How do you know you're not conscious? Just because you don't remember it the next day doesn't mean you don't have any awareness at the time.
(2) In the Turing test the judge is supposed to be 'on the look-out' for which of its two subjects seems less able to respond adequately to their questions. And one of the subjects is presumed to be a healthy, sober human. So unless you think the judge would be unable to distinguish a drunken, unconscious conversation from a normal, sober one, you would presumably fail the Turing test.
↑ comment by pjeby · 2010-05-15T04:24:51.676Z · LW(p) · GW(p)
Quite apart from the peculiar difficulty involved in identifying complex subjective states like "going diving in the Caribbean on your 25th birthday" with the electrical state of a captive neuronal porridge, the basic raw ingredients of subjective experience, like color qualia, simply aren't there in an arrangement of pointlike objects in space.
Suppose I write a computer program (such as Second Life or World of Warcraft) that simulates the properties of an imaginary reality. Have I now created new "subjective secondary properties"? After all, in the real world, objects do not have owners and copyability, nor levels of mana or hit points. Is this "duality", then?
What about a book that describes an imaginary world? Is it duality because there are only words on the page, and these have no physical correlate to the things described?
The reasoning that you're using is an application of the mind projection fallacy. Human brains have built-in pattern recognition for seeing things as "minds", and having volition -- and this notion is itself an example of an imaginary property projected onto reality. The projection doesn't make the projected quality exist in outside reality; it merely exists in the computational model physically represented in the mind that makes the projection.
tl;dr version: imaginary attributions in a model do not create duality, or else computer programs would have qualia equal to those of humans. Since no mysterious duality is required to create computer programs, we need not hypothesize that it is required to create human subjective experience.
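The Second Life / World of Warcraft point above can be made concrete with a toy sketch (the object and field names are invented for illustration): properties like "owner" or "hit points" exist only as entries in the simulation's data model, not as physical properties of the hardware running it.

```python
# A simulated object with properties that have no physical correlate:
# nothing in the real world has an "owner" field or "hit points".
game_object = {"name": "sword", "owner": "player_1", "hit_points": 40}

# Reassigning ownership changes only bits in the model; no physical
# "ownership" changes hands anywhere in the hardware.
game_object["owner"] = "player_2"
assert game_object["owner"] == "player_2"
```

The attribution is real within the model and nowhere else, which is the sense in which no new "subjective secondary properties" have been created.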
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-05-16T06:31:38.881Z · LW(p) · GW(p)
Human brains have built-in pattern recognition for seeing things as "minds", and having volition -- and this notion is itself an example of an imaginary property projected onto reality. The projection doesn't make the projected quality exist in outside reality; it merely exists in the computational model physically represented in the mind that makes the projection.
(My emphases.)
You seem to be contradicting yourself there. The mind only exists in the mind?
Replies from: pjeby↑ comment by pjeby · 2010-05-16T17:36:43.734Z · LW(p) · GW(p)
The mind only exists in the mind?
The intuitive notion of "mind" exists only in the physical manifestation of the mind.
Or to put it (perhaps) more clearly: the only reason we think dualism exists is because our (non-dual) brains tell us so. Like beauty, it's in the eye of the beholder.
Our judgment of whether something is intelligent or sentient is based on an opaque weighing of various sensory criteria that tell us whether something is likely to have intentions of its own. We start out as children thinking that almost everything has this intentional quality, and gradually learn the things that don't.
It's as if brains have a built-in (at or near birth) "mind detector" circuit that triggers for some things, and not others, and which can be trained to cease seeing certain things as minds.
What it doesn't do, is ever fire for something whose motions and innards are fully understood as mechanical - so it doesn't matter how sophisticated AI ever gets, there will still be people who will insist it's neither conscious nor intelligent, simply because their built-in "mind detector" doesn't fire when they look at it.
And that's what people are doing when they claim special status for consciousness and qualia: elevating their genetically-biased intuition into the realm of physical law, not unlike people who insist there must be a soul that lives after death... because their "mind detector" refuses to cease firing when someone dies.
In short, this intuitive notion of mind gets in the way of developing actual artificial intelligence, and it leads to enormous wastes of time in discussions of dualism. Without the mind detector -- or if the operation of our mind detectors were fully transparent to the rest of our mental processes -- nobody would waste much time on the idea that there's anything non-physical. We'd only get as far as realizing that if there were non-physical things, we'd have no way to know about them.
However, since we do have an opaque mind-detector, that's capable of firing for the wind and the rain and for memories of dead people as easily as it does for live animals and people in front of us, we can get the feeling that we are having physical experiences of the non-physical... when that's a blatantly obvious contradiction in terms.
It's only by elevating your feelings and intuitions to the level of fact (i.e. abandoning science), that you can continue to insist that non-physical things exist in the physical world. It's pointing to reality and saying, I feel X when I look at it, therefore it is X.
(A bit like the religious fundamentalists who say that they feel icky when they see gays, therefore homosexuality is disgusting.)
Replies from: RobinZ↑ comment by RobinZ · 2010-05-16T17:47:01.863Z · LW(p) · GW(p)
(A bit like the religious fundamentalists who say that they feel icky when they see gays, therefore homosexuality is disgusting.)
I would have said, "A bit like philosophers of free will who say that they feel like they could have done something else, and therefore determinism must be false". (:
↑ comment by Blueberry · 2010-05-15T05:26:26.888Z · LW(p) · GW(p)
I upvoted you back to 0 because your comment was thoughtful and well-written, even though I disagree.
Yes, I'm in Dennett's camp. Aside from what other commenters have said, think about it like this:
I have a novel here. It's made of the letters A-Z as well as punctuation, arranged in a complicated pattern. But, somehow, the novel also talks about a plot and characters and a setting and so forth, even though all there is to the novel is letters and punctuation. The plot and characters don't have some magical separate state of existence: they exist because they're built out of the letters.
Same with conscious experience. Right now I'm eating goat cheese and crackers. This experience arises out of the neurons in my brain, and it's intimately tied up with them and the patterns they make. You can't separate it from my past experience and associations and memories (which is Dennett's point about qualia). Of course the experience exists: it's just built out of and associated with a complex pattern of neuron firings in my brain. The experience is not the same as the series of neurons: that would be a category error, just like a character in a book is not the same as the series of letters that make up his description. No property dualism needed. Of course it's difficult to explain this association, because we don't know enough about brain chemistry.
Replies from: AlephNeil↑ comment by AlephNeil · 2010-05-15T05:51:35.774Z · LW(p) · GW(p)
I upvoted you back to 0 because your comment was thoughtful and well-written, even though I disagree.
Me too.
Yes, I'm in Dennett's camp. Aside from what other commenters have said, think about it like this:
Me too.
I think it's a good illustration, but I can give you 'the standard reply' from the anti-materialist: As a physical object, the novel is just a hunk of matter with funny shaped ink blotches on it. The 'plot' and 'characters' you speak of have a mental character to them: they don't exist outside of some mind apprehending the novel, a mind which actively 'constructs' these things rather than passively 'finding them' somewhere in the matter of the book.
So book --> plot is not after all an analogy that helps us understand how a mind can reduce to a pattern of physical matter, because "plot" already presupposes the mind, so any "reduction" would presuppose that the mind is itself reducible.
Yeah, I know this is all wrong - but I've learned to make myself "flip" between a materialist and anti-materialist view.
Replies from: Blueberry↑ comment by Blueberry · 2010-05-15T06:00:42.347Z · LW(p) · GW(p)
Hmm. Maybe a better analogy is three stones in a field making a triangle. The triangle exists and is formed by the stones, but this doesn't require dualism, just an understanding that relationships and structures exist and are built out of smaller parts. (I know, that's not exact either.)
Replies from: AlephNeil, Mitchell_Porter↑ comment by Mitchell_Porter · 2010-05-16T09:20:19.031Z · LW(p) · GW(p)
Earlier you wrote
Of course it's difficult to explain this association, because we don't know enough about brain chemistry.
The ontological ingredients, and the ways of combining them, which physics gives you are quite limited. You can make shapes (like your triangle), you can count objects, you can consider their motions and other changes of state, you can average quantitative properties, you can consider causal dependence and counterfactual situations. There might be a few other things you can do. But if you are going to have a mind-brain identity theory, and not property dualism, then something built solely using methods like the ones I just listed has to be the experience. It can't just be "associated with" the experience - that would be dualism.
Color is usually mentioned at this point, because it is pretty obvious that no amount of piling up particles, averaging their properties, and engaging in causal and counterfactual analysis, is going to give you redness where there was none, in the simple way that putting three stones in a field really does give you a triangle. If someone proposes that the experience of a certain shade of red is some complicated but purely physical predicate, object, or condition, then from the perspective of orthodox physical ontology, they are proposing a form of strong emergence. (Weak emergence is like the triangle.) And strong emergence is property dualism - it introduces new ontological ingredients.
But although color is the standard counterargument - because of its vividness - any sensation, any thought, anything involving a self, anything like the "experience of an object", is just as much unlike anything that can be made from physics in a weakly emergent way. I challenge you to find a single aspect of your experience which you can unproblematically identify with (and not just associate with) some imagined neurochemical correlate. In every case, you will be taking some subjectively manifest reality, and then saying to yourself, "that is really just neurons doing something"; and in every case, physics alone gives you absolutely no reason to think that neurons doing that has any subjective side to it.
If you don't want to be a dualist, you are going to have to take that subjectively manifest reality, admit that it exists somewhere in exactly that form, and somehow rebuild physics around it. But that is really hard to do.
Replies from: Strange7↑ comment by Strange7 · 2011-01-14T12:26:13.099Z · LW(p) · GW(p)
Three rocks in a field aren't a triangle until there's a brain with a concept of 'triangle' that identifies them as such. Photons of a particular wavelength aren't red until there's a brain with a concept of 'red' that identifies them as such. A creature isn't conscious until there's a brain with a concept of 'consciousness' that identifies it as such.
Third one's tricky because of the self-reference, but that doesn't make it an exception to the general rule. Concepts are predictive models, a model can't make predictions unless it's running on a computer, brains are the one kind of computer that can be mass produced by unskilled labor. Qualia, to the extent that they can be coherently defined at all, are a matter of software. Software can be translated between hardware platforms, but cannot exist in any useful form in the absence of hardware.
And, for the record, the math necessary to fully define a rock is a hell of a lot more complicated than "1+1." Don't dismiss it until you've properly studied it.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2011-01-18T09:26:24.737Z · LW(p) · GW(p)
Third one's tricky because of the self-reference
It's not just tricky, it's self-contradictory. The mind exists only in the mind, you say?
Concepts are predictive models ... brains are [computers] ... Qualia ... are a matter of software
If you really want to try reducing all of this to physics, I'd recommend that you first deliberately try to dispense with terms which have a technological or user-semantic connotation, because no such thing exists in physical ontology. "Computer" and "software" are being used as metaphors here, and a "model" is an intentional concept. Computer science has the concept of a "state machine", which is a little better from a physical standpoint, because it doesn't attach any semantics to the "states".
OK, fine, you can do such a translation, and you get e.g. qualia are equivalence classes of state machines. At least your claim has now truly been expressed in terms that do not implicitly exceed physical ontology. But it's still a wrong claim, because it says nothing about the properties that really define qualia, like the "" that we've been talking about in another thread.
the math necessary to fully define a rock is a hell of a lot more complicated than "1+1." Don't dismiss it until you've properly studied it.
I don't study rocks, but I study physics every day. I know the mathematics is complicated. What I'm saying is that physics is not mathematics.
Replies from: topynate, Strange7↑ comment by topynate · 2011-01-18T22:41:55.849Z · LW(p) · GW(p)
it says nothing about the properties that really define qualia, like the "" that we've been talking about in another thread
So we can set up state machines that behave like people talking about qualia the way you do, and which do so because they have the same internal causal structure as people. Yet that causal structure doesn't have anything to do with the referent of 'redness'. It looks like your obvious premise that redness isn't reducible implies epiphenomenalism. Which is absurd, obviously.
Edit: Wow, you (nearly) bite the bullet in this comment! You say:
Unless one is willing to explicitly advocate epiphenomenalism, then mental states must be regarded as causes. But if they are just a shorthand for complicated physical details, like temperature, then they are not causes of anything.
I claim that mental states can be regarded as causes, that they are indeed a shorthand for immensely complicated physical details (and significantly less but still quite a lot complicated computational details), and claim further that they cause a lot of things. For instance, they're a cause of this comment. I claim that the word 'cause' can apply to more than relationships between fundamental particles: for instance, an increase in the central bank interest rate causes a fall in inflation.
So, which do you disagree with: that interest rates are causal influences on inflation, or that interest rates and inflation are shorthand for complicated physical details?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2011-01-26T08:02:20.422Z · LW(p) · GW(p)
So we can set up state machines that behave like people talking about qualia the way you do, and which do so because they have the same internal causal structure as people. Yet that causal structure doesn't have anything to do with the referent of 'redness'. It looks like your obvious premise that redness isn't reducible implies epiphenomenalism. Which is absurd, obviously.
No, it just means that plays a causal role in us, which would be played by something else in a simulation of us.
There's nothing paradoxical about the idea of an unconscious simulation of consciousness. It might be an ominous or a disconcerting idea, but there's no contradiction.
I claim that mental states can be regarded as causes, that they are indeed a shorthand for immensely complicated physical details (and significantly less but still quite a lot complicated computational details), and claim further that they cause a lot of things. For instance, they're a cause of this comment. I claim that the word 'cause' can apply to more than relationships between fundamental particles: for instance, an increase in the central bank interest rate causes a fall in inflation.
So, which do you disagree with: that interest rates are causal influences on inflation, or that interest rates and inflation are shorthand for complicated physical details?
See what I just said to William Sawin about fundamental versus derived causality. These are derived causal relations; really, they are regularities which follow indirectly from large numbers of genuine causal relations. My eccentricity lies in proposing a model where mental states can be fundamental causes and not just derived causes, because the conscious mind is a single fundamental entity - a complex one, that in current language we might call an entangled quantum system in an algebraically very distinctive state, but still a single entity, in a way that a pile of unentangled atoms would not be.
Being a single entity means that it can enter directly into whatever fundamental causal relations are responsible for physical dynamics. Being that entity, from the inside, means having the sensations, thoughts, and desires that you do have; described mathematically, that will mean that you are an entity in a particular complicated, formally specified state; and physically, the immediate interactions of that entity would be with neighboring parts of the brain. These interactions cause the qualia, and they convey the "will".
That may sound strange, but even if you believe in a mind that is material but non-fundamental, it still has to work like that or else it is causally irrelevant. So when you judge the idea, remember to check whether you're rejecting it for weirdness that your own beliefs already implicitly carry.
Replies from: Strange7↑ comment by Strange7 · 2011-01-26T21:37:08.935Z · LW(p) · GW(p)
My eccentricity lies in proposing a model where mental states can be fundamental causes and not just derived causes, because the conscious mind is a single fundamental entity - a complex one, that in current language we might call an entangled quantum system in an algebraically very distinctive state, but still a single entity, in a way that a pile of unentangled atoms would not be.
So you're taking the existing causal graph, drawing a box around all the interactions that happen inside a brain, and saying that everything inside the box counts as one thing.
That's not simplification, it's just bad accountancy.
↑ comment by Strange7 · 2011-01-18T21:47:05.223Z · LW(p) · GW(p)
The mind exists only in the mind, you say?
Where else would it be?
I'm saying that a brain is an environment where ideas can do interesting things (like reproducing themselves, mutating, splitting and recombining) comparable to the interesting things that started happening a very long time ago between amino acids and phospholipid membranes and assorted other organic chemicals which eventually resulted in the formation of brains. Any Turing-complete computer is also a sort of environment for ideas.
An idea outside an environment capable of supporting it does not do interesting things. It might be dormant, like a virus or bacterial spore, and colonize any less-hostile environment to which it's introduced. It might not. As yet, the only reliable way to distinguish between a dormant idea and a different arrangement of the same parts which does not constitute a dormant idea is to find an environment in which it will do interesting things.
For example, if you find a piece of baked clay with some scratch-marks in it, and want to know if they're cuneiform or just random scratches, you could show it to an archaeologist. The archaeologist looks at the tablet and compares it to prior knowledge about cuneiform - that is to say, transfers information about shape and coloration into her brain via the optic nerve and, once inside, drops it into the informational equivalent of a dish of agar. If anything interesting pops up, it's an idea. If not, either it's just noise, or it's an idea that the archaeologist can't figure out. There's no way to definitively prove the absence of potential ideas in a given information-bearing substrate.
If these disembodied qualia-properties don't help you make any actionable predictions beyond what physicalism could do, and their presence is unfalsifiable, I can't see any point to this debate. Is it a social-signaling contest of some sort?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2011-01-26T08:30:04.990Z · LW(p) · GW(p)
Let's go back to your original statement:
A creature isn't conscious until there's a brain with a concept of 'consciousness' that identifies it as such.
OK, so according to you, we have concepts existing before and independently of consciousness, and we also have that consciousness is not a property that is objectively present (or else there'd be no need to appeal to the conceptual judgement of a brain, as a necessary cause of consciousness's existence). Both of these have to be true if you are to avoid circularity.
The second one already falsifies your account of consciousness. The difference between being conscious and not being conscious is not a matter of convention. It's an internal fact about you which is not affected by whether I am around to express opinions.
It sounds like you want the consciousness of a brain to depend on the conceptual judgements of that same brain, which is at least less abjectly dependent on the epistemology of outsiders. But it's still false. If you are conscious, you are conscious regardless of whatever opinions or concepts you have. Your conceptual capacities limit your possible conscious experience, in the sense that you can't consciously identify something as an X if you don't have the concept X, but whether or not you're conscious doesn't depend on how you are using (or misusing) your conceptual faculties at any time.
Just to clarify, by consciousness I mean awareness in all forms, not just self-awareness. What I said still applies to self-awareness as well as to awareness in general, but I thought I would make explicit that I'm not just talking about the sense of being a self. Even raw, self-oblivious sensory experience is a form of consciousness.
If these disembodied qualia-properties don't help you make any actionable predictions beyond what physicalism could do, and their presence is unfalsifiable, I can't see any point to this debate. Is it a social-signaling contest of some sort?
Maybe my very latest comments will clear things up a little. The immediate problem with physicalism is that reality contains qualia and physicalism doesn't. In a reformed physicalism that does contain qualia, they would have causal power.
Replies from: Strange7↑ comment by Strange7 · 2011-01-26T21:28:06.115Z · LW(p) · GW(p)
Just to clarify, by consciousness I mean awareness in all forms, not just self-awareness. What I said still applies to self-awareness as well as to awareness in general, but I thought I would make explicit that I'm not just talking about the sense of being a self.
Ah, so we're arguing over definitions.
The immediate problem with physicalism is that reality contains qualia and physicalism doesn't. In a reformed physicalism that does contain qualia, they would have causal power.
Let's say you take an organism capable of receiving and interpreting information in the form of light, such as e.g. a ferret with working eyes and a visual cortex. Duplicate it with arbitrary precision, keep one of the copies in a totally lightless box for a few minutes and shine a dazzling but nondamaging spotlight on the other for the same period of time. Then open the box, shut off the spotlight, and show them both a picture.
The ferret from the box would see blindingly intense light, gradually fading in to the picture, which would seem bright and vivid. The ferret from the spotlight would see near-total darkness, gradually fading in to the picture, which would seem dull and blurry. Same picture, very different subjective experience, but it's all the result of physiological (mostly neurological) processes that can be adequately explained by physicalism.
Does the theory of qualia make independently-verifiable predictions that physicalism cannot? Or, if the predictions are the same, is it somehow simpler to describe mathematically? In the absence of either of those conditions, I am forced to consider the theory of qualia needlessly complex.
↑ comment by Jack · 2010-05-15T11:09:17.622Z · LW(p) · GW(p)
These two worlds may be correlated, that is being demonstrated every day by neuroscience, but they simply cannot be identified under the physical ontology we have.
What exactly do you take the purpose of an ontology to be? If you have a scientific theory whose predictions hit the limit of accuracy for predicted experience why do you need anything in your ontology beyond the bound variables of the theory?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-05-16T09:32:10.802Z · LW(p) · GW(p)
An ontology is a theory about what's there. The attributes of experience itself, like color, meaning, and even time, have been swept under a carpet variously labeled "mind", "consciousness", or "appearance", while the interior decorators from Hard Science Inc. (formerly trading as the Natural Philosophy Company) did their work. We have lots of streamlined futuristic fittings now, some of them very elegant. But they didn't get rid of the big lump under the carpet. The most they can do is hide it from view.
Replies from: Jack↑ comment by Jack · 2010-05-16T12:15:47.994Z · LW(p) · GW(p)
An ontology is a theory about what's there.
We don't have access to "what is there". What we have are sensory experiences. Lots of them! Something is generating those experiences and we would like to know what we will experience in the future. So we guess at the interior structure of the experience generator and build models that predict for us what our future experiences will be. When our experiences differ from what we expected, we revise the model (i.e. our ontology). This includes modeling the thing that we are, which improves our predictions of our own experiences and of what other humans say their experiences are. One thing humans report is the experience of seeing color, so we need to explain that. One thing humans report is the experience of self-awareness, so we have to explain that, etc. You seem to want to reify the sensory experiences themselves just because they look different in our model than in our experience. But the model isn't supposed to look like our experience; it is supposed to predict it. You're making a category error. Presumably you know this and think the problem is the categories. But then you need to motivate your rejection of the categories. All I want are predictions, and I've been getting them, so why should I reject this model?
The attributes of experience itself, like color, meaning, and even time, have been swept under a carpet variously labeled "mind", "consciousness", or "appearance",
But lots of scientists study these things! Last semester I learned all about auditory and visual perception. There is a lot we don't know which is why they're still working on it.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-05-16T12:35:27.440Z · LW(p) · GW(p)
We don't have access to "what is there". What we have are sensory experiences.
So we know that whatever is there must include those sensory experiences. They themselves are part of reality.
But the model isn't supposed to look like our experience it is supposed to predict it.
Most models of reality are partial models that implicitly presuppose some untheorized notion of experience in the model-user. Medicine and engineering aren't especially focused on the fact that doctors and engineers encounter the world, like everyone else, through the medium of conscious experience.
But there are two types of explanatory enterprise where conscious experience does become explicitly relevant. One is any theory of everything. The other is any science which does take experience as its subject matter. In the latter case, scientists will explicitly theorize about the nature of experience and its relationship to other things. In the former case, a theory of everything must take a stand on everything, including consciousness, even if only to say "it's made of atoms, like everything else".
So some part of these models is supposed to look like experience. However, as I have been saying elsewhere, nothing in physical ontology looks like an experience; and the sciences of consciousness so far just construct correlations between "physics" (i.e. matter) and experience. But they must eventually address the question of what an experience is.
↑ comment by PhilGoetz · 2010-05-15T03:46:37.173Z · LW(p) · GW(p)
Nice essay! I'm not yet won over by the suggestion in your final paragraph, but it's intriguing.
Replies from: rhollerith_dot_com↑ comment by RHollerith (rhollerith_dot_com) · 2010-05-15T06:31:47.052Z · LW(p) · GW(p)
Phil writes, "Nice essay!"
Is there something in Mitchell's essay (comment) that Mitchell has not already said on this site 30 times or did you just like the way he phrased it this time?
comment by LazyDave · 2010-03-27T20:06:57.988Z · LW(p) · GW(p)
Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be.
I do not really think you need an anthropic argument to prove that "you" couldn't be an animal; it is more a matter of definition, i.e. by definition you are not an animal. For example, there is no anthropic reason that "I" couldn't have been raised in Alabama, but what would it even mean to say that I could have been raised in Alabama? That somebody with the same exact genes and parents was raised in Alabama? In that case, it is the same as saying I have an identical twin that was raised there. The fact of the matter is that when I say "I", I am referring to someone with all of the same genes and experiences I have. To say that "I" could have been some other human is nonsensical; to say that "I" could have been a bat is even more so.
comment by bogus · 2010-03-27T15:06:57.245Z · LW(p) · GW(p)
Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.
You assume that you have equal probability of being any conscious being. The internal subjective experience of humans stands out in its complexity; perhaps more complex subjective experiences have higher weight for some reason.
comment by neq1 · 2010-05-17T17:48:31.174Z · LW(p) · GW(p)
Anthropic reasoning is what leads people to believe in miracles. Rare events have a high probability of occurring if the number of observations is large enough. But whoever that rare event happens to will feel like it couldn't have just happened by chance, because the odds against it happening to them were so large.
If you wait until the event occurs, and then start treating it as a random event from a single trial, forming your hypothesis after seeing the data, you'll make inferential errors.
Imagine that there are balls in an urn, labeled with numbers 1, 2,...,n. Suppose we don't know n. A ball is selected. We look at it. We see that it's number x.
non-anthropic reasoning: all numbers between 1 and n were equally likely. I was guaranteed to observe some number, and the probability that it was close to n was the same as the probability that it was far from n. So all I know is that n is greater than or equal to x.
anthropic reasoning: A number as small as x is much less likely if n is large. Therefore, hypotheses with n close to x are more likely than hypotheses where n is much larger than x.
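The two modes of reasoning differ only in whether the likelihood P(observe x | n) = 1/n enters the calculation. A minimal Bayesian sketch (the uniform prior over n and its cutoff N_MAX are assumptions for illustration, not part of the problem as stated):

```python
# Posterior over the number of balls n, after drawing ball number x
# uniformly at random from an urn containing balls 1..n.
# Assumed prior: n uniform on 1..N_MAX (N_MAX is a hypothetical cutoff).

N_MAX = 1000   # hypothetical upper bound on n
x = 60         # the observed ball number

# Likelihood of drawing x given n: 1/n if x <= n, else 0.
posterior = {n: (1.0 / n if x <= n else 0.0) for n in range(1, N_MAX + 1)}
total = sum(posterior.values())
posterior = {n: p / total for n, p in posterior.items()}

# Hypotheses with n close to x carry more weight than much larger n,
# which is exactly the "anthropic" conclusion above.
assert posterior[60] > posterior[600]
```

The "non-anthropic" version, by contrast, treats the likelihood as flat above x, which is what discards the information that small draws are more probable from small urns.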
Replies from: Cyan, CarlShulman↑ comment by Cyan · 2010-05-17T21:13:25.144Z · LW(p) · GW(p)
What you have labeled anthropic reasoning is actually straight-up Bayesian reasoning. Wikipedia has an article on the problem, but only discusses the Bayesian approach briefly and with no depth. Jaynes also talks about it early in PT:LOS. In any event, to see the logic of the math, just write down the likelihood function and any reasonable prior.
↑ comment by CarlShulman · 2010-05-17T18:37:27.622Z · LW(p) · GW(p)
I suggest reading Radford Neal.
Replies from: neq1
comment by UnholySmoke · 2010-04-29T14:37:33.215Z · LW(p) · GW(p)
The phrase "for me to be an animal" may sound nonsensical, but "why am I me, rather than an animal?" is not obviously sillier than "why am I me, rather than a person from the far future?".
Agreed - they are both equally silly. The only answer I can think of is 'How do you know you are not?" If you had, in fact, been turned into an animal, and an animal into you, what differences would you expect to see in the world?
Replies from: Strange7
comment by Kutta · 2010-03-27T18:49:30.831Z · LW(p) · GW(p)
Bats, with sensory systems so completely different from those of humans, must have exotic bat qualia that we could never imagine. (...) ...we still have no idea what it's like to feel a subjective echolocation quale.
(Excuse me for being off topic)
Reductionism is true; if we really know everything about a bat brain, bat qualia would be included in the package. Imagine a posthuman that is able to model a bat's brain and sensory modalities on a neural level, in its own mind. There is no way it'd find anything missing about the bat; there is no way it'd complain about persistently mysterious bat qualia. It's a fact that current humans are very bad at modeling any minds, including their own. Thus, human-level neuroscientists researching the bat's brain are a bit like the human operator in Searle's Chinese Room; they have access to a lot of abstract information but they're unable to actually hold a neural model of the bat in their minds and simulate firings.
In short, I think that in this case it's more reasonable to point to insufficiencies in brainpower before we start considering fundamental epistemological problems.
Replies from: wedrifid↑ comment by wedrifid · 2010-03-27T19:18:27.912Z · LW(p) · GW(p)
It's a fact that current humans are very bad at modeling any minds, including their own.
"Very bad" compared to what? We are brilliant at modelling minds relative to our ability for abstract reasoning, mathematics and, say, repeating a list of 8 items we were just told in reverse order.
Replies from: Kutta
comment by PhilGoetz · 2010-03-28T00:46:27.404Z · LW(p) · GW(p)
[Edited, because it was wrong.]
The doomsday argument is,
O(X) = random human me observes some condition already satisfied for X humans
pt(X) = P(there will be X humans total over the course of time)
pt(2X | O(X/2)) / pt(2X) < pt(X | O(X/2)) / pt(X)
This is true if your observation O(X) is, "X people lived before I was born", or, "There are X other people alive in my lifetime".
But if your observation O(X) is "I am the Xth human", then you get
pt(2X | O(X/2)) / pt(2X) = pt(X | O(X/2)) / pt(X)
and the Doomsday argument fails.
So which definition of O(X) is the right observation to use?
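The first reading of O(X) can be sketched numerically. Under the self-sampling assumption that my birth rank is uniform on 1..T given T total humans (an assumption, not anything the physics supplies), observing a given rank favors smaller totals:

```python
# Doomsday-style likelihood update. Self-sampling assumption:
# given T total humans ever, my birth rank is uniform on 1..T.
def likelihood(rank, total):
    """P(I have this birth rank | `total` humans ever), under self-sampling."""
    return 1.0 / total if 1 <= rank <= total else 0.0

X = 100
rank = X // 2  # suppose I observe that X/2 humans came before me

# Likelihood ratio for the hypotheses "T = X" versus "T = 2X":
ratio = likelihood(rank, X) / likelihood(rank, 2 * X)
print(ratio)  # 2.0 -- smaller totals are favored, the Doomsday effect
```

This only formalizes the first reading; on the second reading ("I am the Xth human"), the comment's claim is that the likelihood ratio is 1 and the update vanishes.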
comment by Psychohistorian · 2010-03-27T18:20:57.105Z · LW(p) · GW(p)
The anthropic principle is contingent on no additional information. For example, if sentient life exists elsewhere in the universe, your odds of being a human are vanishingly small. This would suggest sentient life does not exist elsewhere in the universe. However, given that there appears to be nothing so special about earth that it wouldn't reoccur many times among trillions and trillions of stars, we can still conclude that sentient life does likely exist elsewhere in the universe.
Similarly, in this context, the fact that animals have brains that are relatively similar to ours itself gives you evidence with which to refine the anthropic argument. As you said, that hard line between experience-having and not-experience having would be weird. Thus, evidence from the observed universe trumps, or at least significantly adjusts, the anthropic argument.
It seems to take very tiny pieces of evidence to destroy a lot of anthropic reasoning, which is why, as much as I'd enjoy me some fillet-'o-chimp, I don't generally trust anthropic reasoning as a stopping point; we can often improve on it with available information.
Replies from: PhilGoetz, Jordan, wedrifid↑ comment by PhilGoetz · 2010-03-27T21:29:48.177Z · LW(p) · GW(p)
For example, if sentient life exists elsewhere in the universe, your odds of being a human are vanishingly small. This would suggest sentient life does not exist elsewhere in the universe.
That's not how the anthropic principle works.
The anthropic principle lets you compute the posterior probability of some value V of the world, given an observable W. The observable can be the number of humans who have lived so far, and the value V can be the number of humans who will ever live. The probability of a V where 100W < V is smaller than the probability of a V only a few times larger than W.
It's unclear if you get to count transhumans and AIs in V, which is the same problem Yvain is raising here about whether to include bats and ants in the distribution.
You can't conclude that there aren't other planets with life because you ended up here, because the probability of different values of V doesn't depend on the observable W. There's no obvious reason why P(there are 9999 other planets with life | I'm on this planet here with life) / P(there are 9999 other planets with life) would be different than P(there are 0 other planets with life | I'm on this planet with life) / P(there are 0 other planets with life).
(I divided by the priors to show that the anthropic principle takes effect only in the conditional probability; having a different prior probability is not an anthropic effect.)
Disclaimer: I'm a little drunk.
I'm troubled now that this formulation doesn't seem to work, because it relies on saying "P(fraction of all humans who have lived so far is < X)". It doesn't work if you replace the "<" with an "=". But the observable has an "=".
BTW, outside transhumanist circles, the anthropic principle is usually used to justify having a universe fine-tuned for life, not to figure out where you stand in time, or whether life will go extinct.
Replies from: PlatypusNinja, wedrifid↑ comment by PlatypusNinja · 2010-03-31T18:49:09.394Z · LW(p) · GW(p)
The anthropic principle lets you compute the posterior probability of some value V of the world, given an observable W. The observable can be the number of humans who have lived so far, and the value V can be the number of humans who will ever live. The probability of a V where 100W < V is smaller than the probability of a V only a few times larger than W.
This argument could have been made by any intelligent being, at any point in history, and up to 1500AD or so we have strong evidence that it was wrong every time. If this is the main use of the anthropic argument, then I think we have to conclude that the anthropic argument is wrong and useless.
I would be interested in hearing examples of applications of the anthropic argument which are not vulnerable to the "depending on your reference class you get results that are either completely bogus or, in the best case, unverifiable" counterargument.
(I don't mean to pick on you specifically; lots of commentors seem to have made the above claim, and yours was simply the most well-explained.)
Replies from: PhilGoetz, SilasBarta↑ comment by PhilGoetz · 2010-04-01T21:21:17.447Z · LW(p) · GW(p)
This argument could have been made by any intelligent being, at any point in history, and up to 1500AD or so we have strong evidence that it was wrong every time. If this is the main use of the anthropic argument, then I think we have to conclude that the anthropic argument is wrong and useless.
First, "the anthropic argument" usually refers to the argument that the universe has physical constants and other initial conditions favorable to life, because if it didn't, we wouldn't be here arguing about it.
Second, what you say is true, but someone making the argument already knows this. The anthropic argument says that "people before 1500AD" is clearly not a random sample, but "you, the person now conscious" is a random sample drawn from all of history, although a sample of very small size.
You can dismiss anthropic reasoning along those lines for having too small a sample size, without dismissing the anthropic argument.
↑ comment by SilasBarta · 2010-03-31T20:14:40.280Z · LW(p) · GW(p)
Thank you for saying this. I agree. Since at least the time I made this comment, I have tentatively concluded that anthropic reasoning is useless (i.e. necessarily uninformative), and am looking for a counterexample.
↑ comment by Jordan · 2010-03-27T19:05:31.808Z · LW(p) · GW(p)
The anthropic principle is contingent on no additional information. For example, if sentient life exists elsewhere in the universe, your odds of being a human are vanishingly small.
True, assuming sentient life is common enough.
This would suggest sentient life does not exist elsewhere in the universe.
Not true. This is like saying that if you roll a million sided die and get 362,853 then the die must have been fixed because the chance of getting 362,853 is 1-in-a-million!
Replies from: Psychohistorian↑ comment by Psychohistorian · 2010-03-27T21:14:54.186Z · LW(p) · GW(p)
if you roll a million sided die and get 362,853 then the die must have been fixed because the chance of getting 362,853 is 1-in-a-million!
Were that appropriate, the same mechanism would also defeat the reasoning in this post. While I agree with your ultimate conclusion, using solely the anthropic principle and no additional information, I believe you are compelled to conclude extraterrestrial life does not exist.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2010-03-27T21:26:30.658Z · LW(p) · GW(p)
Were that appropriate, the same mechanism would also defeat the reasoning in this post.
I disagree. There is a natural category (sentience, reflectivity, etc.) that picks out humans over other Earthly animals and leads to a more-than-max-entropy prior for humans being more anthropically special*; this is not the case for either 362,853 or Earth.
* If you accept anthropic reasoning at all, that is. I'm sort of playing devil's advocate in this comment; this post mostly just pushes me further towards biting the bullet of UDT/collapsing epistemology to decision theory.
↑ comment by wedrifid · 2010-03-27T19:05:29.488Z · LW(p) · GW(p)
The anthropic principle is contingent on no additional information. For example, if sentient life exists elsewhere in the universe, your odds of being a human are vanishingly small. This would suggest sentient life does not exist elsewhere in the universe. However, given that there appears to be nothing so special about earth that it wouldn't reoccur many times among trillions and trillions of stars, we can still conclude that sentient life does likely exist elsewhere in the universe.
With an acknowledgement that on topics of this difficulty I don't expect to be right a supermajority of the time I have to disagree both on what "I am human" tells me about other beings and on what extra information tells me.
Given no additional information, noticing that I am a human increases the probability that there is sentient life elsewhere in the universe (it at least shows that sentient life is possible). It is a mistake to draw any conclusions from p(a randomly chosen sentient being is a human | there are sentient beings elsewhere in the universe). If both you and aliens exist then you and aliens exist. Knowing that you happen to be you instead of an alien isn't particularly significant.
As for extra information... well, the fact that we can't see any evidence of interstellar civilisations eating stars or otherwise messing up the place does provide weak-to-moderate evidence that intelligent life is hard to come by, depending on how likely it is for intelligent life to progress that far. In that case anthropic reasoning would help explain how we could come to exist given that life was improbable. We would be unimaginably improbable freaks, and all other similarly improbable freaks would be off in other Everett branches.
Replies from: Psychohistorian
↑ comment by Psychohistorian · 2010-03-27T21:23:15.183Z · LW(p) · GW(p)
Assume three possible worlds, for simplicity:
A: 1 billion humans. No ETs.
B: 1 billion humans, 1 million ETs
C: 1 billion humans, 1 billion billion billion ETs.
If I am using the anthropic principle and the observation that I am human, these together provide very strong evidence that we are in either world A or world B, with a slightly stronger nudge towards world A. Where we end up after this observation depends on our priors. I fully agree that additional inferences (such as our own existence raising the probability of other sentient beings, or the sheer size of the universe lowering the odds that we are alone) affect the final probability.
The inference I described may be unduly restricted, but that is my exact point. The original post made an anthropic inference in isolation - it simply used the fact that there are more animals than humans, and the author is a human, to infer that animals do not have experiences. The form of the argument would not have changed significantly were it used to argue that rocks lack experience. Thus, while the argument is legitimate, it is easily overwhelmed by additional evidence, such as the fact that humans and animals have somewhat similar brains. That was my point: the anthropic principle is easily swamped by additional evidence (as in the ET issue) and so is being overextended here.
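The self-sampling update over the three worlds above can be sketched numerically (a toy calculation, not part of the original comment, assuming a uniform prior over the worlds and that "I am human" has likelihood humans/total in each world):

```python
# Toy self-sampling update: P(I am human | world) = humans / total sentient beings.
worlds = {
    "A": (1e9, 0.0),   # 1 billion humans, no ETs
    "B": (1e9, 1e6),   # 1 billion humans, 1 million ETs
    "C": (1e9, 1e27),  # 1 billion humans, 1 billion billion billion ETs
}
prior = {w: 1 / 3 for w in worlds}  # uniform prior over the three worlds

likelihood = {w: h / (h + e) for w, (h, e) in worlds.items()}
evidence = sum(prior[w] * likelihood[w] for w in worlds)
posterior = {w: prior[w] * likelihood[w] / evidence for w in worlds}

for w in sorted(worlds):
    print(f"P(world {w} | I am human) = {posterior[w]:.3g}")
```

On these assumptions the posterior is split almost evenly between worlds A and B (with the slight nudge towards A), while world C is driven down to roughly 5e-19, which is the "very strong evidence" the comment describes.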
Replies from: PhilGoetz, AllanCrossman, wedrifid
↑ comment by PhilGoetz · 2010-03-27T22:19:03.760Z · LW(p) · GW(p)
You're saying, "I rolled a die. The die came up 1. Therefore, this die probably has a small number of sides."
But "human" is just "what we are". Humans are not "species number 1". So your logic is really like saying, "I rolled a die. The die landed with some symbol on top. Therefore, it probably has a small number of sides."
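The contrast between the two die observations can be made concrete (a toy sketch, not from the original comment; the candidate die sizes are assumed for illustration):

```python
# Toy version of the die analogy. Seeing "the die came up 1" favours dice with
# fewer sides, since P(a specific face | N-sided die) = 1/N.  Seeing merely
# "some symbol landed on top" has likelihood 1 for every die, so it is
# uninformative about the number of sides.
dice = [4, 6, 8, 12, 20, 100]
prior = {n: 1 / len(dice) for n in dice}

# Evidence 1: "the die came up 1"
unnorm = {n: prior[n] * (1 / n) for n in dice}
z = sum(unnorm.values())
post_one = {n: p / z for n, p in unnorm.items()}

# Evidence 2: "the die landed with some symbol on top"
post_symbol = dict(prior)  # unchanged: every die predicts this equally

print(post_one)     # mass concentrates on small n
print(post_symbol)  # still uniform
```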
Replies from: Strange7
↑ comment by Strange7 · 2010-03-27T23:14:46.944Z · LW(p) · GW(p)
If the die is small enough for you to hold in one hand, and the symbol covers only one side yet is large enough to easily read with typical human visual acuity, then based on the laws of geometry it would be safe to assume that the die has fewer than about 100 sides, yeah.
Replies from: Rain
↑ comment by AllanCrossman · 2010-03-27T22:32:37.946Z · LW(p) · GW(p)
If the various species of ET are such that no particular species makes up the bulk of sentient life, then there's no reason to be surprised at belonging to one species rather than another. You had to be some species, and human is just as likely as klingon or wookie.
↑ comment by wedrifid · 2010-03-28T13:01:48.028Z · LW(p) · GW(p)
If I am using the anthropic principle and the observation that I am human, these together provide very strong evidence that we are in either world A or world B, with a slightly stronger nudge towards world A.
And here is where we are in simple disagreement. I say that knowing that I am human tells me very little about the configuration of matter in a different galaxy. Things that it does not tell me include, but are not limited to: "is the matter arranged in the form of a childlike humanoid, maybe green or grey, probably with a big head, that can do complex thinking?"
I claim (and, again, it is a complex topic, so I wouldn't bet on myself at odds of more than, say, one-gets-you-twenty) that this argument isn't weak evidence that is easily overwhelmed. It is not evidence at all.
comment by Peterdjones · 2011-04-16T19:27:58.943Z · LW(p) · GW(p)
You can't be a toaster, because toasters don't have any awareness at all. As a philosophical ponderer, you likewise can't be an animal lower than H. Sap. If you were, you wouldn't be able to reflect on it.
comment by timtyler · 2010-05-16T07:51:33.423Z · LW(p) · GW(p)
Re: "If the doomsday argument is sufficient to prove that some catastrophe is preventing me from being one of a trillion spacefaring citizens of the colonized galaxy, this argument hints that something is preventing me from being one of a trillion bats or birds or insects."
The doomsday argument? It seems like a dubious premise.
comment by PeerInfinity · 2010-03-30T18:49:12.134Z · LW(p) · GW(p)
The following is copypasted from some stream-of-consciousness-style writing from my own experimental wave/blog/journal, so it may be kinda messy. If this gets upvoted, I might take the time to clean it up some more. The first part of this is entirely skippable.
(skippable part starts here)
I just read this LW post. I think the whole argument is silly. But I still haven't figured out how to explain the reasons clearly enough to post a comment about it. I'll try to write about it here.
Some people have posted objections to it in the comments, but so far none that clearly show the problem.
This is basically the same problem as with the doomsday argument, and anthropics in general. Generalizing from one example. Or, more accurately, trying to do statistics with a sample size of 1. Improbable events happen. If people experiencing this improbable event try to do anthropic reasoning about it, then they will conclude that "we just happened to be in this improbable category" is improbable, and therefore they are probably in some other, more probable category that gives the same results. And they would be right. They probably are in the more probable category. But some observers really are in that improbable category. And if they take actions assuming that they are in the more probable category, and not the improbable category, then they will be worse off as a result. But that won't be because they made a mistake in the math, it will be because they just happened to be in the more improbable category, and therefore any actions they take assuming that they are in the more probable category will be suboptimal.
Sorry, the above was confusing. I should rewrite it using specific examples, not general descriptions.
(skippable part ends here)
One standard example is the Doomsday Argument: It would be improbable for us to find ourselves in a low-population, pre-Singularity past, if there will be a future containing many orders of magnitude more observers. The conclusion of the doomsday argument is that there probably is no post-Singularity future, and that humanity will probably soon go extinct. And yes, that is what "the math" says. But it would be an extremely bad idea to assume that doomsday will inevitably come soon, and therefore there's no point in trying to do anything to prevent it. The math says that it's improbable to find yourself as one of the few people before the Singularity. The math doesn't say that it's impossible. There are still some people who will just happen to find themselves alive before the Singularity, and it would be a tragedy of epic proportions if these people, upon recognizing that their current situation is improbable, decide that there's no point trying to help make sure the Singularity happens, and turns out okay for everyone involved.
The same applies to the Simulation Argument: If there is a post-Singularity future that contains lots of ancestor simulations, then it would be improbable for us to find ourselves in the real pre-Singularity universe, rather than one of these ancestor simulations. And yes, that is what "the math" says. But it would be a tragedy of epic proportions to assume that you must inevitably be in one of these simulations, and therefore there's no point in trying to help make sure the Singularity happens, and turns out okay for everyone involved. Oh, and it would also be a good idea to try to prevent any ancestor simulations from being created in the future. Or at least that's my opinion, as someone who doesn't want to be in an ancestor simulation.
So, now how does all this apply to that LW post? Oh, right, assuming that animals are probably not conscious. The math is less clear in this case, but even if the math turns out to be correct, it would still be a bad idea to forget about that word "probably". It would still be tragic to guess wrong about whether animals are conscious, and treat them cruelly for your own benefit as a result. And, as some commenters pointed out, the probability of guessing wrong is quite high. And so:
(probability that animals are conscious) x (suffering caused by treating animals cruelly) > (probability that animals are not conscious) x (minor inconveniences to yourself caused by not treating animals cruelly)
Or at least that's my guess. I could be wrong.
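The inequality above can be checked with a toy expected-value calculation (the numbers here are made-up placeholders, purely illustrative, not anyone's actual estimates):

```python
# Toy expected-value comparison for "treat animals as if they might be conscious".
p_conscious = 0.5        # assumed probability that animals are conscious
harm_if_cruel = 1000.0   # assumed disutility of cruelty, if they are conscious
inconvenience = 1.0      # assumed cost of avoiding cruelty, if they are not

expected_harm = p_conscious * harm_if_cruel
expected_inconvenience = (1 - p_conscious) * inconvenience

print(expected_harm, expected_inconvenience)
print(expected_harm > expected_inconvenience)
```

With any remotely similar numbers the left-hand side dominates, which is the point of the inequality: the downside of guessing wrong is badly asymmetric.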
Replies from: Rain
comment by Mallah · 2010-03-30T17:15:31.943Z · LW(p) · GW(p)
Another reason I wouldn't put any stock in the idea that animals aren't conscious is that the complexity cost of a model in which we are conscious and they (other animals with complex brains) are not is many bits of information. 20 bits gives a prior probability factor of 10^-6 (2^-20). I'd say that would outweigh the larger number of animals, even if you were to include the animals in the reference class.
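The two opposing factors can be compared directly (a toy sketch, not from the original comment; the animal-to-human ratio is an assumed illustrative figure):

```python
# Mallah's trade-off: the anthropic argument favours "only humans are conscious"
# by roughly the animal-to-human ratio, while the model's extra complexity
# penalises it by 2^-bits.
complexity_bits = 20
complexity_penalty = 2.0 ** -complexity_bits  # ≈ 9.5e-7, roughly the 1e-6 stated

animals_per_human = 1e4  # assumed ratio of complex-brained animals to humans
net_odds_humans_only = animals_per_human * complexity_penalty

print(net_odds_humans_only)  # < 1: the complexity penalty dominates
```

For this assumed ratio the net odds come out well below 1, matching the claim that the complexity cost outweighs the larger number of animals.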
Replies from: bogus
↑ comment by bogus · 2010-03-30T21:00:39.530Z · LW(p) · GW(p)
The complexity cost of a model in which any brain is conscious is enormous. Keep in mind that a model with consciousness has to 'output' qualia, concepts, thoughts... which (as far as we can tell) correspond to complex brain patterns which are physically unique to each single brain.
That is, unless the physical implementation of subjective experience is much simpler than we think it is.
comment by Wei Dai (Wei_Dai) · 2010-03-30T06:37:41.006Z · LW(p) · GW(p)
An (insufficiently well designed) AI might use this kind of reasoning to conclude that it's not like anything to be a human. (I mentioned this as an AI risk at the bottom of this SL4 post.)
comment by timtyler · 2010-03-27T16:14:03.132Z · LW(p) · GW(p)
Re: "Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal."
Your priors say that you are a human. It is evidence that is hard to ignore, no matter how unlikely it may seem. Concrete evidence that you are part of a minority trumps the idea that being part of a minority is statistically unlikely.
Since this is true regardless of whether or not it "feels like something" to be a bat, the mere evidence of your existence as a human doesn't allow you to draw conclusions about Nagel's bat speculations.
comment by Desrtopa · 2011-01-07T16:13:57.547Z · LW(p) · GW(p)
If you randomly selected from the set of all sentient beings throughout time and space, the odds are vanishingly low that you would get the Little Prince as well.
Suppose that he ponders his situation, and concludes that if there were places in the universe where many, many humans can coexist, then it would be unlikely that he would find himself living alone on an asteroid.
If we accept for the sake of argument that he exists, then someone must be the Little Prince, and be doomed to make incorrect inferences about the representativeness of their situation.
It makes no difference to the Little Prince's observations whether he is the only being in the universe, or in a heavily populated universe where he simply happens to find himself completely isolated.
Similarly, it makes no difference to our observations whether our future contains a mass extinction event or a population explosion. A universe where our future contained a population explosion would contain a vantage point equivalent to our own just as one where our near future contained an extinction event would.
For any individual, the answer to the question "is my situation typical?" is more likely to be yes than no, at least for sufficiently broad definitions of "typical." But that doesn't mean that the answer can't be "no," and unless you define "typical" so broadly as to be meaningless, sometimes it has to be. If you see a possible future event that would render all present and past existences atypical, you can't use anthropic reasoning to determine whether it's likely to happen, because the universes in which the event doesn't happen and the ones where it does still contain the same vantage points prior to it.
Replies from: Will_Sawin
↑ comment by Will_Sawin · 2011-01-10T01:03:23.298Z · LW(p) · GW(p)
The logical conclusion of that version of the anthropic principle is that the universe contains infinitely many copies of us.
comment by CannibalSmith · 2010-03-30T11:35:03.405Z · LW(p) · GW(p)
Can we dismiss all anthropic reasoning by saying that probability is meaningless for singular events? That is, the only way to obtain probability is from statistics, and I cannot run repeated experiments of when, where, and as what I exist.
Replies from: wnoise, thomblake
↑ comment by wnoise · 2010-03-30T16:35:01.461Z · LW(p) · GW(p)
That's entirely contrary to the Bayesian program that this site broadly endorses: throwing out the subjective probability baby with the anthropic bath water, as it were.
Replies from: CannibalSmith
↑ comment by CannibalSmith · 2010-03-31T13:33:00.544Z · LW(p) · GW(p)
What, really? Wait, what!? Uh.
- Could you please answer my question directly in the form of "yes/no, because"?
- Do you mean by subjective probability the fact(?) that probability is about the map and not the territory?
- If yes, what does it have to do with anthropics?
- If yes, what! Contrary?? I learned about it here!
- If no, I'm completely confused.
Also, dear reader, vote parent up or down to tell me whether he's correct about you.
Replies from: Morendil, wedrifid
↑ comment by Morendil · 2010-03-31T13:46:12.201Z · LW(p) · GW(p)
No, probability is not "meaningless for singular events". We can meaningfully discuss, in Bayesian terms, the probability of drawing a red ball from a jar, even if that jar will be destroyed after the single draw. The probabilities are assessments about our state of knowledge.
Therefore no, we cannot dismiss all anthropic reasoning for the reasons you suggested.
If you got "probability is meaningless for singular events" from what you learned here, either you are confused, or I am. (Possibly both.)
↑ comment by wedrifid · 2010-03-31T14:21:15.894Z · LW(p) · GW(p)
Can we dismiss all anthropic reasoning by saying that probability is meaningless for singular events?
No, because it isn't meaningless.
That is, the only way to obtain probability is from statistics
No, you can get it from mathematics. Even basic arithmetic. Infinite series of events, on the other hand, those are hard to come by.
, and I cannot run repeated experiments of when, where, and as what I exist.
I dismiss many examples of (bad) anthropic reasoning because they assume that the probability of their subjective experience is what you get if you draw a random head out of a jar of all things that meet some criteria of self-awareness.
Do you mean by subjective probability the fact(?) that probability is about the map and not the territory?
Kind of. Read "Probability is subjectively objective".
If yes, what! Contrary?? I learned about it here!
The frequentist dogma was the 'contrary' part, not the 'maps/territory' stuff. Probability doesn't come from statistics and definitely applies to single events.
Replies from: wnoise
↑ comment by thomblake · 2010-03-31T13:54:54.974Z · LW(p) · GW(p)
It seems to me that the disagreement here is because you're looking at different parts of the problem. It might well be said that you can't have a well-calibrated prior for an event that never happened before, if that entails that you actually don't know anything about it (and that might be what you're thinking of). On the other hand, you should be able to assign a probability for any event, even if the number mostly represents your ignorance.
comment by Pablo (Pablo_Stafforini) · 2010-04-03T13:27:36.289Z · LW(p) · GW(p)
Instead of showing that non-human animals are unconscious, anthropic reasoning may show that such animals are conscious if we are not ourselves soon doomed to extinction. Expanding the class of observers to include such animals makes it less surprising that we find ourselves living at this comparatively early stage of human evolution, since "we" refers to conscious rather than to merely human beings.
This argument assumes that most non-human animals will soon go extinct. But this assumption makes sense under many of the possible scenarios involving human survival.