What do rationalists think about the afterlife?
post by Adam Zerner (adamzerner) · 2014-05-13T21:46:48.131Z · LW · GW · Legacy · 99 comments
I've read a fair amount on Less Wrong and can't recall much said about the plausibility of some sort of afterlife. What do you guys think about it? Is there some sort of consensus?
Here's my take:
- Rationality is all about using the past to make predictions about the future.
- The question at hand: "What happens to our consciousness when we die?" (the wording may be imprecise, but hopefully you know what I mean).
- We have some data on what preconditions seem to produce consciousness (i.e., neuronal firing). However, this is only data on preconditions that produce consciousness in beings that can and do demonstrate their consciousness to us.
- Can we say that a different set of preconditions doesn't produce consciousness? I personally don't see a reason to believe so. I see three possibilities that we have no grounds to reject, because we have no data on them. I'm still confused and not too confident in this belief, though.
- Possibility 1) Maybe the 'other' conscious beings don't want to communicate their consciousness to us.
- Possibility 2) Maybe the 'other' conscious beings can't communicate their consciousness to us ever.
- Possibility 3) Maybe the 'other' conscious beings can't communicate their consciousness to us given our level of technology.
- And finally, since we have no data, what can we say about the likelihood of our consciousness returning/remaining after we die? I would say the chances are 50/50. For something you have no data on, any outcome is equally likely (this feels like something that must have been discussed before, so a side question: is this logic sound?).
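One way to sketch why "no data implies 50/50" is unstable: the principle of indifference gives different answers depending on how the hypothesis space is carved up. This is a toy illustration only; the hypothesis labels below are invented for the example.

```python
# Carving 1: two outcomes, each assigned 1/2 by indifference.
carve_two = {"afterlife": 1 / 2, "no afterlife": 1 / 2}

# Carving 2: split "afterlife" into three sub-possibilities (labels invented),
# and apply indifference to the four outcomes -> each gets 1/4.
carve_four = {
    "reincarnation": 1 / 4,
    "heaven-like": 1 / 4,
    "simulation-restart": 1 / 4,
    "no afterlife": 1 / 4,
}

p_afterlife_1 = carve_two["afterlife"]            # 0.5 under carving 1
p_afterlife_2 = 1 - carve_four["no afterlife"]    # 0.75 under carving 2

# Same total ignorance, different probabilities for the same proposition,
# so indifference alone cannot pin down "50/50".
print(p_afterlife_1, p_afterlife_2)
```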
Edit: People in the comments have just taken it as a given that consciousness resides solely in the brain without explaining why they think this. My point in this post is that I don't see why we have reason to reject the 3 possibilities above. If you reject the idea that consciousness could reside outside of the brain, please explain why.
Comments sorted by top scores.
comment by Shmi (shminux) · 2014-05-13T23:34:08.871Z · LW(p) · GW(p)
This question has little to do with LW-style rationality. The contrivance of postulating extra entities like souls becomes obvious once you accept that mind is a process in a living brain, affected by drugs, injuries, etc. You can, of course, postulate anything you want, but the odds are against you, and definitely not 50/50. See this illuminating debate, discussed here. More about the 50/50 fallacy you've fallen prey to.
Edit: fixed the debate video link to a better-quality one.
Replies from: adamzerner
↑ comment by Adam Zerner (adamzerner) · 2014-05-14T00:11:23.507Z · LW(p) · GW(p)
I don't think I'm committing the Fallacy of Gray. The Fallacy of Gray is treating all probabilities between 0 and 100 as the same. This instance is a question of what to do with a completely unknown probability.
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-05-14T00:12:24.105Z · LW(p) · GW(p)
Watch the debate I linked, please. The probability is not unknown, it's vanishingly small.
Replies from: adamzerner
↑ comment by Adam Zerner (adamzerner) · 2014-05-14T00:19:10.348Z · LW(p) · GW(p)
I will watch it as I go to sleep tonight.
Regarding the 50/50 fallacy, there are two questions: 1) What is the likelihood of something happening for which you have no data that would allow you to predict it? 2) What is the likelihood of an afterlife?
The second question is the overarching question of this post, and I may be wrong about its probability. But what I meant in the previous comment is that I don't think the Fallacy of Gray applies to the first question. Sorry that I wasn't clear about that. Now that I've clarified, do you think it would be committing the Fallacy of Gray to say that something you have no data on has a 50/50 shot of occurring?
Replies from: polymathwannabe, shminux, DanielLC
↑ comment by polymathwannabe · 2014-05-14T00:36:03.437Z · LW(p) · GW(p)
You don't seem to understand what it's really like to "have no data." A question with no data is something like: Will the Emperor of Alpha Centauri eat fried lummywaps or boiled sanquemels today?
On human consciousness we have lots and lots of data, more than enough to predict confidently that it's lost forever when we die.
Replies from: D_Alex
↑ comment by D_Alex · 2014-05-19T07:41:35.433Z · LW(p) · GW(p)
A question with no data is something like: Will the Emperor of Alpha Centauri eat fried lummywaps or boiled sanquemels today?
I get your point, but your example is poor - I think we have more than enough data to answer this question: No, with 99%+ probability.
↑ comment by Shmi (shminux) · 2014-05-14T00:50:21.159Z · LW(p) · GW(p)
What is the likelihood of something happening for which you have no data that would allow you to predict it?
It depends on where your priors come from. If you mean Knightian uncertainty, that is a whole area of research. The afterlife is not like that.
There is plenty of experimental data to falsify the afterlife model. If you take the souls model seriously, it makes concrete testable predictions; these have been investigated and found to be false. The video, among many other sources, discusses a bunch of them. Based on the results of these experiments we can evaluate the probability of an afterlife, and the number is tiny, though the exact number would depend on who does the calculations.
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2014-05-14T01:06:11.099Z · LW(p) · GW(p)
If you take the souls model seriously, there are concrete testable predictions it makes, these have been investigated and found to be false. The video, among many other sources, discusses a bunch of them.
I don't have time to watch the video, can you give an example? The only experiments I can think of leave the experimenter apparently unable to report the results.
Edit: Ok, some experimenters have apparently reported positive results, but not in a reliable or reproducible manner.
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-05-14T02:47:12.072Z · LW(p) · GW(p)
First, feel free to state your favorite model, or at least some starting point, e.g. "all information in the brain is duplicated in an extra-brane substrate not interacting with the currently known physical forces, such as electromagnetism", or "the mind is not located in the brain, the latter is only a conduit it uses to communicate with other minds", and we can start refining it, eventually finding what it predicts. Then we can start looking whether relevant experiments have been done.
For example, the observation that damaging a certain part of one's brain leads to cognitive changes may or may not be relevant, depending on the model used.
Similarly, noting that ape brains are very similar to human brains, yet apes have no souls, is only an argument against some specific models.
Another example: an experiment that tests whether a person reporting an out-of-body experience is able to read a message secretly hidden on the upward-facing part of a ceiling fan would not falsify an epiphenomenal model where a soul simply accumulates the memories and personality, only to separate some time after the brain death becomes irreversible.
So, pick your model.
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2014-05-14T04:04:03.783Z · LW(p) · GW(p)
First, feel free to state your favorite model, or at least some starting point
So you're asking me to pick some hypothesis to privilege.
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-05-14T05:00:25.725Z · LW(p) · GW(p)
Sure, if you want to put it that way. I see it more as making a commitment to analyzing a single afterlife model vs. the no-afterlife version, "consciousness is a process in a living brain". Committing to analyzing a single clearly defined model helps against inadvertently moving the goalposts if contrary evidence is found. Explicit goalpost-moving is fine, as long as it is of the form "we have found this model to be invalid, but we can construct an alternative model which does not suffer from the same weakness, so let's reject the original model and consider this one instead". I see this done in physics all the time, by the way. So, please do pick a model. (Also, I wasn't the one who downvoted your reply. Downvoting a living conversation is one sure way to end it.)
↑ comment by DanielLC · 2014-05-14T01:13:58.489Z · LW(p) · GW(p)
1) What is the likelihood of something happening for which you have no data that would allow you to predict it?
Do you mean anything that fits that criterion, or a given thing that fits it? I find it quite likely that there is a large number of things that are true that we have no realistic way of finding out, but it's dwarfed by the number of things that are false that we have no realistic way of finding out. If there are five things that are true that you can't verify, and you decide to believe five unverifiable things, it just means you'll be wrong ten times instead of five.
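DanielLC's counting argument can be sketched with invented numbers (nothing below comes from the thread; the quantities are purely illustrative):

```python
# Invented numbers: among many unverifiable claims, the true ones are
# vastly outnumbered by the false ones.
n_unverifiable = 1000
n_true = 5                      # unverifiable claims that happen to be true

# Policy A: disbelieve every unverifiable claim.
# You are wrong only about the n_true true ones.
errors_disbelieve_all = n_true  # 5 errors

# Policy B: pick 5 unverifiable claims at random and believe them.
# With only 5 true out of 1000, the picked ones are almost surely all
# false (5 errors), and you still disbelieve the 5 true ones (5 more).
errors_believe_five = 5 + n_true  # 10 errors

print(errors_disbelieve_all, errors_believe_five)
```

So under these toy numbers, believing unverifiable things roughly doubles your error count, which is the point of the comment.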
comment by Richard_Kennaway · 2014-05-14T07:54:10.275Z · LW(p) · GW(p)
The reason for thinking that consciousness is a physical process of the brain is the remarkable correspondence we find between injuries to the brain or the introduction of various chemicals, and variations of conscious experience. That leaves open the possibility that consciousness is a physically separate entity for which the brain is the interface through which it moves the body and receives sensation from it, rather than brain processes themselves being consciousness. However, most of the ground for that is undercut by the fact that some brain injuries severing parts of the brain from each other also appear to sever the corresponding parts of consciousness -- split-brain observations. It seems that it is not only components of consciousness that correspond to brain regions, but also their interconnection. The more we find out, the less role is left for the hypothesis of a separate consciousness to do any work. Like the old "God of the gaps" argument to defend theism against science.
That is the argument, but it is important to note what it does not solve. It does not solve the problem of what consciousness is -- of why there is any such thing in the world as experience, and how any physical process could produce it. Nobody knows the answer to that. (ETA: including non-materialists. "There are souls" is not an explanation of how they work.) In this it differs from the problem of God. There may be people who claim a direct experience of God in the same way as they have direct experience of themselves, but it does not seem to be common. Experience of one's own presence, on the other hand, is reported by almost everyone. Yet everything else we know about the world tells us that there cannot possibly be any such thing. We cannot even see what an explanation for experience would be.
There are those who point to some physical phenomenon that is present whenever consciousness is, and conclude, "that's consciousness". Unless they show how whatever it is produces experience, they have not explained consciousness. Most proposed solutions are of this form.
There are those who point to some more or less speculative physical process (e.g. quantum gravity in the microtubules) and assert, "that's consciousness". Unless they show how whatever it is would produce experience, they have not even speculatively explained consciousness.
There are those who take the apparently impossible magnitude of the problem as an argument that there is no problem, which is like a student demanding full marks because the exam was too hard.
There are those who claim to have realised that in fact they have no consciousness and never did (Buddhist enlightenment is often so described, and I believe that the psychologist Susan Blackmore has said something like this, but I can't find a cite). If they carry on functioning like ordinary people then they are claiming to be p-zombies, and if they don't, they've philosophised a tamping iron into their brain.
As for myself, I simply note the evidence above, the problem it leaves unsolved, and my lack of any idea for a solution. Yet it seems that few people can do that. Instead, as soon as they start thinking about the problem, they frantically cast about for solutions and latch onto something of one of the above forms. The problem is like a piece of grit in an oyster, provoking it to encyst it in layer upon layer of baroque encrustations that merely hide the problem instead of solving it.
My point in this post is that I don't see why we have reason to reject the 3 possibilities above.
We have as much reason to reject them as we have to reject the existence of a slice of chocolate cake in the asteroid belt.
People in the comments have just taken it as a given that consciousness resides solely in the brain without explaining why they think this.
Yes, this is a regrettably frequent error on this question.
Replies from: Kawoomba, adamzerner
↑ comment by Kawoomba · 2014-05-15T07:38:34.344Z · LW(p) · GW(p)
"There are souls" is not an explanation of how they work.
But of course that is an explanation, in the same sense that Maxwell's equations explain the behavior of electric and magnetic fields. "We have this here thing, and it generates this here other thing, which is an observation we describe with this here equation". No different (disregarding the complexity penalty) from saying "souls generate experience", and if all you miss is the math-speak, then insert some Greek letter for "soul" and another one for "consciousness". Of course, there is no reason to posit that 'souls' exist in the first place, given the commonly accepted definition. However, the concept of souls doesn't get discarded because it fails to explain consciousness, because it does explain it. It gets discarded because it adds complexity without making up for it by making predictions, or by simplifying the descriptions of the available data/experiments.
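The complexity penalty mentioned here can be made concrete in Bayesian terms. The following sketch uses invented numbers: under a description-length prior, a hypothesis pays roughly 2^-k for k extra bits, and can only earn that back through likelihood ratios on observations it predicts differently.

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Bayes' rule in odds form: posterior = prior * product of likelihood ratios."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Hypothetical: suppose "souls generate experience" costs 20 extra bits of
# description length relative to "brains generate experience".
extra_bits = 20
prior_odds_souls = 2.0 ** -extra_bits

# If the souls model predicts exactly the same observations as the brains
# model, every likelihood ratio is 1.0, and no amount of data repays the
# complexity penalty.
same_predictions = [1.0] * 100
print(posterior_odds(prior_odds_souls, same_predictions))  # still 2**-20
```

This is why "it explains consciousness but adds unrewarded complexity" is enough to discard it: the penalty is paid up front and never recouped.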
As for myself, I simply note the evidence above, the problem it leaves unsolved, and my lack of any idea for a solution.
A common error with the (to me) best candidate explanation, panpsychism, is that it is often conflated with "everything is conscious, so everything thinks / has a mind / is agenty in some sense / can suffer". Obviously matter has the potential to generate qualia, at least in certain configurations. It seems, just on complexity grounds, a simpler model to posit that consciousness-generation is just something that matter does, rather than something which happens exclusively in brains or other algorithm-instantiators (not unlike Tegmark's thought process leading up to "Our Mathematical Universe"). Brains just have the means to process, consolidate and report it. Consider: if it were so, then evolutionary selection pressures would work on that property as well, leading to e.g. synchronizing individual "atoms" of consciousness into larger assemblies.
Of course, the distinction between the uncontroversial "matter has the potential to generate consciousness" (which the process-folk would also agree with) and "all matter generates some proto-form of consciousness, and the brain evolved to synchronize and shape these building blocks" may be merely a difference in phrasing. Nevertheless, I lean towards the latter purely because it seems simpler in an algorithmic-complexity sense. (These are abridged thoughts, the in-a-nutshell version. There are weak points, such as model-building and consciousness being linked in some sense, otherwise there would be no selection pressure to consolidate consciousness. Still, I find the 'it's an emergent property' answer much more flawed. We forget that 'emergent' is just code for 'can't grasp it on a more basic level', a shortcut our computationally limited models are forced to make. A computationally unlimited model-builder could do away with the whole 'emergent' concept in the first place, and describe a chair -- or a wave in the sea -- on the most basic level. Concluding that something is emergent, in the sense of saying it only exists from a certain level of granularity upwards, is, to me, confusing our very useful model-building hacks with the reality they aim to describe. There is no 'emergent' in reality. There is only the base level; everything else is a computational hack used by model-builders.)
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2014-05-15T08:44:28.395Z · LW(p) · GW(p)
"There are souls" is not an explanation of how they work.
But of course that is an explanation, in the same sense that Maxwell's equations explain the behavior of electric and magnetic fields. "We have this here thing, and is generates this here other thing, which is an observation we describe with this here equation".
The difference is that the explanation by souls contains no equations, no mechanisms, nothing but the word. Consciousness extends before life and after death because "we have immortal souls". It's like saying that things fall "because of gravity". That someone speaks a foreign language well "by being fluent". That a person learns to ride a bicycle by "getting the knack of it". (The first of those three examples is Feynman's; the other two are things I have actually heard someone say.)
No different (disregarding the complexity penalty) from saying "souls generate experience" and if all you miss is the math-speak, then insert some greek letter for "soul" and another one for "consciousness".
That would be cargo-cult mathematics: imitating superficial details of the external form (Greek letters) while failing to understand what mathematics is. (cough Procrastination Equation cough)
Still, I feel the 'it's an emergent property' to be much more flawed.
Indeed, "emergence" is no more of an explanation. However, I don't think that
A computationally unlimited model-builder could do away with the whole 'emergent' concept in the first place,
Describing things in terms of "tables", "chairs", "mountains", "rivers" and so on is a great deal shorter than describing them in terms of quarks (and how do we know that quarks are the bottom level?). A model-builder so computationally unlimited as to make any finite computation in epsilon time is too magical to make a useful thought experiment. Such an entity would not be making models at all.
There is only the base level, everything else is a computational hack used by model-builders.
How does this claim cash out in terms of experience? If someone tried to take it seriously, why wouldn't they go on to think, "'I' is just a computational hack used by model-builders. I don't exist! You don't exist! We're just patterns of neural firings. No, there are no neurons, there's just atoms! No, atoms don't exist either, just quarks! But how do I know they're the base level? No, I don't exist! There is no 'I' to know things! There are no things, no knowing of things! These words don't exist! They're just meaningless vibrations and neural firings! No, there are no vibrations and no neurons! It's all quarks! Quarkquarkquark..." and spending the rest of their days in a padded cell?
Replies from: Kawoomba
↑ comment by Kawoomba · 2014-05-15T09:24:39.872Z · LW(p) · GW(p)
It's like saying that things fall "because of gravity".
But that's precisely what we say. Things fall "because this equation describes how they fall" (math just allows for a more precise description than natural languages). All we do is find good (first priority: accurate, second priority: short) descriptions, which is just "this does that". Fundamentally, a law of gravity and a law of "souls do consciousness" are the same kind of thing, except the first is actually useful and can be "cashed out" better. Suppose F=ma were a base-level description. How is "because F=ma" any more of an explanation than "because souls do consciousness" (disregarding, of course, practicalities such as predictive value etc.; I'm only concerned with "status as an explanation")?
A model-builder so computationally unlimited as to make any finite computation in epsilon time is too magical to make a useful thought experiment.
Well, you can reject Omega-type thought experiments with the same reasoning. Also, Turing Machines.
I'm surprised that "There is only the base level, everything else is a computational hack used by model-builders." is considered controversial. I don't mean it as "the referents of the abstractions we model-builders use don't exist", just as "'the wave' is just a label, the referent of which isn't some self-evident basic unit; the concept is just a shorthand which is good enough for our daily-life purposes". Think of it this way: Would two supremely powerful model-builders come up with "chair" independently? If there's reason to answer "no", then "chair" is just a label useful to some model-builders, as opposed to something fundamental to the territory.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2014-05-15T10:57:59.435Z · LW(p) · GW(p)
It's like saying that things fall "because of gravity".
But that's precisely what we say. Things fall "because this equation describes how they fall" (math just allows for a more precise description than natural languages).
"This equation describes how they fall" is a sensible thing to say. "Because of gravity" is only sensible if it refers to that mathematics. The usage I intended to refer to is when someone says that who doesn't know the mathematics and is therefore not referring to it -- a member of the general public doing nothing but repeating a word he has learned.
A model-builder so computationally unlimited as to make any finite computation in epsilon time is too magical to make a useful thought experiment.
Well, you can reject Omega-type thought experiments with the same reasoning.
I do reject some of them, and have done so here in the past. Not all of these thought experiments make any sense. Omega works great for formulating Newcomb's problem. After that it's all downhill.
Also, Turing Machines.
Turing machines do not perform arbitrary computations instantly.
Think of it this way: Would two supremely powerful model-builders come up with "chair", independently? If there's reason to answer "no", then chair is just a label useful to some model-builders, as opposed to something fundamental to the territory.
I think there is reason to answer "yes". (I assume these model-builders are looking at human civilisation, and not at intelligent octopuses swimming in the methane lakes of Titan.) Less parochially, they will come up with integers, real numbers, calculus, atoms, molecules, fluid dynamics, and so on. Is "group" (in the mathematical sense) merely a computational hack over the "base level" of ZF (or some other foundation for mathematics)?
What does it actually mean to claim that something is "just a computational hack", in contrast to being "fundamental to the territory"? What would you be discovering, when you discovered that something belonged to one class rather than the other? Nobody has seen a quark, not within any reasonable reading of "to see". Were atoms just a computational hack before we discovered they were made of parts? Were protons and neutrons just a computational hack before quarks were thought of? How can we tell whether quarks are just a computational hack? Only in hindsight, after someone comes up with another theory of particle physics?
That's rather a barrage of questions, but they are intended to be one question, expressed in different ways. I am basically not getting the distinction you are drawing here between "base-level things" and "computational hacks", and what you get from that distinction.
Replies from: Kawoomba
↑ comment by Kawoomba · 2014-05-15T12:02:18.724Z · LW(p) · GW(p)
"Because of gravity" is only sensible if it refers to that mathematics.
Well, that's where we disagree (I'd agree with "useful" instead of "sensible"). The mathematical description is just a more precise way of describing what we see, of describing what a thing does. It is not providing any "justification". The experimental result needs no justification. It just is. And we describe that result, the conditions, the intermediate steps. No matter how precise that description, no matter what language we clad it in, the "mechanism" always remains "because that's what gravity does". We have no evidence to assume that there are "souls" which generate consciousness. However, that is an explanation for consciousness. Just not one surviving the Razor.
To preempt possible misunderstandings, I'm pointing out the distinction between "we have no reason to assume this explanation is the shortest (explains observations in the most compact way) while also being accurate (not contradicting the observations)" -- and -- "this is not an explanation, it just says 'souls do consciousness' without providing a mechanism". The first I strongly agree with. The second I strongly disagree with. All our explanations boil down to "because x does y", be they Maxwell's, or his demon's soul's, or his silver hammer's.
Turing machines do not perform arbitrary computations instantly.
I wasn't previously aware you draw distinctions between "concepts which cannot exist which are useful to ponder" and "concepts which cannot exist which are over the top". ;-) Both of which could be called magical. While I do see your point, the Computer Science-y way of thinking (with which you're obviously familiar) kind of trains one to look at extreme cases / the limits, to test the boundary conditions to check whether some property holds in general; even if those boundary conditions aren't achievable. Hence the usefulness of TMs.
But even considering a reasonably but not wholly unconstrained model-builder, it seems sensible to assume there would be fewer intermediate layers of abstractions needed, as resources grow. No need to have as many separate concepts for the macroscopic and the microscopic if you have no difficulty making the computations from a few levels down (need not be 'the base level'). Each abstracted layer creates new potential inaccuracies/errors, unless we assume nothing is lost in translation. Usually we don't need to concern ourselves with atoms when we put down a chair, but eventually it will happen that we put down a chair and something unexpected happens because of an atomic butterfly effect which was invisible from the macroscopic layer of abstraction.
That's rather a barrage of questions, but they are intended to be one question, expressed in different ways. I am basically not getting the distinction you are drawing here between "base-level things" and "computational hacks", and what you get from that distinction.
Let me try one way of explaining what I mean, and one way to explain why I think it's an important distinction. Consider two model-builders which are both unconstrained to the maximum degree you'd allow without dismissing them as useless fantasies. Consider two perfectly accurate models of reality (or as accurate as you'd allow them to be). Presumably, they would by necessity be isomorphic, and their shortest representation identical. However, since those shortest representations are uncomputable (let's retreat to more realism when it suits us), let's just assume we're dealing with 2 non-optimally compressed but perfectly accurate models of reality. One which the uFAI exterminating the humans came up with, and one which the uFAI terminating the Alpha-Centaurians came up with. So they meet over a game of cards, and compare models of reality. Maybe the Alpha-Centaurians -- floating sentient gas bags, as opposed to blood bags -- never sat down (before being exterminated), so their AI's models don't contain anything easily corresponding to a chair. Would that make its model of physics less powerful, or less accurate? Maybe, once exchanging notes, the Alpha-Centauri AI notes that humans (before being exterminated) liked to use 'chairs', so it includes some representation of 'chair' in its databanks. Maybe the AIs don't rely on such token concepts in the first place, and just describe different conglomerates of atoms, as atoms. It's not that they couldn't just save the 'chair'-concept, there would just be no necessity to do so. No added accuracy, no added model fidelity, no added predictive power. Only if they lacked the oomph to describe everything as atoms-only would they start using labels like "chairs" and "flowers" and "human meat energy conversion facilities".
What I get from that distinction is recognizing pseudo-answers such as "consciousness is an emergent phenomenon and only exists at a certain macroscopic level" as mostly being a confusion of thinking macroscopic layers to be somehow self-contained, independent of the lower levels, instead of computationally-friendly abstractions and approximations of lower levels. When we say "chairs are real, and atoms are real, and the quarks are real, and (whichever base levels we get down to) is real", and hold all of those as true at the same time, there is a danger of forgetting that chairs are only real because atoms are real, which are only real because elementary particles are real, which ... -- a dependency chain going all the way down to who knows where. All the way down to the "everything which can be described by math exists" swirling color vortex. "Consciousness is an actual physical phenomenon which can only be described as a macroscopic process, which only emerges at a higher level of abstraction, yet it exists and creates conscious experience" is self-contradictory, to me. It confuses a layer of abstraction which helps us process the world with a self-contained "emergent" world which is capable of creating conscious experience all on its own. Consciousness must be expressible purely at the base level (whatever it may be), or it cannot be.
Of course it's not feasible to talk about consciousness or chairs on a quark level (unless you're Penrose), and "emergent" used as "we talk about it on this level because it seems most accessible to us" is perfectly fine. However, because of the computational-hack vs. territory confusion, "emergent" is used all too often as if it was some answer to some riddle, instead of an admission of insufficient resources.
That's rather a barrage of text with only a single pass of proof-reading, if you have enough time to go through it, please point out where I've been unclear, or what doesn't make sense to you.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2014-05-21T15:13:00.888Z · LW(p) · GW(p)
We have no evidence to assume that there are "souls" which generate consciousness. However, that is an explanation for consciousness.
I stick to the view that giving a phenomenon a name is not an explanation. It may be useful to have a name, but it doesn't tell you anything about the phenomenon. If you are looking at an unfamiliar bird, and I tell you that it is a European shadwell, I have told you nothing about the bird. At the most, I have given you a pointer with which you can look up what other people know about it, but in the case of "souls", nobody knows anything. (1)
But even considering a reasonably but not wholly unconstrained model-builder, it seems sensible to assume there would be fewer intermediate layers of abstractions needed, as resources grow.
I would expect more abstractions to be used, not fewer. As a practical example of this, look at the history of programming environments. More and more layers of abstraction, as more computational resources have become available to implement them, because it's more efficient to work that way. Efficiency is always a concern, however much your computational resources grow. Wishing that problem away is beyond the limit of what I consider a useful thought-experiment.
Extending the reality-based fantasy in the direction of Solomonoff induction, if you find "chair" showing up in some Solomonoff-like induction method, what does it mean to say they don't exist? Or "hydrogen"? If these are concepts that a fundamental method of thinking produces, whoever executes it, well, the distinction between "computational hack" and "really exists" becomes obscure. What work is it doing?
There is a sort of naive realism which holds that a chair exists because it partakes of a really existing essence of chairness, but however seriously that was taken in ancient Greece, I don't think it's worth air-time today. Naive unrealism, that says that nothing exists except for fundamental particles, I take no more seriously. Working things out from these supposed fundamentals is not possible, regardless of the supply of reality-based fantasy resources. We can't see quarks. We can only barely see atoms, and only a tiny number of them. What we actually get from processing whatever signals come to us is ideas of macroscopic things, not quarks. There is no real computation that these perceptions are computational approximations to, that we could make if only we were better at seeing and computing. As we have discovered lower and lower levels, they have explained in general terms what is going on at the higher levels, but they aren't actually much help in specific computations.
This is quite an old chestnut in the philosophy of science: the more fundamental the entity, the more remote it is from perception.
Maybe the Alpha-Centaurians -- floating sentient gas bags, as opposed to blood bags -- never sat down (before being exterminated), so their models don't contain anything easily corresponding to a chair. Would that make their model of physics less powerful, or less accurate?
The possibilities of the universe are too vast for the human concept of "chair" to have ever been raised to the attention of the Centauran AI. Not having the concept will not have impaired it in any way, because it has no use for it. (Those who believe that zero is not a number, feel free to replace the implied zeroes there by epsilon.) When the Human AI communicates to it something of our history, then it will have that concept.
(1) Neither do they know anything about European shadwells, which is a name I just made up.
↑ comment by Adam Zerner (adamzerner) · 2014-05-14T08:58:45.129Z · LW(p) · GW(p)
I agree with everything you said, except for the Devils Advocate part.
We have as much reason to reject them as we have to reject the existence of a slice of chocolate cake in the asteroid belt.
It's not an argument against my claim. It's just saying "don't play Devils Advocate for fun, only do it to help you find truth". I'm definitely not playing Devils Advocate for fun; I'm trying to arrive at the truth.
I'm not confident in my belief that "we don't know whether or not we'll remain conscious after we die". I'm more confident in it than the alternative, so it's the belief I'll go with for now, but I'm exploring whether or not it's true, which is why I posted here.
Anyway, consider the possibility that we remain conscious after we die, but can't communicate this consciousness to the living (if it helps to be more concrete, let's say that consciousness resides on some super small physical level that is uninterrupted when we die). We have no data on whether or not this possibility is true. We aren't aware of any preconditions that lead to it, and we aren't aware of any preconditions that don't lead to it (the correlation between the brain and consciousness is a correlation between the brain and consciousness that can be communicated). I know it seems crazy (and my inner voice sort of tells me that it's crazy), but I think that this means that my model of the world should give it a 50/50 shot at happening.
I've thought about it a lot, and I think the reason it feels weird to say that is because we're so used to dealing with things we do have information about. I think the instinctive thing to do is to query our minds for data that could support or reject this possibility, and our mind returns data on a similar possibility: consciousness that we can communicate. Another thing: I think it's tempting to reverse stupidity. To say "people who believe in the afterlife are clearly wrong; there isn't any afterlife". I'm still confused, so I apologize for this paragraph being jumbled. I'm basically just saying that these are things to maybe be wary of.
I'd really like to get your thoughts on this after considering my argument again and giving it an honest chance (that we have no data on what does or doesn't lead to the state of "being conscious after you die but being incapable of communicating it to living people"). I definitely wouldn't be surprised if I made a mistake in my reasoning and I would really love to know what it is if I'm making one.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-05-14T15:00:44.502Z · LW(p) · GW(p)
Sorry, I wasn't intending to make a reference to Devil's advocacy with that link, but to the question of whether it is reasonable to claim that there could be a slice of chocolate cake in the asteroid belt. It is true that we can't observe the asteroid belt well enough to tell directly, but the world has patterns, and what we know of those patterns, tested by the observations that we have made, rules out the chocolate cake hypothesis. We don't, indeed can't, say, "how can we know?" and give it 50% probability.
Our observations of the connections between brain states and consciousness also don't leave much room for disembodied existence. Personally, I wouldn't say it's as well established as the nonexistence of asteroidal chocolate cake, but I put souls, ghosts, and other spirits a long way below the 50% of maximum ignorance.
Any ghosts out there reading this? Show yourselves, don't just give a few people spooky chills!
Replies from: adamzerner↑ comment by Adam Zerner (adamzerner) · 2014-05-14T22:20:13.985Z · LW(p) · GW(p)
1) Regarding the connection we see between brain states and consciousness, how do we know that people are really "unconscious"? What if they're still experiencing and feeling things, but are just incapable of communicating this to us at the time, and are incapable of remembering it? Sort of like how when we are asleep and dreaming we're conscious, but we could only confirm this if we wake up at the right time (actually, I'm not sure if this is actually true).
2) Assuming that the connection between brain states and consciousness is legit, then I think you're right. After thinking about it some more, I think the point you make below means it'd be much less than a 50/50 chance.
That leaves open the possibility that consciousness is a physically separate entity for which the brain is the interface through which it moves the body and receives sensation from it, rather than brain processes themselves being consciousness. However most of the ground for that is undercut by the fact that some brain injuries severing parts of the brain from each other also appear to sever the corresponding parts of consciousness -- split-brain observations. It seems that it is not only components of consciousness that correspond to brain regions, but also their interconnection.
We have all this data that says "mess the brain up, and you mess consciousness up". It's possible that there is some underlying thing that represents consciousness, but we have data that says that this thing is messed up when you mess the brain up. It'd be crazy if your brain happens to be messed up in such a way when you die that it leaves the thing underlying consciousness intact.
But as far as overall likelihood of consciousness remaining, it depends on 1). Could we really say that brain states correlate with unconsciousness? How can we determine unconsciousness?
Replies from: polymathwannabe↑ comment by polymathwannabe · 2014-05-15T04:13:51.195Z · LW(p) · GW(p)
What if they're still experiencing and feeling things, but are just incapable of communicating this to us at the time, and are incapable of remembering it?
A hypothesis thus described is untestable. Moreover, it's inconsequential: the observed result is the same regardless of whether the hypothesis is true or not. In such a case, the hypothesis can be safely ignored because it adds nothing to our models.
Replies from: adamzerner↑ comment by Adam Zerner (adamzerner) · 2014-05-15T04:30:53.614Z · LW(p) · GW(p)
1) Untestable doesn't mean wrong.
2) What about the possibility that we just can't do a good job of measuring peoples' consciousness given our level of technology?
comment by Ben Pace (Benito) · 2014-05-14T08:30:00.567Z · LW(p) · GW(p)
"We have a number of situations in which we can hear music in this world, e.g. hitting play on your laptop. However this is just the data we have; there are many situations we don't know about. Can we really say that when a song stops, it doesn't continue somewhere else? Personally I see no reason to believe this. At best we're talking fifty-fifty here."
The situations that cause music to be heard are exceedingly complex and are not likely to happen in other parts of the universe by accident. Similarly, the situations which cause consciousness are exceedingly complex, involving so many substructures and modules working in some currently unfathomable way (Daniel Dennett suggests the need for competition). Even if you think it's likely that all of the necessary parts exist elsewhere in the universe, that's like saying you think music exists somewhere else in the universe - it doesn't explain why you expect the music that stops here to keep on going elsewhere. There's nothing so special about consciousness that it probably just jumps out of the atoms it's being run on - that's magical thinking. Of course, if you are thinking like that, maybe you are religious, and you have to contend with everything we know about the mind so far in cognitive science etc.
Replies from: adamzerner, Eugine_Nier↑ comment by Adam Zerner (adamzerner) · 2014-05-14T09:19:57.193Z · LW(p) · GW(p)
I don't think it's "likely" that consciousness remains after death, and I'm not religious.
My point is that we have no data on the possibility of us retaining consciousness after death but being unable to communicate it to the living. We aren't aware of any preconditions that lead to this, and we aren't aware of any preconditions that don't lead to it.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2014-05-14T09:23:27.009Z · LW(p) · GW(p)
"if you turn the laptop off, how do you know the music isn't still going, and you just can't hear it?"
We see a phenomenon and we see its causes (to the best of our knowledge). It is not a fifty-fifty chance that, if you stop the cause and furthermore have no evidence of the phenomenon, the phenomenon still exists. It is vastly improbable.
Replies from: adamzerner↑ comment by Adam Zerner (adamzerner) · 2014-05-14T09:30:40.209Z · LW(p) · GW(p)
Using the music example, you don't know if you stopped the cause of "music playing in such a way that can't be heard".
For the record, I'm confused about this whole thing, especially the 50/50 part. I'm just saying what my best understanding tells me.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2014-05-14T10:17:34.567Z · LW(p) · GW(p)
You can't hear it, so it's a more complex thing to suppose it's still there. All the aspects of consciousness can be changed by affecting the brain, so to say that if we turn the brain 'off', that consciousness is still functioning just fine, is a hypothesis you have no evidence for.
Replies from: adamzerner↑ comment by Adam Zerner (adamzerner) · 2014-05-14T21:55:50.013Z · LW(p) · GW(p)
You can't hear it, so it's a more complex thing to suppose it's still there.
If it is indeed a more complex thing, then I think that means you're right: the chances of it happening are small (inversely related to the complexity).
But how do we know that it's actually a complex thing? The correlates of consciousness on the neuronal level are complex, but what if underneath it all on some smaller physical level there's a simple cause of consciousness? How do we know that "consciousness is complex" is more likely than this theory?
By the way, sorry I took so long to make this point. This is the real point I've been trying to make, but I haven't really been able to articulate it well until talking it over with you guys.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2014-05-16T10:37:12.366Z · LW(p) · GW(p)
My original point: the hypothesis that consciousness still exists when all our evidence says its causes have ceased operating is a much more complex hypothesis than that it stops existing when all the evidence suggests so, and so you shouldn't expect it to still exist.
Your new point: Just because all the evidence we have says consciousness is a highly complex feature of the world that requires brains functioning in certain ways, doesn't mean that it is, therefore I'm fifty fifty on whether or not it continues when the brain dies (apologies if that seems a little uncharitable).
My recommendation: Maybe watch some Daniel Dennett talks on consciousness, and read some science of how the brain works (I hear Pinker's "How the Mind Works" is good). At the very least, I think that there's so much evidence showing how each aspect of our conscious experience can be affected by affecting the brain, that to suggest consciousness isn't almost entirely dependent on brain function is no longer reasonable. And even then, if you want to suggest that, once the brain stops functioning and you have no memories, no reaction to stimuli, no thought processes (because all of the modules that run these functions have stopped working), some essence of experience still exists... The idea that experience is more than the sum of these brain parts, and will continue to exist, seems like a confusion. It really can just be the sum of its parts, a bag of tricks, because that's what the evidence indicates.
Replies from: adamzerner↑ comment by Adam Zerner (adamzerner) · 2014-05-16T11:36:50.824Z · LW(p) · GW(p)
After thinking about it some more, I think my point comes down to this: when we say that "messing with the brain messes with consciousness", how do we really know that? How can we infer that someone else is conscious?
We infer from behavior that someone is conscious, but can we infer that from the absence of behavior that there is an absence of consciousness? That's like saying A => B, therefore ~A => ~B.
And if we can't infer whether or not someone is unconscious, we have no data on what does or doesn't lead to unconsciousness.
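The logical point being made here can be checked mechanically. A minimal sketch (my illustration, not from the thread): enumerating all truth assignments shows that A => B does not entail ~A => ~B, the fallacy of denying the antecedent. Here read A as "behavior observed" and B as "conscious".

```python
def implies(p, q):
    # Material implication: p => q is false only when p is true and q is false.
    return (not p) or q

# Find assignments where A => B holds but ~A => ~B fails.
counterexamples = [
    (a, b)
    for a in (True, False)
    for b in (True, False)
    if implies(a, b) and not implies(not a, not b)
]
print(counterexamples)  # [(False, True)]
```

The single counterexample (A false, B true) is exactly the case at issue: no behavior observed, yet conscious.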
Replies from: Benito↑ comment by Ben Pace (Benito) · 2014-05-16T12:19:34.101Z · LW(p) · GW(p)
when we say that "messing with the brain messes with consciousness", how do we really know that?
Everything about someone's personality and mental functioning can be affected by affecting the brain. Take an example; people with a certain type of brain damage can stop recognising faces. Their qualia have been fundamentally messed up. They can see the world just fine, but if their mother comes to talk to them, until she says 'hello' and they recognise her voice, they don't know who she is.
There are so many cases like that.
Now do you really think that it's fifty fifty on whether their consciousness has been affected by the brain damage? I mean, what's the alternative? That they're recognising their mother and choosing not to act on it? That their consciousness has dissociated from action, and inside their head they're thinking 'Help me! Help me!' whilst their body says "I'm sorry, I don't recognise you"?
All the components of your mind are created by the different modules in your brain. When you affect the brain, it totally messes up your mind. When the brain stops, the totally best guess is that consciousness stops too.
↑ comment by Eugine_Nier · 2014-05-15T05:33:17.075Z · LW(p) · GW(p)
"We have a number of situations in which we can hear music in this world, e.g. hitting play on your laptop. However this is just the data we have; there are many situations we don't know about. Can we really say that when a song stops, it doesn't continue somewhere else? Personally I see no reason to believe this. At best we're talking fifty-fifty here."
The situation is not analogous since whether the song stops or continues where it can't be heard no one experiences it. Consciousness, on the other hand, is always experienced by the conscious entity.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2014-05-15T07:09:56.339Z · LW(p) · GW(p)
From an external point of view, the situation is analogous. Every time anyone has died, the evidence of their consciousness has ceased, and whenever a song has been turned off, the evidence of its being on has ceased. So to say that one has definitely ended, whilst the other is fifty-fifty, seems a mistake.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2014-05-16T04:28:58.965Z · LW(p) · GW(p)
The difference is that with consciousness there is also an internal point of view, with songs there isn't.
comment by Eneasz · 2014-05-13T22:15:52.158Z · LW(p) · GW(p)
Not much has been said cuz there ain't much to say about things that don't exist. Your mind is what your brain does. When the brain stops, so do you. This isn't even advanced rationality - it's reductionism 101. I believe there was an Intelligence Squared debate on it just a few days ago that rehashed all the same old ground, if you'd like a refresher. Here we go.
Giving a prior of .5 is ridiculous. Something for which you have no evidence and which breaks several known laws of physics should begin with a seriously tiny prior. You're being heavily influenced by social traditions.
Replies from: Eugine_Nier, adamzerner↑ comment by Eugine_Nier · 2014-05-14T00:52:20.596Z · LW(p) · GW(p)
This isn't even advanced rationality - it's reductionism 101.
Rather it's self-defeating reductionism, of the kind where you start by arguing that the only meaningful questions are based on experiences and anticipated experiences, and end by concluding that the concept of "experience" is meaningless.
↑ comment by Adam Zerner (adamzerner) · 2014-05-13T23:50:23.450Z · LW(p) · GW(p)
Your mind is what your brain does. When the brain stops, so do you. This isn't even advanced rationality - it's reductionism 101.
The brain seems to be something that leads to consciousness, but is it the only thing? To know that it's the only thing, we'd have to have data on all other sorts of preconditions and know what they lead to. We don't have this data. More specifically, we don't have the data that shows that the 3 possibilities I mentioned in the post don't occur.
For the record, I'm not some sort of conspiracy theorist and I'm not religious. And I'm not arguing that there is an "afterlife", just that we don't really know.
Replies from: shokwave↑ comment by shokwave · 2014-05-14T03:06:55.302Z · LW(p) · GW(p)
The brain seems to be something that leads to consciousness, but is it the only thing?
Maybe other things can "lead to" consciousness as well, but what makes you suspect that humans have redundant ways of generating consciousness? Brain damage empirically causes damage to consciousness, so that pretty clearly indicates that the brain is where we get our consciousness from.
If we had redundant ways of generating consciousness, we'd expect that brain damage would simply shift the consciousness generation role to our other redundant system, so there wouldn't be consciousness damage from brain damage (in the same way that damage to a car's engine wouldn't damage its ability to accelerate if it had redundant engines). But we don't see this.
we don't really know.
We know there's no afterlife. What work is "really know" doing in this sentence, that is capable of reversing what we know about the afterlife?
Replies from: adamzerner↑ comment by Adam Zerner (adamzerner) · 2014-05-14T03:59:51.285Z · LW(p) · GW(p)
Brain damage empirically causes damage to consciousness, so that pretty clearly indicates that the brain is where we get our consciousness from.
It causes damage to our ability to communicate our consciousness. For all we know, people with brain damage (and who are sleeping, unconscious, dead, etc.) may be conscious, but just unable to communicate it with us (or remember it when they wake up).
A concrete example might help. Consciousness could exist on some small quantum or string level, or other small level we haven't even discovered yet. It's possible that this level is undisturbed when we die, and that we continue to be conscious.
Replies from: Desrtopa, Desrtopa, shokwave↑ comment by Desrtopa · 2014-05-15T00:34:46.766Z · LW(p) · GW(p)
It causes damage to our ability to communicate our consciousness. For all we know, people with brain damage (and who are sleeping, unconscious, dead, etc.) may be conscious, but just unable to communicate it with us (or remember it when they wake up).
This isn't really the sort of thing that Shminux is probably talking about. People with many kinds of brain damage fully retain their ability to communicate, while various faculties for thought associated with those brain regions are affected. Or, on the other hand, regions associated with language can be damaged, leaving subjects impaired in their ability to communicate, while other faculties for thought appear to be largely intact. Brain damage does not simply amount to leaving our consciousness "on" or "off."
↑ comment by Desrtopa · 2014-06-12T00:55:56.841Z · LW(p) · GW(p)
To follow up on my earlier comment, I strongly recommend checking out the book "The Tale of the Dueling Neurosurgeons," by Sam Kean. It's unremittingly interesting and engaging, and in just about every chapter it discusses cases which pretty thoroughly disabuse one of the notion that brain-damaged individuals are failing to communicate an intact consciousness.
↑ comment by shokwave · 2014-05-19T08:47:32.684Z · LW(p) · GW(p)
An extreme form of brain damage might be destruction of the entire brain. I don't think that someone with their entire brain removed has consciousness but lacks the ability to communicate it; suggesting that consciousness continues after death seems to me to be pushing well beyond what we understand "consciousness" to refer to.
comment by DanielLC · 2014-05-13T22:34:43.153Z · LW(p) · GW(p)
Personal identity is an illusion. It's often a useful illusion, but when you're dealing with something outside the normal usage, it can go way off. It results in asking meaningless questions like "Where did we come from?", "Where do we go when we die?", and "Is the guy who came out of the teleporter the real me, or just my clone?".
I don't mean to say that suggesting the existence of an afterlife is meaningless. I just mean that we're using a flawed model that implicitly assumes an afterlife, when there's no reason to believe in one. You assume that you continue to be conscious, when in fact there is a conscious entity at a given point in space and time, and there may or may not be another further ahead in time. You tried to be more skeptical about it, but you're still giving it even odds. Since we live in an ordered universe, it's clear that almost all possible beings don't exist, or exist less in some sense or something like that. It's not even odds.
There may well be conscious beings beyond what we normally interact with. The issue here is that they aren't ex-humans. We know that a human brain is an integral part of the human mind. We know what individual pieces do. If there is some part of the human mind that survives death, it's not going to work the same way after we die. The mind might be modular enough that it can exist with some low-level animalistic intelligence, but more likely it would break down completely. It's also pretty unlikely that there is such a thing in the first place. There are a lot of problems that would be associated with an incorporeal organ. For example, how do you hold it in place?
Replies from: None, Richard_Kennaway, Richard_Kennaway↑ comment by Richard_Kennaway · 2014-05-14T09:16:02.529Z · LW(p) · GW(p)
It's often a useful illusion, but when you're dealing with something outside the normal usage, it can go way off. It results in asking meaningless questions like "Where did we come from?", "Where do we go when we die?", and "Is the guy who came out of the teleporter the real me, or just my clone?".
These are not meaningless questions. The materialist answers to the first two are "we come into existence as our physical vessel develops, and cease to exist when that physical vessel is destroyed." Non-materialists of various sorts may say "we existed before we entered a new body and depart from that body when it dies". Materialist and non-materialist answers to the third depend on the technology of teleportation. Since teleportation is fictional, you can make up any sort of technology you like to get whatever answer you want.
↑ comment by Richard_Kennaway · 2014-05-14T07:29:05.605Z · LW(p) · GW(p)
Personal identity is an illusion.
What experiences the illusion?
Replies from: DanielLC↑ comment by DanielLC · 2014-05-14T18:52:57.512Z · LW(p) · GW(p)
I'm not saying consciousness is an illusion. I have no idea what that thing is.
You, that is to say, the conscious being at a specific place and time (not that you exist at just one specific point, but it's my way of specifying "that one") has qualia of a memory of another, earlier conscious being. This is not to be confused with having the memory of the qualia of earlier consciousness. Your existence is the result of an earlier being, not an extension of it.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-05-14T22:15:33.982Z · LW(p) · GW(p)
Your existence is the result of an earlier being, not an extension of it.
This is a distinction without a difference. Decomposition into parts, temporal or spatial, does not demonstrate nonexistence of the whole.
comment by JQuinton · 2014-05-14T13:40:09.644Z · LW(p) · GW(p)
And finally, since we have no data, what can we say about the likelihood of our consciousness returning/remaining after we die? I would say the chances are 50/50. For something you have no data on, any outcome is equally likely
I don't think that's true. It's not that we have no data; we still have a prior probability, and with it all of the background knowledge that goes into that prior, which functions as data. Theories aren't argued in isolation!
Think about how many things need to be true in order for consciousness to "return" after we die; if any of those things are false then the entire model fails (this is why Occam's Razor is a useful heuristic). Where does it go after we die? How does it get there? What contains it? What is the energy source for this non-bodied consciousness? How does this fit in with our current understanding of physics, like the laws of thermodynamics (most people who posit non-bodied consciousness unwittingly propose that consciousness is a perpetual motion machine)? Is it even accurate to call it "consciousness" if it cannot receive any sensory input? In other words, we need ears to hear, eyes to see, etc., but this consciousness would be without the brain structure necessary to process vibrations in the air into what we know as "sound". Is this model of consciousness specific to humans, or does it extend to any of the higher primates (or even other intelligent species like dolphins or elephants), and why?
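The conjunction argument above can be made concrete with a toy calculation. The plausibilities below are made-up placeholders, not estimates; the point is only that a model requiring many claims to hold simultaneously inherits (at best, assuming independence) the product of their probabilities.

```python
from math import prod

# Hypothetical plausibilities for claims the non-bodied-consciousness model
# needs to be true at once (placeholder numbers for illustration only).
claims = {
    "consciousness can exist without a brain": 0.5,
    "it 'goes' somewhere at death": 0.5,
    "something contains it there": 0.5,
    "it has an energy source compatible with thermodynamics": 0.5,
    "it counts as consciousness without sensory input": 0.5,
}

# Treating the claims as independent, the model's probability is their product.
joint = prod(claims.values())
print(joint)  # 0.03125
```

Even granting each claim generous even odds, the conjunction lands far below 50/50, which is the force of the Occam's Razor point.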
On the other hand, we don't need to suggest other laws of physics / theory of evolution for our current model of consciousness, and it has a satisfactory answer for all of the questions I asked above. So Occam's Razor is the mode of thought until we actually get more evidence in the non-bodied consciousness direction.
comment by private_messaging · 2014-05-14T14:22:14.141Z · LW(p) · GW(p)
There's a lot of very bizarre death-related stuff some people here take seriously: future simulations of you recreated from historical data, "quantum immortality", big universe with copies of you existing very far away, us living in a simulation, and so on and so forth. It's just that the atheists who believe in such can't call that "afterlife" otherwise they wouldn't be atheists.
edit: I think there's also a very serious case of inconsistency. If you ask someone about "afterlife", they're pretty sure there's no afterlife, but if you come up with an elaborate narrative, for example one where the future AI uses all the digital data and backtracks a simulation of the world to recreate you, so when you die you just wake up in that AI's simulation - they're no longer nearly as sure.
Replies from: Furcas↑ comment by Furcas · 2014-05-15T04:14:57.590Z · LW(p) · GW(p)
There's no inconsistency, because we take the word 'afterlife' to mean what 99% of humanity means by it, which isn't, y'know, really compatible with the pattern theory of identity.
Replies from: private_messaging↑ comment by private_messaging · 2014-05-15T07:32:03.812Z · LW(p) · GW(p)
"Afterlife" encompasses a very wide variety of beliefs, especially if you are speaking of 99% of humanity. That's sort of like saying "trees" is incompatible with "pines".
Replies from: Furcas, drethelin↑ comment by Furcas · 2014-05-15T14:44:36.255Z · LW(p) · GW(p)
No, it's like not using the word 'God' to describe a super-intelligent AI. The two concepts have some things in common, but a physical, man-made computer program just isn't what people mean when they say "God". Likewise, their brain getting reconstructed some decades after their death isn't what people mean when they say "afterlife".
Replies from: Lumifer, ChristianKl, private_messaging↑ comment by ChristianKl · 2014-05-29T15:01:55.319Z · LW(p) · GW(p)
I think God would fit in many places where people on LW say Omega.
↑ comment by private_messaging · 2014-05-16T06:53:14.035Z · LW(p) · GW(p)
When people say "afterlife", what they mean is living after they die. It's a word with a fairly general meaning. (By the way, inability to generalize or use general concepts is a big autism stereotype)
comment by plex (ete) · 2014-05-14T02:56:39.176Z · LW(p) · GW(p)
People in the comments have just taken it as a given that consciousness resides solely in the brain without explaining why they think this. My point in this post is that I don't see why we have reason to reject the 3 possibilities above. If you reject the idea that consciousness could reside outside of the brain, please explain why.
Simply put: Occam's Razor, and lack of evidence.
More thoroughly: We have at best very weak evidence for an afterlife (non-verifiable near-death experiences, religious texts, and it being a generally common belief/hunch that it may exist), and plausible explanations for why each of those weak sources of evidence would exist without a real afterlife (psychological effects of almost dying and the general unreliability of minds in extreme situations, the fact that an afterlife makes a religious meme more powerful, and flat out hope/unwillingness to face the end, respectively). All descriptions of a true afterlife conflict massively with the testable, prediction-giving knowledge that science has found, and trying to make the two work together, if it is even possible, would give a much less simple theory.
comment by gjm · 2014-05-13T23:57:12.092Z · LW(p) · GW(p)
"Returning" and "remaining" (as you put it in your last bullet point) are very different things. The latter seems to require that we are, or have, something like souls that, despite their immateriality and apparent complete inaccessibility to any sort of scientific investigation, are the true bearers of our identity and consciousness. This is very, very hard to square with (e.g.) the copious evidence that "the mind is what the brain does" and I think it's reasonable to regard it as pretty much refuted.
"Returning" is another kettle of fish entirely. It covers, e.g., (1) resurrection of the sort envisaged by religions like Christianity, (2) later reconstruction by some sort of superintelligent agent, and arguably (3) cryonics if that turns out to work. Not to mention other exotic options like (4) we are in a simulation and whoever's running it wants to resurrect us. Note that for some of these options our putative resurrection takes place entirely outside our world. Evidence for or against is going to have to be indirect (e.g., some guy turns up, works a sufficiently dramatic set of miracles, and explains that he is an emissary of the gods, who by the way are going to resurrect everyone whose surname begins with a vowel; or many things of this kind that might have happened fail to happen, constituting evidence against resurrection).
Mostly, though, the reason to reject resurrection (unless you happen to think you are in possession of some kind of divine revelation or something) is Occam's razor. Yeah, it might turn out that we're in the Matrix and the computers running it are going to give us all second chances, but that's a much more complicated possibility than that our world is "the" world. And no, the odds aren't 50/50, for the reasons others have given and linked to; you can't make everything equally likely on pain of inconsistency, and in-some-sense-on-average more complicated things must be less probable.
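The "more complicated things must be less probable" point can be sketched numerically. If priors are assigned by description length (a Solomonoff-style complexity prior; the bit counts below are made up purely for illustration), a hypothesis that adds machinery to the base world model necessarily gets a longer description and a smaller prior:

```python
# Toy Occam prior: P(h) proportional to 2 ** (-description_length).
# Description lengths (in bits) are invented purely for illustration.
hypotheses = {
    "our world is the world": 100,
    "our world, plus a Matrix that resurrects everyone": 150,
}

weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
total = sum(weights.values())
priors = {h: w / total for h, w in weights.items()}

# The hypothesis with extra machinery is 2**50 times less probable a priori.
ratio = (priors["our world is the world"]
         / priors["our world, plus a Matrix that resurrects everyone"])
print(ratio == 2.0 ** 50)  # True
```

Whatever bit counts you pick, the extended hypothesis can only tie or lose, never win, which is why the odds cannot be 50/50.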
I think you'll probably find a strong (but not unanimous) consensus among LW-rationalists that: consciousness is a property of physical systems and doesn't involve immaterial souls (unless, e.g., you define those in some way reducible to properties of physical systems); it, or something "equally good", could exist on other substrates besides ours; "survival" (consciousness remaining after we die) is monstrously unlikely; "resurrection" (consciousness returning somehow after we die) is possible in principle and may some day happen to some people via technological wonders like "uploading"; there is no good reason to expect "resurrection" to be on offer to the human race at large and it should be regarded as very unlikely for Occamish reasons.
(I would expect most of the dissent to come from people who, despite being LW-rationalists, are adherents of a religion that says there's some kind of afterlife.)
[EDITED to add: there might also be a substantial fraction dissenting on the grounds that some of the key notions are ill-defined; for instance, that we don't really know what "conscious" means beyond the fact that one particular bunch of things -- namely, us -- seem to have it; or that personal identity is fuzzy and in putative cases of "resurrection" there really isn't a fact of the matter as to whether the "before" and "after" are actually The Same Person.]
Replies from: DanielLC, Eugine_Nier, adamzerner
↑ comment by DanielLC · 2014-05-14T01:16:17.090Z · LW(p) · GW(p)
I don't like to say we don't have souls. We do have bearers of identity and consciousness. They're just not immortal, and they're made of atoms. Or more accurately, they're made of the data that the arrangement of atoms represents.
Replies from: DefectiveAlgorithm
↑ comment by DefectiveAlgorithm · 2014-05-16T06:27:30.704Z · LW(p) · GW(p)
Is this any more than a semantic quibble?
Replies from: DanielLC
↑ comment by Eugine_Nier · 2014-05-14T01:18:10.556Z · LW(p) · GW(p)
there might also be a substantial fraction dissenting on the grounds that some of the key notions are ill-defined; for instance, that we don't really know what "conscious" means beyond the fact that one particular bunch of things -- namely, us -- seem to have it; or that personal identity is fuzzy and in putative cases of "resurrection" there really isn't a fact of the matter as to whether the "before" and "after" are actually The Same Person.
I'm not sure I agree. "Consciousness" strikes me as being as well-defined as concepts like "anticipation" and especially "experience" that are used in the foundations of empiricism.
↑ comment by Adam Zerner (adamzerner) · 2014-05-14T00:38:45.647Z · LW(p) · GW(p)
The latter seems to require that we are, or have, something like souls that, despite their immateriality and apparent complete inaccessibility to any sort of scientific investigation, are the true bearers of our identity and consciousness.
That's not what I mean by "remaining". Consciousness could exist on some small quantum or string level, or other small level we haven't even discovered yet. It's possible that this level is undisturbed when we die, and that we continue to be conscious. And it's possible that as we continue to be conscious, we can't communicate it to living people.
Replies from: polymathwannabe
↑ comment by polymathwannabe · 2014-05-14T00:52:22.912Z · LW(p) · GW(p)
Consciousness could exist on some small quantum [...] level...
Absolutely not. Consciousness is not that basic, and it definitely doesn't belong in the fundamental structure of reality. You're making a huge leap that ignores several levels of organization (in order: atomic, chemical, biological, computational). Consciousness depends on the pattern of neurons communicating inside our heads; examining a single neuron nucleus in the microscope (or taking one of its carbon atoms into a collider) will miss consciousness entirely because you've set the magnifying glass too close to get the pattern.
Replies from: adamzerner
↑ comment by Adam Zerner (adamzerner) · 2014-05-14T00:57:27.009Z · LW(p) · GW(p)
I'm familiar with the current scientific literature on the neural correlates of consciousness (I was a neuroscience major and did my senior thesis on it). But these are correlates. We indeed don't know of any correlates on a level smaller than the neuronal level, but that doesn't mean they don't exist.
Replies from: polymathwannabe
↑ comment by polymathwannabe · 2014-05-14T01:01:10.417Z · LW(p) · GW(p)
Elsewhere on this thread you also said,
we don't have the data that shows that the 3 possibilities I mentioned in the post don't occur.
Are you seriously going to posit a belief in the afterlife just because neurology can't prove a negative?
Replies from: adamzerner
↑ comment by Adam Zerner (adamzerner) · 2014-05-14T01:04:00.953Z · LW(p) · GW(p)
Are you seriously going to posit a belief in the afterlife just because neurology can't prove a negative?
No, I'm just saying that it's possible, not that I believe in it.
Replies from: polymathwannabe
↑ comment by polymathwannabe · 2014-05-14T01:09:02.885Z · LW(p) · GW(p)
Not with quantum consciousness. That's not even in "possible" territory.
comment by someonewrongonthenet · 2014-05-19T07:50:47.450Z · LW(p) · GW(p)
It is a simple case of parsimony.
The brain and the physical world in general are sufficient to explain consciousness, so any assumptions beyond that get a complexity penalty.
And...that's the only reason. All your "possibilities" are indeed possible... but improbable. I don't reject the idea that consciousness could theoretically reside outside the brain, but it's much more parsimonious to assume it does not.
I've read a fair amount on Less Wrong
If so, I take it you already understand about parsimony and its importance for hypothesizing, since there has been a good deal of discussion about that. Additionally, as a neuroscience major you have sufficient background to at least agree that consciousness could theoretically be an entirely physical phenomenon. You've got all the background information you need to make the required inferences.
So for my own curiosity, do let me know: Was reading above sufficient to cause you to alter your belief, and do you now know that consciousness probably (more probably than "50/50") is an entirely physical phenomenon which centers around the brain?
Edit: Reading your other comments, you still don't get parsimony.
Hypothesis 1: I have only two left toes.
Hypothesis 2: I have only two left toes and own a bunny rabbit hat.
1 is more parsimonious than 2. It's not 50/50.
Hypothesis 1: The brain is enough to explain the evidence of consciousness
Hypothesis 2: The brain is enough to explain the evidence of consciousness, but there are additional things which are also conscious, only we can't observe them.
1 is more parsimonious than 2. It's not 50/50.
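In probability terms, parsimony here is just conjunction: every extra independent claim multiplies in a factor below one, so a hypothesis with more conjuncts can never be more probable than the same hypothesis with fewer. A toy sketch (the 0.9 per-claim probability is an arbitrary illustrative assumption):

```python
# Toy illustration of the conjunction penalty behind parsimony.
# Assume each independent claim a_i holds with probability 0.9
# (arbitrary; any value below 1 gives the same ordering).
p_claim = 0.9

p_h1 = p_claim ** 5   # H1: claims a1..a5 ("brains are conscious")
p_h2 = p_claim ** 25  # H2: claims a1..a25 (H1 plus extra consciousnesses)

assert p_h2 < p_h1          # more conjuncts, strictly less probable
print(round(p_h1, 4))       # 0.5905
print(round(p_h2, 4))       # 0.0718
```

Even without knowing the exact per-claim probabilities, H2 contains every claim H1 does plus more, so it cannot come out at 50/50 against H1.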
Replies from: adamzerner, Eugine_Nier
↑ comment by Adam Zerner (adamzerner) · 2014-05-19T14:26:47.281Z · LW(p) · GW(p)
This is my position - http://lesswrong.com/lw/k8a/what_do_rationalists_think_about_the_afterlife/awx0.
Replies from: someonewrongonthenet
↑ comment by someonewrongonthenet · 2014-05-19T14:59:48.484Z · LW(p) · GW(p)
that's just a link to this thread.
Replies from: adamzerner
↑ comment by Adam Zerner (adamzerner) · 2014-05-19T15:20:47.611Z · LW(p) · GW(p)
Oh sorry, I thought it was a permalink. This link works -
http://lesswrong.com/lw/k8a/what_do_rationalists_think_about_the_afterlife/awx0
*When I put a period after the link, it included the period in the URL; that's why it wasn't working.
Replies from: someonewrongonthenet
↑ comment by someonewrongonthenet · 2014-05-19T17:36:39.744Z · LW(p) · GW(p)
You keep coming back to whether consciousness could be more than just the brain, whether there could be something else.
Yes, it could. It's just extremely un-parsimonious to postulate that it does.
If you agree that consciousness could be explained by the brain alone, then parsimony should lead you to agree that consciousness is probably explained by the brain alone. If you don't agree that consciousness could be explained by the brain alone, then that's a longer problem.
Every time you catch yourself saying "but what if X" or "but how do we know X for sure", consider parsimony.
Replies from: adamzerner
↑ comment by Adam Zerner (adamzerner) · 2014-05-19T19:52:57.808Z · LW(p) · GW(p)
From what I understand, less parsimonious options are less likely than the simpler option because of compounded probability. I.e. the less parsimonious option requires a1..a25 to be true, while the simpler option just requires a1..a5 to be true.
My point is that I don't see reason to think that we have any information about the probabilities. How can we say that "a1..a500 needs to be true in order for consciousness to remain after brain destruction"? What observations have we made that would lead us to think that? My feeling is that we've never actually made an observation that says x => unconsciousness, because we've never actually been able to infer a state of unconsciousness.
Note: Sorry if I'm just not understanding the point about parsimony. I know that everyone seems to disagree with me and is making that point, so I've been trying to understand it and think about how it disproves my current belief, but for the reasons I explain above I don't think the argument that "parsimony => it's unlikely that consciousness remains" is valid. That argument requires information about what causes unconsciousness that I don't think we have.
Replies from: someonewrongonthenet
↑ comment by someonewrongonthenet · 2014-05-20T04:24:54.800Z · LW(p) · GW(p)
We have some data on what preconditions seem to produce consciousness (ie. neuronal firing). However, this is just data on the preconditions that seem to produce consciousness that can/do communicate/demonstrate its consciousness to us.
So you agree that brains are sufficient to explain consciousness. This consists of a1...a5.
Let Hypothesis 1 be that brains are conscious.
Then, as Hypothesis 2, you have the other conscious beings (a6...a25). Note that H2 also believes that brains are conscious (a1...a5). So you have...
H1: a1...a5: "Brains are conscious."
H2: a1...a25: "Brains are conscious, and there are other types of consciousness outside our perception."
H3: a1...a5, b1...b5: "Brains are conscious, and there's a teapot on Andromeda."
H4: a1...a5, c1...c5: "Brains are conscious, and there is no such thing as a consciousness outside our perception."
H5: a1...a5, d1...d5: "Brains are conscious, and there are no teapots in Andromeda."
H6: a1...a5, e1...e5: "Brains are conscious, and there is no water in Andromeda."
H7: a1...a5, f1...f5: "Brains are conscious, and there is water in Andromeda."
In order of parsimony, it's obvious that H1 > H2, H3, H4, H5, H6, H7, right?
Right, but your real question is: is H2 more parsimonious than H4. And you're right, practically there is no mathematically rigorous way to get the answer.
The same can be said of H3 vs H5, but you've got a strong intuition that H5 is more likely, right? Can you prove it rigorously? No, but you've got a pretty strong sense that there are no teapots to be found on Andromeda.
Similarly, between H6 and H7, you've got a fair sense that there is probably water somewhere on Andromeda. In fact, it would be more complicated to describe some circumstance by which Andromeda didn't have any water.
So is consciousness more like the teapot, or more like the water molecule? Well, your description of the universe gets simpler when you don't need to explain the teapot, whereas it gets more complex when you have to explain why there is no water in Andromeda.
Consider this: like all humans, you have instinctive animist tendencies. You are emotionally biased to favor H2 as something that seems possible, because you emotionally think of consciousness as a simple, basic element of reality, like water. I say consciousness is more like a teapot than it is like water.
Does Math make special exceptions for consciousnesses that it does not make for teapots? Think about it...you can imagine water forming on a star somewhere, it's fairly simple. How do you envision these separate consciousnesses forming? All possible ways in which these extra-physical consciousnesses could form are really complex. Your description of the universe, once you add these extra consciousnesses in, is going to get larger, not smaller.
You don't automatically find yourself scrambling to explain why there might not be afterlife...rather, you find yourself searching for an explanation for why there might be one. And that's because afterlives make the description of the universe larger and more complex and therefore require you to generate a story.
I admit, this isn't a proof, and you're not going to get a proof. But it's a really strong intuition.
...I wonder if it would help, if I came up with an unrelated idea that could only be rejected using intuition-parsimony and asked you to refute it. You'll instinctively call on parsimony, and then you can apply the same methods to the afterlife hypothesis.
Replies from: adamzerner
↑ comment by Adam Zerner (adamzerner) · 2014-05-20T14:41:17.642Z · LW(p) · GW(p)
My point is that I don't see reason to think that we have any information about the probabilities. How can we say that "a1..a500 needs to be true in order for consciousness to remain after brain destruction"? What observations have we made that would lead us to think that? My feeling is that we've never actually made an observation that says x => unconsciousness, because we've never actually been able to infer a state of unconsciousness.
Right, but your real question is: is H2 more parsimonious than H4.
So you say that it can't be proven, but you have a "really strong intuition" that it is. Why? What observations have you made about what causes unconsciousness that would lead you to believe that it involves parsimony? And how have you been able to infer unconsciousness?
Replies from: someonewrongonthenet
↑ comment by someonewrongonthenet · 2014-05-20T18:21:30.530Z · LW(p) · GW(p)
Because consciousness (regardless of whether it's based in something physical and observable, like the brain) by its inherent nature involves complex information processing. Even if you separate consciousness from the brain and put it in the abstract-unobservable-spirit-place-thing, it's still a mathematically defined structure with a lot of complexity. When the brain is right in front of you, you can point to it and say, "there! that's a complex information processing structure!" When the brain is no longer in front of you, you necessarily have to posit an un-observable complex information processing structure in spirit land.
Just replace brain with any other object, and you'll get the same intuition. How do you know that some sort of time-keeping doesn't continue in an unobserved location, even after the physical clock is destroyed? How do you know that the teapot-ness doesn't continue on somewhere after the actual teapot is destroyed? And what about your past selves? They are all now destroyed. Are they all continuing on somewhere?
Replies from: adamzerner
↑ comment by Adam Zerner (adamzerner) · 2014-05-21T02:35:18.359Z · LW(p) · GW(p)
by its inherent nature involves complex information processing
Interesting. There does seem to be evidence that you need a complex structure with complex information processing to provide a variety of conscious experiences. The evidence for this I think is just that outcomes have independent causes (on the smallest levels). You'd need a complex structure to take in all the different inputs and produce the corresponding conscious experience as the output. A simple structure can't do that.
When the brain is no longer in front of you, you necessarily have to posit an un-observable complex information processing structure in spirit land.
I wouldn't say that. Right now, we have some data on what parts of the brain need to be active for consciousness, but we can't measure things at a level of precision below single neurons. What if something is happening on a quantum level that underlies consciousness? What if the thing that underlies consciousness is present at a level below the quantum level, like something beyond our current understanding of physics? This is quite possible, and I don't think it's ridiculous to posit that this level may go undisturbed when the brain is destroyed.
So, as far as retaining "complex consciousness" (able to experience a variety of things) after the brain is destroyed, I see two possibilities:
The thing that causes the consciousness you experience when you're alive is destroyed, but some new structure is created and provides you with complex consciousness.
The thing that is currently causing your complex consciousness remains intact, and thus continues to provide you with that complex consciousness.
I agree that 1) is unlikely for reasons of parsimony. It's 2) that I'm questioning. Why is 2) more likely to be false than true?
Answering that question myself, I actually think that it is. If I knew more about physics I'd have a stronger opinion here, but I figure that when you destroy the brain on macro/microscopic level, it's unlikely for the nano/quantum/small level that I'm saying consciousness might be on to go undisturbed.
So back to my original objection - "What have we observed that would tell us that x => unconsciousness". Answer:
1) We've observed that the world is governed by cause and effect. Consider a given outcome. You can't have two different physical states lead to that outcome. (I'm not explaining this well, but hopefully you know what I mean).
2) We've observed that consciousness involves a large variety of different outcomes (seeing red, seeing blue, feeling hot, feeling cold...). From 1) we can infer that there must be a certain physical state that leads to each of these outcomes.
3) We've observed that the brain is a complex structure that is correlated with consciousness. I don't think we know the cause. Maybe it's on the neuronal level. I think it probably has to be on a smaller level. But regardless, it seems likely that when you destroy the brain, you'd destroy whatever it is that's producing consciousness, regardless of what level it's on.
4) We've observed parsimony. So it's unlikely that whatever causes consciousness will be regenerated out of the blue once it's destroyed.
So to be explicit, I think it's unlikely that consciousness remains after the brain is destroyed.
But one thing might remain. There might be a sort of basic/flat level of consciousness, as opposed to "nothingness", and as opposed to the idea that consciousness has to involve our complex consciousness of experiencing all the various things we experience. There may be a basic level where we only sort of experience one thing.
If this level exists, how do we know that destroying the brain interferes with it? What do you think?
That's all I've got for now. I probably haven't expressed these points too clearly, as I'm just coming up with a lot of them and haven't had time to analyze them enough. Please let me know what you think, and, if you could, sum it up and express it a little more clearly than I have. Thanks for the conversation!
Replies from: someonewrongonthenet
↑ comment by someonewrongonthenet · 2014-05-22T04:50:11.130Z · LW(p) · GW(p)
There might be a sort of basic/flat level of consciousness
That position is called pan-psychism. I don't think pan-psychism violates the rules of parsimony, but I also think that once you find yourself asking how quantum vacuums or baseball bats or water molecules subjectively feel, you need to back up a bit.
If this level exists, how do we know that destroying the brain interferes with it? What do you think?
Personally? I think that my qualia is that which separates reality from all the other hypothetical mathematical structures, and just leave it at that (so I evaluate consciousness from a strictly information-processing standpoint). Well, that's the short version; I'd probably need to write a bit more for that to make sense.
↑ comment by Eugine_Nier · 2014-05-20T02:28:54.770Z · LW(p) · GW(p)
The brain and the physical world in general are sufficient to explain consciousness
The problem is that they aren't, as Richard explains here.
Replies from: someonewrongonthenet
↑ comment by someonewrongonthenet · 2014-05-20T03:57:46.353Z · LW(p) · GW(p)
1) In context, we're talking about consciousness as in beings which are behaviorally aware, not about subjective-experience qualia, right?
2) Richard says he doesn't know, not that they aren't.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2014-05-20T06:36:36.027Z · LW(p) · GW(p)
The brain and the physical world in general are sufficient to explain consciousness
The problem is that they aren't, as Richard explains here.
1) In context, we're talking about consciousness as in beings which are behaviorally aware, not about subjective-experience qualia, right?
I don't understand the distinction you're making there. As I use the words, consciousness is awareness, which is experience. These are just different ways of pointing at the same thing.
2) Richard says he doesn't know, not that they aren't.
The problem is worse than merely not knowing, in the sense in which we do not know a cure for cancer. We can imagine that eventually, we will discover enough about the mechanism of cancer to devise an intervention as effective as penicillin for bacterial infection. But we cannot see, in terms of what we understand of physics and the brain, how there could be any such thing as consciousness, despite people giving the matter a great deal of thought and experimental work. That's a strong argument that they are not sufficient to explain it.
There's a tautologous sense in which they are sufficient, by taking "the physical world" to include whatever the real explanation turns out to be, but in a discussion of this issue "the physical world" means the world as understood in terms of the sort of physical theories we currently have.
On the other hand, the very close observable connection between brain states and consciousness is a strong argument that the brain and the physical world are sufficient to explain consciousness.
In short, we are faced with a gigantic problem:
1. We are conscious.
2. Consciousness is very closely connected with the brain.
3. We cannot see how there could be any such phenomenon.
4. Go to 1.
A lot of discussion on the subject consists of people writing their conclusion in different words and using it as an argument for their conclusion.
Replies from: someonewrongonthenet
↑ comment by someonewrongonthenet · 2014-05-20T18:00:18.595Z · LW(p) · GW(p)
I don't understand the distinction you're making there
"Behaviorally aware" is a term I'm using to talk about consciousness without invoking the "hard problem of consciousness".
The brain is a structure which takes various inputs, does a bunch of operations on them, and produces various outputs. We can see how that works, and to some extent we can make machines that do the same.
When someone's "unconscious", it means they are no longer responding to the environment (taking inputs, producing outputs) appropriately. A "conscious" being is responding appropriately to the environment. Its various internal parts are interacting with each other in an organized way, and they are also interacting with the external world in an organized way. Behaviorally speaking, they are aware and reacting.
None of the above has yet involved qualia, subjective experience, or the hard problem of consciousness. We are using the word "conscious" to mean two separate things - "aware-in-the-information-processing-sense" and "subjectively-experiencing-qualia".
So you can talk about whether or not there are conscious (information-processing) structures which continue on after we die without tackling subjective experience or qualia. And when you talk about these structures, you still have to use parsimony...just because the information-processing structure is no longer in the "observable-physical world" or whatever doesn't mean it's not still a complex, rule-following mathematical structure.
Which is why I say, in the context of this conversation, there is no need to invoke The Hard Problem.
A lot of discussion on the subject consists of people writing their conclusion in different words and using it as an argument for their conclusion.
I think this is because this is primarily a matter of definition. The "answer" to the "hard problem" is decidedly not empirical and purely philosophical. All non-empirical statements are tautological in nature. Arguments aiming to dissolve the question rely upon altering definitions and are necessarily tautological.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2014-05-20T20:26:28.537Z · LW(p) · GW(p)
"Behaviorally aware" is a term I'm using to talk about consciousness without invoking the "hard problem of consciousness".
The brain is a structure which takes various inputs, does a bunch of operations on them, and produces various outputs. We can see how that works, and to some extent we can make machines that do the same.
Why call this "consciousness"? Pretty much any machine that we make takes various inputs, does a bunch of operations to them, and produces various outputs. Is my computer (my real computer, not an imaginary one programmed with an imaginary AI) "behaviourally aware"? It even runs tests on itself and reports the results.
I don't think it useful to broaden the definition of "conscious" to include things that are clearly not "conscious" (in the meaning it normally has). This doesn't let you talk about consciousness without invoking the "hard problem of consciousness". It lets you talk about something completely different that you are calling by the same name, without invoking the "hard problem of consciousness".
A lot of discussion on the subject consists of people writing their conclusion in different words and using it as an argument for their conclusion.
I think this is because this is primarily a matter of definition. The "answer" to the "hard problem" is decidedly not empirical and purely philosophical.
The problem is clearly an empirical one. We are aware, seek an explanation, but have not found one.
Replies from: someonewrongonthenet
↑ comment by someonewrongonthenet · 2014-05-21T01:21:54.744Z · LW(p) · GW(p)
Is my computer (my real computer, not an imaginary one programmed with an imaginary AI) "behaviourally aware"? It even runs tests on itself and reports the results.
Yes, actually? To the extent that a worm is aware.
We don't normally use the word "aware" to describe it, but what it's doing seems very, very close to the things we do describe with the word awareness.
The problem is clearly an empirical one.
Then I've misunderstood your claim. The Hard Problem of Consciousness as popularly understood is that even if we understand all the mechanisms of thought to the point that we can construct brains ourselves, it won't explain the subjective experience we have. We can understand the universe with mathematical precision down to the last photon and it still wouldn't explain it. Seems like a non-empirical question to me. That's why they call it subjective experience.
Replies from: Richard_Kennaway, Eugine_Nier
↑ comment by Richard_Kennaway · 2014-05-21T13:52:40.052Z · LW(p) · GW(p)
Is my computer (my real computer, not an imaginary one programmed with an imaginary AI) "behaviourally aware"? It even runs tests on itself and reports the results.
Yes, actually? To the extent that a worm is aware.
Is a worm aware? I don't know. Is my computer aware? I see no reason to think so, not in the sense of "aware" that we're discussing. Is a thermostat aware? That too has input and output. Is a rock aware? If the answer to all of these is "yes", then that is not a useful sense of "aware". It's just another route for the mercury blob of thought to escape the finger of logic.
In other contexts, I have no problem with talking about a robot (a real robot really existing in the real world right now, such as Google's driverless cars) as being "aware" of something, or for that matter my computer running self-tests, but I would also know that I was not imputing consciousness to the devices. If we're going to talk about consciousness, that is what we must talk about, instead of broadening the word beyond what we are talking about and using the same word to talk about some other thing instead.
The Hard Problem of Consciousness as popularly understood is that even if we understand all the mechanisms of thought to the point that we can construct brains ourselves, it won't explain the subjective experience we have.
I would say that's one particular position, or class of positions, on the Hard Problem. The other class of positions are those that hold that if we understood all etc. etc. then it would explain the subjective experience we have.
The Hard Problem, to me, is that both of these positions are both ineluctable and untenable.
That's why they call it subjective experience.
That we have subjective experience is an objective fact.
Replies from: someonewrongonthenet
↑ comment by someonewrongonthenet · 2014-05-22T05:04:55.494Z · LW(p) · GW(p)
Is X aware
Is there no middle ground between "aware" and "not aware" then? This is like asking "Is a boulder a chair?", "Is a tree stump a chair?", "Is a stool a chair?" Words are fuzzy like that.
That we have subjective experience is an objective fact.
Rather, that you have it is an objective fact to you. The empirical questions involved here are applied to other minds, not your own.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2014-05-23T14:56:58.142Z · LW(p) · GW(p)
Is there no middle ground between "aware" and "not aware" then? This is like asking "Is a boulder a chair?", "is a tree stump a chair?" "Is a stool a chair?" Words are fuzzy like that.
Yes, there's a whole range. Maybe a worm has a microconsciousness, or a nanoconsciousness, or maybe it has none at all, relative to a human. Or maybe it's like asking about the temperature of a cluster of a few atoms. The concept is indeed fuzzy.
That we have subjective experience is an objective fact.
Rather, that you have it is an objective fact to you. The empirical questions involved here are applied to other minds, not your own.
Other people seem to be the same sorts of thing as me, and they report awareness of things. That's good enough for me to believe them to have consciousness. When robots get good enough to not sound like spam when they pretend to be people, then that criterion would have to be reexamined.
As Scott Aaronson points out in his discussion of IIT, experiences of oneself and intuitions about other creatures based on their behaviour are all we have to go on at present. If an explanation of consciousness doesn't more or less match up to those intuitions, it's a problem for the explanation, not the intuitions.
↑ comment by Eugine_Nier · 2014-05-21T02:26:01.731Z · LW(p) · GW(p)
The Hard Problem of Consciousness as popularly understood is that even if we understand all the mechanisms of thought to the point that we can construct brains ourselves, it won't explain the subjective experience we have. We can understand the universe with mathematical precision down to the last photon and it still wouldn't explain it. Seems like a non-empirical question to me.
The common meaning of "empirical" is something based on experience, so it seems that the Hard Problem of Consciousness fits that definition.
Replies from: someonewrongonthenet↑ comment by someonewrongonthenet · 2014-05-22T04:36:11.800Z · LW(p) · GW(p)
No? There is no subjective experience I can have that would distinguish you from a P-zombie (under the, in my view wrong, assumption that the hard problem even makes sense and that there is a meaningful distinction to be made there).
comment by DefectiveAlgorithm · 2014-05-14T12:54:43.486Z · LW(p) · GW(p)
While I do still find myself quite uncertain about the concept of 'quantum immortality', not to mention the even stronger implications of certain multiverse theories, these don't seem to be the kind of thing that you're talking about. I submit that 'there is an extant structure not found within our best current models of reality isomorphic to a very specific (and complex) type of computation on a very specific (and complex) set of data (ie your memories and anything else that comprises your 'identity')' is not a simple proposition.
comment by polymathwannabe · 2014-05-14T00:15:24.129Z · LW(p) · GW(p)
The OP is terribly confused.
"What happens to our consciousness when we die?"
The question makes no sense. The irreversible disruption of metabolic chemistry that for reasons of convenience we all call death implies also the irreversible destruction of consciousness. Asking what happens to consciousness when we die is like asking what happens to a flock's flying formation when the birds land.
[N]euronal firing [only represents] the preconditions that seem to produce consciousness that can/do communicate/demonstrate its consciousness to us.
Any other factors would be irrelevant. With no self-awareness, there's no consciousness worth speaking of.
In short: Physical destruction of the brain entails destruction of consciousness because your consciousness resides entirely and solely in your brain. If the brain tissue gets too damaged (most commonly from lack of oxygen), it dies, and you with it. There is not, and cannot be, any afterlife.
comment by trist · 2014-05-13T22:22:57.151Z · LW(p) · GW(p)
On your last point, no. To put it colloquially, the simpler answer is more likely.
In practical terms, assigning 50/50 to every binary question you have no evidence on creates conflicts. Suppose there's a 50/50 chance that your consciousness dissolves when you die, a 50/50 chance that a hidden FAI captures your brain state, and a 50/50 chance that a hidden UFAI captures your brain state. Those probabilities sum to 1.5, which is impossible for mutually exclusive outcomes; it would imply that in every case where an FAI captures your brain state, a UFAI does so as well.
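The conflict in trist's example can be written out as simple arithmetic. A minimal sketch (the 0.5 values are the ones the comment assumes; the variable names are illustrative only):

```python
# Treat each "no data" proposition as a 50/50 claim, as the OP suggests:
p_dissolve = 0.5   # consciousness dissolves at death
p_fai = 0.5        # a hidden FAI captures your brain state
p_ufai = 0.5       # a hidden UFAI captures your brain state

# Dissolving is mutually exclusive with being captured, so the probability
# that *some* AI captures you can be at most 1 - p_dissolve:
p_captured_max = 1 - p_dissolve  # 0.5

# By inclusion-exclusion, P(FAI or UFAI) = P(FAI) + P(UFAI) - P(FAI and UFAI),
# and it must fit inside the 0.5 budget above. Solving for the overlap:
p_fai_and_ufai_min = p_fai + p_ufai - p_captured_max

print(p_fai_and_ufai_min)  # 0.5: the two capture events must coincide exactly
```

Since P(FAI and UFAI) is forced up to 0.5, equal to P(FAI) itself, every FAI-capture scenario would also have to be a UFAI-capture scenario, which is the absurdity trist points to.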
comment by polymathwannabe · 2014-05-14T01:33:14.879Z · LW(p) · GW(p)
People in the comments have just taken it as a given that consciousness resides solely in the brain without explaining why they think this.
And it's a neuroscience major who's asking this. For Skynet's sake, if you of all people can't see why consciousness requires a physical substrate several levels above the subatomic scale, then this whole thread is hopeless.
comment by [deleted] · 2014-06-13T10:41:04.044Z · LW(p) · GW(p)
It may be magical thinking, but magical thinking is awesome!
"Anxiety relief: According to theories of anxiety relief and control, people turn to magical beliefs when there exists a sense of uncertainty and potential danger and little to do about it. Magic is used to restore a sense of control. In support of this theory, research indicates that superstitious behavior is invoked more often in high stress situations, especially by people with a greater desire for control.[7] It is proposed that one reason (but not necessarily the only reason) for the persistence of magic rituals is that the ritual activates vigilance-precaution systems – that is to say, that the rituals prompt their own use by creating a feeling of insecurity and then proposing themselves as precautions.[8]

Pascal Boyer and Pierre Liénard propose that the shape rituals take results from goal demotion and attentional focus on lower level representation.[9] Levels of representation were previously described by J.M. Zacks and Barbara Tversky.[10] At the lowest level are simple gestures (such as putting the left foot in a shoe). At the mid-level are behavioral episodes (such as putting one's shoes on). At the highest level are scripts (such as getting dressed to go out). Ordinarily, people describe and recall behavior in terms of the middle level of behavioral episodes.

In studies of obsessive-compulsive rituals, focus shifts to the lower level of gestures, resulting in goal demotion. For example, an obsessive-compulsive cleaning ritual may overemphasize the order, direction, and number of wipes used to clean the surface. The goal becomes less important than the actions used to achieve the goal, with the implication that magic rituals can persist without efficacy because the intent is lost within the act. Debate remains as to whether studies of obsessive-compulsive rituals can be extended to describe other kinds of rituals."
comment by Eugine_Nier · 2014-05-14T01:10:44.740Z · LW(p) · GW(p)
The problem with asking a question like that in this forum, is that most people here have been trained not to recognize the referent of the word "consciousness" and thus tend to confuse it with its correlates.