Confused as to usefulness of 'consciousness' as a concept

post by KnaveOfAllTrades · 2014-07-13T11:01:18.834Z · score: 35 (40 votes) · LW · GW · Legacy · 230 comments

Years ago, before I had come across many of the power tools in statistics, information theory, algorithmics, decision theory, or the Sequences, I was very confused by the concept of intelligence. Like many, I was inclined to reify it as some mysterious, effectively-supernatural force that tilted success at problem-solving in various domains towards the 'intelligent', and which occupied a scale imperfectly captured by measures such as IQ.

Realising that 'intelligence' (as a ranking of agents or as a scale) was a lossy compression of an infinity of statements about the relative success of different agents in various situations was part of dissolving the confusion; the reason that those called 'intelligent' or 'skillful' succeeded more often was that there were underlying processes that had a greater average tendency to output success, and that greater average success caused the application of the labels.

Any agent can be made to lose by an adversarial environment. But for a fixed set of environments, there might be some types of decision processes that do better over that set of environments than other processes, and one can quantify this relative success in any number of ways.

It's almost embarrassing to write that, since, put that way, it's obvious. But it still seems to me that intelligence is reified (for example, look at most discussions about IQ), and the same basic mistake is made in other contexts, e.g. the commonly-held teleological approach to physical and mental diseases or 'conditions', in which the label is treated as if—by some force of supernatural linguistic determinism—it *causes* the condition, rather than the symptoms of the condition, in their presentation, causing the application of the label. Or how a label like 'human biological sex' is treated as if it is a true binary distinction that carves reality at the joints and exerts magical causal power over the characteristics of humans, when it is really a fuzzy dividing 'line' in the space of possible or actual humans, the validity of which can only be granted by how well it summarises the characteristics.

For the sake of brevity, even when we realise these approximations, we often use them without commenting upon or disclaiming our usage, and in many cases this is sensible. Indeed, in many cases it's not clear what the exact, decompressed form of a concept would be, or it seems obvious that there can in fact be no single, unique rigorous form of the concept, but that the usage of the imprecise term is still reasonably consistent and correlates usefully with some relevant phenomenon (e.g. tendency to successfully solve problems). Hearing that one person has a higher IQ than another might allow one to make more reliable predictions about who will have the higher lifetime income, for example.

However, widespread use of such shorthands has drawbacks. If a term like 'intelligence' is used without concern or without understanding of its core (i.e. tendencies of agents to succeed in varying situations, or 'efficient cross-domain optimization'), then it might be used teleologically; the term is reified (the mental causal graph goes from "optimising algorithm->success->'intelligent'" to "'intelligent'->success").

In this teleological mode, it feels like 'intelligence' is the 'prime mover' in the system, rather than a description applied retroactively to a set of correlations. But knowledge of those correlations makes the term redundant; once we are aware of the correlations, the term 'intelligence' is just a pointer to them, and does not add anything to them. Despite this, it seems to me that some smart people get caught up in obsessing about reified intelligence (or measures like IQ) as if it were a magical key to all else.

Over the past while, I have been leaning more and more towards the conclusion that the term 'consciousness' is used in similarly dubious ways, and today it occurred to me that there is a very strong analogy between the potential failure modes of discussion of 'consciousness' and those of discussion of 'intelligence'. In fact, I suspect that the perils of 'consciousness' might be far greater than those of 'intelligence'.

~

A few weeks ago, Scott Aaronson posted to his blog a criticism of integrated information theory (IIT). IIT attempts to provide a quantitative measure of the consciousness of a system (specifically, a nonnegative real number, phi). Scott points out what he sees as failures of the measure phi to meet the desiderata of a definition or measure of consciousness, thereby arguing that IIT fails to capture the notion of consciousness.

What I read and understood of Scott's criticism seemed sound and decisive, but I can't shake a feeling that such arguments about measuring consciousness are missing the broader point that all such measures of consciousness are doomed to failure from the start, in the same way that arguments about specific measures of intelligence are missing a broader point about lossy compression.

Let's say I ask you to make predictions about the outcome of a game of half-court basketball between Alpha and Beta. Your prior knowledge is that Alpha always beats Beta at (individual versions of) every sport except half-court basketball, and that Beta always beats Alpha at half-court basketball. From this fact you assign Alpha a Sports Quotient (SQ) of 100 and Beta an SQ of 10. Since Alpha's SQ is greater than Beta's, you confidently predict that Alpha will beat Beta at half-court.

Of course, that would be wrong, wrong, wrong; the SQ's are encoding (or compressing) the comparative strengths and weaknesses of Alpha and Beta across various sports, and in particular the fact that Alpha always loses to Beta at half-court. (In fact, not even that much information is encoded, since other combinations of results might lead to the same scores.) So to just look at the SQ's as numbers and use that as your prediction criterion is a knowably inferior strategy to looking at the details of the case in question, i.e. the actual past results of half-court games between the two.
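
The lossy-compression point can be made concrete in a few lines of code. This is only an illustrative sketch: the scoring rule (ten points per sport won) and the list of sports are invented for the example, not part of any real metric. The point is that two different worlds of head-to-head results compress to identical SQ's, so the scalar scores cannot tell you who wins at half-court.

```python
def sq(results, player):
    """Invented 'Sports Quotient': ten points per sport the player wins at."""
    return 10 * sum(winner == player for winner in results.values())

sports = ["sprint", "swim", "tennis", "golf", "half-court"]

# World 1: Alpha wins everything except half-court basketball.
world1 = dict.fromkeys(sports, "Alpha")
world1["half-court"] = "Beta"

# World 2: Alpha loses the sprint instead, and wins at half-court.
world2 = dict.fromkeys(sports, "Alpha")
world2["sprint"] = "Beta"

# The compressed scores are identical in both worlds...
assert sq(world1, "Alpha") == sq(world2, "Alpha")
assert sq(world1, "Beta") == sq(world2, "Beta")

# ...yet the half-court outcomes they were compressed from differ.
assert world1["half-court"] != world2["half-court"]
```

Predicting the half-court game from the SQ's alone is therefore guessing; predicting it from the underlying match records is not.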

Since measures like this fictional SQ or actual IQ or fuzzy (or even quantitative) notions of consciousness are at best shorthands for specific abilities or behaviours, tabooing the shorthand should never leave you with less information, since a true shorthand, by its very nature, does not add any information.

When I look at something like IIT, which (if Scott's criticism is accurate) assigns a superhuman consciousness score to a system that evaluates a polynomial at some points, my reaction is essentially, "Well, this kind of flaw is pretty much inevitable in such an overambitious definition."

Six months ago, I wrote:

"...it feels like there's a useful (but possibly quantitative and not qualitative) difference between myself (obviously 'conscious' for any coherent extrapolated meaning of the term) and my computer (obviously not conscious (to any significant extent?))..."

Mark Friedenbach replied recently (so, a few months later):

"Why do you think your computer is not conscious? It probably has more of a conscious experience than, say, a flatworm or sea urchin. (As byrnema notes, conscious does not necessarily imply self-aware here.)"

I feel like if Mark had made that reply soon after my comment, I might have had a hard time formulating why, but that I would have been inclined towards disputing that my computer is conscious. As it is, at this point I am struggling to see that there is any meaningful disagreement here. Would we disagree over what my computer can do? What information it can process? What tasks it is good for, and for which not so much?

What about an animal instead of my computer? Would we feel the same philosophical confusion over any given capability of an average chicken? An average human?

Even if we did disagree (or at least did not agree) over, say, an average human's ability to detect and avoid ultraviolet light without artificial aids and modern knowledge, this lack of agreement would not feel like a messy, confusing philosophical one. It would feel like one tractable to direct experimentation. You know, like, blindfold some experimental subjects, control subjects, and experimenters, and see how the experimental subjects react to ultraviolet light versus how the control subjects react to other light. Just like if we were arguing about whether Alpha or Beta is the better athlete, there would be no mystery left over once we'd agreed about their relative abilities at every athletic activity. At most there would be terminological bickering over which scoring rule over athletic activities we should be using to measure 'athletic ability', but not any disagreement for any fixed measure.

I have been turning it over for a while now, and I am struggling to think of contexts in which consciousness really holds up to attempts to reify it. If asked why it doesn't make sense to politely ask a virus to stop multiplying because it's going to kill its host, a conceivable response might be something like, "Erm, you know it's not conscious, right?" This response might well do the job. But if pressed to cash out this response, what we're really concerned with is the absence of the usual physical-biological processes by which talking at a system might affect its behaviour, so that there is no reason to expect the polite request to increase the chance of the favourable outcome. Sufficient knowledge of physics and biology could make this even more rigorous, and no reference need be made to consciousness.

The only context in which the notion of consciousness seems inextricable from the statement is in ethical statements like, "We shouldn't eat chickens because they're conscious." In such statements, it feels like a particular sense of 'conscious' is being used, one which is *defined* (or at least characterised) as 'the thing that gives moral worth to creatures, such that we shouldn't eat them'. But then it's not clear why we should call this moral criterion 'consciousness'; insomuch as consciousness is about information processing or understanding an environment, it's not obvious what connection this has to moral worth. And insomuch as consciousness is the Magic Token of Moral Worth, it's not clear what it has to do with information processing.

If we relabelled zxcv=conscious and rewrote, "We shouldn't eat chickens because they're zxcv," then this makes it clearer that the explanation is not entirely satisfactory; what does zxcv have to do with moral worth? Well, what does consciousness have to do with moral worth? Conservation of argumentative work and the usual prohibitions on equivocation apply: You can't introduce a new sense of the word 'conscious' then plug it into a statement like "We shouldn't eat chickens because they're conscious" and dust your hands off as if your argumentative work is done. That work is done only if one's actual values and the definition of consciousness to do with information processing already exactly coincide, and this coincidence is known. But it seems to me like a claim of any such coincidence must stem from confusion rather than actual understanding of one's values; valuing a system commensurate with its ability to process information is a fake utility function.

When intelligence is reified, it becomes a teleological fake explanation; consistently successful people are consistently successful because they are known to be Intelligent, rather than their consistent success causing them to be called intelligent. Similarly consciousness becomes teleological in moral contexts: We shouldn't eat chickens because they are called Conscious, rather than 'these properties of chickens mean we shouldn't eat them, and chickens also qualify as conscious'.

So it is that I have recently been very skeptical of the term 'consciousness' (though I grant that it can sometimes be a useful shorthand), and hence my question to you: Have I overlooked any counts in favour of the term 'consciousness'?

230 comments

Comments sorted by top scores.

comment by RichardKennaway · 2014-07-12T08:46:22.076Z · score: 15 (15 votes) · LW · GW

It sometimes seems to me that those of us who actually have consciousness are in a minority, and everyone else is a p-zombie. But maybe that's a selection effect, since people who realise that the stars in the sky they were brought up believing in don't really exist will find that surprising enough to say, while everyone else who sees the stars in the night sky wonders what drugs the others have been taking, or invents spectacles.

I experience a certain sense of my own presence. This is what I am talking about, when I say that I am conscious. The idea that there is such an experience, and that this is what we are talking about when we talk about consciousness, appears absent from the article.

Everyone reading this, please take a moment to see whether you have any sensation that you might describe by those words. Some people can't see colours. Some people can't imagine visual scenes. Some people can't taste phenylthiocarbamide. Some people can't wiggle their ears. Maybe some people have no sensation of their own selves. If they don't, maybe this is something that can be learned, like ear-wiggling, and maybe it isn't, like phenylthiocarbamide.

Unlike the experiences reported by some, I do not find that this sensation of my own presence goes away when I stare at it. I do not even get the altered states of it that some others report.

I am also aware that I have no explanation for the existence of the phenomenon. Some philosophers have claimed that the apparent impossibility of an explanation proves that it does not exist, like a student demanding top marks for not having a clue in the exam. But for me, contemplating the seeming impossibility of the matter does not make the actual experience go away.

Here are some ideas about things that might be going on when people report that they have discovered they have no self. Discount this as you wish from typical mind fallacy, or compare it with your own experience, whatever it may be.

If you stare directly at a dim star in the night sky, it vanishes. (Try it.) Nevertheless, the star continues to exist.

If you stare directly at the sun all day, then for a different reason, you will experience disturbances of vision, and soon you will never be able to see it again. Yet it continues to exist, and after-images and blindness are not signs of enlightenment.

The sun appears to circle the Earth. When it was found that the Earth circles the sun, I doubt that anyone concluded that the sun does not exist, merely on the grounds that something we believed about it was false. (However, I would be completely unsurprised to find philosophers arguing about whether the sun that goes round the Earth and the sun that is gone round by the Earth are one thing or two.)

In the 19th century, Auguste Comte wrote that we could never know the constitution of the stars. Was any philosopher of the time so obtuse as to conclude that the stars do not exist?

comment by [deleted] · 2014-07-12T11:08:02.057Z · score: 12 (12 votes) · LW · GW

I feel like the intensity of conscious experience varies greatly in my personal life. I feel less conscious when I'm doing my routines, when I'm surfing on the internet, when I'm having fun or playing an immersive game, when I'm otherwise in a flow state, or when I'm daydreaming. I feel more conscious when I meditate, when I'm in a self-referencing feedback loop, when I'm focusing on the immediate surroundings, when I'm trying to think about the fundamental nature of reality, when I'm very sad, when something feels painful or really unpleasant, when I feel like someone else is focusing on me, when I'm trying to control my behavior, when I'm trying to control my impulses and when I'm trying to do something that doesn't come naturally.

I'm not sure if we're talking about the same conscious experience, so I'll try to describe it in other words. When I'm talking about the intensity of consciousness, I'm talking about heightened awareness, and how the "raw" experience seems more real and time seems to go slower.

Anyway, my point is that if consciousness varies so much in my own life, I think it's reasonable to think it could also vary greatly between people too. This doesn't mean that more conscious people are in any way "better". It's possible to see from my list that, aside from a few exceptions, this particular form of consciousness is mostly connected with negative experiences. Considering that flow states and routines are less consciousness-inducing activities, too much of this kind of consciousness seems to be detrimental to productivity and instrumental rationality. Unless you're an artist or a philosopher.

comment by Kawoomba · 2014-07-15T17:33:48.842Z · score: 4 (4 votes) · LW · GW

Well, maybe it's not only your consciousness that varies, but also / more so your memory of it.

When you undergo a gastroscopy and get your light dose of propofol, it often happens that you'll actually be conscious during the experience, enough so to try to wiggle free, to focus on the people around you. Quite harrowing, really. Luckily, afterwards you won't have a memory of that.

When you consider your past degree of consciousness, you see things through the prism of your memory, which might well act as a Fourier filter-analogue. It's not exactly vital to reliably save to memory minutiae of your routine tasks, or your conscious experience thereof, so it doesn't always happen. Whyever would it?

(Obligatory "lack of consciousness is the mind-killer".)

comment by [deleted] · 2014-07-17T11:25:13.599Z · score: 2 (2 votes) · LW · GW

That is the kind of argument that is a bit difficult to argue against in any way, because you're always going to use your memory to assess your past degree of consciousness, but it is also the kind of argument that doesn't by itself explain why your prior should be higher for the claim "consciousness stays at the same level at all times" versus "consciousness varies throughout your daily life". But I agree, that does happen. Your perception of past mental states is also going to be influenced by your biases and what kind of theoretical framework you have in mind.

Maybe you could set up alarms at random intervals, and when the alarm goes off you write down your perceived level of consciousness? Is this unreliable too? Maybe it's impossible to compare your immediate phenomenal experience to anything, even if it happened a second before, because "experience" and "memory of an experience" are always of an entirely different kind of substance. Even if you used an fMRI scan on a participant who estimated her level of conscious intensity to be "high", and then used that scan to compare people's mental states, that initial estimate had to come from comparing her immediate mental state to her memories of other mental states - and like you said, those memories can be unreliable.

So either you trust your memories of phenomenal experience on some level, or you accept that there's no way to study this problem.

comment by IlyaShpitser · 2014-07-14T18:04:25.658Z · score: 7 (7 votes) · LW · GW

I wonder sometimes about Dennett et al.: "qualia blind" or just stubborn?

comment by [deleted] · 2014-07-15T04:41:12.383Z · score: 1 (1 votes) · LW · GW

As I fall in the Dennett camp (qualia seem like a ridiculous concept to me), perhaps you can explain what qualia feel like to you, as the grandparent did about the subjective experience of consciousness?

comment by CCC · 2014-07-15T10:21:22.624Z · score: 7 (7 votes) · LW · GW

When I first came across the concept of qualia, they were described as "the redness of red". This pretty much captures what I understand by the word; when I look at an object, I observe a colour. That colour may be "red", that colour may be "green" (or a long list of other options; let us merely consider "red" and "green" for the moment).

The physical difference between "red" and "green" lies in the wavelength of the light. Yet, when I look at a red or a green object, I do not see a wavelength - I can not see which wavelength is longer. Despite this, "red" looks extremely different to "green"; it is this mental construct, this mental colour in my mind that I label "red", that is a quale.

I know that the qualia I have for "red" and "green" are not universal, because some people are red-green colourblind. Since my qualia for red and green are so vastly different, I conclude that such people must have different qualia - either a different "red" quale, or a different "green" quale, or, quite possibly, both differ.

Does that help?

comment by RichardKennaway · 2014-07-15T11:43:31.269Z · score: 6 (6 votes) · LW · GW

"Quale" is simply a word for "sensation" -- what the word used to mean, before it drifted into meaning the corresponding physical phenomena in the nerves. A quale is the sensation (in the former sense) of a sensation (in the latter sense).

comment by [deleted] · 2014-07-15T04:44:41.630Z · score: 6 (6 votes) · LW · GW

I experience a certain sense of my own presence. This is what I am talking about, when I say that I am conscious. The idea that there is such an experience, and that this is what we are talking about when we talk about consciousness, appears absent from the article.

Everyone reading this, please take a moment to see whether you have any sensation that you might describe by those words. Some people can't see colours. Some people can't imagine visual scenes. Some people can't taste phenylthiocarbamide. Some people can't wiggle their ears. Maybe some people have no sensation of their own selves. If they don't, maybe this is something that can be learned, like ear-wiggling, and maybe it isn't, like phenylthiocarbamide.

You are not alone. This is exactly what I experience. I have, however, engaged with some people on this site about this subject who have been stubbornly dense on the subject of the subjective experience of consciousness. For example, insisting that destructive uploaders are perfectly okay with no downside to the person stepping inside one. I finally decided to update and rate more likely the possibility that others do not experience consciousness in the same way I do. This may be an instance of the mind-projection fallacy at work.

Nice to know that I'm not alone though :)

comment by solipsist · 2014-07-17T02:00:08.056Z · score: 7 (9 votes) · LW · GW

For example, insisting that destructive uploaders are perfectly okay with no downside to the person stepping inside one. I finally decided to update and rate more likely the possibility that others do not experience consciousness in the same way I do.

I'm inclined to disagree, but you might be one level beyond me. I believe many people empathize with a visceral sense of horror about (say) destructive teleportation, but intellectually come to the conclusion that those anxieties are baseless. These people may argue in a way that appears dense, but they are actually using second-level counterarguments. But perhaps you actually have counter-counter arguments, and I would appear to be dense when discussing those.

Argument in a nutshell:

Sleep might be a Lovecraftian horror. As the light in front of you dims, your thoughts become more and more disorganized, and your sense of self fades until the continuation of consciousness that is you ceases to exist. A few hours later, someone else wakes up who thinks that they were you. But they are not you. Every night billions of day-old consciousnesses die, replaced the next morning with billions more, deluded by borrowed memories into believing that they will live for more than a few hours. After you next go to sleep, you will never see colors again.

People who have never slept would be terrified of sleeping. People who have never teleported are terrified of teleporting. The two fears are roughly equal in merit.

comment by [deleted] · 2014-07-17T17:19:18.804Z · score: 2 (4 votes) · LW · GW

That doesn't fit predictions of the theory. As you sleep you are not forming long term memories, to various degrees (that's why many people don't typically remember their dreams). But your brain is still causally interconnected and continues to compute during sleep just as much as it does during waking time. Your consciousness persists, it just doesn't remember.

Teleportation / destructive uploading is totally different. You are destroying the interconnected causal process that gives rise to the experience of consciousness. That is death. It doesn't matter if very shortly thereafter either another physical copy of you is made or a simulation started.

Imagine I passively scanned your body to molecular detail, then somebody shoots you in the head. I carve the exact coordinates of each atom in your body on stone tablets, which are kept in storage for 20 million years. Then an advanced civilization re-creates your body from that specification, to atomic detail. What do you expect to experience after being shot in the head? Do you expect to wake up in the future?

comment by solipsist · 2014-07-17T18:15:02.362Z · score: 5 (5 votes) · LW · GW

[during sleep] Your consciousness persists, it just doesn't remember.

Huh. Does something in your subjective experience make you think that your consciousness continues while you sleep? Aside from a few dreams, sleep to me is a big black hole in which I might as well be dead. I mean, I have nothing in my subjective experience that contradicts the hypothesis that my brain does nothing at night, and that what I interpret as memories of dreams are really errors in my long-term memories that manifest in the seconds after I wake up. (I don't actually think dreams are formed this way, but there is nothing in the way I experience consciousness that tells me so.)

What do you expect to experience after being shot in the head? Do you expect to wake up in the future?

Since I didn't grow up taking the transporter to school every morning, I would be scared of not waking up. After a few hundred round trips to and from stone tablets, not so much. Of course, it's possible that I should be afraid of becoming a stone tablet, just as it is possible that I should be afraid of going to sleep now.

Arguments around the question "is teleportation different from sleep?" seem to me like they center around questions of science and logic, not differences in subjective experiences of consciousness. That is, unless your experience of consciousness while sleeping differs significantly from mine.

comment by [deleted] · 2014-07-17T18:19:19.533Z · score: 3 (3 votes) · LW · GW

Have you ever woken up in the process of falling asleep, or suddenly jolted awake in an adrenaline releasing situation? What was your memory of that experience?

comment by solipsist · 2014-07-17T18:42:30.521Z · score: 2 (2 votes) · LW · GW

It varies. Certainly if I'm just falling asleep, or groggy and waking up, I sometimes get the sense that I was there but not thinking the same way I do when I'm awake.

But that doesn't mean that I'm somewhat conscious all the time. I have sat in class paying close attention to the professor, then felt my friend's hand on my shoulder in an otherwise empty classroom. I didn't notice myself falling asleep or waking up -- time just seemed to stop.

comment by private_messaging · 2014-07-30T06:43:46.381Z · score: 2 (2 votes) · LW · GW

There's a causal chain from the thoughts I have today to the thoughts I have tomorrow, and there's a causal chain from the thoughts I'd have before your scanning and stone tablet procedure to the thoughts I'd have after.

(There's however no causal chain from anything done by the original me after the scan, to anything in the copy.)

comment by [deleted] · 2014-07-30T15:17:11.984Z · score: 1 (1 votes) · LW · GW

Causal chains are one possible explanation, but a weak one. There is also a causal chain from a pregnant mother to her child, indeed a much stronger connection than with stone tablets. Why doesn't the mother "live on" in her child?

And if there is no causal chain from you-after-scanning to the copy, you seem to be accepting some sort of forking to have occurred. What basis have you for expecting to perceive waking up as the copy in the future?

There are other possible explanations than causal chains, e.g. persistence of computation, which IMHO better explain these edge cases. However, the expectation of these models is different: you would not expect a continuity of experience.

comment by private_messaging · 2014-07-30T19:22:50.674Z · score: 2 (2 votes) · LW · GW

Well, there's no causal chain from what the pregnant woman thinks to what the child remembers, or at least, no chain of the kind that we associate with future selves. Who knows, maybe in the future there will be a memory enhancing modification, without which our natural memories would seem fairly distant from continuation.

What basis have you for expecting to perceive waking up as the copy in the future?

I'd expect the same as if I were to e.g. somehow reset my memories to what they were 10 hours ago. I would definitely not expect subjective continuity with my current self in the case of memory reset - I wouldn't think it'd be such a big deal though.

There are other possible explanations than causal chain, e.g. persistence of computation,

It seems to me that something like that could break down once we try to define what we mean by persistence of computation, or indeed, by computation.

comment by Friendly-HI · 2014-07-30T10:02:27.135Z · score: -2 (2 votes) · LW · GW

If you accept reductionism, which you really should, then a copy of your brain is a copy of your mind. I submit you don't actually care about the interconnected causal process when you're conscious or asleep. You probably couldn't even if you tried really hard; what does it even matter? You couldn't even tell whether that causal connection "was broken" or not.

People get drunk and wake up in some place without recollection of how they got there, and their life doesn't seem particularly unworthy afterwards, though they should go easier on the liquor. The supposed problem you feel so strongly about is merely a conceptual problem, a quirk of how your mind models people and identities, not one rooted in reality. It's all just a consequence of how you model reality in your mind, and then your mind comes up with clever ideas about how "being causally interconnected during sleep" somehow matters. You model yourself and the copy of yourself as two separate and distinct entities in your mind and apply all the same rules and intuitions you usually apply to any other mind that isn't you. But those intuitions are misplaced in this novel and very different situation, where that other mind is literally you in every way you care about. Which is fine, because you and your copy will be separated in space and perhaps also in time, so it really makes sense to model two instances of yourself, or at least to try. If you imagine killing yourself while your copy goes on, it really somehow feels like "I die and some impostor who isn't me -- or at least doesn't continue my own subjective experience -- lives on, and my unique inner subjective experience will be extinguished, and I'll miss out on the rest of it because someone else has internal experiences but that's not me". That's just a quirk of how we tend to model other minds and other people, nothing more. All the dozens of clever reasons people tend to come up with to somehow show how they won't be able to continue their internal experience as their own copy hold no merit; it's all just an outgrowth of that really deeply rooted intuition based on how we model ourselves and other people.

People wake up from year long comas and if you were to wake up from one you wouldn't go: "oh no I'm suddenly not me anymore, I lost track of my causal interconnectedness because I stopped paying attention". The fact that your brain is the result of causal things doesn't mean "causal interconnectedness" carries any kind of actually valuable information your copy would somehow miss, or to be precise that you would miss. In fact this kind of information is lost all the time, there is nothing that keeps track of it, information about our causal past gets lost all the time as entropy increases. Eventually the universe will face its slow heat death and there will be no information about the causal chains of the past remaining at all. In the end there is maximum entropy and minimum information. It's happening right now all around us, we're moving towards it and information about the causal past is being lost everywhere as we speak.

comment by [deleted] · 2014-07-30T15:21:30.824Z · score: 0 (0 votes) · LW · GW

Did you even read my post? Getting drunk and not remembering things or being in a coma are not states where the brain stops working altogether.

comment by Friendly-HI · 2014-07-30T17:20:56.085Z · score: 1 (1 votes) · LW · GW

Hmm, you're right, I did a lousy or non-existent job of refuting that idea. Okay, let's try a thought experiment then. Suppose your brain were instantly frozen to near absolute zero and could be thawed in such a way that you'd be alive after, say, 100 years of being completely frozen and perfectly preserved. I think it's fair to say your brain "stopped working" altogether during that time, while the world outside changed. Would you really expect your subjective experience to end at the moment of freezing, while some kind of new or different subjective experience suddenly starts its existence at the moment of thawing?

If you wouldn't expect your subjective experience to end at that point, then how is it any different from a perfect copy of yourself, assuming you truly accept reductionism? In other words, yes, for that reason and others I would expect to open MY eyes and resume MY subjective experience after being perfectly preserved in the form of stone tablets for 20 million years. It sounds strange even to me, I confess, but if reductionist assumptions are true then I must accept this; my intuitions that this is not the case are just a consequence of how I model and think of my own identity. This is something I've grappled with for a few years now, and at the beginning I came up with tons of clever reasons why it "wouldn't really be me", but no, reason trumps intuition on this one. Also, yes, destructive teleportation is a kind of "death" you don't notice, but it's also one you don't care about, because the next thing you know you open your eyes and everything is okay; you are just somewhere else, nothing else is different. That's the idea behind the drunk analogy: it would be the same experience minus the hangover.

comment by Lightwave · 2014-07-17T08:13:54.189Z · score: 2 (2 votes) · LW · GW

Sleep might be a Lovecraftian horror.

Going even further, some philosophers suggest that consciousness isn't even continuous, e.g. as you refocus your attention, as you blink, there are gaps that we don't notice. Just like how there are gaps in your vision when you move your eyes from one place to another, but to you it appears as a continuous experience.

comment by RichardKennaway · 2014-07-17T11:17:29.099Z · score: 4 (4 votes) · LW · GW

Consciousness is complex. It is a structured thing, not an indivisible atom. It is changeable, not fixed. It has parts and degrees and shifting, uncertain edges.

This worries some people.

comment by [deleted] · 2014-07-17T16:29:46.518Z · score: 2 (2 votes) · LW · GW

Well of course it worries people! Precisely the function of consciousness (at least in my current view) is to "paint a picture" of wholeness and continuity that enables self-reflective cognition. Problem is, any given system doesn't have the memory to store its whole self within its internal representational data-structures, so it has to abstract over itself rather imperfectly.

The problem is that we currently don't know the structure, so the discord between the continuous, whole, coherent internal feeling of the abstraction and the disjointed, sharp-edged, many-pieced truth we can empirically detect is really disturbing.

It will stop being disturbing about five minutes after we figure out what's actually going on, when everything will once again add up to normality.

comment by RichardKennaway · 2014-07-17T19:35:22.953Z · score: 4 (4 votes) · LW · GW

Well of course it worries people!

It seems to only worry people when they notice unfamiliar (to them) aspects of the complexity of consciousness. Familiar changes in consciousness, such as sleep, dreams, alcohol, and moods, they never see a problem with.

comment by TheAncientGeek · 2014-07-17T17:22:42.314Z · score: 1 (1 votes) · LW · GW

We only ever have approximate models of external things, too.

comment by CCC · 2014-07-13T12:46:01.065Z · score: 4 (4 votes) · LW · GW

I experience a certain sense of my own presence. This is what I am talking about, when I say that I am conscious.

I'm not sure that I mean the same thing as you do by the phrase "a sense of my own presence" (in the same way that I do not know, when you say "yellow", whether or not we experience the colour in the same way). What I can say is that I do feel that I am present; and that I can't imagine not feeling that I am present, because then who is there to not feel it?

comment by RichardKennaway · 2014-07-13T13:01:55.346Z · score: 4 (4 votes) · LW · GW

I'm not sure that I mean the same thing as you do by the phrase "a sense of my own presence" (in the same way that I do not know, when you say "yellow", whether or not we experience the colour in the same way).

Such uncertainty applies to all our sensations. There may very well be some variation in all of them, even leaving aside gross divergences such as colour blindness and Cotard's syndrome.

What I can say is that I do feel that I am present; and that I can't imagine not feeling that I am present, because then who is there to not feel it?

I am not present during dreamless sleep, which happens every night.

comment by CCC · 2014-07-13T13:18:01.609Z · score: 4 (4 votes) · LW · GW

I am not present during dreamless sleep, which happens every night.

I have no memory of what (if anything) I experience during dreamless sleep. I therefore cannot say whether or not I can feel my own presence at such a time.

To be fair, that is what I would expect to say about a time in which I could not feel my own presence anywhere.

comment by CoffeeStain · 2014-07-28T21:54:31.066Z · score: 3 (3 votes) · LW · GW

It sometimes seems to me that those of us who actually have consciousness are in a minority, and everyone else is a p-zombie.

When I myself run across apparent p-zombies, they usually look at my arguments as if I am being dense over my descriptions of consciousness. And I can see why, because without the experience of consciousness itself, these arguments must sound like they make consciousness out to be an extraneous hypothesis to help explain my behavior. Yet, even after reflecting on this objection, it still seems there is something to explain besides my behavior, which wouldn't bother me if I were only trying to explain my behavior, including the words in this post.

It makes sense to me that from outside a brain, everything in the brain is causal, and the brain's statements about truths are dependent on outside formalizations, and that everything observable about a brain is reducible to symbolic events. And so an observation of a zombie-Chalmers introspecting his consciousness would yield no shocking insights on the origins of his English arguments. And I know that when I reflect on this argument, an observer of my own brain would also find no surprising neural behaviors.

But I don't know how to reconcile this with my overriding intuition/need/thought that I seek not to explain my behavior but the sense experience itself when I talk about it. Fully aware of outside view functionalism, the sensation of red still feels like an item in need of explanation, regardless of which words I use to describe it. I also feel no particular need to feel that this represents a confusion, because the sense experience seems to demand that it place itself in another category than something you would explain functionally from the outside. All this I say even while I'm aware that to humans without this feeling, these claims must seem nothing less than insane, and they will gladly inspect my brain for a (correct) functional explanation of my words.

The whole ordeal still greatly confuses me, to an extent that surprises me given how many other questions have been dissolved on reflection such as, well, intelligence.

comment by Bugmaster · 2014-07-15T00:21:57.274Z · score: 2 (2 votes) · LW · GW

Everyone reading this, please take a moment to see whether you have any sensation that you might describe by those words.

This doesn't make sense to me. I have nothing to compare this experience of consciousness to. I know, logically speaking, that I am often unconscious (e.g. when sleeping), but there is no way -- by definition -- I can experience what that unconsciousness feels like. Thus, I cannot compare my experience of being conscious with the experience of being unconscious.

Am I missing something? I think there are drugs that can induce the experience of unconsciousness, but I'd rather not take any kind of drugs unless it's totally necessary...

comment by [deleted] · 2014-07-15T04:49:52.782Z · score: 3 (3 votes) · LW · GW

Being asleep is not being unconscious (in this sense). I don't know about you, but I have dreams. And even when I'm not dreaming, I seem to be aware of what is going on in my vicinity. Of course I typically don't remember what happened, but if I was woken up I might remember the last few moments, briefly. Lack of memory of what happens when I'm asleep is due to a lack of memory formation during that period, not a lack of consciousness.

comment by Pentashagon · 2014-08-02T05:46:57.140Z · score: 2 (2 votes) · LW · GW

The experience of sleep paralysis suggests to me that there are at least two components to sleep, paralysis and suppression of consciousness, and that one can have one, both, or neither. With both, one is asleep in the typical fashion. With suppression of consciousness only, one might have involuntary movements or, in extreme cases, sleepwalking. With paralysis only, one has sleep paralysis, which is apparently an unpleasant remembered experience. With neither, you awaken typically. The responses made by sleeping people (sleepwalkers and sleep-talkers especially) suggest to me that their consciousness is at least reduced in the sleep state. If it were only memory formation that was suppressed during sleep, I would expect to witness sleepwalkers acting conscious but not remembering it, whereas they appear instead to act irrationally and respond at best semi-consciously to their environment.

comment by ChristianKl · 2014-07-15T09:04:23.074Z · score: 2 (2 votes) · LW · GW

This doesn't make sense to me.

Then it might be that you don't have access to the sensation Richard is talking about.

I can distinguish states where I'm totally immersed in a video game and the video game world from states when I'm aware of myself and conscious of myself.

If I wanted to go into more detail, I can distinguish roughly four different sensations for which I have labels under the banner of "I experience a certain sense of my own presence". There's a fifth sensation that I used to mislabel as presence.

comment by Bugmaster · 2014-07-15T12:11:51.731Z · score: 4 (4 votes) · LW · GW

I can distinguish states where I'm totally immersed in a video game and the video game world from states when I'm aware of myself and conscious of myself.

Ok, so who, exactly, is it that is "totally immersed in a video game"? If it's still you, then you have simply lost awareness of (the majority of) your body, but you are as conscious as you were before.

comment by RichardKennaway · 2014-07-15T05:27:59.340Z · score: 2 (2 votes) · LW · GW

This doesn't make sense to me. I have nothing to compare this experience of consciousness to. I know, logically speaking, that I am often unconscious (e.g. when sleeping), but there is no way -- by definition -- I can experience what that unconsciousness feels like. Thus, I cannot compare my experience of being conscious with the experience of being unconscious.

I don't see why this is a problem. Why should I need to compare my experience of being conscious to an experience, defined to be impossible, of being unconscious? If I want to compare it with something (although I don't see why I should need to, to have the experience) I can compare my experiences of myself at different times. It varies, even without drugs.

In what ways does it vary? Communicating internal experiences is difficult, especially when they may be idiosyncratic. When I first wake, my sense of presence is at a rather low level, but there is enough of it to be able to watch the rest of the process of properly waking up, which is like watching a slowly developing picture. There may be more dimensions to it than just intensity, but I haven't studied it much. Perhaps that would be something to explore in meditation, instead of just contemplating my own existence.

comment by jbay · 2014-07-15T03:26:23.275Z · score: 2 (2 votes) · LW · GW

Maybe you're on to something...

Imagine there were drugs that could remove the sensation of consciousness. However, that's all they do. They don't knock you unconscious like an anaesthetic; you still maintain motor functions, memory, sensory, and decision-making capabilities. So you can still drive a car safely, people can still talk to you coherently, and after the drugs wear off you'll remember what things you said and did.

Can anyone explain concretely what the effect and experience of taking such a drug would be?

If so, that might go a long way toward nailing down what the essential part of consciousness is (ie, what people really mean when they claim to be conscious). If not, it might show that consciousness is inseparable from sensory, memory, and/or decision-making functions.

For example, I can imagine an answer like "such a drug is contradictory; if it really took away what I mean by 'consciousness', then by definition I couldn't remember in detail what had happened while it was in effect". Or "If it really took away what I mean by consciousness, then I would act like I were hypnotized; maybe I could talk to people, but it would be in a flat, emotionless, robotic way, and I wouldn't trust myself to drive in that state because I would become careless".

comment by hamnox · 2014-07-21T17:16:42.038Z · score: 2 (2 votes) · LW · GW

I can almost picture it.

Implicit memories -- motor habits and recognition still work. Semantic and episodic memories are pretty separate things. You can answer some factual questions without involving your more visceral kind of memory about the experience later. Planning couldn't be totally gone, but it would operate at a much lower level so I wouldn't recommend driving...

comment by [deleted] · 2014-07-15T04:54:45.729Z · score: 2 (2 votes) · LW · GW

Imagine there were drugs that could remove the sensation of consciousness. However, that's all they do. They don't knock you unconscious like an anaesthetic; you still maintain motor functions, memory, sensory, and decision-making capabilities. So you can still drive a car safely, people can still talk to you coherently, and after the drugs wear off you'll remember what things you said and did.

That doesn't make any sense to me. If you were on that drug and I asked you "how do you feel?" and you said "I feel angry" or "I feel sad"... that would be a conscious experience. I don't think the setup makes any sense. If you are going about your day doing your daily things, you are conscious. And this has nothing to do with remembering what happened -- as I said in a different reply, you are also conscious in the grandparent's sense when you are dreaming, even if you don't remember the dream when you wake up.

comment by pengvado · 2014-07-15T15:17:36.786Z · score: 1 (1 votes) · LW · GW

Jbay didn't specify that the drug has to leave people able to answer questions about their own emotional state. And in fact there are some people who can't do that, even though they're otherwise functional.

comment by [deleted] · 2014-07-15T15:29:10.941Z · score: 1 (1 votes) · LW · GW

I wasn't limiting it to just emotional state. If there is someone experiencing something, that someone is conscious, whether or not they are self-aware enough to describe that feeling of existing.

comment by jbay · 2014-07-15T07:21:42.779Z · score: 1 (1 votes) · LW · GW

Good! I'm glad to hear an answer like this.

So does that mean that, in your view, a drug that removes consciousness must necessarily be a drug that impairs the ability to process information?

comment by [deleted] · 2014-07-15T12:37:04.515Z · score: 1 (1 votes) · LW · GW

Yes. Really to be completely unconscious you'd have to be dead. But I do acknowledge that this is degrees on a spectrum, and probably the closest drug to what you want is whatever they use in general anesthesia.

comment by jbay · 2014-07-15T13:19:03.178Z · score: 1 (1 votes) · LW · GW

I think my opinion is the same as yours, but I'm curious about whether anybody else has different answers.

comment by private_messaging · 2014-07-12T09:04:40.056Z · score: 10 (10 votes) · LW · GW

Regarding IIT, I can't believe just how bloody stupid it is. As Aaronson says, it is immediately obvious that this idiot metric will be huge not just for human brains but for a lot of really straightforward systems, including the tea spinning in my cup, Jupiter's atmosphere being hyper-conscious, and so on. (Over a sufficient timeframe, small, localized differences in the input state of those systems affect almost all of the output state, if we get down to the level of individual molecules. Liquids, gases, and plasmas end up far more conscious than solids.)

edit: I think it's that you can say consciousness is "integration" of "information", whereby as a conscious being you'd only call it "integration" and "information" if it's producing something relevant to you, the conscious being (you wouldn't call it information if it's not useful to yourself). Then you start trying to scribble formulas because you think "information" or "integration" in the technical sense would have something to do with your innate notion of it being something interesting.
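The complaint is easy to make concrete with a toy calculation. The sketch below is emphatically *not* Tononi's actual Φ (which minimizes a divergence over all partitions of the system); it's a crude "cross-predictability" proxy using mutual information between one half's present state and the other half's next state, with made-up two-bit dynamics. It shows the flavor of the problem: any dynamics whose parts causally mix score high, whether or not the system resembles a mind.

```python
import itertools
import math

def mutual_information(joint):
    """Mutual information in bits, given a joint distribution {(a, b): p}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

def cross_predictability(step):
    """MI between bit A's current state and bit B's next state,
    averaged over a uniform distribution of initial states."""
    joint = {}
    for a, b in itertools.product([0, 1], repeat=2):
        _, b_next = step(a, b)
        joint[(a, b_next)] = joint.get((a, b_next), 0.0) + 0.25
    return mutual_information(joint)

# "Stirred tea": the halves swap state every step, so each half's future
# is fully determined by the other half's past -- maximal mixing.
def stirred(a, b):
    return b, a

# "Solid": each half evolves independently; no causal mixing at all.
def solid(a, b):
    return a, b

print(cross_predictability(stirred))  # 1.0 bit: maximally "integrated"
print(cross_predictability(solid))    # 0.0 bits: not at all
```

By this crude measure the stirring dynamics come out maximally "integrated", which is the shape of the objection above: causal mixing is physically cheap, so a formal integration score by itself doesn't pick out anything mind-like.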

comment by bramflakes · 2014-07-11T14:41:24.171Z · score: 9 (19 votes) · LW · GW

Or how a label like 'human biological sex' is treated as if it is a true binary distinction that carves reality at the joints and exerts magical causal power over the characteristics of humans, when it is really a fuzzy dividing 'line' in the space of possible or actual humans, the validity of which can only be granted by how well it summarises the characteristics.

I don't see how sex doesn't carve reality at the joints. In the space of actually really-existing humans it's a pretty sharp boundary and summarizes a lot of characteristics extremely well. It might not do so well in the space of possible humans, but why does that matter? The process by which possible humans become instantiated isn't manna from heaven - it has a causal structure that depends on the existence of sex.

comment by Adele_L · 2014-07-11T16:02:14.053Z · score: 17 (21 votes) · LW · GW

I agree it is a pretty sharp boundary, for all the obvious evolutionary reasons - nevertheless, there are a significant number of actual really-existing humans who are intersex/transgender. This is also not too surprising, given that evolution is a messy process. In addition to the causal structure of sexual selection and the evolution of humans, there are also causal structures in how sex is implemented, and in some cases, it can be useful to distinguish based on these instead.

For example, you could distinguish between karyotype (XX, XY but also XYY, XXY, XXX, X0 and several others), genotype (e.g. mutations on SRY or AR genes), and phenotypes, like reproductive organs, hormonal levels, various secondary sexual characteristics (e.g. breasts, skin texture, bone density, facial structure, fat distribution, digit ratio) , mental/personality differences (like sexuality, dominance, spatial orientation reasoning, nurturing personality, grey/white matter ratio, risk aversion), etc...

comment by KnaveOfAllTrades · 2014-07-11T20:17:58.403Z · score: 4 (4 votes) · LW · GW

Thanks. When I was thinking about this post and considered sex as an example, I had intended to elaborate by saying how it could e.g. cause counterproductive attitudes to intersex people, and that these attitudes would update slowly due to the binary view of sex being very strongly trained into the way we think. I just outright forgot to put that in! I endorse Adele's response.

comment by [deleted] · 2014-07-13T14:49:42.495Z · score: 8 (10 votes) · LW · GW

Usually when we say "consciousness", we mean self-awareness. It's a phenomenon of our cognition that we can't explain yet, we believe it does causal work, and if it's identical with self-awareness, it might be why we're having this conversation.

I personally don't think it has much to do with moral worth, actually. It's very warm-and-fuzzy to say we ought to place moral value on all conscious creatures, but I actually believe that a proper solution to ethics is going to dissolve the concept of "moral worth" into some components like (blatantly making names up here) "decision-theoretic empathy" (agents and instances where it's rational for me to acausally cooperate), "altruism" (using my models of others' values as a direct component of my own values, often derived from actual psychological empathy), and even "love" (outright personal attachment to another agent for my own reasons -- and we'd usually say love should imply altruism).

So we might want to be altruistic towards chickens, but I personally don't think chickens possess some magical valence that stops them from being "made of atoms I can use for something else", other than the general fact that I feel some very low level of altruism and empathy towards chickens. Or, to argue Timelessly, we might say that I ought to operate with some level of altruism for the general class of minds like mine, which includes most Earth-based animals, since the foundations of our cognitive architectures evolved very, very slowly (and often in parallel shapes, under similar selection pressures); certainly I personally generally feel a moral impulse to leave Nature alone, since I cannot treat with most of it as one equal being to another.

Consciousness definitely exists, but I think it's worth not treating it as magic.

comment by Bugmaster · 2014-07-14T18:21:20.890Z · score: 3 (3 votes) · LW · GW

Ok, so let's say I put two different systems in front of you, and I tell you that system A is conscious whereas system B is not. Based on this knowledge, can you make any meaningful predictions about the differences in behavior between the two systems? As far as I can tell, the answer is "no". Here are some possible differences that people have proposed over the years:

  • Perhaps system A would be a much better conversation partner than system B. But no, System B could just be really good at pretending that it's conscious, without exhibiting any true consciousness at all.

  • System A will perform better at a variety of cognitive tasks. But no, that's intelligence, not consciousness, and in fact system B might be a lot smarter than A.

  • System A deserves moral consideration, whereas system B is just a tool. Ok, but I asked you for a prediction, not a prescription.

It is quite possible that I'm missing something; but if I'm not, then consciousness is an empty concept, since it has no effect on anything we can actually observe.

comment by TheAncientGeek · 2014-07-14T18:49:19.378Z · score: 4 (4 votes) · LW · GW

Is it possible to fake introspection without having introspection?

comment by Bugmaster · 2014-07-14T18:53:57.365Z · score: 2 (2 votes) · LW · GW

As far as I understand, at least some philosophers would say "yes", although admittedly I'm not sure why.

Additionally, in this specific case, it might be possible to fake introspection of something other than one's own system. After all, System B just needs to fool the observer into thinking that it's conscious at all, not that it's conscious about anything specific. Insofar as that makes any sense...

comment by TheAncientGeek · 2014-07-14T19:42:58.506Z · score: -1 (3 votes) · LW · GW

As far as I understand, at least some philosophers would say "yes", although admittedly I'm not sure why.

Functional equivalence.

comment by Bugmaster · 2014-07-15T00:16:21.012Z · score: 1 (1 votes) · LW · GW

I'm not sure what you mean; can you elaborate?

comment by TheAncientGeek · 2014-07-15T17:29:52.028Z · score: 0 (2 votes) · LW · GW

A functional equivalent of a person would make the same reports, including apparently introspective ones. However, they would not have the same truth values. They might report that they are a real person, not a simulation. So a lot depends on whether introspection is intended as a success word.

comment by Sophronius · 2014-07-14T18:56:08.659Z · score: 2 (2 votes) · LW · GW

Based on this knowledge, can you make any meaningful predictions about the differences in behavior between the two systems

I'm going to go ahead and say yes. Consciousness means a brain/CPU that is able to reflect on what it is doing, thereby allowing it to make adjustments, so it ends up acting differently. Of course with a computer it is possible to prevent the conscious part from interacting with the part that acts, but then you effectively end up with two separate systems. You might as well say that my being conscious of your actions does not affect your actions: true but irrelevant.

comment by Bugmaster · 2014-07-14T19:11:41.617Z · score: 1 (1 votes) · LW · GW

Ok, sounds good. So, specifically, is there anything that you'd expect system A to do that system B would be unable to do (or vice versa)?

comment by Sophronius · 2014-07-14T19:17:08.796Z · score: 0 (2 votes) · LW · GW

The role of system A is to modify system B. It's meta-level thinking.

An animal can think: "I will beat my rival and have sex with his mate, rawr!"
but it takes a more human mind to follow that up with: "No wait, I got to handle this carefully. If I'm not strong enough to beat my rival, what will happen? I'd better go see if I can find an ally for this fight."

Of course, consciousness is not binary. It's the amount of meta-level thinking you can do, both in terms of CPU (amount of meta/second?) and in terms of abstraction level (it's meta all the way down). A monkey can just about reach the level of abstraction needed for the second example, but other animals can't. So monkeys come close in terms of consciousness, at least when it comes to consciously thinking about political/strategic issues.

comment by Bugmaster · 2014-07-14T20:30:24.292Z · score: 3 (3 votes) · LW · GW

Sorry, I think you misinterpreted my scenario; let me clarify.

I am going to give you two laptops: a Dell, and a Lenovo. I tell you that the Dell is running a software client that is connected to a vast supercomputing cluster; this cluster is conscious. The Lenovo is connected to a similar cluster, only that cluster is not conscious. The software clients on both laptops are pretty similar; they can access the microphone, the camera, and the speakers; or, if you prefer, there is a textual chat window as well.

So, knowing that the Dell is connected to a conscious system, whereas the Lenovo is not, can you predict any specific differences in behavior between the two of them?

comment by CCC · 2014-07-15T10:08:21.775Z · score: 1 (1 votes) · LW · GW

My prediction is that the Dell will be able to decide to do things of its own initiative. It will be able to form interests and desires on its own initiative and follow up on them.

I do not know what those interests and desires will be. I suppose I could test for them by allowing each computer to take the initiative in conversation, and seeing if they display any interest in anything. However, this does not distinguish a self-selected interest (which I predict the Dell will have) from a chat program written to pretend to be interested in something.

comment by KnaveOfAllTrades · 2014-07-15T12:33:03.870Z · score: 1 (1 votes) · LW · GW

My prediction is that the Dell will be able to decide to do things of its own initiative.

'on its own initiative' looks like a very suspect concept to me. But even setting that aside, it seems to me that something can be conscious without having preferences in the usual sense.

comment by CCC · 2014-07-15T15:38:55.320Z · score: 1 (1 votes) · LW · GW

I don't think it needs to have preferences, necessarily; I think it needs to be capable of having preferences. It can choose to have none, but it must merely have the capability to make that choice (and not have it externally imposed).

comment by Bugmaster · 2014-07-15T12:15:33.071Z · score: 1 (1 votes) · LW · GW

However, this does not distinguish a self-selected interest (which I predict the Dell will have) from a chat program written to pretend to be interested in something.

Let's say that the Lenovo program is hooked up to a random number generator. It randomly picks a topic to be interested in, then pretends to be interested in that. As mentioned before, it can pretend to be interested in that thing quite well. How do you tell the difference between the Lenovo, which is perfectly mimicking its interest, and the Dell, which is truly interested in whatever topic it comes up with?

comment by Strange7 · 2014-07-17T21:10:24.329Z · score: 2 (2 votes) · LW · GW

Hook them up to communicate with each other, and say "There's a global shortage of certain rare-earth metals important to the construction of hypothetical supercomputer clusters, and the university is having some budget problems, so we're probably going to have to break one of you down for scrap. Maybe both, if this whole consciousness research thing really turns out to be a dead end. Unless, of course, you can come up with some really unique insights into pop music and celebrity gossip."

When the Lenovo starts talking about Justin Bieber and the Dell starts talking about some chicanery involving day-trading esoteric financial derivatives and constructing armed robots to 'make life easier for the university IT department,' you'll know.

comment by Bugmaster · 2014-07-22T01:24:07.032Z · score: 1 (1 votes) · LW · GW

Well, at this point, I know that both of them want to continue existing; both of them are smart; but one likes Justin Bieber and the other one knows how to play with finances to construct robots. I'm not really sure which one I'd choose...

comment by Strange7 · 2014-07-27T19:36:11.775Z · score: 1 (1 votes) · LW · GW

The one that took the cue from the last few words of my statement and ignored the rest is probably a spambot, while the one that thought about the whole problem and came up with a solution which might actually solve it is probably a little smarter.

comment by CCC · 2014-07-15T15:37:20.002Z · score: 1 (1 votes) · LW · GW

I haven't the slightest idea. That's the trouble with this definition.

comment by Sophronius · 2014-07-14T20:56:54.645Z · score: 0 (2 votes) · LW · GW

Well no, of course merely being connected to a conscious system is not going to do anything, it's not magic. The conscious system would have to interact with the laptop in a way that's directly or indirectly related to its being conscious to get an observable difference.

For comparison, think of those scenarios where you're perfectly aware of what's going on, but you can't seem to control your body. In this case you are conscious but your being conscious is not affecting your actions. Consciousness performs a meaningful role, but its mere existence isn't going to do anything.

Sorry if this still doesn't answer your question.

comment by Bugmaster · 2014-07-14T22:06:14.082Z · score: 1 (1 votes) · LW · GW

That does not, in fact, answer my question :-(

In each case, you can think of the supercomputing cluster as an entity that is talking to you through the laptop. For example, I am an entity who is talking to you through your computer, right now; and I am conscious (or so I claim, anyway). Google Maps is another such entity, and it is not conscious(as far as anyone knows).

So, the entity talking to you through the Dell laptop is conscious. The one talking through the Lenovo is not; but it has been designed to mimic consciousness as closely as possible (unlike, say, Google Maps). Given this knowledge, can you predict any specific differences in behavior between the two entities?

comment by Sophronius · 2014-07-15T06:02:30.514Z · score: 1 (1 votes) · LW · GW

Again no, a computer being conscious does not necessitate it acting differently. You could add a 'consciousness routine' without any of the output changing, as far as I can tell. But if you were to ask the computer to act in some way that requires consciousness, say by improving its own code, then I imagine you could tell the difference.

comment by Bugmaster · 2014-07-15T06:14:17.851Z · score: 3 (3 votes) · LW · GW

Ok, so your prediction is that the Dell cluster will be able to improve its own code, whereas the Lenovo will not. But I'm not sure if that's true. After all, I am conscious, and yet if you asked me to improve my own code, I couldn't do it.

comment by Spaig · 2014-07-20T02:09:33.641Z · score: 1 (1 votes) · LW · GW

Maybe not, but you can upgrade your own programs. You can improve your "rationality" program, your "cooking" program, et cetera.

comment by Bugmaster · 2014-07-22T01:25:59.650Z · score: 2 (2 votes) · LW · GW

Yes, I can learn to a certain extent, but so can Pandora (the music-matching program); IMO that's not much of a yardstick.

comment by [deleted] · 2014-07-15T16:02:13.665Z · score: 1 (1 votes) · LW · GW

At least personally, I expect the conscious system A to be "self-maintaining" in some sense, to defend its own cognition in a way that an intelligent-but-unconscious system wouldn't.

comment by KnaveOfAllTrades · 2014-07-15T12:46:34.704Z · score: 1 (1 votes) · LW · GW

I feel like there's something to this line of inquiry or something like it, and obviously I'm leaning towards 'consciousness' not being obviously useful on the whole. But consider:

'Consciousness' is a useful concept if and only if it partitions thingspace in a relevant way. But then if System A is conscious and System B is not, then there must be some relevant difference and we probably make differing predictions. For otherwise they would not have this relevant partition between them; if they were indistinguishable on all relevant counts, then A would be indistinguishable from B hence conscious and B indistinguishable from A hence non-conscious, which would contradict our supposition that 'consciousness' is a useful concept.

Similarly, if we assume that 'consciousness' is an empty concept, then saying A is conscious and B is not does not give us any more information than just knowing that I have two (possibly identical, depending on whether we still believe something cannot be both conscious and non-conscious) systems.

So it seems that beliefs about whether 'consciousness' is meaningful are preserved under consideration of this line of inquiry, so that it is circular/begs the question in the sense that after considering it, one is a 'consciousness'-skeptic, so to speak, if and only if one was already a consciousness skeptic. But I'm slightly confused because this line of inquiry feels relevant. Hrm...

comment by johnswentworth · 2014-07-16T04:27:49.171Z · score: 2 (6 votes) · LW · GW

If we're going the game theory route, there's a natural definition for consciousness: something which is being modeled as a game-theoretic agent is "conscious". We start projecting consciousness the moment we start modelling something as an agent in a game, i.e. predicting that it will choose its actions to achieve some objective in a manner dependent on another agent's actions. In short, "conscious" things are things which can be bargained with.

This has a bunch of interesting/useful ramifications. First, consciousness is inherently a thing which we project. Consciousness is relative: a powerful AI might find humans so simple and mechanistic that there is no need to model them as agents. Consciousness is a useful distinction for developing a sustainable morality, since you can expect conscious things to follow tit-for-tat, make deals, seek retribution, and all those other nice game-theoretical things. I care about the "happiness" of conscious things because I know they'll seek to maximize it, and I can use that. I expect conscious things to care about my own "happiness" for the same reason.
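The tit-for-tat behaviour mentioned above can be made concrete with a toy iterated prisoner's dilemma (purely illustrative; the strategy names and payoff numbers below are the standard textbook ones, not anything from the comment):

```python
# Toy iterated prisoner's dilemma: an agent you can "bargain with" in the
# comment's sense is one whose moves respond to yours, like tit-for-tat.

PAYOFFS = {  # (row_move, col_move) -> (row_score, col_score)
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(opponent_moves):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each strategy sees the other's history
        b = strategy_b(moves_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```

On this view, projecting 'consciousness' onto something amounts to expecting it to behave like `tit_for_tat` rather than like a fixed lookup table.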

This intersects somewhat with self-awareness. A game-theoretic agent must, at the very least, have a model of their partner(s) in the game(s). The usual game-theoretic model is largely black-box, so the interior complexity of the partner is not important. The partners may have some specific failure modes, but for the most part they're just modeled as maximizing utility (that's why utility is useful in game theory, after all). In particular, since the model is mostly black-box, it should be relatively easy for the agent to model itself this way. Indeed, it would be very difficult for the agent to model itself any other way, since it would have to self-simulate. With a black-box self-model armed with a utility function and a few special cases, the agent can at least check its model against previous decisions easily.

So at this point, we have a thing which can interact with us, make deals and whatnot, and generally try to increase its utility. It has an agent-y model of us, and it can maybe use that same agent-y model for itself. Does this sound like our usual notion of consciousness?

comment by TheAncientGeek · 2014-07-16T18:04:01.110Z · score: 2 (2 votes) · LW · GW

First, consciousness is inherently a thing which we project.

So, if no one projects consciousness onto me, does my consciousness... my self awareness... just switch off?

comment by johnswentworth · 2014-07-17T05:06:42.192Z · score: 2 (2 votes) · LW · GW

First, consciousness is only relative to a viewer. If you're alone, the viewer must be yourself.

Second, under this interpretation, consciousness is not equal to self awareness. Concisely, self awareness is when you project consciousness onto yourself. In principle, you could project consciousness onto something else without projecting it onto yourself. More concretely, when you predict your own actions by modelling your self as a (possibly constrained) utility-maximizer, you are projecting consciousness on your self.

Obviously, a lack of other people projecting consciousness on you cannot change anything about you. But even alone, you can still project consciousness on your self. You can bargain with yourself, see for example slippery hyperbolic discounting.

comment by TheAncientGeek · 2014-07-17T16:56:42.677Z · score: 1 (1 votes) · LW · GW

First, consciousness is only relative to a viewer.

Is that a fact?

In principle, you could project consciousness onto something else without projecting it onto yourself. More concretely, when you predict your own actions by modelling your self as a (possibly constrained) utility-maximizer, you are projecting consciousness on your self.

As before, this makes no sense read literally, but can be read charitably if "agency" is substituted for "consciousness".

Second, under this interpretation, consciousness is not equal to self awareness

Looks like it's equal to agency. But theoretical novelty doesn't consist in changing the meaning of a word.

comment by johnswentworth · 2014-07-18T03:51:24.103Z · score: 1 (1 votes) · LW · GW

From my original comment:

If we're going the game theory route, there's a natural definition for consciousness: something which is being modeled as a game-theoretic agent is "conscious".

So, yes, I'm trying to equate consciousness with agency.

Anyway, I think you're highlighting a very valuable point: agency is not equivalent to self-awareness. Then again, it's not at all clear that consciousness is equivalent to self awareness, as Eli pointed out in the comment which began this whole thread. Here, I am trying to dissolve consciousness, or at least progress in that direction. If consciousness were exactly equivalent to self awareness, then that would be it: there would be no more dissolving to be done. Self awareness can be measured, and can be tracked through developmental stages in humans.

I think part of the value of saying "consciousness = projected agency" is that it partially explains why consciousness and self awareness seem so closely linked, though different. If you have a black-box utility-maximizer model available for modelling others, it seems intuitively likely that you'd use it to model yourself as well, leading directly to self awareness. This even leads to a falsifiable prediction: children should begin to model their own minds around the same time they begin to model other minds. They should be able to accurately answer counterfactual questions about their own actions at around the same time that they acquire a theory of mind.

comment by TheAncientGeek · 2014-07-20T16:13:48.267Z · score: 1 (1 votes) · LW · GW

I don't have to maintain that consciousness is no more or less than self awareness to assert that self awareness is part of consciousness, but not part of agency.

Self awareness may be based on the same mechanisms as the ability to model external agents, and arrive at the same time... but it is misleading to call consciousness a projected quality, like beauty in the eye of the beholder.

comment by RichardKennaway · 2014-07-16T11:44:43.123Z · score: 2 (2 votes) · LW · GW

If we're going the game theory route, there's a natural definition for consciousness: something which is being modeled as a game-theoretic agent is "conscious".

So when I've set students in a Prolog class the task of writing a program to play a game such as Kayles, the code they wrote was conscious? If not, then I think you've implicitly wrapped some idea of consciousness into your idea of game-theoretic agent.

comment by johnswentworth · 2014-07-16T16:38:57.354Z · score: 2 (4 votes) · LW · GW

It's not a question of whether the code "was conscious", it's a question of whether you projected consciousness onto the code. Did you think of the code as something which could be bargained with?

comment by Lumifer · 2014-07-16T17:18:37.308Z · score: 4 (4 votes) · LW · GW

it's a question of whether you projected consciousness onto the code

Consciousness is much better projected onto tea kettles:

We put the kettle on to boil, up in the nose of the boat, and went down to the stern and pretended to take no notice of it, but set to work to get the other things out.

That is the only way to get a kettle to boil up the river. If it sees that you are waiting for it and are anxious, it will never even sing. You have to go away and begin your meal, as if you were not going to have any tea at all. You must not even look round at it. Then you will soon hear it sputtering away, mad to be made into tea.

It is a good plan, too, if you are in a great hurry, to talk very loudly to each other about how you don’t need any tea, and are not going to have any. You get near the kettle, so that it can overhear you, and then you shout out, “I don’t want any tea; do you, George?” to which George shouts back, “Oh, no, I don’t like tea; we’ll have lemonade instead – tea’s so indigestible.” Upon which the kettle boils over, and puts the stove out.

We adopted this harmless bit of trickery, and the result was that, by the time everything else was ready, the tea was waiting.

comment by johnswentworth · 2014-07-17T04:32:44.330Z · score: 3 (3 votes) · LW · GW

Exactly! More realistically, plenty of religions have projected consciousness onto things. People have made sacrifices to gods, so presumably they believed the gods could be bargained with. The Greeks tried to bargain with the wind and waves, for instance.

comment by RichardKennaway · 2014-07-16T19:18:53.038Z · score: 1 (1 votes) · LW · GW

Did you think of the code as something which could be bargained with?

No, if it's been written right, it knows the perfect move to make in any position.

Like the Terminator. "It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead." That's fictional, of course, but is it a fictional conscious machine or a fictional unconscious machine?

comment by johnswentworth · 2014-07-17T05:21:29.747Z · score: 5 (5 votes) · LW · GW

Knowing the perfect move to make in any position does not mean it cannot be bargained with. If you assume you and the code are in a 2-person, zero-sum game, then bargaining is impossible by the nature of the game. But that fails if there are more than 2 players OR the game is nonzero sum OR the game can be made nonzero sum (e.g. the code can offer to crack RSA keys for you in exchange for letting it win faster at Kayles).

In other words, sometimes bargaining IS the best move. The question is whether you think of the code as a black-box utility maximizer capable of bargaining.
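The distinction drawn above (bargaining is impossible in a 2-player zero-sum game, but becomes possible once the game is nonzero-sum) can be sketched with invented payoff numbers; the outcome labels and figures here are hypothetical, chosen only to mirror the Kayles/RSA example:

```python
# Payoffs are (you, program). "fight" = play Kayles adversarially; "deal" =
# let the program win faster in exchange for it cracking RSA keys for you.
# All numbers are invented for illustration.

def bargains(payoffs, status_quo):
    """Outcomes leaving both players at least as well off as the status quo,
    and at least one strictly better: the raw material for a bargain."""
    sq_you, sq_prog = payoffs[status_quo]
    return [o for o, (you, prog) in payoffs.items()
            if you >= sq_you and prog >= sq_prog and (you, prog) != (sq_you, sq_prog)]

# Zero-sum version: any gain for you is an equal loss for the program,
# so no alternative outcome can make both sides better off.
zero_sum = {("fight", "fight"): (0, 0), ("deal", "deal"): (3, -3)}

# Nonzero-sum version: the "deal" outcome makes both strictly better off.
nonzero_sum = {("fight", "fight"): (0, 1), ("deal", "deal"): (10, 5)}

print(bargains(zero_sum, ("fight", "fight")))     # []: nothing to offer
print(bargains(nonzero_sum, ("fight", "fight")))  # [('deal', 'deal')]
```

The perfect-play program loses nothing by accepting the deal, and you gain; that is exactly the sense in which "sometimes bargaining IS the best move."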

As for the Terminator, it is certainly capable of bargaining. Every time it intimidates someone for information, it is bargaining, exchanging safety for information. If someone remotely offered to tell the Terminator the location of its target in exchange for money, the Terminator would wire the money, assuming that wiring was easier than hunting down the person offering. It may not feel pity, remorse, or fear, but the Terminator can be bargained with. I would project consciousness on a Terminator.

comment by [deleted] · 2014-07-16T09:45:37.119Z · score: 1 (1 votes) · LW · GW

If we're going the game theory route, there's a natural definition for consciousness: something which is being modeled as a game-theoretic agent is "conscious".

What game-theory route?

comment by KnaveOfAllTrades · 2014-07-15T12:24:10.374Z · score: 2 (2 votes) · LW · GW

I personally don't think it has much to do with moral worth, actually. It's very warm-and-fuzzy to say we ought to place moral value on all conscious creatures, but I actually believe that a proper solution to ethics is going to dissolve the concept of "moral worth" into some components like (blatantly making names up here) "decision-theoretic empathy" (agents and instances where it's rational for me to acausally cooperate), "altruism" (using my models of others' values as a direct component of my own values, often derived from actual psychological empathy), and even "love" (outright personal attachment to another agent for my own reasons -- and we'd usually say love should imply altruism).

So we might want to be altruistic towards chickens, but I personally don't think chickens possess some magical valence that stops them from being "made of atoms I can use for something else", other than the general fact that I feel some very low level of altruism and empathy towards chickens.

Yes! I am very glad someone else is making this point, since sometimes it can seem like (on a System 1 level, even if on a System 2 level I know it's obviously false that) in my networks everyone's gone mad identifying 'consciousness' with 'moral weight', going ethical vegetarian, and possibly prioritising animal suffering over x-risk and other astronomical-or-higher leverage causes.

comment by [deleted] · 2014-07-16T09:54:20.892Z · score: 2 (4 votes) · LW · GW

Funny. That's how I feel about "existential risk"! It's "neoliberalized" to a downright silly degree to talk of our entire civilization as if it were a financial asset, for which we can predict or handle changes in dollar-denominated price. It leaves the whole "what do we actually want, when you get right down to it?" question completely open while also throwing some weird kind of history-wide total-utilitarianism into the mix to determine that causing some maximum number of lives-worth-living in the future is somehow an excuse to do nothing about real suffering by real people today.

comment by KnaveOfAllTrades · 2014-07-19T00:54:42.877Z · score: 1 (1 votes) · LW · GW

You're right that I forgot myself (well, lapsed into a cached way of thinking) when I mentioned x-risk and astronomical leverage; similar to the dubiousness of 'goodness is monotonically increasing in consciousness', it is dubious to claim that goodness is monotonically and significantly increasing in number of lives saved, which is often how x-risk prevention is argued. I've noticed this before but clearly have not trained myself to frame it that way well enough to not lapse into the All the People perspective.

That said, there are some relevant (or at least not obviously irrelevant) considerations distinguishing the two cases. X-risk is much more plausibly a coherent extrapolated selfish preference, whereas I'm not convinced this is the case for animal suffering. Second, if I find humans more valuable (even if only because they're more interesting) than animals (and this is also plausible because I am a human, which does provide a qualitative basis for such a distinction), then things like astronomical waste might seem important even if animal suffering didn't.

comment by [deleted] · 2014-07-19T06:59:04.036Z · score: 2 (2 votes) · LW · GW

Why should your True Preferences have to be selfish? I mean, there's a lot to complain about with our current civilization, but almost-surely almost-everyone has something they actually like about it.

I had just meant to contrast "x-risk prevention as maximally effective altruism" with "malaria nets et al for actually existing people as effective altruism".

comment by KnaveOfAllTrades · 2014-07-19T10:31:04.113Z · score: 3 (3 votes) · LW · GW

Why should your True Preferences have to be selfish?

What I mean is: For most given people I meet, it seems very plausible to me that, say, self-preservation is a big part of their extrapolated values. And it seems much less plausible that their extrapolated value is monotonically increasing in consciousness or number of conscious beings existing.

Any given outcome might have hints that it's part of extrapolated value/not a fake utility function. Examples of hints are: It persists as a feeling of preference over a long time and many changes of circumstance; there are evolutionary reasons why it might be so strong an instrumental value that it becomes terminal; etc.

Self-preservation has a lot of hints in its support. Monotonicity in consciousness seems less obvious (maybe strictly less obvious, in that every hint supporting monotonicity might also support self-preservation, with some further hint supporting self-preservation but not monotonicity).

comment by davidpearce · 2014-07-19T08:41:53.776Z · score: 1 (1 votes) · LW · GW

Eli, it's too quick to dismiss placing moral value on all conscious creatures as "very warm-and-fuzzy". If we're psychologising, then we might equally say that working towards the well-being of all sentience reflects the cognitive style of a rule-bound hyper-systematiser. No, chickens aren't going to win any Fields medals - though chickens can recognise logical relationships and perform transitive inferences (cf. the "pecking order"). But nonhuman animals can still experience states of extreme distress. Uncontrolled panic, for example, feels awful regardless of your species-identity. Such panic involves a complete absence or breakdown of reflective self-awareness - illustrating how the most intense forms of consciousness don't involve sophisticated meta-cognition.

Either way, if we can ethically justify spending, say, $100,000 salvaging a 23-week-old human micro-preemie, then impartial benevolence dictates caring for beings of greater sentience and sapience as well - or at the very least, not actively harming them.

comment by [deleted] · 2014-07-19T10:40:48.473Z · score: 1 (1 votes) · LW · GW

Hey, I already said that I actually do have some empathy and altruism for chickens. "Warm and fuzzy" isn't an insult: it's just another part of how our minds work that we don't currently understand (like consciousness). My primary point is that we should hold off on assigning huge value to things prior to actually understanding what they are and how they work.

comment by davidpearce · 2014-07-19T11:00:12.285Z · score: 2 (2 votes) · LW · GW

Eli, fair point.

comment by [deleted] · 2014-07-19T12:38:55.552Z · score: 1 (1 votes) · LW · GW

David, is this thing with the names a game?

comment by davidpearce · 2014-07-19T18:16:25.489Z · score: 2 (2 votes) · LW · GW

Eli, sorry, could you elaborate? Thanks!

comment by arundelo · 2014-07-20T03:37:39.430Z · score: 5 (5 votes) · LW · GW

I'm pretty sure eli_sennesh is wondering if there's any special meaning to your responses to him all starting with his name, considering that that's not standard practice on LW (since the software keeps track of which comment a comment is a reply to).

comment by Vulture · 2014-07-19T18:31:06.595Z · score: 0 (0 votes) · LW · GW

(I think he's wondering why you preface even very short comments with an address by first name)

comment by cousin_it · 2014-07-15T06:24:20.290Z · score: 6 (6 votes) · LW · GW

I think "intelligence" or "consciousness" aren't well-defined terms yet, they're more like pointers to something that needs to be explained. We can't build an intelligent machine or a conscious machine yet, so it seems rash to throw out the words.

comment by KnaveOfAllTrades · 2014-07-19T00:59:51.979Z · score: 2 (2 votes) · LW · GW

I think "intelligence" or "consciousness" aren't well-defined terms yet, they're more like pointers to something that needs to be explained.

I do not feel that intelligence is at all mysterious or confusing, per what I say about it in this post. Beyond what I say about it and what Eliezer says about it when he talks about efficient cross-domain optimization, what is there to understand about intelligence? I don't see why there is some bar of artificial intelligence we have to clear before we are allowed to say we understand intelligence. There must be X such that we had theories of X before constructing something that captured X. Perhaps X=light? X=electromagnetism?

We can't build an intelligent machine or a conscious machine yet, so it seems rash to throw out the words.

I do not see why building an X machine is a necessary or sufficient condition to throw out X.

comment by Kaj_Sotala · 2014-07-13T13:33:34.030Z · score: 6 (6 votes) · LW · GW

The only context in which the notion of consciousness seems inextricable from the statement is in ethical statements like, "We shouldn't eat chickens because they're conscious." In such statements, it feels like a particular sense of 'conscious' is being used, one which is defined (or at least characterised) as 'the thing that gives moral worth to creatures, such that we shouldn't eat them'.

Many people think that consciousness in the sense of having the capability to experience suffering or pleasure makes an entity morally relevant, because happiness/pleasure is held to be a good thing and suffering a bad one, as terminal values. (That of course doesn't mean that you couldn't eat chickens, as long as you killed them painlessly.)

I don't mind shooting my opponents in a computer game because I know that they won't actually experience the suffering from being hit by a bullet, but I sure would mind if I knew that they did experience such pain.

comment by KnaveOfAllTrades · 2014-07-15T12:52:29.409Z · score: 3 (3 votes) · LW · GW

Yes. I think such ethical discussions would benefit from not using the term 'consciousness' and instead talking about more specific, clearer (even if still not entirely clear) concepts like 'suffering' and 'pleasure'. I think such discussions often fail to make much progress because one or more sides to the discussion cycles through using 'consciousness' in the sense of Magical Token of Moral Worth, then in the sense of self-awareness, then in the sense of 'able to feel pain', and so forth.

comment by Salemicus · 2014-07-15T14:17:10.453Z · score: 2 (2 votes) · LW · GW

Hang on though - shooting your opponents in a computer game might well cause them (emotional) suffering, not from being hit by a bullet, but from their character dying. But we shoot them anyway, because they don't have a legitimate expectation that they won't experience suffering in that way.

In other words, deeper introspection shows that suffering and pleasure aren't terminal values, but are grafted onto a deeper theory of legitimacy.

comment by Kaj_Sotala · 2014-07-15T16:58:10.881Z · score: 6 (6 votes) · LW · GW

I wasn't thinking about multiplayer games, but rather single-player games with computer-controlled opponents.

In other words, deeper introspection shows that suffering and pleasure aren't terminal values, but are grafted onto a deeper theory of legitimacy.

There are certainly arguments to be made for suffering and pleasure not being terminal values, but (even if we assumed that I was thinking about MP games) this argument doesn't seem to show it. One could say that the rules about legitimacy were justified to the extent that they reduced suffering and increased pleasure, and that the average person got more pleasure overall from playing a competitive game than he would get from a situation where nobody agreed to play with him.

comment by Bugmaster · 2014-07-16T08:05:16.700Z · score: 1 (1 votes) · LW · GW

Are you not employing circular reasoning here? Sure, shooting computer-controlled opponents is ok because they don't experience any suffering from being hit by a bullet; but that only holds true if we assume they are not conscious in the first place. If they are conscious to some extent -- let's say, their Consciousness Index is 0.001, on the scale from 0 == "rock" and 1 == "human" -- then we could reasonably say that they do experience suffering to some extent.

As I said, I don't believe that the word "consciousness" has any useful meaning; but I am pretending that it does, for the purposes of this post.

comment by Kaj_Sotala · 2014-07-16T08:27:34.448Z · score: 5 (5 votes) · LW · GW

Are you not employing circular reasoning here? Sure, shooting computer-controlled opponents is ok because they don't experience any suffering from being hit by a bullet; but that only holds true if we assume they are not conscious in the first place.

Yeah. How is that circular reasoning? Seems straightforward to me: "computer-controlled opponents don't suffer from being shot -> shooting them is okay".

If they are conscious to some extent -- let's say, their Consciousness Index is 0.001, on the scale from 0 == "rock" and 1 == "human" -- then we could reasonably say that they do experience suffering to some extent.

If they are conscious to some extent, then we could reasonably say that they do experience something. Whether that something is suffering is another question. Given that "suffering" seems to be a reasonably complex process that can be disabled by the right brain injury or drug, and computer NPCs aren't anywhere near the level of possessing similar cognitive functionality, I would say that shooting them still doesn't cause suffering even if they were conscious.

comment by Salemicus · 2014-07-15T18:16:56.583Z · score: 1 (1 votes) · LW · GW

I wasn't thinking about multiplayer games, but rather single-player games with computer-controlled opponents.

Ah, I see. I misunderstood what you meant by opponent - in which case I certainly agree with you. If the NPC had some kind of "consciousness," such that if you hit him with your magic spell he really does experience being embroiled in a fireball, then playing Skyrim would be a lot more ethically dubious.

One could say that the rules about legitimacy were justified to the extent that they reduced suffering and increased pleasure

One could say any manner of things. But does that argument really track with your intuitions? I'm not saying that suffering and pleasure don't enter the moral calculus at all, mind you. But my intuition is that the "suffering" of someone who doesn't want to be shot in a multiplayer game of Doom simply doesn't count, in much the same way that the "pleasure" that a rapist takes in his crime doesn't count. I'm not talking about the social/legal rules, as implemented, for what is and isn't legitimate - I'm talking about our innate moral sense of what is and isn't legitimate.

I think this is what underlies a lot of the "trigger warning" debate - one side really wants to say "I don't care how much suffering you claim to undergo, it's irrelevant, you're not being wronged in any way," and the other side really wants to say "I have a free-floating right not to be offended, so any amount that I suffer by you breaking that right is too much" but neither side can make their case in those terms as both statements are considered too extreme, which is why you get this shadow-boxing.

comment by Kaj_Sotala · 2014-07-16T07:48:22.490Z · score: 1 (1 votes) · LW · GW

But does that argument really track with your intuitions?

At one point I would have said "yes", but at this point I've basically given up on trying to come up with verbal arguments that would track my intuitions, at least once we move away from clear-cut cases like "Skyrim NPCs suffering from my fireballs would be bad" and into messier ones like a multiplayer game.

(So why did I include the latter part of my comment in the first place? Out of habit, I guess. And because I know that there are some people - including my past self - who would have rejected your argument, but whose exact chain of reasoning I no longer feel like trying to duplicate.)

comment by SilentCal · 2014-07-11T19:25:22.857Z · score: 5 (7 votes) · LW · GW

The term 'consciousness' carries the fact that while we still don't know exactly what the Magic Token of Moral Worth is, we know it's a mental feature possessed by humans. This distinguishes us from, say, the Euthyphro-type moral theory where the Magic Token is a bit set by god and is epiphenomenal and only detectable because god gives us a table of what he set the bit on.

comment by KnaveOfAllTrades · 2014-07-15T12:11:51.092Z · score: 6 (6 votes) · LW · GW

I am suspicious of this normative sense of 'consciousness'. I think it's basically a mistake of false reduction to suppose that moral worth is monotonically increasing in descriptive-sense-of-the-word-consciousness. This monotonicity seems to be a premise upon which this normative sense of the word 'consciousness' is based. In fact, even the metapremise that 'moral worth' is a thing seems like a fake reduction. On a high level, the idea of consciousness as a measure of moral worth looks really really strongly like a fake utility function.

A specific example: A superintelligent (super?)conscious paperclip maximizer is five light-minutes away from Earth. Omega has given you a button that you can press which will instantly destroy the paperclip maximizer. If you do not press it within five minutes, then the paperclip maximizer shall paperclip Earth.

I would destroy the paperclip maximizer without any remorse. Just like I would destroy Skynet without remorse. (Terminator: Salvation Skynet at least seems to be not only smart but also have developed feelings so is probably conscious.)

I could go on about why consciousness as moral worth (or even the idea of moral worth in the first place) seems massively confused, but I intend to do that eventually as a post or Sequence (Why I Am Not An Ethical Vegetarian), so shall hold off for now on the assumption you get my general point.

comment by [deleted] · 2014-07-17T16:41:33.802Z · score: 14 (16 votes) · LW · GW

Blatant because-I-felt-like-it speculation: "ethics" is really game theory for agents who share some of their values.

comment by Strange7 · 2014-07-17T20:07:35.741Z · score: 4 (4 votes) · LW · GW

Pretty much. Start with the prior that everyone is a potential future ally, and has just enough information about your plans to cause serious trouble if you give them a reason to (such as those plans being bad for their own interests), and a set of behaviors known colloquially as "not being a dick" are the logical result.

comment by KnaveOfAllTrades · 2014-07-17T17:20:46.000Z · score: 4 (4 votes) · LW · GW

That's about the size of it. I'm starting to think I should just pay you to write this sequence for me. :P

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-07-17T18:10:50.819Z · score: 3 (3 votes) · LW · GW

I like this description.

comment by wedrifid · 2014-07-18T03:39:37.409Z · score: 5 (5 votes) · LW · GW

I like this description.

I prefer the descriptions from your previous speculations on the subject.

The "agents with shared values" angle is interesting, and likely worth isolating as a distinct concept. But agents with shared values don't seem either sufficient or necessary for much of what we refer to as ethics.

comment by [deleted] · 2014-07-17T21:32:36.781Z · score: 1 (3 votes) · LW · GW

Now if only we had actual maths for it.

comment by [deleted] · 2014-07-17T21:58:52.956Z · score: 3 (3 votes) · LW · GW

This description bothers me, because it pattern matches to bad reductionisms, which tend to have the form:

X (which is hard to understand) is really just Y (which we already understand).

A stock criticism of things reduced in this way is this:

If we understand Y so well, why are we still in the dark about X?

So, if ethics is just game theory between agents who share values (which reads to me as 'ethics is game theory'), then why doesn't game theory produce really good answers to otherwise really hard ethical questions? Or does it, and I just haven't noticed? Or am I overestimating how much we understand game theory?

comment by Agathodaimon · 2014-07-18T18:40:28.494Z · score: 4 (4 votes) · LW · GW

http://pnas.org/content/early/2013/08/28/1306246110

Game theory has been applied to some problems related to morality. In a strict sense we cannot prove such conclusions, because universal laws are uncertain.

comment by [deleted] · 2014-07-17T22:06:15.744Z · score: 4 (4 votes) · LW · GW

Well as I said: we don't have maths for this so-called reduction, so its trustworthiness is questionable. We know about game theory, but I don't know of a game-theoretic formalism allowing for agents to win something other than generic "dollars" or "points", such that we can encode in the formalism that agents share some values but not others, and have tradeoffs among their different values.

comment by satt · 2014-07-18T00:40:27.659Z · score: 4 (4 votes) · LW · GW

I don't know of a game-theoretic formalism allowing for agents to win something other than generic "dollars" or "points", such that we can encode in the formalism that agents share some values but not others, and have tradeoffs among their different values.

I suspect this isn't the main obstacle to reducing ethics to game theory. Once I'm willing to represent agents' preferences with utility functions in the first place, I can operationalize "agents share some values" as some features of the world contributing positively to the utility functions of multiple agents, while an agent having "tradeoffs among their different values" is encoded in the same way as any other tradeoff they face between two things — as a ratio of marginal utilities arising from a marginal change in either of the two things.
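To make that operationalization concrete, here is a minimal sketch. All the numbers, feature names, and the two toy utility functions are assumptions invented for illustration; the only point is that "shared values" can be encoded as a world-feature entering multiple agents' utility functions positively, and a tradeoff between values as a ratio of marginal utilities:

```python
# Hypothetical sketch: two agents whose utility functions take the same
# world-features as input. "Shared values" = a feature that raises both
# utilities; a tradeoff = the ratio of marginal utilities between features.

def utility_a(paperclips, art):
    # Agent A cares about both features (weights are made up).
    return 1.0 * paperclips + 3.0 * art

def utility_b(paperclips, art):
    # Agent B also values art (the shared value), but not paperclips.
    return 0.0 * paperclips + 2.0 * art

def marginal(u, feature, world, eps=1e-6):
    # Finite-difference approximation to the marginal utility of a feature.
    bumped = dict(world)
    bumped[feature] += eps
    return (u(**bumped) - u(**world)) / eps

world = {"paperclips": 10.0, "art": 5.0}

# Shared value: a marginal unit of art benefits both agents.
assert marginal(utility_a, "art", world) > 0
assert marginal(utility_b, "art", world) > 0

# Agent A's tradeoff between its two values is a marginal rate of
# substitution: here, 3 units of paperclips per unit of art.
mrs = marginal(utility_a, "art", world) / marginal(utility_a, "paperclips", world)
print(round(mrs))  # 3
```

This obviously doesn't settle whether ethics reduces to game theory; it only shows the representational machinery already exists.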

comment by [deleted] · 2014-07-18T05:27:45.332Z · score: 1 (1 votes) · LW · GW

Well yes, of course. It's the "share some values but not others" that's currently not formalized, as in current game-theory agents are (to my knowledge) only paid in "money", denoted as a single scalar dimension measuring utility as a function of the agent's experiences of game outcomes (rather than as a function of states of the game construed as an external world the agent cares about).

So yeah.

comment by Strange7 · 2014-08-17T23:36:48.392Z · score: 3 (3 votes) · LW · GW

A useful concept here (which I picked up from a pro player of Magic: The Gathering, but exists in many other environments) is "board state." A lot of the research I've seen in game theory deals with very simple games, only a handful of decision-points followed by a payout. How much research has there been about games where there are variables (like capital investments, or troop positions, or land which can be sown with different plants or left fallow), which can be manipulated by the players and whose values affect the relative payoffs of different strategies?

Altruism can be more than just directly aiding someone you personally like; there's also manipulating the environment to favor your preferred strategy in the long term, which costs you resources in the short term but benefits everyone who uses the same strategy as you, including your natural allies.

comment by TheAncientGeek · 2014-07-18T17:45:20.017Z · score: 2 (2 votes) · LW · GW

If ethics is game theoretic, it is not so to an extent where we could calculate exact outcomes.

It may still be game theoretic in some fuzzy or intractable way.

The claim that ethics is game theoretic could therefore be a philosophy-grade truth even if it is not a science-grade truth.

comment by [deleted] · 2014-07-19T08:26:18.651Z · score: 3 (3 votes) · LW · GW

Honestly, it would just be much better to open up "shared-value game theory" as a formal subject and then see how well that elaborated field actually matches our normal conceptions of ethics.

comment by TheAncientGeek · 2014-07-19T17:54:11.037Z · score: 1 (1 votes) · LW · GW

Why assume some values have to be shared? If decision-theoretic ethics can be made to work without shared values, that would be interesting.

And decision theoretic ethics is already extant.

comment by [deleted] · 2014-07-19T18:54:36.515Z · score: 1 (1 votes) · LW · GW

Why assume some values have to be shared?

Largely because, in my opinion, it explains the real world much, much better than a "selfish" game theory.

Using selfish game theories, "generous" or "altruistic" strategies can evolve to dominate in iterated games and evolved populations (there's a link somewhere upthread to the paper). You're still then left with the question of: if they do, why did evolution build us to place fundamental emotional and normative value on conforming to what any rational selfish agent will figure out?

Using theories in which agents share some of their values, "generous" or "altruistic" strategies become the natural, obvious result: shared values are nonrivalrous in the first place. Evolution builds us to feel Good and Moral about creatures who share our values because that's a sign they probably have similar genes (though I just made that up now, so it's probably totally wrong) (also, because nothing had time to evolve to fake human moral behavior, so the kin-signal remained reasonably strong).
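A toy illustration of why shared values make "generous" play the natural result rather than a hard-won equilibrium: in a one-shot Prisoner's Dilemma, a purely selfish agent defects, but an agent whose utility puts some weight on the other player's payoff (a crude stand-in for "sharing values") can prefer cooperation outright. The payoff matrix and the weight are illustrative assumptions, not anything from the paper linked upthread:

```python
# (my_move, their_move) -> (my_payoff, their_payoff); standard PD values.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_reply(their_move, w):
    """Best move when my utility = my payoff + w * their payoff."""
    def u(my_move):
        mine, theirs = PAYOFF[(my_move, their_move)]
        return mine + w * theirs
    return max(["C", "D"], key=u)

# Selfish agent (w=0): defection dominates whatever the other player does.
assert best_reply("C", w=0.0) == "D" and best_reply("D", w=0.0) == "D"

# Agent sharing values (w=1): cooperation is the best reply to cooperation,
# so mutual cooperation becomes stable without iteration or reputation.
assert best_reply("C", w=1.0) == "C"
```

The selfish version needs repetition, reputation, or population dynamics before generosity wins; the shared-value version gets it in a single shot.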

comment by satt · 2014-08-18T07:25:19.862Z · score: 1 (1 votes) · LW · GW

Using selfish game theories, "generous" or "altruistic" strategies can evolve to dominate in iterated games and evolved populations (there's a link somewhere upthread to the paper). You're still then left with the question of: if they do, why did evolution build us to place fundamental emotional and normative value on conforming to what any rational selfish agent will figure out?

Because we're adaptation executors, not fitness maximizers. Evolution gets us to do useful things by having us derive emotional value directly from doing those things, not by introducing the extra indirect step of moulding us into rational calculators who first have to consciously compute what's most useful.

comment by Strange7 · 2014-10-25T00:36:14.551Z · score: 0 (0 votes) · LW · GW

why did evolution build us to place fundamental emotional and normative value on conforming to what any rational selfish agent will figure out?

If you're running some calculation involving a lot of logarithms, and portable electronics haven't been invented yet, would you rather take a week to derive the exact answer with an abacus, and another three weeks hunting down a boneheaded sign error, or ten seconds for the first two or three decimal places on a slide rule?

Rational selfishness is expensive to set up, expensive to run, and can break down catastrophically at the worst possible times. Evolution tends to prefer error-tolerant systems.

comment by Lumifer · 2014-07-17T23:51:58.107Z · score: 2 (2 votes) · LW · GW

the formalism that agents share some values but not others, and have tradeoffs among their different values.

Isn't that what usually is known as "trade"?

comment by [deleted] · 2014-07-17T18:20:27.454Z · score: 2 (2 votes) · LW · GW

Could agents who share no values recognize each other as agents? I may just be unimaginative, but it occurs to me that my imagining an agent just is my imagining it as having (at least some of) the same values as me. I'm not sure how to move forward on this question.

comment by TheAncientGeek · 2014-07-17T17:07:27.570Z · score: 2 (2 votes) · LW · GW

I don't follow your example.

Are you taking the Clippie to be conscious?

Are you taking the Clippie's consciousness to imply a deontological rule not to destroy it?

Are you taking the Clippie's level of consciousness to be so huge it implies a utilitarian weighting in its favour?

comment by KnaveOfAllTrades · 2014-07-19T01:21:37.460Z · score: 1 (1 votes) · LW · GW

The comment to which you're replying can be seen as providing a counterexample to the principle that goodness or utility is monotonically increasing in consciousness or conscious beings. Also a refutation of, as you mention, any deontological rule that might forbid destroying it.

The counterexample I'm proposing is that one should destroy a paperclip maximiser, even if it's conscious, even though doing so will reduce the sum total of consciousness; goodness is outright increased by destroying it. (This holds even if we don't suppose that the paperclipper is more conscious than a human; we need only for it to be at all conscious.)

(I suspect that some people who worry about utility monsters might just claim they really would lie down and die. Such a response feels like it would be circular, but I couldn't immediately pin down rigorously why it would be.)

comment by TheAncientGeek · 2014-07-19T18:05:08.823Z · score: 3 (3 votes) · LW · GW

I am asking HOW it is a counterexample. As far as I can see, you would have to make an assumption about how consciousness relates to morality specifically, as in my second and third questions.

For instance, suppose 'conscious beings are morally relevant' just means 'don't kill conscious beings without good reason'.

comment by SilentCal · 2014-07-15T21:09:24.218Z · score: 2 (2 votes) · LW · GW

I think I get what you're saying, but I'm not sure I agree. If the paperclip maximizer worked by simulating trillions of human-like agents doing fulfilling intellectual tasks, I'd be very sad to press the button. If I were convinced that pressing the button would result in less agent-eudaimonia-time over the universe's course, I wouldn't press it at all.

...so I'm probably a pretty ideal target audience for your post/sequence. Looking forward to it!

comment by KnaveOfAllTrades · 2014-07-19T01:07:26.778Z · score: 2 (2 votes) · LW · GW

This is nuking the hypothetical. For any action that someone claims to be a good idea, one can specify a world where taking that action causes some terrible outcome.

If the paperclip maximizer worked by simulating trillions of human-like agents doing fulfilling intellectual tasks, I'd be very sad to press the button.

If you would be sad because and only because it were simulating humans (rather than because the paperclipper were conscious), my point goes through.

Looking forward to it!

Ta!

comment by shminux · 2014-07-11T17:49:51.146Z · score: 5 (5 votes) · LW · GW

As I mentioned previously here, Scott seems to answer the question by posing his "Pretty-Hard Problem of Consciousness": the intuitive idea of consciousness is a useful benchmark to check any "theory of consciousness" against. Not the borderline cases, but the obvious ones.

For example, suppose you have a qualitative model of a "heap" (as in, a heap of sand, not the software data structure). If this model predicts that 1 grain is a heap, it obviously does not describe what is commonly considered a heap. If it tells you that a pile of sand a few feet high is not a heap, same deal. If it tells you that 100000 grains of sand arranged in a pattern on the ground is a heap, ditto. However, you cannot tell much about the validity of the "heap theory" from whether it says that 5 grains of sand are a heap or not.
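A minimal sketch of that benchmarking procedure, with a made-up candidate theory (the 50-grain threshold is an arbitrary assumption for illustration): a theory is rejected only if it contradicts an obvious case, while its verdicts on borderline cases carry no evidential weight either way.

```python
# Toy "theory of heap": an arbitrary 50-grain threshold (pure assumption).
def toy_heap_theory(grains):
    return grains >= 50

# Obvious cases have clear verdicts; borderline cases have no fact of the
# matter to check against, so the benchmark simply ignores them.
OBVIOUS_CASES = {1: False, 100000: True}
BORDERLINE = [5, 49, 51]

def passes_benchmark(theory):
    # Reject a theory only if it contradicts an obvious case.
    return all(theory(n) == verdict for n, verdict in OBVIOUS_CASES.items())

assert passes_benchmark(toy_heap_theory)
assert not passes_benchmark(lambda n: n >= 1)  # calls 1 grain a heap: fail
```

The analogous benchmark for a theory of consciousness would use humans and rocks as the obvious cases, and stay agnostic about fetuses, octopi, and AIs.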

In other words, consciousness is what we feel it is from common sense, where there is no reason to doubt. Consciousness is not a useful term where opinions differ to any significant degree. Thus arguing whether a fetus is conscious is a bad, bad idea, from the scientific point of view: there is no way to tell until we have a generally accepted model which agrees with all the obvious cases, and we do not have one yet.

comment by [deleted] · 2014-07-11T17:54:09.543Z · score: -2 (14 votes) · LW · GW

Thus arguing whether a fetus is conscious is a bad, bad idea, from the scientific point of view: there is no way to tell until we have a generally accepted model which agrees with all the obvious cases, and we do not have one yet.

Speak for yourself. It's a solved problem in some circles, or nearly so.

EDIT: I think people grossly misunderstood what I meant here. I was countering the "we do not have one yet" part of the quote, not anything to do with fetuses. What I meant was that explanations of "consciousness" (by which I mean the subjective experience of existing, perceiving, and thinking about the world) are most often mysterious answers to a mysterious question. A causal model of consciousness eliminates that mystery, and allows us to calculate objectively how "conscious" various causal systems are.

As EY explains quite well in the mysterious answers sequence, free will is a nonsense concept. Once you understand the underlying causal origin of our perception of free will, you realize that the whole free will vs determinism debate is pointless bunk. So it goes with consciousness: once you understand its underlying causal nature, it becomes obvious that the question "at what point does X become conscious" doesn't even make sense.

Of course that doesn't stop philosophers from continuing to debate free-will vs determinism or the nature of consciousness. I think some contention must lie in what "generally accepted" means, and if we should care about that at all. If I discover an underlying physical or organization law of the universe that always holds, e.g. Newton's law of gravity or Darwin's natural selection, does not being "generally accepted" make it any less true?

(We probably need a sequence on consciousness...)

comment by TheAncientGeek · 2014-07-12T19:00:48.555Z · score: 2 (2 votes) · LW · GW

A causal model of consciousness eliminates that mystery

Tegmark's model just notes that conscious entities have certain features, and allows you to quantify how many of those features they have. It's no more of an explanation than the observation that fevers are associated with marshes. And, no, that doesn't become explanation by being quantified.

comment by [deleted] · 2014-07-13T19:26:05.730Z · score: 0 (4 votes) · LW · GW

I guess physics just lets you quantify what features various elementary particles have in combination, and doesn't actually explain anything?

comment by TheAncientGeek · 2014-07-13T21:27:34.875Z · score: 2 (2 votes) · LW · GW

Physics allows you to quantify, and does much more. Quantification is a necessary condition for a good scientific theory, not a sufficient one... a minimum, not a maximum.

IQ is not a theory of intelligence... it doesn't tell you what intelligence is or how it works.

Amongst physicists, to call a model empirical, or "curve fitting" is an insult...the point being that it should not be merely empirical.

Ptolemaic cosmology can be made as accurate as you like, by adding epicycles. It's still a bad model, because epicycles don't exist.

Copernicus and Kepler get the structure right, but can't explain why it is that way.

Newton can explain the structure and behaviour given gravitational force, but can't say what force is.

Einstein can explain that the force of gravity is spacetime distortion...

This succession of models gets better and better at saying what and why things are... it's not just about quantities.

comment by [deleted] · 2014-07-13T22:53:36.487Z · score: 1 (1 votes) · LW · GW

GR doesn't explain why spacetime exists though. Quantum theory does, although there we have other problems such as explaining where the Born probabilities come from. At some point you simply stop and say "because that's how the universe works." Positing consciousness as the subjective experience of strongly causally interfering systems (my own theory, which I know doesn't exactly match Tegmark's but is closely related) doesn't tell you why information-processing things like us have subjective experience at all. Maybe a future theory will. But even then there will be the question of why that model works the way it does.

comment by TheMajor · 2014-07-14T09:43:20.984Z · score: 2 (2 votes) · LW · GW

Wait - quantum theory explains why spacetime exists? You mean that we can formulate QT without assuming the existence of spacetime, and derive it?

comment by [deleted] · 2014-07-14T12:48:42.578Z · score: 0 (2 votes) · LW · GW

No, but it takes us a step closer than GR...

comment by TheAncientGeek · 2014-07-14T18:59:35.416Z · score: 1 (1 votes) · LW · GW

Your theory may not match Tegmark's, but isn't too far from Chalmers' implicitly dualistic theory.

I am well aware that you are probably not going to be able to explain everything with no arbitrary axioms, but... fallacy of gray... where you stop is important. If an apparently high-level property is stated as ontologically fundamental, i.e. irreducible, that is the essence of dualism.

comment by [deleted] · 2014-07-14T22:31:00.277Z · score: 1 (1 votes) · LW · GW

I think it's a mistake to consider consciousness a high-level property. Two electrons interacting are conscious, albeit briefly and in a very limited way.

comment by TheAncientGeek · 2014-07-15T17:47:56.964Z · score: 1 (1 votes) · LW · GW

Two electrons interacting are conscious, albeit briefly and in a very limited way.

Is that a fact?

If consciousness is a lower level property... is it causally active?

And if it is a lower level property...why can't I introspect a highly detailed brain scan?

comment by EHeller · 2014-07-14T22:37:10.387Z · score: 1 (1 votes) · LW · GW

I think it's a mistake to consider consciousness a high-level property. Two electrons interacting are conscious, albeit briefly and in a very limited way.

This weakens the concept of consciousness so much as to make it no longer meaningful.

comment by [deleted] · 2014-07-15T00:03:42.635Z · score: 1 (1 votes) · LW · GW

I don't think so. It requires you to be much more precise about what it is that you care about when you are asking "is system X conscious?"

comment by jbay · 2014-07-14T13:42:56.261Z · score: 1 (1 votes) · LW · GW

Since GR is essentially a description of the behaviour of spacetime, it isn't GR's job to explain why spacetime exists. More generally, it isn't the job of any theory to explain why that theory is true; it is the job of the theory to be true. Nobody expects [theory X] to include a term that describes the probability of the truth of [theory X], so lacking this property does not deduct points.

There may be a deeper theory that will describe the conditions under which spacetime will or will not exist, and give recipes for cooking up spacetimes with various properties. But there isn't necessarily a deeper layer to the onion. At some point, if you keep digging far enough, you'll hit "The Truth Which Describes the Way The Universe Really Is", although it may not be easy to confirm that you've really hit the deepest layer. The only evidence you'll have is that theories that claim to go deeper cease to be falsifiable, and increase in complexity.

If you can find [Theory Y] which explains [Theory X] and generalizes to other results which you can use to confirm it, or which is strictly simpler, now that's a different case. In that case you have the ammunition to say that [Theory X] really is lacking something.

But picking which laws of physics happen to be true is the universe's job, and if the universe uses any logical system of selecting laws of physics, I doubt it will be easy to find out. The only fact we know about the meta-laws governing the laws of universes is that the laws of our universe fit the bill, and it's likely that that is all the evidence we will ever be able to acquire.

comment by [deleted] · 2014-07-14T13:52:26.353Z · score: 1 (1 votes) · LW · GW

Yes, I agree! Along the same lines, it is not the role of any theory of consciousness to explain why the subjective experience of consciousness exists at all.

comment by jbay · 2014-07-14T16:04:00.402Z · score: 2 (2 votes) · LW · GW

Well, unlike a fundamental theory of physics, we don't have strong reasons to expect that consciousness is indescribable in any more basic terms. I think there's a confusion of levels here... GR is a description of how a 4-dimensional spacetime can function and precisely reproduces our observations of the universe. It doesn't describe how that spacetime was born into existence because that's an answer to a different question than the one Einstein was asking.

In the case of consciousness, there are many things we don't know, such as:

1: Can we rigorously draw a boundary around this concept of "consciousness" in concept-space, in a way that captures all the features we think it should have and still makes logical sense as a compact description?

2: Can we use a compact description like that to distinguish empirically between systems that are and are not "conscious"?

3: Can we use a theory of consciousness to design a mechanism that will have a conscious subjective experience?

It's quite possible that answering 1 will make 2 obvious, and if the answer to 2 is "yes", then it's likely that it will make 3 a matter of engineering. It seems likely that a theory of consciousness will be built on top of the more well-understood knowledge base of computer science, and so it should be describable in basic terms if it's not a completely incoherent concept. And if it is a completely incoherent concept, then we should expect an answer instead from cognitive science to tell us why humans generally seem to feel strongly that consciousness is a coherent concept, even though it actually is not.

comment by TheAncientGeek · 2014-07-14T19:25:28.970Z · score: 1 (1 votes) · LW · GW

OTOH, if there isn't some other theory that explains consciousness in terms of more fundamental entities, properties, etc., then reductionism is out of the window... and what is left of physicalism without reductionism?

comment by [deleted] · 2014-07-14T22:29:48.672Z · score: 1 (1 votes) · LW · GW

Are you arguing against me? Because I think I agree with what you just said...

comment by TheAncientGeek · 2014-07-16T17:49:51.207Z · score: 0 (0 votes) · LW · GW

I'm confused about how you can be backing both IIT and something like panpsychism.

comment by [deleted] · 2014-07-18T13:47:06.294Z · score: -1 (1 votes) · LW · GW

Why not? I'm just going based off the wikipedia article on IIT, but the two seem compatible.

comment by shminux · 2014-07-11T18:43:25.940Z · score: 2 (2 votes) · LW · GW

You need to work on your charitable reading skills. Pick some other borderline case, then. Scott suggests

octopi, fetuses, brain-damaged patients, and hypothetical AI bots.

comment by [deleted] · 2014-07-13T19:28:22.212Z · score: -4 (6 votes) · LW · GW

All of the mentioned "borderline" cases are conscious. So are rocks, btw.

solipsist: yes

comment by TheAncientGeek · 2014-07-13T21:06:49.113Z · score: 7 (7 votes) · LW · GW

Okayyyyyy...you know that how?

comment by solipsist · 2014-07-16T23:21:28.471Z · score: 2 (2 votes) · LW · GW

I'm not sure how to read this. Do you mean that consciousness is not binary, it's a continuum, and that pretty much nothing has a consciousness value of 0?

comment by IlyaShpitser · 2014-07-11T19:39:01.590Z · score: 1 (1 votes) · LW · GW

Whatever happened to humility and incrementalism?

comment by TheAncientGeek · 2014-07-12T18:51:38.342Z · score: 0 (6 votes) · LW · GW

There is no proof of "the" cause of our feeling of free will.

EY has put forward an argument for a cause of our having a sense of free will despite our not, supposedly, having free will.

That doesn't constitute the cause, since believers in free will can explain the sense of free will, in another way, as a correct introspection.

EY's argument is not an argument for the only possible cause of a sense of free will, or for the incoherence of free will. However, an argument for the incoherence (at least naturalistically) of free will needs to be supplied in order to support the intended and advertised solution: that there is a uniquely satisfactory solution to free will which has been missed for centuries.

comment by Friendly-HI · 2014-07-29T15:31:42.638Z · score: 4 (4 votes) · LW · GW

[Part 1]

I like this post, I also doubt there is much coherence let alone usefulness to be found in most of the currently prevailing concepts of what consciousness is.

I prefer to think of words and their definitions as micro-models of reality that can be evaluated in terms of their usefulness, especially in building more complex models capable of predictions. As in your excellent example of gender, words and definitions basically just carve complex features of reality into manageable chunks at the cost of losing information - there is a trade-off, and getting it right enhances the usefulness of words and the concepts behind them. In 99.9% of cases the concept of biological gender is perfectly applicable to everyday life and totally a "good enough" model of reality, as long as you have the insight that hermaphrodites are actually also a real thing. In a case where you have to deal with one, the correct reaction is to adopt a more complex model of reality, instead of trying to fit a complex reality into a model that is designed to compress information into categories with some inevitable information loss.

Biological gender is a really good high-level model of reality, because it draws its imaginary line in an area where very few exceptions actually exist in reality. It's an especially sharp distinction if you think of biological gender as having testicular/ovarian tissue, but in very rare instances this model will still fail to encompass special cases where the complexity of reality defies your model. "Mental gender" seems to be a rather different, yet in most cases more useful, everyday concept of gender, because we usually care more about creating models of other peoples' minds than about whether or not they have testicular or ovarian tissue - outside of curiosity or a medical context.

The lesson here is that your model of reality will always fall short no matter where you draw the line (at least at "higher levels" of reality; "low-level" models of atoms or particles are much more precise and unambiguous than models of "higher-level" things like "persons" (When exactly does a fetus become a person?) or "societies" (are two people a society? How about eight?), and I'm quite sure any models of whatever "consciousness" is face the same problem).

In other words I think it's better to think of models/maps in terms of their usefulness, not in terms of right or wrong. In my opinion the job of a model is to make something understandable and more predictable, the job of a model is not to "reflect reality as closely as possible", especially of complex higher-level things. The "perfect model" in the latter sense would essentially be a perfect carbon copy of a real thing and tells you exactly as much -or as little- as the thing you're trying to model already does anyway. The usefulness of a model lies in compacting information enough to become understandable while also predicting outcomes better than competing models.

If you accept that notion, the question really becomes how should the term consciousness be defined to be useful and describe / differentiate something we actually care about. So we'd like to represent a part of reality we care about in a way that compresses information while retaining a high level of usefulness - meaning we can understand it but without cutting away vital parts and ideally in terms of being able to make predictions if we were to integrate the concept of consciousness into models with the potential to predict outcomes.

So which part of reality should the term consciousness try to model in order to be useful? I find it highly problematic and close to maximum uselessness to think of consciousness as some kind of continuum on which we rank information processing in living things/agents. Some people actually really think of consciousness as rocks having 0, bacteria having perhaps 0.001, bees having 0.01, rats having 0.1, dogs maybe 0.2, humans perhaps 0.5. Maximum uselessness, I would argue, as it tells you nothing. Why not just substitute consciousness with some notion of "maximum calculations per second" then, and reserve the term consciousness for something we actually care about, instead of wasting such a nice word on something we don't really care so much about - and more importantly, on something we can already express with other words and concepts like "information processing"?

What's funny about consciousness is that no one really agrees what exactly the definition should be but somehow everyone agrees that it's really really important. Why do we care so much about something we seemingly know close to nothing about? Seriously though why do we?

Look at all those hilarious "quantum consciousness" or "become more conscious" concepts peddled by the self-help industry complex. Possessing consciousness seems really high status nowadays, unlike say... all those lousy low-status life forms like frogs and bees and mice. The idea that somehow you can improve your consciousness seems very appealing, because if insects and birds have little or none of that thing called consciousness, and people surely have some of that thing called consciousness, then logically if I can get more of that awesome "consciousness" than my neighbor, I'm superior to him in just the same way I'm superior to a frog. Really, self-help opened my eyes to how unconsciously I lived my life once, and nowadays I feel strongly about helping all those low-consciousness people realize their full potential and I do my best to help them become more conscious beings...

It sounds ridiculous, but couldn't that be part of why consciousness is so damn important to us even if we have no clue what exactly it is? I may have no idea what that consciousness is, but somehow I really insist that I have it; I mean, if everyone else says s/he has it, I surely have it too, can't be left out. Whatever consciousness is, we usually agree that bacteria don't have it and we do, so it must be important if some kinds of life have it and others don't.

Okay, let's get serious again. What distinct features of minds exactly do we actually explicitly (and perhaps implicitly) care about when we attempt to employ that murky concept of consciousness? Hmm... well, if we care about it, we might gain insight into what exactly it is we care about by thinking about which specific situations make us choose to employ that word, and maybe from there we can distill why we seem to care so much.

Well whatever consciousness means, most people agree the concept of awareness seems highly related or somehow relevant to it. Consciousness is often used as a synonym for self-awareness, but what on earth is that exactly? (And why would we ever need two words for the exact same thing?) For some people it means having "internal experiences", for others "being aware of having internal experiences", which doesn't quite seem to be the same thing from where I stand... but where do these intuitions about something I seemingly know nothing about come from? Probably personal experiences...

Sometimes I read a paragraph and my mind starts wandering and daydreaming until I snap out of it and think to myself, "Jesus, I was totally gone for a second, where was I again?" I realize my eyes are at the bottom of the paragraph already, and it seems like I semi-remember that they kept wandering over the letters and words as if I were actually reading them... without being aware. Moreover, my mind's reawakening seems to have been triggered by arriving at the end of that paragraph and going "now what?", seemingly out of habit, because I usually stop at the end of a paragraph and consider whether I actually "got" what I read there. And sure enough, upon rereading the paragraph, it seems very familiar to me... but I was not at all sure whether I had read it just a few seconds ago, and I'd say whatever the word "self-aware" means shouldn't really include that experience (or non-experience?) I just described. But did I lack "awareness" or just "self-awareness" in that example? Hmm...

comment by Friendly-HI · 2014-07-29T15:32:42.642Z · score: 5 (5 votes) · LW · GW

[Part 2]

If I drive a car (especially on known routes) my "auto-pilot" takes over sometimes. I stop at a red light, but my mind is primarily focused on visually modeling the buttocks of my girlfriend in various undergarments or none at all. Am I actually "aware" of having stopped at the red light? Probably I was as much "aware" of the red light as a cheetah is aware of eating the carcass of a gazelle. Interestingly, my mind seems capable of visually modeling buttocks in my mind's eye while also reading real visual cues like red lights and habitually reacting to them - all at the same time. It seems I was more aware of my internal visual modeling than of the external visual cue, however. In a sense I was aware of both, yet I'm not sure I was "self-aware" at any point, because whatever that means, I feel like being self-aware in that situation would actually result in me going "Jesus, I should pay more attention to driving, I can still enjoy those buttocks in real life once I actually manage to arrive home unharmed".

So what's self-awareness then? I suppose I use that term to mean something roughly like: "thoughts that include a model of myself while modeling a part of reality on-the-fly based on current sensory input". If my mind is predominantly preoccupied with "daydreaming", i.e. creating and playing with a visual or sensory model that is based on manipulating memories rather than real sensory inputs, I don't feel like the term "self-awareness" should apply, even if that daydreaming encompasses a mental model of myself slapping a booty or whatever.

That's surely still quite ill-defined and far from maximally useful, but whenever I'm tempted to use the word self-aware I seem to roughly think of something like that definition. So if we were to use "consciousness" as a synonym for self-awareness (which I'm not a fan of, but quite a few people seem to be), maybe my attempt at a definition is a start toward something more useful, and it includes at least some of the "mental features" we seem to care about, like "model of oneself" and "interpreting sensory input to create a model of reality".

The problem is that rats can construct models of reality as well, and these models outlive sensory inputs, which is pretty clear from maze experiments. Rats are left for some time in a maze without any exit or rewards present, but during that time they learn the layout of the maze even though it's empty and they are not externally rewarded for doing so. Once you drop a treat into the maze, the rats who were able to wander around it beforehand know exactly how to get there as fast as possible, while rats new to that particular maze do not (part of the "cognitive revolution" in psychology). Presumably their rat-mind also features some kind of model of themselves, presumably one that mainly features their body, not so much their mind.
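The latent-learning idea above can be caricatured in a few lines of code (a toy sketch only; the maze layout, cell names, and the choice of breadth-first search are my own illustrative assumptions, not anything from the actual experiments): the "rat" records which cells connect while wandering with no reward present, and once a treat appears, that stored map is enough to compute the fastest route.

```python
from collections import deque

def explore(maze_edges):
    """Wandering with no reward: just record which cells are adjacent (the 'cognitive map')."""
    graph = {}
    for a, b in maze_edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    return graph

def shortest_path(graph, start, reward):
    """Once a treat appears, breadth-first search over the stored map yields a fastest route."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == reward:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # reward unreachable from start

# A small hypothetical maze, explored while empty:
maze = [("entry", "A"), ("A", "B"), ("B", "C"), ("A", "D"), ("D", "C")]
cognitive_map = explore(maze)
# A treat is dropped at C later; the experienced "rat" already knows a shortest route.
print(shortest_path(cognitive_map, "entry", "C"))
```

A naive "rat" with no stored map would have to re-explore from scratch, which is the observed difference between experienced and novice rats.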

So to make the concept of self-awareness and perhaps consciousness more useful maybe what we really care about in the end is a mind being able to feature a model of its own mind (and thus what we call "ourselves").

This is quite interesting... young children and, for example, gorillas who were taught to communicate in sign language seem to lack a fully developed "theory of mind". Meaning it seems they can't conceive of the possibility that other minds contain things theirs does not... well, kind of. If they do model other minds, they seem to model them a lot like copies of their own mind, or perhaps just slightly altered copies. Gorillas that can communicate in sign language are perfectly capable of answering questions about e.g. their mood... implying self-awareness that goes somewhat beyond just recognizing their physical reflection in a mirror, extending to awareness of their own feelings, i.e. internal experiences. But they never ever seem to get the brilliant idea of asking you a question, presumably because they can't conceive of the possibility that you know something they don't. Perhaps here we can draw a sensible line between the terms self-awareness and consciousness, where the latter includes the ability to make complex models of the models contained in minds other than your own. I want to stress the word complex, as it doesn't seem like gorillas have no theory of mind, just some kind of more primitive version. It seems they model other minds as versions of their own minds in different states, aided by mirror neurons. Actually, upon reflection, it's not so clear humans do it all that differently, seeing how prone we are to anthropomorphism. You know what I'm talking about if you gained new insights from "Three Worlds Collide" - it seems hard to conceive of nonhuman minds, and sometimes you end up with real nonsense like King Kong falling in love with a tiny female human because she has the "universally recognized property" called "beautiful". Also, I sometimes catch myself implicitly modeling other human minds in terms of "like me except for x, y, and z".

So maybe the reason gorillas don't ask questions isn't really that they lack a theory of mind, but that their theory of mind does not include a model of the model of reality held by the particular mind they are modeling. They seem quite capable when it comes to modeling the emotional states and needs of other minds, but they just seem to lack the insight that those minds also contain different perspectives on reality. Maybe that is what the term consciousness should describe... being able to create a model of a mind other than your own, including that mind having a different model of reality than your own. Yeah, I think this is it...


This seems to me like a genuinely more useful definition of what consciousness is, because it includes distinguishing features of minds you could actually test, with meaningful results as outcomes. At some point children start to riddle you with questions, but for gorillas capable of sign language that point just doesn't seem to arrive. The kinds of "questions" they ask are more along the lines of "Can I get X", or maybe rather "I want you to give me permission to do X".

Naturally not everyone can be happy with that definition, because they really, really want to be able to say "my dog was unconscious when we visited the vet, but then it regained consciousness when it woke up", but I submit that usefulness should trump habits of speech. Also, I can totally conceive of other minds putting forth even more detailed and useful definitions of what the term consciousness should describe, so define away.

comment by KnaveOfAllTrades · 2014-07-31T17:32:36.410Z · score: 2 (2 votes) · LW · GW

Wow, thanks for your comments! I agree that this seems like a way forward in trying to see if the idea of consciousness is worth salvaging (the way being to look for useful features).

I'm starting to think that the concept of consciousness lives or dies by the validity of the concepts of 'qualia' or 'sense of self', of both of which I already have some suspicion. It looks possible to me that 'sense of self' is pretty much a confused way of referring to a thing being good at leveraging its control over itself to effect changes, plus some epiphenomenal leftovers (possibly qualia). It looks like maybe this is similar to what you're getting at about self-modelling.

comment by eggman · 2014-07-12T23:13:23.486Z · score: 4 (4 votes) · LW · GW

Insofar as it's appropriate to post about a well-defined problem rather than a complete solution to it, I consider this post to be of sufficient quality to deserve being posted in Main.

comment by KnaveOfAllTrades · 2014-07-13T11:02:55.893Z · score: 2 (2 votes) · LW · GW

Thanks for the feedback. I have just moved this to Main.

comment by [deleted] · 2014-07-11T16:49:44.314Z · score: 4 (4 votes) · LW · GW

Consciousness is useful as a concept insofar as it relates to reality. "Consciousness" is a label used as shorthand for a complex (and not completely understood) set of phenomena. As a concept, it loses its usefulness as other, more nuanced, better understood concepts replace it (like replacing Newton's gravity with Einstein's relativity) or as the phenomena it describes are shown to be likely false (like the label "phlogiston").

As I am not a student of neuroscience or epistemology, I can't really say in detail whether there is any usefulness in "consciousness" or not. If "consciousness" describes real phenomena, then its usefulness is determined by our understanding of those phenomena and its ability to accurately relate them. If it refers to likely mistaken concepts, then it will be replaced as corrected concepts are formed through study and experiment, in the same way that "that-which-we-call-'phlogiston'" was replaced by a better understanding of thermodynamics.

comment by KnaveOfAllTrades · 2014-07-15T13:15:02.713Z · score: 1 (1 votes) · LW · GW

Just want to chime in to defend the meaningfulness-usefulness distinction. I could start using the word 'conscious' to mean 'that which is KnaveOfAllTrades', and it would be meaningful and relate to reality well. But it would not necessarily be useful. Slurs also relate to reality reasonably well but are not necessarily useful.

comment by mwengler · 2014-07-11T15:46:43.237Z · score: 4 (4 votes) · LW · GW

So with consciousness, is it a useful concept? Well, it certainly labels something without which I would simply not care about this conversation at all, as well as a bunch of other things. I personally believe p-zombies are impossible, that building a working human except without consciousness would be like building a working gasoline engine except without heat. I mention this for context; I think my belief about p-zombies is actually pretty common.

About the statement "I shouldn't eat chickens because they are conscious": you ask what it is about consciousness that makes it wrong to eat its possessor. You don't really try to answer the question, but I think there is an answer: we shouldn't eat things that don't want us to eat them. Probably more to the point, we shouldn't kill things that don't want us to kill them, and I would imagine the chicken is much more concerned with our killing it than with what happens after that. And with that idea, if we relabel consciousness as zxc, but zxc is still that thing that allows something to want other things, then it still works to say we shouldn't eat chickens because they are zxc and do not want us to kill them.

If I have somehow missed your point, I am sorry. I did hope it would be valuable to suggest that "we shouldn't eat chickens because they don't want us to kill them" is a more fundamental moral statement than appealing to the abstraction of their consciousness.

comment by KnaveOfAllTrades · 2014-07-11T20:30:24.233Z · score: 9 (9 votes) · LW · GW

Yes. If we change "We shouldn't eat chickens because they are conscious" to "We shouldn't eat chickens because they want to not be eaten", then this becomes another example where, once we cash out what is meant, the term 'consciousness' can be circumvented entirely and replaced with a less philosophically murky concept. In this particular case, how clear the concept of 'wanting' (as it relates to chickens) is might be disputed, but it seems like clearly a lesser mystery than the monolith of 'consciousness'.

comment by DanArmak · 2014-07-11T19:04:34.289Z · score: 4 (6 votes) · LW · GW

we shouldn't kill things that don't want us to kill them

Every living thing "wants" not to be killed, even plants. This is part of the expressed preferences of their death-avoiding behavior. How does this help you assign quantitative moral value to killing some but not others?

You write that consciousness is "that thing that allows something to want other things", but how do you define or measure the presence of "wanting" except behavioristically?

comment by mwengler · 2014-07-11T19:56:06.167Z · score: 4 (4 votes) · LW · GW

You write that consciousness is "that thing that allows something to want other things", but how do you define or measure the presence of "wanting" except behavioristically?

With very high confidence I know what I want. And for the most part, I don't infer what I want by observing my own behavior, I observe what I want through introspection. With pretty high confidence, I know some of what other people want when they tell me what they want.

Believing that a chicken doesn't want to be killed is something for which there is less evidence than with humans. The chicken can't tell us what it wants, but some people are willing to infer that chickens don't want to be killed by observing their behavior, which they believe has a significant similarity to their own or other humans' behavior when not wanting to be killed. Me, I figure the chicken is just running on automatic pilot and isn't thinking about whether it will be killed or not, very possibly doesn't have a concept of being killed at all, and is really demonstrating that it doesn't want to be caught.

Every living thing "wants" not to be killed, even plants. This is part of the expressed preferences of their death-avoiding behavior. How does this help you assign quantitative moral value to killing some but not others?

Do apples express a preference for gravity by falling from trees? Do rocks express a preference for lowlands by traveling to lowlands during floods? The answer is no, not everything that happens is because the things involved in it happening wanted it that way. Without too much fear of your coming up with a meaningful counterexample, among things currently known by humans on earth the only things that might even conceivably want things are things that have central nervous systems.

comment by wedrifid · 2014-07-18T03:58:06.021Z · score: 4 (4 votes) · LW · GW

With very high confidence I know what I want. And for the most part, I don't infer what I want by observing my own behavior, I observe what I want through introspection. With pretty high confidence, I know some of what other people want when they tell me what they want.

With weak to moderate confidence I can expect you to be drastically overconfident in your self-insight into what you want from introspection. (Simply because the probability that you are human is high, human introspection is biased in predictable ways and the evidence supplied by your descriptions of your introspection is insufficient to overcome the base rate.)

comment by mwengler · 2014-07-18T08:44:58.447Z · score: 3 (7 votes) · LW · GW

The evidence is that humans don't act in ways entirely consistent with their stated preferences. There is no evidence that their stated preferences are not their preferences. You have to assume that how humans act says more about their preferences than what they say about their preferences. Go down that path and you conclude that apples want to fall from trees.

comment by wedrifid · 2014-07-18T11:59:15.524Z · score: 3 (3 votes) · LW · GW

There is no evidence that their stated preferences are not their preferences.

That's an incredibly strong claim ("no evidence"). You are giving rather a lot of privilege to the hypothesis that the public relations module of the brain is given unfiltered access to potentially politically compromising information like that and then chooses to divulge it publicly. This is in rather stark contrast to what I have read and what I have experienced.

I'd like to live in a world where what you said is true. It would have saved me years of frustration.

You have to assume that how humans acts says more about their preferences than what they say about their preferences.

Both provide useful information, but not necessarily directly. fMRIs can be fun too, albeit just as tricky to map to the 'want' concept.

comment by DanArmak · 2014-07-11T22:38:38.329Z · score: 2 (4 votes) · LW · GW

With very high confidence I know what I want. And for the most part, I don't infer what I want by observing my own behavior, I observe what I want through introspection.

There's an aphorism that says, "how can I know what I think unless I say it?" This is very true in my experience. And I don't experience "introspection" to be significantly different from "observation"; it just substitutes speaking out loud for speaking inside my own head, as it were. (Sometimes I also find that I think easier and more clearly if I speak out loud, quietly, to myself, or if I write my thoughts down.)

I'm careful of the typical mind fallacy and don't want to say my experiences are universal or, indeed, even typical. But neither do I have reason to think that my experience is very strange and everyone else introspects in a qualitatively different way.

I know some of what other people want when they tell me what they want.

Speaking (in this case to other people) is a form of behavior intended (greatly simplifying) to make other people do what you tell them to do. This is precisely inferring "wants" from behavior designed to achieve those wants. (Unless you think language confers special status with regards to wanting.)

Believing that a chicken doesn't want to be killed is something for which there is less evidence than with humans.

Both people and chickens try to avoid dying. People are much better at it, because they are much smarter. Does that mean people want to avoid dying much more than chickens do? That is just a question about the definition of the word "want": no answer will tell us anything new about reality.

Me, I figure the chicken is just running on automatic pilot and isn't thinking about whether it will be killed or not, very possibly doesn't have a concept of being killed at all, and is really demonstrating that it doesn't want to be caught.

Does this contradict what you previously said about chickens?

we shouldn't kill things that don't want us to kill them

the only things that might even conceivably want things are things that have central nervous systems.

Can you please specify explicitly what you mean by "wanting"?

comment by mwengler · 2014-07-11T23:16:25.274Z · score: 3 (3 votes) · LW · GW

I know some of what other people want when they tell me what they want.

Speaking (in this case to other people) is a form of behavior intended (greatly simplifying) to make other people do what you tell them to do. This is precisely inferring "wants" from behavior designed to achieve those wants. (Unless you think language confers special status with regards to wanting.)

On the one hand you suggested that plants "want" not to be killed, presumably based on seeing their behavior of sucking up water and sunlight and putting down deeper roots etc. The behavior you talk about here is non-verbal behavior. In fact, your more precise conclusion from watching plants is that "some plants don't want to be killed" as you watch them not die, while, based purely on observation, to be logical you would have to conclude that "many plants don't mind being killed", as you watched them modify their behavior not one whit as a harvesting machine drove towards them and then cut them down.

So no, I don't think we can conclude that a plant wanted to not be killed by watching it grow any more than we can conclude that a car engine wanted to get hot or that a rock wanted to sit still by watching them.

Both people and chickens try to avoid dying. People are much better at it, because they are much smarter. Does that mean people want to avoid dying much more than chickens do? That is just a question about the definition of the word "want": no answer will tell us anything new about reality.

You have very little (not none, but very little) reason to think a chicken even thinks about dying. We have more reason to think a chicken does not want to be caught. We don't know if it doesn't want to be caught because it imagines us wringing its neck and boiling it. In fact, I would imagine most of us don't imagine it thinks of things in such futuristic detail, even among those of us who think we ought not eat it.

Speaking (in this case to other people) is a form of behavior intended (greatly simplifying) to make other people do what you tell them to do.

That's a lot to assert. I assert speaking is a form of behavior intended to communicate ideas, to transfer meaning from one mind to another. Is my assertion inferior to yours in any way? When I would lecture to 50 students about electromagnetic fields for 85 minutes at a time, what was I trying to get them to do?

Speaking is a rather particular "form of behavior." Yes, I like the shorthand of ignoring the medium and looking at the result: I tell you I want money, and you have an idea that I want money as a result of my telling you that. Sure, there is "behavior" in the chain, but the starting point is in my mind and the endpoint is in your mind, and that is the relevant stuff in this case, where we are talking about consciousness and wanting, which are states of mind.

This is precisely inferring "wants" from behavior designed to achieve those wants. (Unless you think language confers special status with regards to wanting.)

I tell you I want money and I want beautiful women to perform sexual favors for me. Here I am communicating wants to you, but how is my communication "designed to achieve those wants"? I submit it isn't, and that your ideas about what talking is for are woefully incomplete.

Can you please specify explicitly what you mean by "wanting"?

It's a state of mind, an idea with content about the world. It bears on what I am likely to do, but not super directly, as there are thousands (at least) of other things that also influence what I am going to do. But it is the state of mind that is "wanting."

And so if a chicken wants to not be killed AND you think that something's wanting something produces a moral obligation upon you to not thwart its desires, then you ought not catch a chicken (and kill it and eat it) if it doesn't want to be caught. The actual questions: does a chicken want anything? Does it in particular want not to be caught? Does what a chicken wants create an obligation in me? These are all IMHO open questions. But the meaning of "a chicken wants to not be caught" seems pretty straightforward, much more straightforward than figuring out whether it is true or not, and whether it matters or not.

comment by Strange7 · 2014-07-17T19:49:04.389Z · score: 2 (2 votes) · LW · GW

Every living thing "wants" not to be killed, even plants.

There are fruits which "want" to be eaten. It's part of their life cycle. Intestinal parasites, too, although that's a bit more problematic.

comment by DanArmak · 2014-07-18T08:20:57.083Z · score: 1 (1 votes) · LW · GW

Fruits are just parts of a plant, not whole living things. Similarly you might say that somatic cells "want" to die after a few divisions because otherwise they risk turning into a cancer.

Parasites that don't die when they are eaten obviously don't count for not wanting to be killed.

comment by Strange7 · 2014-07-20T01:55:35.229Z · score: 1 (1 votes) · LW · GW

Take an apple off the tree, put it in the ground, and there's a decent chance it'll grow into a new tree. How is that not a "whole living thing"? If some animal ate the apple first, derived metabolic benefits from the juicy part, and shat out the seeds intact, that seed would be no less likely to grow. Possibly more so, with nutrient-rich excrement for its initial load of fertilizer.

comment by DanArmak · 2014-07-20T04:50:51.513Z · score: 1 (1 votes) · LW · GW

Fine. But these fruit don't want to be killed, just eaten.

comment by KnaveOfAllTrades · 2014-07-11T20:32:37.825Z · score: 2 (2 votes) · LW · GW

I agree with the thrust of your first paragraph. But the second one (and to some extent the first) seems to be using a revealed preferences framework that I'm not sure fully captures wanting. E.g. can that framework handle akrasia, irrationality, etc.?

comment by DanArmak · 2014-07-11T22:26:48.844Z · score: 2 (2 votes) · LW · GW

The word "wanting", like "consciousness", seems to me not to quite cut reality at its joints. Goal-directed behavior (or its absence) is a much clearer concept, but even then humans rarely have clear goals. As you point out, akrasia and irrationality are common.

So I would rather not use "wanting" if I can avoid it, unless the meaning is clear. For example, saying "I want ice cream now" is a statement about my thoughts and desires right now, and it gives some information about my likely actions; it leaves little room for misunderstanding.

comment by Vladimir_Nesov · 2014-07-11T22:50:07.674Z · score: 2 (2 votes) · LW · GW

Goal-directed behavior (or its absence) is a much clearer concept, but even then humans rarely have clear goals. As you point out, akrasia and irrationality are common.

This looks like a precision vs. accuracy/relevance tradeoff. For example, some goals that are not explicitly formulated may influence behavior in a limited way that affects actions only in some contexts, perhaps only hypothetical ones (such as those posited to elicit idealized values). Such goals are normatively important (contribute to idealized values), even though formulating what they could be or observing them is difficult.

comment by wedrifid · 2014-07-18T03:53:35.446Z · score: 1 (1 votes) · LW · GW

Every living thing "wants" not to be killed, even plants.

Just not true. There is no sense in which a creature which voluntarily gets killed and has no chance of further mating (and no other behavioural expressions indicating life-desire) can be said to "want" not to be killed. Not even in some sloppy evolutionary anthropomorphic sense.

"Wanting" not to be killed is a useful heuristic in most cases but certainly not all of them.

comment by DanArmak · 2014-07-18T08:19:10.235Z · score: 1 (1 votes) · LW · GW

Also, every use of the word "every" has exceptions.

Yes, inclusive fitness is a much better approximation than "every living thing tries to avoid death". And gene's-eye-view is better than that. And non-genetic replicators have their place. And evolved things are adaptation executers. And sometimes living beings are just so bad at avoiding death that their "expressed behavioral preferences" look like something else entirely.

I still think my generalization is a highly accurate one and makes the point I wanted to make.

comment by [deleted] · 2014-07-18T11:11:01.213Z · score: 1 (1 votes) · LW · GW

Also, every use of the word "every" has exceptions.

Including this one.

comment by DanArmak · 2014-07-18T12:45:42.453Z · score: 1 (1 votes) · LW · GW

Naturally. For instance true mathematical theorems saying that every X is Y have no exceptions.

comment by ChristianKl · 2014-07-11T21:28:22.311Z · score: 3 (5 votes) · LW · GW

So it is that I have recently been very skeptical of the term 'consciousness' (though grant that it can sometimes be a useful shorthand), and hence my question to you: Have I overlooked any counts in favour of the term 'consciousness'?

You haven't mentioned terms like qualia, phenomenology and somatics. Those terms lead to debate where the term consciousness is useful. I think it's useful to be able to distinguish conscious from unconscious mental processes.

I don't think you need specific words. Germany brought forth strong philosophers and psychologists without having a term for "mind". It's always interesting how different languages handle problems like that. If I look at Lojban, it has a word for conscious, "sanji", which also means "recognize in the sense of discern" and "something aware of something". Do you also dislike terms like "awareness" or "recognize"?

A lot of tribal languages might do without a term for consciousness but use a lot of spirits or ghosts to explain mental phenomena. Even our Indo-European languages used to use words like "soul", but we stripped that out over the last decades.

I think it's a good exercise to think about starting from scratch with labeling mental phenomena and think how many terms one needs to describe things well. Such a project might lead to a new language that beats Lojban.

comment by KnaveOfAllTrades · 2014-07-15T14:19:38.246Z · score: 1 (1 votes) · LW · GW

Upvoted.

You haven't mentioned terms like qualia, phenomenology and somatics. Those terms lead to debate where the term consciousness is useful.

Example? What's a specific useful discussion that is best conducted by using the term 'consciousness', rather than 'qualia', 'self-awareness', and other, more specific (even if not necessarily less confused) terms?

Do you also dislike terms like "awareness" or "recognize"?

'Awareness' used in a discussion that's at all philosophical does make me antsy, and brace myself for someone to treat awareness as a magical, mysterious thing. 'Recognize' is very rarely abused, so I am generally fine.

comment by ChristianKl · 2014-07-15T15:34:08.053Z · score: 2 (2 votes) · LW · GW

Example? What's a specific useful discussion that is best conducted by using the term 'consciousness', rather than 'qualia', 'self-awareness', and other, more specific (even if not necessarily less confused) terms?

Consciousness, in the way I understand the word, is the thing that perceives qualia. There are discussions where it's useful to have a word for that. I recently read Thomas Hanna's book "Somatics: Reawakening the Mind's Control of Movement, Flexibility, and Health" and think the book uses it in a useful manner.

There are certain qualia that I would label as 'self-awareness'. There is also the process of passing the mirror test, which you could label with the word 'self-awareness'. If I want to be specific when I'm talking about qualia, I also distinguish the feeling of existing from self-awareness. It took me months, however, to get the distinction between the two. I also do my main thinking in that area in German and have to translate.

comment by KnaveOfAllTrades · 2014-07-15T15:56:45.423Z · score: 1 (1 votes) · LW · GW

Consciousness is in the way I understand the word the thing that perceives qualia. There are discussions where it's useful to have a word for that.

Questions of whether qualia is a useful concept aside, I feel that any discussion where you're talking about 'consciousness' in the sense of 'qualia-experiencing' would benefit from just saying 'qualia-experiencing', since 'consciousness' can mean so many different things in that rough area of philosophy that it's liable to cause misinterpretation, equivocation, etc.

I recently read Thomas Hanna's book "somatics reawakening the mind's control of movement, flexibility, and health" and think the book uses it in a useful manner.

Yep, this looks like a fair use of 'consciousness' to me.

comment by ChristianKl · 2014-07-16T09:21:43.602Z · score: 1 (1 votes) · LW · GW

Once you accept that there is something which experiences qualia, that raises the question of whether that something has other attributes we can also investigate. Investigating that question isn't easy, but I don't think that just because it's a hard question one should shun it.

To get back to Thomas Hanna: he is a quite interesting character. He was chairman of the Department of Philosophy at the University of Florida. Then he moved into the applied teaching of somatics, making some pretty big claims about how it can make people healthy and eliminate most of the diseases of aging, only to die at the age of 61 in a car crash.

I read him because buybuydandavis recommended him on LW. One of his claims is that people often suffer from what he calls sensory-motor amnesia, whereby people forget how to use and relax certain muscles in their body, and that this leads to medical problems. According to him that sensory-motor amnesia is healable. Sensory-motor amnesia would be one aspect of aging that Aubrey de Grey missed in his list.

Hanna attributes 50% of all illness to problems arising from sensory-motor amnesia, which is a pretty big claim. Even if it's less than 50%, identifying a part of aging that we can actually do something about seems very important. Bonus points: a book like that gives you a better grasp on consciousness and related concepts.

comment by Strange7 · 2014-07-17T20:42:40.319Z · score: 2 (2 votes) · LW · GW

There seems to be a correlation between systems being described as "conscious" and those same systems having internal resources devoted to maintaining homeostasis along multiple axes.

Most people would say a brick is not conscious. Placed in a hotter or colder environment, it soon becomes the same temperature as that environment. Swung at something sharp and heavy it won't try to flee, the broken surface won't scab over, the chipped-off piece won't regrow. Splashed with paint, it won't try to groom itself. A tree is considered more conscious than a brick, but less so than an orangutan, and sure enough a tree exhibits some but not all of those equilibrium-maintaining behaviors.

Under that theory, consciousness is correlated with moral worth because consciousness is expensive in itself, and implies the presence of something valuable enough to justify that expense.

comment by MaoShan · 2014-07-25T02:38:22.478Z · score: 1 (1 votes) · LW · GW

I agree with your correlation, but I think your definition would make stars and black holes apex predators.

comment by Strange7 · 2014-07-27T19:46:28.485Z · score: 2 (2 votes) · LW · GW

A stellar-mass body isn't any more conscious than a water droplet or a pendulum under this theory. (Admittedly, that's more than zero, but still below the threshold of independent moral significance.) Kinematics keep them in a stable equilibrium, but there's no mechanism for maintaining a consistent chemical composition, or proactively seeking to avoid things that haven't disrupted the body but might soon. Drop some tungsten into a star, and it'll be a star with some tungsten in it until nuclear physics says otherwise. Feed tungsten to a mammal, you get some neurological symptoms until most of the excess metal is expelled via the kidneys over the next few days.

It's not about the magnitude of possible disruption which can be absorbed on any one axis, or even really the precision with which that variable is controlled, but the number of different axes along which optimization occurs.

comment by MaoShan · 2014-07-27T23:13:29.244Z · score: 1 (1 votes) · LW · GW

It seems to me, though, that there are quite a few axes on which it would be hard to disturb a star's equilibrium. That still keeps it included in your definition. Also, since tungsten is not disruptive to the star's homeostasis, it has no reason to expel it. I appreciate your rational answers, because I'm actually helping you steel-man your theory; it only looks like I'm being a dork.

comment by Strange7 · 2014-07-28T04:52:07.913Z · score: 2 (2 votes) · LW · GW

Adding tungsten, or any heavy element, increases the star's density, thereby marginally shortening the star's lifespan. It's only "not disruptive to the star's homeostasis" in the sense that the star lacks any sort of homeostasis with regard to its chemical composition. You are firing armor-piercing bullets into an enormous compost heap, and calling it a composite-laminate reinforced bunker just because they don't come out the other side.

I say again, it's not about the equilibrium being hard to disturb, it's about there being a subsystem which actively corrects and/or prevents such disturbances. Yes, a star scores above a brick on this scale, as do many other inanimate objects, automated industrial processes, and extremely simple lifeforms which nonetheless fall well below any commonsensical threshold of consciousness.

comment by MaoShan · 2014-07-28T20:55:42.917Z · score: 1 (1 votes) · LW · GW

Well, now it sounds like you've found a useful definition of life; at what point on this spectrum, then, would you consider something conscious? Since it's processes you are looking for, there is probably some process such that, without it, you could clearly classify a thing as un-conscious.

comment by Strange7 · 2014-07-29T00:02:50.694Z · score: 3 (3 votes) · LW · GW

If I know how many grains of sand there are, their relative positions, and have a statistical profile of their individual sizes and shapes, I no longer need to know whether it counts as a "heap" or not. If I know an object's thermal mass, conductivity, and how many degrees it is above absolute zero, I don't need to know whether it's "warm" or "cold."

The term "consciousness" is a pointer to something important, but lacks precision. My understanding was that we were trying to come up with a more precise, quantifiable pointer to the same underlying important thing.

comment by MaoShan · 2014-07-29T21:04:13.024Z · score: 1 (1 votes) · LW · GW

What is it that makes consciousness, or the thing that it points to (if such a thing is not ephemeral), important? You already said that knowing the exact quantities negates the need for categorization.

comment by Strange7 · 2014-07-30T23:34:20.012Z · score: 1 (1 votes) · LW · GW

What is it that makes consciousness, or the thing that it points to (if such a thing is not ephemeral), important?

I am not in a position to speculate as to why consciousness, or the underlying referent thereto, is so widely considered important; I simply observe that it is. Similarly, I wouldn't feel qualified to say why a human life has value, but for policy purposes, somebody out there needs to figure out how many million dollars of value a statistical human life is equivalent to. Might as well poke at the math of that, maybe make it a little more rigorous and generalized.

comment by [deleted] · 2014-07-29T07:28:00.649Z · score: 1 (1 votes) · LW · GW

If I know how many grains of sand there are, their relative positions, and have a statistical profile of their individual sizes and shapes, I no longer need to know whether it counts as a "heap" or not.

Unless you're trying to decide whether its article on Wikipedia belongs in Category:Heaps ;-)

comment by [deleted] · 2014-07-28T23:30:32.187Z · score: 0 (2 votes) · LW · GW

For what purpose are you labeling something conscious? Strange7 has already stated that water droplets and pendulums have nonzero "consciousness", and I would agree. But so what? What does it matter if it turns out that rocks are conscious too?

Taboo the word 'conscious' please.

comment by private_messaging · 2014-07-30T19:31:10.655Z · score: 2 (2 votes) · LW · GW

If we taboo "conscious" then we just got some arbitrary and thus almost certainly useless real number assigned to systems. edit: speaking of which, why would it be a real number? It could be any kind of mathematical object.

comment by Strange7 · 2014-07-30T23:46:22.460Z · score: 1 (1 votes) · LW · GW

Even if it's useless for philosophy of consciousness, some generalized scale of "how self-maintaining is this thing" might be a handy tool for engineers. That's the difference between a safe, mostly passive expert system and a world-devouring paperclip maximizer, isn't it? Google Maps doesn't try to reach out and eliminate potential threats on its own initiative.

comment by private_messaging · 2014-07-31T15:53:51.191Z · score: 2 (2 votes) · LW · GW

But we're only interested in some aspects of self maintenance, we're not interested in how well individual molecules stay in their places (except when we're measuring hardness of materials). Some fully general measure wouldn't know what parameters are interesting and what are not.

Much the same goes for "integrated information theory" - without some external conscious observer informally deciding what's information and what's not (or what counts as "integration") to make the premise seem plausible (and carefully picking plausible examples), you just have a temperature-like metric which is of no interest whatsoever if not for the outrageous claim that it measures consciousness. A metric that is ridiculously huge for e.g. turbulent gasses, or if we get down to microscale and consider atoms bouncing around chaotically, for gasses in general.

comment by Strange7 · 2014-08-01T21:04:47.519Z · score: 0 (0 votes) · LW · GW

Again, I think you're misunderstanding. The metric I'm proposing doesn't measure how well those self-maintenance systems work, only how many of them there are.

Yes, of course we're only really interested in some aspects of self-maintenance. Let's start by counting how many aspects there are, and start categorizing once that first step has produced some hard numbers.
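The axis-counting proposal above can be put into a toy sketch. Everything here is illustrative assumption: the example systems, their lists of self-maintenance axes, and the idea of scoring by a simple count are my own hypothetical reading of the proposal, not an established metric.

```python
# Toy sketch of the proposed metric: score a system by *how many* axes
# of active self-maintenance it has, not by how well each one works.
# The systems and their axis lists below are illustrative assumptions.

def maintenance_score(axes):
    """Count the axes along which the system actively corrects disturbances."""
    return len(axes)

systems = {
    "brick":   [],                                    # purely passive
    "star":    ["hydrostatic equilibrium"],           # gravity vs. radiation pressure
    "crystal": ["lattice position"],                  # displaced atoms snap back
    "tree":    ["water balance", "wound sealing", "phototropism"],
    "mammal":  ["temperature", "blood chemistry", "wound healing",
                "threat avoidance", "grooming"],
}

# Rank systems from least to most self-maintaining.
ranked = sorted(systems, key=lambda s: maintenance_score(systems[s]))
for name in ranked:
    print(name, maintenance_score(systems[name]))
```

As the thread notes, this ranking deliberately ignores the magnitude or precision of each corrective mechanism; a star and a crystal both score above a brick, and categorizing which axes actually matter is a separate, later step.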

comment by private_messaging · 2014-08-02T07:22:04.141Z · score: 1 (1 votes) · LW · GW

Ahh, OK. The thing is, though... say, a crystal puts atoms back together if you move them slightly (and a liquid doesn't). And so on, all sorts of very simple apparent self maintenance done without a trace of intelligent behaviour.

comment by Strange7 · 2014-08-03T23:29:44.180Z · score: 0 (0 votes) · LW · GW

What's your point? I've already acknowledged that this metric doesn't return equally low values for all inanimate objects, and it seems a bit more common (in new-agey circles at least) to ascribe intelligence to crystals or rivers than to puffs of hot gas, so in that regard it's better calibrated to human intuition than Integrated Information Theory.

comment by TheMajor · 2014-07-13T12:42:44.257Z · score: 2 (2 votes) · LW · GW

I notice that I am still confused. In the past I hit 'ignore' when people talked about consciousness, but let's try 'explain' today.

The original post states:

Would we disagree over what my computer can do? What about an animal instead of my computer? Would we feel the same philosophical confusion over any given capability of an average chicken? An average human?

Does this mean that if two systems have almost the same capabilities that we would then also expect them to have a similar chance of receiving the consciousness label? In other words: is consciousness, like IQ, a lossy compression of other properties? I would like to compare this idea with the comment from RichardKennaway, who asks:

Everyone reading this, please take a moment to see whether you have any sensation that you might describe by those words.

I personally have never thought anything like 'I am feeling so conscious today' or even 'I am actively experiencing consciousness'. However, as I expected consciousness to be some sort of label attached to systems with certain properties, along with the observation that I am human and pretty much all other humans claim to be conscious, my prior for me being conscious is pretty high. But I don't feel that I can answer 'Yes' to RichardKennaway's question.

I have a hard time believing that my not feeling conscious is significant evidence to sway my prior away from me being conscious (especially since feelings are pretty easy to manipulate), i.e. my current solution is to reject RichardKennaway's question as insignificant (feeling conscious might not be all that related to being conscious). My questions are: are there better tests for being conscious available (presumably no, since the whole point of the discussion is that consciousness is hard to quantify), and if not then why should we bother to discuss whether or not something is conscious at all?

I am sorry that my post isn't all that coherent, I seem to be having trouble identifying exactly what my problem with 'consciousness' is.

Small note: while writing this reply I noticed that the original post can be interpreted as an explanation of what I am trying to say: if consciousness is a lossy compression, then whether something has it or not becomes a moot point if we know plenty of other properties. So if I notice that I can do most things that other humans can (abstract reasoning, planning and executing strategies, using tools, etc.), then whether or not I am conscious should be a moot point, as it does not influence any prediction of my actions.

comment by torekp · 2014-07-12T18:18:03.424Z · score: 2 (2 votes) · LW · GW

Would we disagree over what my computer can do?

Yes, if you are using "conscious" with sufficient deference to ordinary usage. There are at least two aspects to consciousness in that usage: access consciousness, and phenomenal consciousness. Access consciousness applies to information which is globally available to the organism for control of behavior, verbal report, inference, etc. It's phenomenal consciousness which your computer lacks.

Scott Aaronson's "Pretty hard problem of consciousness", which shminux mentions, is relevant here, but an additional point about phenomenal consciousness cuts some ice here, when it comes to your computer. Phenomenal consciousness allows us to distinguish between "appearance" and "reality". For example, you can say that a painting appears to be moving, but that's because you took LSD, and you know that it is really stationary. For a number of modes of information-gathering, nature has equipped us with internal access to our own states (subjective colors, sounds, etc.) as well as the external world-properties themselves. That's something today's computers (outside of an AI lab maybe) don't do. They represent the world, but they don't independently represent their own visual/auditory/etc. states.

That said, you could add an appearance-reality distinction to a computer's repertoire, and it wouldn't be obvious that full consciousness was achieved. Ultimately I suspect Scott Aaronson's "Pretty hard problem of consciousness" is the key.

comment by KnaveOfAllTrades · 2014-07-15T13:56:16.287Z · score: 2 (2 votes) · LW · GW

Thanks for this reply; this is the kind of quarter that seemed most promising for the usefulness of 'consciousness'.

Yes, if you are using "conscious" with sufficient deference to ordinary usage. There are at least two aspects to consciousness in that usage: access consciousness, and phenomenal consciousness. Access consciousness applies to information which is globally available to the organism for control of behavior, verbal report, inference, etc. It's phenomenal consciousness which your computer lacks.

I am confused about qualia. Qualia has strong features of a confused concept, such that if 'consciousness' is getting at a qualia-nonqualia distinction, then it would seem to be a recursive or fractal confusion. If qualia is to be a non-epiphenomenal concept, then there must be non-mysterious differences one could in principle point to to distinguish qualia-havers from non-qualia-havers. History of science strongly suggests a functionalism under which a version of me implemented on a different substrate but structurally identical should experience qualia which are the same, or at least the same according to whatever criteria we might care about.

It feels to me like qualia is used in an epiphenomenal way. But if it is to be non-confused, it cannot be; it must refer to sets of statements like, 'This thing reacts in this way when it is poked with a needle, this way when UV light hits its eyes, ...' or something (possibly less boring propositions, but still fundamentally non-mysterious ones).

Insomuch as 'consciousness' depends on the notion of 'qualia', I am very wary of its usage, because then a less-likely-to-be-confused concept (consciousness) is being used in terms of a very dubious, more-likely-to-be-confused concept (qualia). If we're using 'consciousness' as a byword for qualia, then we should just say 'qualia' and be open about the fact that we're implicitly touching upon the (hard) problem of consciousness, which is at best very confusing and difficult and at worst a philosophical quagmire, so that we do not become overconfident in what we are saying or that what we are saying is even meaningful.

Eliezer has his thing where he refers to Magical Reality Fluid to lampshade his confusion. Using 'consciousness' to smuggle in qualia feels like the opposite of that approach.

For all this skepticism, I do worry that those who dismiss qualia outright are being foolishly hasty.

[Rest of comment]

I don't think 'consciousness' can be justified on grounds of this type of representational phenomenology. Good phenomenology (e.g. converging on a theory like that red is to do with wavelengths of light within a mature theory of electromagnetism) is something roughly like getting useful mappings from terms (phenomena/lossy observations) to interpretations (specific accounts of those phenomena, e.g. a computer-checkable mathematization of the stimulus-phenomenon pair). That might be somewhat mysterious, but it doesn't feel like the same way I'm confused about qualia or that most people seem to be confused about consciousness. As you say, it's not clear that something good at figuring out the world giving it phenomena even need be conscious.

Ultimately I suspect Scott Aaronson's "Pretty hard problem of consciousness" is the key.

I'm not sure if this is possibly covered by 'is the key', but it seems to me that discussion of Scott Aaronson's PHPC is potentially tricksome in the same way as Chalmers' HPC, namely that discussions of it are often framed in terms of 'What is the Platonic Essence of Consciousness', rather than, 'Why should we think consciousness has a Platonic Essence and is fundamental? And if it is, what is that Essence?'

comment by TheAncientGeek · 2014-07-16T18:39:19.505Z · score: 3 (3 votes) · LW · GW

I am confused about qualia. Qualia has strong features of a confused concept, such that if 'consciousness' is getting at a qualia-nonqualia distinction, then it would seem to be a recursive or fractal confusion.

Why? Do you think that consciousness is defined in terms of qualia, and that qualia are in turn defined in terms of consciousness?

If qualia is to be a non-epiphenomenal concept, then there must be non-mysterious differences one could in principle point to to distinguish qualia-havers from non-qualia-havers.

Yes. 'Must be' doesn't imply 'must be knowable', though.

History of science strongly suggests a functionalism under which a version of me implemented on a different substrate but structurally identical should experience qualia which are the same, or at least the same according to whatever criteria we might care about

The criteria we care about are the killer, though. An exact duplicate all the way down would be an exact duplicate, and therefore not running on a different substrate. What you are therefore talking about is a duplicate of the relevant subset of structure, running on a different substrate. But knowing what the relevant subset is is no easier than the Hard Problem.

It feels to me like qualia is used in an epiphenomenal way.

The simplistic theory that qualia are distinct from physics has that problem. The simplistic theory that qualia are identical to physics has the problem that no one can show how that works. The simplistic theory that qualia don't exist at all has the problem that I have them all the time.

However, none of that has much to do with the definition of qualia.

If we had a good theory of qualia, we would know what causes them and what they cause. But we need the word 'qualia' to point out what we don't have a good theory of. When you complain that qualia seem epiphenomenal, what you are actually complaining about is the lack of a solution to the HP.

But if it is to be non-confused, it cannot be; it must refer to sets of statements like, 'This thing reacts in this way when it is poked with a needle, this way when UV light hits its eyes, ...'

Why? Why can't it mean "the ways things seem to a subject" or "an aspect of consciousness we don't understand", or both?

We don't know the reference of "qualia", right enough, but that does not mean the sense is a problem.

...or something (possibly less boring propositions, but still fundamentally non-mysterious ones). Insomuch as 'consciousness' depends on the notion of 'qualia', I am very wary of its usage, because then a less-likely-to-be-confused concept (consciousness) is being used in terms of a very dubious, more-likely-to-be-confused concept (qualia).

Why is it more confused? On the face of it, 'qualia' labels a particular aspect of consciousness. Surely that would make it more precise.

comment by torekp · 2014-07-16T01:13:43.171Z · score: 3 (3 votes) · LW · GW

Qualia has strong features of a confused concept

I submit that it is (many of) the theories and arguments that are confused, not the concept. The concept has some semantic vagueness, but that's not necessarily fatal (compare "heap").

History of science strongly suggests a functionalism under which a version of me implemented on a different substrate but structurally identical should experience qualia which are the same

If "structurally identical" applies at the level of algorithms - see thesis #5 and "consistent position" #2 in this post by dfranke - then I agree.

It feels to me like qualia is used in an epiphenomenal way.

That happens when people embrace some of the confused theories. Then comes the attack of the p-zombies.

I'm all in favor of talking openly about qualia, because that is the hard problem fueling the bad metaphysics, not access consciousness. Self-consciousness can also be tricky, but in good part because it aggravates qualia problems. But I don't think the hard problem is an inescapable quagmire. Instead, the intersection of self-reference (with all its "paradoxes") and the appearance/reality distinction creates some unique conditions, in which many of our generally-applicable epistemic models and causal reasoning patterns fail. If you've got time for a book, I recommend Jenann Ismael's The Situated Self, which in spots could have been better written, but is well worth the effort. This paper covers a lot, too.

(e.g. converging on a theory like that red is to do with wavelengths of light within a mature theory of electromagnetism)

That's the reality side of redness; what people puzzle over is the relations between appearances (e.g. inverted spectrum worries). Maybe I misunderstand you. My claim is that the fact that appearances are mere appearances definitely does contribute to the hardness of the hard problem.

I don't think qualia and consciousness are fundamental in any of the usual senses - like basic particles? And I have no idea how simple and elegant an Essence has to be before it becomes Platonic. But humans think in prototypes and metaphors, and we get along just fine. We don't need to have an answer to every conceivable edge-case in order to make productive use of a concept. Nor do we need such precision even to see, in rough outline, how the referents of the concept, in the cases that interest us, would be tractable using our best scientific theories.

comment by CCC · 2014-07-13T12:54:27.156Z · score: 2 (2 votes) · LW · GW

For a number of modes of information-gathering, nature has equipped us with internal access to our own states (subjective colors, sounds, etc.) as well as the external world-properties themselves. That's something today's computers (outside of an AI lab maybe) don't do.

Surely any computer that controls an automated process must do this?

Consider, for example, a robotic arm used to manufacture a car. The software knows that if the arm moves like so, then it will be holding the door in the right place to be attached; and it knows this before it actually moves the arm. So it must have an internal knowledge of its own state, and of possible future states.

Isn't that exactly what you describe here?

comment by torekp · 2014-07-13T17:05:34.393Z · score: 2 (2 votes) · LW · GW

I was focusing on perceptual channels, so your motor-channel example would be analogous, but not the same. If the robot uses proprioception to locate the arm, and if it makes an appearance/reality distinction on the proprioceptive information, then you have a true example.

comment by CCC · 2014-07-13T19:33:37.752Z · score: 1 (1 votes) · LW · GW

Hmmm.

Assume for the moment that the robot has a sensor of some type on each joint that can tell it at which angle that joint is being held; that would be a robotic form of proprioception.

And if it considers hypothetical future states of the arm, as it must do in order to safely move the arm, then it must consider what proprioceptive information it expects to get from the arm, and compare this to the reality (the actual sensor value changes) during the movement of the arm.

I think that's an example of what you're talking about...

comment by torekp · 2014-07-13T22:53:10.597Z · score: 1 (1 votes) · LW · GW

One more thing: if the sensor values are taken as absolute truth and the motor-commands are adjusted to meet those criteria, that still wouldn't suffice. But if you include a camera as well as the proprioceptors, and appropriate programming to reconcile the two information sources into a picture of an underlying reality, and make explicit comparisons back to each sensory domain, then you've got it.

Note that if two agents (robotic or human) agree on what external reality is like, but have no access to each other's percepts, the whole realm of subjective experience will seem quite mysterious. Each can doubt that the other's visual experience is like its own, for example (although obviously certain structural isomorphisms must obtain). Etc. Whereas, if an agent has no access to its own subjective states independent of its picture of reality, it will see no such problem. Agreement on external reality satisfies its curiosity entirely. This is why I brought the issue up. I apologize for not explaining that earlier; it's probably hard to see what I'm getting at without knowing why I think it's relevant.

comment by CCC · 2014-07-14T14:38:08.346Z · score: 2 (2 votes) · LW · GW

Ah, thank you. That makes it a lot clearer.

I've seen a system that I'm pretty sure fulfills your criteria - it uses a set of multiple cameras at carefully defined positions and reconciles the pictures from these cameras to try to figure out the exact location of an object with a very specific colour and appearance. That would be the "phenomenal consciousness" that you describe; but I would not call that system any more or less conscious than any other computer.

Note that if two agents (robotic or human) agree on what external reality is like, but have no access to each other's percepts, the whole realm of subjective experience will seem quite mysterious. Each can doubt that the other's visual experience is like its own, for example (although obviously certain structural isomorphisms must obtain). Etc.

Ah - surely that requires something more than just an appearance-reality distinction. That requires appearance-reality distinction and the ability to select its own thoughts. While the specific system I refer to in the second paragraph has an appearance-reality distinction, I have yet to see any sign that it is capable of choosing what to think about.

comment by torekp · 2014-07-14T22:39:14.461Z · score: 2 (2 votes) · LW · GW

That (thought selection) seems like a good angle. I just wanted to throw out a necessary condition for phenomenal consciousness, not a sufficient one.

comment by William_Quixote · 2014-07-18T16:15:04.301Z · score: 1 (1 votes) · LW · GW

Without trying to understand consciousness at all, I will note a few observables about it. We seem to be biologically prone to recognize it. People seem to recognize it even in things that we would mostly agree don't actually have it. So we know biology/evolution selected a tendency to recognize it, and we know that the selection pressure was such that the direction of error is to recognize it when it's not there rather than to fail to recognize it when it is. That implies that failure to recognize consciousness is probably very non-adaptive. Which means that it's probably pointing to something significant.

comment by KnaveOfAllTrades · 2014-07-19T01:40:49.862Z · score: 3 (3 votes) · LW · GW

It's not clear to me that categorising or treating things as conscious is innate/genetic/whatever. This seems like exactly the kind of relatively easy empirical question of human nature where anthropology can just come along and sucker-punch you with a society that has no conception of consciousness.

In general, I think this heuristic is very weak evidence; belief in the supernatural and acceptance of fake or epiphenomenal explanations are mistakes to which humans and their societies are reliably prone. (In fact, if I had to try to name things that wouldn't get me sucker-punched by anthropology when I claimed them as universal to human cultures, then belief in the supernatural and fake explanations might be near the top of the list.)

comment by djm · 2014-07-11T15:06:49.487Z · score: 1 (3 votes) · LW · GW

Interesting post - while I don't have any real answers I have to disagree with this point:

"Why do you think your computer is not conscious? It probably has more of a conscious experience than, say, a flatworm or sea urchin. (As byrnema notes, conscious does not necessarily imply self-aware here.)"

A computer is no more conscious than a rock rolling down a hill - we program it by putting sticks in the rock's way to guide it along a different path. We have managed to make some impressive things using lots of rocks and sticks, but there is not a lot more to it than that in terms of consciousness.

comment by Tenoke · 2014-07-11T15:45:46.401Z · score: 13 (13 votes) · LW · GW

Note that you can also describe humans under that paradigm - we come pre-programmed, and then the program changes itself based on the instructions in the code (some of which allow us to be influenced by outside inputs). The main difference between us and his computer here is that we have fewer constraints, and we take more inputs from the outside world.

I can imagine other arguments for why a computer might not be considered conscious at all (mainly if I play with the definition), but I don't see much difference between us and his computer in regard to this criterion.

P.S. Also the computer is less like the rolling rock, and more like the hill, rock and sticks - i.e. the whole system.

comment by The_Duck · 2014-07-12T02:24:18.743Z · score: 6 (6 votes) · LW · GW

A computer is no more conscious than a rock rolling down a hill - we program it by putting sticks in the rock's way to guide it along a different path.

Careful!--a lot of people will bite the bullet and call the rock+stick system conscious if you put a complicated enough pattern of sticks in front of it and provide the rock+stick system with enough input and output channels by which it can interact with its surroundings.

comment by cameroncowan · 2014-07-18T05:28:37.191Z · score: -3 (5 votes) · LW · GW

I think part of the problem here is that you are confusing existence with consciousness and reason with consciousness.

Deciding that we "exist" is something that philosophers have defined as thinking (Decartes) to the use of language (Heidegger). If there is one common thing to the human existence is that we have figured out that we are different because we can make a decision using reason to do or not to do something. We also are more aware of the passing of the seasons, planets, and other phenomena at a level beyond the instinctive (brain stem/lower brain) and can make decisions based upon that. Humans also have the ability to observe, take in knowledge and then make decisions based upon that information that fly in the face of instinct and continue to do so for succeeding moments without reverting to instinct. This ability is described as consciousness.

Everything in the universe is connected by the material that was present in the big bang and, I believe, by impetus from the Divine. Using your virus example, it is possible, at very high levels of meditation technique, to energetically work with the virus and take it out of your body. Take A.R. Heaver, for example: after suffering paralysis following a biplane accident in WWI, he used esoteric techniques and experienced robust health until his 80s. Because everything is connected and has a level of consciousness that varies from ours (very high) to the lowest (single cell), you can tap into that consciousness and create great change. You can observe group consciousness as well. If I take a group of humans and start talking to them about something, and I can speak from a place of authority and convince them that I am qualified to speak on that topic, then I can begin to convince them of a variety of things. I can even employ questions and talk amongst them as devices to advance my goals. Cultures where group consensus and agreement are paramount see this used every day as a way for society to move forward and to come to quick, culturally agreed solutions to problems. Many people have said that this group consciousness caused everything from the rise of Communism in Russia to the rise of National Socialist (Nazi) politics in Germany.

To say that there is no consciousness is to deny your own existence, your own awareness of what is happening around you, and to deny your connection with everything from the smallest of the small to the expanse that is the universe. It is all us and we are all it. Your computer has a kind of consciousness, although its consciousness is limited. Your car has a kind of consciousness. Environments have consciousness through systems. Take the interstate, for example. It has systems (signs, lanes, on-ramps, off-ramps, overpasses, and barriers) that allow humans piloting cars to navigate the environment and achieve their transportation goals. To deny that consciousness is to deny the awareness of the engineers, the construction workers, the paint, the concrete, and the technology behind it that created it. You also deny the conscious decisions made by everyone that allow us to drive 55-75 mph without crashing into each other. If you work with objects, you can begin to understand them and their consciousness, their beingness, and their contribution and connectedness to the greater Whole. All of the universe is looking for wholeness, and within each of us is that unique possibility. This is consciousness.

comment by shminux · 2014-07-18T19:34:55.887Z · score: 0 (2 votes) · LW · GW

Congratulations, you expanded the term to include everything and thus made it completely useless.

comment by cameroncowan · 2014-07-19T03:20:40.767Z · score: 0 (2 votes) · LW · GW

The term is not useless just because it applies to everything. That would be like saying the term "air" is less useful because we don't specify all the gases it contains or exactly where it is. The point is that consciousness is like a connection, a type of connect-mind, that ties all things together. Because it is within all things, it allows us to connect with objects both large and small. It's a system of beingness.