Comments

Comment by Signer on The Point of Trade · 2021-06-22T19:09:10.198Z · LW · GW

there are two more magical powers

Zero size in 3rd dimension and time travel.

Comment by Signer on Can someone help me understand the arrow of time? · 2021-06-15T18:10:06.954Z · LW · GW

there’d be no point doing anything

That's not even in the top five reasons to not do anything:

  • There is no ultimate reason to do things you have reasons to do.
  • Everything you don't do still happens in other parts of the multiverse.
  • Your doing things has a chance of creating an infinite number of hells.
  • Valence is arbitrary.
  • The most precise model of you is an action minimizer, so you should not do things.

And I don't think it's totally solved, but you can interpret "we exist at a single point in the timeline" as something like "you can describe yourself differentially", i.e. what really exists is the timeline. Then the point of the theory is that if the timeline contains memories, then it contains all your experiences.

Comment by Signer on The Homunculus Problem · 2021-05-27T21:40:13.241Z · LW · GW

I mean, why? You just double-unjoke it and get "the way to talk about it is to just talk about piping processed sense data". Which may not be that different from the homunculus model, but then again I'm not sure how it's problematic in the visual illusion example.

Comment by Signer on Response to "What does the universal prior actually look like?" · 2021-05-21T23:27:16.758Z · LW · GW

And I guess that's where decision-theoretic questions arise - if basement inductors are willing to wait for enough frames, then we can't do anything, so we won't. Because we wouldn't have enough simplicity to fake observations indefinitely, right? Otherwise we are the intended model.

Comment by Signer on Response to "What does the universal prior actually look like?" · 2021-05-20T21:09:52.293Z · LW · GW

To clarify, sufficient observations would still falsify all "simulate simple physics, start reading from simple location" programs and eventually promote "simulate true physics, start reading from camera location"?

Comment by Signer on Vim · 2021-04-08T08:19:23.748Z · LW · GW

Modal editing is a nice idea for compressing many hotkeys - it's a shame IDEs don't support defining your own modes.

Comment by Signer on What Do We Know About The Consciousness, Anyway? · 2021-04-07T22:54:09.921Z · LW · GW
  1. To me it looks like the defining feature of consciousness intuition is one's certainty in having it, so I define consciousness as the only thing one can be certain about and then I know I am conscious by executing "cogito ergo sum".

  2. I can imagine disabling specific features associated with awareness starting with memory: seeing something without remembering feels like seeing something and then forgetting about it. Usually when you don't remember seeing something recent it means your perception wasn't conscious, but you certainly forgot some conscious moments in the past.
    Then I can imagine not having any thoughts. It is harder for long periods of time, but I can create short durations of just seeing that, as far as I remember, are not associated with any thoughts.
    At that point it becomes harder to describe this process as self-awareness. You could argue that if there is a representation of the lower level somewhere in the high level, then it is still modeling. But there is no more reason to consider these levels parts of the same system than to consider any sender-receiver pair a self-modeling system.

  3. I don't know. It's all ethics, so I'll probably just check for some arbitrary similarity-to-human-mind metric.

we have reasons to expect such an agent to make any claim humans make

Depending on detailed definitions of "reflect on itself" and "model itself perceiving" I think you can make an agent that wouldn't claim to be perfectly certain in its own consciousness. For example, I don't see a reason why some simple cartesian agent with direct read-only access to its own code would think in terms of consciousness.

Comment by Signer on What Do We Know About The Consciousness, Anyway? · 2021-04-03T20:36:59.564Z · LW · GW

So my theory is that I can perceive myself as a human mind mostly because the self-reflecting model—which is me—has trained to perceive other human mind so well that it learned to generalize to itself.

What's your theory for why consciousness is actually your ability to perceive yourself as a human mind? From your explanation it seems to be

  1. You think (and say) you have consciousness.
  2. When you examine why you think it, you use your ability to perceive yourself as a human mind.
  3. Therefore consciousness is your ability to perceive yourself as a human mind.

You are basically saying that the consciousness detector in the brain is an "algorithm of awareness" detector (and the algorithm of awareness can work as an "algorithm of awareness" detector). But what are the actual reasons to believe it? Only that if it is awareness, then that explains why you can detect it? It certainly is not a perfect detector, because some people will explicitly say "no, my definition of consciousness is not about awareness". And it doesn't automatically fit into "If you have a conscious mind subjectively perceiving anything about the outside world, it has to feel like something" if you just replace "conscious" with "able to perceive itself".

Comment by Signer on What Do We Know About The Consciousness, Anyway? · 2021-04-02T19:35:26.877Z · LW · GW

Ok, by these definitions what I was saying is "why does not having the ability to do recursion stop you from having pain-qualia?". Just feeling like there is a core of truth to qualia ("conceivability" in zombie language) is enough to ask your world-model to provide a reason why not everything, including recursively self-modeling systems, feels like qualialess feelings - why is recursive self-modeling not just another kind of reaction and perception?

Comment by Signer on What Do We Know About The Consciousness, Anyway? · 2021-04-01T23:44:30.542Z · LW · GW

I believe it depends on one's preferences. Wait, you think it doesn't? By "ability to do recursion" I meant "ability to predict its own state altered by receiving the signal", or whatever the difference of the top level is supposed to be. I assumed that in your model whoever doesn't implement it doesn't have qualia and therefore doesn't feel pain, because there is no one to feel it. And for those interested in the Hard Problem the question would be "why does this specific physical arrangement, interpreted as recursive modeling, feel so different from when the pain didn't propagate to the top level".

Comment by Signer on Why We Launched LessWrong.SubStack · 2021-04-01T21:18:03.434Z · LW · GW

So the money play is supporting Substack in GreaterWrong and maximizing engagement metrics by unifying LessWrong's and ACX's audiences in preparation for the inevitable LessWrong ICO?

Comment by Signer on What Do We Know About The Consciousness, Anyway? · 2021-04-01T20:46:27.260Z · LW · GW

when a sensation perceived by a human (in the biological sense of perceiving) stops being a quale?

When it stops feeling like your "self-awareness" and starts feeling like "there was nobody “in there”". And then it raises questions like "why does not having the ability to do recursion stop you from feeling pain".

Comment by Signer on Could billions spacially disconnected "Boltzmann neurons" give rise to consciousness? · 2021-04-01T18:21:56.474Z · LW · GW

No value-free arguments against it, but it can probably be argued that you can't do anything to help Boltzmann brains anyway.

Comment by Signer on Toward A Bayesian Theory Of Willpower · 2021-03-26T18:49:52.606Z · LW · GW

I don't understand what the point is of calling it "evidence" instead of "updating weights", unless the brain literally implements P(A|B) = [P(A)*P(B|A)]/P(B) for high-level concepts like “it’s important to do homework”. And even then this story about evidence and beliefs doesn't add anything to an explanation in terms of a specific weight-aggregation algorithm.
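
For concreteness, here is a minimal Python sketch of the contrast being drawn (my illustration, not from the original comment; the concepts, weights, and likelihood ratio are hypothetical): a literal Bayesian update on "it's important to do homework" versus a generic weight-aggregation step that uses the same inputs without any probabilistic semantics.

```python
# Hypothetical sketch: Bayesian updating vs. plain weight aggregation.

def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Update P(A) given evidence B via the odds form of P(A|B) = P(A)*P(B|A)/P(B)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio  # likelihood_ratio = P(B|A) / P(B|not A)
    return posterior_odds / (1.0 + posterior_odds)

def aggregate_weights(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Combine drives as a weighted sum; no claim that the result is a probability."""
    return sum(weights[name] * value for name, value in signals.items())

# "It's important to do homework" treated as a belief updated by evidence...
print(bayes_update(prior=0.5, likelihood_ratio=3.0))          # -> 0.75
# ...versus the same inputs treated as weighted drives competing for behavior.
print(aggregate_weights({"deadline": 1.0, "tiredness": -1.0},
                        {"deadline": 0.7, "tiredness": 0.5}))  # -> 0.2
```

Nothing in the second function requires Bayes' rule; that is the sense in which calling the inputs "evidence" adds nothing unless the update literally has the first form.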

Comment by Signer on What I'd change about different philosophy fields · 2021-03-13T08:02:41.393Z · LW · GW

The LEDs are physical objects, and so your list of firings could be wrong about the physical fact of actual firing if you had a hallucination when making that list. Same with the neurons: it's either indirect knowledge about them, or no one actually knows whether some neuron is on or off.

Well, except you can say that the neurons or LEDs themselves know about themselves. But first, that's just renaming "knowledge and reality" to "knowledge and direct knowledge", and second, it still leaves almost all seemings (except "the left half of a rock seems like the left half of a rock to the left half of a rock") uncertain - even if your sensations can be certain about themselves, you can't be certain that you are having them.

Or you could have an explicitly Cartesian model where some part of the chain "photons -> eye -> visual cortex -> neocortex -> expressed words" is arbitrarily defined as always-true knowledge. Like, if the visual cortex says "there is an edge at (123, 123) of visual space", you interpret it as true or as an input. But now you have the problem of determining "true about what?". It can't be certain knowledge about the eye, because the visual cortex could be wrong about the eye, and it can't be about the visual cortex for any receiver of that knowledge, because it could be spoofed in transit. I guess implementing a Cartesian agent would be easier, or maybe some part of any reasonable agent is even required to be Cartesian, but I don't see how certainty in inputs can be justified.

Comment by Signer on What I'd change about different philosophy fields · 2021-03-09T22:01:38.128Z · LW · GW

Yes, there are experiences, not only beliefs about them. But as with beliefs about external reality, beliefs can be imprecise.

It is possible to create a more precise description of how something seems to you, one for which your internal representation with an integer count of built things is just an approximation. And you can even define some measure of the difference between experiences, instead of just talking about separate objects.

It is not an extremely bad approximation to say "it seems like two sentences to me", so it is not like being sure in the absence of experience is the right way either.

The only thing you can be sure of is that something exists, because otherwise nothing could produce any approximations. But if you can't precisely specify the temporal or spatial or whatever characteristics of your experience, there is no sense in which you can be sure of how something seems to you.

Comment by Signer on What I'd change about different philosophy fields · 2021-03-09T14:34:32.159Z · LW · GW

Are you seriously saying that “You can not be sure how the world seems to you” has significant plausibility?

How sure are you that this sentence seems to you the same as it seemed to you 1 ms ago? If you can't precisely quantify the difference between experiences, you can't have perfect certainty in your beliefs about experience. And it gets worse when you leave the zone that the brain's reflective capabilities were optimized for.

Comment by Signer on I'm still mystified by the Born rule · 2021-03-05T00:38:26.964Z · LW · GW

Regarding Q3, I don't understand what's wrong with the observation that we checked the Born rule by doing repeated experiments, and that QM without the Born rule already predicts (by taking inner products) that after repeated experiments the amplitude in all regions that contradict Born statistics tends to zero. That way we get a consistent world picture where all that really happens is amplitude decrease, and following the Born rule is just an arbitrary preference.
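
To make that concrete, here is a minimal numerical sketch (my illustration, not from the original comment; the state and tolerance are made up) of the claim: for a qubit a|0> + b|1> measured n times, the total squared norm carried by outcome strings whose observed frequency of "1" deviates from |b|^2 by more than some tolerance shrinks toward zero as n grows, using nothing but amplitudes and inner products.

```python
# Hypothetical sketch: squared norm of the "off-Born" branches after n repeated measurements.
from math import comb

def off_born_norm(p1: float, n: int, eps: float) -> float:
    """Total squared norm of branches whose frequency of outcome 1 lies outside p1 +/- eps.

    p1 = |b|^2 for the state a|0> + b|1>; a branch with k ones has amplitude
    a^(n-k) * b^k, so its squared norm is (1 - p1)^(n-k) * p1^k, and there are
    comb(n, k) such branches.
    """
    return sum(
        comb(n, k) * (p1 ** k) * ((1.0 - p1) ** (n - k))
        for k in range(n + 1)
        if abs(k / n - p1) > eps
    )

for n in (10, 100, 1000):
    print(n, off_born_norm(p1=0.3, n=n, eps=0.05))
# The off-Born norm shrinks as n grows; the deviant branches never vanish,
# their amplitude just tends toward zero.
```

Whether you then care about that norm, i.e. read it as a probability, is the extra step the comment calls an arbitrary preference.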

Comment by Signer on Are the Born probabilities really that mysterious? · 2021-03-02T17:56:03.256Z · LW · GW

The problem is that there is no reason to posit any additive measure and treat it as a probability. It can be done, but QM itself doesn't assign any significance to the numbers you would get by squaring amplitudes.

Comment by Signer on Qualia Research Institute: History & 2021 Strategy · 2021-01-26T19:45:37.576Z · LW · GW

The way I see it, the crux is not in a deep structure being definable - functionalism is perfectly compatible with definitions of experience on the same level of precision and reality as elements. And research into the physical structures that people associate with consciousness certainly can be worthwhile, and it can be used to resolve ethical disagreements in the sense that actual humans would express agreement afterwards. But the stance of QRI seems to be that the resulting precise definition would be literally objective, as in "new fundamental physics" - I think it should be explicitly clarified whether that's the case.

Comment by Signer on Grokking illusionism · 2021-01-07T01:42:44.165Z · LW · GW

Hmm, I'm not actually sure about quantifying the ratio of crazy to predictive intuitions (especially in the case of generalizing to include perception) to arrive at a low prior for intuitions. The way I see it, if everyone had an interactive map of Haiti in the corner of their vision, we should try to understand how it works and find what it corresponds to in reality - not immediately dismiss it. Hence the question about the specific illusory parts of consciousness.

Anyway, I think the intuition about consciousness does correspond to a part of reality - to the "reality" part. I.e. panpsychism is true, and the zombie thought experiment illustrates the difference between the real world and a world that does not exist. It doesn't involve additional primitives, because physical theories already include reality, and it diverges from the intuition about consciousness in unsurprising places (like the intuition being too anthropocentric).

Comment by Signer on Grokking illusionism · 2021-01-06T20:25:54.256Z · LW · GW

I appreciate the difference between absolute certainty and allowing the possibility of error, but as a matter of terminology, "illusion" is usually used to refer to things that are wrong, not things that merely may be wrong. Words don't matter that much, of course, but I'm still interested in which intuitions about consciousness you consider probably not to correspond to reality at all. For example, what do you do with the intuition underlying the zombie argument:

  1. Would you say the statement "we live in non-zombie world" is true?
  2. Or the entire setup is contradictory because consciousness is a label for some arbitrary structure/algorithm and it was specified that structures match for both worlds?
  3. Or do you completely throw away the intuition about consciousness as not useful?

From what you said I guess it's 2 (which by the way implies that whether you/you from yesterday/LUT-you/dust-theoretic copies of you/dogs feel pain is a matter of preferences), so the next question is what evidence there is for the conclusion that the intuition about consciousness can't map to anything other than an algorithm in the brain. It can't map to something magical, but what if there is some part of reality that this intuition corresponds to?

Comment by Signer on Grokking illusionism · 2021-01-06T17:05:40.343Z · LW · GW

feelings didn’t necessarily map to reality, no matter how real they felt

But they do map to reality, just not perfectly. "I see a red stripe" approximately maps to some brain activity. Sure, feelings about them being different things may be wrong, but "illusionism about everything except physicalism" is just restating physicalism without any additional argument. So which feelings are you illusionistic about?

Comment by Signer on Grokking illusionism · 2021-01-06T16:08:41.866Z · LW · GW

People have various intuitions about phenomenal consciousness

People say that, but are there actual studies of expressed intuitions about consciousness?

Comment by Signer on Ethics in Many Worlds · 2020-11-09T19:20:25.450Z · LW · GW

The main reason is the double-slit experiment: if you start with a notion of reality that expects the photon to travel through either one slit or the other, and then nature is like ~_~, that is already a sufficient reason to rethink reality. Different parts of a probability distribution don't influence each other.

What happens if we experimentally discover a deeper layer of physics beneath QM

I mean, there is no need for hypotheticals - it's not like we started with a probabilistic reality - we started with gods. And then everyone already changed their notion of reality to the probabilistic one in response to QM. Point is, changing one's ontology may not be easy, but if you prohibit continuous change then the Spirit of the Forest welcomes you. So yes, if we discover new, better physics and it doesn't include interference between worlds, then sure, we dodged this bullet. But until then I see no reason not to assume MWI without special status for any measure. We don't even lose any observations that way - we just now know what it means to observe something.

Comment by Signer on Ethics in Many Worlds · 2020-11-09T09:40:30.947Z · LW · GW

It has some notion - that notion is just not classical and not fundamental. What happens when you study the results of any experiments or make predictions is described by the theory. It just doesn't describe it in classical or probabilistic terms, because they are not real. And it doesn't tell you how to maximize knowledge, because that's ambiguous without specifying how to aggregate knowledge across different branches.

Comment by Signer on Ethics in Many Worlds · 2020-11-08T20:58:37.319Z · LW · GW

I'm saying that the classical notions of prediction, knowledge, and observation, and the need to explain them in the classical sense, should not be a fundamental part of the theory under MWI. It is a plain consequence of the QM equations that the amplitudes of the branches where the frequency in repeated experiments contradicts the Born rule tend to zero. The theory just doesn't tell us why the Born probabilities are right for specific observables in an absolute sense, because there are no probabilities or sampling at the physical level, and the wavefunction containing all worlds continues to evolve as it did before. We can label the "amplitudes of the branches where x is wrong tend to zero" situation as "we observe x", but that would be an arbitrary ethical decision. The Hilbert measure is correct only if you want to sum over branches, but there is nothing in the physics that forces you to want anything.

Comment by Signer on Ethics in Many Worlds · 2020-11-08T17:21:06.500Z · LW · GW

correct predictions

"Correct" only in the sense that the measure of branches where it's not correct approaches zero. So only matters if you already value such a measure.

Comment by Signer on Multiple Worlds, One Universal Wave Function · 2020-11-05T14:14:45.128Z · LW · GW

Any actual application of QM still requires the Copenhagen approach.

You can derive the practical usefulness of the Copenhagen approach from MWI without postulating the reality of observables.

I have never actually heard any coherent arguments in favor of the reality of observables. If we are giving up on minimizing complexity, why not go all the way back to the original intuitions and say that the Spirit of the Forest shows you a world consistent with QM calculations?

And to avoid misunderstanding: MWI means the wavefunction is real, but worlds and the Born rule are just an arbitrary approximation.

Comment by Signer on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-02T21:38:17.036Z · LW · GW

It sounds useful, but I don't see any reason to include the way I treat anything in my ontology. That the wavefunction is nearly zero in all regions where Born statistics fail is just a consequence, not a postulate. Similarly, you can derive that following the Bayes rule will result in the largest amount of measure for states where you know something. Whether you want this or not is a purely ethical question, and ethics today is as arbitrary as it was yesterday. You might as well only track uncertainty about the wavefunction and not a specific decoherence path, and decide to minimize worst-case ignorance or something.

You would need a postulate only if you want there to be some fundamental point-knowledge, but there are no point-states in reality - everything is just amplitudes.

Comment by Signer on The Born Rule is Time-Symmetric · 2020-11-02T02:32:43.024Z · LW · GW

In a neighborhood of , there are many slightly different versions of you and many slightly different versions of the ball.

In the generalization, does the neighborhood refer to nearby states in the wavefunction, or to different possible future/past wavefunctions (i.e. distributions of complex numbers over space)?

If the first, how does it work with the whole (region of the) wavefunction evolving simultaneously? I guess I just have unresolved doubts about a timeless distribution of amplitude, like does it actually check out that the past and future are always in the neighborhood in relative configuration space? Or how do you normalize amplitude over an expanding space? And in that picture without interaction it's harder for me to, well, justify laws that generate amplitudes for neighboring states.

If the second, don't we have only one possible future, because the evolution of the wavefunction is deterministic?

Comment by Signer on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-02T00:14:28.752Z · LW · GW

Right, I confused epistemic assumptions with ontological assumptions. But does minimizing epistemic assumptions even make sense? I mean, we don't start with PBR anyway - we start with whatever happened to be in our brains. So what's the point then in selecting the description of the universe that is epistemically nearest to our starting state, as opposed to the least complex one? I guess it would be interesting if we actually could reach PBR QM without ever invoking complexity minimization...

Comment by Signer on The Born Rule is Time-Symmetric · 2020-11-02T00:03:11.391Z · LW · GW

Does this generalize from the Born rule to continuous decoherence?

And why is there a walk at all? I mean, sure, memory is just a consequence of the wave equation like everything else, but I'm still having trouble conceptualizing timeless interaction...

Comment by Signer on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-01T22:22:52.452Z · LW · GW

The results of the laws are indeterministic, but the laws themselves are kinda not - you always get the same probabilities. So I figured you would need additional complexity to distinguish between the deterministic and indeterministic parts of the description of the universe.

Forgive my ignorance, but why do we need the projection postulate in MWI?

Comment by Signer on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-01T21:45:07.567Z · LW · GW

I assumed at least the laws themselves don't change in PBR, so we still need some things to be deterministic in addition to indeterministic things. Not sure if it requires additional ontology, but it still seems to result in a more complex theory?

Perspectives in MWI can be derived.

I guess "actions behaving according to wavefuntion" in PBR do replace "wavefunction", but would't then laws of behavior became more complex to include that translation from wavefunction to actions?

Comment by Signer on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-01T00:17:22.383Z · LW · GW

It actually involves even fewer assumptions than the MWI because it rejects the commonly accepted postulate in Step 3 above.

I don't see why we need any of the three. MWI assumes only the wavefunction. PBR assumes perspective, actions, indeterminism, and still the wavefunction, which describes the behavior of actions.

Comment by Signer on The Solomonoff Prior is Malign · 2020-10-17T23:28:31.465Z · LW · GW

Wouldn't the complexity of Earth and the conditioning on importance be irrelevant, because they would still appear in the consequentialists' distribution of strings and in the specification of what kind of consequentialists we want? Therefore they would only have the advantage of the anthropic update, which would go to zero in the limit of the string's length (because the choice of language would correlate with the string's content), and the penalty for their universe + output channel.

Comment by Signer on This Territory Does Not Exist · 2020-08-14T19:42:19.165Z · LW · GW

That's because "subjective experience" and "reality" are the same thing - panpsychism solves the Hard Problem and provides some intuitions for what "reality" means.

Comment by Signer on Many-worlds versus discrete knowledge · 2020-08-14T15:29:24.765Z · LW · GW

If the simplest physics contradicts epistemology, you should change the epistemology - it would be nice to develop some weird quantum knowledge theory without fundamental discrete facts.

Comment by Signer on Neural Basis for Global Workspace Theory · 2020-06-25T16:55:40.112Z · LW · GW

If there’s no global workspace, and there’s just the thalamus doing sensory gating, and routing chunks of cortex to each other, I’d expect to see a lot more multi tasking ability.

What if there is a global workspace, but it doesn't hold one value? On some level that has to be true anyway - perception is not one-dimensional. And it all depends on the definition (granularity) of a task - if we need to explain why the global workspace can't be dominated by a page with half math problems and half story, then we can use the same explanation for why the state of the workspace learned not to usually be like that. I can see how the interconnectedness of the workspace means all parts of the input vector influence all of the workspace's state, and so you can't easily process different inputs independently, but can't you process a combined input? Isn't that what happens when you first just see something, then hear "Tell me what you see", and the action is produced because of what you see and hear?

Comment by Signer on Neural Basis for Global Workspace Theory · 2020-06-25T00:40:05.774Z · LW · GW

Functionally, the global workspace is an area that disparate parts of the cortex can all compete to put a value on. This competition is winner-takes-all, and only one value can be on the network at a time. Once a value is on the network, the rest of the cortex is able to read the value, thus serving as a temporary “global state”, hence the name.

What does it even mean for a network to have a global value? What's the evidence that the selection of a winner always happens in the TIN? Because it seems unnecessary for an explanation of conscious processing and attention when we already have a feedback loop with the thalamus. Like, we get a visual input, it propagates through the TIN, makes the thalamus switch attention from external sensations to mental imagery, which, when mixed with the current state of the TIN, after some iterations produces an action. Subliminal stimuli just don't make it to the feedback loop and therefore don't influence things very much.

Comment by Signer on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-23T15:56:07.571Z · LW · GW

Make in my mind. Of course you can't change reality by shuffling concepts. But the idea is that all the ways consciousness works that are problematic are separate from the other (easy) aspects of consciousness. So consciousness works how it worked before - you see clouds because something made your neurons activate in that pattern. You just recognise that the confusing parts of consciousness (which I think all boil down to the zombie argument) are actually what we call "existence".

Comment by Signer on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-23T15:18:34.228Z · LW · GW

But the problems with existence don't become more severe because of the merging of the "existence" and "consciousness" concepts. On the contrary: before, we didn't have any concrete idea of what it would mean to exist or not, but now we can at least use our intuitions about consciousness instead. And, on the other hand, all the problematic aspects of consciousness (like the surprising certainty about having it) are contained in existence.

Amusingly, I've just gotten back from a flight where I put my backpack into my bag, so I could use it for luggage on the return flight^^.

Comment by Signer on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-23T13:53:11.562Z · LW · GW

Oh, and if by "why and how is everything conscious" you mean "why believe in panpsychism" and not "what causes consciousness in the panpsychist view", then, first, it's less about how panpsychism solves the Hard Problem and more about why to accept this particular solution. So, moving goalposts and all that^^. I don't quite understand why someone would be so reluctant to accept any solution that is kinda physicalist and kinda non-epiphenomenal, considering people say that they don't even understand how a solution would look in principle. But there are reasons why panpsychism is the only acceptable solution: if consciousness influences the physical world, then it either requires new physics (including strong emergence), or it is present in everything. You can detect the difference between different states of mind with just weak emergence, but only "cogito, ergo sum" doesn't also work in the zombie world.

Comment by Signer on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-23T12:52:03.775Z · LW · GW

why and how do we have subjective experience, rather than experiencing nothing

Because we exist. "Because" not in the sense of causal dependency, but in the sense of equivalence. The point is that we have two concepts (existence and consciousness) that represent the same thing in reality. "Why they are the same" is equivalent to "why there is no additional "consciousness" thing", and that is just asking why reality is like it is. And it is not the same as saying "it's just the way the world is, that we have subjective experience" right away - panpsychism additionally states that it is not only we who have experience, and provides a place for consciousness in a purely physical worldview.

And for "how" - well, it's the question of the nature of existence, because there is no place for mechanism between existence and consciousness - they are just the same thing. So, for example, different physical configurations mean different (but maybe indistinguishable by agent) experiences. And not sure if it counts as "how", but equivalence between consciousness and existence means every specific aspect of consciousness can be analysed by usual scientific methods - "experience of seeng blue" can be emergent, while consciousness itself is fundamental.

I mean, sure, "why everything exists" is an open question, so it may seem like a pointless redefinition. But if we started with two problems and ended with one, then one of them is solved.

Comment by Signer on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-23T10:32:59.722Z · LW · GW

The Hard Problem is basically "what part of the equation for the wavefunction of the universe says that we are not zombies". The answer of panpsychism is "the part where we say that it is real". When you imagine waking up made of cold silicon and not feeling anything, you are imagining not existing.

The non-fundamental "self" is there just to solve the decomposition problem - there is no isolation of qualia, just qualia of isolation. And it works because it is easier to argue that you can be wrong about some particular aspects of consciousness (like there being fundamentally distinct conscious "selves", or the difference between your current experience of a blue sky and your experience of the same blue sky in the past) than that you can be wrong about there being consciousness at all.

It doesn't answer what all the interesting differences between rocks and human brains are, but those differences are not "Hard" or mysterious - only the difference between zombies and us is "Hard". The interesting parts are just hard to answer because they depend on what you want to know. And if you want to know whether something has that basic spark of consciousness, then the answer is that everything has it.

Comment by Signer on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-13T00:08:02.110Z · LW · GW

It's less that I oppose ever using the words "exist" and "causes" for non-fundamental things, and more that doing so is what makes it vulnerable to the conceivability argument in the first place: the only causal power that the brain has and the rock hasn't comes from a different configuration of quarks in space, but the quarks are in the same places in the zombie world.

Comment by Signer on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-12T23:34:36.066Z · LW · GW

The Hard Problem, according to your description, is that there is no place for consciousness in how things work. Why, then, is making everything be that place not considered a solution to the problem?

And about emergence - what TAG said. I also strongly agree about the importance of the ontology.

Comment by Signer on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-12T20:41:30.747Z · LW · GW

Yeah, I agree that calling it illusionism was a bad idea.

How can I study the consciousness of a rock? How can I compare the consciousness of a small rock vs. a big one?

As in all these questions, it depends on whether you want to study the consciousness which the Hard Problem is about, or the "difference between conscious and unconscious" one. For the former it's just a study of physics - there is a difference between being a granite rock and a limestone rock. The experience would be different, but, of course, indistinguishable to the rock. If you want to study the latter one, you would need to decide what features you care about - similarity to computational processes in the brain, for example - and study them. You can conclude that a rock doesn't have any amount of that kind of consciousness, but there would still be a difference between a real rock and a rock zombie - in the zombie world, reassembling a rock into a brain wouldn't give it consciousness in the mysterious sense. I understand if it starts to sound like eliminativism at this point, but the whole point of non-ridiculous panpsychism is that it doesn't provide rocks with any human experiences like seeing red - the difference would be as much as you can expect between a rock and a human, but there still has to be an experience of being a rock, for any experience not to be epiphenomenal.

What happens to the consciousness of an iceberg when it melts and mingles with the ocean?

It melts and mingles with the ocean. EDIT: There is no need for two different languages, because there is only one kind of thing. When you say "I see the blue sky", you approximately describe a part of your brain.

Am I conscious when I am unconscious? When I am dead?

In the sense of the difference between zombies and us - yes, you would be having an experience of being dead. In the sense of there being relevant brain processes - no, unless you want to bring in quantum immortality or dust theory.

What observations could you show me that would surprise me, if I believed (as I do, for want of anything to suggest otherwise) that rocks and water have no consciousness at all?

If you count logic as observation: that belief leads to a contradiction. Well, to "confusion", or whatever the Hard Problem is - if you didn't believe that, then there wouldn't be a Hard Problem. The surprising part is not that there is a contradiction - everyone expects contradictions when dealing with consciousness - it's that this particular belief is all you need to correct to clear all the confusion. You're probably better off reading Strawson or Chalmers than listening to me, but it goes like this:

  1. Rocks and water have no consciousness at all.
  2. You can create a brain from rocks and water.
  3. Brains have consciousness.
  4. Only epiphenomenal things can emerge.
  5. Consciousness is not epiphenomenal.

It pretends to solve the problem of consciousness by simply attaching the word to everything.

Well, what parts of the problem are not solved by attaching the word to everything?

Comment by Signer on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-12T18:59:53.375Z · LW · GW

Well, the trick is that panpsychism is physicalist in the broad sense, as they say. After all, it's not like physicalists deny the concept of existence, and saying that the thing that is different between us and zombies, the thing we call "consciousness", is actually the thing that physicalists call "reality" does not make it unphysical and doesn't prevent physicalism from working where it worked before. It's all definitional anyway - if panpsychism solves everything, then it doesn't matter whether it is physicalist or not.