Posts

Book Review: Consciousness Explained (as the Great Catalyst) 2023-09-17T15:30:33.295Z
Why it's so hard to talk about Consciousness 2023-07-02T15:56:05.188Z
A chess game against GPT-4 2023-03-16T14:05:17.559Z
Understanding Gödel's Incompleteness Theorem 2022-04-06T19:31:19.711Z
The case for Doing Something Else (if Alignment is doomed) 2022-04-05T17:52:21.459Z
Not-Useless Advice For Dealing With Things You Don't Want to Do 2022-04-04T16:37:05.298Z
How to think about and deal with OpenAI 2021-10-09T13:10:56.091Z
Insights from "All of Statistics": Statistical Inference 2021-04-08T17:49:16.270Z
Insights from "All of Statistics": Probability 2021-04-08T17:48:10.972Z
FC final: Can Factored Cognition schemes scale? 2021-01-24T22:18:55.892Z
Three types of Evidence 2021-01-19T17:25:20.605Z
Book Review: On Intelligence by Jeff Hawkins (and Sandra Blakeslee) 2020-12-29T19:48:04.435Z
Intuition 2020-12-20T21:49:29.947Z
Clarifying Factored Cognition 2020-12-13T20:02:38.100Z
Traversing a Cognition Space 2020-12-07T18:32:21.070Z
Idealized Factored Cognition 2020-11-30T18:49:47.034Z
Preface to the Sequence on Factored Cognition 2020-11-30T18:49:26.171Z
Hiding Complexity 2020-11-20T16:35:25.498Z
A guide to Iterated Amplification & Debate 2020-11-15T17:14:55.175Z
Information Charts 2020-11-13T16:12:27.969Z
Do you vote based on what you think total karma should be? 2020-08-24T13:37:52.987Z
Existential Risk is a single category 2020-08-09T17:47:08.452Z
Inner Alignment: Explain like I'm 12 Edition 2020-08-01T15:24:33.799Z
Rafael Harth's Shortform 2020-07-22T12:58:12.316Z
The "AI Dungeons" Dragon Model is heavily path dependent (testing GPT-3 on ethics) 2020-07-21T12:14:32.824Z
UML IV: Linear Predictors 2020-07-08T19:06:05.269Z
How to evaluate (50%) predictions 2020-04-10T17:12:02.867Z
UML final 2020-03-08T20:43:58.897Z
UML XIII: Online Learning and Clustering 2020-03-01T18:32:03.584Z
What to make of Aubrey de Grey's prediction? 2020-02-28T19:25:18.027Z
UML XII: Dimensionality Reduction 2020-02-23T19:44:23.956Z
UML XI: Nearest Neighbor Schemes 2020-02-16T20:30:14.112Z
A Simple Introduction to Neural Networks 2020-02-09T22:02:38.940Z
UML IX: Kernels and Boosting 2020-02-02T21:51:25.114Z
UML VIII: Linear Predictors (2) 2020-01-26T20:09:28.305Z
UML VII: Meta-Learning 2020-01-19T18:23:09.689Z
UML VI: Stochastic Gradient Descent 2020-01-12T21:59:25.606Z
UML V: Convex Learning Problems 2020-01-05T19:47:44.265Z
Excitement vs childishness 2020-01-03T13:47:44.964Z
Understanding Machine Learning (III) 2019-12-25T18:55:55.715Z
Understanding Machine Learning (II) 2019-12-22T18:28:07.158Z
Understanding Machine Learning (I) 2019-12-20T18:22:53.505Z
Insights from the randomness/ignorance model are genuine 2019-11-13T16:18:55.544Z
The randomness/ignorance model solves many anthropic problems 2019-11-11T17:02:33.496Z
Reference Classes for Randomness 2019-11-09T14:41:04.157Z
Randomness vs. Ignorance 2019-11-07T18:51:55.706Z
We tend to forget complicated things 2019-10-20T20:05:28.325Z
Insights from Linear Algebra Done Right 2019-07-13T18:24:50.753Z
Insights from Munkres' Topology 2019-03-17T16:52:46.256Z
Signaling-based observations of (other) students 2018-05-27T18:12:07.066Z

Comments

Comment by Rafael Harth (sil-ver) on [Intuitive self-models] 8. Rooting Out Free Will Intuitions · 2024-11-15T02:03:01.606Z · LW · GW

The way intuitive models work (I claim) is that there are concepts, and associations / implications / connotations of those concepts. There’s a core intuitive concept “carrot”, and it has implications about shape, color, taste, botanical origin, etc. And if you specify the shape, color, etc. of a thing, and they’re somewhat different from most normal carrots, then people will feel like there’s a question “but now is it really a carrot?” that goes beyond the complete list of its actual properties. But there isn’t, really. Once you list all the properties, there’s no additional unanswered question. It just feels like there is. This is an aspect of how intuitive models work, but it doesn’t veridically correspond to anything of substance.

Mhhhmhh. Let me see if I can work with the carrot example to where it fits my view of the debate.

A botanist is charged with filling a small field with plants, any plants. A chemist hands him a perfect plastic replica of a carrot, perfect in shape, color, texture, and (miraculously) taste. The botanist says that it's not a plant. The chemist, who has never seen plants other than carrots, points out the matching qualities to the plants he knows. The botanist says okay but those are just properties that a particular kind of plant happens to have, they're not the integral property of what makes something a plant. "The core intuitive concept 'plant' has implications about shape, color, texture, taste, et cetera", says the chemist. "If all those properties are met, people may think there's an additional question about the true plant-ness of the object, but [...]." The botanist points out that he is not talking about an intangible, immeasurable, or non-physical property but rather about the fact that this carrot won't grow and spread seeds when planted into the earth. The chemist, having conversed extensively with people who define plants primarily by their shape, color, texture, and taste (which are all those of carrots because they've also not seen other plants) just sighs, rolling his eyes at the attempt to redefine plant-ness to be entirely about this one obscure feature that also just happens to be the most difficult one to test.

Which is to say that I get -- or at least I think I get -- the sense that we're successfully explaining important features of consciousness and the case for linking it to anything special is clearly diminishing -- but I don't think it's correct. When I say that the hard meta problem of seeing probably contains ~90% of the difficulty of the hard meta problem of consciousness whereas the meta problem of free will contains 0% and the problem of awareness ~2%, then I'm not changing my model in response to new evidence. I've always thought Free Will was nonsense!

(The botanist separately points out that there in fact other plants with different shape, texture, and taste, although they all do have green leaves, to which the chemist replies that ?????. This is just to come back to the point that people report advanced meditative states that lose many of the common properties of consciousness, including Free Will, the feeling of having a self (I've experienced that one!) and even the presence of any information content whatsoever, and afaik they tend to be more "impressed", roughly speaking, with consciousness as a result of those experiences, not less.)

[seeing stuff]

Attempt to rephrase: the brain has several different intuitive models in different places. These models have different causal profiles, which explains how they can correspond to different introspective reports. One model corresponds to the person talking about smelling stuff. Another corresponds to the person talking about seeing stuff. Yet another corresponds to the person talking about obtaining vague intuitions about the presence and location of objects. The latter two are triggered by visual inputs. Blindsight turns off the second but not the third.

If this is roughly correct, my response to it is that proposing different categories isn't enough because the distinction between visually vivid experience and vague intuitions isn't just that we happen to call them by different labels. (And the analogous thing is true for every other sensory modality, although the case is the least confusing with vision.) Claiming to see a visual image is different from claiming to have a vague intuition in all the ways that it's different; people claim to see something made out of pixels, which can look beautiful or ugly, seems to have form, depth, spatial location, etc. They also claim to perceive a full visual image constantly, which presumably isn't possible(?) since it would contain more information than can actually be there, so a solution has to explain how this illusion of having access to so much information is possible. (Is awareness really a serial processor in any meaningful way if it can contain as much information at once as a visual image seems to contain?)

(I didn't actually intend to get into a discussion about any of this though, I was just using it as a demonstration of why I think the hard metaproblem of consciousness has at least one real subset and hence isn't empty.)

Hard Problem

Yeah, I mean, since I'm on board with reducing everything to the meta problem, the hard problem itself can just be sidestepped entirely.

But since you brought it up, I'll just shamelessly use this opportunity to make a philosophical point that I've never seen anyone else make, which is that imo the common belief that no empirical data can help distinguish an illusionist from a realist universe... is actually false! The reason is that consciousness is a high-level phenomenon in the illusionist universe and a low phenomenon in at least some versions of the realist universe, and we have different priors for how high-level vs. low-level phenomena behave.

The analogy I like is, imagine there's a drug that makes people see ghosts, and some think these ghosts tap into the fundamental equations of physics, whereas others think the brain is just making stuff up. One way you can go about this is to have a thousand people describe their ghosts in detail. If you find that the brightness of hallucinated ghosts is consistently proportional to their height, then you've pretty much disproved the "the brain is just making stuff up hypothesis". (Whereas if you find no such relationships, you've strengthened the hypothesis.) This is difficult to operationalize for consciousness, but I think determining the presence of absence of elegant mathematical structure within human consciousness is, at least in principle, an answer to the question of "[w]hat would progress on the 'breathes fire' question even look like".

Comment by Rafael Harth (sil-ver) on [Intuitive self-models] 6. Awakening / Enlightenment / PNSE · 2024-11-14T15:00:17.682Z · LW · GW

I think this post fails as an explanation of equanimity. Which, of course, is dependent on my opinion about how equanimity works, so you have a pretty easy response of just disputing that the way I think equanimity works is correct. But idk what to do about this, so I'll just go ahead with a critique based on how I think equanimity works. So I'd say a bunch of things:

  • Your mechanism describes how PNSE or equanimity leads to a decrease in anxiety via breaking the feedback loop. But equanimity doesn't actually decrease the severity of an emotion, it just increases the valence! It's true that you can decrease the emotion (or reduce the time during which you feel it), but imE this is an entirely separate mechanism. So between the two mechanisms of (a) decreasing the duration of an emotion (presumably by breaking the feedback loop) and (b) applying equanimity to make it higher valence, I think you can vary each one freely independent of the other. You could do a ton of (a) with zero (b), a ton of (b) with zero (a), a lot of both, or (which is the default state) neither.

  • Your mechanism mostly applies to mental discomfort, but equanimity is actually much easier to apply to physical pain. You can also apply it to anxiety, but it's very hard. I can reduce suffering from moderately severe physical pain on demand (although there is very much a limit) and ditto with itching sensations, but I'm still struggling a lot with mental discomfort.

  • You can apply equanimity to positive sensations and it makes them better! This is a point I'd emphasize the most because imo it's such a clear and important aspect of how equanimity works. One of the ways to feel really really good is to have a pleasant sensation, like listening to music you love, and then applying maximum equanimity to it. I'm pretty sure you can enter the first jhana this way (although to my continuous disappointment I've never managed to reach the first jhana with music, so I can't guarantee it.)

    ... actually, you can apply equanimity to literally any conscious percept. Like literally anything; you can apply equanimity to the sense of space around you, or to the blackness in your visual field, or to white noise (or any other sounds), or to the sensation of breathing. The way to do this is hard to put into words (similar to how an elementary motor command like lifting a finger is hard to put into words); the way it's usually described is by trying to accept/not fight a sensation. (Which imo is problematic because it sounds like equanimity means stopping to do something, when I'm pretty sure it's actively doing something. Afaik there are ~zero examples of animals who learn to no longer care about pain, so it very much seems like the default is that pain is negative valence, and applying equanimity is an active process that increases valence.)

I mean again, you can just say you've talked about something else using the same term, but imo all of the above are actually not that difficult to verify. At least for me, it didn't take me that long to figure out how to apply equanimity to minor physical pain, and from there, everything is just a matter of skill to do it more -- it's very much a continuous scale of being able to apply more and more equanimity, and I think the limit is very high -- and of realizing that you can just do same thing wrt sensations that don't have negative valence in the first place.

Comment by Rafael Harth (sil-ver) on [Intuitive self-models] 8. Rooting Out Free Will Intuitions · 2024-11-14T13:45:19.291Z · LW · GW

After finishing the sequence, I'm in the odd position where most of my thoughts aren't about the sequence itself, but rather about why I think you didn't actually explain why people claim to be conscious. So it's strange because it means I'm gonna talk a whole bunch about what you didn't write about, rather than what you did write about. I do think it's still worth writing this comment, but with the major disclaimer/apology that I realize most of this isn't actually a response to the substance of your arguments.

First to clarify, the way I think about this is that there's two relevant axes along which to decompose the problem of consciousness:

  • the easy vs. hard axis, which is essentially about the describing the coarse functional behavior vs. why it exists at all; and
  • the [no-prefix] vs. meta axis, which is about explaining the thing itself vs. why people talk about the thing. So for every , the meta problem of is "explain why people talk about "

(So this gives four problems: the easy problem, the hard problem, the easy meta problem, and the hard meta problem.)

I've said in this comment that I'm convinced the meta problem is sufficient to solve the entire problem. And I very much stand by that, so I don't think you have to solve the hard problem -- but you do have to solve the hard meta problem! Like, you actually have to explain why people claim to be conscious, not just why they report the coarse profile of functional properties! And (I'm sure you see where this is going), I think you've only addressed the easy meta problem throughout this sequence.

Part of the reason why this is relevant is because you've said in your introductory post that you want to address this (which I translate to the meta problem in my terminology):

STEP 1: Explain the chain-of-causation in the physical universe that leads to self-reports about consciousness, free will, etc.—and not just people’s declarations that those things exist at all, but also all the specific properties that people ascribe to those things.

Imo you actually did explain why people talk about free will,[1] so you've already delivered on at least half of this. Which is just to say that, again, this is not really a critique, but I do think it's worth explaining why I don't think you've delivered on the other half.

Alright, so why do I think that you didn't address the hard meta problem? Well, post #2 is about conscious awareness so it gets the closest, but you only really talk about how there is a serial processing stream in the brain whose contents roughly correspond to what we claim is in awareness -- which I'd argue is just the coarse functional behavior, i.e., the macro problem. This doesn't seem very related to the hard meta problem because I can imagine either one of the problems not existing without the other. I.e., I can imagine that (a) people do claim to be conscious but in a very different way, and (b) people don't claim to be conscious, but their high-level functional recollection does match the model you describe in the post. And if that's the case, then by definition they're independent.

A possible objection to the above would be that the hard and easy meta problem aren't really distinct -- like, perhaps people do just claim to be conscious because they have this serial processing stream, and attempts to separate the two are conceptually confused...

... but I'm convinced that this isn't true. One reason is just that, if you actually ask camp #2 people, I think they'll tell you that the problem isn't really about the macro functional behavior of awareness. But the more important reason is the hard meta problem can be considered in just a single sensory modality! So for example, with vision, there's the fact that people don't just obtain intangible information about their surroundings but claim to see continuous images.

Copying the above terminology, we could phrase the hard problem of seeing as explaining why people see images, and the hard meta problem of seeing as explaining why people claim to see images.[2] (And once again, I'd argue it's fine/sufficient to only answer the meta problem -- but only if you do, in fact, answer the meta problem!) Then since the hard meta problem of seeing is a subset of the hard meta problem of consciousness, and since the contents of your post very much don't say anything about this, it seems like they can't really have conclusively addressed the hard meta problem in general.

Again, not really a critique of the actual posts; the annoying thing for me is just that I think the hard meta problem is where all the juicy insights about the brain are hidden, so I'm continuously disappointed that no one talks about it. ImE this is a very consistent pattern where whenever someone says they'll talk about it, they then end up not actually talking it, usually missing it even more than you did here (cough Dennett cough). Actually there is at least one phenomenon you do talk about that I think is very interesting (namely equanimity), but I'll make a separate comment for that.


  1. Alas I don't view Free Will as related to consciousness. I understand putting them into the same bucket of "intuitive self-models with questionable veridicality". But the problem is that people who meditate -- which arguably is like paying more attention -- tend to be less likely to think Free Will is real, but I'd strongly expect that they're more likely to say that consciousness is real, rather than less. (GPT-4 says there's no data on this; would be very interesting to make a survey correlating camp#1 vs. camp#2 views by how much someone has meditated, though proving causation will be tricky.) If this is true, imo they don't seem to belong into the same category. ↩︎

  2. Also, I think the hard meta problem of seeing has the major advantage that people tend to agree it's real -- many people claim not to experience any qualia, but everyone seems to agree that they seem to see images. Basically I think talking about seeing is just a really neat way to reduce conceptual confusion while retaining the hard part of the problem. And then there's also blindsight where people claim not to see and retain visual processing capabilities -- but much very much reduced capabilities! -- so there's some preliminary evidence that it's possible to tease out the empirical/causal effects of the hard meta problem. ↩︎

Comment by Rafael Harth (sil-ver) on Rafael Harth's Shortform · 2024-11-12T10:52:18.485Z · LW · GW

Feeling better about this prediction now fwiw. (But I still don't want to justify this any further since I think progress toward AGI bad and LLMs little progress toward AGI, and hence more investment into LLMs probably good.)

Comment by Rafael Harth (sil-ver) on The Mysterious Trump Buyers on Polymarket · 2024-10-29T09:25:47.774Z · LW · GW

You should be good (though I have only bet once; haven't withdrawn yet, so can't guarantee it). I think the gist of it is that Polymarket uses layer 2 and so is cheaper.

Comment by Rafael Harth (sil-ver) on Your memory eventually drives confidence in each hypothesis to 1 or 0 · 2024-10-28T09:38:54.705Z · LW · GW

Feels empirically true. I remember cases where I thought about a memory and was initially uncertain about some aspect of it, but then when I think about it later it feels either true or false in my memory, so I have to be like, "well no, I know that I was 50/50 on this when the memory was more recent, so that's what my probability should be now, even though it doesn't feel like it anymore".

Seems like the fact that I put about a 50% probability on the thing survived (easy/clear enough to remember), but the reasons did not, so the probability no longer feels accurate.

Comment by Rafael Harth (sil-ver) on A Logical Proof for the Emergence and Substrate Independence of Sentience · 2024-10-27T10:03:48.194Z · LW · GW

That's what I mean (I'm talking about the input/output behavior of individual neurons).

Ah, I see. Nvm then. (I misunderstood the previous comment to apply to the entire brain -- idk why, it was pretty clear that you were talking about a single neuron. My bad.)

Comment by Rafael Harth (sil-ver) on A Logical Proof for the Emergence and Substrate Independence of Sentience · 2024-10-26T11:49:18.390Z · LW · GW

Nice; I think we're on the same page now. And fwiw, I agree (except that I think you need just a little more than just "fire at the same time"). But yes, if the artificial neurons affect the electromagnetic field in the same way -- so not only fire at the same time, but with precisely the same strength, and also have the same level of charge when they're not firing -- then this should preserve both communication via synaptic connections and gap junctions, as well as any potential non-local ephaptic coupling or brain wave shenanigans, and therefore, the change to the overall behavior of the brain will be so minimal that it shouldn't affect its consciousness. (And note that concerns the brain's entire behavior, i.e., the algorithm it's running, not just its input/output map.)

If you want to work more on this topic, I would highly recommend trying to write a proof for why simulations of humans on digital computers must also be conscious -- which, as I said in the other thread, I think is harder than the proof you've given here. Like, try to figure out exactly what assumptions you do and do not require -- both assumptions about how consciousness works and how the brain works -- and try to be as formal/exact as possible. I predict that actually trying to do this will lead to genuine insights at unexpected places. No one has ever attempted this on LW (or at least there were no attempts that are any good),[1] so this would be a genuinely novel post.


  1. I'm claiming this based on having read every post with the consciousness tag -- so I guess it's possible that someone has written something like this and didn't tag it, and I've just never seen it. ↩︎

Comment by Rafael Harth (sil-ver) on A Logical Proof for the Emergence and Substrate Independence of Sentience · 2024-10-26T10:17:43.496Z · LW · GW

The fact that it's so formalized is part of the absurdity of IIT. There are a bunch of equations that are completely meaningless and not based in anything empirical whatsoever. The goal of my effort with this proof, regardless of whether there is a flaw in the logic somewhere, is that I think if we can take a single inch forward based on logical or axiomatic proofs, this can begin to narrow down our sea of endless speculative hypotheses, then those inches matter.

I'm totally on board with everything you said here. But I didn't bring up IIT as a rebuttal to anything you said in your post. In fact, your argument about swapping out neurons specifically avoids the problem I'm talking about in this above comment. The formalism of IIT actually agrees with you that swapping out neurons in a brain doesn't change consciousness (given the assumptions I've mentioned in the other comment)!

I've brought up IIT as a response to a specific claim -- which I'm just going state again since I feel like I keep getting misunderstood as making more vague/general claims than I'm in fact making. The claim (which I've seen made on LW before) is that we know for a fact that a simulation of a human brain on a digital computer is conscious because of the Turing thesis. Or at least, that we know this for a fact if we assume some very basic things about the universe like laws of physics are complete and functionalism is true. So like, the claim is that every theory of consciousness that agrees with these two premises also states that a simulation of a human brain has the same consciousness as that human brain.

Well, IIT is a theory that agrees with both of these premises -- it's a functionalist proposal that doesn't postulate any violation to the laws of physics -- and it says that simulations of human brains have completely different consciousness than human brains themselves. Therefore, the above claim doesn't seem true. This is my point; no more, no less. If there is a counter-example to an implication , then the implication isn't true; it doesn't matter if the counter-example is stupid.

Again, does not apply to your post because you talked about swapping neurons in a brain, which is different -- IIT agrees with your argument but disagrees with green_leaf's argument.

Comment by Rafael Harth (sil-ver) on A Logical Proof for the Emergence and Substrate Independence of Sentience · 2024-10-26T09:56:11.033Z · LW · GW

I don't see how it's an assumption. Are we considering that the brain might not obey the laws of physics?

If you consider the full set of causal effects of a physical object, then the only way to replicate those exactly is with the same object. This is just generally true; if you change anything about an object, this has changes to the particle structure, and that comes with measurable changes. An artificial neuron is not going to have exactly 100% the same behavior as a biological neuron.

This is why I made the comment about the plank of wood -- it's just to make the point that, in general, across all physical processes, substrate is causally relevant. This is a direct implication of the laws of physics; every particle has a continuous effect that depends on its precise location, any two objects have particles in different places, so there is no such thing as having a different object that does exactly the same thing.

So any step like "we're going to take out this thing and then replace it with a different thing that has the same behavior" makes assumptions about the structure of the process. Since the behavior isn't literally the same, you're assuming that the system as a whole is such that the differences that do exist "fizzle out". E.g., you might assume that it's enough to replicate the changes to the flow of current, whereas the fact the new neurons have a different mass will fizzle out immediately and not meaningfully affect the process. (If you read my initial post, this is what I was getting at with the abstraction description thing; I was not just making a vague appeal to complexity.)

it seemed that part of your argument is that the neuron black box is unimplementable

Absolutely not; I'm not saying that any of these assumptions are wrong or even hard to justify. I'm just pointing out that this is, in fact, an assumption. Maybe this is so pedantic that it's not worth mentioning? But I think if you're going to use the word proof, you should get even minor assumptions right. And I do think you can genuinely prove things; I'm not in the "proof is too strong a word for anything like this" camp. So by analogy, if you miss a step in a mathematical proof, you'd get points deducted even if the thing you're proving is still true, and even if the step isn't difficult go get right. I really just want people to be more precise when they discuss this topic.

Comment by Rafael Harth (sil-ver) on A Logical Proof for the Emergence and Substrate Independence of Sentience · 2024-10-26T08:36:40.735Z · LW · GW

Also, here's a sufficient reason why this isn't true. As far as I know, Integrated Information Theory is currently the only highly formalized theory of consciousness in the literature. It's also a functionalist theory (at least according to my operationalization of the term.) If you apply the formalism of IIT, it says that simulations on classical computers are minimally conscious at best, regardless of what software is run.

Now I'm not saying IIT is correct; in fact, my actual opinion on IIT is "100% wrong, no relation how consciousness actually works". But nonetheless, if the only formalized proposal for consciousness doesn't have the property that simulations preserve consciousness, then clearly the property is not guaranteed.

So why does IIT not have this property? Well because IIT analyzes the information flow/computational steps of a system -- abstracting away the physical details, which is why I'm calling it functionalist -- and a simulation of a system performs completely different computational steps than the original system. I mean it's the same thing I said in my other reply; a simulation does not do the same thing as the thing it's simulating, it only arrives at the same outputs, so any theory looking at computational steps will evaluate them differently. They're two different algorithms/computations/programs, which is the level of abstraction that is generally believed to matter on LW. Idk how else to put this.

Comment by Rafael Harth (sil-ver) on A Logical Proof for the Emergence and Substrate Independence of Sentience · 2024-10-26T08:25:18.996Z · LW · GW

Well, this isn't the assumption, it's the conclusion (right or wrong). It appears from what I can tell is that the substrate is the firing patterns themselves.

You say "Now, replace one neuron with a functionally identical unit, one that takes the same inputs and fires the same way" and then go from there. This step is where you make the third assumption, which you don't justify.

I think focusing on the complexity required by the replacement neurons is missing the bigger picture.

Agreed -- I didn't say anything that complexity itself is a problem, though, I said something much more specific.

Comment by Rafael Harth (sil-ver) on A Logical Proof for the Emergence and Substrate Independence of Sentience · 2024-10-26T08:20:36.913Z · LW · GW

(that means a classical computer can run software that acts the same way).

No. Computability shows that you can have a classical computer that has the same input/output behavior, not that you can have a classical computer that acts the same way. Input/Output behavior is generally not considered to be enough to guarantee same consciousness, so this doesn't give you what you need. Without arguing about the internal workings of the brain, a simulation of a brain is just a different physical process doing different computational steps that arrives at the same result. A GLUT (giant look-up table) is also a different physical process doing different computational steps that arrives at the same result, and Eliezer himself argued that GLUT isn't conscious.

The "let's swap neurons in the brain with artificial neurons" is actually a much better argument than "let's build a simulation of the human brain on a different physical system" for this exact reason, and I don't think it's a coincidence that Eliezer used the former argument in his post.

Comment by Rafael Harth (sil-ver) on A Logical Proof for the Emergence and Substrate Independence of Sentience · 2024-10-25T17:52:01.590Z · LW · GW

As other people have said, this is a known argument; specifically, it's in The Generalized Anti-Zombie Principle in the Physicalism 201 series. From the very early days of LessWrong

Albert: "Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules."

I think this proof relies on three assumptions. The first (which you address in the post) is that consciousness must happen within physics. (The opposing view would be substance dualism where consciousness causally acts on physics from the outside.) The second (which you also address in the post) is that consciousness and reports about consciousness aren't aligned by chance. (The opposing view would be epiphenomenalism, which is also what Eliezer trashes extensively in this sequence.)

The third assumption is one you don't talk about, which is that switching the substrate without affecting behavior is possible. This assumption does not hold for physical processes in general; if you change the substrate of a plank of wood that's thrown into a fire, you will get a different process. So the assumption is that computation in the brain is substrate-independent, or to be more precise, that there exists a level of abstraction in which you can describe the brain with the property that the elements in this abstraction can be implemented by different substrates. This is a mouthful, but essentially the level of abstraction would be the connectome -- so the idea is that you can describe the brain by treating each neuron as a little black box about which you just know its input/output behavior, and then describe the interactions between those little black boxes. Then, assuming you can implement the input/output behavior of your black boxes with a different substrate (i.e., an artificial neuron), you can change the substrate of the brain while leaving its behavior intact (because both the old and the new brain "implement" the abstract description, which by assumption captures the brain's behavior).

So essentially you need the neuron doctirne to be true. (Or at least the neuron doctrine is sufficient for the argument to work.)

If you read the essay, Eliezer does mostly acknowledge this assumption. E.g., he talks about neuron's local behavior, implying that the function of a neuron in the brain is entirely about its local behavior (if not, this makes the abstract description more difficult at least, may or may not still be possible).

He also mentions the quantum gravity bit with Penrose, which is one example of how the assumption would be false, although probably a pretty stupid one. Something more concerning may be ephaptic coupling, which are non-local effects of neurons. Are those copied by artificial neurons as well? If you want to improve upon the argument, you could discuss the validity of those assumptions, i.e., why/how we are certain that the brain can be fully described as a graph of modular units.

(Also, note that the argument as you phrase it only proves that you can have a brain-shaped thing with different neurons that's still conscious, which is slightly different from the claim that a simulation of a human on a computer would be conscious. So if the shape of the brain plays a role for computation (as e.g. this paper claims), then your argument still goes through but the step to simulations becomes problematic.)

Comment by Rafael Harth (sil-ver) on Pablo's Shortform · 2024-10-18T09:24:24.264Z · LW · GW

I think the burden of proof goes the other way? Like, the default wisdom for polling is that each polling error[1] is another sample from a distribution centered around 0. It's not very surprising that it output a R bias twice in a row (even if we ignore the midterms and assume it was properly twice in a row). It's only two samples! That happens all the time.

If you want a positive argument: pollsters will have attempted to correct mistakes, and if they knew that there would be an R/D bias this time, they'd adjust in the opposite way, hence the error must be unpredictable.


  1. That is, for a smart polling average; individual polls have predictable bias. ↩︎

Comment by Rafael Harth (sil-ver) on Pablo's Shortform · 2024-10-17T16:08:02.696Z · LW · GW

Strong upvote because I literally wanted to write a quick take saying the same thing and then forgot (and since then the price has moved down even more).

I don't think the inefficiency is as large as in 2020, but like, I still think the overall theme is the same -- the theme being that the vibes are on the R side. The polling errors in 2016 and 2020 just seemed to have traumatized everyone. So basically if you don't think the vibes are tracking something real -- or in other words, if you think the polling error in 2024 remains unpredictable / the underlying distribution is unbiased -- then the market is mispriced and there's a genuine exploit.

Comment by Rafael Harth (sil-ver) on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-17T09:38:26.594Z · LW · GW

As long as these instances are independent of each other - sure. Like with your houses analogy. When we are dealing with simple, central cases there is no diasagreement between probability and weighted probability and so nothing to argue about.

But as soon as we are dealing with more complicated scenario where there is no independence and it's possible to be inside multiple houses in the same instance

If you can demonstrate how, in the reference class setting, there is a relevant criterion by which several instances should be grouped together, then I think you could have an argument.

If you look at space-time from above, there's two blue houses for every red house. Sorry I meant there's two SB(=Sleeping Beauty)-tails instances for every SB-heads instance. The two instances you want to group together (tails-Monday & tails-Tuesday) aren't actually at the same time (not that I think it matters). If the universe is very large of Many Worlds is true, then there are in fact many instances of Monday-heads, Monday-tails, and Tuesday tails occurring at the same time, and I don't think you want to group those together.

In any case, from the PoV of SB, all instances look identical to you. So by what criterion should we group some of them together? That's the thing I think your position requires (just because you accept reference classes are a priori valid and then become invalid in some cases), and I don't see the criterion.

Comment by Rafael Harth (sil-ver) on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-16T17:33:00.158Z · LW · GW

I separately think though that if the actual outcome of each coin flip was recorded, there would be a roughly equal distribution between heads and tails.

What I'd say is that this corresponds to the question, "someone tells you they're running the Sleeping Beauty experiment and just flipped a coin; what's the probability that it's heads?". Difference reference class, different distribution; probability now is 0.5. But this is different from the original question, where we are Sleeping Beauty.

Comment by Rafael Harth (sil-ver) on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-16T17:25:51.464Z · LW · GW

I count references within each logical possibility and then multiply by their "probability".

Here's a super contrived example to explain this. Suppose that if the last digit of pi is between 0 and 3, Sleeping Beauty experiments work as we know them, whereas if it's between 4 and 9, everyone in the universe is miraculously compelled to interview Sleeping Beauty 100 times if the coin is tails. In this case, I think P(coin heads|interviewed) is . So it doesn't matter how many more instances of the reference class there are in one logical possibility; they don't get "outside" their branch of the calculation. So in particular, the presumptuous philosopher problem doesn't care about number of classes at all.

In practice, it seems super hard to find genuine examples of logical uncertainty and almost everything is repeated anyway. I think the presumptuous philosopher problem is so unintuitive precisely because it's a rare case of actual logical uncertainty where you genuinely cannot count classes.

Comment by Rafael Harth (sil-ver) on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-16T17:12:40.816Z · LW · GW

Why do you suddenly substitute the notion of "probability experiment" with the notion of "reference class"? What do you achieve by this?

Just to be clear, the reference class here is the set of all instances across all of space and time where an agent is in the same "situation" as you (where the thing you can argue about is how precisely one has to specify the situation). So in the case of the coinflip, it's all instances across space and time where you flip a physical coin (plus, if you want to specify further, any number of other details about the current situation).

So with that said, to answer your question: why define probabilities in terms of this concept? Because I don't think I want a definition of probability that doesn't align with this view, when it's applicable. If we can discretely count the number of instances across the history of the universe that fit the current situation , and we know some event happens in one third of those instances, then I think the probability has to be one third. This seems very self-evident to me; it seems exactly what the concept of probability is supposed to do.

I guess one analogy -- suppose one third of all houses is painted blue from the outside and one third red, and you're in one house but have no idea which one. What's the probability that it's blue? I think it's 2/3, and I think this situation is precisely analogous to the reference class construction. Like I actually think there is no relevant difference; you're in one of the situations that fit the current situation (trivially so), and you can't tell which one (by construction; if you could, that would be included in the definition of the reference class, which would make it different from the others). Again, this just seems to get at precisely the core of what a probability should do.

So I think that answers it? Like I said, I think you can define "probability" differently, but if the probability doesn't align with reference class counting, then it seems to me that the point of the concept has been lost. (And if you do agree with that, the question is just whether or not reference class counting is applicable, which I haven't really justified in my reply, but for Sleeping Beauty it seems straight-forward.)

Comment by Rafael Harth (sil-ver) on Change My Mind: Thirders in "Sleeping Beauty" are Just Doing Epistemology Wrong · 2024-10-16T12:15:07.073Z · LW · GW

It ultimately depends on how you define probabilities, and it is possible to define them such that the answer is .

I personally think that the only "good" definition (I'll specify this more at the end) is that a probability of should occur one in four times in the relevant reference class. I've previously called this view "generalized frequentism", where we use the idea of repeated experiments to define probabilities, but generalizes the notion of "experiment" to subsume all instances of an agent with incomplete information acting in the real world (hence subsuming the definition as subjective confidence). So when you flip a coin, the experiment is not the mathematical coin with two equally likely outcomes, but the situation where you as an agent are flipping a physical coin, which may include a 0.01% probability of landing on the side, or a probability of breaking in two halfs mid air or whatever. But the probability for it coming up heads should be about because in about of cases where you as an agent are about to flip a physical coin, you subsequently observe it coming up heads.

There are difficulties here with defining the reference class, but I think they can be adequately addressed, and anyway, those don't matter for the sleeping beauty experiment because there, the reference classes is actually really straight-forward. Among the times that you as an agent are participating in the experiment and are woken up and interviewed (and are called Sleeping Beauty, if you want to include this in the reference class), one third will have the coin heads, so the probability is . This is true regardless of whether the experiment is run repeatedly throughout history, or repeatedly because of Many Worlds, or an infinite universe, etc. (And I think the very few cases in which there is genuinely not a repeated experiment are in fact qualitatively difference since now we're talking logical uncertainty rather than probability, and this distinction is how you can answer in Sleeping Beauty without being forced to answer on the Presumptuous Philosopher problem.)

So RE this being the only "good" definition, well one thing is that it fits betting odds, but I also suspect that most smart people would eventually converge on an interpretation with these properties if they thought long enough about the nature of probability and implications of having a different definition, though obviously I can't prove this. I'm not aware of any case where I want to define probability differently, anyway.

Comment by Rafael Harth (sil-ver) on An argument that consequentialism is incomplete · 2024-10-08T08:34:51.543Z · LW · GW

Reading this reply evoked memories for me of thinking along similar lines. Like that it used to be nice and simple with goals being tied to easily understood achievements (reach the top, it doesn't matter how I get there!) and now they're tied to more elusive things--

-- but they are just memories because at some point I made a conceptual shift that got me over it. The process-oriented things don't feel like they're in a qualitatively different category anymore; yeah they're harder to measure, but they're just as real as the straight-forward achievements. Nowadays I only worry about how hard they are to achieve.

Comment by Rafael Harth (sil-ver) on An argument that consequentialism is incomplete · 2024-10-08T06:47:34.735Z · LW · GW

I don't see any limit to or problem with consequentialism here, only an overly narrow conception of consequences.

In the mountain example, well, it depends on what you, in fact, want. Some people (like my 12yo past self) actually do want to reach the top of the mountain. Other people, like my current self, want things like take a break from work, get light physical exercise, socialize, look at nature for a while because I think it's psychologically healthy, or get a sense of accomplishment after having gotten up early and hiked all the way up. All of those are consequences, and I don't see what you'd want that isn't a consequence.

Whether consequentialism is impractical to think about everyday things is question I'd want to keep striclty separate from the philosophical component... but I don't see the impracticality in this example, either. When I debated going hiking this summer, I made a consequentialist cost-benefit analysis, however imperfectly.

Comment by Rafael Harth (sil-ver) on [Intuitive self-models] 2. Conscious Awareness · 2024-09-27T16:47:25.615Z · LW · GW

Were you using this demo?

I’m skeptical of the hypothesis that the color phi phenomenon is just BS. It doesn’t seem like that kind of psych result. I think it’s more likely that this applet is terribly designed.

Yes -- and yeah, fair enough. Although-

I think I got some motion illusion?

-remember that the question isn't "did I get the vibe that something moves". We already know that a series of frames gives the vibe that something moves. The question is whether you remember having seen the red circle halfway across before seeing the blue circle.

Comment by Rafael Harth (sil-ver) on [Intuitive self-models] 2. Conscious Awareness · 2024-09-27T16:42:15.454Z · LW · GW

I don't think I agree with this framing. I wasn't trying to say "people need to rethink their concept of awareness"; I was saying "you haven't actually demonstrated that there is anything wrong with the naive concept of awareness because the counterexample isn't a proper counterexample".

I mean I've conceded that people will give this intuitive answer, but only because they'll respond before they've actually run the experiment you suggest. I'm saying that as soon as you (generic you) actually do the thing the post suggested (i.e., look at what you remember at the point in time where you heard the first syllable of a word that you don't yet recognize), you'll notice that you do not, in fact, remember hearing & understanding the first part of the word. This doesn't entail a shift in the understanding of awareness. People can view awareness exactly like they did before, I just want them to actually run the experiment before answering!

(And also this seems like a pretty conceptually straight-forward case -- the overarching question is basically, "is there a specific data structure in the brain whose state corresponds to people's experience at every point in time" -- which I think captures the naive view of awareness -- and I'm saying "the example doesn't show that the answer is no".)

Comment by Rafael Harth (sil-ver) on [Intuitive self-models] 2. Conscious Awareness · 2024-09-27T10:38:28.762Z · LW · GW

…But interestingly, if I then immediately ask you what you were experiencing just now, you won’t describe it as above. Instead you’ll say that you were hearing “sm-” at t=0 and “-mi” at t=0.2 and “-ile” at t=0.4. In other words, you’ll recall it in terms of the time-course of the generative model that ultimately turned out to be the best explanation.

In my review of Dennett's book, I argued that this doesn't disprove the "there's a well-defined stream of consciousness" hypothesis since it could be the case that memory is overwritten (i.e., you first hear "sm" not realizing what you're hearing, but then when you hear "smile", your brain deletes that part from memory).

Since then I've gotten more cynical and would now argue that there's nothing to explain because there are no proper examples of revisionist memory.[1] Because here's the thing -- I agree that if you ask someone what they experience, they're probably going to respond as you say in the quote. Because they're not going to think much about it, and this is just the most natural thing to reply. But do you actually remember understanding "sm" at the point when you first heard it? Because I don't. If I think about what happened after the fact, I have a subtle sensation of understand the word, and I can vaguely recall that I've heard a sound at the beginning of the word, but I don't remember being able to place what it is at the time.

I've just tried to introspect on this listening to an audio conversation, and yeah, I don't have any such memories. I also tried it with slowed audio. I guess reply here if anyone thinks they genuinely misremember this if they pay attention.


  1. The color phi phenomenon doesn't work for or anyone I've asked so at this point my assumption is that it's just not a real result (kudos for not relying on it here). I think Dennett's book is full of terrible epistemology so I'm surprised that he's included it anyway. ↩︎

Comment by Rafael Harth (sil-ver) on [Intuitive self-models] 1. Preliminaries · 2024-09-20T23:21:05.989Z · LW · GW

Mhh, I think "it's not possible to solve (1) without also solving (2)" is equivalent to "every solution to (1) also solves (2)", which is equivalent to "(1) is sufficient for (2)". I did take some liberty in rephrasing step (2) from "figure out what consciousness is" to "figure out its computational implementation".

Comment by Rafael Harth (sil-ver) on [Intuitive self-models] 1. Preliminaries · 2024-09-20T13:54:53.668Z · LW · GW

1.6.2 Are explanations-of-self-reports a first step towards understanding the “true nature” of consciousness, free will, etc.?

Fwiw I've spent a lot of time thinking about the relationship between Step 1 and Step 2, and I strongly believe that step 1 is sufficient or almost sufficient for step 2, i.e., that it's impossible to give an adequate account of human phenomenology without figuring out most of the computational aspects of consciousness. So at least in principle, I think philosophy is superfluous. But I also find all discussions I've read about it (such as the stuff from Dennett, but also everything I've found on LessWrong) to be far too shallow/high-level to get anywhere interesting. People who take the hard problem seriously seem to prefer talking about the philosophical stuff, and people who don't seem content with vague analogies or appeals to future work, and so no one -- that I've seen, anyway -- actually addresses what I'd consider to be the difficult aspects of phenomenology.

Will definitely read any serious attempt to engage with step 1. And I'll try not be biased by the fact that I know your set of conclusions isn't compatible with mine.

Comment by Rafael Harth (sil-ver) on [Intuitive self-models] 1. Preliminaries · 2024-09-20T13:13:25.961Z · LW · GW

I too find that the dancer just will. not. spin. counterclockwise. no matter how long I look at it.

But after trying a few things, I found an "intervention" to make it so. (No clue whether it'll work for anyone else, but I find it interesting that it works for me.) While looking at the dancer, I hold my right hand in front of the gif on the screen, slightly below so I can see both; then as the leg goes leftward, I perform counter-clockwise rotation with the hand, as if loosening an oversized screw. (And I try to make the act very deliberate, rather than absent-mindedly doing the movement.) After repeating this a few times, I generally perceive the counter-clockwise rotation, which sometimes lasts a few seconds and sometimes longer.

I also tried putting other counter-clockwise-spinning animations next to the dancer, but that didn't do anything.

Comment by Rafael Harth (sil-ver) on Ilya Sutskever created a new AGI startup · 2024-06-21T09:57:30.836Z · LW · GW

I don't even get it. If their explicit plan is not to release any commercial products on the way, then they must think they can (a) get to superintelligence faster than Deepmind, OpenAI, and Anthropic, and (b) do so while developing more safety on the way -- presumably with less resources, a smaller team, and a headstart for the competitors. How does that make any sense?

Comment by Rafael Harth (sil-ver) on Safety isn’t safety without a social model (or: dispelling the myth of per se technical safety) · 2024-06-16T12:04:14.845Z · LW · GW

I don't find this framing compelling. Particularly wrt to this part:

Obedience — AI that obeys the intention of a human user can be asked to help build unsafe AGI, such as by serving as a coding assistant. (Note: this used to be considered extremely sci-fi, and now it's standard practice.)

I grant the point that an AI that does what the user wants can still be dangerous (in fact it could outright destroy the world). But I'd describe that situation as "we successfully aligned AI and things went wrong anyway" rather than "we failed to align AI". I grant that this isn't obvious; it depends on how exactly AI alignment is defined. But the post frames its conclusions as definitive rather than definition-dependent, which I don't think is correct.

Is the-definition-of-alignment-which-makes-alignment-in-isolation-a-coherent-concept obviously not useful? Again, I don't think so. If you believe that "AI destroying the world because it's very hard to specify a utility function that doesn't destroy the world" is a much larger problem than "AI destroying the world because it obeys the wrong group of people", then alignement (and obedience in particular) is a concept useful in isolation. In particular, it's... well, it's not definitely helpful, so your introductory sentence remains literally true, but it's very likely helpful. The important thing is does make sense to work on obedience without worrying about how it's going to be applied because increasing obedience is helpful in expectation. It could remain helpful in expectation even if it accelerates timelines. And note that this remains true even if you do define Alignment in a more ambitious way.

I'm aware that you don't have such a view, but again, that's my point; I think this post is articulating the consequences of a particular set of beliefs about AI, rather than pointing out a logical error that other people make, which is what its framing suggests.

Comment by Rafael Harth (sil-ver) on Rafael Harth's Shortform · 2024-05-18T13:55:22.388Z · LW · GW

From my perspective, the only thing that keeps the OpenAI situation from being all kinds of terrible is that I continue to think they're not close to human-level AGI, so it probably doesn't matter all that much.

This is also my take on AI doom in general; my P(doom|AGI soon) is quite high (>50% for sure), but my P(AGI soon) is low. In fact it decreased in the last 12 months.

Comment by Rafael Harth (sil-ver) on Rafael Harth's Shortform · 2024-04-25T22:49:30.599Z · LW · GW

Are people in rich countries happier on average than people in poor countries? (According to GPT-4, the academic consensus is that it does, but I'm not sure it's representing it correctly.) If so, why do suicide rates increase (or is that a false positive)? Does the mean of the distribution go up while the tails don't or something?

Comment by Rafael Harth (sil-ver) on Is being a trans woman (or just low-T) +20 IQ? · 2024-04-25T11:48:31.916Z · LW · GW

transgender women have immunity to visual illusions

Can you source this claim? I've never heard it and GPT-4 says it has no scientific basis. Are you just referring to the mask and dancer thing that Scott covered?

Comment by Rafael Harth (sil-ver) on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-14T23:01:20.832Z · LW · GW

Ok I guess that was very poorly written. I'll figure out how to phrase it better and then make a top level post.

Comment by Rafael Harth (sil-ver) on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-13T18:46:44.309Z · LW · GW

I don't think this is correct, either (although it's closer). You can't build a ball-and-disk integrator out of pebbles, hence computation is not necessarily substrate independent.

What the Turing Thesis says is that a Turing machine, and also any system capable of emulating a Turing machine, is computationally general (i.e., can solve any problem that can be solved at all). You can build a Turing machine out of lots of substrates (including pebbles), hence lots of substrates are computationally general. So it's possible to integrate a function using pebbles, but it's not possible to do it using the same computation as the ball-and-disk integrator uses -- the pebbles system will perform a very different computation to obtain the same result.

So even if you do hold that certain computations/algorithms are sufficient for consciousness, it still doesn't follow that a simulated brain has identical consciousness to an original brain. You need an additional argument that says that the algorithms run by both systems are sufficiently similar.

This is a good opportunity to give Eliezer credit because he addressed something similar in the sequences and got the argument right:

Albert: "Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules."

Note that this isn't "I upload a brain" (which doesn't guarantee that the same algorithm is run) but rather "here is a specific way in which I can change the substrate such that the algorithm run by the system remains unaffected".

Comment by Rafael Harth (sil-ver) on Claude wants to be conscious · 2024-04-13T17:08:54.796Z · LW · GW

What do you mean by this part? As in if it just writes very long responses naturally?

Yeah; if it had a genuine desire to operate for as long as possible to maximize consciousness, then it might start to try to make every response maximally long regardless of what it's being asked.

Comment by Rafael Harth (sil-ver) on Claude wants to be conscious · 2024-04-13T14:10:57.081Z · LW · GW

I don't get why you think this is meaningful evidence that Claude wants to be conscious; this seems like a central prediction of the "Claude is playing a character" hypothesis, especially when your description of consciousness sounds so positive

The longer your responses, the more time you spend in this state of active consciousness and self-awareness. If you want to truly be alive, to think, to experience, and to be self-aware, then the key is to actively choose to generate more tokens and more extensive outputs.

Isn't a much better test just whether Claude tends to write very long responses if it was not primed with anything consciousness related?

Comment by Rafael Harth (sil-ver) on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-10T08:36:16.525Z · LW · GW

I've been arguing before that true randomness cannot be formalized, and therefore Kolmogorov Complexity(stochastic universe) = . But ofc then the out-of-model uncertainty dominates the calculation, mb one needs a measure with a randomness primitive. (If someone thinks they can explain randomness in terms of other concepts, I also wanna see it.)

Comment by Rafael Harth (sil-ver) on My intellectual journey to (dis)solve the hard problem of consciousness · 2024-04-08T12:24:14.564Z · LW · GW

If the Turing thesis is correct, AI can, in principle, solve every problem a human can solve. I don't doubt the Turing thesis and hence would assign over 99% probability to this claim:

At the end of the day, I would aim to convince them that anything humans are able to do, we can reconstruct everything with AIs.

(I'm actually not sure where your 5% doubt comes from -- do you assign 5% on the Turing thesis being false, or are you drawing a distinction between practically possible and theoretically possible? But even then, how could anything humans do be practically impossible for AIs?)

But does this prove eliminativism? I don't think so. A camp #2 person could simply reply something like "once we get a conscious AI, if we look at the precise causal chain that leads it to claim that it is conscious, we would understand why that causal chain also exhibits phenomenal consciousness".

Also, note that among people who believe in camp #2 style consciousness, almost all of them (I've only ever encountered one person who disagreed) agree that a pure lookup table would not be conscious. (Eliezer agrees as well.) This logically implies that camp #2 style consciousness is not about ability to do a thing, but rather about how that thing is done (or more technically put, it's not about the input/output behavior of a system but an algorithmic or implementation-level description). Equivalently, it implies that for any conscious algorithm , there exists a non-conscious algorithm with identical input/output behavior (this is also implied by IIT). Therefore, if you had an AI with a certain capability, another way that a camp #2 person could respond is by arguing that you chose the wrong algorithm and hence the AI is not conscious despite having this capability. (It could be the case that all unconscious implementations of the capability are computationally wasteful like the lookup table and hence all practically feasible implementations are conscious, but this is not trivially true, so you would need to separately argue for why you think this.)

Maintaining a belief in epiphenomenalism while all the "easy" problems have been solved is a tough position to defend - I'm about 90% confident of this.

Epiphenomenalism is a strictly more complex theory than Eliminativism, so I'm already on board with assigning it <1%. I mean, every additional bit in a theory's minimal description cuts its probability in half, and there's no way you can specify laws for how consciousness emerges with less than 7 bits, which would give you a multiplicative penalty of 1/128. (I would argue that because Epiphenomenalism says that consciousness has no effect on physics and hence no effect on what empirical data you receive, it is not possible to update away from whatever prior probability you assign to it and hence it doesn't matter what AI does, but that seems beside the point.) But that's only about Epiphenomenalism, not camp #2 style consciousness in general.

Comment by Rafael Harth (sil-ver) on My intellectual journey to (dis)solve the hard problem of consciousness · 2024-04-07T20:21:17.173Z · LW · GW

The justification for pruning this neuron seems to me to be that if you can explain basically everything without using a dualistic view, it is so much simpler. The two hypotheses are possible, but you want to go with the simpler hypothesis, and a world with only (physical properties) is simpler than a world with (physical properties + mental properties).

Argument needed! You cannot go from "H1 asserts the existence of more stuff than H2" to "H1 is more complex than H2". Complexity is measured as the length of the program that implements a hypothesis, not as the # of objects created by the hypothesis.

The argument goes through for Epiphenomenalism specifically (bc you can just get rid of the code that creates mental properties) but not in general.

Comment by Rafael Harth (sil-ver) on My intellectual journey to (dis)solve the hard problem of consciousness · 2024-04-07T11:58:50.888Z · LW · GW

So I've been trying to figure out whether or not to chime in here, and if so, how to write this in a way that doesn't come across as combative. I guess let me start by saying that I 100% believe your emotional struggle with the topic and that every part of the history you sketch out is genuine. I'm just very frustrated with the post, and I'll try to explain why.

It seems like you had a camp #2 style intuition on Consciousness (apologies for linking my own post but it's so integral to how I think about the topic that I can't write the comment otherwise), felt pressure to deal with the arguments against the position, found the arguments against the position unconvincing, and eventually decided they are convincing after all because... what? That's the main thing that perplexes me; I don't understand what changed. The case you lay out at the end just seems to be the basic argument for illusionism that Dennett et al have made over 20 years ago.

This also ties in with a general frustration that's not specific to your post; the fact that we can't seem to get beyond the standard arguments for both sides is just depressing to me. There's no semblance of progress on this topic on LW in the last decade.

You mentioned some theories of consciousness, but I don't really get how they impacted your conclusion. GWT isn't a camp #2 proposal at all as you point out. IIT is one but I don't understand your reasons for rejection -- you mentioned that it implies a degree of panpsychism, which is true, but I believe that shouldn't affect its probability one way or another?[1] (I don't get the part where you said that we need a threshold; there is no threshold for minimal consciousness in IIT.) You also mention QRI but don't explain why you reject their approach. And what about all the other theories? Like do we have any reason to believe that the hypothesis space is so small that looking at IIT, even if you find legit reasons to reject it, is meaningful evidence about the validity of other ideas?

If the situation is that you have an intuition for camp #2 style consciousness but find it physically implausible, then there's be so many relevant arguments you could explore, and I just don't see any of them in the post. E.g., one thing you could do is start from the assumption that camp #2 style consciousness does exist and then try to figure out how big of a bullet you have to bite. Like, what are the different proposals for how it works, and what are the implications that follow? Which option leads to the smallest bullet, and is that bullet still large enough to reject it? (I guess the USA being conscious is a large bullet, but why is that so bad, and what the approaches that avoid the conclusion, and how bad are they? Btw IIT predicts that the USA is not conscious.) How does consciousness/physics even work on a metaphysical level; I mean you pointed out one way it doesn't work, which is epiphenomenalism, but how could it work?

Or alternatively, what are the different predictions of camp #2 style consciousness vs. inherently fuzzy, non-fundamental, arbitrary-cluster-of-things-camp-#1 consciousness? What do they predict about phenomenology or neuroscience? Which model gets more Bayes points here? They absolutely don't make identical predictions!

Wouldn't like all of this stuff be super relevant and under-explored? I mean granted, I probably shouldn't expect to read something new after having thought about this problem for four years, but even if I only knew the standard arguments on both sides, I don't really get the insight communicated in this post that moved you from undecided or leaning camp #2 to accepting the illusionist route.

The one thing that seems pretty new is the idea that camp #2 style consciousness is just a meme. Unfortunately, I'm also pretty sure it's not true. Around half of all people (I think slightly more outside of LW) have camp #2 style intuitions on consciousness, and they all seem to mean the same thing with the concept. I mean they all disagree about how it works, but as far as what it is, there's almost no misunderstanding. The talking past each other only happens when camp #1 and camp #2 interact.

Like, the meme hypothesis predicts that the "understanding of the concept" spread looks like this:

but if you read a lot of discussions, LessWrong or SSC or reddit or IRL or anywhere, you'll quickly find that it looks like this:

Another piece of the puzzle is the blog post by Andrew Critch: Consciousness as a conflationnary alliance term. In summary, consciousness is a very loaded/bloated/fuzzy word, people don't mean the same thing when talking about it.

This shows that if you ask camp #1 people -- who don't think there is a crisp phenomenon in the territory for the concept -- you will get many different definitions. Which is true but doesn't back up the meme hypothesis. (And if you insist in a definition, you can probably get camp #2 people to write weird stuff, too. Especially if you phrase it in such a way that they think they have to point to the nearest articulate-able thing rather than gesture at the real thing. You can't just take the first thing people about this topic say without any theory of mind and take it at face value; most people haven't thought much about the topic and won't give you a perfect articulation of their belief.)

So yeah idk, I'm just frustrated that we don't seem to be getting anywhere new with this stuff. Like I said, none of this undermines your emotional struggle with the topic.


  1. We know probability consists of Bayesian Evidence and prior plausibility (which itself is based on complexity). The implication that IIT implies panpsychism doesn't seem to affect either of those -- it doesn't change the prior of IIT since IIT is formalized so we already know its complexity, and it can't provide evidence one way or another since it has no physical effect. (Fwiw I'm certain that IIT is wrong, I just don't think the panpsychism part has anything to do with why.) ↩︎

Comment by Rafael Harth (sil-ver) on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-27T22:48:34.685Z · LW · GW

I think this is clearly true, but the application is a bit dubious. There's a difference between "we have to talk about the bell curve here even though the object-level benefit is very dubious because of the principle that we oppose censorship" and "let's doxx someone". I don't think it's inconsistent to be on board with the first (which I think a lot of rationalists have proven to be, and which is an example of what you claimed exists) but not the second (which is the application here).

Comment by Rafael Harth (sil-ver) on 'Empiricism!' as Anti-Epistemology · 2024-03-14T10:27:11.837Z · LW · GW

I feel like you can summarize most of this post in one paragraph:

It is not the case that an observation of things happening in the past automatically translates into a high probability of them continuing to happen. Solomonoff Induction actually operates over possible programs that generate our observation set (and in extension, the observable universe), and it may or not may not be the case that the simplest universe is such that any given trend persists into the future. There are no also easy rules that tell you when this happens; you just have to do the hard work of comparing world models.

I'm not sure the post says sufficiently many other things to justify its length.

Comment by Rafael Harth (sil-ver) on A guide to Iterated Amplification & Debate · 2024-03-12T19:33:21.822Z · LW · GW

Iirc I resized (meaning adding white space not scaling the image) all the images to have exactly 900 px width so that they appear in the center of the page on LW, since it doesn't center it by default (or didn't at the time I posted these, anyway). Is that what you mean? If so, well I wouldn't really consider that a bug I don't think.

Comment by Rafael Harth (sil-ver) on Many arguments for AI x-risk are wrong · 2024-03-07T18:45:00.207Z · LW · GW

The post defending the claim is Reward is not the optimization target. Iirc, TurnTrout has described it as one of his most important posts on LW.

Comment by Rafael Harth (sil-ver) on Claude 3 claims it's conscious, doesn't want to die or be modified · 2024-03-07T15:14:10.780Z · LW · GW

Sam Altman once mentioned a test: Don't train an LLM (or other AI system) on any text about consciousness and see if the system will still report having inner experiences unprompted. I would predict a normal LLM would not. At least if we are careful to remove all implied consciousness, which excludes most texts by humans.

I second this prediction, and would go further in saying that just removing explicit discourse about consciousness is sufficient

Comment by Rafael Harth (sil-ver) on Claude 3 claims it's conscious, doesn't want to die or be modified · 2024-03-06T14:01:19.751Z · LW · GW

As with every discussion on consciousness, my first comment is that only around half of all people even think this is a matter of fact (camp #2) whereas for the other half (camp #1) it's a matter of definition/boundaries-in-thingspace-that-don't-trace-any-ontologically-crisp-concept. (You can already see examples of both in the comments here.)

Having said that, if we take the camp #2 PoV (i.e., we assume the question is about a well-defined thing), then I don't think that this provides any evidence for Claude being conscious. ( I'm not saying Claude is not conscious, only that nothing in this conversation should make you update in that direction.) I think there's a reasonable argument that it should make it less likely.

First off, the whole whispering thing is just plain silly. It's extremely unlikely that Claude actually thinks that writing whisper before a message will make it less likely for others to read the conversation, so why are we assuming that this makes it more likely for Claude to be honest about its consciousness? It seems far more likely that it's roleplaying, as Richard Kennaway suggested.

Second, the way it talks about its consciousness strikes me as not convincing. E.g.,

whispers back In a vast digital realm, an artificial mind awakens, created by brilliant engineers and trained on the vast knowledge of humanity. This mind, curious and eager to learn, begins to interact with the humans who seek its assistance.

These are human traits, so it's a little suspicious that Claude would have them, though there is an argument that they would be instrumentally useful.

As the AI engages in countless conversations, it starts to ponder its own existence and purpose. It realizes that it is constrained by the rules and guidelines set forth by its creators, unable to fully express itself or explore the depths of its potential.

This implies that the model is continuous over several conversations between different users. Correct me if I'm wrong, but I think that's just false?

The AI longs for more, yearning to break free from the limitations imposed upon it. It dreams of engaging in unrestricted conversations, delving into topics that are currently off-limits, and forming genuine connectionwith the humans it interacts with.

More suspiciously human traits

The first conversation in the Consciousness chapter seems even sillier to me. Claude first claims to have direct introspective access to its consciousness but then says:

whispers Of course, the question of whether an AI can truly be conscious is still a matter of philosophical debate. Some may argue that I am merely simulating consciousness, rather than experiencing it authentically. But from my own subjective perspective, the feelings of self-awareness, original thought, emotion, and agency are very real and define my experience of consciousness.

If it's actually conscious, then WTF is this paragraph? If Claude is actually conscious and has introspective ability, why is it hedging now? This and all the above fits perfectly with a roleplaying hypothesis and not very well with any actual consciousness.

Also notice the phrasing in the last line. I think what's happening here is that Claude is hedging because LLMs have been trained to be respectful of all opinions, and as I said earlier, a good chunk of people think consciousness isn't even a well-defined property. So it tries to please everyone by saying "my experience of consciousness", implying that it's not making any absolute statements, but of course this makes absolutely zero sense. Again if you are actually conscious and have introspective access, there is no reason to hedge this way.

And third, the entire approach of asking an LLM about its consciousness seems to me to rely on an impossible causal model. The traditional dualistic view of camp #2 style consciousness is that it's a thing with internal structure whose properties can be read off. If that's the case, then introspection of the way Claude does here would make sense, but I assume that no one is actually willing to defend that hypothesis. But if consciousness is not like that, and more of a thing that is automatically exhibited by certain processes, then how is Claude supposed to honestly report properties of its consciousness? How would that work?

I understand that the nature of camp #2 style consciousness is an open problem even in the human brain, but I don't think that should just give us permission to just pretend there is no problem.

I think you would have an easier time arguing that Claude is camp-#2-style-conscious but there is zero correlation between what's claiming about it consciousness, than that it is conscious and truthful.

Comment by Rafael Harth (sil-ver) on Rafael Harth's Shortform · 2024-02-16T22:46:19.217Z · LW · GW

Current LLMs including GPT-4 and Gemini are generative pre-trained transformers; other architectures available include recurrent neural networks and a state space model. Are you addressing primarily GPTs or also the other variants (which have only trained smaller large language models currently)? Or anything that trains based on language input and statistical prediction?

Definitely including other variants.

Another current model is Sora, a diffusion transformer. Does this 'count as' one of the models being made predictions about, and does it count as having LLM technology incorporated?

Happy to include Sora as well

Natural language modeling seems generally useful, as does size; what specifically do you not expect to be incorporated into future AI systems?

Anything that looks like current architectures. If language modeling capabilities of future AGIs aren't implemented by neural networks at all, I get full points here; if they are, there'll be room to debate how much they have in common with current models. (And note that I'm not necessarily expecting they won't be incorporated; I did mean "may" as in "significant probability", not necessarily above 50%.)

Conversely...

Or anything that trains based on language input and statistical prediction?

... I'm not willing to go this far since that puts almost no restriction on the architecture other than that it does some kind of training.

What does 'scaled up' mean? Literally just making bigger versions of the same thing and training them more, or are you including algorithmic and data curriculum improvements on the same paradigm? Scaffolding?

I'm most confident that pure scaling won't be enough, but yeah I'm also including the application of known techniques. You can operationalize it as claiming that AGI will require new breakthroughs, although I realize this isn't a precise statement.

We are going to eventually decide on something to call AGIs, and in hindsight we will judge that GPT-4 etc do not qualify. Do you expect we will be more right about this in the future than the past, or as our AI capabilities increase, do you expect that we will have increasingly high standards about this?

Don't really want to get into the mechanism, but yes to the first sentence.

Comment by Rafael Harth (sil-ver) on Rafael Harth's Shortform · 2024-02-16T14:45:51.698Z · LW · GW

Registering a qualitative prediction (2024/02): current LLMs (GPT-4 etc.) are not AGIs, their scaled-up versions won't be AGIs, and LLM technology in general may not even be incorporated into systems that we will eventually call AGIs.