Posts

Limits to Learning: Rethinking AGI’s Path to Dominance 2023-06-02T16:43:25.635Z

Comments

Comment by tangerine on Daniel Dennett has died (1942-2024) · 2024-04-20T08:35:13.076Z · LW · GW

My introduction to Dennett, half a lifetime ago, was this talk: 

That was the start of his profound influence on my thinking. I especially appreciated his continuous and unapologetic defense of the meme as a useful concept, despite the many detractors of memetics.

Sad to know that we won't be hearing from him anymore.

Comment by tangerine on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-12T17:01:01.824Z · LW · GW

Yes. My bad, I shouldn’t have implied all hidden-variables interpretations.

Comment by tangerine on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-11T16:08:13.292Z · LW · GW

Every non-deterministic interpretation has a virtually infinite Kolmogorov complexity because it has to hardcode the outcome of each random event.

Hidden-variables interpretations are uncomputable because they are incomplete.
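A toy sketch of the description-length point (my own illustration, not from the thread): compare the length of a program that generates outcomes from a short deterministic rule with one that must literally hardcode each random outcome. The specific rule (a linear congruential generator) and the sizes are arbitrary choices for demonstration.

```python
import os

def rule_program(n):
    # A short program that deterministically generates n outcomes;
    # its length is essentially constant no matter how many events it covers.
    return (
        "x = 1\n"
        f"for _ in range({n}):\n"
        "    x = (1103515245 * x + 12345) % 2**31\n"
        "    print(x % 2)\n"
    )

def hardcoded_program(outcomes):
    # A program that must store every random outcome verbatim;
    # its length grows linearly with the number of events.
    return f"for bit in {list(outcomes)!r}:\n    print(bit)\n"

# Genuinely random outcomes (no generating rule available).
outcomes = [b % 2 for b in os.urandom(10_000)]

print(len(rule_program(10_000)))        # small, independent of n
print(len(hardcoded_program(outcomes))) # grows with the number of events
```

Under this (rough) analogy, a deterministic interpretation is a `rule_program`: fixed description length regardless of how many events occur. An interpretation with irreducibly random outcomes is a `hardcoded_program`: its minimal description grows without bound as events accumulate.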

Comment by tangerine on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-09T21:42:38.030Z · LW · GW

It’s the simplest explanation (in terms of Kolmogorov complexity).

It’s also the interpretation which by far has the most elegant explanation for the apparent randomness of reality. Most interpretations provide no mechanism for the selection of a specific outcome, which is absurd. Under the MWI, randomness emerges from determinism through indexical uncertainty, i.e., not knowing which branch you’re in. Some people, such as Sabine Hossenfelder, get confused by this and ask, “then why am I this version of me?”, which implicitly assumes dualism, as if there is a free-floating consciousness which could in principle inhabit any branch; this is patently untrue because you are by definition this “version” of you. If you were someone else (including someone in a different branch where one of your atoms is moved by one Planck length) then you wouldn’t be you; you would be literally someone else.

Note that the Copenhagen interpretation is also a many-worlds explanation, but with the added assumption that all but one randomly chosen world disappears when an “observation” is made, i.e., when entanglement with your branch takes place.

Comment by tangerine on Any evidence or reason to expect a multiverse / Everett branches? · 2024-04-09T21:28:52.410Z · LW · GW

It’s just a matter of definition. We say that “you” and “I” are the things that are entangled with a specific observed state. Different versions of you are entangled with different observations. Nothing is stopping you from defining a new kind of person which is a superposition of different entanglements. The reason it doesn’t “look” that way from your perspective is because of entanglement and the law of the excluded middle. What would you expect to see if you were a superposition?

Comment by tangerine on my theory of the industrial revolution · 2024-02-29T17:02:01.596Z · LW · GW

Have you read Joseph Henrich’s books The Secret of Our Success and its sequel The WEIRDest People in the World? If not, they provide a pretty comprehensive view of how humanity innovates, particularly in the Western world, which is roughly in line with what you wrote here.

Comment by tangerine on Why you, personally, should want a larger human population · 2024-02-24T19:16:35.101Z · LW · GW

I kind of agree that most knowledge is useless, but the utility of knowledge and experience that people accrue is probably distributed like a bell curve, which means you can't just have more of the good knowledge without also accruing lots of useless knowledge. In addition, very often stuff that seems totally useless turns out to be very useful; you can't always tell which is which.

Comment by tangerine on Why you, personally, should want a larger human population · 2024-02-24T09:30:49.353Z · LW · GW

I completely agree. In Joseph Henrich’s book The Secret of Our Success, he shows that the amount of knowledge possessed by a society is proportional to the number of people in that society. Dwindling population leads to dwindling technology and dwindling quality of life.

Those who advocate for population decline are unwittingly advocating for the disappearance of the knowledge, experience and frankly wisdom that is required to keep the comfortable life that they take for granted going.

Keeping all that knowledge in books is not enough. Otherwise our long years in education would be unnecessary. Knowing how to apply knowledge is its own form of knowledge.

Comment by tangerine on Causality is Everywhere · 2024-02-13T23:09:03.794Z · LW · GW

If causality is everywhere, it is nowhere; declaring “causality is involved” will have no meaning. This raises the question of whether an ontology containing the concept of causality is the best one to wield for what you’re trying to achieve. Consider that causality is not axiomatic, since the laws of physics are time-reversible.

Comment by tangerine on On Dwarkesh’s 3rd Podcast With Tyler Cowen · 2024-02-04T15:44:35.163Z · LW · GW

I respect Sutskever a lot, but if he believed that he could get an equivalent world model by spending an equivalent amount of compute learning from next-token prediction using any other set of real-world data samples, why would they go to such lengths to specifically obtain human-generated text for training? They might as well just do lots of random recordings (e.g., video, audio, radio signals) and pump it all into the model. In principle that could probably work, but it’s very inefficient.

Human language is a very high density encoding of world models, so by training on human language, models get much of their world model “for free”, because humanity has already done a lot of pre-work by sampling reality in a wide variety of ways and compressing it into the structure of language. However, our use of language still doesn’t capture all of reality exactly, and I would argue it’s not even close. (Saying otherwise is equivalent to saying we’ve already discovered almost all possible capabilities, which would entail that AI actually has a hard cap at roughly human ability.)

In order to expand its world model beyond human ability, AI has to sample reality itself, which is much less sample-efficient than sampling human behavior, hence the “soft cap”.

Comment by tangerine on On Dwarkesh’s 3rd Podcast With Tyler Cowen · 2024-02-04T15:42:18.726Z · LW · GW

In theory, yes, but that’s obviously a lot more costly than running just one instance. And you’ll need to keep these virtual researchers running in order to keep the new capabilities coming. At some point this will probably happen and totally eclipse human ability, but I think the soft cap will slow things down by a lot (i.e., no foom). That’s assuming that compute and the number of researchers even is the bottleneck to new discoveries; it could also be empirical data.

Comment by tangerine on On Dwarkesh’s 3rd Podcast With Tyler Cowen · 2024-02-03T15:57:33.912Z · LW · GW

If you accept the premise of AI remaining within the human capability range in some broad sense, where it brings great productivity improvements and rewards those who use it well but remains foundationally a tool and everything seems basically normal, essentially the AI-Fizzle world, then we have disagreements

There is good reason to believe that AI will have a soft cap at roughly human ability (and by “soft cap” I mean that anything beyond the cap will be much harder to achieve) for the same reason that humans have a soft cap at human ability: copying existing capabilities is much easier than discovering new capabilities.

A human being born today can relatively easily achieve abilities that other humans have achieved, because you just copy them; lots of 12-year-olds can learn calculus, which is much easier than inventing it. AI will have the same issue.

Comment by tangerine on Monthly Roundup #14: January 2024 · 2024-01-24T21:44:19.655Z · LW · GW

The European socket map is deceptive. My charger will work anywhere on mainland Europe. Looking at the sockets, can you tell why?

Comment by tangerine on There is way too much serendipity · 2024-01-22T17:08:28.421Z · LW · GW

Does this count as “rational, deliberate design”? I think a case could be made for both yes and no, but I lean towards no. Humans who have studied a certain subject often develop a good intuition for what will work and what won’t and I think deep learning captures that; you can get right answers at an acceptable rate without knowing why. This is not quite rational deliberation based on theory.

Comment by tangerine on There is way too much serendipity · 2024-01-21T14:29:36.152Z · LW · GW

I think that “rational, deliberate design”, as you put it, is simply far less common (than random chance) than you think; that the vast majority of human knowledge is a result of induction instead of deduction; that theory is overrated and experimentalism is underrated.

This is also why I highly doubt that anything but prosaic AI alignment will happen.

Comment by tangerine on Seth Explains Consciousness · 2024-01-14T21:50:56.027Z · LW · GW

I don't think I disagree with what you're saying here, though we may be using different terms to say the same thing.

How does what you say here inform your thoughts about the Hard Problem?

Comment by tangerine on Dating Roundup #2: If At First You Don’t Succeed · 2024-01-02T23:02:14.640Z · LW · GW

Regarding taking hints, the other gender typically does not see all the false positives one has to deal with. What seems obvious is usually not obvious at all. In fact, a socially skilled person will always try to use plausibly deniable (i.e., not-obvious) signals and will consider anything more a gauche faux pas. Acting on such signals is therefore inherently risky and is nowadays perhaps considered more risky than it used to be, especially at work and around close friends.

For example, a few years ago, a woman I had great rapport with called me her Valentine in a very charming way. You might say that's an obvious signal, but when I tried to make plans with her she said that's just a thing she does for friends and family and there was no special meaning to it. Some people are out to get your romantic attention, but ultimately want to keep you at arm's length.

Comment by tangerine on Seth Explains Consciousness · 2023-11-14T22:41:18.876Z · LW · GW

All I’m asking for is a way for other people to determine whether a given explanation will satisfy you. You haven’t given enough information to do that. Until that changes we can’t know that we even agree on the meaning of the Hard Problem.

Comment by tangerine on Seth Explains Consciousness · 2023-11-14T20:11:07.763Z · LW · GW

Also, the existence of a problem does not depend on the existence of a solution.

Agreed, but even if no possible solution can ultimately satisfy objective properties, until those properties are defined the problem itself remains undefined. Can you define these objective properties?

Comment by tangerine on Seth Explains Consciousness · 2023-11-14T17:17:48.280Z · LW · GW

I know. Like I said, neither Chalmers nor you nor anyone else has shown it plausible that subjective experience is non-physical. Moreover, you repeatedly avoid giving an objective description of what you’re looking for.

Until either of the above change, there is no reason to think there is a Hard Problem.

Comment by tangerine on Seth Explains Consciousness · 2023-11-13T20:19:09.872Z · LW · GW

Chalmers takes hundreds of pages to set out his argument.

His argument does not bridge that gap. He, like you, does not provide objective criteria for a satisfying explanation, which means by definition you do not know what the thing is that requires explanation, no matter how many words are used trying to describe it.

Comment by tangerine on Seth Explains Consciousness · 2023-11-13T19:29:05.066Z · LW · GW

The core issue is that there’s an inference gap between having subjective experience and the claim that it is non-physical. One doesn’t follow from the other. You can define subjective experience as non-physical, as Chalmers’s definition of the Hard Problem does, but that’s not justified. I can just as legitimately define subjective experience as physical.

I can understand why Chalmers finds subjective experience mysterious, but it’s not more mysterious than the existence of something physical such as gravity or the universe in general. Why is General Relativity enough for you to explain gravity, even though the reason for the existence of gravity is mysterious?

Comment by tangerine on Seth Explains Consciousness · 2023-11-12T21:12:41.005Z · LW · GW

Let’s say the Hard Problem is real. That means solutions to the Easy Problem are insufficient, i.e., the usual physical explanations.

But when we speak about physics, we’re really talking about making predictions based on regularities in observations in general. Some observations we could explain by positing the force of gravity. Newton himself was not satisfied with this, because how does gravity “know” to pull on objects? Yet we were able to make very successful predictions about the motions of the planets and of objects on the surface of the Earth, so we considered those things “explained” by Newton’s theory of gravity. But then we noticed a slight discrepancy between some of these predictions and our observations, so Einstein came up with General Relativity to correct those predictions and now we consider these discrepancies “explained”, even though the reason why that particular theory works remains mysterious, e.g., why does spacetime exist? In general, when a hypothesis correctly predicts observations, we consider these observations scientifically explained.

Therefore to say that solutions to the Easy Problem are insufficient to explain qualia indicates (at least to me) one of two things.

  1. Qualia have no regularity that we can observe. If they really didn’t have regularities that we could observe, we wouldn’t be able to observe that they exist, which contradicts the claim that they do exist. However, they do have regularities! We can predict qualia! Which means solutions to the Easy Problem are sufficient after all, which contradicts the assumption that they’re insufficient.
  2. We’re aspiring to a kind of explanation for qualia over and above the scientific one, i.e., just predicting is not enough. You could posit any additional requirements for an explanation to qualify, but presumably we want an explanation to be true. You can’t know beforehand what’s true, so you can’t know that such additional requirements don’t disqualify the truth. There is only one thing that we know will be true however, namely that whatever we will observe in the future is what we will observe in the future. Therefore as long as the predictions of a theory don’t deviate from future observations, we can’t rule out that it’s accurately describing what’s actually going on, i.e., we can’t falsify it. In a way it’s a low bar, but it’s the best we can do. However, if a hypothesis makes predictions that are compatible with any and all observations, i.e., it’s unfalsifiable, then we can’t ever gain any information about its validity from any observations even in principle, which directly contradicts the assumption that you can find an explanation.

Comment by tangerine on Seth Explains Consciousness · 2023-11-12T17:33:01.836Z · LW · GW

You say you see colors and have other subjective experiences and you call those qualia and I can accept that, but when I ask why solutions to the Easy Problem wouldn’t be sufficient you say it’s because you have subjective experiences, but that’s circular reasoning. You haven’t said why exactly solutions to the Easy Problem don’t satisfy you, which is why I keep asking what kind of explanation would satisfy you. I genuinely do not know, based on what you have said. It doesn’t have to be scientific.

If we are talking about scientific explanation: a scientific explanation of X succeeds if it is able to predict X's, particularly novel ones, and it doesn't mispredict X's.

But it’s not clear to me how you would judge that any explanation, scientific or not, does these things for qualia, because it seems to me that solutions to the Easy Problem do exactly this; I can already predict what kind of qualia you experience, even novel ones. If I show you a piece of red paper, you will experience the qualia of red. If I give you a drink or a drug you haven’t had before I can predict that you will have a new experience. I may not be able to predict quite exactly what those experiences will be in a given situation because I don’t have complete information, but that’s true for virtually any explanation, even when using quantum mechanics.

I suspect you may now object again and say, “but that doesn’t explain subjective experience”. Then I will object again and say, “what explanation would satisfy you?”, to which you will again say, “if it predicts qualia”, to which I will say, “but we can already predict what qualia you will have in a given situation”. Then you will again object and say, “but that doesn’t explain subjective experience”. And so on.

It looks to me like you’re holding out for something you don’t know how to recognize. True, maybe an explanation is impossible, but you don’t know that either. When some great genius finally does explain it all, how will you know he’s right? You wouldn’t want to miss out, right?

They don't explain subjective experience. The Easy Problem is everything except subjective experience.

But this is the very thing in question. Can you explain to me how exactly you come to this conclusion? Having subjective experience does not in itself imply that it’s not physical.

The fact that qualia are physically mysterious can't be predicted from physics

I’m genuinely curious what you mean by this. Can you expand on this?

Comment by tangerine on Seth Explains Consciousness · 2023-11-11T21:58:41.114Z · LW · GW

Science isn't based on exactly predetermining an explanation before you have it.

But then how would you know that a given explanation, scientific or not, explains qualia to your satisfaction? How will you be able to tell that that explanation is indeed the one you were looking for?

If I can tell that qualia are indescribable or undetectable, I must know something of what "qualia" means.

People have earnestly claimed the same thing about various deities. Do you believe in those? Why would your specific belief be true if theirs weren’t? Why are you so sure you’re not mistaken?

And if it is an objective fact that there is some irreducible subjectivity

Could be, but we don’t know that.

One could test a qualiometer on oneself.

How would you determine that it is working? That if you’re seeing something red, the qualiometer says “red”? If so, how would that show that there is something more going on than what’s explained with solutions to the Easy Problem?

it can’t be subjectively true and false at the same time, depending on who you are.

I don't know who is suggesting that.

It’s a logical consequence of claiming there is no objective fact about something.

But you can notice your own qualia.. anaesthesia makes a difference.

Again, I agree with you that subjective experience exists, but I don’t see why solutions to the Easy Problem wouldn’t satisfy you. There’s something mysterious about subjective experience, but that’s true for everything, including atoms and electromagnetic waves and chairs and the rest of objective reality. Why does anything in the universe exist? It’s “why?” all the way down.

Comment by tangerine on Seth Explains Consciousness · 2023-11-11T21:11:26.927Z · LW · GW

If I had to choose between those two phrasings I would prefer the second one, for being the most compatible between both of our notions. My notion of "emerges from" is probably too different from yours. The main difference seems to be that you're a realist about the third-person perspective, whereas I'm a nominalist about it, to use your earlier terms.

That actually sounds more like the first phrasing to me. If you are a nominalist about the third-person perspective, then it seems that you think the third-person perspective does not actually exist and the concept of the third-person perspective is borne of the first-person perspective.

Do you think this works as a double crux?

I’m not sure whether this is a good double crux, because it’s not clear enough to me what we mean by first- and third-person perspectives. It seems conceivable to me that my conception of the third-person perspective is functionally equivalent to your conception of the first-person perspective. Let me expand on that below.

If only the first-person perspective exists, then presumably you cannot be legitimately surprised, because that implies something was true outside of your first-person perspective prior to your experiencing it, unless you define that as being part of your first-person perspective, which seems contradictory to me, but functionally the same as just defining everything from the third-person perspective. The only alternative possibility that seems available is that there are no external facts, which would mean reality is actually an inconsistent series of experiences, which seems absurd; then we wouldn’t even be able to be sure of the consistency of our own reasoning, including this conversation, which defeats itself.

Comment by tangerine on GPT-2030 and Catastrophic Drives: Four Vignettes · 2023-11-11T09:13:06.939Z · LW · GW

I find scenarios in which a single agent forms a significant global threat very implausible because even for very high IQ humans (200+) it seems very difficult to cross a large inference gap on their own.

Moreover, it would have to iterate on empirical data, which it somehow needs to gather, which will be more noticeable as it scales up.

If it employs other agents, such as copies of itself, this only exacerbates the problem, because how will the original agent be able to control its copies enough to keep them from going rogue and being noticed?

The most likely scenario to me seems one where over some number of years we willingly give these agents more and more economic power and they leverage that to gain more and more political power, i.e., by using the same levers of power that humans use and in a collective way, not through a single agent.

Comment by tangerine on Seth Explains Consciousness · 2023-11-08T09:17:41.588Z · LW · GW

Those analogies don't hold, because you're describing claims I might make about the world outside of my subjective experience ('ghosts are real', 'gravity waves are carried by angels', etc.).

The analogies do hold, because you don’t get to do special pleading and claim ultimate authority about what’s real inside your subjective experience any more than about what’s real outside of it. Your subjective experience is part of our shared reality, just like mine.

People are mistaken all the time about what goes on inside their mind, about the validity of their memories, or about the real reasons behind their actions. So why should I take at face value your claims about the validity of your thoughts, especially when those thoughts lead to logical contradictions?

Comment by tangerine on Seth Explains Consciousness · 2023-11-07T18:05:25.555Z · LW · GW

That's the thing, though -- qualia are inherently subjective. (Another phrase for them is 'subjective experience'.) We can't tell the difference between qualia and something that doesn't exist, if we limit ourselves to objective descriptions of the world.

That doesn’t mean qualia can be excused and are to be considered real anyway. If we don’t limit ourselves to objective descriptions of the world then anyone can legitimately claim that ghosts exist because they think they’ve seen them, or similarly that gravity waves are transported across space by angels, or that I’m actually an attack helicopter even if I don’t look like one, or any other unfalsifiable claim, including the exact opposite claims, such as that qualia actually don’t exist. You won’t be able to disagree on any grounds except that you just don’t like it, because you sacrificed the assumptions to do so in order to support your belief in qualia.

Comment by tangerine on Seth Explains Consciousness · 2023-11-07T10:05:06.066Z · LW · GW

I don't know how an equation describes a quale, and I also don't know how to build a qualiometer.

When you find an explanation, how will you know that that was the explanation you were looking for?

If as you say you don’t know in advance how to describe qualia, that means you won’t be able to recognize that an explanation actually describes qualia, which in turn means you don’t actually know what you mean when you talk about qualia.

If as you say you don’t know in advance how to measure qualia, that means the explanation’s predictions can’t be tested against observations because we won’t know whether we are actually measuring qualia, which in turn means any explanation is a priori unfalsifiable.

You need to know in advance how to describe and measure what you’re seeking to explain in such a way that a third party can use those descriptions and measurements to falsify an explanation, otherwise the falsity of any explanation depends on your personal sensibilities; somebody else may have different sensibilities and come to an equally legitimate yet contradictory decision. Presumably, we are in a shared reality where it is an objective matter of fact that we either have qualia or we don’t; it can’t be subjectively true and false at the same time, depending on who you are.

I’m not saying qualia don’t exist, but I am saying that without objective descriptions of qualia and the ability to measure them objectively we can’t tell the difference between qualia and something that doesn’t exist.

Comment by tangerine on Seth Explains Consciousness · 2023-11-06T18:04:09.162Z · LW · GW

This is an opportunity to extend the theory by introducing a cause. In these dualistic quantum mind theories, a nonphysical mind is added to quantum mechanics as an additional causal factor that determines some of the randomness.

Firstly, how would we know that the correct way to extend the theory was to introduce a nonphysical mind as a cause? How would we tell the difference between the validity of this hypothesis and that of the infinite other possible causes?

Secondly, what is the difference between something physical and nonphysical? I hope I can assume that you agree that if something exists, then it behaves in some way. It is then up to us to try to describe that behavior as far as we can. Whether or not something is physical or not seems meaningless at this point. Quarks might as well be considered supernatural, magical, nonphysical objects whose behavior we happen to be able to describe, including how our mundane, physical reality emerges from it.

Supernatural, magical and nonphysical are contradictions in terms unless one decides on some arbitrary distinction between behaviors that are such and those that are not, because such things will behave in some way regardless, and we can predict that behavior insofar as we can describe it.

Comment by tangerine on Seth Explains Consciousness · 2023-11-03T17:43:32.580Z · LW · GW

How do you decide that an explanation specifically for this something (that is not currently explained from an objective viewpoint) is falsified?

Comment by tangerine on Seth Explains Consciousness · 2023-11-03T09:28:50.851Z · LW · GW

This is a gap which dualists can use.

How could dualists use a random process?

Comment by tangerine on Seth Explains Consciousness · 2023-11-02T20:55:08.211Z · LW · GW

>Why would you? The point of using reductive explanation is that it *identifies* phenomenal consciousness with neural activity, and therefore supports physicalism. On the other hand, you would still be able to find correlations in a universe where dualism holds.

You asked how changing neural patterns in a person’s brain can be linked to what experiences. You can use the scientific process to establish those links insofar as they can be linked.

There is no possible universe in which dualism holds due to the interaction problem, unless you use a very narrow definition of what’s physical. (For example, I have encountered people who claimed that light is not physical. That’s fair, but that’s not the definition of physical that I or the vast majority of physicists and scientists use.)

>So what? You can't assume that only things you want to explain in a particular way exist. Why would the universe care?

The point is that you can’t say anything meaningful about things you can’t explain using the scientific process. You can’t even say they exist. They may well exist, but you can’t tell something that doesn’t exist apart from something that can’t be explained scientifically. The scientific process is not just a particular way to explain things; indeed, the universe does not care to what degree you can know things; it just so happens that falsifying theories through predictions is the only way to know things. If horoscopes or dowsing rods were a way to know what’s true they would be science, but they aren’t so they're not.

>I'm not positing that there is: I have subjective conscious experience because I'm a subject. I'm not looking at myself from the outside. Are you?

You are an object. Of course it looks like you’re a subject because that’s what your brain (i.e., “you”) looks like to that brain.

>I am saying that there is something that is not, currently, explained from an objective viewpoint. I've said so several times.

How do you decide whether a candidate explanation is sufficient to explain this something?

Comment by tangerine on Seth Explains Consciousness · 2023-11-02T17:30:16.740Z · LW · GW

>What new experiences? That's the hard problem.

Sure, that’s a hard problem, but it’s not the hard problem. You can go through the usual scientific process and identify what neural patterns correlate with which experiences, but that’s all doable with solutions to the Easy Problem.

>Yes, non human animals have some experiences in common with humans. They also have some that are different, like dolphin sonar. That's the other hard problem?

Again, sure, a hard problem, but to explain such things you can go through the usual scientific process and come up with new ontologies to describe new kinds of experiences.

In contrast, the problem with the Hard Problem is that you can’t even begin the scientific process. What it looks like to me what you’re trying to get at is that, for example, if there is a cup, we can both acknowledge that there are physical constituents that make up the cup, but you seem to pose that in addition to this there is a “cupness” to the cup. This is basically the essentialist position, which is related to philosophical realism.

In terms of consciousness, you seem to be saying that there is something it is like to be conscious, in addition to what the brain is doing from an objective standpoint. I deny that this is the case and therefore I deny that there is a problem that needs explanation. What does need explanation is why some people such as yourself claim that it requires an explanation, which I have tried to explain earlier.

Comment by tangerine on Seth Explains Consciousness · 2023-11-01T16:49:01.513Z · LW · GW

Sure, change the neural patterns in a person’s brain and they’ll get new experiences. As far as non-humans are concerned, if you punch them in the face they’ll experience pain and fear or anger. Red looks like that because that’s what we mean when we say red. If a cup breaks, can you explain where its cupness has gone?

Comment by tangerine on Seth Explains Consciousness · 2023-11-01T15:21:23.511Z · LW · GW

How do you decide whether a candidate explanation is sufficient to explain phenomenal consciousness?

Comment by tangerine on Seth Explains Consciousness · 2023-11-01T14:09:49.052Z · LW · GW

>Are you saying that we have, today, a theory which can predict the nature of sensory qualities from objective facts?

Yes, for example, if blood flow to the brain is decreased, you can use that to correctly predict a decrease in consciousness. If I show you a red piece of paper, you will experience red, if the paper is green, you experience green, etc.

Comment by tangerine on Seth Explains Consciousness · 2023-10-31T19:18:28.550Z · LW · GW

>As I said, it is an explanandum, not an explanation. We have prima facie evidence of consciousness because we are conscious.

I believe consciousness exists and that we both have it, but I don’t think either of us have the kind of consciousness that you claim you have, namely consciousness as described by the Hard Problem. By consciousness as described by the Hard Problem I mean the kind of consciousness that is not fully explained by solutions to the Easy Problem.

Why do you believe that solutions to the Easy Problem are not sufficient? Conversely, why do you believe that heat is a sufficient explanation for what happens to one’s finger when touching fire? What does the latter do that the former does not? How do you in general decide that an explanation is sufficient?

Comment by tangerine on Seth Explains Consciousness · 2023-10-29T18:29:42.647Z · LW · GW

Would I be correct to say that you think the third-person perspective emerges from the first-person perspective? Or would you say that they’re simply separate?

Comment by tangerine on Seth Explains Consciousness · 2023-10-29T17:42:55.009Z · LW · GW

>But that's a completely general argument. If the worst thing you can say about phenomenal consciousness is that it is occasionally inapplicable, it is no worse off than heat.

Unlike heat, I can’t imagine any situation in which consciousness as described by the Hard Problem is applicable. Can you give me a situation in which you can make better predictions using the concept?

>Note the difference between the phenomenon being explained, the explanandum, and the explanation. There is not much doubt that thunder and lightning exist, but there is much doubt that Zeus or Thor causes them.
>
>Zeus is posited, doubtfully, to explain something for which there is clear evidence. Consciousness is equivalent to the thunder, not the thunder god (particularly under a minimal definition ... it's important not to get misled by the idea that qualia are necessarily nonphysical or something).

We can agree that thunder and lightning exist and that Zeus and Thor do not, but not that consciousness exists as posed by the Hard Problem. To resolve that disagreement we need to agree on what it means for something to exist. I proposed this litmus test of additive predictive power.

>The litmus test of philosophy is that it must tell the truth. If prediction isn't available, you should accept that. You shouldn't argue against X on the basis that it prevents prediction, because you have no reason to believe that the universe is entirely predictable. Science is based on the hope that things are predictable and comprehensible, but not in the certainty. They are falsifiable claims.

How does one test that a statement is true (or at least not false)? I accept that there may be things that are true that I can’t know are true, but there is an infinite number of such possible things. How would I decide which to believe and which not? And if I did, what would that get me?

>>The problem is that consciousness, as described by the Hard Problem, is an ontological outgrowth (derived analytically from an existing ontology) that does not have any predictive power.
>
>Why? Where is that proven?

Consciousness as described by the Hard Problem is not derived from any observation that can be independently corroborated. When you claim to observe your own consciousness, you are not observing reality directly, you are observing your own ontology. Your ontology contains consciousness as described by the Hard Problem and that is why you’re seeing it.

Comment by tangerine on Seth Explains Consciousness · 2023-10-25T21:12:03.202Z · LW · GW

Like TAG said, in a trivial sense human observations are made from a first-person, subjective viewpoint. But all of these observations are also happening from a third-person perspective, just like the rest of reality. The way I see it, the third-person perspective is basically the default, i.e., reality is like a list of facts, from which the first-person view emerges. Then of course the question is, how is that emergence possible? I can understand the intuition that the third-person and first-person view seem to be fundamentally different, but I think of it this way: all the thoughts you think and the statements you make are happening in reality and the structure of that reality determines your thoughts. This is where the illusion arguments become relevant; illusions, such as optical ones, demonstrate clearly that you can be made to believe things that are wrong because of convenience or simply brain malfunction. Changing the configuration of your brain matter can make you believe absolutely anything. The belief in the first-person perspective has evolved because it’s just very useful for survival and you can’t choose to disbelieve what your brain makes you believe.

Given the above, to say that the first-person perspective is fundamentally different seems like the more supernatural claim to me.

Comment by tangerine on Seth Explains Consciousness · 2023-10-24T16:47:08.087Z · LW · GW

>the natural structure of space and time ("mathematics")

What exactly do you mean by this? That nature is mathematical?

>all observations are subjective (first person)

This sounds like it could be a double crux, because if I believed this the Hard Problem would follow trivially, but I don’t believe it.

Comment by tangerine on Limits to Learning: Rethinking AGI’s Path to Dominance · 2023-10-22T14:38:51.944Z · LW · GW

I’d say there virtually must be an upper bound. As to where this upper bound is, you could do the following back-of-the-napkin calculation.

ChatGPT is pretty good at reading and writing and it has something on the order of 100 billion to 1,000 billion parameters, one for each artificial synapse. A human brain has on the order of 100,000 billion natural synapses and a chimpanzee brain has about a third of that. If we could roughly equate artificial and natural synapses, it seems that a chimpanzee brain should in principle be able to model reading and writing as well as ChatGPT and then some. But then you’d have to devise a training method to set the strengths of natural synapses as desired.
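The napkin math above can be sketched in a few lines. Note that all figures are loose order-of-magnitude estimates, not measured values, and the one-parameter-per-synapse equivalence is the rough assumption stated in the text:

```python
# Rough order-of-magnitude comparison; all counts are loose public estimates.
chatgpt_params = 1e11                # ~100 billion parameters (lower-bound estimate)
human_synapses = 1e14                # ~100,000 billion natural synapses
chimp_synapses = human_synapses / 3  # roughly a third of the human count

# If one parameter ~ one synapse, how many ChatGPT-scale models' worth of
# synapses does a chimpanzee brain hold?
ratio = chimp_synapses / chatgpt_params
print(f"chimp synapses / ChatGPT params: about {ratio:.0f}x")
```

On these numbers the chimp brain comes out a few hundred times larger than the lower-bound ChatGPT estimate, which is what motivates the "and then some" in the paragraph above.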

Comment by tangerine on Seth Explains Consciousness · 2023-10-22T08:28:44.854Z · LW · GW

>heat still exists

Saying that something “exists” is more subtle than that. In everyday life we don’t have to be pedantic about it, but in this discussion, I think we do.

There are lots of different ontologies which explain how certain parts of reality work. The concept of heat is one that most people include in their ontologies, because it’s just very useful most of the time, though not always. For example, there’s not much sense in asking what the temperature is of a single particle. Virtually every ontology breaks down in such a way at some point, which is to say that in certain situations it does not describe what happens in reality closely enough to be of practical value in that situation.

In pagan cultures, there were ontologies containing gods which ostensibly influenced certain parts of reality. There’s a storm? Zeus must be angry. To these cultures, Zeus existed, because it seemed to explain what was happening. It wasn’t a very good explanation from our perspective because it didn’t bestow great power in predicting storms.

But also in modern science, we have had and still do have theories which explain reality only partially. Newtonian mechanics describes the world very accurately, but not quite exactly. Einstein’s general relativity filled in some of the gaps, but we’re pretty sure that that is not exactly right either, because it’s not a quantum theory, which we think a better theory should be. Given that we know our theories are wrong, does inertia exist? Does spacetime exist? Do points of infinite density exist?

>The hard problem emerges from the requirement to explain consciousness reductively.

You could similarly say that any valid ontology has the requirement to explain heat reductively, but then the pagan could also say that any ontology has the requirement to explain Zeus reductively. Seeing reality through the lens of ontologies, which we all have no choice but to do, colors the perception of what you think exists and needs to be explained. True, “heat” needs to be explained insofar as it does correspond to reality, but we might Pareto-improve our understanding of reality by using an entirely different ontology which doesn’t contain the concept of heat at all, which is pretty much what happened to the concept of Zeus.

The concept of consciousness must be held to the same standard. We have to take a step back from our ontologies and ask what parts are actually useful and what exactly it means for them to be useful. The litmus test of modern science is that it must add predictive power. The problem is that consciousness, as described by the Hard Problem, is an ontological outgrowth (derived analytically from an existing ontology) that does not have any predictive power. Even worse, consciousness as described by the Hard Problem is unfalsifiable, meaning it has been by definition pre-empted from having predictive power (otherwise it could potentially have been falsified by comparing its predictions to some outcome), so why should I include it in my ontology?

Comment by tangerine on Seth Explains Consciousness · 2023-10-21T07:08:24.808Z · LW · GW

Well, for me, one crux is this question of nominalism vs philosophical realism. One way to investigate this question for yourself is to ask whether mathematics is invented (nominalism) or discovered (philosophical realism). I don’t often like to think in terms of -isms, but I have to admit I fall pretty squarely in the nominalist camp, because while concepts and words are useful tools, I think they are just that: tools, that we invented. Reality is only real in a reductionist sense; there are no people, no numbers and no consciousness, because those are just words that attempt to cope with the complexity of reality, so we just shouldn’t take them so seriously. If you agree with this, I don’t see how you can think the Hard Problem is worth taking seriously. If you disagree, I’m interested to see why. If you could convince me that there is merit to the philosophical realist position, I would strongly update towards the Hard Problem being worth taking seriously.

Comment by tangerine on Seth Explains Consciousness · 2023-10-18T20:04:32.680Z · LW · GW

From my point of view, much or all of the disagreement around the existence of the Hard Problem seems to boil down to the opposition between nominalism and philosophical realism. I’ll discuss how I think this opposition applies to consciousness, but let me start by illustrating it with the example of money having value.

In one sense, the value of money is not real, because it's just a piece of paper or metal or a number in a bank’s database. We have systems in place such that we can track relatively consistently that if I work some number of hours, I get some of these pieces of paper or metal or the numbers on my bank account change in some specific way and I can go to a store and give them some of these materials or connect with the bank’s database to have the numbers decrease in some specific way, while in exchange I get a coffee or a t-shirt or whatever. But this is a very obtuse way of communicating, so we just say that “money has value” and everybody understands that it refers to this system of exchanging materials and changing numbers. So in the case of money, we are pretty much all nominalists; we say that money has value as a shorthand and in that sense the value of money is real. On the other hand, a philosophical realist would say that actually the value of money is real independently from our definition of the words. (I view this idea similarly to how Eliezer Yudkowsky talks about buckets being “magical” in this story.)

In the case of the value of money, philosophical realism does not seem to be a common position. However, when it comes to consciousness, the philosophical realist position seems much more common. This strikes me as odd, since both value and consciousness appear to me to originate in the same way; there is some physical system which we, through the evolution of language and culture generally, come to describe with shorthands (i.e., words), because reality is too complicated to talk about exhaustively and in most practical matters we all understand what we mean anyway. However, for philosophical realists, such words appear to take on a life of their own, perhaps because existing words are simply passed down to younger generations as if they were the only way to think about the world, without mentioning that those words happen to conceptualize the world in one specific way out of an infinite and diverse number of ways and, importantly, that that conceptualization oversimplifies to a large extent. This oversimplification is not something we can escape. Any language, including any internal one, has to cope with the fact that reality is too complicated to capture in an exhaustive way. Even if we’re rationalists, we’re severely bounded ones. We can’t see reality for what it is and compare it to how we think about it to see where the differences are; we only see our own thoughts.

Following the Hard Problem through to its logical conclusions seems to lead to contradictions such as the interaction problem. None of the solutions proposed by myself or any Hard Problem enthusiast dissolve these contradictions in a way that satisfies me, therefore I conclude that my mind's conception of my own consciousness is flawed. I'll nonetheless stick to that conception because it's useful, but I have no illusions that it is universally correct; this last step is one that proponents of the Hard Problem seem not to be prepared to take. From my point of view it looks like they conclude that, because their minds conceptualize the world in a certain way, this conception must somehow correspond exactly to reality. However, the map is not the territory.

P.S. I would not call myself an eliminativist. I consider “experience” and “consciousness” and related terms as real as the value of money.

Comment by tangerine on We don't understand what happened with culture enough · 2023-10-10T16:43:15.676Z · LW · GW

Cultural evolution is a bit of a catch-22; you need to keep it going for generations before you gain an advantage from it, but you can’t keep it going unless you’re already gaining an advantage from it. A young human today has a massive advantage in absorbing existing culture, but other species don’t and didn’t. It requires a very long up-front investment without immediate returns, which is exactly not what evolution tends to favor.

Regarding the relevance to AI, the importance of cultural evolution is a strong counterargument to fast take-off. Yudkowsky himself argues that humans somehow separated themselves from other apes, such that humans can uniquely do things that seem wildly out-of-distribution, like going to and walking on the moon, that we’re therefore more generally intelligent and that therefore AI could similarly separate itself from humans by becoming even more generally intelligent and gaining capabilities that are even more wildly out-of-distribution. However, the thing that separates humans from other apes is cultural evolution; it’s a one-time gain without a superlative. Moreover, it’s a counterargument to the very idea of general intelligence, because it shows that going to and walking on the moon are not in fact as wildly out-of-distribution as it first seems. The astronauts and the engineers who built their vehicles were trained exhaustively in the required capabilities, which had been previously culturally accumulated. Walking on the moon only seems striking because the compounding speed of memetic evolution is much higher than the extraordinarily slow pace of genetic evolution.

A further argument I would make for the relevance of cultural evolution to AI is that in my view it shows that the ability of individual human agents to discover new capabilities is on average extremely limited and that the same is likely true for AI, although perhaps to a somewhat lesser extent. Humanity as a whole makes great strides, because among the many who try new things the very few who succeed pass on their new capabilities to the others. The vast majority of any individual’s capabilities relies on absorbing existing knowledge and habits. At the same time, most individuals do not pass on anything new and even when they do it’s the luck of the draw. I think the same is mostly true for any individual AI, because of the inherent rarity of useful behaviors in the space of all behaviors. If this is indeed true, then that means we have less to fear from misaligned individuals than from misaligned cultures.

Comment by tangerine on Seth Explains Consciousness · 2023-10-09T16:36:29.629Z · LW · GW

>The fact that we can't fully explain consciousness is a point in favour of the HP.

But my question was, what exactly can’t we fully explain? What are you referring to when you say “consciousness” and what about it can’t we explain?

>they have arguments you haven't addressed.

Such as?

>I use the criterion of being able to make novel predictions. We clearly don't have a solution that reaches that criterion.

Agreed, but what exactly should it predict? General relativity made novel predictions when it was first formulated, but about the movement of planets and so forth, so I presume that doesn’t count as a solution to the Hard Problem of consciousness.

Comment by tangerine on Seth Explains Consciousness · 2023-10-07T10:48:36.975Z · LW · GW

>You're equivocating, conflating consciousness with self-awareness. Consciousness is not the sense-of-self.

I agree those are separate, but the (useful, evolved) sense-of-self leads to a belief in consciousness. Disproving the reality of the self (i.e., the sense of self being illusory) removes the logical support for consciousness.

Moreover, proponents of the Hard Problem often say “If consciousness is illusory, who is experiencing the illusion?”, thereby revealing their belief that a self is required for consciousness.

So, ostensibly, consciousness and a sense of self are not the same but do imply each other. However, I argue that proponents of the Hard Problem confuse the existence of the sense of self with the existence of an actual self, which leads to erroneous conclusions.