Posts

TAG's Shortform 2020-08-13T09:30:22.058Z

Comments

Comment by TAG on Several Arguments Against the Mathematical Universe Hypothesis · 2025-02-19T23:28:37.328Z · LW · GW

Possibly we are just in one of the mathematical universes that happens to have an arrow of time—the arrow seems to arise from fairly simple assumptions, mainly an initial condition and coarse graining

You are misunderstanding the objection. It's not just an arrow of time in the sense of an ordering, such as increasing entropy; it's the passingness of time. An arrow can exist statically, but that's not how we experience time. We don't experience it as a simultaneous group of moments that happen to be ordered; we experience one moment at a time. A row of houses is ordered, but given all at once; experience comes one-at-a-time, like a succession of movie frames.

The valence of pleasure and pain is not just a sign change, they serve vastly different psychological functions and evolved for distinct evolutionary reasons.

And the associated qualia? What's the mathematical theory of qualia? Is it bottom-up ("we have some mathematical descriptions of qualia, and it's only a matter of time before we have the rest") or top-down ("everything is mathematical, so qualia must be")?

Comment by TAG on [Intuitive self-models] 3. The Homunculus · 2025-02-11T21:22:29.702Z · LW · GW

(Extensively revised and edited.)

Reductionism

Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

Things like airplane wings actually are out there, at least as approximations. I don't see why you are approvingly quoting this: it conflates reduction and elimination.

But the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces.

If that's a scientific claim, it needs to be treated as falsifiable, not as dogma.

You can’t handle the raw truth, but reality can handle it without the slightest simplification. (I wish I knew where Reality got its computing power.)

It's not black and white. A simplified model isn't entirely out there, but it's partly out there. There's still a difference between an aeroplane wing and horse feathers.

Vitalistic Force

Vitalistic force (§3.3) is an intuitive concept that we apply to animals, people, cartoon characters, and machines that “seem alive” (as opposed to seeming “inanimate”).

It amounts to a sense that something has intrinsic important unpredictability in its behavior

The intuitive model says that the decisions are caused by the homunculus, and the homunculus is infused with vitalistic force and hence unpredictable. And not just unpredictable as a state of our limited modeling ability, but unpredictable as an intrinsic property of the thing itself—analogous to how it’s very different for something to be “transparent” versus “of unknown color”, or how “a shirt that is red” is very different from “a shirt that appears red in the current lighting conditions”.

Unpredictability is the absence of a property: predictability. Vitalistic force sounds like the presence of one. It's difficult to see why a negative property would equate to a positive one. We don't have to regard an unpredictable entity as quasi-alive. We don't regard gambling machines in casinos as quasi-alive. Our ancestors used to regard the weather as quasi-alive, but we don't -- so the intuition is not all that compelling. We also don't have to regard living things as unpredictable -- an ox ploughing a furrow is pretty predictable. Unpredictability and vitalism aren't the same concept, and aren't very rigidly linked, psychologically.

It doesn’t veridically (§1.3.2) correspond to anything in the real world (§3.3.3).

Except...

Granted, one can argue that observer-independent intrinsic unpredictability does in fact exist “in the territory”. For example, there’s a meaningful distinction between “true” quantum randomness versus pseudorandomness. However, that property in the “territory” has so little correlation with “vitalistic force” in the map, that we should really think of them as two unrelated things.

So let's say two different things: unpredictability, non-pseudo randomness, could exist in the territory, and could ground a real, non-supernatural version of free will. Vitality could exist in the territory too -- reductionism only requires that it is not fundamental, not that it is not real at all. It could be as real as an airplane wing. Reduction is not elimination.

However, that property in the “territory” has so little correlation with “vitalistic force” in the map, that we should really think of them as two unrelated things

So what is the definition of vitalistic force that's a) different from intrinsic surprisingness b) incapable of existing in the territory even as an approximation?

Homunculi

The strong version of the homunculus, the one-stop-shop that explains everything about consciousness, identity, and free will, is probably false...but bits and pieces of it could still be rescued.

Function: it's possible that there are control systems even if they don't have a specific physical location.

Location: It's quite possible for higher brain areas to be a homunculus (or homunculi) lite, in the sense that they exert executive control, or are where sensory data are correlated. Rejecting ghostly homunculi because they are ghostly doesn't entail rejecting physical homunculi, such as the sensory and motor homunculi.

Vitalism: It's possible for intrinsic surprisingness to exist in the territory, because intrinsic surprisingness is the same thing as indeterminism.

There's also a further level of confusion about whether your idea of the homunculus is the observer or the observed.

Are "we" are observing "ourselves" as a vitalistic homunculus , observing the rest of ourselves? If the latter, which is the real self, the the observer or the homunculus?

As discussed in Post 1, the cortex’s predictive learning algorithm systematically builds generative models that can predict what’s about to happen

No one has discovered a brain algorithm, so far.

Free Will

the suite of intuitions related to free will has spread its tentacles into every corner of how we think and talk about motivation, desires, akrasia, willpower, self, and more

https://www.lesserwrong.com/posts/JLZnSnJptzmPtSRTc/intuitive-self-models-8-rooting-out-free-will-intuitions

And now we come to the part of the argument where an objective, unbiased assessment of free will concludes that the concept (or rather, concepts) are so utterly broken and wrong that any vestige has to be "rooted out".

Now, I expect that most people reading this are scoffing right now that they long ago moved past their childhood state of confusion about free will. Isn’t this “Physicalism 101” stuff?

It's the case that a lot of people think the age-old problem of free will is solved at a stroke by "physics, lol"... but there are also sophisticated naturalistic defences.

There are two dimensions to the problem: the what-we-mean-by-free-will dimension, and the what-reality-offers-us dimension. The question of free will partially depends on how free will is defined, so accepting a basically scientific approach does not avoid the "semantic" issues of how free will, determinism, and so on, are best conceptualised.

( @Seth Herd

I don’t know what people mean by “free will” and I don’t think they usually do either.

Professional philosophers are quite capable of stating their definitions, and you are capable of looking them up.)

Mr. Yudkowsky has no novel insight to offer into how the territory works, nor any novel insight into the correct semantics of free will. He has not solved either subproblem, let alone both. He has proposed a mechanism (not novel) by which the feeling of free will could be a predictable illusion, but that falls short of proving that it is. He basically relies on having an audience who are already strongly biased against free will.

To dismiss free will just on the basis of physicalism, not even deterministic physics, is to tacitly define it as supernatural. Does everyone define it that way? No: there are compatibilists and naturalistic libertarians.

Compatibilism is a naturalistic theory of free will, and libertarianism can be.

(https://insidepoliticalscience.com/libertarian-free-will-vs-compatibilism/)

To provide a mechanism by which the feeling of free will could be an illusion, which he has done, does not show that it actually is an illusion, because of the usual laws of modal logic -- he needs to show that his model is the only possibility, not just a possibility. (These problems were pointed out long ago, of course.)

It is possible, in the right kind of universe, to have libertarian free will backed by an entirely physical mechanism, since physics may be indeterministic... and to have a veridical perception of it. The existence of another possibility, where the sense of free will is illusory, doesn't negate the veridical possibility. "Yes, but physicalism" doesn't either.

You don’t observe your brain processes, so you don’t observe them as deterministic or indeterministic. An assumption of determinism has been smuggled in by a choice of language, the use of the word "algorithm". But, contrary to what many believe, algorithms can be indeterministic.
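
To illustrate (a toy example of my own, not anything from the post): the following is a perfectly well-defined algorithm, yet two runs on the same input can return different outputs.

```python
import random

def randomized_response(truth: bool) -> bool:
    """An indeterministic algorithm: flip a coin; on heads, report the
    true answer; on tails, report the outcome of a second coin flip.
    The input does not determine the output."""
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5
```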

If someone demonstrated that brains run on an indeterministic algorithm that fulfils the various criteria for libertarian free will, would you still deny that humans have any kind of free will?

Didn’t Eliezer Yudkowsky describe free will as “about as easy as a philosophical problem in reductionism can get, while still appearing ‘impossible’ to at least some philosophers”?

Questions can seem easy if you don't understand their complexities.

Yudkowsky posted his solution to the question of free will a long time ago, and the problems were pointed out almost immediately. And ignored for over a decade.

More precisely: If there are deterministic upstream explanations of what the homunculus is doing and why, e.g. via algorithmic or other mechanisms happening under the hood, then that feels like a complete undermining of one’s free will and agency (§3.3.6)

Why? How can you demonstrate that without a definition of free will? Obviously, it would have no impact given the compatibilist definition of free will, for instance.

I have had a lot of discussions on the subject, and I have noticed that many laypeople believe in dualism, or a ghost-in-the-machine theory. In that case, I suppose the news that the machine does it all could be devastating. But... I said laypeople. Professional philosophers generally don't define FW that way, and don't think that dualism and free will are the same thing.

Typical definitions are:-

  1. The ability or discretion to choose; free choice.

  2. The power of making choices that are neither determined by natural causality nor predestined by fate or divine will.

  3. A person's natural inclination; unforced choice.

And if there are probabilistic upstream explanations of what the homunculus is doing and why, e.g. the homunculus wants to eat when hungry, then that correspondingly feels like a partial undermining of free will and agency, in proportion to how confident those predictions are.

That's hardly an undermining of libertarian free will at all. LFW only requires that you could have done otherwise... not that you could have done anything at all, or that you could defy statistical laws.

The way intuitive models work (I claim) is that there are concepts, and associations / implications / connotations of those concepts. There’s a core intuitive concept “carrot”, and it has implications about shape, color, taste, botanical origin, etc. And if you specify the shape, color, etc. of a thing, and they’re somewhat different from most normal carrots, then people will feel like there’s a question “but now is it really a carrot?” that goes beyond the complete list of its actual properties.

There's a way of thinking about free will and selfhood that is just a list of naturalistically respectable properties, and nothing beyond. Libertarianism doesn't require imperceptible essences; on the naturalistic view, it could just be the operation of a ghost-free machine.

According to science, the human brain/body is a complex mechanism made up of organs and tissues which are themselves made of cells which are themselves made of proteins, and so on.

Science does not tell you that you are a ghost in a deterministic machine, trapped inside it and unable to control its operation. Or that you are an immaterial soul trapped inside an indeterministic machine. Science tells you that you are, for better or worse, the machine itself.

Although I have used the term "machine", I do not intend to imply that a machine is necessarily deterministic. It is not known whether physics is deterministic, so "you are a deterministic machine" does not follow from "you are entirely physical". The correct conclusion is "you are no more undetermined than physics allows you to be".

So the scientific question of free will becomes the question of how the machine behaves: whether it has the combination of unpredictability, self-direction, self-modification and so on, that might characterise free will... depending on how you define free will.

There is a whole science of self-controlling machines: cybernetics. Airplane autopilots and, more recently, self-driving cars are examples. Self-control without indeterminism is not sufficient for libertarian free will, but indeterminism without self-control is not either.

All of those things can be ascertained by looking at a person (or an animal or a machine) from the outside. They don't require a subjective inner self... unless you define free will that way. If you define free will as dependent on a ghostly inner self, then you are not going to have a scientific model of free will.

Consciousness

As a typical example, Loch Kelly at one point mentions “the boundless ground of the infinite, invisible life source”. OK, I grant that it feels to him like there’s an infinite, invisible life source. But in the real world, there isn’t. I’m picking on Loch Kelly, but his descriptions of PNSE are much less mystical than most of them.

I grant that it feels to you like you have certain knowledge of the universe's true ontology, but at best what you actually have is a set of scientific models -- mental constructs, maps -- that make good predictions. I am not saying I have certain knowledge that the mystical ontology is correct; I am saying we are both behind Kantian veils. Prediction underdetermines ontology. So long as the boundless life source somehow behaves just like matter, under the right circumstances, physics can't disprove it -- just as physicalism requires matter to behave like consciousness, somehow, under the right circumstances.

The old Yudkowsky post “How An Algorithm Feels From Inside” is a great discussion of this point.

As has been pointed out many times, there is no known reason for an algorithm to feel like anything from the inside.

Comment by TAG on Seth Explains Consciousness · 2025-02-05T17:23:34.297Z · LW · GW

This Cartesian dualism in various disguises is at the heart of most “paradoxes” of consciousness. P-zombies are beings materially identical to humans but lacking this special res cogitans sauce, and their conceivability requires accepting substance dualism.

Only their physical possibility requires some kind of nonphysicality. Physically impossible things can be conceivable if you don't know why they are physically impossible, if you can't see the contradiction between their existence and the laws of physics. The conceivability of zombies is therefore evidence for phenomenal consciousness not having been explained, at least. Which it hasn't anyway: zombies are in no way necessary to state the HP.

The famous “hard problem of consciousness” asks how a “rich inner life” (i.e., res cogitans) can arise from mere “physical processing” and claims that no study of the physical could ever give a satisfying answer.

A rich inner life is something you have whatever your metaphysics. It doesn't go away when you stop believing in it. It's the phenomenon to be explained. Res cogitans, or some other dualistic metaphysics, is among a number of ways of explaining it... not something needed to pose the problem.

The HP only claims that the problem of phenomenal consciousness is harder than other aspects of consciousness. Further arguments by Chalmers tend towards the lack of a physical solution, but you are telescoping them all into the same issue.

We have also solved the mystery of “the dress”:

But not the Hard Problem: the HP is about having any qualia at all, not about ambiguous or anomalous qualia. There would be an HP even if everyone just saw the same uniform shade of red all the time.

As with life, consciousness can be broken into multiple components and aspects that can be explained, predicted, and controlled. If we can do all three we can claim a true understanding of each

If. But we in fact lag in understanding the phenomenal aspect, compared to the others. In that sense, there is a de facto harder problem.

The important point here is that “redness” is a property of your brain’s best model for predicting the states of certain neurons. Redness is not “objective” in the sense of being “in the object".

No, that's not important. The HP starts with the subjectivity of qualia, it doesn't stop with it.

Subjectivity isn't just the trivial issue of being had by a subject, it is the serious issue of incommunicability, or ineffability.

Philosophers of consciousness have committed the same sins as “philosophers of life” before them: they have mistaken their own confusion for a fundamental mystery, and, as with élan vital, they smuggled in foreign substances to cover the gaps. This is René Descartes’ res cogitans, a mental substance that is separate from the material.

No, you can state and justify the HP without assuming dualism.

Are you truly exercising free will or merely following the laws of physics?

Or both?

And how is the topic of free will related to consciousness anyway?

There is no “spooky free will”

There could be non-spooky free will... that is more than a mere feeling. Inasmuch as Seth has skipped that issue -- whether there is a physically plausible, naturalistic free will -- he hasn't solved free will.

There are ways in which you could have both, because there are multiple definitions of free will, as well as open questions about physics. Apart from compatibilist free will, which is obviously compatible with physics, including deterministic physics, naturalistic libertarian free will is possible in an indeterministic universe. NLFW is just an objectively determinable property of a system, a man-machine. Free will doesn't have to be explained away, and doesn't directly require an assumption of dualism.

But selfhood is itself just a bundle of perceptions, separable from each other and from experiences like pain or pleasure.

The subjective sense-of-self is, pretty much by definition. Whether there are any further objective facts, that would answer questions about destructive teleportation and the like, is another question. As with free will, explaining the subjective aspect doesn't explain away the objective aspect.

Comment by TAG on When is a mind me? · 2025-02-02T17:49:30.711Z · LW · GW

First, computationalism doesn’t automatically imply that, without other assumptions, and indeed there are situations where you can’t clone data perfectly,

That's a rather small nit. The vast majority of computationalists are talking about classical computation.

Indeed, I was basically trying to say that computationalism is so general that it cannot predict any result that doesn’t follow from pure logic/tautologies,

That's not much of a boast: pure logic can't solve metaphysical problems about consciousness, time, space, identity, and so on. That's why they are still problems. There's a simple logical theory of identity, but it doesn't answer the metaphysical problems, what I have called the synchronic and diachronic problems.

Secondly, one could semi-reasonably argue that the inability to clone physical states is an artifact of our technological immaturity, and that in the far-future, it will be way easier to clone physical states to a level of fidelity that is way closer to the level of copyability of computer programs.

Physicalism doesn't answer the problems. You need some extra information about how similar or different physical things are in order to answer questions about whether they are the same or different individuals. At least, if you want to avoid the implications of raw physicalism -- along the lines of "if one atom changes, you're a different person". An abstraction would be useful -- but it needs to be the right one.

Third, I gave a somewhat more specific theory of identity in my linked answer, and it’s compatible with both computationalism and physicalism as presented, I just prefer the computationalist account for the general case and the physicalist answer for specialized questions.

You seem to be saying that consciousness is nothing but having a self-model, and whatever the self believes about itself is the last word... that there are no inconvenient objective facts that could trump a self-assessment ("No, you're not the original Duncan Idaho, you're ghola number 476. You think you're the one and only Duncan because your brain state is a clone of the original Duncan's"). That makes things rather easy. But the rationalist approach to the problem of identity generally relies on bullet-biting about whatever solution is appealing -- if computationalism is correct, you can be cloned, and then you really are in two places at once.

My main non-trivial claim here is that the sense of a phenomenal experience/awareness fundamentally comes down to the fact that the brain needs to control the body, and vice-versa, so you need a self-model of yourself, which becomes a big part of why we say we have consciousness, because we are referring to our self models when we do that.

Well, how? If you could predict qualia from self-control, you'd have a solution -- not a dissolution -- to the HP.

Another reason why the hard problem seems hard is that way too many philosophers are disinclined to gather any data on the phenomenon of interest at all, because they don’t have backgrounds in neuroscience, and instead want to purely define consciousness without reference to any empirical reality.

Granting that "empirical" means "outer empirical" .... not including introspection.

I don't think there is much evidence for the "purely". Chalmers doesn't disbelieve in the easy-problem aspects of consciousness.

Comment by TAG on The Functionalist Case for Machine Consciousness: Evidence from Large Language Models · 2025-01-30T22:36:17.976Z · LW · GW

We’re talking about “physical processes”

We are talking about functionalism -- it's in the title. I am contrasting physical processes with abstract functions.

In ordinary parlance, the function of a physical thing is itself a physical effect...toasters toast, kettles boil, planes fly.

In the philosophy of mind, a function is an abstraction, more like the mathematical sense of a function. In maths, a function takes some inputs and produces some outputs. Well-known examples are familiar arithmetic operations like addition, multiplication, squaring, and so on. But the inputs and outputs are not concrete physical realities. In computation, the inputs and outputs of a functional unit, such as a NAND gate, always have some concrete value, some specific voltage, but not always the same one. Indeed, general Turing-complete computers don't even have to be electrical -- they can be implemented in clockwork, hydraulics, photonics, etc.

This is the basis for the idea that a computer programme can be the same as a mind, despite being made of different matter -- it implements the same abstract functions. The abstractness of the philosophy-of-mind concept of a function is part of its usefulness.
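
A toy sketch of the distinction (the voltage and pressure thresholds are illustrative inventions, not real device specifications): one abstract function, two physically dissimilar realisations.

```python
def nand(a: bool, b: bool) -> bool:
    """The abstract function: defined over truth values, not voltages."""
    return not (a and b)

def nand_electrical(v_a: float, v_b: float) -> float:
    """One concrete realisation: above ~2.5 V counts as 'true'."""
    a, b = v_a > 2.5, v_b > 2.5
    return 0.0 if (a and b) else 5.0

def nand_hydraulic(p_a: float, p_b: float) -> float:
    """Another realisation: high water pressure (in kPa) plays 'true'."""
    a, b = p_a > 100.0, p_b > 100.0
    return 0.0 if (a and b) else 200.0
```

Both devices implement the same abstract nand under their own encodings, which is all that computationalism attends to; whether it is all that consciousness depends on is the point in dispute.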

Searle is a famous critic of computationalism, and his substitute for it is a biological essentialism in which the generation of consciousness is a brain function -- in the concrete sense of function. It's true that something whose concrete function is to generate consciousness will generate consciousness... but it's vacuously, trivially true.

The point is that the functions which this physical process is implementing are what’s required for consciousness not the actual physical properties themselves.

If you mean that abstract, computational functions are known to be sufficient to give rise to all aspects of consciousness including qualia, that is what I am contesting.

I think I’m more optimistic than you that a moderately accurate functional isomorph of the brain could be built which preserves consciousness (largely due to the reasons I mentioned in my previous comment around robustness).

I'm less optimistic because of my arguments.

But putting this aside for a second, would you agree that if all the relevant functions could be implemented in silicon then a functional isomorph would be conscious?

No, not necessarily. That -- in the "not necessarily" form -- is what I've been arguing all along. I also don't think that consciousness has a single meaning, or that there is agreement about what it means, or that it is a simple binary.

The controversial point is whether consciousness in the hard-problem sense -- phenomenal consciousness, qualia -- will be reproduced with reproduction of function. It's not controversial that easy-problem consciousness -- capacities and behaviour -- will be reproduced by functional reproduction. I don't know which you believe, because you are only talking about consciousness not otherwise specified.

If you do mean that a functional duplicate will necessarily have phenomenal consciousness, and you are arguing the point, not just holding it as an opinion, you have a heavy burden:-

You need to show some theory of how computation generates conscious experience. Or you need to show why the concrete physical implementation couldn't possibly make a difference.

@rife

Yes, I’m specifically focused on the behaviour of an honest self-report

Well, you're not rejecting phenomenal consciousness wholesale.

fine-grained information becomes irrelevant implementation details. If the neuron still fires, or doesn’t, smaller noise doesn’t matter. The only reason I point this out is specifically as it applies to the behaviour of a self-report (which we will circle back to in a moment). If it doesn’t materially affect the output so powerfully that it alters that final outcome, then it is not responsible for outward behaviour.

But outward behaviour is not what I am talking about. The question is whether functional duplication preserves (full) consciousness. And, as I have said, physicalism is not just about fine-grained details. There's also the basic fact of running on the metal.

I’m saying that we have ruled out that a functional duplicate could lack conscious experience because: we have established conscious experience as part of the causal chain

"In humans". Even if it's always the case that qualia are causal in humans, it doesn't follow that reports of qualia in any entity whatsoever are caused by qualia. Yudkowsky's argument is no help here, because he doesn't require reports of consciousness to be *directly" caused by consciousness -- a computational zombies reports would be caused , not by it's own consciousness, but by the programming and data created by humans.

to be able to feel something and then output a description through voice or typing that is based on that feeling. If conscious experience was part of that causal chain, and the causal chain consists purely of neuron firings, then conscious experience is contained in that functionality.

Neural firings are specific physical behaviour, not abstract function. Computationalism is about abstract function.

Comment by TAG on The Functionalist Case for Machine Consciousness: Evidence from Large Language Models · 2025-01-28T13:41:03.844Z · LW · GW

I don’t find this position compelling for several reasons:

First, if consciousness really required extremely precise physical conditions—so precise that we’d need atom-by-atom level duplication to preserve it, we’d expect it to be very fragile.

Don't assume that, then. Minimally, non-computationalist physicalism only requires that the physical substrate makes some sort of difference. Maybe approximate physical resemblance results in approximate qualia.

Yet consciousness is actually remarkably robust: it persists through significant brain damage, chemical alterations (drugs and hallucinogens) and even as neurons die and are replaced.

You seem to be assuming a maximally coarse-grained either-conscious-or-not model.

If you allow for fine-grained differences in functioning and behaviour, all those things produce fine-grained differences. There would be no point in administering anaesthesia if it made no difference to consciousness. Likewise, there would be no point in repairing brain injuries. Are you thinking of consciousness as a synonym for personhood?

We also see consciousness in different species with very different neural architectures.

We don't see that they have the same kind or level of consciousness.

Given this robustness, it seems more natural to assume that consciousness is about maintaining what the state is doing (implementing feedback loops, self-models, integrating information etc.) rather than their exact physical state.

Stability is nothing like a sufficient explanation of consciousness, particularly the hard problem of conscious experience... even if it is necessary. But it isn't necessary either, as the cycle of sleep and waking tells all of us every day.

Second, consider what happens during sleep or under anaesthesia. The physical properties of our brains remain largely unchanged, yet consciousness is dramatically altered or absent.

Obviously the electrical and chemical activity changes. You are narrowing "physical" to "connectome". Physicalism is compatible with the idea that specific kinds of physical activity are crucial.

Immediately after death (before decay sets in), most physical properties of the brain are still present, yet consciousness is gone. This suggests consciousness tracks what the brain is doing (its functions)

No, physical behaviour isn't function. Function is abstract, physical behaviour is concrete. Flight simulators functionally duplicate flight without flying. If function were not abstract, functionalism would not lead to substrate independence. You can build a model of ion channels and synaptic clefts, but the modelled sodium ions aren't actual sodium ions, and if the universe cares about activity being implemented by actual sodium ions, your model isn't going to be conscious.

Rather than what it physically is. The physical structure has not changed but the functional patterns have changed or ceased.

Physical activity is physical.

I acknowledge that functionalism struggles with the hard problem of consciousness—it’s difficult to explain how subjective experience could emerge from abstract computational processes. However, non-computationalist physicalism faces exactly the same challenge. Simply identifying a physical property common to all conscious systems doesn’t explain why that property gives rise to subjective experience.

I never said it did. I said it had more resources. It's badly off, but not as badly off.

Yet, we generally accept behavioural evidence (including sophisticated reasoning about consciousness) as evidence of consciousness in humans.

If we can see that someone is a human, we know that they have a high degree of biological similarity. So we have behavioural similarity and biological similarity, and it's not obvious how much lifting each is doing.

@rife

Functionalism doesn’t require giving up on qualia, but only acknowledging physics. If neuron firing behavior is preserved, the exact same outcome is preserved,

Well, the externally visible outcome is.

If I say “It’s difficult to describe what it feels like to taste wine, or even what it feels like to read the label, but it’s definitely like something”, there are two options: either it’s perpetual coincidence that my experience of attempting to translate the feeling of qualia into words always aligns with words that actually come out of my mouth, or it is not. Since perpetual coincidence is statistically impossible, we know that experience had some type of causal effect.

In humans.

So far that tells us that epiphenomenalism is wrong, not that functionalism is right.

The binary conclusion of whether a neuron fires or not encapsulates any lower level details, from the quantum scale to the micro-biological scale

What does "encapsulates"means? Are you saying that fine grained information gets lost? Note that the basic fact of running on the metal is not lost.

—this means that the causal effect experience has is somehow contained in the actual firing patterns.

Yes. That doesn't mean the experience is, because a computational zombie will produce the same outputs even if it lacks consciousness, uncoincidentally.

A computational duplicate of a believer in consciousness and qualia will continue to state that it has them, whether it does or not, because it's a computational duplicate, so it produces the same output in response to the same input.

We have already eliminated the possibility of happenstance or some parallel non-causal experience,

You haven't eliminated the possibility of a functional duplicate still being a functional duplicate if it lacks conscious experience.

Basically

  1. Epiphenomenalism
  2. Coincidence
  3. Functionalism

Aren't the only options.

Comment by TAG on The Functionalist Case for Machine Consciousness: Evidence from Large Language Models · 2025-01-25T17:03:57.937Z · LW · GW

Imagine that we could successfully implement a functional isomorph of the human brain in silicon. A proponent of 2) would need to explain why this functional isomorph of the human brain which has all the same functional properties as an actual brain does not, in fact, have consciousness.

Physicalism can do that easily, because it implies that there can be something special about running unsimulated, on bare metal.

Computationalism, even very fine grained computationalism, isn't a direct consequence of physicalism. Physicalism has it that an exact atom-by-atom duplicate of a person will be a person and not a zombie, because there is no nonphysical element to go missing. That's the argument against p-zombies. But if it actually takes an atom-by-atom duplication to achieve human functioning, then the computational theory of mind will be false, because CTM implies that the same algorithm running on different hardware will be sufficient. Physicalism doesn't imply computationalism, and arguments against p-zombies don't imply the non-existence of c-zombies -- unconscious duplicates that are identical computationally, but not physically.

So it is possible, given physicalism, for qualia to depend on the real physics, the physical level of granularity, not on the higher level of granularity that is computation.

Anil Seth where he tries to pin down the properties X which biological systems may require for consciousness https://osf.io/preprints/psyarxiv/tz6an. His argument suggests that extremely complex biological systems may implement functions which are non-Turing computable

It presupposes computationalism to assume that the only possible defeater for a computational theory is the wrong kind of computation.

My contention in this post is that if they’re able to reason about their internal experience and qualia in a sophisticated manner then this is at least circumstantial evidence that they’re not missing the “important function.”

There's no evidence that they are not stochastic-parroting, since their training data wasn't pruned of statements about consciousness.

If the claim of consciousness is based on LLMs introspecting their own qualia and reporting on them, there's no clinching evidence they are doing so at all. You've got the fact that computational functionalism isn't necessarily true, the fact that TT-type investigations don't pin down function, and the fact that there is another potential explanation of the results.

Comment by TAG on The Functionalist Case for Machine Consciousness: Evidence from Large Language Models · 2025-01-23T12:07:59.072Z · LW · GW

Whether computational functionalism is true or not depends on the nature of consciousness as well as the nature of computation.

While embracing computational functionalism and rejecting supernatural or dualist views of mind

As before, they also reject non-computationalist physicalism, e.g. biological essentialism, whether they realise it or not.

It seems to privilege biology without clear justification. If a silicon system can implement the same information processing as a biological system, what principled reason is there to deny it could be conscious?

The reason would be that there is more to consciousness than information processing...the idea that experience is more than information processing not-otherwise-specified, that drinking the wine is different to reading the label.

It struggles to explain why biological implementation specifically would be necessary for consciousness. What about biological neurons makes them uniquely capable of generating conscious experience?

Their specific physics. Computation is an abstraction from physics, so physics is richer than computation, and has more resources available to explain conscious experience. Computation has no resources to explain conscious experience -- there just isn't any computational theory of experience.

It appears to violate the principle of substrate independence that underlies much of computational theory.

Substrate independence is an implication of computationalism, not something that's independently known to be true. Arguments from substrate independence are therefore question begging.

Of course, there is minor substrate independence, in that brains with biological differences are able to realise similar capacities and mental states. That could be explained by a coarse-graining or abstraction other than computationalism. A standard argument against computationalism, not mentioned here, is that it allows too much substrate independence and multiple realisability -- blockheads and so on.

It potentially leads to arbitrary distinctions. If only biological systems can be conscious, what about hybrid systems? Systems with some artificial neurons? Where exactly is the line?

Consciousness doesn't have to be a binary. We experience variations in our conscious experience every day.

However, this objection becomes less decisive under functionalism. If consciousness is about implementing certain functional patterns, then the way these patterns were acquired (through evolution, learning, or training) shouldn’t matter. What matters is that the system can actually perform the relevant functions

But that can't be inferred from responses alone, since, in general, more than one function can generate the same output for a given input.

It’s not clear what would constitute the difference between “genuine” experience and sophisticated functional implementation of experience-like processing

You mean there is a difference to an outside observer, or to the subject themself?

The same objection could potentially apply to human consciousness—how do we know other humans aren’t philosophical zombies

It's implausible given physicalism, so giving up computationalism in favour of physicalism doesn't mean embracing p-zombies.

If we accept functionalism, the distinction between “real” consciousness and a perfect functional simulation of consciousness becomes increasingly hard to maintain.

It's hard to see how you can accept functionalism without giving up qualia, and easy to see how zombies are imponderable once you have given up qualia. Whether you think qualia are necessary for consciousness is the most important crux here.

Comment by TAG on Chance is in the Map, not the Territory · 2025-01-19T20:26:19.953Z · LW · GW

We de-emphasized QM in the post

You did a bit more than de-emphasize it in the title!

Also:

Like latitude and longitude, chances are helpful coordinates on our mental map, not fundamental properties of reality.

"Are"?

**Insofar as we assign positive probability to such theories, we should not rule out chance as being part of the world in a fundamental way. **Indeed, we tried to point out in the post that the de Finetti theorem doesn’t rule out chances, it just shows we don’t need them in order to apply our standard statistical reasoning. In many contexts—such as the first two bullet points in the comment to which I am replying—I think that the de Finetti result gives us strong evidence that we shouldn’t reify chance.

The perennial source of confusion here is the assumption that the question is whether chance/probability is in the map or the territory... but the question sidelines the "both" option. If there were strong evidence of mutual exclusion, of an XOR rather than IOR premise, the question would be appropriate. But there isn't.

If there is no evidence of an XOR, no amount of evidence in favour of subjective probability is evidence against objective probability, and objective probability needs to be argued for (or against), on independent grounds. Since there is strong evidence for subjective probability, the choices are subjective+objective versus subjective only, not subjective versus objective.

(This goes right back to "probability is in the mind")

Occam's razor isn't much help. If you assume determinism as the obvious default, objective uncertainty looks like an additional assumption... but if you assume randomness as the obvious default, then any deterministic or quasi-deterministic law seems like an additional thing.

In general, my understanding is that in many worlds you need to add some kind of rationality principle or constraint to an agent in the theory so that you get out the Born rule probabilities, either via self-locating uncertainty (as the previous comment suggested) or via a kind of decision theoretic argument.

@quiet_NaN

There's a purely mathematical argument for the Born rule. The tricky thing is explaining why observations have a classical basis -- why observers who are entangled with a superposed system don't go into superposition with themselves. There are multiple aspects to the measurement problem... the existence or otherwise of a fundamental measurement process, the justification of the Born rule, the reason for the emergence of sharp pointer states, and the reason for the appearance of a classical basis. Everett theory does rather badly on the last two.
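
For reference, the rule in question in its standard textbook form; Gleason's theorem is the usual "purely mathematical" route to it:

```latex
% Born rule: probability of obtaining outcome i when measuring |psi>
% in the orthonormal basis {|phi_i>}.
\Pr(i) = \left| \langle \phi_i \mid \psi \rangle \right|^2
% Gleason (dim H >= 3): every countably additive probability measure on
% the projections P has the form mu(P) = tr(rho P) for some density
% operator rho, from which the Born rule follows for pure states.
```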

If the authors claim that adding randomness in the territory in classical mechanics requires making it more complex, they should also notice that for quantum mechanics, removing the probability from the territory for QM (like Bohmian mechanics) tends to make the theories more complex.

OK, but people here tend to prefer many worlds to Bohmian mechanics. It isn't clear that MWI is more complex... but it also isn't clear that it is actually simpler than the alternatives... as it's stated to be in the rationalsphere.

Comment by TAG on When is a mind me? · 2025-01-16T21:13:17.680Z · LW · GW

Computationalism is a bad theory of synchronic non-identity (in the sense of "why am I a unique individual, even though I have an identical twin"), because computations are so easy to clone -- computational states are more cloneable than physical states.
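
A minimal illustration of the asymmetry (the snippet is mine; the contrast on the physical side is the quantum no-cloning theorem, which forbids copying an unknown quantum state):

```python
import copy

# Any snapshot of computational state can be duplicated exactly, as many
# times as you like -- cloning a computational state is a one-liner.
state = {"memories": ["first day of school", "taste of coffee"], "tick": 42}
clone = copy.deepcopy(state)
assert clone == state and clone is not state  # same content, distinct copy
```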

Computationalism might be a better theory of diachronic identity (in the sense of "why am I still the same person, even though I have physically changed"), since it's abstract, and so avoids the "one atom has changed" problem of naive physicalism. Other abstractions are available, though. "Having the same memories" is a traditional one, and not unadulterated computation.

It's still a bad theory of consciousness-qua-awareness (phenomenal consciousness, qualia, hard-problem stuff) because, being an abstraction, it has fewer resources than physicalism to explain phenomenal experience. There is no computational theory of qualia whatsoever, no algorithm for seeRed().

It's still an ok explanation of consciousness-qua-function (easy problem stuff), but not obviously the best.

Most importantly: it's still the case that, if you answer one of these four questions, you don't get answers to the other three automatically.

I believe computationalism is a very general way to look at effectively everything,

Computation is an abstraction, and it's not guaranteed to be the best.

This also answers andeslodes’s point around physicalism, as the physicalist ontology is recoverable as a special case of the computationalist ontology

A perfect map has the same structure as the territory, but still is not the territory. The on-the-metalness is lacking. Flight simulators don't fly. You can't grow potatoes in a map, not even a 1:1 one.

...also hears that the largest map considered really useful would be six inches to the mile; although his country had learnt map-making from his host Nation, it had carried it much further, having gone through maps that are six feet to the mile, then six yards to the mile, next a hundred yards to the mile—finally, a mile to the mile (the farmers said that if such a map was to be spread out, it would block out the sun and crops would fail, so the project was abandoned).

https://en.m.wikipedia.org/wiki/Sylvie_and_Bruno

my biggest view on what consciousness actually is, in that it’s essentially a special case of modeling the world, where in order to give your own body at one time alive, you need to have a model of the body and brain, and that’s what consciousness basically is, a model of ourselves

So... it's nothing to do with qualia/phenomenality/HP stuff? Can't self-modelling and phenomenality be separate questions?

Comment by TAG on Chance is in the Map, not the Territory · 2025-01-14T21:27:21.036Z · LW · GW

Others say chance is a physical property – a “propensity” of systems to produce certain outcomes. But this feels suspiciously like adding a mysterious force to our physics.[4] When we look closely at physical systems (leaving quantum mechanics aside for now), they often seem deterministic: if you could flip a coin exactly the same way twice, it would land the same way both times.

Don't sideline QM: it's highly relevant. If there are propensities, real probabilities, then they are not mysterious; they are just the way reality works. They might be unnecessary to explain many of our practices of ordinary probabilistic reasoning, but that doesn't make them mysterious in themselves.

If you can give a map-based account of probabilistic reasoning, that's fine as far as it goes... but it doesn't go as far as proving there are no propensities.

This approach aligns perfectly with the rationalist emphasis on “the map is not the territory.”

Whatever that means, it doesn't mean that maps can never correspond to territories. In-the-map does not imply not-in-the-territory. "Can be thought about in a certain way" does not imply "has to be thought about in a certain way".

Like latitude and longitude, chances are helpful coordinates on our mental map, not fundamental properties of reality. When we say there’s a 70% chance of rain, we’re not making claims about mysterious properties in the world.

But you could be partially making claims about the world, since propensities are logically possible... even though there is a layer of subjective, lack-of-knowledge-based uncertainty on top.

(And the fact that there is so much ambiguity between in-the-map probability and in-the-territory probability itself explains why there is so much confusion about QM).

@Maxwell Peterson

Well, you can regard QM as deterministic, so long as you are willing to embrace nonlocality... but you don't have to.

Although it is worth noting that many theories of quantum mechanics— in particular, Everettian and Bohmian quantum mechanics—are perfectly deterministic.

...only means you can.

The existence of real probabilities is still an open question, and still not closed by noticing that there is a version of probability/possibility/chance in the mind/map ...because that doesn't mean there is isn't also a version in the territory/reality.

Bayesianism in particular doesn't mean probability is in the mind in a sense exclusive of being in the territory.

Consider performing a Bayesian experiment in a universe with propensities. You start off with a prior of 0.5, on indifference, that your photons will be spin up. You perform a run of experiments, and 50% of them are spin up. So your posterior is also 0.5... which is also the in-the-territory probability.
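
A minimal sketch of that run, assuming a flat Beta(1, 1) indifference prior and a hypothetical 1000-trial experiment:

```python
# Beta(1, 1) is the flat 'indifference' prior over the spin-up propensity.
a, b = 1, 1

# Hypothetical run: 1000 measurements, half of them spin up, just as a
# propensity of 0.5 in the territory would lead you to expect.
ups, downs = 500, 500

# Conjugate update: the posterior is Beta(a + ups, b + downs).
a_post, b_post = a + ups, b + downs
posterior_mean = a_post / (a_post + b_post)
print(posterior_mean)  # 0.5 -- the credence matches the propensity
```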

@Cubefox

Credences need to be about something, but they don't need to be about propensities. A Bayesian can prove that they have the right credences by winning bets, which is quite possible in a deterministic universe.

Comment by TAG on [deleted post] 2025-01-04T22:59:49.405Z

ethical, political and religious differences (which i’d mostly not place in the category of ‘priors’, e.g. at least ‘ethics’ is totally separate from priors aka beliefs about what is)

That's rather what I am saying. Although I would include "what is" as opposed to "what appears to be". There may well be a fact/value gap, but there's also an appearance/reality gap. The epistemology you get from the evolutionary argument only goes as far as the apparent. You are not going to die if you have interpreted the underlying nature or reality of a dangerous thing incorrectly -- you should drink water even if you think it's a fundamental element, you should avoid marshes even if you think fever is caused by bad smells.

are explained by different reasons (some also evolutionary, e.g. i guess it increased survival for not all humans to be the same), so this question is mostly orthogonal / not contradicting that human starting beliefs came from evolution.

But that isn't the point of the OP. The point of the OP is to address an epistemological problem: to show that our priors have some validity, because the evolutionary process that produced them would tend to produce truth-seeking ones. It's epistemically pointless to say that we have some arbitrary starting point of no known validity -- as the already-in-motion argument in fact does.

I don’t understand the next three lines in your comment.

The point is that an evolutionary process depends on feedback from what is directly observable and workable ("a process tuned to achieving directly observable practical results")... and that has limitations. It's not useless, but it doesn't solve every epistemological problem (i.e. "non-obvious theoretical truth").

Truth and usefulness, reality and appearance are different

The usefulness cluster of concepts includes the ability to make predictions, as well as to create technology. The truth cluster of concepts involves identification of the causes of perceptions, and offering explanations, not just predictions. The usefulness cluster corresponds to scientific instrumentalism, the truth cluster to scientific realism. The truth cluster corresponds to epistemological rationalism, the usefulness cluster to instrumental rationalism. Truth is correspondence to reality, which is not identical to the ability to make predictions. One can predict that the sun will rise without knowing what the Sun really is. "Curve fitting" science is adequate to make predictions. Trial and error is adequate to come up with useful technologies. But other means are needed to find the underlying reality. One can't achieve convergence by "just using evidence", because the questions of what evidence is, and how to interpret it, depend on one's episteme.

Comment by TAG on [deleted post] 2025-01-03T04:23:16.195Z

A) If priors are formed by an evolutionary process common to all humans, why do they differ so much? Why are there deep ethical, political and religious divides?

B) How can a process tuned to achieving directly observable practical results allow different agents to converge on non-obvious theoretical truth?

These questions answer each other, to a large extent. B: they can't; A: that's where the divides come from. Values aren't dictated by facts, and neither are interpretations-of-facts.

@quila

The already-in-motion argument is even weaker than the evolutionary argument, because it says nothing about the validity of the episteme you already have... and nothing about the uniformity/divergence between individuals, either.

@Carl Feynman

Observations overwhelming priors needs to account for the divergence as well. But, of course, real agents aren't ideal Bayesians... in particular, they don't have access to every possible hypothesis, and if you've never even thought of a hypothesis, the evidence can't support it in practice. It's as if the unimagined hypotheses -- the overwhelming majority -- have 0 credence.

Comment by TAG on Everything you care about is in the map · 2024-12-21T23:00:19.713Z · LW · GW

you can only care about what you fully understand

I think I need an operational definition of “care about” to process this

If you define "care about" as "put resources into trying to achieve" , there's plenty of evidence that people care about things that can't fully define, and don't fully understand, not least the truth-seeking that happens here.

Comment by TAG on Everything you care about is in the map · 2024-12-18T17:18:32.680Z · LW · GW

You can only get from the premise "we can only know our own maps" to the conclusion "we can only care about our own maps" via the minor premise "you can only care about what you fully understand". That premise is clearly wrong: one can care about unknown reality, just as one can care about the result of a football match that hasn't happened yet. A lot of people do care about reality directionally.

@Dagon

Embedded agents are in the territory. How helpful that is depends on the territory.

@Noosphere89

you can model the territory under consideration well enough to make the map-territory distinction illusory.

Well, no. A perfect map is still a map. The map-territory distinction does not lie in imperfect representation alone.

Comment by TAG on Estimating the kolmogorov complexity of the known laws of physics? · 2024-12-15T15:27:26.877Z · LW · GW

To specify the Universe, you only have to specify enough information to pick it out from the landscape of all possible Universes

Of course not. You have to specify the landscape itself; otherwise it's like saying "page 273 of [unspecified book]".

According to string theory (which is a Universal theory in the sense that it is Turing-complete)

As far as I can see, that is only true in that ST allows Turing machines to exist physically. That's not the kind of Turing completeness you want. You want to know that String Theory is itself Turing computable, not requiring hypercomputation. Or whatever is actually the ultimate physical theory. Because K complexity doesn't work otherwise. And the computability of physics is far from a given:-

https://en.m.wikipedia.org/wiki/Computability_in_Analysis_and_Physics

Note that the fact that a theory might consist of a small number of differential equations is quite irrelevant, because any one equation could be uncomputable.
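
The standard definition makes the dependence on computability explicit (U here is a fixed universal Turing machine):

```latex
% Kolmogorov complexity of a string x, relative to a universal machine U:
K_U(x) = \min \{\, |p| : U(p) = x \,\}
% If generating x to a given precision requires hypercomputation, no such
% program p exists, and K_U(x) is undefined -- no matter how few
% differential equations the theory takes to write down.
```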

Comment by TAG on Ethical Implications of the Quantum Multiverse · 2024-12-10T16:43:02.394Z · LW · GW

They are not the same things though. Quantum mechanical measure isn’t actually a head count, like classical measure. The theory doesn’t say that—it’s an extraneous assumption. It might be convenient if it worked that way, but that would be assuming your conclusion.

QM measure isn’t probability—the probability of something occurring or not—because all possible branches occur in MWI.

Another part of the problem stems from the fact that what other people experience is relevant to them, whereas for a probability calculation, I only need to be able to statistically predict my own observations. Using QM to predict my own observations, I can ignore the question of whether something has a ten percent chance of happening in the one and only world, or a certainty of happening in one tenth of possible worlds. However, these are not necessarily equivalent ethically.

Comment by TAG on Is the mind a program? · 2024-12-02T17:56:32.112Z · LW · GW

@Dagon

This comes down to a HUGE unknown - what features of reality need to be replicated in another medium in order to result in sufficiently-close results

That's at least two unknowns: what needs to be replicated in order to get the objective functioning, and what needs to be replicated to get the subjective awareness as well.

Which is all just to say -- isn't it much more likely that the problem has been solved, and there are people who are highly confident in the solution because they have verified all the steps that led them there, and they know with high confidence which features need to be replicated to preserve consciousness...

And how do they do that, in terms of the second problem? The final stage would need to be confirmation of subjective awareness. We don't have instruments for that, and it's no good just asking the sim, since a functional duplicate is likely to answer yes, even if it's a zombie.

And that's why it can be argued that consciousness is a uniquely difficult problem, beyond the "non-existent proof".

because "find the correct solution" and "convince people of a solution" are mostly independent problems,

That's not just a theoretical possibility. People, e.g. Dennett, keep claiming to have explained consciousness, and other people keep being unconvinced, because they notice the hard part has been skipped.

"That's just saying he hasn't explained some invisible essence of consciousness , equivalent to élan vital".

"Qualia aren't invisible, they are the most obvious thing there is to the person that has them".

Comment by TAG on Arthropod (non) sentience · 2024-11-25T18:30:44.720Z · LW · GW

Physicalist epiphenomenalism is the only philosophy that is compatible with the autonomy of matter and my experience of consciousness, so it has not competitors as a cosmovision

No, identity theory and illusionism are competitors. And epiphenomenalism is dualism, not physicalism. As I have pointed out before.

Comment by TAG on Ethical Implications of the Quantum Multiverse · 2024-11-21T23:07:47.558Z · LW · GW

And one of Wallace’s axioms, which he calls ‘branching indifference’, essentially says that it doesn’t matter how many branches there are, since macroscopic differences are all that we care about for decisions..

The macroscopically different branches and their weights?

Focussing on the weight isn't obviously correct, ethically. You can't assume that the answer to "what do I expect to see" will work the same as the answer to "what should I do". Is-ought gap and all that.

It's tempting to think that you can apply a standard decision theory in terms of expected value to Many Worlds, since it is a matter of multiplying subjective value by probability. But it seems reasonable to assess the moral weight of someone else's experiences and existence from their point of view. (Edit: also, our experiences seem fully real to us, although we are unlikely to be in a high measure world.) That is the intuition behind the common rationalist/utilitarian/EA view that human lives don't decline in moral worth with distance. So why should they decline with lower quantum mechanical measure?
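To put the tension in symbols (my notation, not from the original comment): writing $w_i$ for the quantum mechanical measure of branch $i$ and $U(o_i)$ for the value of its outcome, standard decision theory prices an action at

$$E[U] \;=\; \sum_i w_i \, U(o_i),$$

whereas the universalist intuition says the inhabitant of branch $i$ should count with weight 1, not $w_i$, since their experience is fully real from their own point of view. The two weightings agree only if low measure really does discount moral worth.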

There is a quandary here: sticking to the usual "adds up to normality" principle, as an apriori axiom, means discounting the ethical importance of low-measure worlds in order to keep your favourite decision theory operating in the usual single-universe way... even if you are in a multiverse. But sticking to the equally usual universalist axiom, that you don't get to discount someone's moral worth on the basis of factors that aren't intrinsic to them, means you should not discount... and that the usual decision theory does not apply.

Basically, there is a tension between four things Rationalists are inclined to believe in:-

  • Some kind of MWI is true.

  • Some kind of utilitarian and universalist ethics is true.

  • Subjective things like suffering are ethically relevant. It's not all about the number of kittens.

  • It's all business as normal... it all adds up to normality... fundamental ontological differences should not affect your decision theory.

Comment by TAG on Ethical Implications of the Quantum Multiverse · 2024-11-19T14:03:01.825Z · LW · GW

According the many-worlds interpretation (MWI) of quantum mechanics, the universe is constantly splitting into a staggeringly large number of decoherent branches containing galaxies, civilizations, and people exactly like you and me

There is more than one many worlds interpretation. The version stated above is not known to be true.

There is an approach to MWI based on coherent superpositions, and a version based on decoherence. These are (for all practical purposes) incompatible. Coherent splitting gives you the very large numbers of "worlds"... except that they are not worlds, conceptually.

Many worlders are pointing at something in the physics and saying "that's a world"... but whether it qualifies as a world is a separate question, and a separate kind of question, from whether it is really there in the physics. One would expect a world, or universe, to be large, stable, non-interacting, objective and so on. A successful MWI needs to jump three hurdles: mathematical correctness, conceptual correctness, and empirical correctness.

Decoherent branches are expected to be large, stable, non-interacting, objective and irreversible... everything that would be intuitively expected of a "world". But there is no empirical evidence for them, nor are they obviously supported by the core mathematics of quantum mechanics, the Schrödinger equation. Coherent superpositions are small scale, down to single particles, observer-dependent, reversible, and continue to interact (strictly speaking, interfere) after "splitting".

(Note that Wallace has given up on the objectivity of decoherent branches. That's another indication that MWI is not a single theory).

There isn't the slightest evidence that irrevocable splitting, splitting into decoherent branches, occurs at every microscopic event -- that would be combining the frequency of coherent-style splitting with the finality of decoherent splitting. We don't know much about decoherence, but we know it is a multi-particle process that takes time, so decoherent splitting, if there is such a thing, must be rarer than the frequency of single-particle interactions. (And so decoherence isn't simple.) As well as the conceptual incoherence, there is in fact plenty of evidence -- e.g. the existence of quantum computing -- that it doesn't work that way.

Also see

https://www.lesswrong.com/posts/wvGqjZEZoYnsS5xfn/any-evidence-or-reason-to-expect-a-multiverse-everett?commentId=o6RzrFRCiE5kr3xD4

I’m not going to argue for this view as that was done very well by Eliezer in his Quantum Physics.

Which view? Everett's view? DeWitt's view? Deutsch's view? Zeh's view? Wallace's view? Saunders's view?

@dr_s

I feel like branches being in fact an uncountable continuum is essentially a given

Decoherent branches being countable, uncountable, or anything else is not a given, since there is no established theory of decoherence.

It's a given that some observables have continuous spectra... but what's that got to do with splitting? An observed state that isn't sharp (in some basis) can get entangled with an apparatus, which then goes into a non-sharp state, and so on. And the whole shebang never splits, or becomes classically sharp.

I mean that the amount of universes that is created will be created anyway, just as a consequence of time passing. So it doesn’t matter anyway. If your actions e.g. cause misery in 20% of those worlds, then the fraction is all that matters; the worlds will exist anyway, and the total amount is not something you’re affecting or controlling.

That's a special case of "no moral responsibility under determinism", which might be true, but it's very different from "utilitarianism works fine under MWI".

**Enough of the physics confusions -- onto the ethics confusions!**

As well as confusion over the correct version of many worlds, there is of course confusion about which theory of ethics is correct.

There are broadly three areas where MWI has ethical implications. One is concerned with determinism, freedom of choice, and moral responsibility. Another is the fact that MW means low probability events have to happen every time -- as opposed to single universe physics, where they usually don't. The third is whether such events are discounted in moral significance for being low in quantum mechanical measure or probability.

MWI and Free Will

MWI allows probabilities of world states to change over time, but doesn't allow them to be changed, in a sense amounting to libertarian free will. Agents are just part of the universal wave function, not anything outside the system, or operating by different rules. MWI is, as its proponents claim, a deterministic theory, and it only differs from single world determinism in that possible actions can't be refrained from, and possible futures can't be avoided. Alternative possibilities are realities, in other words.

MWI, Moral Responsibility, and Refraining.

A standard argument holds that causal determinism excludes libertarian free will by removing alternative possibilities. Without alternative possibilities, you could not have done other than you did, and, the argument goes, you cannot be held responsible for what you had no choice but to do.

Many worlds strongly implies that you make all possible decisions: according to David Deutsch's argument, that means it allows alternative possibilities, and so removes the objection from moral responsibility, despite being a basically deterministic theory.

However, deontology assumes that performing a required act involves refraining from alternatives, and that it is possible to refrain from forbidden acts. Neither is possible under many worlds. Many worlds creates the possibility, indeed the necessity, of doing otherwise, but removes the possibility of refraining from an act. So even though many worlds allows alternative possibilities, unfortunately for Deutsch's argument, its other aspects create a similar objection on the basis of moral responsibility: why would you hold someone morally responsible for an act if they could not refrain from it?

MWI, Probability, and Utilitarian Ethics

It's tempting to think that you can apply a standard decision theory in terms of expected value to Many Worlds, since it is a matter of multiplying subjective value by probability. One wrinkle is that QM measure isn't probability -- the probability of something occurring or not -- because all possible branches occur in MWI. Another is that it is reasonable to assess the moral weight of someone else's experiences and existence from their point of view. That is the intuition behind the common rationalist/utilitarian/EA view that human lives don't decline in moral worth with distance. So why should they decline with lower quantum mechanical measure? There is a quandary here: sticking to the usual "adds up to normality" principle, as an apriori axiom, means discounting the ethical importance of low-measure worlds in order to keep your favourite decision theory operating in the usual single-universe way, even if you are in a multiverse. But sticking to the equally usual universalist axiom, that you don't get to discount someone's moral worth on the basis of factors that aren't intrinsic to them, means you should not discount... and that the usual decision theory does not apply.

Measure is not probability.

Mathematically, quantum mechanical measure -- amplitude -- isn't ordinary probability, which is why you need the Born rule. The point of the Born rule is to get a set of ordinary probabilities, which you can then test frequentistically, over a run of experiments. Ontologically, it is also not probability, because it does not represent the likelihood of one outcome happening instead of another. And it has its own role, unlike that of ordinary probability, which is explaining how much contribution to a coherent superposition each component state makes (although what that means in the case of irrevocably decohered branches is unclear).
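For concreteness (standard textbook notation, not specific to any version of MWI): if the state is

$$|\psi\rangle = \sum_i c_i\,|i\rangle, \qquad \sum_i |c_i|^2 = 1,$$

the Born rule assigns outcome $i$ the ordinary probability $P(i) = |c_i|^2$. The amplitudes $c_i$ are complex and can cancel or reinforce each other, which ordinary probabilities cannot do -- that is the "own role" in question.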

Whether you are supposed to care about decohered branches ethically is very unclear, since it is not clear how utilitarian-style ethics would apply, even if you could make sense of the probabilities. But you are not supposed to care about them for the purposes of doing science, since they can no longer make any difference to your branch. MWI works like a collapse theory in practice.

The Ethical Weight of Low Measure Worlds

MWI creates the puzzle that low probability outcomes still happen, and have to be taken into account ethically. Many rationalists assume that they simply matter less, because that is the only way to restore anything like a normal view of ethical action -- but one should not assume something merely because it is convenient.

It can be argued that most decision theoretic calculations come out the same under different interpretations of QM... but altruistic ethics is different. In standard decision theory, you can tell directly how much utility you are getting; but in altruistic ethics, you are not measuring your own suffering/happiness, you are assessing someone else's... and in the many worlds setting, that means solving the problem of how they are affected by their measure. It is not clear how low measure worlds should be considered in utilitarian ethics. It's tempting to ethically discount low measure worlds in some way, because that most closely approximates conventional single world utilitarianism. The alternative might force one to the conclusion that overall good outcomes are impossible to attain, so long as one cannot reduce the measure of worlds full of suffering to zero. However, one should not jump to the conclusion that something is true just because it is convenient. And of course, MWI is a scientific theory, so it doesn't come with built-in ethics.

One part of the problem is that QM measure isn't probability, because all possible branches occur in MWI. Another stems from the fact that what other people experience is relevant to them, whereas for a probability calculation, I only need to be able to statistically predict my own observations. Using QM to predict my own observations, I can ignore the question of whether something has a ten percent chance of happening in the one and only world, or a certainty of happening in one tenth of possible worlds. However, these are not necessarily equivalent ethically.

Suppose low measure worlds are discounted ethically. If people in low measure worlds experience their suffering fully, then a 1% chance of creating a hell-world would be equivalent in suffering to a 100% chance, and the discount is unjustified. But if people in low measure worlds are like philosophical zombies, with little or no phenomenal consciousness, so that their sensations are faint or nonexistent, the moral hazard is much lower, and the discount is justified. A point against discounting is that our experiences seem fully real to us, although we are unlikely to be in a high measure world.

A similar, but slightly less obvious argument applies to causing death. Causing the "death" of a complete zombie is presumably as morally culpable as causing the death of a character in a video game... which, by common consent, is no problem at all. So... causing the death of a 50% zombie would be only half as bad as killing a real person... maybe.

Classical Measure isn't Quantum Mechanical Measure

A large classical universe is analogous to Many Worlds in that the same structures -- the same people and planets -- repeat over long distances. It's even possible to define a measure, by counting repetitions up to a certain level of similarity. And one has the option of thinking about quantum mechanical measure that way, as a "head count"... but one is not forced to do so. On one hand, it restores normality; on the other hand, it is not "following the maths", because there's nothing in the formalism to suggest summing a number of identical low measure states is the only way to get a high measure one. So, again, it's an extraneous assumption, and circular reasoning.

Ethical Calculus is not Decision Theory

Of course, MWI doesn't directly answer the question about consciousness and zombiehood. You can have objective information about observations, and if your probability calculus is wrong, you will get wrong results and know that you are getting wrong results. That is the negative feedback that allows physics to be less wrong. And you can have subjective information about your own mental states, and if your personal calculus is wrong, you will get wrong results and know that you are getting wrong results. That is the negative feedback that allows personal decision theory to be less wrong.

Altruistic ethics is different. You don't have either kind of direct evidence, because you are concerned with other people's subjective sensations, not objective evidence, or your own subjectivity. Questions about ethics are downstream of questions about qualia, and qualia are subjective, and because they are subjective, there is no reason to expect them to behave like third-person observations.

"But it all adds up to normality!"

If "it all" means every conjecture you can come up with, no It doesn't. Most conjectures are wrong. The point of empirical testing is to pick out the right ones -- the ones that make correct predictions, save appearances, add up to normality That's a difficult process, not something you get for free.

So "it all adds up to normality" is not some universal truth And ethical theories relating to someone else's feelings are difficult to test, especially if someone else is in the far future, or an unobservable branch of the multiverse. Testability isn't an automatic given either.

There are no major ethical implications at all...Wallace makes a similar claim in his book: “But do [the many worlds in MWI] matter to ordinary, banal thought, action and language? Friendship is still friendship. Boredom is still boredom. Sex is still sex

That's very narrow circle ethics, if it's ethics at all -- he just likes a bunch of things that impact him directly. And it's rather obvious that small circle ethical theories have the least interaction with large universe physical theories. So it is likely he hasn't even considered the question of altruistic ethics in many worlds, and is therefore coming to the conclusion that it all adds up to normality rather cheaply. It's his ethical outlook that is the structural element, not his take on MWI.

Comment by TAG on If I care about measure, choices have additional burden (+AI generated LW-comments) · 2024-11-15T13:58:40.048Z · LW · GW

Every quantum event splits the multiverse, so my measure should decline by 20 orders of magnitude every second.

There isn't the slightest evidence that irrevocable splitting, splitting into decoherent branches, occurs at every microscopic event -- that would be combining the frequency of coherent-style splitting with the finality of decoherent splitting. As well as the conceptual incoherence, there is in fact plenty of evidence -- e.g. the existence of quantum computing -- that it doesn't work that way.

"David Deutsch, one of the founders of quantum computing in the 1980s, certainly thinks that it would. Though to be fair, Deutsch thinks the impact would “merely” be psychological – since for him, quantum mechanics has already proved the existence of parallel uni- verses! Deutsch is fond of asking questions like the following: if Shor’s algorithm succeeds in factoring a 3000-digit integer, then where was the number factored? Where did the computational resources needed to factor the number come from, if not from some sort of “multiverse” exponentially bigger than the universe we see? To my mind, Deutsch seems to be tacitly assuming here that factoring is not in BPP – but no matter; for purposes of argument, we can certainly grant him that assumption. It should surprise no one that Deutsch’s views about this are far from universally accepted. Many who agree about the possibil- ity of building quantum computers, and the formalism needed to describe them, nevertheless disagree that the formalism is best inter- preted in terms of “parallel universes.” To Deutsch, these people are simply intellectual wusses – like the churchmen who agreed that the Copernican system was practically useful, so long as one remembers that obviously the Earth doesn’t really go around the sun. So, how do the intellectual wusses respond to the charges? For one thing, they point out that viewing a quantum computer in terms of “parallel universes” raises serious difficulties of its own. In particular, there’s what those condemned to worry about such things call the “preferred basis problem.” The problem is basically this: how do we define a “split” between one parallel universe and another? There are infinitely many ways you could imagine slic- ing up a quantum state, and it’s not clear why one is better than another! One can push the argument further. The key thing that quan- tum computers rely on for speedups – indeed, the thing that makes quantum mechanics different from classical probability theory in the first place – is interference between positive and negative amplitudes. But to whatever extent different “branches” of the multiverse can usefully interfere for quantum computing, to that extent they don’t seem like separate branches at all! I mean, the whole point of inter- ference is to mix branches together so that they lose their individual identities. If they retain their identities, then for exactly that reason we don’t see interference. Of course, a many-worlder could respond that, in order to lose their separate identities by interfering with each other, the branches had to be there in the first place! And the argument could go on (indeed, has gone on) for quite a while. Rather than take sides in this fraught, fascinating, but perhaps ultimately meaningless debate..."..Scott Aaronson , QCSD, p148

Also see

https://www.lesswrong.com/posts/wvGqjZEZoYnsS5xfn/any-evidence-or-reason-to-expect-a-multiverse-everett?commentId=o6RzrFRCiE5kr3xD4

Comment by TAG on Theories With Mentalistic Atoms Are As Validly Called Theories As Theories With Only Non-Mentalistic Atoms · 2024-11-13T15:24:10.785Z · LW · GW

It seems common for people trying to talk about AI extinction to get hung up on whether statements derived from abstract theories containing mentalistic atoms can have objective truth or falsity values. They can. And if we can first agree on such basic elements of our ontology/epistemology as that one agent can be objectively smarter than another, that we can know whether something that lives in a physical substrate that is unlike ours is conscious, and that there can be some degree of objective truth as to what is valuable [not that all beings that are merely intelligent will necessarily pursue these things], it in fact becomes much more natural to make clear statements and judgments in the abstract or general case, about what very smart non-aligned agents will in fact do to the physical world.

Why does any of that matter for AI safety? AI safety is a matter of public policy. In public policy making, you have a set of preferences, which you get from votes or surveys, and you formulate policy based on your best objective understanding of cause and effect. The preferences don't have to be objective, because they are taken as given. It's quite different to philosophy, because you are trying to achieve or avoid something, not figure out what something ultimately is. You don't have to answer Wolfram's questions in their own terms, because you can challenge the framing.

And if we can first agree on such basic elements of our ontology/epistemology as that one agent can be objectively smarter than another,

It's not all that relevant to AI safety, because an AI only needs some potentially dangerous capabilities. Admittedly, a lot of the literature gives the opposite impression.

that we can know whether something that lives in a physical substrate that is unlike ours is conscious,

You haven't defined consciousness, and you haven't explained how we could know whether an entity on a different substrate has it. It doesn't follow automatically from considerations about intelligence. And it doesn't follow from having some mentalistic terms in our theories.

and that there can be some degree of objective truth as to what is valuable

There doesn't need to be. You don't have to solve ethics to set policy.

Comment by TAG on Set Theory Multiverse vs Mathematical Truth - Philosophical Discussion · 2024-11-03T20:19:31.834Z · LW · GW

Arguably, “basic logical principles” are those that are true in natural language.

That's where the problem starts, not where it stops. Natural language supports a bunch of assumptions that are hard to formally reconcile: if you want your strict PNC, you have to give up on something else. The whole 2500-year history of logic has been a history of trying to come up with formal systems that fulfil various desiderata. It is now formally proven that you can't have all of them at once, and it's not obvious what to keep and what to ditch. (Gödelian problems can be avoided with lower-power systems, but that's another tradeoff, since high power is desirable.)

Formalists are happy to pick a system that's appropriate for a practical domain, and to explore the theoretical properties of different systems in parallel.

Platonists believe that only one axiom system has truth in addition to usefulness, but can't agree which one it is, so it makes no difference in practice.

I'm not seeing a specific problem with sets -- you can avoid some of the problems of naive set theory by adding limitations, but that's tradeoffs again.

Otherwise nothing stops us from considering absurd logical systems where “true and true” is false, or the like.

"You can't have all the intuitive principles in full strength in one system"

doesn't imply

"adopt unintuitive axioms".

Even formalists don't believe all axiomisations are equally useful.

Likewise, “one plus one is two” seems to be a “basic mathematical principle” in natural language.

What's 12+1?

Any axiomatization which produces “one plus one is three” can be dismissed on grounds of contradicting the meanings of terms like “one” or “plus” in natural language.

They're ambiguous in natural language, hence the need for formalisation.

The trouble with set theory is that, unlike logic or arithmetic, it often doesn’t involve strong intuitions from natural language.

It involves some intuitions. It works like clubs. Being a senator is being a member of a set, not exemplifying a universal.

Sets are a fairly artificial concept compared to natural language collections (empty sets, for example, can produce arbitrary nestings), especially when it comes to infinite sets.

If you want finitism, you need a principled way to select a largest finite number.

Comment by TAG on Set Theory Multiverse vs Mathematical Truth - Philosophical Discussion · 2024-11-01T22:08:09.886Z · LW · GW

However, I find myself appealing to basic logical principles like the law of non-contradiction.

The law of non-contradiction isn't true in all "universes", either. It's not true in paraconsistent logic, specifically.
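A minimal sketch of one such logic, Priest's three-valued Logic of Paradox (LP), as standardly presented (the numeric encoding is mine):

```python
# Priest's LP: truth values T (true), B (both true and false), F (false).
# A sentence "holds" if its value is designated, i.e. T or B.
T, B, F = 1.0, 0.5, 0.0        # encodes the ordering F < B < T

def neg(a):                    # negation swaps T and F, fixes B
    return 1.0 - a

def conj(a, b):                # conjunction takes the minimum value
    return min(a, b)

def designated(a):
    return a >= B

A = B                          # a paradoxical sentence, e.g. the Liar
print(designated(conj(A, neg(A))))  # True: this contradiction holds

C = F
print(designated(C))           # False: yet an arbitrary C does not follow,
                               # so explosion fails -- that is paraconsistency
```

A contradiction can take the designated value B, so non-contradiction fails in the sense that some contradictions hold; but because explosion also fails, this does not trivialise the system.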

Comment by TAG on Most arguments for AI Doom are either bad or weak · 2024-10-28T23:48:46.613Z · LW · GW

Yes, and Logan is claiming that arguments which cannot be communicated to him in no more than two sentences suffer from a conjunctive complexity burden that renders them “weak”.

@Logan Zoellner being wrong doesn't make anyone else right. If the actual argument is conjunctive and complex, then all the component claims need to be high probability. That is not the case. So Logan is right, but not quite for the right reasons -- it's not length alone.

That’s not trivial. There’s no proof that there is such a coherent entity as “human values”, there is no proof that AIs will be value-driven agents, etc, etc. You skipped over 99% of the Platonic argument there.

Many possible objections here, but of course spelling everything out would violate Logan’s request for a short argument.

And it wouldn't help anyway. I have read the Sequences, and there is nothing resembling a proof, or even a strong argument, for the claim about coherent human values. Ditto the standard claims about utility functions, agency, etc. Reading the Sequences would allow him to understand the LessWrong collective, but should not persuade him.

Whereas the same amount of time could, more reasonably, be spent learning how AI actually works.

Needless to say, that request does not have anything to do with effectively tracking reality,

Tracking reality is a thing you have to put effort into, not something you get for free, by labelling yourself a rationalist.

The original Sequences did not track reality, because they are not evidence-based -- they are not derived from academic study or industry experience. Yudkowsky is proud that they are "derived from the empty string" -- his way of saying that they are armchair guesswork.

His armchair guesses are based on Bayes, von Neumann rationality, utility maximisation, brute force search, etc., which isn't the only way to think about AI, or particularly relevant to real world AI. But it does explain many doom arguments, since they are based on the same model -- the kinds of argument that immediately start talking about values and agency. And of course that's a problem in itself. The short doomer arguments use concepts from the Bayes/von Neumann era in a "sleepwalking" way, out of sheer habit, given that the basis is doubtful. Current examples of AIs aren't agents, and it's doubtful whether they have values. It's not irrational to base your thinking on real world examples, rather than speculation.

In addition, they haven't been updated in the light of new developments, something else you have to do to track reality. Tracking reality has a cost -- you have to change your mind and admit you are wrong. If you don't experience the discomfort of doing that, you are not tracking reality.

People other than Yudkowsky have written about AI safety from the perspective of how real world AIs work, but adding that in just makes the overall mass of information larger and more confusing.

where there is no “platonic” argument for any non-trivial claim describable in only two sentence, and yet things continue to be true

You are confusing truth and justification.

@Tarnish

You need to say something about motivation.

@avturchin

There are dozens of independent ways in which AI can cause a mass extinction event at different stages of its existence.

While each may have around a 10 percent chance a priori, cumulatively there is more than a 99 percent chance that at least one bad thing will happen.

Same problem. Yes, there's lots of means. That's not the weak spot. The weak spot is motivation.

@Odd anon

Same problem. You've done nothing to fill the gap between "ASI will happen" and "ASI will kill us all".

Comment by TAG on Logical Proof for the Emergence and Substrate Independence of Sentience · 2024-10-26T15:43:27.281Z · LW · GW

As other people have said, this is a known argument; specifically, it's in The Generalized Anti-Zombie Principle in the Physicalism 201 series, from the very early days of LessWrong.

Albert: “Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules.”

I think this proof relies on three assumptions. The first (which you address in the post) is that consciousness must happen within physics. (The opposing view would be substance dualism, where consciousness causally acts on physics from the outside.) The second (which you also address in the post) is that consciousness and reports about consciousness aren't aligned by chance. (The opposing view would be epiphenomenalism, which is also what Eliezer trashes extensively in this sequence.)

A physical duplicate might do the same, although that would imply the original's consciousness is epiphenomenal. Which is itself a reason to disbelieve in p-zombies, although not an impossibility proof.

This of course contradicts the Generalised Anti-Zombie Principle announced by Eliezer Yudkowsky. The original idea was that in a zombie world, it would be incredibly unlikely for an entity's claims of consciousness to be caused by something other than consciousness.

Excluding coincidence doesn't prove that an entity's reports of consciousness are directly caused by its own consciousness. Robo-Chalmers will claim to be conscious because Chalmers does. It might actually be conscious, as an additional reason, or it might not. The fact that the claim is made does not distinguish the two cases. Yudkowsky makes much of the fact that Robo-Chalmers's claim would be caused indirectly by consciousness -- Chalmers has to be conscious in order to make a computational duplicate of his consciousness -- but at best that refutes the possibility of a zombie world, where entities claim to be conscious although consciousness has never existed. Robo-Chalmers would still be possible in this world, for reasons Yudkowsky accepts. So there is one possible kind of zombie, even given physicalism, so the Generalised Anti-Zombie Principle is false.

(Note that I am talking about computational zombies, or c-zombies, not p-zombies.

Computationalism isn't a direct consequence of physicalism. Physicalism has it that an exact atom-by-atom duplicate of a person will be a person and not a zombie, because there is no nonphysical element to go missing. That's the argument against p-zombies. But if it actually takes an atom-by-atom duplication to achieve human functioning, then the computational theory of mind will be false, because the CTM implies that the same algorithm running on different hardware will be sufficient. Physicalism doesn't imply computationalism, and arguments against p-zombies don't imply the non-existence of c-zombies -- duplicates that are identical computationally, but not physically.)

@Richard_Kennaway

That sounds like a Chalmers paper. https://consc.net/papers/qualia.html

Comment by TAG on Most arguments for AI Doom are either bad or weak · 2024-10-16T21:16:01.597Z · LW · GW

Argument length is substantially a function of shared premises

A stated argument could have a short length if it's communicated between two individuals who have common knowledge of each other's premises... as opposed to the "Platonic" form, where every load-bearing component is made explicit, and there is nothing extraneous.

But that's a communication issue... not a truth issue. A conjunctive argument doesn't become likelier because you don't state some of the premises. The length of the stated argument has little to do with its likelihood.

How true an argument is, how easily it persuades another person, and how easy it is to understand have little to do with each other.

The likelihood of an ideal argument depends on the likelihood of its load-bearing premises... both how many there are, and their individual likelihoods.
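Numerically (an illustrative calculation, not anyone's actual estimate): if the Platonic form has $n$ independent load-bearing premises, each with probability $p$, then

$$P(\text{conclusion}) \;\le\; p^n,$$

so ten premises at $p = 0.9$ already cap the argument at $0.9^{10} \approx 0.35$, however briefly it happens to be stated.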

Public communication, where you have no foreknowledge of shared premises, needs to keep the actual form closer to the Platonic form.

Public communication is obviously the most important kind when it comes to avoiding AI doom.

This is important, because the longer your argument, the more details that have to be true, and the more likely that you have made a mistake

Correct. The fact that you don't have to explicitly communicate every step of an argument to a known recipient doesn't stop the overall probability of a conjunctive argument from depending on the number, and individual likelihood, of the steps of the Platonic version, where everything necessary is stated and nothing unnecessary is stated.

Argument strength is not an inverse function with respect to argument length, because not every additional “piece” of an argument is a logical conjunction which, if false, renders the entire argument false.

Correct. Stated arguments can contain elements that are explanatory, or otherwise redundant for an ideal recipient.

Nonetheless, there is a Platonic form that does not contain redundant elements or unstated load-bearing steps.

Anyways, the trivial argument that AI doom is likely [...]s that it’s not going to have values that are friendly to humans

That's not trivial. There's no proof that there is such a coherent entity as "human values", there is no proof that AIs will be value-driven agents, etc, etc. You skipped over 99% of the Platonic argument there.

This is a classic example of failing to communicate with people outside the bubble. Your assumptions about values and agency just aren't shared by the general public or political leaders.

PS.

@Logan Zoellner

A fact cannot be self evidently true if many people disagree with it.

That's self-evidently true. So why does it have five disagreement downvotes?

Comment by TAG on Alexander Gietelink Oldenziel's Shortform · 2024-10-01T18:56:44.438Z · LW · GW

I mean that if turing machine is computing universe according to the laws of quantum mechanics,

I assume you mean the laws of QM except the collapse postulate.

observers in such universe would be distributed uniformly,

Not at all. The problem is that their observations would mostly not be in a classical basis.

not by Born probability.

Born probability relates to observations, not observers.
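The standard toy example of the mismatch (my numbers): measure the state

$$|\psi\rangle = \sqrt{0.9}\,|0\rangle + \sqrt{0.1}\,|1\rangle$$

$n$ times. There are $\binom{n}{k}$ outcome-sequences containing $k$ ones, so the overwhelming majority of branches, counted uniformly, record a frequency near $k/n \approx 1/2$, while the Born weight concentrates on $k/n \approx 0.1$. Uniformly distributed observers would therefore mostly see statistics that contradict the Born rule.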

So you either need some modification to current physics, such as mangled worlds,

Or collapse. Mangled worlds is kind of a nothing-burger -- it's a variation on the idea that interference between superposed states leads to both a classical basis and the Born probabilities, which is an old idea, but without making it any more quantitative.

or you can postulate that Born probabilities are truly random.

??

Comment by TAG on The Other Existential Crisis · 2024-09-26T18:14:04.562Z · LW · GW

One might be determined to throw in the towel on cognitive effort if they were to take a particular interpretation of determinism, and they, and the rest of us, would be worse off for it.

Determinists are always telling each other to act like libertarians. That's a clue that libertarianism is worth wanting. @James Stephen Brown

Compatibilist free will has all the properties worth wanting: your values and beliefs determine the future, to the extent you exert the effort to make good decisions.

No it doesn't, because it doesn't have the property of being able to shape the future, or steer towards a future that wasn't inevitable. Which is pretty important if you are trying to avoid the AI-kills-everyone future.

Libertarian free will is able to do that.

Naturalistic libertarianism appeals to some form of indeterminism, or randomness, inherent in physics -- rather than a soul or ghost-in-the-machine, unique to humans, that overrides the physical behaviour of the brain. The problem is to explain how indeterminism does not undermine the other features of a kind of free will "worth wanting" -- purposiveness, rationality and so on.

Randomness is not what we want

Explaining NLFW in terms of "randomness" is difficult, because the word has connotations of purposelessness, meaninglessness, and so on. But these are only connotations, not strict implications. "Not determinism" doesn't imply lack of reason, purpose, or control. It doesn't have to separate you from your beliefs and values. Therefore, I prefer the term "indeterminism" over the term "randomness".

So, how to explain that indeterminism does not undermine the other features of a kind of free will "worth wanting"?

Part of the answer is to note that mixtures of indeterminism and determinism are possible, so that libertarian free will is not just pure randomness, where any action is equally likely.

Another part is proposing a mechanism, with indeterminism occurring at different places and times, rather than being slathered evenly over neural activity.

Another part is noting that control doesn't have to mean predetermination.

Another part is noticing that a choice between things you wish to do cannot leave you doing something you do not wish to do, something unconnected to your desires and beliefs.

The basic mechanism is that the unconscious mind proposes various ideas and actions, which the conscious mind decides between. This is similar to the mechanism proposed by the determinist Sam Harris. He makes much of the fact that the conscious mind, the executive function, does not predetermine the suggestions; I argue that the choice between them, the decision to act on one rather than another, *is* conscious control -- and conscious control clearly exists in healthy adults.

Comment by TAG on Book Review: On the Edge: The Fundamentals · 2024-09-23T21:52:57.890Z · LW · GW

I noticed the same thing -- even Scott Alexander dropped a reference to it without explaining it. Anyway, here's what I came up with:-

https://www.reddit.com/r/askphilosophy/s/lVNnjhTurI

(That's me done for another two days)

Comment by TAG on The Other Existential Crisis · 2024-09-21T19:08:26.402Z · LW · GW

You are a subject, and you determine your own future

Not so much, given determinism.

Determinism allows you to cause the future in a limited sense. Under determinism, events still need to be caused, and your (determined) actions can be part of the cause of a future state that is itself determined, that has probability 1.0. Determinism allows you to cause the future, but it doesn't allow you to control the future in any sense other than causing it (and the sense in which you are causing the future is just the sense in which any future state depends on causes in the past -- it is nothing special and nothing different from physical causation). It allows, in a purely theoretical sense, "if I had made choice b instead of choice a, then future B would have happened instead of future A"... but without the ability to have actually chosen b. You are a link in a deterministic chain that leads to a future state, so without you, the state will not happen... not that you have any choice in the matter. You can't stop or change the future because you can't fail to make your choices, or make them differently. You can't contribute anything of your own, since everything about you and your choices was determined at the time of the Big Bang. Under determinism, you are nothing special... only the BB is special.

This is still true under many worlds. Even though MWI implies that there is not a single inevitable future, it doesn't allow you to influence the future in a way that makes future A more likely than future B, as a result of some choice you make now. Under MW determinism, the probabilities of A and B are what they are, and always were -- before you make a decision, after you make a decision, and before you were born. You can't choose between them, even in the sense of adjusting the probabilities.

Libertarian free will, by contrast, does allow the future to depend on decisions which are not themselves determined. That means there are valid statements of the form "if I had made choice b instead of choice a, then future B would have happened instead of future A". And you actually could have made choice a or choice b... these are real possibilities, not merely conceptual or logical ones.

Comment by TAG on [deleted post] 2024-09-19T09:49:00.954Z

Your model of muon decay doesn't conserve charge -- you start with -1e, then have -2e, and finally have zero. Also, the second electron is never observed.

Comment by TAG on james oofou's Shortform · 2024-09-14T10:42:52.898Z · LW · GW

What I have noticed is that while there are cogent overviews of AI safety that don't come to the extreme conclusion that we are all going to be killed by AI with high probability... and there are articles that do come to that conclusion without being at all rigorous or cogent... there aren't any that do both. From that I conclude there aren't any good reasons to believe in extreme AI doom scenarios, and you should disbelieve them. Others use more complicated reasoning, like "Yudkowsky is too intelligent to communicate his ideas to lesser mortals, but we should believe him anyway".

(See @DPiepgrass saying something similar and of course getting downvoted).

@MitchellPorter supplies us with some examples of gappy arguments.

human survival and flourishing require specific complex values that we don't know how to specify

There's no evidence that "human values" are even a coherent entity, and no reason to believe that any AI of any architecture would need them.

But further pitfalls reveal themselves later, e.g. you may think you have specified human-friendly values correctly, but the AI may then interpret the specification in an unexpected way.

What is clearer than doom, is that creation of superintelligent AI is an enormous gamble, because it means irreversibly handing control of the world

Hang on a minute. Where does control of the world come from? Do we give it to the AI? Does it take it?

to something non-human. Eliezer's position is that you shouldn't do that unless you absolutely know what you're doing. The position of the would-be architects of superintelligent AI is that hopefully they can figure out everything needed for a happy ending, in the course of their adventure.

One further point I would emphasize, in the light of the last few years of experience with generative AI, is the unpredictability of the output of these powerful systems. You can type in a prompt, and get back a text, an image, or a video, which is like nothing you anticipated, and sometimes it is very definitely not what you want. "Generative superintelligence" has the potential to produce a surprising and possibly "wrong" output that will transform the world and be impossible to undo.

Current generative AI has no ability to directly affect anything. Where would that come from?

Comment by TAG on Why Large Bureaucratic Organizations? · 2024-08-29T11:14:17.341Z · LW · GW

Large: economies of scale; need to coordinate many specialised skills. (Factories were developed before automation.)

Hierarchical: Needed because Large. It's how you co-ordinate a much greater than Dunbar number of people. (Complex software is also hierarchical.)

Bureaucratic: Hierarchical subdivision by itself is necessary but insufficient... it makes organisations manageable but not managed. Reports create legibility, and rules ensure that units are contributing to the whole, not pursuing their own ends.


I don't see what Wentworld is:

Are you giving up on scale per se?

Are you accepting scale but giving up on hierarchy? If so, how do a thousand people in a flat structure co-ordinate?

Are you accepting scale and hierarchy, but giving up on bureaucracy?

Are you accepting scale, hierarchy, and bureaucracy, but...the right kind that doesn't come from the Will to Power?

It's easy to imagine a Dunbar number of grad student types all getting along very well with each other... but it isn't a world, it's a senior common room, or a boutique R&D department.


The trick of hierarchy is to divide a large amount of information about the whole organisation into a manageable amount of coarse-grained information about the whole organisation (for senior managers)... and manageable amounts of fine-grained information about sub-units (for middle managers).

From a superintelligent POV there is probably a ton of identifiable waste, but from a merely intelligent POV, you still have the problem of trading off globality against granularity. It's much easier to prove waste exists than to come up with a practical solution for eliminating it.


Which, of course, is not to say that waste doesn't exist, or that there is no negative-sum status-seeking.

Comment by TAG on shminux's Shortform · 2024-08-19T10:55:29.494Z · LW · GW

I really don’t understand what “best explanation”, “true”, or “exist” mean, as stand-alone words divorced from predictions about observations we might ultimately make about them.

Nobody is saying that anything has to be divorced from prediction, in the sense that empirical evidence is ignored: the realist claim is that empirical evidence should be supplemented by other epistemic considerations.

Best explanation:- I already pointed out that EY is not an instrumentalist. For instance, he supports the MWI over the CI, although they make identical predictions. Why does he do that? For reasons of explanatory simplicity, consilience with the rest of physics, etc., as he says. That gives you a clue as to what "best explanation" is. (Your bafflement is baffling... it sometimes sounds like you have read the Sequences, it sometimes sounds like you haven't. Of course abduction, parsimony, etc. are widely discussed in the mainstream literature as well.)

True:- mental concept corresponds to reality.

Exists:- You can take yourself as existing, and you can regard other putative entities as existing if they have some ability to causally interact with you. That's another baffling one, because you actually use something like that definition in your argument against mathematical realism below.

This isn’t just a semantic point, I think. If there are no observations we can make that ultimately reflect whether something exists in this (seems to me to be) free-floating sense, I don’t understand what it can mean to have evidence for or against such a proposition.

Empirical evidence doesn't exhaust justification. But you kind of know that, because you mention "good argument" below.

So I don’t understand how I am even supposed to ever justifiably change my mind on this topic, even if I were to accept it as something worth discussing on the object-level.

Apriori necessary truths can be novel and surprising to an agent, in practice, even though they are apriori and necessary in principle... because a realistic agent can't instantaneously and perfectly correlate their mental contents, and doesn't have an oracular list of every theory in their head. You are not a perfect Bayesian. You can notice a contradiction that you haven't noticed before. You can be informed of a simpler explanation that you hadn't formulated yourself.

What can possibly sway me one way or another when all variables X that I appear to be able to observe (or think about, etc.) are in the concrete realm, which is defined to be entirely non-intersecting with the Platonic realm?

Huh? I was debating nomic realism. Mathematical realism is another thing. Objectively existing natural laws obviously intersect with concrete observations, because if gravity worked on an inverse cube law (etc.), everything would look very different.

You don't have to buy into realism about all things, or anti-realism about all things. You can pick and choose. I don't personally believe in Platonic realism about mathematics, for the same reasons you don't. I believe nomic realism is another question... it's logically possible for physical laws to have been different.

@shminux defined the thing he is arguing against as "Platonic". I don't have to completely agree with that, nor do you. Maybe it's just a mistake to think of nomic realism as Platonism. Platonism marries the idea of non-mental existence and the idea of non-causality... but they can be treated separately.

What can that possibly mean in this context?”

What context? You're talking about mathematical realism, I'm talking about nomic realism.

as lines of logic and reasoning, whose validity and soundness implies we are more likely to be in a world where certain possibilities are true rather than others (when mulling over multiple hypotheses

What have I said that makes you think I have departed from that?

@Shminux

If push comes to shove, I would even dispute that “real” is a useful category once we start examining deep ontological claims

Useful for what? If you terminally value uncovering the true nature of reality, as most scientists and philosophers do, you can hardly manage without some concept of "real". If you only value making predictions, perhaps you don't need the concept... But then the instrumentalist/realist divide is a difference in values, as I previously said, not a case of one side being wrong and the other side being right.

“Exist” is another emergent concept that is not even close to being binary, but more of a multidimensional spectrum (numbers, fairies and historical figures lie on some of the axes).

"Not a binary" is a different take from "not useful".

The critical point is that we have no direct access to the underlying reality, so we, as tiny embedded agents, are stuck dealing with the models regardless.

"No direct access to reality" is a different claim to "no access to reality" is a different claim to "there is no reality" is a different to "the concept of reality is not useful".l

I can provisionally accept that there is something like a universe that "exists", but, as I said many years ago in another thread, I am much more comfortable with the ontology where it is models all the way down (and up and sideways and every which way).

It's incoherent. What are these models, models of?

Comment by TAG on shminux's Shortform · 2024-08-17T16:30:29.444Z · LW · GW

Is there anything different about the world that I should expect to observe depending on whether Platonic math "exists" in some ideal realm? If not, why would I care about this topic once I have already dissolved my confusion about what beliefs are meant to refer to?

Word of Yud is that beliefs aren't just about predicting experience. While he wrote Beliefs Must Pay Rent, he also wrote No Logical Positivist I.

(Another thing that has been going on for years is people quoting Beliefs Must Pay Rent as though it's the whole story.)

Maybe you are a logical positivist, though... you're allowed to be, and the rest of us are allowed not to be. It's a value judgement: what doesn't have instrumental value toward predicting experience can still have terminal value.

If you are not an LP, idealist, etc., you are interested in finding the best explanation for your observations -- that's metaphysics. Shminux seems sure that certain negative metaphysical claims are true -- there are no Platonic numbers, objective laws, nor real probabilities. LP does not allow such conclusions: it rejects both positive and negative metaphysical claims as meaningless.

The question is what would support the dogmatic version of nomic anti-realism, as opposed to the much more defensible claim that we don't know one way or the other (irrealism).

Later on in the thread, you talked about “laws of physics” as abstractions written in textbooks, made so they can be understandable to human minds. But, as a terminological matter, I think it is better to think of the laws of physics as the rules that determine how the territory functions, i.e., the structured, inescapable patterns guiding how our observations come about, as opposed to the inner structure of our imperfect maps that generate our beliefs.

The term can be used in either sense. Importantly, it can be used in both senses: the existence of the in-the-mind sense doesn't preclude the existence of the in-reality sense. Maps don't necessarily correspond to reality, but they can. "Doesn't necessarily correspond" doesn't mean the same thing as "necessarily doesn't correspond".

@Shminux

It is not clear whether any randomly generated world would necessarily get emergent patterns like that, but the one we live in does, at least to a degree

And maybe there is a reason for that... and maybe the reason is the existence of Platonic in-the-territory physical laws. So there's an argument for nomic realism. Is there an argument against? You haven't given one, just "articulated a claim".

So in your opinion, there is no reason why anything happens?

There is an emergent reason, one that lives in the minds of the agents.

But that's not the kind of reason that makes anything happen -- it's just a passive model.

The universe just is.

That isn't an argument against or for Platonic laws. Maybe it just is in a way that includes Platonic laws, maybe it isn't.

In other words, if you are a hypothetical Laplace’s demon, you don’t need the notion of a reason, you see it all at once, past, present and future.

I think you mean a hypothetical God with a 4D view of spacetime; LD only has the ability to work out the future from a 3D snapshot. Yes, if you could see past, present and future, you wouldn't need in-the-mind laws to make predictions... but, again, that says nothing about in-the-territory, Platonic laws. Even if God doesn't need in-the-mind laws, it's still possible that reality needs in-the-territory laws to make things happen.

“a causal or explanatory factor” is also inside the mind

Anthropics and Boltzmann brains are also in the mind. As concepts.

What's in the mind has to make sense, to fit together. Even if maths is all in the mind, maths problems still need to be solved. Saying maths is all in the mind does not tell you whether a particular theorem is true or false. Likewise, saying metaphysics is all in the mind does not tell you that nomic realism is false, or that anthropics is true.

We have a meta-map of the mind-world relation, and if we assume a causal relation from the world to the mind, we can explain where new information comes from, and if we assume lawful behaviour in the world, we can explain regularities. Maybe these are all concepts we have, but we still need to fit them together in a way that reduces the overall mystery, just as we still need to solve maths problems.

What do you mean by an “actual explanation”?

Fitting them together in a way that reduces the overall mystery.

We live in it and are trying to make sense of it

And if you want us to believe that the instrumentalist picture makes the most sense, you need to argue for it. The case for realism, by contrast, has been made.

A more coherent question would be “why is the world partially lossily compressible from the inside”, and I don’t know a non-anthropic answer

The objective existence of physical laws, nomic realism, is a non-anthropic answer which has already been put to you.

ETA

Maybe, again, we differ where they live, in the world as basic entities or in the mind as our model of making sense of the world.

...or both, since...

it is foolish to reduce potential avenues of exploration.

Yudkowsky's argument that probability is subjective is flawed, because it rests on the assumption that the existence of subjective probability implies the non-existence of objective probability, but the assumption is never justified. You seem to buy into it anyway. And you seem to be basing your antirealism on a similar unargued assumption.

Comment by TAG on Relativity Theory for What the Future 'You' Is and Isn't · 2024-08-14T19:54:27.537Z · LW · GW

the ‘instantaneous’ mind (with its preferences etc., see post) is*—if we look closely and don’t forget to keep a healthy dose of skepticism about our intuitions about our own mind/self*—sufficient to make sense of what we actually observe

Huh? If you mean my future observations, then you are assuming a future self, and therefore a temporally extended self. If you mean my present observations, then they include memories of past observations.

in fact I’ve defended some strong computationalist position in the past

But a computation is a series of steps over time, so it is temporally extended.

Comment by TAG on Circular Reasoning · 2024-08-08T14:18:58.825Z · LW · GW

I think it’s fair to say that the most relevant objection to valid circular arguments is that they are not very good at convincing someone who does not already accept the conclusion.

I think the most relevant objection is quodlibet. Simple circular arguments can be generated for any conclusion. Since they are formally equivalent, they must have equal justificatory (probability-raising) power, which must be zero. That doesn't quite mean they are invalid... it could mean there are valid arguments with no justificatory force.
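To make the quodlibet point explicit (my formalization, not something from the original thread): for any proposition $P$, the one-step argument $P \vdash P$ is valid, and so is $\neg P \vdash \neg P$. If the bare circular form conferred any justification, it would justify every proposition and its negation equally. So the justificatory force of the form itself must be zero, even though each instance is valid.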

@Seed Using something like empiricism or instrumentalism to avoid the Regress Problem works for a subset of questions only. For instance, questions about the correct interpretation of observations can't be answered by observations. (Logical Positivism tried to make a feature of the bug, by asserting that the questions it can't answer were never meaningful).

In a sense, there are multiple solutions to the Regress problem -- but they all involve giving something up, so there are no completely satisfactory solutions.

The Desiderata of an episteme are, roughly:-

Certainty. A huge issue in early modern philosophy which has been largely abandoned in contemporary philosophy.

Completeness. Everything is either true or false, nothing is neither.

Consistency. Nothing is both true and false.

Convergence. Everyone can agree on a single truth.

Rationalists have already given up Certainty, and might well have to give up on Convergence (single-truth-ism) as well, if they adopt Coherentism. Or Completeness, if they adopt instrumentalism.

Comment by TAG on Relativity Theory for What the Future 'You' Is and Isn't · 2024-08-06T12:04:23.568Z · LW · GW

Yes, but here the right belief is the realization that what connects you to what we traditionally called your future “self”, is nothing supernatural

As before, merely rejecting the supernatural doesn't give you a single correct theory, mainly because it doesn't give you a single theory. There are many more than two non-soul theories of personal identity (and the one Bensinger was assuming isn't the one you are assuming).

i.e. no super-material unified continuous self of extra value:

That's a flurry of claims. One of the alternatives to the momentary theory of personal identity is the theory that a person is a world-line, a 4D structure -- and that's a materialistic theory.

we don’t have any hint at such stuff;

Perhaps we have no evidence of something with all those properties, but we don't need something with all those properties to supply one alternative. Bensinger's computationalism is also non-magical (etc.).

So due to the absence of this extra “self”: “You” are simply this instant’s mind we currently observe from you.

Again, the theory of momentary identity isn't right just because soul theory is wrong.

But as the only thing that in reality connects you with what we traditionally would have called “your future self”, is your own particular preferences/hopes/

No, since I have never been destructively transported, I am also connected by material continuity. You can hardly call that supernatural!

In the natural world, it turns out to be perfectly predictable from the outside, who this natural successor is: your own body.

Great. So it isn't all about my values. It's possible for me to align my subjective sense of identity with objective data.

Comment by TAG on Relativity Theory for What the Future 'You' Is and Isn't · 2024-07-30T17:40:34.015Z · LW · GW

Care however it occurs to you!

Good decisions need to be based on correct beliefs as well as values.

Well, what do you anticipate experiencing? Something or nothing? You anticipate whatever you do anticipate and that’s all there is to know—there’s no “should” here

Why not? If there is some discernable fact of the matter about how personal continuity works, that epistemically-should constrain your expectations. Aside from any ethically-should issues.

What we must not do, is insist on reaching a universal, ‘objective’ truth about it.

Why not?

The current “me” is precisely my current mind at this exact moment—nothing more, nothing less.

Is it?

There's a theory that personal identity is only ever instantaneous... an "observer moment"... such that, as an objective fact, you have no successors. I don't know whether you believe it. If it's true, you epistemically-should believe it, but you don't seem to believe in epistemic norms.

There's another, locally popular, theory that the continuity of personal identity is only about what you care about. (It either just is that, or it needs to be simplified to that... it's not clear which.) But it's still irrational to care about things that aren't real... you shouldn't care about collecting unicorns... so if there is some fact of the matter that you don't survive destructive teleportation, you shouldn't go for it, irrespective of your values.

Comment by TAG on When is a mind me? · 2024-07-19T11:08:52.029Z · LW · GW


The Olson twins do not at all have qualitative identity.

Not 100%, but enough to illustrate the concept.

So I just don’t know what your position is.

I didn't have to have a solution to point out the flaws in other solutions. My main point is that a no to soul theory isn't a yes to computationalism. Computationalism isn't the only alternative, or the best.

You claim that there doesn’t need to be an answer;

Some problems are insoluble.

that seems false, as you could have to make decisions informed by your belief.

My belief isn't necessarily the actual, real answer... is it? That's basic rationality. You need beliefs to act... but beliefs aren't necessarily true.

And I have no practical need for a theory that can answer puzzles about destructive teleportation and the like.

You currently value your future self more than other people, so you act like you believe that’s you in a functional sense.

Yes. That's not an argument in favour of the contentious points, like computationalism and Plural I's. If I try to reverse the logic, and treat everything I value as me, I get bizarre results... I am my dog, my country, etc.

Are you the same person tomorrow? It’s not an identical pattern, but a continuation.

Tomorrow-me is a physical continuation, too.

I’m saying it’s pretty-much you because the elements you wouldn’t want changed about yourself are there.

If I accept that pattern is all that matters, I have to face counterintuitive consequences like Plural I's.

If I accept that material continuity is all that matters, then I face other counterintuitive consequences, like having my connectome rewired.

It's an open philosophical problem. If there were a simple answer, it would have been found long ago.

"Yer an algorithm, Arry" is a simple answer. Just not good

If you value your body or your continuity over the continuity of your memories, beliefs, values, and the rest of your mind that’s fine,

Fortunately, it's not an either-or choice.

I certainly do believe in the plural I (under the special circumstance I discussed); we must be understanding something differently in the torture question. I don’t have a preference pre-copy for who gets tortured; both identical future copies are me from my perspective before copying. Maybe you’re agreeing with that?

...and post-copy I have a preference for the copy who isn't me to be tortured. Which is to say that both copies say the same thing, which is to say that they are only copies. If they regarded themselves as numerically identical, the response "the other one!" would make no sense, and nor would the question. The question presumes a lack of numerical identity, so how can it prove it?

I was addressing a perfect computational copy. An imperfect but good computational copy is higher resolution, not lower, than a biological twin. It is orders of magnitude more similar to the pattern that makes your mind, even though it is less similar to the pattern that makes your body.

You're assuming pattern continuity matters more than material continuity. There's no proof of that, and no proof that you have to make an either-or choice.

What is writing your words is your mind, not your body, so when it says “I” it means the mind.

The abstract pattern can't cause anything without the brain/body.

Noncomputational physicalism sounds like it’s just confused. Physics performs computations and can’t be separated from doing that.

Noncomputational physicalism isn't the claim that computation never occurs. It's the claim that the computational abstraction doesn't capture everything that's relevant to consciousness/mind. It's not physically necessary that the computational abstraction captures all the causally relevant information, so it isn't logically necessary, a fortiori.

Dual aspect theory is incoherent because you can’t have our physics without doing computation that can create a being that claims and experiences consciousness like we do.

Computation is a lossy, high-level abstraction of what a physical system does. It doesn't fundamentally cause anything in itself.

Now, you can argue that a physical duplicate would make the same claims to be conscious without actually having consciousness, and that's literally a p-zombie argument.

But we do have consciousness. The insight of DAT is that "reports of consciousness have a physical/computational basis" isn't exclusive of "reports of consciousness are caused by consciousness". You can have your cake and eat it!

Of course, the above is all about consciousness-qua-awareness , not consciousness qua personal identity.

I concede it’s possible that consciousness includes some magic nonphysical component (that’s not computation or pattern instantiated by physics as a pure result of how physics works).

If it's physical, why call it magical?

It's completely standard that all computations run on a substrate. If you want to say that all physics is computation, OK, but then all computation is physics. You then no longer have plural I's, because physics doesn't allow the selfsame object to have multiple instances.

Do you think a successful upload would say things like “I’m still me!” and think thoughts like “I’m so glad I paid extra to give myself cool virtual environment options”? That seems like an inevitability if the causal patterns of your mind were captured. And it would be tough to disagree with a thing claiming up and down it’s you, citing your most personal memories as evidence

It's easy to disagree if there is another explanation, which there is: a functional duplicate will behave the same, because it's a functional duplicate... whether it's conscious or not, whether it's you or not.

Comment by TAG on When is a mind me? · 2024-07-10T10:25:49.459Z · LW · GW

You’ve got a lot of questions to raise, but no apparent alternative.

Non-computationalist physicalism is an alternative to either or both of the computationalist theses (that performing a certain class of computations is sufficient to be conscious in general, or that performing a specific one is sufficient to be a particular conscious individual). Computation as a theory of consciousness qua awareness isn't known to be true, and even if it is assumed, it doesn't directly give you a theory of personal identity.

The non-existence, or incoherence, of personal identity is another. There doesn't have to be an answer to "when is a mind me".

Note that no one except andeslodes is arguing against copying. The issue is when a mind is me, the person typing this, not a copy-of-me.

Reproduce the matter, you’ve reproduced the mind.

Well, that's only copying.

Consciousness, qua Awareness, and Personal Identity are easily confused, not least because both are often called "consciousness".

A computational theory of consciousness is sometimes called on to solve the second problem, the problem of personal identity. But there is no strong reason to think a computational duplicate of you, actually is you, since there is no strong reason to think any other kind of duplicate is.

Qualitative identity is a relationship between two or more things that are identical in all their properties. Numerical identity is the relationship a thing has only to itself. The Olsen twins enjoy qualitative identity; Stefani Germanotta and Lady Gaga have numerical identity. The trick is to jump from qualitative identity to numerical identity, because the claim is that a computational duplicate of you is you, the very same person.
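The distinction has a familiar programming analogue (my illustration, not part of the original comment): equality of properties versus being the very same object.

```python
a = [1, 2, 3]
b = [1, 2, 3]  # a qualitative duplicate: identical in its properties
c = a          # numerically the same object under another name

# Qualitative identity without numerical identity:
assert a == b and a is not b
# Numerical identity: the relationship a thing has only to itself.
assert a == c and a is c
```

A duplicate can pass every `==` test and still not be `is`-identical; the computationalist claim needs the latter, while the duplication argument only delivers the former.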

Suppose you found out you had an identical twin. You would not consider them to be you yourself. Likewise for a biological clone. A computational duplicate would be lower resolution still, so why would it be you? The major problem is that you and your duplicate exist simultaneously in different places, which goes against the intuition that you are a unique individual.

You’re fighting against the counterintuitive conclusion. Sure I’d rather have a different version of me be tortured; it’s slightly different. But I won’t be happy about it. And my intuition is still drawn toward continuity being important, even though my whole rational mind disagrees. I’ve been back and forth over this extensively, and the conclusion is always the same- ever since I got over the counter-intuitive nature of the plural I

You don't really believe in the plural I theory, or you would have a different answer to the torture question.

Non-computationalist physicalism doesn't have to be the claim that material continuity matters and pattern doesn't: it can be the claim that both do. So you cease to be you if you are destructively cloned, and also if your mind is badly scrambled. No bullet-biting about Plural I's is required.

Comment by TAG on Free Will, Determinism, And Choice · 2024-07-07T13:45:34.698Z · LW · GW

Both determinism and free will are metaphysical assumptions. In other words, they are presuppositions of thought.

Neither is a presupposition of thought. You don't have to presume free will, beyond some general decision making ability, and you don't have to presume strict determinism beyond some good-enough causal reliability. Moreover, both are potentially discoverable as facts.

A choice must be determined by your mental processes, knowledge and desires. If choices arose out of nowhere, as uncaused causes, they would not be choices.

False dichotomy. A choice can be influenced by your mental processes, knowledge and desires without being determined by them.

A choice is not an uncaused cause. A choice is when thought generates an intention, based on pre-existing preferences and knowledge, and that intention generates action toward making the intention real.

You can't assume that any kind of choice counts as free will.

Free will is not “free” in the sense of being uncaused. It is “free” in the sense that you are the cause

Under determinism, you are not the cause, only a cause. The choice you made was already a fact before you were born.

. If an uncaused cause arose out of nowhere and made you pick the chocolate, that would not be a choice. It would be a strange, supernatural event.

Indeterminism-based free will doesn't have to separate you from your own desires, values, and goals, because, realistically, they are often in conflict, so that they don't determine a single action. This point is illustrated by the parable of the cake. If I am offered a slice of cake, I might want to take it so as not to refuse my hostess, but also to refuse it so as to stick to my diet. Whichever action I chose would have been supported by a reason. Reasons and actions can be chosen in pairs.

. By contrast, I am free to make a cup of coffee right now, because I have the power to turn that intention into a reality.

That's only freedom in the compatibilist sense.

Determinism is a presupposition of science

No, much of science is statistical and probabilistic.

The free will | determinism paradox is one of a family of paradoxes created by thinking about the self as an object.

Determinism excludes libertarian free will by removing the ability to have done otherwise: you have offered nothing to restore it.

Comment by TAG on Isomorphisms don't preserve subjective experience... right? · 2024-07-04T18:07:08.559Z · LW · GW

You don’t have to be a substance dualist to believe a sim (something computationally or functionally isomorphic to a person) could be a zombie. It's a common error , that because dualism is a reason to reject something as being genuinely conscious,it is the only reason --there is also an argument based on physicalism.

There are three things that can defeat the multiple realisability of consciousness:-

  1. Computationalism is true, and the physical basis makes a difference to the kinds of computations that are possible.

  2. Physicalism is true, but computationalism isn't. Having the right computation without the right physics only gives a semblance of consciousness.

  3. Dualism is true. Consciousness depends on something that is neither physics nor computation.

So there are two issues: what explains claims of consciousness? What explains absence of consciousness?

Computationalism is a theory of multiple realisability: the hardware on which the computation runs doesn't matter, so long as it is adequate to run the computation, so grey matter and silicon can run the same computations... and a lot of physical details are therefore irrelevant to consciousness.

Computationalism isn't a direct consequence of physicalism.

Physicalism has it that an exact atom-by-atom duplicate of a person will be a person and not a zombie, because there is no nonphysical element to go missing. That's the argument against p-zombies. But if it actually takes an atom-by-atom duplication to achieve human functioning, then the computational theory of mind will be false, because CTM implies that the same algorithm running on different hardware will be sufficient. Physicalism doesn't imply computationalism, and arguments against p-zombies don't imply the non-existence of c-zombies: duplicates that are identical computationally, but not physically.

So it is possible, given physicalism, for qualia to depend on the real physics, the physical level of granularity, not on the higher level of granularity that is computation.

A computational duplicate of a believer in consciousness and qualia will continue to state that it has them, whether it does or not, because it's a computational duplicate, so it produces the same output in response to the same input. Likewise, a duplicate of a non-believer will deny them. (This point is clearer if you think in terms of duplicates of specific individuals with consistent views, like Dennett and Chalmers, rather than a generic human.)
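A minimal sketch of the input-output point (my own toy example; the function names and strings are invented): two programs can give identical answers about consciousness while differing completely in what lies behind the answer.

```python
LOOKUP = {"Are you conscious?": "I am conscious."}

def report_via_rule(prompt: str) -> str:
    # One realisation: the answer is generated by an internal test.
    return "I am conscious." if "conscious" in prompt.lower() else "N/A"

def report_via_table(prompt: str) -> str:
    # Another realisation: the same answer is read off a lookup table.
    return LOOKUP.get(prompt, "N/A")

# Identical output for the same input, whatever is "inside":
assert report_via_rule("Are you conscious?") == report_via_table("Are you conscious?")
```

The outputs alone don't discriminate between the realisations, which is why a duplicate's reports can't settle what, if anything, it experiences.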

@JuliaHP

Instead of analyzing whether you yourself are conscious or not, analyze what is causally upstream of your mind thinking that you are conscious, or your body uttering the words “I am conscious”.

Since an effect can have more than one cause, that isn't going to tell you much.

Comment by TAG on Free will isn’t a concept (unless you mean determinism) · 2024-06-25T18:16:58.506Z · LW · GW

None of these are free will (as commonly understood)

Some believe that free will must be a tertium datur, a third thing fundamentally different from both determinism and indeterminism. This argument has the advantage that it makes free will logically impossible, and the disadvantage that hardly anyone who believes in free will defines it that way. In particular, naturalistic libertarians are happy to base free will on a mere mixture of determinism and indeterminism.

Another concern about naturalistic libertarianism is that determinism is needed to put a decision into effect once it has been made. If one's actions were unrelated to one's decisions, one would certainly lack control in a relevant sense. But it is not the case that we are able to get the required results 100% of the time, so full determinism is perhaps unnecessary to achieve a realistic, "good enough" level of control. Additionally, there does not have to be the same amount of indeterminism at every stage of the deciding-and-acting process. In "two-stage" models, the agent alternates between going into a more indeterministic mode to make the "coin toss", and then into a more deterministic mode to implement it.
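A toy sketch of the two-stage shape (entirely my own illustration, with invented names; it models the alternation, not real cognition):

```python
import random

def two_stage_choice(options, acceptable):
    """Alternate between an indeterministic proposing mode and a
    (near-)deterministic implementing mode."""
    while True:
        # Stage 1, the indeterministic "coin toss": which live option
        # comes to mind is not fixed by the agent's prior state.
        candidate = random.choice(options)
        # Stage 2, deterministic: screen the candidate against standing
        # values; once one passes, it is implemented reliably.
        if acceptable(candidate):
            return candidate

# The cake case from earlier: each action is backed by a reason, so the
# values screen passes both, and the toss settles the reason-action pair.
options = ["accept the cake (please the hostess)",
           "refuse the cake (stick to the diet)"]
print(two_stage_choice(options, acceptable=lambda option: True))
```

The determinism that matters for control lives in the second stage; the indeterminism is confined to which option gets proposed.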

If you made choices (or some element of them) not controlled by your personality, experience, thoughts and anything else that comes under the heading of ‘the state of your brain as a result of genetics and your prior environments’, they would be random, which still isn’t free will

Because of the stipulative definition? Such choices can still relate to one's aims, values, personality, and history.

And that is precisely why they are determined. They are determined by you

If they are determined by me and nothing else, that would be something like free will... but it's not pure determinism, because pure determinism means everything is inevitable from the dawn of time, including my decisions. It is something you can get from a two-stage theory, though.

Comment by TAG on The Kernel of Meaning in Property Rights · 2024-06-21T18:14:16.070Z · LW · GW

everything seems to collapse to tautology

Successful explanation makes things seem less arbitrary, more predictable, more obvious. A tautology is the ultimate in non-arbitrary obviousness.

Comment by TAG on Our Intuitions About The Criminal Justice System Are Screwed Up · 2024-06-17T13:36:20.066Z · LW · GW

You are using "The criminal justice system" to mean " The US criminal justice system " throughout. Typical-countrying is particularly problematic in this case, because the US is such an outlier.

The way to humanize a prison system is not to replace unofficial tortures with official ones. Other countries have abandoned capital and corporal punishment , and have lower incarceration rates.

If the death penalty is not so bad, why does almost everyone on death row seek to appeal it?

Comment by TAG on My AI Model Delta Compared To Yudkowsky · 2024-06-11T12:28:38.696Z · LW · GW

I wonder what MIRI thinks about this 2013 post (“The genie knows, but doesn’t care”) nowadays. Seems like the argument is less persuasive now,

The genie argument was flawed at the time, for reasons pointed out at the time, and ignored at the time.

Comment by TAG on Why write down the basics of logic if they are so evident? · 2024-06-08T18:26:41.820Z · LW · GW

Bayesianism works up to a point. Frequentism works up to a point. Various other things work.

You haven't shown that frequentism doesn't work, or that frequentism and Bayesianism are mutually exclusive.