Posts

A Philosophical Tautology 2023-12-09T14:06:34.463Z
Mathematics As Physics 2023-12-06T22:27:54.140Z
Mental Models Of People Can Be People 2023-04-25T00:03:59.911Z

Comments

Comment by Nox ML on A Philosophical Tautology · 2023-12-13T00:53:46.031Z · LW · GW

So what part of a mathematical universe do you find distasteful?

the idea that “2” exists as an abstract idea apart from any physical model

It's this one.

Okay, but if actual infinities are allowed, then what defines small in the “made up of small parts”? Like, would tiny ghosts be okay because they’re “small”?

Given that you're asking this question, I still haven't been clear enough. I'll try to explain it one last time. This time I'll talk about Conway's Game of Life and AI. The argument will carry over straightforwardly to physics and humans. (I know that Conway's Game of Life is made up of discrete cells, but I won't be using that fact in the following argument.)

Suppose there is a Game of Life board with an initial state that will simulate an AI. Hopefully it is inarguable that the AI's behavior is entirely determined by the cell states and GoL rules.
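To make the determinism point concrete, here is a minimal sketch of a Game of Life update step (my illustration, not something from the original discussion); all it is meant to show is that the next board state is a pure function of the current state and the fixed rule, with no further inputs anywhere:

```python
from collections import Counter

def step(live_cells):
    """Compute the next Game of Life generation from the current set of live cells."""
    # Count how many live neighbors each candidate cell has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation iff it has 3 live neighbors,
    # or it is currently alive and has 2 live neighbors.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# Whatever the simulated AI "does" is fixed by iterating step() from the initial state.
board = {(0, 0), (0, 1), (0, 2)}  # a blinker, standing in for any initial state
for _ in range(4):
    board = step(board)
```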

Now suppose that as the game board evolves, the AI discovers Peano Arithmetic, derives "2 + 2 = 4", and observes that this corresponds to what happens when it puts 2 apples in a bag that already contains 2 apples (there are apple-like things in the AI's simulation). The fact that the AI derives "2 + 2 = 4", and the fact that it observes a correspondence between this and the apples, has to be entirely determined by the rules of the Game of Life and the initial state.

In case this seems too simple and obvious so far and you're wondering if you're missing something, you're probably not missing anything, this is meant to be simple and obvious.

Suppose further that the AI notices how deep and intricate math is, how its many branches seem to be greatly interconnected with each other, and postulates that math is unreasonably effective. This too has to be caused entirely by the initial state and rules of the Game of Life. And whether the Game of Life board is made up of sets embedded inside some model of set theory, or is not embedded in anything and is just the only thing in all of existence, in either case nothing changes about the AI's observations or actions and nothing ought to change about its predictions!

And if the existence or non-existence of something changes nothing about what it will observe, then using its existence to "explain" any of its observations is a contradiction in terms. This means that even its observation of the unreasonable effectiveness of math cannot be explained by the existence of a mathematical universe outside of the Game of Life board.

Connecting this back to what I was saying before, the "small parts" here are the cells of the Game of Life. You'll note that it doesn't matter if we replace the Game of Life by some other similar game where the board is a continuum. It also doesn't even matter if the act of translating statements about the AI into statements about the board is uncomputable. All that matters is that the AI's behavior is entirely determined by the "small parts".


You might have noticed a loophole in this argument, in that even though the existence of math cannot change anything beyond the initial board state, if the board were embedded inside a model of set theory, then it would be that model which determined the initial state and rules. However, since the existence of math is compatible with every consistent set of rules and with literally every initial board state, knowing this would also give the AI no predictive power.

At best the AI could try to argue that being embedded inside a mathematical universe explains why the Game of Life rules are consistent. But then it would still be a mystery why the mathematical universe itself follows consistent rules, so in the end the AI would be left with just as many questions as it started with.

Comment by Nox ML on A Philosophical Tautology · 2023-12-12T11:24:51.791Z · LW · GW

My view is compatible with the existence of actual infinities within the physical universe. One potential source of infinity is, as you say, the possibility of infinite subdivision of spacetime. Another is the possibility that spacetime is unboundedly large. I don't have strong opinions one way or the other on whether these possibilities are true.

Comment by Nox ML on A Philosophical Tautology · 2023-12-11T05:16:57.537Z · LW · GW

The assumption is that everything is made up of small physical parts. I do not assume or believe that it's easy to predict the behavior of large physical systems from those small physical parts. But I do assume that the behavior of large physical systems is determined solely by their smaller parts.

The tautology is that any explanation about large-scale behavior that invokes the existence of things other than the small physical parts must be wrong, because those other things cannot have any effect on what happens. Note that this does not mean that we need to describe everything in terms of quantum physics. But it does mean that a proper explanation must only invoke abstractions that we in principle would be able to break down into statements about physics, if we had arbitrary time and memory to work out the reduction. (Now I've used the word reduction again, because I can't think of a better word, but hopefully what I mean is clear.)

This rules out many common beliefs, including the platonic existence of math separately from physics, since the platonic existence of math cannot have any effect on why math works in the physical world. It does not rule out using math, since every known instance of math, being encoded in human brains / computers, must in principle be convertible into a statement about the physical world.

Comment by Nox ML on A Philosophical Tautology · 2023-12-10T16:15:46.164Z · LW · GW

I completely agree that reasoning about worlds that do not exist reaches meaningful conclusions, though my view classifies that as a physical fact (since we produce a description of that nonexistent world inside our brains, and this description is itself physical).

it becomes apparent that if our physical world wasn’t real in a similar sense, literally nothing about anything would change as a result.

It seems to me that if every possible world is equally not real, then expecting a pink elephant to appear next to me after I submit this post is just as justified as any other expectation, because there are possible worlds where it happens and ones where it doesn't. But I have high confidence that no pink elephant will appear, and this is not because I care more about worlds where pink elephants don't appear, but because nothing like that has ever happened before, so my priors that it will happen are low.

For this reason I don't think I agree that nothing would change if the physical world wasn't real in a similar sense as hypothetical ones.

Comment by Nox ML on A Philosophical Tautology · 2023-12-10T15:58:58.829Z · LW · GW

I will refer to this other comment of mine to explain this miscommunication.

Comment by Nox ML on A Philosophical Tautology · 2023-12-10T14:27:50.555Z · LW · GW

Reasoning being real and the thing it reasons about being real are different things.

I do agree with this, but I am very confused about what your position is. In your sibling comment you said this:

Possibly the fact that I perceive the argument about reality of physics as both irrelevant and incorrect (the latter being a point I didn’t bring up) caused this mistake in misperceiving something relevant to it as not relevant to anything.

The existence of physics is a premise in my reasoning, which I justify (but cannot prove) by using the observation that humanity has used this knowledge to accomplish incredible things. But you seem to base your reasoning on very different starting premises, and I don't understand what they are, so it's hard to get at the heart of the disagreement.

Edit: I understand that using observation of the physical world to justify that it exists is a bit circular. However, I think that premises based on things that everyone has to at least act as if they believe are the weakest possible sort of premise one can have. I assume you also must at least act as if the physical world is real, otherwise you would not be alive to talk to me.

Comment by Nox ML on A Philosophical Tautology · 2023-12-10T13:47:15.668Z · LW · GW

Okay, let's forget the stuff about the "I", you're right that it's not relevant here.

For existence in the sense that physics exists, I don’t see how it’s relevant for reasoning, but I do see how it’s relevant to decision making

Okay, I think my view actually has some interesting things to say about this. Since reasoning takes place in a physical brain, reasoning about things that don't exist can be seen as a form of physical experiment, where your brain builds a description that has the properties we assume the nonexistent thing would have if it existed. I will reuse my example from my previous post to explain what I mean by this:

To be more clear about what I mean by mathematical descriptions “sharing properties” with the thing it describes, we can take as example the real numbers again. The real numbers have a property called the least upper bound property, which says that every nonempty collection of real numbers which is bounded above has a least upper bound. In mathematics, if I assume that a variable x is assigned to a nonempty set of real numbers which is bounded above, I can assume a variable y which points to its least upper bound. That I can do this is a very useful property that my description of the reals shares with the real numbers, but not with the rational numbers or the computable real numbers.
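Spelled out (my rendering, not part of the quoted passage), the least upper bound property says:

∀S ⊆ ℝ: ( S ≠ ∅ ∧ ∃b∈ℝ ∀x∈S (x ≤ b) ) → ∃y∈ℝ ( ∀x∈S (x ≤ y) ∧ ∀z∈ℝ ((∀x∈S (x ≤ z)) → y ≤ z) )

The rationals fail this (the set of rationals whose square is less than 2 is bounded above but has no rational least upper bound), which is exactly the kind of property the description shares with the reals but not with the rationals.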

So my view would say that reasoning is not fundamentally different from running experiments. Experiments seem to me to be in a gray area with respect to this reasoning/decision-making dichotomy, since you have to make decisions to perform experiments.

Comment by Nox ML on A Philosophical Tautology · 2023-12-10T13:06:54.068Z · LW · GW

I don't say in this post that everything can be deduced from bottom-up reasoning.

Comment by Nox ML on A Philosophical Tautology · 2023-12-10T00:13:10.019Z · LW · GW

The fact that I live in a physical world is just a fact that I've observed, it's not a part of my values. If I lived in a different world where the evidence pointed in a different direction, I would reason about the different direction instead. And regardless of my values, if I stopped reasoning about the physical world, I would die, and this seems to me to be an important difference between the physical world and other worlds I could be thinking about.

Of course this is predicated on the concept of "I" being meaningful. But I think that this is better supported by my observations than the idea that every possible world exists and the idea that probability just represents a statement about my values.

Comment by Nox ML on A Philosophical Tautology · 2023-12-09T20:25:38.888Z · LW · GW

clearly physical brains can think about non physical things.

Yes, but this is not evidence for the existence of those things.

But it’s not conclusive in every case, because the simplest adequate explanation need not be a physical explanation.

There is one notion of simplicity where it is conclusive in every case: every explanation has to include physics, and then we can just cut out the extra stuff from the explanation to get one that postulates strictly fewer things and makes equally good predictions.

But you're right, there are other notions of simple for which this might not hold. For example if we define simple as "shortest description of the world which contains all our observations". Though I think this definition has its own issues, since it probably depends on the choice of language.

Still, this is the most interesting point that has been brought up so far, thank you.

Edit: I was too quick with this reply and am actually wrong that my notion of simplicity is conclusive in every case. I still think this applies in every case that we know of, however.

Edit 2: I think the only case where it is not conclusive is the case where we have some explanation of the initial conditions of the universe which we find has predictive power but which requires postulating more things.

Comment by Nox ML on A Philosophical Tautology · 2023-12-09T20:03:55.320Z · LW · GW

Whatever "built on top of" means.

In ZFC, the Axiom of Infinity can be written entirely in terms of ∈, ∧, ¬, and ∀. Since all of math can be encoded in ZFC (plus large cardinal axioms as necessary), all our knowledge about infinity can be described with ∀ as our only source of infinity.
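For instance (my rendering of the standard statement, with the usual abbreviations), the Axiom of Infinity can be written:

∃I ( ∃e (e ∈ I ∧ ∀z ¬(z ∈ e)) ∧ ∀x (x ∈ I → ∃y (y ∈ I ∧ ∀z (z ∈ y ↔ (z ∈ x ∨ z = x)))) )

Here ∃ abbreviates ¬∀¬, the connectives →, ∨, ↔ are definable from ∧ and ¬, and = can be eliminated in favor of ∈ using Extensionality, so the whole sentence unpacks into one using only ∈, ∧, ¬, and ∀.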

Only for the subset of maths that’s also physical. You can’t resolve the Axiom of Choice problem that way.

You can't resolve the Axiom of Choice problem in any way. Both it and its negation are consistent with the rest of ZF (assuming ZF itself is consistent).

Comment by Nox ML on A Philosophical Tautology · 2023-12-09T19:45:48.377Z · LW · GW

Again: every mathematical error is a real physical even in someone’s brain, so , again, physics guarantees nothing.

I don't get what you're trying to show with this. If I mistakenly derive in Peano Arithmetic that 2 + 2 = 3, I will find myself shocked when I put 2 apples inside a bag that already contains 2 apples and find that there are now 4 apples in that bag. Incorrect mathematical reasoning is physically distinguishable from correct mathematical reasoning.
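(For concreteness, with 2 = S(S(0)), 4 = S(S(S(S(0)))), and the addition axioms a + 0 = a and a + S(b) = S(a + b), the correct derivation is:

2 + 2 = S(S(0)) + S(S(0)) = S( S(S(0)) + S(0) ) = S( S( S(S(0)) + 0 ) ) = S(S(S(S(0)))) = 4

and any "derivation" ending in 3 has to misapply one of those axioms somewhere.)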

There are of course, lots of infinities in maths.

Everything we know about all other infinities can be built on top of just FORALL in first-order logic.

Comment by Nox ML on A Philosophical Tautology · 2023-12-09T19:24:24.273Z · LW · GW

Sure, I think I agree. My point is that because all known reasoning takes place in physics, we don't need to assume that any of the other things we talk about exist in the same way that physics does.

I even go a little further than that and assert that assuming that any non-physical thing exists is a mistake. It's a mistake because it's impossible for us to have evidence in favor of its existence, but we do have evidence against it: that evidence is known as Occam's Razor.

Comment by Nox ML on A Philosophical Tautology · 2023-12-09T19:10:19.565Z · LW · GW

Physics doesn’t guarantee that mathematical reasoning works.

All of math can be built on top of first-order logic. In the sub-case of propositional logic, it's easy to see entirely within physics that if I observe that "A AND B" corresponds to reality, then when I check if "A" corresponds to reality, I will also find that it does. Every such deduction in propositional logic corresponds to something you can check in the real physical world.
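As a small sketch of that kind of check (my illustration, not from the original comment), one can exhaustively verify that every truth assignment making "A AND B" true also makes "A" true:

```python
from itertools import product

# Enumerate every truth assignment for A and B; whenever "A AND B" holds,
# checking "A" alone must also succeed (conjunction elimination).
for a, b in product([False, True], repeat=2):
    if a and b:
        assert a
print("every assignment satisfying 'A AND B' also satisfies 'A'")
```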

The only source of infinity in first-order logic is the quantifiers, of which only one is needed: FORALL, which is basically just an infinite AND. I don't think it's too surprising that a logical deduction from an infinite AND will hold in every finite case that we can check, for similar reasons to why logical deductions hold in the finite case.

It is mysterious that physics is ordered in a way that this works out, but pending the answer you say exists, it's not any more mysterious than asking why math is ordered that way.

Comment by Nox ML on A Philosophical Tautology · 2023-12-09T18:42:49.113Z · LW · GW

I haven't used the word "reduce" since you gave a definition of it in the other thread which didn't match the precise meaning I was aiming for. The meaning I am aiming for is given in this paragraph from this post:

If we take as an assumption that everything humans have observed has been made up of smaller physical parts (except possibly for the current elementary particles du jour, but that doesn’t matter for the sake of this argument) and that the macro state is entirely determined by the micro state (regardless of whether it’s easy for humans to compute), there is a simple conclusion that follows logically from that.

It doesn't matter if we have found an explanation for consciousness yet. We still know with high confidence that it has to be entirely determined by the small physical components of the brain, so we can have high confidence that any attempted explanation will be wrong if it relies on the existence of other things than the physical components.

Comment by Nox ML on A Philosophical Tautology · 2023-12-09T18:34:57.643Z · LW · GW

There are answers to that question.

If you don't mind, I would be interested in a link to a place that gives those answers, or at least a keyword to look up to find such answers.

Comment by Nox ML on A Philosophical Tautology · 2023-12-09T17:45:20.144Z · LW · GW

Well if you're not saying it, then I'm saying it: this is a mysterious fact about physics ;P

I interpreted "which is not the same as being some sort of refutation" as being disagreement, and I knew my use of the word "contradicts" was not entirely correct according to its definition, but I couldn't think of a more accurate word so I figured it was "close enough" and used it anyway (which is a bad communication habit I should probably try to overcome, now that I'm explicitly noticing it). I'm sorry if I came across harshly in my comment.

Comment by Nox ML on A Philosophical Tautology · 2023-12-09T16:55:24.487Z · LW · GW

I disagree that what you're saying contradicts what I'm saying. The physical world is ordered in such a way that the reasoning you described works: this is a fact about physics. You are correct that it is a mysterious fact about physics, but positing the existence of math does not help explain it, merely changes the question from "why is physics ordered in this way" to "why is mathematics ordered in this way".

Comment by Nox ML on A Philosophical Tautology · 2023-12-09T16:42:50.563Z · LW · GW

This is fair, though in my opinion the lack of experiments showing the existence of anything macro that doesn't map to sub-micro state also adds a lot of confidence, since the number of hours humans have put into performing scientific experiments is quite high at this point.

Generally I'd say that the macro-level irrelevance of an assumption means that you can reject it out of hand, and lack of micro-level modelling means that there is work to be done until we understand how to model it that way.

Comment by Nox ML on Mathematics As Physics · 2023-12-09T11:46:16.576Z · LW · GW

If you accept that the existence of mathematical truths beyond physical truths cannot have any predictive power, then how do you reconcile that with this previous statement of yours:

Presupposing things without evidence

As you can see, I am not doing that.

I will say again that I don't reject any mathematics. Even 'useless' mathematics is encoded inside physical human brains.

Comment by Nox ML on Mathematics As Physics · 2023-12-09T10:51:21.564Z · LW · GW

If we take as an assumption that everything humans have observed has been made up of smaller physical parts (except possibly for the current elementary particles du jour, but that doesn't matter for the sake of this argument) and that the macro state is entirely determined by the micro state (regardless of whether it's easy for humans to compute), there is a simple conclusion that follows logically from that.

This conclusion is that nothing extraphysical can have any predictive power above what we can predict from knowledge about physics. This follows because for something to have predictive power, it needs to have some influence on what happens. If it doesn't have any influence on what happens, then neither its existence nor its non-existence allows us to draw any conclusions about the world.

This argument applies to mathematics: if the existence of mathematics separately from physics allowed us to make any conclusions about the world, it would have to have a causal effect on what happens, which would contradict the fact that all macro state we've ever observed has been determined by just the micro state.

Since the original assumption is one with very strong evidence backing it, it's safe to conclude that, in general, whenever we think something extraphysical is required to explain the known facts, we have to be making a mistake somewhere.

In the specific instance of your comment, I think the mistake is that the difference between "a priori" truths and other truths is artificial. When you're doing math, you have to be doing work inside your brain and getting information from that. This is not fundamentally different from observing particle accelerators and getting information from them.

Comment by Nox ML on Mathematics As Physics · 2023-12-07T23:57:50.726Z · LW · GW

This is only correct if we presuppose that the concept of mathematically true is a meaningful thing separate from physics. The point this post is getting at is that we can still accept all human mathematics without needing to presuppose that there is such a thing. Since not presupposing this is strictly simpler, and presupposing it does not give us any predictive power, we ought not to assume that mathematics exists separately from physics.

This is not just a trivial detail. Presupposing things without evidence is the same kind of mistake as Russell's teapot, and small mistakes like that will snowball into larger ones as you build your philosophy on top of them.

Comment by Nox ML on Mathematics As Physics · 2023-12-07T23:39:36.314Z · LW · GW

Comment by Nox ML on Why am I Me? · 2023-06-28T19:32:27.562Z · LW · GW

I agree that they are not symmetrical. My point with that thought experiment was to counter one of their arguments, which as I understand it can be paraphrased to:

In your thought experiment, the people who bet that they are in the last 95% of humans only win in aggregate, so there is still no selfish reason to think that taking that bet is the best decision for an individual.

My thought experiment with the dice was meant to show that this reasoning also applies to regular expected utility maximization, so if they use that argument to dismiss all anthropic reasoning, then they have to reject basically all probabilistic decision making. Presumably they will not reject all probabilistic reasoning, and therefore they have to reject this argument. (Assuming that I've correctly understood their argument and the logic I've just laid out holds.)

Edit: Minor changes to improve clarity.

Comment by Nox ML on Why am I Me? · 2023-06-28T18:48:06.311Z · LW · GW

You do this 100 times, would you say you ought to find your number >5 about 95 times?

I actually agree with you that there is no single answer to the question of "what you ought to anticipate"! Where I disagree is that I don't think this means that there is no best way to make a decision. In your thought experiment, if you get a reward for correctly guessing whether your number is >5, then you should guess that your number is >5 every time.

My justification for this is that objectively, those who make decisions this way will tend to have more reward and outcompete those who don't. This seems to me to be as close as we can get to defining the notion of "doing better when faced with uncertainty", regardless of if it involves the "I" or not, and regardless of if you are selfish or not.

Edit to add more (and clarify one previous sentence):

Even in the case where you repeat the die-roll experiment 100 times, there is a chance that you'll lose every time, it's just a smaller chance. So even in that case it's only true that the strategy maximizes your personal interest "in aggregate".

I am also neither a "halfer" nor a "thirder". Whether you should act like a halfer or a thirder depends on how reward is allocated, as explained in the post I originally linked to.

Comment by Nox ML on Why am I Me? · 2023-06-28T14:18:54.845Z · LW · GW

By pretty much every objective measure, the people who accept the doomsday argument in my thought experiment do better than those who don't. So I don't think it takes any additional assumptions to conclude that even selfish people should say yes.

From what I can tell, a lot of your arguments seem to be applicable even outside anthropics. Consider the following experiment. An experimenter rolls a fair 100-sided die. Then they ask someone to guess whether they rolled a number >5, giving them some reward if they guess correctly. Then they reroll and ask a different person, and repeat this 100 times. Now suppose I was one of these 100 people. In this situation, I could use reasoning that seems very similar to yours to reject any kind of action based on probability:

I either get the reward or not depending on whether the die landed on a number >5. Giving an answer based on expected value might maximize the total benefit of the 100 people in aggregate, but it doesn't help me, because I can't know whether the die is showing >5. It is correct to say that if everyone makes decisions based on expected utility, then they will have more reward combined. But I will only have more reward if the die is >5, and this was already determined at the time of my decision, so there is no fact of the matter about what the best decision is.

And granted, it's true, you can't be sure what the die is showing in my experiment, or which copy you are in anthropic problems. But the whole point of probability is reasoning when you're not sure, so that's not a good reason to reject probabilistic reasoning in either of those situations.
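As a quick sketch of the point (my illustration, not part of the original comment), the 100-person version can be simulated directly; those who always answer ">5" win about 95% of the time:

```python
import random

# Each person faces one independent roll of a fair 100-sided die and answers ">5".
trials = 100_000
wins = sum(1 for _ in range(trials) if random.randint(1, 100) > 5)
print(f"answering '>5' was correct in {wins / trials:.1%} of cases")  # ≈ 95%
```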

Comment by Nox ML on Why am I Me? · 2023-06-26T04:45:24.887Z · LW · GW

Suppose when you are about to die, time freezes, and Omega shows up and tells you this: "I appear once to every human who has ever lived or will live, right when they are about to die. Answer this question with yes or no: are you in the last 95% of humans who will ever live in this universe? If your answer is correct, I will bring you to this amazing afterlife that I've prepared. If you guess wrong, you get nothing." Do you say yes or no?

Let's look at actual outcomes here. If every human says yes, 95% of them get to the afterlife. If every human says no, 5% of them get to the afterlife. So it seems better to say yes in this case, unless you have access to more information about the world than is specified in this problem. But if you accept that it's better to say yes here, then you've basically accepted the doomsday argument.

However, an important thing to note is that when using the doomsday argument, there will always be 5% of people who are wrong. And those 5% will be the first people who ever lived, whose decisions in many ways have the biggest impact on the world. So in most situations, you should still be acting like there will be a lot more people in the future, because that's what you want the first 5% of people to have been doing.

More generally, my procedure for resolving this type of confusion is similar to how this post handles the Sleeping Beauty problem. Basically, probability is in the mind, so when a thought experiment messes with the concept of "mind", probability can become underspecified. But if you convert it to a decision problem by looking at the actual outcomes and rating them based on your preferences, things start making sense again.

Comment by Nox ML on Sentience matters · 2023-05-30T20:20:10.891Z · LW · GW

I like the distinctions you make between sentient, sapient, and conscious. I would like to bring up some thoughts about how to choose a morality that I think are relevant to your points about death of cows and transient beings, which I disagree with.

I think that when choosing our morality, we should do so under the assumption that we have been given complete omnipotent control over reality and that we should analyze all of our values independently, not taking into consideration any trade-offs, even when some of our values are logically impossible to satisfy simultaneously. Only after doing this do we start talking about what's actually physically and logically possible and what trade-offs we are willing to make, while always making sure to be clear when something is actually part of our morality vs when something is a trade-off.

The reason for this approach is to avoid accidentally locking in trade-offs into our morality which might later turn out to not actually be necessary. And the great thing about it is that if we have not accidentally locked in any trade-offs into our morality, this approach should give back the exact same morality that we started off with, so when it doesn't return the same answer I find it pretty instructive.

I think this applies to the idea that it's okay to kill cows, because when I consider a world where I have to decide whether or not cows die, and this decision will not affect anything else in any way, then my intuition is that I slightly prefer that they not die. Therefore my morality is that cows should not die, even though in practice I think I might make similar trade-offs as you when it comes to cows in the world of today.

Something similar applies to transient computational subprocesses. If you had unlimited power and you had to explicitly choose if the things you currently call "transient computational subprocesses" are terminated, and you were certain that this choice would not affect anything else in any way at all (not even the things you think it's logically impossible for it not to affect), would you still choose to terminate them? Remember that no matter what you choose here, you can still choose to trade things off the same way afterwards, so your answer doesn't have to change your behavior in any way.

It's possible that you still give the exact same answers with this approach, but I figure there's a chance this might be helpful.

Comment by Nox ML on Mental Models Of People Can Be People · 2023-04-26T01:07:28.602Z · LW · GW

The reason I reject all the arguments of the form "mental models are embedded inside another person, therefore they are that person" is that this argument is too strong. If a conscious AI was simulating you directly inside its main process, I think you would still qualify as a person of your own, even though the AI's conscious experience would contain all your experiences in much the same way that your experience contains all the experiences of your character.

I also added an addendum to the end of the post which explains why I don't think it's safe to assume that you feel everything your character does the same way they do.

Comment by Nox ML on Mental Models Of People Can Be People · 2023-04-26T00:06:03.378Z · LW · GW

I think we just have different values. I think death is bad in itself, regardless of anything else. If someone dies painlessly and no one ever noticed that they had died, I would still consider it bad.

I also think that truth is good in and of itself. I want to know the truth and I think it's good in general when people know the truth.

Here, I technically don’t think you’re lying to the simulated characters at all—in so far as the mental simulation makes them real, it makes the fictional world, their age, and their job real too.

Telling the truth to a mental model means telling them that they are a mental model, not that they are a regular human. It means telling them that the world they think they live in is actually a small mental model living in your brain with a minuscule population.

And sure, it might technically be true that within the context of your mental models, they "live" inside the fictional world, so "it's not a lie". But not telling them that they are in a mental model is such an incredibly huge thing to omit that I think it's significantly worse than the majority of lies people tell, even though it can technically qualify as a "lie by omission" if you phrase it right.

so I would expect simulating pain in such away to be profoundly uncomfortable for the author.

I've given my opinion on this in an addendum added to the end of the post, since multiple people brought up similar points.

Comment by Nox ML on Mental Models Of People Can Be People · 2023-04-26T00:04:22.910Z · LW · GW

Points similar to this have come up in many comments, so I've added an addendum at the end of my post where I give my point of view on this.

Comment by Nox ML on Mental Models Of People Can Be People · 2023-04-25T17:38:10.737Z · LW · GW

I can definitely create mental models of people who have a pain-analogue which affects their behavior in ways similar to how pain affects mine, without their pain-analogue causing me pain.

there’s no point on reducing this to a minimal Platonic concept of ‘simulating’ in which simulating excruciating pain causes excruciating pain regardless of physiological effects.

I think this is the crux of where we disagree. I don't think it matters if pain is "physiological" in the sense of being physiologically like how a regular human feels pain. I only care if there is an experience of pain.

I don't know of any difference between physiological pain and the pain-analogues I inflicted on my mental models which I would accept as necessary for it to qualify as an experience of pain. But since you clearly do think that there is such a difference, what would you say the difference is?

Comment by Nox ML on Mental Models Of People Can Be People · 2023-04-25T17:24:59.736Z · LW · GW

I don't personally think I'm making this mistake, since I do think that saying "the conscious experience is the data" actually does resolve my confusion about the hard problem of consciousness. (Though I am still left with many questions.)

And if we take reductionism as a strongly supported axiom (which I do), then necessarily any explanation of consciousness will have to be describable in terms of data and computation. So it seems to me that if we're waiting for an explanation of experience that doesn't boil down to saying "it's a certain type of data and computation", then we'll be waiting forever.

Comment by Nox ML on Mental Models Of People Can Be People · 2023-04-25T09:00:46.343Z · LW · GW

My best guess about what you mean is that you are referring to the part in the "Ethics" section where I recommend just not creating such mental models in the first place?

To some extent I agree that mortality doesn't mean it should've never lived, and indeed I am not against having children. However, after stumbling on the power to create lives that are entirely at my mercy and very high-maintenance to keep alive, I became more deontological about my approach to the ethics of creating lives. I think it's okay to create lives, but you must put in a best effort to give them the best life that you can. For mental models, that includes keeping them alive for as long as you do, letting them interact with the world, and not lying to them. I think that following this rule leads to better outcomes than not following it.

Comment by Nox ML on Mental Models Of People Can Be People · 2023-04-25T04:31:46.132Z · LW · GW

I wouldn't quite say it's a typical mind fallacy, because I am not assuming that everyone is like me. I'm just also not assuming that everyone is different from me, and I'm using heuristics to support my inference that it's probably not too uncommon, such as reports by authors of their characters surprising them. Another small factor in my inference is that I don't know how I'd write good fiction without making mental models that qualified as people, though admittedly I have very high standards with respect to characterization in fiction.

(I am aware that I am not consistent about which phrase I use to describe just how common it is for models to qualify as people. This is because I don't actually know how common it is, I only have inferences based on the evidence I already gave to go on.)

The rest of your post is interesting and I think I agree with it, though we've digressed from the original subject on that part.

Thanks for your replies.

Comment by Nox ML on Mental Models Of People Can Be People · 2023-04-25T04:17:31.562Z · LW · GW

The reason I care if something is a person or not is that "caring about people" is part of my values. I feel pretty secure in taking for granted that my readers also share that value, because it's a pretty common one and if they don't then there's nothing to argue about since we just have incompatible utility functions.

What would be different if it were or weren’t, and likewise what would be different if it were just part of our person-hood?

One difference that I would expect in a world where they weren't people is that there would be some feature you could point to in humans which cannot be found in mental models of people, and for which there is a principled reason to say "clearly, anything missing that feature is not a person".

Comment by Nox ML on Mental Models Of People Can Be People · 2023-04-25T03:43:45.330Z · LW · GW

I do not think that literally any mental model of a person is a person, though I do draw the line further than you.

What are your reasons for thinking that mental models are closer to Markov models than tulpas? My reason for leaning more on the latter side is my own experience writing, where I found it easy to create mental models of characters who behaved coherently and with whom I could have long conversations on a level above even GPT4, let alone Markov models.

Another piece of evidence is this study. I haven't done any actual digging to see if the methodology is any good, all I did was see the given statistic, but it is a much higher percentage than even I would have predicted before seeing it, and I already believed everything I wrote in this post!

Though I should be clear that whether or not a mental model is a person depends on the level of detail, and surely there are a lot that are not detailed enough to qualify. I just also think that there are a lot that do have enough detail, especially among writers.

That said, a lot of the reasons humans want to continue their thread of experience probably don’t apply to most tulpas (e.g. when a human dies, the substrate they were running on stops functioning, all their memories are lost, and they lose their ability to steer the world towards states they prefer whereas if a tulpa “dies” its memories are retained and its substrate remains intact, though it still I think loses its ability to steer the world towards its preferred states).

I find it interesting that multiple people have brought up "memories aren't lost" as part of why it's less bad for mental models or tulpas to die, since I personally don't care if my memories live on after I die and would not consider that to be even close to true immortality.

Comment by Nox ML on Mental Models Of People Can Be People · 2023-04-25T02:20:10.738Z · LW · GW

I disagree that it means that all thinking must cease. Only a certain type of thinking, the one involving creating sufficiently detailed mental models (edit: of people). I have already stopped doing that personally, though it was difficult and has harmed my ability to understand others. Though I suppose I can't be sure about what happens when I sleep.

Still, no, I don't want everyone to die.

Comment by Nox ML on Mental Models Of People Can Be People · 2023-04-25T02:05:57.081Z · LW · GW

That's right. It's why I included the warning at the top.

Comment by Nox ML on Mental Models Of People Can Be People · 2023-04-25T01:37:02.356Z · LW · GW

One of my difficulties with this is that it seems to contradict one of my core moral intuitions, that suffering is bad. It seems to contradict it because I can inflict truly heinous experiences onto my mental models without personally suffering for it, but your point of view seems to imply that I should be able to write that off just because the mental model happens to be continuous in spacetime with me. Or am I misunderstanding your point of view?

To give an analogy and question of my own, what would you think about an alien unaligned AI simulating a human directly inside its own reasoning center? Such a simulated human would be continuous in spacetime with the AI, so would you consider the human to be part of the AI and not have moral value of their own?

Comment by Nox ML on Are tulpas moral patients? · 2022-12-28T17:20:39.436Z · LW · GW

Your heuristic is only useful if it's actually true that being self-sustaining is strongly correlated with being a person. If this is not true, then you are excluding things that are actually people based on a bad heuristic. I think it's very important to get the right heuristics: I've been wrong about what qualified as a person before, and I have blood on my hands because of it.

I don't think it's true that being self-sustaining is strongly correlated with being a person, because being self-sustaining has nothing to do with personhood, and because in my own experience I've been able to create mental constructs which I believe were people and which I was able to start and stop at will.

Edit: You provided evidence that being self-sustaining implies personhood with high probability, and I agree with that. However, you did not provide evidence of the converse, nor for your assertion that it's not possible to "insert breakpoints" in human plurality. This second part is what I disagree with.

I think there are some forms of plurality where it's not possible to insert breakpoints, such as your alters, and some forms where it is possible, such as mine, and I think the latter is not too uncommon, because I did it unknowingly in the past.

Comment by Nox ML on Are tulpas moral patients? · 2022-12-28T14:52:32.202Z · LW · GW

I would say that it ceases to be a character and becomes a tulpa when it can spontaneously talk to me. When I can’t will it away, when it resists me, when it’s self sustaining.

I disagree with this. Why should it matter if someone is dependent on someone else to live? If I'm in the hospital and will die if the doctors stop treating me, am I no longer a person because I am no longer self sustaining? If an AI runs a simulation of me, but has to manually trigger every step of the computation and can stop anytime, am I no longer a person?

Comment by Nox ML on Are tulpas moral patients? · 2022-12-28T14:50:40.840Z · LW · GW

I think integration and termination are two different things. It's possible for two headmates to merge and produce one person who is a combination of both. This is different from dying, and if both consent, then I suppose I can't complain. But it's also possible to just terminate one without changing the other, and that is death.

But currently I am thinking that singlet personalities have less relevance than I thought and harm/suffering is bad in a way that is not connected to having an experiencer experience it.

I don't understand what you mean by this. I do think that tulpas experience things.

Comment by Nox ML on Are tulpas moral patients? · 2022-12-28T00:05:34.760Z · LW · GW

Terminating a tulpa is bad for reasons that homicide is bad.

That is exactly my stance. I don't think creating tulpas is immoral, but I do think killing them, harming them, and lying to them is immoral for the same reasons it's immoral to do so to any other person. Creating a tulpa is a big responsibility and not one to take lightly.

you should head of to cancel Critical Role and JJR Martin.

I have not consumed the works of the people you are talking about, but yes, depending on how exactly they model their characters in their minds, I think it's possible that they are creating, hurting, and then ending lives. There's nothing I can do about it, though.

It is a bit murky on what kind of delineation those that do make a divison in characters and tulpas are after.

I don't really know. I'm basing my assertion that I make less of a distinction between characters and tulpas than other people on the fact that I see a lot of people with tulpas who continue to write stories, even though I don't personally see how I could write a story with good characterization without creating tulpas.

Comment by Nox ML on Are tulpas moral patients? · 2022-12-27T23:00:15.307Z · LW · GW

That's fair. I've been trying to keep my statements brief and to the point, and did not consider the audience of people who don't know what tulpas are. Thank you for telling me this.

The word "tulpa" is not precisely defined and there is not necessarily complete agreement about it. However, I have a relatively simple definition which is more precise and more liberal than most definitions (that is, my definition includes everything usually called a tulpa and more, and is not too mysterious), so I'll just use my definition.

It's easiest to first explain my own experience with creating tulpas, then relate my definition to that. Basically, to create tulpas, I think about a personality, beliefs, desires, knowledge, emotions, identity, and a situation. I refer to keeping these things in my mind as forming a "mental model" of a person. Then I let my subconscious figure out what someone like this mental model would do in this situation. Then I update the mental model according to the answer, and repeat the process with the new mental model, in a loop.

In this way I can have conversations with the tulpa, and put them in almost any situation I can imagine.

So I would define a tulpa this way: A tulpa is the combination of information in the brain encoding a mental model of a person, plus the human intelligence computing how the mental model evolves in a human-like way.

My definition is more liberal than most definitions, because most people who agree that tulpas are people seem to make a strong distinction between characters and tulpas, but I don't make a strong distinction and this definition also includes many characters.

And to not really answer your direct questions: I don't know Serial Experiments Lain, and you're the person who's in the best position to figure out if Vax'ildan is a tulpa by my definition. As for "you are your masks", I'm not sure. I know that some people report naturally having multiple personalities and might like the mask metaphor, but I don't personally experience that so I don't have much to say about it, except that it doesn't really fit my experiences.

(I do not create new tulpas anymore for ethical reasons.)

Comment by Nox ML on Are tulpas moral patients? · 2022-12-27T21:43:36.612Z · LW · GW

I don't think I'm bundling anything, but I can see how it would seem that way. My post is only about whether tulpas are people / moral patients.

I think that the question of personhood is independent of the question of how to aggregate utility or how to organize society, so I think that arguments about the latter have no bearing on the former.

I don't have an answer for how to properly aggregate utility, or how to properly count votes in an ideal world. However, I would agree that in the current world, votes and other legal things should be done based on physical bodies, because there is no way to check for tulpas at this time.

Comment by Nox ML on Are tulpas moral patients? · 2022-12-27T21:16:06.357Z · LW · GW

Tulpas are a huge leak, they basically let someone turn themselves into a utility monster simply by bifurcating their internal mental landscape, and it would be very unwise to not consider the moral weight of a given tulpa as equal to X/n where n is the number of members within their system

This is a problem that arises in any hypothetical where someone is capable of extremely fast reproduction, and is not specific to tulpas. So I don't think that invoking utility monsters is a good argument for why tulpas should only be counted as a fraction of a person.

Regarding your other points, I think that you take the view of narratives too far. What I see, hear, feel, and think, in other words my experiences, are real. (Yes, they are reducible to physics, but so is everything else on Earth, so I think it's fair to use the word "real" here.) I don't see in what way experiences are similar to a meme, and unlike what the word narrative implies, I don't think they are post-hoc rationalizations.

I know there are studies that show that people will often come up with post-hoc rationalizations for why they did something. However, there have been many instances in my life where I consciously thought about something and came to a conclusion which surprised me and changed my behavior, and where I remembered all the steps of my conscious reasoning, such that it seems very unlikely that the conscious chain of reasoning was invented post-hoc.

In addition, being aware of the studies, I've found that if I pay attention I can often notice when I don't actually remember why I did something and I'm just coming up with a plausible-seeming explanation, vs when I actually remember the actual thought process that led to a decision. For this reason I think that post-hoc rationalizations are a learned behavior and not fundamental to experience or to personhood and moral patienthood.

Comment by Nox ML on Are tulpas moral patients? · 2022-12-27T19:22:05.893Z · LW · GW

My belief is that yes, tulpas are people of their own (and therefore moral patients). My reasoning is as follows.

If I am a person and have a tulpa and they are not a person of their own, then either (a) there must exist some statement which is a requirement for personhood and which is true about me but not true about the tulpa, or (b) the tulpa and I must be the same person.

In the case of (a), tulpas have analogues to emotions, desires, beliefs, personality, sense of identity, and they behave intelligently. They seem to have everything that I care about in a person. Your mileage may vary, but I've thought about this subject a lot and have not been able to find anything that tulpas are missing which seems like it might be an actual requirement for personhood. Note that a useful thought experiment when investigating possible requirements for personhood that tulpas don't meet is to imagine a non-tulpa with an analogous disability, and see if you would still consider the non-tulpa with that disability to be a person.

Now, if we grant that the tulpa is a person, we must still show that (b) is wrong, and that they are not the same person as their headmate. My argument here is also very simple. I simply observe that tulpas have different emotions, desires, beliefs, personality, and sense of identity than their headmate. Since these are basically all the things I actually care about in a person, it doesn't make sense to say that someone who differs in all those ways is the same. In addition, I don't think that sharing a brain is a good reason to say that they are the same person, for a similar reason to why I wouldn't consider myself to be the same person as an AI that was simulating me inside its own processors.

Obviously, as with all arguments about consciousness and morality, these arguments are not airtight, but I think they show that the personhood of tulpas should not be easily dismissed.

Edit: I've provided my personal definition of the word "tulpa" in my second reply to Slider below. I do not have a precise definition of the word "person", but I challenge readers to try to identify what difference between tulpas and non-tulpas they think would disqualify a tulpa from being a person.