Posts

What does "probability" really mean? 2022-12-28T03:20:45.651Z
How Death Feels 2022-12-18T23:47:47.117Z
Can we, in principle, know the measure of counterfactual quantum branches? 2022-12-18T22:07:56.736Z
A thought experiment 2022-12-10T05:23:19.868Z
How does acausal trade work in a deterministic multiverse? 2022-11-19T01:50:56.299Z
Is acausal extortion possible? 2022-11-11T19:48:24.672Z

Comments

Comment by sisyphus (benj) on What does "probability" really mean? · 2022-12-29T01:45:57.335Z · LW · GW

Huh, I thought that many people supported both a Tegmark IV multiverse and a Bayesian interpretation of probability theory, yet you list them as opposite approaches?

I suppose my current philosophy is that the Tegmark IV multiverse does exist, and probability refers to the credence I should lend to each possible world that I could be embedded in (this assumes that "I" am localized to only one possible world). This seems to incorporate both of the approaches that you listed as "opposite".

Comment by sisyphus (benj) on Is acausal extortion possible? · 2022-12-22T23:12:09.811Z · LW · GW

I think I basically agree, though I am pretty uncertain. You'd basically have to simulate not just the other being, but also the other being simulating you, with a certain fidelity. In my post I posed the scenario where the other being is watching you through an ASI simulation, so it is computationally much easier for them to simulate you in their head, but this means you have to simulate what the other being is thinking as well as what it is seeing. Simply modelling the being as thinking "I will torture him for X years if he doesn't do action Y" is an oversimplification, since you also have to expand out the "him" as "a simulation of you" in very high detail.

Therefore, I think it is still extremely computationally intensive for us to simulate the being simulating us.

Comment by sisyphus (benj) on Can we, in principle, know the measure of counterfactual quantum branches? · 2022-12-19T08:41:08.103Z · LW · GW

I get that doing something like this is basically impossible using any practical technology, but I just wanted to know if there was anything about it that was impossible in principle (e.g. not even an ASI could do it).

The main thing I wanted to ask and get clarification on is whether or not we could know the measure of existence of branches that we cannot observe. The example I like to use is that it is possible to know where an electron "is" once we measure it, and then the wave function of the electron evolves according to the Schrodinger equation. The measure of existence of a future timeline where the electron is measured at a coordinate X is equal to the squared amplitude of the wave function at X after being evolved forward using the Schrodinger equation. But I am guessing that it is impossible to go backwards, in the sense of deducing the state of the wave function before the initial measurement from the measurement result (what was the amplitude of the wave function at Y before we measured the electron at Y)? Does that make sense?
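To spell out the forward-direction bookkeeping I have in mind (just the standard Born-rule formula written out in LaTeX, with U(t) the Schrodinger evolution; this is my own sketch, not something taken from the original post):

% measure of the future branch where the electron is found at X at time t,
% given the post-measurement state |\psi_0> and Hamiltonian H
\mu(X, t) = \left| \langle X \mid U(t) \mid \psi_0 \rangle \right|^2,
\qquad U(t) = e^{-iHt/\hbar}

The backwards question is then: given only the recorded outcome Y, can we recover \langle Y \mid \psi_{\mathrm{pre}} \rangle for the pre-measurement state \psi_{\mathrm{pre}}? That is the part I am unsure about.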

Comment by sisyphus (benj) on Can we, in principle, know the measure of counterfactual quantum branches? · 2022-12-19T08:33:40.594Z · LW · GW

I thought David Deutsch had already worked out a derivation of the Born rule using decision theory? I guess it does not explain objective probability, but as far as I know the question of what probability even means is very vague anyway.

I know that the branching is just a metaphor for the human brain to understand MWI better, but the main question I wanted to ask is whether or not you can know the amplitude of different timelines that "diverged" a long time ago. E.g. it is possible to know where an electron "is" once we measure it, and then the wave function of the electron evolves according to the Schrodinger equation. The measure of existence of a future timeline where the electron is measured at X is equal to the squared amplitude of the wave function at X after being evolved forward using the Schrodinger equation. But I am guessing that it is impossible to go backwards, in the sense of deducing the state of the wave function before a measurement from the measurement result (what was the amplitude of the wave function at X before we measured the electron at X)? Does that make sense?

Comment by sisyphus (benj) on How Death Feels · 2022-12-19T03:29:32.159Z · LW · GW

I disagree with your 300 room argument. My identity is tied to my mind, which is a computation carried out by all 300 copies of my body in these 300 rooms. If all 300 rooms were suddenly filled with sleeping gas and 299 of the copies were quietly killed, only 1 copy would wake up. However, I should still expect to experience waking up with probability 1, since that is the only possible next observer moment in this setup. The 299 dead copies of me cannot generate any further observer moments since they are dead.
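As a toy way of writing down the bookkeeping I'm using here (purely illustrative, and it assumes the anthropic premise above that dead copies contribute zero future observer moments):

# Toy bookkeeping for the 300-rooms scenario (illustrative only; assumes dead
# copies generate no further observer moments, as argued above).
N_COPIES = 300
N_SURVIVORS = 1

# "Outside" view: the chance that any particular copy survives the night.
p_copy_survives = N_SURVIVORS / N_COPIES                  # ~0.0033

# "Inside" view argued for here: condition on there being a next observer
# moment at all; only surviving copies can supply one, so waking up gets
# all of the probability mass.
p_next_moment_is_waking_up = N_SURVIVORS / N_SURVIVORS    # 1.0

print(p_copy_survives, p_next_moment_is_waking_up)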

I'd argue that you cannot experience a coma: since you're unconscious, you can't really experience anything while you're in one. When you wake up it will feel as if you have just jumped forward in time. Deep sleep is qualitatively different, and I was careful to avoid saying it is exactly like sleep, since sleep is simply a state of lowered consciousness rather than complete unconsciousness (e.g. we can still experience things like dreams). It is possible that a coma is also lowered consciousness like sleep, in which case you can experience comas, but this says nothing about experiencing unconsciousness.

Comment by sisyphus (benj) on How Death Feels · 2022-12-19T02:24:51.374Z · LW · GW

I think the main crux here is the philosophy of identity. You do not regard the emulated mind running on a machine on the other side of the room as "you", but if the subjective experiences are identical, you cannot rule out the possibility of being the emulated mind yourself. They are functionally identical and thus should both be considered "you", as they are carrying out the same computation.

"And to be consistent one would be adviced to "expect" at any moment to fluctuate into mars and choke for lack of air."

You're right that this has some probability, and in a multiverse this possibility would be realised. But in this case the probability of it occurring is so small that we needn't pay attention to it. This is qualitatively different from "low probability of being resurrected", because "teleporting to Mars" has a counterfactual experience of "not teleporting to Mars" which outweighs its probability, whereas "experiencing resurrection" does not have a corresponding counterfactual experience of "experiencing non-experience", since that is a tautologically meaningless statement.

"And it is not like the universe would suddenly abandon its indifference to human desire just for respecting non-destruction."

I am not saying the universe necessarily has to care about human experiences at all, just that death and nonexperience are tautologically meaningless concepts. In fact I'd rather the universe did not make me immortal.

Comment by sisyphus (benj) on How Death Feels · 2022-12-19T02:15:17.048Z · LW · GW

"You seem to be arguing that you will experience "yourself" in many other parts of a multiverse after you die. Why does this not occur before you die?"

Because even though "you" in the sense of a computation has multiple embeddings in the multiverse, the vast majority of them share the same subjective experience and are hence functionally the same being; you can't distinguish between them yourself. The difference is that while some of these embeddings end when they die, you will only experience the ones that continue on afterwards (since you can't experience nonexperience).

"Can you clarify what you mean by "feel like falling asleep"? I don't see why there would be even a moment of unconsciousness if your idea is true."

I meant that death just feels like a timeskip subjectively. And you're correct that there wouldn't be any moments of unconsciousness; in fact, this is already true in real life. You can't experience unconsciousness, and when you wake up it subjectively feels like a jump forward from where you left off.

Comment by sisyphus (benj) on Can we, in principle, know the measure of counterfactual quantum branches? · 2022-12-18T22:56:56.078Z · LW · GW

Yeah, my main question is: can we, even in principle, estimate the pure measure of existence of branches which diverged from our current branch? We can know the probabilities conditioned on the present, but I don't see how we can work backwards to estimate the probability of a past event not occurring, just like how a wavefunction can be evolved forward after a measurement but cannot be evolved backwards from the measurement result itself to deduce the probability of obtaining that measurement. Or can we?

I mainly picked "world where WW2 did not happen" to illustrate what I mean by counterfactual branches, in the sense that it has already diverged from us and is not in our future.

Comment by benj on [deleted post] 2022-12-14T02:16:37.838Z

Woops, edited. Thanks! :)

Comment by benj on [deleted post] 2022-12-13T22:30:54.026Z

Completely agree here. I've known the risks involved for a long time, but I've only really felt them recently. I think Robert Miles phrases it quite nicely on the Inside View podcast, where "our System 1 thinking finally caught up with our System 2 thinking."

Comment by sisyphus (benj) on A thought experiment · 2022-12-11T11:23:43.209Z · LW · GW

Ah, I see. Sorry for not being too familiar with the lingo, but does a uniform prior just mean equal probability assigned to each possible embedding?

Comment by sisyphus (benj) on A thought experiment · 2022-12-11T06:13:57.972Z · LW · GW

I suppose the lollipops are indeed an unnecessary addition, so the final question can really be reframed as "what is the probability that you will see heads?"

Comment by sisyphus (benj) on A thought experiment · 2022-12-11T06:11:57.681Z · LW · GW

Right, so your perspective is that, due to the multiple embeddings of yourself in the heads scenario, it is the 1001:1 option. That line of reasoning is roughly what I thought as well, but it goes against the 1:1 odds my intuition would suggest. I guess this is the same as the halfer vs thirder debate, where 1:1 is the halfer position and 1001:1 is the thirder position.
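Here is roughly how I picture the two ways of counting (a sketch only; the specific setup, a fair coin plus 1000 simulated copies of you on heads, is my reading of where the 1001:1 figure comes from in this thread, not something copied from the post):

from fractions import Fraction

# Sketch of the halfer vs thirder calculation for this thought experiment.
p_heads = Fraction(1, 2)
embeddings_heads = 1001   # assumed: the original plus 1000 simulations on heads
embeddings_tails = 1      # assumed: just the original on tails

# Halfer-style answer: the coin is fair, so the odds of heads are simply 1:1,
# regardless of how many copies exist.
p_heads_halfer = p_heads                                    # 1/2

# Thirder-style answer: weight each outcome by how many copies ("embeddings")
# experience it, then renormalise over all copies.
w_heads = p_heads * embeddings_heads
w_tails = (1 - p_heads) * embeddings_tails
p_heads_thirder = w_heads / (w_heads + w_tails)             # 1001/1002, i.e. 1001:1

print(p_heads_halfer, p_heads_thirder)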

Comment by sisyphus (benj) on A thought experiment · 2022-12-11T05:30:01.307Z · LW · GW

I see, thanks. IMO the two are indeed quite similar, but I think my example illustrates the problem of self-location uncertainty in a clearer way. That being said, what are your thoughts on the probability of getting a lollipop if you're in such a scenario? Are the odds 1:1 or 1001:1?

Comment by sisyphus (benj) on A thought experiment · 2022-12-11T05:27:56.253Z · LW · GW

Sorry, but I think you may have misunderstood the question, since your answer doesn't make any sense to me. The main thing I was puzzled about was whether the odds of getting a lollipop are 1:1 (matching the probability of the fair coin coming up heads) or 1001:1 (i.e. whether the simulations affect the self-location uncertainty). As shiminux said, it is similar to the Sleeping Beauty problem, where self-location uncertainty is at play.

Comment by sisyphus (benj) on Is acausal extortion possible? · 2022-12-10T21:51:56.717Z · LW · GW

Yes please, I think that would be quite helpful. I'm no longer that scared of it, but I still have some background anxiety that flares up sometimes. I feel like an FAQ, or at least some form of "official" explanation from knowledgeable people of why it's not a big deal, would help a lot. :)

Comment by sisyphus (benj) on How does acausal trade work in a deterministic multiverse? · 2022-11-20T22:55:35.601Z · LW · GW

I see, thanks for this comment. But can humans be considered as possessing an abstract decision-making computation? It seems that, due to quantum mechanics, it's impossible to predict a human's decision perfectly even if you have the complete initial conditions.

Comment by sisyphus (benj) on How does acausal trade work in a deterministic multiverse? · 2022-11-20T11:15:31.888Z · LW · GW

I understand the logic but in a deterministic multiverse the expected utility of any action is the same since the amplitude of the universal wave function is fixed at any given time. No action has any effect on the total utility generated by the multiverse.

Comment by sisyphus (benj) on How does acausal trade work in a deterministic multiverse? · 2022-11-20T10:31:35.404Z · LW · GW

I think the fact that the multiverse is deterministic does play a role: if an agent's utility function covers the entire multiverse and the agent cares about the other branches, its decision theory would suffer paralysis, since every action has the same expected utility, namely the total amount of utility available to the agent within the multiverse, which is predetermined. Utility functions seem to only make sense when constrained to one branch, with the agent treating its branch as the sole universe; only in this scenario will different actions have different expected utilities.
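A toy illustration of the bookkeeping I have in mind (the branch structure, measures, and utilities are all made up for the sake of the example):

# Toy sketch: a multiverse-wide utility function is constant across actions,
# while a branch-restricted one is not. All numbers are invented.
branches = {
    # branch_id: (reality measure, utility realised in that branch)
    "A": (0.5, 10.0),
    "B": (0.3, -2.0),
    "C": (0.2, 5.0),
}

def multiverse_utility(action):
    # If the universal wave function evolves deterministically, the set of
    # branches and their measures does not depend on what "I" do in my branch,
    # so this sum comes out the same for every action: decision paralysis.
    return sum(measure * u for measure, u in branches.values())

def branch_utility(action, my_branch="A"):
    # Restricting attention to my own branch, different actions can change the
    # utility realised there (toy effect: one action is worth 3 extra utils).
    measure, u = branches[my_branch]
    return u + (3.0 if action == "cooperate" else 0.0)

for action in ("cooperate", "defect"):
    print(action, multiverse_utility(action), branch_utility(action))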

Comment by sisyphus (benj) on How does acausal trade work in a deterministic multiverse? · 2022-11-20T06:09:57.967Z · LW · GW

But can that really be called acausal "trade"? It's simply the fact that in an infinite multiverse there will be causally independent agents who converge onto the same computation. If I randomly think "if I do X there will exist an agent who does Y and we both benefit", and somewhere in the multiverse there is indeed an agent who does Y in return for me doing X, can I really call that "trade" rather than just a coincidence that necessarily has to occur? And if my actions are determined by a utility function, and that utility function extends to other universes/branches, then it simply will not work, since no matter what action the agent takes, the total amount of utility in the multiverse is conserved. In order for a utility function to assign the agent's actions different amounts of expected utility, it necessarily has to focus on the single world the agent is in, instead of caring about other branches of the multiverse. Therefore, shouldn't perfectly rational beings care only about their own branch of the multiverse, since that's the only way their actions can be justified?

Comment by sisyphus (benj) on How does acausal trade work in a deterministic multiverse? · 2022-11-20T04:42:49.283Z · LW · GW

Thanks for the reply! I thought the point of the MWI multiverse is that the wavefunction evolves deterministically according to the Schrodinger equation, so if the utility function takes into account what happens in other universes then it will just output a single fixed constant no matter what the agent experiences, since the amplitude of the universal wave function at any given time is fixed. I think the only way for utility functions to make sense is for the agent to only care about its own branch of the universe and its own possible future observer-moments. Whatever "happens" in the other branches along with their reality measure is predetermined.

Comment by sisyphus (benj) on Is acausal extortion possible? · 2022-11-13T02:45:07.880Z · LW · GW

Wow. Didn't expect someone from the "rationalist" crowd to do the verbal equivalent of replying clown emojis to tweets you don't like. Your use of all caps really made your arguments so much more convincing. This truly is the pinnacle of human logical discourse: not providing explanations and just ridiculing ideas.

Comment by sisyphus (benj) on Is acausal extortion possible? · 2022-11-13T01:34:49.450Z · LW · GW

Like I said, "what they want" is irrelevant to the discussion here, you can imagine them wanting virtually anything. The danger lies in understanding the mechanism. You can imagine the alien telling you to order a chocolate ice cream instead of vanilla because that somehow via the butterfly effect yields positive expected utility for them (e.g. by triggering a chain of subtle causal events that makes the AGI we build slightly more aligned with their values or whatever). The problem is that there will also be an alien that wants you to order vanilla instead of chocolate, and who is also fine with applying a negative incentive. Sure, this means you can order whatever flavor of ice cream you want since you will get punished either way, but you're still getting punished (not good). 

Comment by sisyphus (benj) on Is acausal extortion possible? · 2022-11-12T23:42:39.535Z · LW · GW

The point is that "what it wants [us] to do" can essentially be anything we can imagine, thanks to the many-gods "refutation": every possible demand can be imposed on us by some alien in some branch of the quantum multiverse. It can be as ridiculous as leaving your front door open on a Wednesday night or flushing a straw down a toilet at 3 am, whatever eventually leads to more positive utility for the blackmailer via the butterfly effect (e.g. maybe flushing that straw down the toilet leads to a chain of causal events which makes the utility function of the AGI we build in the future slightly more aligned with their goals). "What the alien wants" is irrelevant here; the point is that now you know the mechanism by which aliens can coerce you into doing what they want, and merely knowing so gives other agents increased incentive to acausally extort you. You seem to be hung up on what exactly I'm scared the blackmailer wants me to do, whereas what I am actually worried about is that simply knowing the mechanism imposes danger. The real basilisk is the concept of acausal extortion itself, because it opens us up to many dangerous scenarios; I am not worried about any specific scenario.

The reason we cannot acausally trade with artificial superintelligences is that we lack the computing power to simulate them accurately, so an ASI would have no incentive to actually commit to cooperating in a prisoner's-dilemma-style situation instead of just letting us believe it will while it secretly defects. But we don't have this problem with non-superintelligences, like aliens or even humans who have succeeded in aligning their own AIs, since we can actually simulate such beings in our heads. What I am looking for is a concrete argument against this possibility.

Comment by sisyphus (benj) on Is acausal extortion possible? · 2022-11-12T21:18:48.055Z · LW · GW

The point is that X can essentially be any action; for the sake of the discussion, let's say the alien wants you to build an AGI that maximizes the alien's utility function in our branch of the multiverse.

My main point is that the many-gods refutation is a refutation of taking any specific action, but it is not a refutation of the claim that knowing about acausal extortion increases the proportion of bad future observer moments. In fact it makes things worse because, well, now you'll be tortured no matter what you do.

Comment by sisyphus (benj) on Is acausal extortion possible? · 2022-11-12T13:10:07.617Z · LW · GW

I don't think this would help considering my utter lack of capability to carry out such threats. Are there any logical mistakes in my previous reply or in my concerns regarding the usual refutations as stated in the question? I've yet to hear anyone engage with my points against the usual refutations.

Comment by sisyphus (benj) on Is acausal extortion possible? · 2022-11-12T03:03:23.730Z · LW · GW

I don't think I completely understood your point, but here is my best effort to summarize (please correct me if I'm wrong):

"Having the realization that there may exist other powerful entities that have different value systems should dissuade an individual from pursuing the interest of any one specific "god", and this by itself should act as a deterrent to potential acausal blackmailers."

I don't think this is correct, since beings that acausally trade can simply allocate different amounts of resources to trading with different partners based on the probability that those partners exist. This is stated on the LW wiki page for acausal trade:

"However, an agent can analyze the distribution of probabilities for the existence of other agents, and weight its actions accordingly. It will do acausal "favors" for one or more trading partners, weighting its effort according to its subjective probability that the trading partner exists. The expectation on utility given and received will come into a good enough balance to benefit the traders, in the limiting case of increasing super-intelligence."

For convenience, let's not consider modal realism but just the many-worlds interpretation of quantum mechanics. You would be correct that "every mad god has endless duplicates who make every possible decision" if we're considering versions of MWI where there are infinitely many universes, but then what matters to our subjective experience is the density of future world states where a specific outcome happens, or equivalently the possible future observer moments and what proportion of them are good or bad. What I am concerned about is that acausal extortion increases the probability (or fraction/density) of bad future observer moments.

Comment by sisyphus (benj) on Is acausal extortion possible? · 2022-11-12T00:18:33.431Z · LW · GW

I understand that if the multiverse theories are true (referencing MWI here, not modal realism) then everything physically possible will happen, including quantum branches containing AIs whose utility function directly incentivises torturing humans and maximising pain, so it's not like acausal extortion is the only route by which very horrible things could happen to me.

However, my main concern is whether or not being aware of acausal extortion scenarios increases my chance of ending up in such a very-horrible-scenario. For example, I think not being aware of acausal blackmail makes you far less likely to be in horrible scenarios, since blackmailers would have no instrumental incentive to extort unaware individuals, whereas for individuals who understand acausal trade and acausal extortion there is now an increased possibility. Like I said in my post, I don't really find the many-gods refutation helpful since it just means you will get tortured no matter what you do, which is not great if not being tortured is the goal.

Comment by sisyphus (benj) on Is acausal extortion possible? · 2022-11-11T22:42:56.881Z · LW · GW

Glad to hear you're planning to write up a post covering stuff like this! I personally think it's quite overdue, especially on a site like this, which I suspect has an inherent selection effect toward people who, like me, take ideas quite seriously. I don't quite understand the last part of your reply, though. I understand the importance of measure in decision making, but like I said in my post, I thought that if the blackmailer makes a significant number of simulations then indexical uncertainty could still be established, since it could still have a significant effect on your future observer moments. Did I make a mistake anywhere in my reasoning?

Comment by sisyphus (benj) on Is acausal extortion possible? · 2022-11-11T22:24:02.348Z · LW · GW

Hi, I think the reason people like me freak out about things like this is that we tend to accept new ideas quite quickly (e.g. if someone showed me actual proof that god is real, I would abandon my 7 years of atheism in a heartbeat and become a priest), so it's quite emotionally salient for me to imagine scenarios like this. And simply saying "You're worrying too much, find something else to do to take your mind off it" doesn't really help; it's like telling a depressed person "Just be happy, it's all in your head."

Comment by sisyphus (benj) on Is acausal extortion possible? · 2022-11-11T22:19:53.504Z · LW · GW

Hi, thank you for your comment. I consider the many-worlds interpretation to be the most economical interpretation of quantum mechanics, and I find modal realism relatively convincing, so acausal extortion still feels quite salient to me. Do you have any arguments against acausal extortion that would work if we assume that possible worlds are actually real? Thanks again for your reply.

Comment by sisyphus (benj) on Is acausal extortion possible? · 2022-11-11T22:19:32.039Z · LW · GW

Hi, thanks for your reply and for approving my post. I definitely get what you mean when you said "people trapped in a kinda anxiety loop about acausal blackmail", and admittedly I do consider myself somewhat in that category. However, simply being aware of this doesn't really help me get over my fears, since I am someone who really likes to hear concrete arguments about why stuff like this doesn't work rather than just being satisfied with a simple answer. You said that you have had to deal with this sort of thing a lot, so I presume you've heard a bunch of arguments and scenarios like this; do you mind sharing the reasons why you don't worry about it?