Bayesian Probability is for things that are Space-like Separated from You

post by Scott Garrabrant · 2018-07-10T23:47:49.130Z · LW · GW · 22 comments

First, I should explain what I mean by space-like separated from you. Imagine a world that looks like a Bayesian network, and imagine that you are a node in that Bayesian network. If there is a path from you to another node following edges in the network, I will say that node is time-like separated from you, and in your future. If there is a path from another node to you, I will say that node is time-like separated from you, and in your past. Otherwise, I will say that the node is space-like separated from you.
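To make the classification concrete, here is a minimal sketch (the graph and node names are made up for illustration) that splits the nodes of a directed acyclic graph into past, future, and space-like relative to a chosen node, using nothing but reachability along the edges:

```python
from collections import deque

def reachable_from(graph, start):
    """All nodes reachable from `start` by following directed edges."""
    seen, frontier = set(), deque([start])
    while frontier:
        node = frontier.popleft()
        for child in graph.get(node, []):
            if child not in seen:
                seen.add(child)
                frontier.append(child)
    return seen

def classify(graph, you):
    """Split every other node into past / future / space-like relative to `you`."""
    future = reachable_from(graph, you)  # there is a path from you to the node
    reversed_graph = {}
    for parent, children in graph.items():
        for child in children:
            reversed_graph.setdefault(child, []).append(parent)
    past = reachable_from(reversed_graph, you)  # there is a path from the node to you
    everything = set(graph) | {c for cs in graph.values() for c in cs}
    spacelike = everything - future - past - {you}
    return past, future, spacelike

# Hypothetical toy network: a -> you -> c, with b -> d off to one side.
graph = {"a": ["you"], "you": ["c"], "b": ["d"]}
print(classify(graph, "you"))  # past {'a'}, future {'c'}, space-like {'b', 'd'}
```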

Nodes in your past can be thought of as things that you observe. When you think about physics, it sure does seem like there are a lot of things in your past that you do not observe, but I am not thinking about physics-time, I am thinking about logical-time. If something is in your past, but has no effect on what algorithm you are running on what observations you get, then it might as well be considered as space-like separated from you. If you compute how everything in the universe evaluates, the space-like separated things are the things that can be evaluated either before or after you, since their output does not change yours or vice-versa. If you partially observe a fact, then I want to say you can decompose that fact into the part that you observed and the part that you didn't, and say that the part you observed is in your past, while the part you didn't observe is space-like separated from you. (Whether or not you actually can decompose things like this is complicated, and related to whether or not you can use the tickle defense in the smoking lesion problem.)

Nodes in your future can be thought of as things that you control. These are not always things that you want to control. For example, you control the output of "You assign probability less than 1/2 to this sentence," but perhaps you wish you didn't. Again, if you partially control a fact, I want to say that (maybe) you can break that fact into multiple nodes, some of which you control, and some of which you don't.

So, you know the things in your past, so there is no need for probability there. You don't know the things in your future, or things that are space-like separated from you. (Maybe. I'm not sure that talking about knowing things you control is not just a type error.) You may have cached that you should use Bayesian probability to deal with things you are uncertain about. You may have this justified by the fact that if you don't use Bayesian probability, there is a Pareto improvement that will cause you to predict better in all worlds. The problem is that the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them! Therefore, our reasons for liking Bayesian probability do not apply to our uncertainty about the things that are in our future! Note that many things in our future (like our future observations) are also in the future of things that are space-like separated from us, so we want to use Bayes to reason about those things in order to have better beliefs about our observations.

I claim that logical inductors do not feel entirely Bayesian, and this might be why. They can't be, if they are able to think about sentences like "You assign probability less than 1/2 to this sentence."
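As a toy illustration of why such sentences resist the usual treatment (this is just a brute-force consistency check, not how logical inductors actually work), the sketch below scans candidate probability assignments to "You assign probability less than 1/2 to this sentence" and finds that none of them agrees with the truth value it would create:

```python
# Sentence S: "You assign probability less than 1/2 to this sentence."
# Assigning probability p to S makes S true exactly when p < 1/2.

def truth_value(p):
    return 1.0 if p < 0.5 else 0.0

candidates = [i / 100 for i in range(101)]
self_consistent = [p for p in candidates if p == truth_value(p)]
print(self_consistent)  # [] -- every assignment contradicts what it brings about
```

Roughly speaking, a logical inductor's prices on such a sentence end up hovering around 1/2 rather than settling on a self-consistent value, which is part of why they do not feel entirely Bayesian.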

22 comments

Comments sorted by top scores.

comment by Qiaochu_Yuan · 2018-07-12T21:26:48.137Z · LW(p) · GW(p)
The problem is that the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them!

I want to point out that this is not an esoteric abstract problem but a concrete issue that actual humans face all the time. There's a large class of propositions whose truth value is heavily affected by how much you believe (and by "believe" I mean "alieve") them - e.g. propositions about yourself like "I am confident" or even "I am attractive" - and I think the LW zeitgeist doesn't really engage with this. Your beliefs about yourself express themselves in muscle tension which has real effects on your body, and from there leak out in your body language to affect how other people treat you; you are almost always in the state Harry describes in HPMoR of having your cognition constrained by the direct effects of believing things on the world as opposed to just by the effects of actions you take on the basis of your beliefs.

There's an amusing tie-in here to one of the standard ways to break the prediction market game we used to play at CFAR workshops. At the beginning we claim "the best strategy is to always write down your true probability at any time," but the argument that's supposed to establish this has a hidden assumption that the act of doing so doesn't affect the situation the prediction market is about, and it's easy to write down prediction markets violating this assumption, e.g. "the last bet on this prediction market will be under 50%."
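A toy calculation makes the broken assumption visible (using a Brier-style score as a stand-in for however the game is actually scored): when the market is "the last bet on this prediction market will be under 50%", the outcome is a function of the final report itself, so the loss-minimizing report is pinned at 1/2 no matter what the bettor believes, and "write down your true probability" no longer picks out anything:

```python
# Market: "The last bet on this prediction market will be under 50%."
# The resolution is determined by the final report r, not by the world.

def outcome(r):
    return 1.0 if r < 0.5 else 0.0  # 1.0 means the market resolves YES

def brier_loss(r):
    return (r - outcome(r)) ** 2    # lower is better

reports = [i / 1000 for i in range(1001)]
best = min(reports, key=brier_loss)
print(best, brier_loss(best))  # 0.5 0.25 -- optimal regardless of any belief
```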

Replies from: strangepoop, cousin_it
comment by a gently pricked vein (strangepoop) · 2018-07-13T11:54:17.605Z · LW(p) · GW(p)
I think the LW zeitgeist doesn't really engage with this.

Really? I feel quite the opposite, unless you're saying we could do still more. I think LW is actually one of the few communities that take this sort of non-dualism/naturalism in arriving at a probabilistic judgement (and all its meta levels) seriously. We've been repeatedly exposed to the fact that Newcomblike problems [LW · GW] are [LW · GW] everywhere [LW · GW], starting a long time ago, and then relatively recently with Simler's wonderful post on crony beliefs (and now his even more delightful book with Hanson, of course).

ETA: I'm missing quite a few posts that were even older (Wei Dai's? Drescher's? yvain had something too IIRC); it'd be nice if someone else who does remember posted them here.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-07-14T14:45:30.604Z · LW(p) · GW(p)

I think your links are a good indication of the way that LW has engaged with a relatively narrow aspect of this, and in a somewhat biased manner. "Crony beliefs" is a good example - starting right from the title, it sets up a dichotomy of "merit beliefs" versus "crony beliefs", with the not-particularly-subtle connotation of "merit beliefs are this great thing that models reality and in an ideal world we'd only have merit beliefs, but in the real world, we also have to deal with the fact that it's useful to have crony beliefs for the purpose of manipulating others and securing social alliances".

Which... yes, that is one aspect of this. But the more general point of the original post is that there are a wide variety of beliefs which are underdetermined by external reality. It's not that you intentionally have fake beliefs which are out of alignment with the world; it's that some beliefs are to some extent self-fulfilling, and their truth value just is whatever you decide to believe. If your deep-level alief is that "I am confident", then you will be confident; if your deep-level alief is that "I am unconfident", then you will be unconfident.

Another way of putting it: what is the truth value of the belief "I will go to the beach this evening"? Well, if I go to the beach this evening, then it is true; if I don't go to the beach this evening, it's false. Its truth is determined by the actions of the agent, rather than the environment.

The predictive processing thing could be said to take this even further: it hypothesizes that all action is caused by these kinds of self-fulfilling beliefs; on some level our brain believes that we'll take an action, and then it ends up fulfilling that prediction:

About a third of Surfing Uncertainty is on the motor system, it mostly didn’t seem that interesting to me, and I don’t have time to do it justice here (I might make another post on one especially interesting point). But this has been kind of ignored so far. If the brain is mostly just in the business of making predictions, what exactly is the motor system doing?
Based on a bunch of really excellent experiments that I don’t have time to describe here, Clark concludes: it’s predicting action, which causes the action to happen.
This part is almost funny. Remember, the brain really hates prediction error and does its best to minimize it. With failed predictions about eg vision, there’s not much you can do except change your models and try to predict better next time. But with predictions about proprioceptive sense data (ie your sense of where your joints are), there’s an easy way to resolve prediction error: just move your joints so they match the prediction. So (and I’m asserting this, but see Chapters 4 and 5 of the book to hear the scientific case for this position) if you want to lift your arm, your brain just predicts really really strongly that your arm has been lifted, and then lets the lower levels’ drive to minimize prediction error do the rest.
Under this model, the “prediction” of a movement isn’t just the idle thought that a movement might occur, it’s the actual motor program. This gets unpacked at all the various layers – joint sense, proprioception, the exact tension level of various muscles – and finally ends up in a particular fluid movement.
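A caricature of that last point, with made-up numbers and no claim to match the book's actual formalism: hold the proprioceptive "prediction" fixed and let the only thing free to change be the arm, which moves to shrink the prediction error.

```python
# Action as prediction-error minimization, in caricature:
# the predicted arm angle is clamped, and the arm moves until it matches.

predicted_angle = 90.0   # "my arm has been lifted" -- held fixed
actual_angle = 0.0       # the arm starts at rest
step_size = 0.2

for _ in range(30):
    error = predicted_angle - actual_angle
    actual_angle += step_size * error  # adjust the joint, not the model

print(round(actual_angle, 1))  # ~90.0: the arm ends up where it was "predicted" to be
```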

Now, I've mostly been talking about cases where the truth of a belief is determined purely by our choices. But as the OP suggests, there are often complex interplays between the agent and the environment. For instance, if you believe that "I will be admitted to Example University if I study hard enough to get in", then that belief may become self-fulfilling in that it causes you to study hard enough to get in. But at the same time, you may simply not be good enough, so the truth value of this belief is determined both by whether you believe in it, and by whether you actually are good enough.
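A deliberately crude way to see the interplay (a toy model added here for illustration, not something from the original post): suppose your effort tracks your confidence b exactly, and admission requires both that effort and an ability factor a that you do not control, so the actual chance of getting in is a·b. A calibrated belief then has to satisfy a fixed-point condition:

```latex
b = P(\text{admitted}) = a \cdot b
\quad\Longrightarrow\quad
b\,(1 - a) = 0
```

For any a < 1 the only calibrated belief is b = 0, even though believing b close to 1 would give you roughly an a chance of getting in; the belief that predicts best and the belief that acts best come apart.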

With regard to the thing about confidence; people usually aren't just confident in general, they are confident about something in particular. I'm much more confident in my ability to write on a keyboard, than I am in my ability to do brain surgery. You could say that my confidence in my ability to do X, is the probability that I assign to doing X successfully.

And it's often important that I'm not overconfident. Yes, if I'm really confident in my ability to do something, then other people will give me more respect. But the reason why they do that is that confidence is actually a bit of a costly signal. So far I've said that an agent's decisions determine the truth-values of many beliefs, but it's also the other way around: the agent's beliefs determine the agent's actions. If I believe myself to be really good at brain surgery even when I'm not, I may be able to talk myself into a situation where I'm allowed to do brain surgery, but the result will be a dead patient. And it's not going to take many dead patients before people realize I'm a fraud and put me in prison. But if I'm completely deluded and firmly believe myself to be a master brain surgeon, that belief will cause me to continue carrying out brain surgeries, even when it would be better from a self-interested perspective to stop doing that.

So there's a complicated thing where beliefs have several effects: they determine your predictions about the world and they determine your future actions and they determine the subconscious signals that you send to others. You have an interest in being overconfident for the sake of persuading others, and for the sake of getting yourself to do things, but also in being just-appropriately-confident for the sake of being able to predict the consequences of your own future actions better.

An important framing here is "your beliefs determine your actions, so how do you get the beliefs which cause the best actions". There have been [LW · GW] some [LW · GW] posts [LW · GW] offering tools for belief-modification which had the goal of causing change, but this mostly hasn't been stated explicitly, and even some of the posts which have offered tools for this (e.g. Nate's "Dark Arts of Rationality") have still talked about it being a "Dark Art" thing which is kinda dirty to engage in. Which I think is dangerous, because getting an epistemically correct map is only half of what you need for success, with the "have beliefs which cause you to take the actions that you need to succeed" being the other half that's just as important to get right. (Except, as noted, they are not two independent things but intertwined with each other in complicated ways.)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-07-15T20:41:43.874Z · LW(p) · GW(p)

Yes, this.

There's a thing MIRI people talk about, about the distinction between "cartesian" and "naturalized" agents: a cartesian agent is something like AIXI that has a "cartesian boundary" separating itself from the environment, so it can try to have accurate beliefs about the environment, then try to take the best actions on the environment given those beliefs. But a naturalized agent, which is what we actually are and what any AI we build actually is, is part of the environment; there is no cartesian boundary. Among other things this means that the environment is too big to fully model, and it's much less clear what it even means for the agent to contemplate taking different actions. Scott Garrabrant has said that he does not understand what naturalized agency means; among other things this means we don't have a toy model that deserves to be called "naturalized AIXI."

There's a way in which I think the LW zeitgeist treats humans as cartesian agents, and I think fully internalizing that you're a naturalized agent looks very different, although my concepts and words around this are still relatively nebulous.

comment by cousin_it · 2018-07-13T12:49:53.879Z · LW(p) · GW(p)

I'm confused by this. Sure, your body has involuntary mechanisms that truthfully signal your beliefs to others. But the only reason these mechanisms could exist is to help your genes! Yours specifically! That means you shouldn't try to override them when your interests coincide with those of your genes. In particular, you shouldn't force yourself to believe that you're attractive. Am I missing something?

Replies from: Qiaochu_Yuan, rossry, strangepoop
comment by Qiaochu_Yuan · 2018-07-15T20:43:29.559Z · LW(p) · GW(p)
In particular, you shouldn't force yourself to believe that you're attractive.

And I never said this.

But there's a thing that can happen when someone else gaslights you into believing that you're unattractive, which makes it true, and you might be interested in undoing that damage, for example.

comment by rossry · 2018-07-13T14:32:59.138Z · LW(p) · GW(p)

It seems pretty easy for such mechanisms to be adapted for maximizing reproduction in some ancestral environment but maladapted for maximizing your preferences in the modern environment.

I think I agree that your point is generally under-considered, especially by the sort of people who compulsively tear down Chesterton's fences.

comment by a gently pricked vein (strangepoop) · 2018-07-13T17:58:29.799Z · LW(p) · GW(p)

What rossry said, but also: why do you expect to be "winning" all the arms races here? Genes in other people may have led to the development of meme-hacks that, unbeknownst to you, are actually giving someone else an edge in a zero-sum game.

In particular, they might call you fat or stupid or incompetent and you might end up believing it.

comment by ksvanhorn · 2018-07-24T04:17:43.400Z · LW(p) · GW(p)

I'm not trying to be mean here, but this post is completely wrong at all levels. No, Bayesian probability is not just for things that are space-like separated. None of the theorems from which it is derived even refer to time.

So, you know the things in your past, so there is no need for probability there.

This simply is not true. There would be no need of detectives or historical researchers if it were true.

If you partially observe a fact, then I want to say you can decompose that fact into the part that you observed and the part that you didn't, and say that the part you observed is in your past, while the part you didn't observe is space-like separated from you.

You can say it, but it's not even approximately true. If someone flips a coin in front of me but covers it up just before it hits the table, I observe that a coin flip has occurred, but not whether it was heads or tails -- and that second event is definitely within my past light-cone.

You may have cached that you should use Bayesian probability to deal with things you are uncertain about.

No, I cached nothing. I first spent a considerable amount of time understanding Cox's Theorem in detail, which derives probability theory as the uniquely determined extension of classical propositional logic to a logic that handles uncertainty. There is some controversy about some of its assumptions, so I later proved and published my own theorem that arrives at the same conclusion (and more) using purely logical assumptions/requirements, all of the form, "our extended logic should retain this existing property of classical propositional logic."
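For readers who have not seen it, the rough shape of Cox's result (stated loosely here; the regularity assumptions glossed over are exactly where the controversy lives) is that any real-valued plausibility assignment that respects the structure of propositional logic can be monotonically rescaled into a function obeying the ordinary rules of probability:

```latex
P(A \wedge B \mid C) = P(A \mid B \wedge C)\, P(B \mid C),
\qquad
P(\neg A \mid C) = 1 - P(A \mid C)
```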

The problem is that the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them!

1) It's not clear this is really true. It seems to me that any situation that is affected by an agent's beliefs can be handled within Bayesian probability theory by modeling the agent.

2) So what?

Therefore, our reasons for liking Bayesian probability do not apply to our uncertainty about the things that are in our future!

This is a complete non sequitur. Even if I grant your premise, most things in my future are unaffected by my beliefs. The date on which the Sun will expand and engulf the Earth is in no way affected by any of my beliefs. Whether you will get lucky with that woman at the bar next Friday is in no way affected by any of my beliefs. And so on.

Replies from: Scott Garrabrant, dxu
comment by Scott Garrabrant · 2018-08-02T22:00:23.389Z · LW(p) · GW(p)

I think you are correct that I cannot cleanly separate the things in my past that I know from the things in my past that I do not know. For example, if a probability is chosen uniformly at random from the unit interval, a coin with that probability is flipped a large number of times, and I see some of the results, then I do not know the true probability, but the coin flips that I see really should come after the thing that determines the probability in my Bayes net.
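That example has a standard conjugate form, and a minimal sketch (with made-up counts) shows the point: the flips we see update our beliefs about the bias, but the bias itself, which sits before the flips in the network, stays unknown.

```python
# Latent bias ~ Uniform(0, 1) = Beta(1, 1); we then observe some of the flips.
# The posterior over the bias is Beta(1 + heads, 1 + tails).

heads, tails = 7, 3                      # hypothetical observed flips
alpha, beta = 1 + heads, 1 + tails

posterior_mean = alpha / (alpha + beta)  # expected bias given what we saw
print(round(posterior_mean, 3))          # 0.667 -- a belief about a past, unobserved node
```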

comment by dxu · 2018-07-24T07:48:08.199Z · LW(p) · GW(p)

[META] As a general heuristic, when you encounter a post from someone otherwise reputable that seems completely nonsensical to you, it may be worth attempting to find some reframing of it that causes it to make sense--or at the very least, make more sense than before--instead of addressing your remarks to the current (nonsensical-seeming) interpretation. The probability that the writer of the post in question managed to completely lose their mind while writing said post is significantly lower than both the probability that you have misinterpreted what they are saying, and the probability that they are saying something non-obvious which requires interpretive effort to be understood. To maximize your chances of getting something useful out of the post, therefore, it is advisable to condition on the possibility that the post is not saying something trivially incorrect, and see where that leads you. This tends to be how mutual understanding is built, and is a good model for how charitable communication works. Your comment, to say the least, was neither.

Replies from: ksvanhorn
comment by ksvanhorn · 2018-07-25T17:23:38.401Z · LW(p) · GW(p)

This is the first thing I've read from Scott Garrabrant, so "otherwise reputable" doesn't apply here. And I have frequently seen things written on LessWrong that display pretty significant misunderstandings of the philosophical basis of Bayesian probability, so that gives me a high prior for expecting more of the same.

comment by Stuart_Armstrong · 2018-07-16T10:35:45.267Z · LW(p) · GW(p)

The "nodes in the future" part of this, is in part the point I keep trying to make with the rigging/bias and influence posts https://www.lesswrong.com/posts/b8HauRWrjBdnKEwM5/rigging-is-a-form-of-wireheading

comment by romeostevensit · 2018-07-12T18:06:46.053Z · LW(p) · GW(p)

Non-central nit: "So, you know the things in your past, so there is no need for probability there." Doesn't seem true.

Replies from: strangepoop
comment by a gently pricked vein (strangepoop) · 2018-07-13T12:08:32.568Z · LW(p) · GW(p)

I suppose you mean the fallibility of memory. I think Garrabrant meant it tautologically though (ie, as the definition of "past").

Replies from: rossry
comment by rossry · 2018-07-13T14:36:54.508Z · LW(p) · GW(p)

Pretty confident they meant it that way:

I am not thinking about physics-time, I am thinking about logical-time. If something is in your past, but has no effect on what algorithm you are running on what observations you get, then it might as well be considered as space-like separated from you. If you compute how everything in the universe evaluates, the space-like separated things are the things that can be evaluated either before or after you, since their output does not change yours or vice-versa. If you partially observe a fact, then I want to say you can decompose that fact into the part that you observed and the part that you didn't, and say that the part you observed is in your past, while the part you didn't observe is space-like separated from you.

comment by ozziegooen · 2018-07-16T17:50:48.514Z · LW(p) · GW(p)

One way I might begin to write a similar concept formally would be something like:

An agent's probability on a topic is "P(V|C)", where V is some proposition and C represents all conditionals.

There are cases where one of these conditionals will include a statement such as "P(V|C) = f(n)", whereby one must condition on the output of one's total estimate. If this "recursive" conditional influences P(V|C), then the probabilistic assessment is not "space-like separated."
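In a simple case where the feedback is a continuous function f, the "recursive" condition reads as a fixed-point equation p = f(p), which iteration can solve (a sketch with a made-up f; whether realistic cases are this well-behaved is exactly what makes the self-referential sentences in the post hard):

```python
# Toy recursive conditional: the estimate itself feeds back into the estimate.
# f is an arbitrary, made-up feedback function chosen to have a fixed point.

def f(p):
    return 0.2 + 0.5 * p  # confidence partially reinforces itself

p = 0.5
for _ in range(100):
    p = f(p)

print(round(p, 4))  # 0.4 -- self-consistent, since 0.4 = 0.2 + 0.5 * 0.4
```

The sentence in the original post corresponds to a discontinuous f with no fixed point at all, which is why no amount of iterating settles it.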

comment by ozziegooen · 2018-07-16T17:50:04.036Z · LW(p) · GW(p)

I generally agree with the main message, and am happy to see it written up, but I see this less as a failure of Bayesian theory than as a rejection of a common misuse of it. I believe I've heard a similar argument a few times before and have found it a bit frustrating for this reason. (Of course, I could be factually wrong in my understanding.)

If one were to apply something other than a direct Bayesian update, as could make sense in a more complicated setting, they may as well do so in a process which includes other kinds of Bayesian updates. And the decision process that they use to determine the method of updating in these circumstances may well involve Bayesian updates.

Replies from: ozziegooen
comment by ozziegooen · 2018-07-16T17:51:34.983Z · LW(p) · GW(p)

I'm not sure how to solve such an equation, though doing it for simple cases seems simple enough. I'll admit I don't understand logical induction nearly as well as I would like, and I mean to fix that some time.

comment by Peter Gerdes (peter-gerdes) · 2018-07-15T03:36:35.613Z · LW(p) · GW(p)

Of course, no actual individual or program is a pure Bayesian. Pure Bayesian updating presumes logical omniscience, after all. Rather, when we talk about Bayesian reasoning we idealize individuals as abstract agents whose choices (potentially none) have a certain probabilistic effect on the world, i.e., basically we idealize the situation as a one-person game.

You basically raise the question of what happens in Newcomb-like cases where we allow the agent's internal deliberative state to affect outcomes independent of explicit choices made. But the whole model breaks down the moment you do this. It no longer even makes sense to idealize a human as this kind of agent and ask what should be done, because the moment you bring the agent's internal deliberative state into play it no longer makes sense to idealize the situation as one in which there is a choice to be made. At that point you might as well just shrug and say 'you'll choose whatever the laws of physics say you'll choose.'

Now, one can work around this problem by instead posing a question for a different agent who might idealize a past self: e.g., if I imagine I have a free choice about which belief to commit to having in these sorts of situations, which belief/belief function should I presume?

As an aside, I would argue that, while the math is perfectly valid, there is something wrong in advocating for timeless decision theory or any other particular decision theory as the correct way to make choices in these Newcomb-type scenarios. The model of choice-making doesn't even really make sense in such situations, so any argument over which is the true/correct decision theory must ultimately be a pragmatic one (when we suggest actual people use X versus Y, they do better with X), but that's never the sense of correctness that is being claimed.

comment by Pattern · 2018-07-14T01:32:42.116Z · LW(p) · GW(p)

What makes statements you control important?

"You assign probability less than 1/2 to this sentence," but perhaps you wish you didn't.

Why would you wish to assign a different probability to this statement?

Replies from: Walker Vargas
comment by Walker Vargas · 2019-12-14T03:31:48.345Z · LW(p) · GW(p)

It's a variant of the liar's paradox. If you say the statement is unlikely, you're agreeing with what it says. If you agree with it, you clearly don't think it's unlikely, so it's wrong.

comment by kewoc · 2018-07-11T12:20:02.273Z · LW(p) · GW(p)