Epistemic vs. Instrumental Rationality: Approximations
post by Peter_de_Blanc · 2009-04-28T03:12:55.675Z · LW · GW · Legacy · 29 comments
What is the probability that my apartment will be struck by a meteorite tomorrow? Based on the information I have, I might say something like 10^-18. Now suppose I wanted to approximate that probability with a different number. Which is a better approximation: 0 or 1/2?
The answer depends on what we mean by "better," and this is a situation where epistemic (truthseeking) and instrumental (useful) rationality will disagree.
As an epistemic rationalist, I would say that 1/2 is a better approximation than 0, because the Kullback-Leibler divergence is (about) 1 bit for the former, and infinity for the latter. This means that my expected Bayes Score drops by one bit if I use 1/2 instead of 10^-18, but it drops to minus infinity if I use 0, and any probability conditional on a meteorite striking my apartment would be undefined; if a meteorite did indeed strike, I would instantly fall to the lowest layer of Bayesian hell. This is too horrible a fate to imagine, so I would have to go with a probability of 1/2.
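Here is a minimal sketch (in Python) that checks those two numbers for the two-outcome case {meteorite, no meteorite}; the helper name kl_bits is just for illustration:

```python
import math

def kl_bits(p, q):
    """KL divergence D(P || Q) in bits for two-outcome distributions Bernoulli(p), Bernoulli(q)."""
    total = 0.0
    for pi, qi in [(p, q), (1 - p, 1 - q)]:
        if pi == 0:
            continue            # by convention, 0 * log(0/q) = 0
        if qi == 0:
            return math.inf     # an outcome with positive probability was assigned q = 0
        total += pi * math.log2(pi / qi)
    return total

p = 1e-18
print(kl_bits(p, 0.5))  # ~1.0 bit
print(kl_bits(p, 0.0))  # inf
```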
As an instrumental rationalist, I would say that 0 is a better approximation than 1/2. Even if a meteorite does strike my apartment, I will suffer only a finite amount of harm. If I'm still alive, I won't lose all of my powers as a predictor, even if I assigned a probability of 0; I will simply rationalize some other explanation for the destruction of my apartment. Assigning a probability of 1/2 would force me to actually plan for the meteorite strike, perhaps by moving all of my stuff out of the apartment. This is a totally unreasonable price to pay, so I would have to go with a probability of 0.
I hope this can be a simple and uncontroversial example of the difference between epistemic and instrumental rationality. While the normative theory of probabilities is the same for any rationalist, the sorts of approximations a bounded rationalist would prefer can differ very much.
29 comments
Comments sorted by top scores.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-28T07:25:18.870Z · LW(p) · GW(p)
While KL divergence is a very natural measure of the "goodness of approximation" of a probability distribution, which happens not to talk about the utility function, there is still a strong sense in which only an instrumental rationalist can speak of a "better approximation", because only an instrumental rationalist can say the word "better".
KL divergence is an attempt to use a default sort of metric of goodness of approximation, without talking about the utility function, or while knowing as little as possible about the utility function; but in fact, in the absence of a utility function, you actually just can't say the word "better", period.
Replies from: gjm, Peter_de_Blanc, conchis, conchis
↑ comment by gjm · 2009-04-28T09:02:29.455Z · LW(p) · GW(p)
To the extent that this is true, perhaps the very notion of an epistemic rationalist (perhaps also of epistemic rationality) is incoherent. ("Epistemic rationality means acting so as to maximize one's accuracy." "Ah, but hidden in that word accuracy is some sort of evaluation, which you aren't allowed to have.") But it sure seems like a useful notion.
I propose: that there is at least one useful notion of epistemic rationality (in fact, one for each viable notion of what counts as better accuracy); that, since real people have utility functions, calling a real person an epistemic rationalist is really shorthand for "has a utility function that highly values accuracy-in-some-particular-sense"; that one can usefully talk about epistemic rationality in general, meaning something like "things that are true about anyone who's an epistemic rationalist in any of that term's many specific senses"; and that it's at least a defensible claim that something enough like K-L divergence to make Peter's argument go through is likely to be part of any viable notion of accuracy.
↑ comment by Peter_de_Blanc · 2009-04-28T12:46:47.062Z · LW(p) · GW(p)
If epistemic rationalists can't speak of a "better approximation," then how can an epistemic rationalist exist in a universe with finite computational resources?
Replies from: Eliezer_Yudkowsky, army1987
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-06-21T16:20:15.807Z · LW(p) · GW(p)
Pure epistemic rationalists with no utility function? Well, they can't, really. That's part of the problem with the Oracle AI scenario.
↑ comment by A1987dM (army1987) · 2014-01-24T20:35:10.808Z · LW(p) · GW(p)
They can speak of a “closer approximation” instead. (But that still needs a metric.)
↑ comment by conchis · 2009-04-28T09:14:57.738Z · LW(p) · GW(p)
This is basically right, but I guess I think of it in slightly different terms. The KL divergence embodies a particular, implicit utility function, which just happens to be wrong lots of the time. So it can make sense to speak of "better_KL"; it's just not necessarily something that's very useful.
Note also that alternative divergence measures, embodying different implicit utility functions, could give different answers. For example, Jensen-Shannon divergence would agree with instrumental rationality here, no? (Though you could obviously construct examples where it too would diverge from our actual utility functions.)
↑ comment by conchis · 2009-04-28T09:04:05.292Z · LW(p) · GW(p)
I basically agree with this, although I guess I'd always thought of it in terms of the KL distance incorporating a particular, implicit utility function that happens to be wrong in many cases. It can speak of "better_KL", but only according to a (sometimes) stupid utility function.
The failure of the KL divergence to incorporate an adequate notion of "betterness" is also demonstrated by the fact that you'd get a different answer if you used an alternative divergence measure. Jensen-Shannon divergence, for example, would give the same answer as instrumental rationality in this example, no? (Though you could obviously construct different examples where it too would diverge from instrumental rationality.)
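For what it's worth, here's a rough numerical check (the Bernoulli helpers are my own shorthand):

```python
import math

def kl_bits(p, q):
    """KL divergence D(P || Q) in bits for Bernoulli(p) vs Bernoulli(q)."""
    total = 0.0
    for pi, qi in [(p, q), (1 - p, 1 - q)]:
        if pi > 0:
            if qi == 0:
                return math.inf
            total += pi * math.log2(pi / qi)
    return total

def js_bits(p, q):
    """Jensen-Shannon divergence in bits: average KL to the mixture distribution."""
    m = (p + q) / 2
    return 0.5 * kl_bits(p, m) + 0.5 * kl_bits(q, m)

p = 1e-18
print(js_bits(p, 0.5))  # ~0.31 bits
print(js_bits(p, 0.0))  # ~5e-19 bits (far smaller, so 0 is the "closer" approximation under JS)
```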
comment by Unnamed · 2009-04-29T01:55:10.983Z · LW(p) · GW(p)
This OB post covered similar ground. What I took away from that post was that log odds are the natural units for converting evidence into beliefs, and probabilities are the natural units for converting beliefs into actions.
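A small illustration of that takeaway, with made-up numbers: evidence combines by simple addition in log-odds, but the action threshold needs the probability back.

```python
import math

def log_odds(p):
    return math.log2(p / (1 - p))

def prob(lo):
    return 1 / (1 + 2 ** -lo)

prior = log_odds(1e-6)             # prior belief, expressed in log-odds (bits)
evidence = [3.0, 5.0, -1.0]        # log likelihood ratios of three observations, in bits
posterior = prior + sum(evidence)  # updating on evidence is just addition

p = prob(posterior)                # convert back to a probability...
harm, cost_of_precaution = 1e6, 10.0
act = p * harm > cost_of_precaution   # ...because the decision needs p itself
print(round(posterior, 2), p, act)
```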
comment by smoofra · 2009-05-01T05:49:54.037Z · LW(p) · GW(p)
As an instrumental rationalist, I would say that 0 is a better approximation than 1/2 ....
How is this train of thought "instrumental"? You aren't making any choices or decisions outside of your own brain.
To make it a real instrumental example, consider whether or not you should go buy a meteorite shield. Let's say the shield costs S, a meteorite strike costs you M, and the true probability of the strike is p. So buying the shield is best if pM > S.
Now if you go with 0, you'll never buy the shield, so if pM > S you have an expected loss of (pM - S) due to your approximation.
If you go with 1/2, then you'll buy the shield if M/2 > S. If M/2 > S and pM <= S, then you bought the shield when you shouldn't have, and you lose an expected (S - pM).
So you see, it all depends on how big M is compared to S:
M < S/p : 0 is the better instrumental approximation
M > S/p : 1/2 is better
In other words, if the potential harm (or payoff) is small compared to the cost of the shield divided by the probability, round to 0. Otherwise round to 1/2.
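Numerically (with made-up values for S and M, and S/p as the threshold):

```python
def expected_regret(q, p, M, S):
    """Expected loss from deciding with approximation q when the true probability is p."""
    buy_q = q * M > S    # decision made using the approximation
    buy_p = p * M > S    # decision the true probability recommends
    if buy_q == buy_p:
        return 0.0
    return (p * M - S) if buy_p else (S - p * M)

p, S = 1e-18, 1e3                  # so the threshold S/p is 1e21
for M in (1e12, 1e22):             # one value below the threshold, one above
    print(M, expected_regret(0.0, p, M, S), expected_regret(0.5, p, M, S))
# M = 1e12: rounding to 0 costs nothing, rounding to 1/2 costs about S
# M = 1e22: rounding to 0 costs pM - S = 9000, rounding to 1/2 costs nothing
```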
comment by James_Miller · 2009-04-28T11:56:58.080Z · LW(p) · GW(p)
Part of your post assumes a contradiction. If forced to choose between 1/2 and zero then zero no longer means can't possibly happen and 1/2 no longer means will happen 50% of the time.
The only way your analysis works is if you are forced to choose between zero and 1/2 knowing that in the future you will forget that your choices were limited to zero and 1/2.
Replies from: Peter_de_Blanc
↑ comment by Peter_de_Blanc · 2009-04-28T12:43:42.612Z · LW(p) · GW(p)
When I'm choosing between approximations, I haven't actually started using the approximation yet. I'm predicting, based on the full knowledge I have now, the cost of replacing that full knowledge with an approximation.
So to calculate the expected utility of changing my beliefs (to the approximation), I use the approximation to calculate my hypothetical actions, but I use my current beliefs as probabilities for the expected utility calculation.
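In code, that procedure might look something like this (a rough sketch; the names and the meteorite payoffs are made up, and it's essentially a general form of the calculation in smoofra's comment above):

```python
def value_of_adopting(q, p_true, actions, utility):
    """Score an approximate belief q: choose the action using q, evaluate it using p_true."""
    def eu(a, p):
        return p * utility(a, True) + (1 - p) * utility(a, False)
    chosen = max(actions, key=lambda a: eu(a, q))   # hypothetical action under the approximation
    return eu(chosen, p_true)                       # expected utility under current beliefs

def utility(action, meteorite):
    if action == "prepare":
        return -1e3                       # cost of moving everything out
    return -1e22 if meteorite else 0.0    # cost of being caught unprepared

for q in (0.0, 0.5):
    print(q, value_of_adopting(q, 1e-18, ["prepare", "ignore"], utility))
```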
Replies from: James_Miller, Mark_Neznansky
↑ comment by James_Miller · 2009-04-28T14:02:39.240Z · LW(p) · GW(p)
So you are assuming that in the future you will be forced to act on the belief that the probability can't be something other than 0 or 1/2 even though in the future you will know that the probability will almost certainly be something other than 0 or 1/2.
But isn't this the same as assuming that in the future you will forget that your choices had been limited to zero and 1/2?
Replies from: Voltairina
↑ comment by Voltairina · 2013-06-19T18:20:54.676Z · LW(p) · GW(p)
Hrm, I think you might be ignoring the cost of actually doing the calculations, unless I'm missing something. The value of simplifying assumptions comes from how much easier it makes a situation to model. I guess the question would be, is the effort saved in modeling this thing with an approximation rather than exact figures worth the risks of modeling this thing with an approximation rather than exact figures? Especially if you have to do many models like this, or model a lot of other factors as well. Such as trying to sort out what are the best ways to spend your time overall, including possibly meteorite preparations.
↑ comment by Mark_Neznansky · 2009-04-28T14:47:27.773Z · LW(p) · GW(p)
It seems to me you use the wrong wording. In contrast to the epistemic rationalist, the instrumental rationalist does not "gain" any "utility" from changing his beliefs; he gains utility from changing his actions. Since he can either prepare or not prepare for a meteoritic catastrophe, and not "half prepare", I think the numbers you should choose are 0 and 1, not 0 and 0.5. I'm not entirely sure what different numbers it will yield, but I think it's worth mentioning.
Replies from: Mulciber
comment by Wei Dai (Wei_Dai) · 2009-04-29T03:10:30.389Z · LW(p) · GW(p)
This is a bit tangential, but perhaps a bounded rationalist should represent his beliefs by a family of probability functions, rather than by an approximate probability function. When he needs to make a decision, he can compute upper and lower bounds on the expected utilities of each choice, and then either make the decision based on the beliefs he has, or decide to seek out or recall further information if the upper and lower expected utilities point to different choices, and the bounds are too far apart compared to the cost of getting more information.
I found one decision theory that uses families of probability functions like this (page 35 of http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.1906), although the motivation is different. I wonder if such decision systems have been considered for the purpose of handling bounded rationality.
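Concretely, such a decision rule might look something like the following rough sketch; the names, numbers, and the tie-breaking rule are my own guesses, not anything from the linked paper:

```python
def decide(p_low, p_high, utility, actions, info_cost):
    """Beliefs are a family: every p in [p_low, p_high]. utility[a] = (payoff if event, payoff if not)."""
    def eu(a, p):
        win, lose = utility[a]
        return p * win + (1 - p) * lose

    # Lower and upper bounds on each action's expected utility over the family
    # (for two outcomes and utilities linear in p, the extremes occur at the endpoints).
    bounds = {a: (min(eu(a, p_low), eu(a, p_high)),
                  max(eu(a, p_low), eu(a, p_high))) for a in actions}

    best_by_lower = max(actions, key=lambda a: bounds[a][0])
    best_by_upper = max(actions, key=lambda a: bounds[a][1])
    if best_by_lower == best_by_upper:
        return best_by_lower                  # every belief in the family agrees
    spread = max(hi for _, hi in bounds.values()) - min(lo for lo, _ in bounds.values())
    if spread > info_cost:
        return "seek more information"        # the ambiguity is worth resolving
    return best_by_lower                      # otherwise settle it (here: maximin, one arbitrary choice)

utility = {"buy shield": (-1e3, -1e3), "do nothing": (-1e22, 0.0)}
print(decide(1e-20, 1e-16, utility, list(utility), info_cost=100.0))
```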
comment by Mark_Neznansky · 2009-04-28T14:27:15.012Z · LW(p) · GW(p)
I admit that I've learned about the KL divergence just now and through the wiki link, and that my math in general is not so deep. But as it's not about the calculation but about the reasoning behind the calculation, I suppose I can have my say:
The wiki-entry mentions that
Typically P represents the "true" distribution of data, observations, or a precise calculated theoretical distribution. The measure Q typically represents a theory, model, description, or approximation of P.
So P here is 10^-18 and Q is either 0 or 0.5.
What your epistemic rationalist has done seems like falling prey to the bias of anchoring and adjusting. The use of mathematical equations just makes the anchoring mistake look more formal; it's not less wrong in any way. So while the instrumental rationalist might have a reason to choose the arbitrary figure of 1/2 (it makes his decisions simpler, for example), the epistemic rationalist does not. If the epistemic rationalist is shown the two figures of 0 and 1/2 and is asked which approximation is "better", he would probably say 0, and that's for several reasons.

First of all, if he is an epistemic rationalist and thus truth-seeking, he wouldn't use the KL equation at all. The KL divergence takes something accurate (or true), P, and produces something less accurate (or less true), and that's exactly against what he is seeking: more accurate and true results. But you tell me he has to choose between "0" and "1/2". Well, if he has to choose between one of these numbers, he will still not choose to use the KL equation. The wiki mentions that the Q in the equation typically stands for "... a theory, model, description, or approximation of P", while the number "1/2" in your example is none of these but an arbitrary number; this equation, then, does not fit the situation. He will use a different mathematical method, say, subtraction, and see which difference has the smaller absolute value, in which case it will be 0's.

Also, since 1/2 and 0 are arbitrary numbers, an epistemic rationalist would know better than to use either of them in any equation, since it will produce a result that is exactly as accurate as if he used any other two arbitrary numbers. He would know that he should do his own calculations, ignoring the numbers 0 and 1/2, and then compare his result to the numbers he is "offered" (0 and 1/2) and choose the one closest to his own calculation. Since he knows that the "true" probability is 10^-18, he will choose the closest number to his result, which seems to be 0.
Of course, everything that I said about "1/2" above holds true about "0".
(I'm sorry in advance if my mathematical explanations are unclear or clumsy. If I explain arguments through math badly, then I explain math arguments in English much worse, as I studied mathematics in a different language.)
comment by Gordon Seidoh Worley (gworley) · 2009-04-28T13:39:00.770Z · LW(p) · GW(p)
Reading the comments so far, I think Peter wasn't as clear as he had hoped (or this is all jumping to disagree too quickly). As I see it, the point is that an epistemic rationalist, a completely abstract mathematical construct to the best of our knowledge of the physical world, would make a choice that is at odds with an instrumental rationalist, i.e. a real person who's trying to win in real life. Having bounded resources, there is some threshold below which a physically existing rationalist will treat probabilities as equivalent to zero, i.e. will choose not to expend any resources on preparing for such a situation.
A meteorite makes a bad example because it's easy to imagine it happening. Suppose we consider the probability of a three layer chocolate cake spontaneously appearing in the passenger seat of our car during the drive home this afternoon. Yes, the probability must be nonzero, but it's so small as to not be worth considering. All those events with probabilities so small they aren't worth any resources are the ones you never even think about, so they are equivalent to having a probability of zero for the bounded rationalist.
Replies from: gjm
↑ comment by gjm · 2009-04-28T14:58:13.954Z · LW(p) · GW(p)
All those events with probabilities so small they aren't worth the resources are the ones you never even think about
... Ah, if only that were so.
(But I take it you mean "the ones you never even think about if you are an optimized bounded rationalist", in which case I think you're right.)
comment by AndySimpson · 2009-04-28T08:50:20.345Z · LW(p) · GW(p)
So what lesson does a rationalist draw from this? What is best for the Bayesian mathematical model is not best in practice? Conserving information is not always "good"?
Also,
I will simply rationalize some other explanation for the destruction of my apartment.
This seems distinctly contrary to what an instrumental rationalist would do. It seems more likely he'd say "I was wrong, there was actually an infinitesimal probability of a meteorite strike that I previously ignored because of incomplete information/negligence/a rounding error."
comment by PhilGoetz · 2009-04-29T23:56:42.830Z · LW(p) · GW(p)
I say this with trepidation, since Peter and Eliezer have both already read this, but...
As an epistemic rationalist, I would say that 1/2 is a better approximation than 0, because the Kullback-Leibler Divergence is (about) 1 bit for the former, and infinity for the latter.
(If the probability distribution peaked at 1/2, it would be not-completely-unreasonable to use a flat distribution, and express a probability as a fixed-point number between 0 and 1. In that case, it would take 60 bits to express 10^-18. With floating point, you'd get a good approximation with 7 bits.)
But you're not really making a fair comparison. You're comparing "probability distribution centered on 1/2" with "0, no probability distribution". If the "centered on 0" choice doesn't get to have a distribution, neither should the "centered on 1/2" choice. Then both give you a divergence of infinity.
The KL-divergence comparison assumes use of a probability distribution. The probability distribution that peaks at zero is going to be able to represent 1E-18 with many fewer bits than the one that peaks at 1/2. So zero wins in both cases, and there is no demonstrated conflict between epistemic and instrumental rationality.
Replies from: Peter_de_Blanc
↑ comment by Peter_de_Blanc · 2009-04-30T12:39:39.917Z · LW(p) · GW(p)
I was talking about a discrete probability distribution over two possible states: {meteorite, no meteorite}. You seem to be talking about something else.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2009-04-30T15:32:27.924Z · LW(p) · GW(p)
Okay. I thought you were talking about real-valued probability distributions from 0 to 1. But I don't know if you can claim to draw significant conclusions about epistemic rationality from using the wrong type of probability distribution.
Replies from: Peter_de_Blanc
↑ comment by Peter_de_Blanc · 2009-04-30T16:11:51.172Z · LW(p) · GW(p)
What do you mean by "the wrong type of probability distribution"?
comment by Cyan · 2009-04-28T16:29:51.084Z · LW(p) · GW(p)
It might clarify things to note the connection between Kullback-Leibler divergence and communication theory. The Kullback-Leibler divergence is the utility function to use when minimizing the expected length of the signal encoding (i.e., recording or communicating) what actually happened. The choice of "1/2" or "0" is equivalent to constraining the agent to choose between using one bit or an infinite number of bits to record/communicate the state of "improbable event did (not) occur".
In short, KL divergence isn't about truth-seeking per se. It's about the resources necessary to encode signals -- definitely an instrumental question.
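To make that concrete, here is a small sketch of the coding view: the expected message length for reporting what happened, if the code is built from the approximation q, is the cross-entropy E_p[-log2 q(x)].

```python
import math

def expected_code_length(p, q):
    """E_p[-log2 q(x)] over the two outcomes {meteorite, no meteorite}."""
    length = 0.0
    for pi, qi in [(p, q), (1 - p, 1 - q)]:
        if pi == 0:
            continue
        if qi == 0:
            return math.inf   # no codeword reserved for an outcome that can actually happen
        length += pi * -math.log2(qi)
    return length

p = 1e-18
print(expected_code_length(p, 0.5))  # 1 bit: you always spend one bit on the report
print(expected_code_length(p, 0.0))  # inf: the meteorite outcome becomes unreportable
print(expected_code_length(p, p))    # ~6e-17 bits: the best achievable (the entropy of p)
```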