Litany of Instrumentarski
post by Shmi (shminux) · 2013-04-09T15:07:10.565Z · LW · GW · Legacy · 45 comments
The Litany of Tarski (formulated by Eliezer, not Tarski) reads:
If the box contains a diamond,
I desire to believe that the box contains a diamond;
If the box does not contain a diamond,
I desire to believe that the box does not contain a diamond;
Let me not become attached to beliefs I may not want.
This works for a physical realist, but I have been feeling uncomfortable with it for some time now. So I have decided to reformulate it in a more instrumental way, replacing existential statements with testable predictions. I had to find a new name for it, so I call it the Litany of Instrumentarski:
If believing that there is a diamond in the box lets me find the diamond in the box,
I desire to believe that there is a diamond in the box;
If believing that there is a diamond in the box leaves me with an empty box,
I desire to believe that there is no diamond in the box;
Let me not become attached to inaccurate beliefs.
Posting it here in the hope that someone else also finds it more palatable and unassuming than straight-up realism.
EDIT: It seems to me that this modification also guides you to straight-up one-boxing on Newcomb's problem, where the original is mired in EDT-vs-CDT issues.
EDIT2: It looks like the above version results in people confusing desiring accurate beliefs with desiring diamonds. It's about accurate accounting, not about the utility of a certain form of crystallized carbon.
Maybe the first line should be modified to something like "If I later find a diamond in the box...". How about the following?
If I will find a diamond in the box,
I desire to believe that I will find a diamond in the box;
If I will find no diamond in the box,
I desire to believe that I will find no diamond in the box;
Let me not become attached to inaccurate beliefs.
For some reason the editor does not let me use the <strike> tag to cross out the previous version; I'm not sure how to work around it.
45 comments
Comments sorted by top scores.
comment by OrphanWilde · 2013-04-10T14:43:16.813Z · LW(p) · GW(p)
If my beliefs change reality, I desire to believe that my beliefs will change reality. If my beliefs have no effect upon reality, I desire to believe my beliefs have no effect upon reality. Let me not become attached to inaccurate beliefs.
Further:
My beliefs exist in reality. If my beliefs, by so existing, change the outcome of reality, I desire to believe that my beliefs change reality. If my beliefs, by so existing, do not change the outcome of reality, I desire to believe that my beliefs are immaterial to reality. Let me not become attached to mind-body dualism.
You don't need to reject realism to reject the idea that beliefs can only be reflections of reality, rather than a causal part of it. The map is part of the territory. How accurately does your map represent your map?
↑ comment by Shmi (shminux) · 2013-04-10T16:47:54.024Z · LW(p) · GW(p)
If my beliefs, by so existing, change the outcome of reality
So, if your writing "here be dragons" on a map results in someone encountering a dragon when traveling to the mapped area (a very popular theme in SF/F), how useful is the concept of reality?
You don't need to reject realism to reject the idea that beliefs can only be reflections of reality, rather than a causal part of it.
You don't need to, no. But the concept of reality is less useful if all you mean by it is future inputs to test the accuracy of your beliefs, as opposed to a largely unchanged territory that you map. If you have to build the proverbial epicycles to keep your belief alive, you might want to consider a simpler model.
The map is part of the territory.
Is this a useful assertion? If so, how?
How accurately does your map represent your map?
Not sure what you mean by this. That beliefs can be nested? Sure. That the term "map" presumes some territory it maps? It sure does, in the realist picture. Hence my preference for the term "model" or even "belief". Of course, a realist can ask something like "but what is your model a model of [in reality]?", which to me is not a useful question if your reality is changing depending on the models.
↑ comment by Document · 2013-08-11T21:26:52.680Z · LW(p) · GW(p)
At the risk of dogpiling:
So, if it is true in reality that your writing "here be dragons" on a map results in someone encountering a dragon when traveling to the mapped area...
↑ comment by Shmi (shminux) · 2013-08-11T22:12:16.283Z · LW(p) · GW(p)
It happens. Probably not with dragons, but with the placebo effect and many other things where the nonlinear second-order effect map->reality is ignored in this simplified map/territory distinction. So why make the distinction?
↑ comment by Document · 2013-08-11T22:21:21.051Z · LW(p) · GW(p)
It is true in reality that it happens...
(Sorry.) To answer your question: for the times when it doesn't happen? I wasn't actually planning to join a debate; you might find it more productive to ask one of the people who gave more in-depth replies.
↑ comment by someonewrongonthenet · 2013-04-11T00:35:17.020Z · LW(p) · GW(p)
how useful is the concept of reality
Reality is a useful concept in all possible universes you might find yourself in!
Real things cause qualia; unreal things do not. No matter what you care about, this distinction will impact it.
May I ask which definition of reality you are currently using?
↑ comment by OrphanWilde · 2013-04-10T17:37:52.049Z · LW(p) · GW(p)
So, if your writing "here be dragons" on a map results in someone encountering a dragon when traveling to the mapped area (a very popular theme in SF/F), how useful is the concept of reality?
The placebo effect came up elsewhere as an example where beliefs alter reality. Similarly, self-fulfilling prophecies need not rely on magic; if I believe I'll fail at a task, then just by holding this belief I very probably alter my odds of completing that task. The modified litany isn't "All beliefs modify reality," but "I should have accurate beliefs about which beliefs have repercussions in reality." Your dragon example is merely a demonstration of a belief which is immaterial to reality, at least for the purposes of the subject of the belief.
I believe this response suffices in answering the rest of your objections, as well.
See http://lesswrong.com/lw/h69/litany_of_instrumentarski/8qht for an example of a pretty common theme in this post. Contrary to the argument presented in the comment [ETA: I misread the comment; this argument isn't actually present. My apologies!], rationality doesn't break down; a specific and faulty idea held by some rationalists breaks down.
comment by fubarobfusco · 2013-04-09T18:32:45.929Z · LW(p) · GW(p)
It seems to me that the number of false things¹ that it is actually instrumentally useful to believe is much less than the number of false things that someone else would like me to think that it is instrumentally useful to believe in order that they may take advantage of my belief in false things.
In other words, for any belief X, if X is false and some person S wants to convince me that my believing X would be instrumentally useful to me, it is almost certainly the case that ① believing X actually isn't instrumentally useful to me, and indeed ② my believing X would put me at a disadvantage to S.
In other other words, the more you try to convince me that believing false things is good for me, the more I will conclude that you are bad for me.
¹ As opposed to merely inaccurate but usually-good-enough approximations, as found in classical physics or kindergarten hygiene instruction.
↑ comment by Shmi (shminux) · 2013-04-09T18:41:41.152Z · LW(p) · GW(p)
Not sure how this is related to what I posted. It was about accurate accounting, not accurate reporting of your account to someone who doesn't want an accurate account.
comment by MalcolmOcean (malcolmocean) · 2013-04-10T06:44:39.627Z · LW(p) · GW(p)
I was expecting to find someone commenting about beliefs whose truth-value may be hard to know but whose effect is positive nonetheless. A couple of examples (which I don't necessarily personally endorse):
If believing this homeopathic sugar pill works will make it work,
I desire to believe that this sugar pill works.
If believing this homeopathic sugar pill works will not make it work,
I desire to believe that this sugar pill does not work.
Let me not become attached to beliefs that do not serve me.
or
If believing in synchronicities will cause more good things to happen in my life,
I desire to believe in synchronicities.
If believing in synchronicities will not cause more good things to happen in my life,
I desire to not believe in synchronicities.
Let me not become attached to beliefs I do not want.
It appears that, if you have the ability to actually self-modify your beliefs like this, the "Litany of Instrumentarski" could be a useful way to deal with the thing where rationality breaks things like the placebo effect. Sugar pills or whatever: if you can adopt the positive sides of beliefs that are self-fulfilling prophecies (true whichever way you believe them, e.g. the Pygmalion effect), that ought to be conducive to winning.
↑ comment by Shmi (shminux) · 2013-04-10T06:59:45.550Z · LW(p) · GW(p)
That's a good point. I guess I still have quite a ways to go to rid myself of the notion of external reality, which I was subconsciously assuming. If belief changes reality, too bad for reality. It's the accuracy of the belief that is important.
↑ comment by MugaSofer · 2013-04-12T13:39:55.172Z · LW(p) · GW(p)
Self-fulfilling beliefs don't mean there's no external reality; they just mean your mind (and thus your beliefs) is part of reality, and therefore capable of influencing it. If your beliefs weren't part of reality, naturally you would be unable to act on them in any case. The correct belief is, of course, "if someone believes X, X will occur; if someone believes Y, Y will occur."
EDIT: The last sentence, which is slightly tangential to the rest, has been moved (on the theory that it was attracting downvotes) to increase the signal-to-noise ratio. It still exists in the comment below, if you wish to downvote it.
↑ comment by Document · 2013-08-11T21:25:42.659Z · LW(p) · GW(p)
That seems like a reasonable response to shminux's post, so I'm not sure why you were at -2 (unless it was for your final sentence).
↑ comment by MugaSofer · 2013-08-18T19:51:02.103Z · LW(p) · GW(p)
Huh, I hadn't noticed that. You're probably right; such a statement is something of an anti-applause light here on lesswrong. (And, to be fair, with good reason.)
EDIT: I think I'll remove it, actually ... I'll move it to a comment so as not to torture the poor souls who saw this cryptic conversation.
↑ comment by MalcolmOcean (malcolmocean) · 2013-04-11T21:46:02.838Z · LW(p) · GW(p)
But if the belief is accurate either way, then you can basically pick whatever belief you want. This is the weird paradox of self-fulfilling prophecies, like the Pygmalion effect. So what then?
comment by jetm · 2013-04-09T19:12:52.553Z · LW(p) · GW(p)
If I'm reading this correctly: if A is true but the evidence available to you suggests that A is false, you wish to believe that A is false? Or am I missing something?
↑ comment by Shmi (shminux) · 2013-04-09T20:04:50.471Z · LW(p) · GW(p)
if A is true
Note that this statement only makes sense if you already subscribe to physical realism, as it presumes the territory separate from any maps.
If you don't make this assumption, this statement means that "at some point I will acquire evidence confirming the model based on A with very high confidence". The currently available evidence may be against A, however. This happens quite often in physics, though not in trivial ways.
For example, light was believed to be composed of particles until Poisson's spot was discovered; there was plenty of experimental evidence for the particle view, too. Afterwards, light was believed to be waves, and there was overwhelming evidence for that as well. Then the UV catastrophe was deduced and the photoelectric effect was discovered, demonstrating that the question "is light a wave or a particle" has a different answer depending on the manner of asking. The story is far from over at present.
I wish to believe that I will update my beliefs based on available evidence (a bit meta here).
↑ comment by Decius · 2013-04-09T20:43:56.739Z · LW(p) · GW(p)
The correct answer to "is light a wave or a particle" is "No, it is not the case that there exists 'light' that is a wave or a particle. Electromagnetism behaves according to these equations, which closely approximate wavelike behavior in these areas and closely approximate billiard balls in these areas."
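For concreteness, a minimal sketch in LaTeX of the kind of equations gestured at here; reading "these equations" as the source-free Maxwell equations is my assumption, not something spelled out in the comment:

% Source-free (vacuum) Maxwell equations:
\[
\nabla \cdot \mathbf{E} = 0, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\]
% Taking the curl of Faraday's law and using
% \nabla \times (\nabla \times \mathbf{E}) = \nabla(\nabla \cdot \mathbf{E}) - \nabla^{2}\mathbf{E}
% yields the wave equation, i.e. the regime of "wavelike behavior":
\[
\nabla^{2}\mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^{2}\mathbf{E}}{\partial t^{2}},
\qquad c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}
\]
% The "billiard ball" regime only appears once the field is quantized,
% with energy exchanged in discrete quanta E = h\nu (hence the photoelectric effect).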
comment by Dorikka · 2013-04-10T22:12:42.658Z · LW(p) · GW(p)
I think your heuristic requires too much computational power to wield as effectively as the original version, so it isn't worth it. The temptation to take bad black swan bets seems too great.
↑ comment by Shmi (shminux) · 2013-04-10T22:29:54.585Z · LW(p) · GW(p)
I don't follow. What computational power do you mean? And what bad black swan bets? A couple of examples would be great.
comment by Vaniver · 2013-04-09T20:48:17.346Z · LW(p) · GW(p)
Maybe the first line should be modified to something like "If I later find a diamond in the box...", or something. How about the following?
I disagree with this modification. The first one explicitly focuses on the causal effect of the belief, but the second one focuses on the temporal successors of the belief. The first is much stronger, more useful, and more general than the second.
↑ comment by Shmi (shminux) · 2013-04-09T20:57:20.210Z · LW(p) · GW(p)
Interesting. Do you mind elaborating?
↑ comment by Vaniver · 2013-04-09T23:48:50.681Z · LW(p) · GW(p)
Stronger because the second looks like a codification of "post hoc ergo propter hoc," better because the relationships are narrower, and more general because it responds well to situations where you let causation flow backwards in time. (For example, the first will let you pay in the Parfit's Hitchhiker scenario.)
↑ comment by wedrifid · 2013-04-10T01:50:52.274Z · LW(p) · GW(p)
I disagree with this modification. The first one explicitly focuses on the causal effect of the belief, but the second one focuses on the temporal successors of the belief. The first is much stronger, more useful, and more general than the second.
I prefer the modification, for some of the same reasons that you disagree with it. That is, because the modification is weaker, less general, actually doesn't serve to convey shminux's position, and avoids conflating instrumentality considerations with the anti-realist position.
Specifically, saying this:
If I will find no diamond in the box,
I desire to believe that I will find no diamond in the box;
... does not entail any sort of claim about the distribution of the diamond in situations in which one will not happen to, or expect to be able to, personally interact with the diamond but still cares whether diamond-containing boxes are sent to some place. I.e., it is technically compatible with:
If Sally will find a diamond in the box but I will never receive any message from Sally or the box after the box arrives at Sally,
I still desire to believe that Sally will find a diamond in the box.
(Or, you know, food rations and a terraforming device for her colonization mission.)
comment by D_Malik · 2013-04-09T15:30:01.377Z · LW(p) · GW(p)
Analogously, the proposition "snow is white" is true if and only if believing that snow is white has positive utility.
If you're a perfect Bayesian reasoner, believing that snow is white has positive utility iff snow is actually white, and so the above sentence simplifies to "The proposition "snow is white" is true if and only if snow is white." But you are not a perfect Bayesian reasoner, and insofar as you are imperfect, things are fuzzy.
The first paragraph here is pretty much tautological; what you can disagree about is whether the cost involved, and the benefit to be gained, are ever such that you can actually gain utility by self-delusion.
↑ comment by Decius · 2013-04-09T20:40:41.458Z · LW(p) · GW(p)
More analogously, you should believe that the proposition "snow is white" is true if and only if believing that snow is white has positive utility.
There is a difference between the proposition being true and believing the proposition is true, right?
↑ comment by Tenoke · 2013-04-09T16:40:01.721Z · LW(p) · GW(p)
The proposition "snow is white" is true if and only if believing that snow is white has positive utility.
That's quite catchy.
↑ comment by Baughn · 2013-04-09T16:54:13.166Z · LW(p) · GW(p)
It also seems to claim that snow is black if believing so has positive utility, regardless of whether or not it's actually true.
Consider, for example, if Big Brother can read your mind and will punish you horribly if you believe that snow is white. Yes, in that case it might make sense to believe that it's black (if you are capable of doing so), but that doesn't make it true.
↑ comment by Shmi (shminux) · 2013-04-09T17:04:25.254Z · LW(p) · GW(p)
Utility is not the same thing as testability. Your color detector may return the same result when pointed at snow as when pointed at a sheet of paper, but you may decide to call the former "black" and the latter "white" for utilitarian reasons. Which is quite common IRL.
comment by NancyLebovitz · 2013-04-09T15:57:00.157Z · LW(p) · GW(p)
I'm interested in when you think the utility of beliefs diverges from their truth.
↑ comment by Shmi (shminux) · 2013-04-09T16:51:15.593Z · LW(p) · GW(p)
I don't believe we are talking about the same thing. I wasn't talking about utility, I was talking about testability. My operational definition of truth is the accuracy of predictions. Except for "mathematical truths", which are well-formed finite strings of symbols.
comment by TimS · 2013-04-09T17:43:26.895Z · LW(p) · GW(p)
Shminux,
One of the problems with your position is that physical realism is the beginning of the debate, not its end. Positions on the ontological status of physical entities have all sorts of implications elsewhere.
You yourself implicitly acknowledge as much when you said that you desire to find diamonds in the box, and want to adjust your beliefs to maximize the likelihood of such a pleasant discovery. In other words, finding diamonds means more than just evidence of accurate belief or accurate ability to make predictions - finding a (valuable) diamond has other benefits as well.
↑ comment by Shmi (shminux) · 2013-04-09T17:59:13.089Z · LW(p) · GW(p)
You yourself implicitly acknowledge as much when you said that you desire to find diamonds in the box
I never said that. In this example I don't care about diamonds. I desire to believe that my expectations of the number of diamonds will match the reported number of diamonds, should I bother checking. Could be one or could be none, whatever, as long as it matches.
↑ comment by TimS · 2013-04-09T18:04:26.920Z · LW(p) · GW(p)
You said:
If believing that there is a diamond in the box lets me find the diamond in the box, I desire to believe that there is a diamond in the box
This implies that you'd like to find a diamond in the box. That desire to find a diamond has nothing to do with physical pragmatism.
But if you say I've misread the emphasized portion of your quote, then I believe you. Not sure what it changes about my point that the physical realism debate exists in part to provide a firmer underpinning for other debates (like morality or preference).
↑ comment by Shmi (shminux) · 2013-04-09T18:09:21.659Z · LW(p) · GW(p)
This implies that you'd like to find a diamond in the box.
Only if I believe that I will find one. Actually not even that. It's the other way around. I desire to believe that I will find a diamond if and only if I will find the diamond.
I guess I sort of see where the confusion is coming from. Maybe I should rephrase it. I have edited the OP.
EDIT:
the physical realism debate exists in part to provide a firmer underpinning for other debates (like morality or preference).
Are you saying that I must subscribe to physical realism because of moral considerations?
↑ comment by TimS · 2013-04-09T18:44:28.903Z · LW(p) · GW(p)
Are you saying that I must subscribe to the physical realism because of moral considerations?
No. But your position (or any position) on physical realism has implications in meta-ethics. Personally, those implications are the only reason I find the physical realism debate interesting at all.
In other words, a moral realist who is a physical anti-realist is very confused. In general, the desire of all realists is to have a consistent definition of "real" for both physical entities and moral facts. (Probably, we all desire it, but realists believe the characteristic "real" is a worthwhile label to try to apply).
I'm confused by your stance because you seem to think one's position on physical realism has no bearing on one's moral position. Whereas I think most of the motivation for (interesting) arguments about physical realism is an outgrowth of disputes in other kinds of realism debates.
↑ comment by Shmi (shminux) · 2013-04-09T18:57:03.702Z · LW(p) · GW(p)
I'm confused by your stance because you seem to think one's position on physical realism has no bearing on one's moral position.
Not quite. I assert that instrumentalism/physical pragmatism gives you a cleaner path to moral considerations than physical realism. The resulting positions may or may not be the same, depending on other factors (not all physical realists have the same set of morals, either). But not getting sidetracked into what exists and what doesn't, and instead concentrating on accurate and inaccurate models of past, present and future inputs, lets you bypass a lot of rubbish along the way. Unfortunately, it does not let you avoid being strawmanned by everyone else.
↑ comment by TimS · 2013-04-09T19:15:53.028Z · LW(p) · GW(p)
Suffice it to say that I don't agree. Having a consistent definition of exists would help immeasurably in clarifying positions on the moral realism / anti-realism debate. And you don't do a good job of noting when you are using a word in a non-standard way (and your other interlocutors are not great at noticing that your usage is non-standard).
You do realize that the standard understandings in the moral realism debate would say that referencing wrongness to a particular (non-universal) source of judgment is an anti-realist position?
Saying that right and wrong are meaningful only given a particular social context is practically the textbook definition of moral relativism, which is an anti-realist position.
↑ comment by Shmi (shminux) · 2013-04-09T20:11:16.212Z · LW(p) · GW(p)
Suffice it to say that I don't agree.
That's a position, not an argument.
Having a consistent definition of exists would help immeasurably in clarifying positions on the moral realism / anti-realism debate.
Boooring... I care about accurate models, not choosing between two equally untestable positions.
You do realize that the standard understandings in the moral realism debate would say that referencing wrongness to a particular (non-universal) source of judgment is an anti-realist position?
Why should I care what a particular school of untestables says?
Saying that right and wrong are meaningful only given a particular social context is practically the textbook definition of moral relativism, which is an anti-realist position.
Again, I don't care about the labels, I care about accurate beliefs.