Making Beliefs Pay Rent (in Anticipated Experiences)

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-07-28T22:59:48.000Z · LW · GW · Legacy · 266 comments

Thus begins the ancient parable:

If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.”

If there’s a foundational skill in the martial art of rationality, a mental stance on which all other technique rests, it might be this one: the ability to spot, inside your own head, psychological signs that you have a mental map of something, and signs that you don’t.

Suppose that, after a tree falls, the two arguers walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other?

Though the two argue, one saying “No,” and the other saying “Yes,” they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them; their maps of the world do not diverge in any sensory detail.

It’s tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly. We don’t see the atoms underlying the brick, but the atoms are in fact there. There is a floor beneath your feet, but you don’t experience the floor directly; you see the light reflected from the floor, or rather, you see what your retina and visual cortex have processed of that light. To infer the floor from seeing the floor is to step back into the unseen causes of experience. It may seem like a very short and direct step, but it is still a step.

You stand on top of a tall building, next to a grandfather clock with an hour, minute, and ticking second hand. In your hand is a bowling ball, and you drop it off the roof. On which tick of the clock will you hear the crash of the bowling ball hitting the ground?

To answer precisely, you must use beliefs like Earth’s gravity is 9.8 meters per second per second, and This building is around 120 meters tall. These beliefs are not wordless anticipations of a sensory experience; they are verbal-ish, propositional. It probably does not exaggerate much to describe these two beliefs as sentences made out of words. But these two beliefs have an inferential consequence that is a direct sensory anticipation—if the clock’s second hand is on the 12 numeral when you drop the ball, you anticipate seeing it on the 1 numeral when you hear the crash five seconds later. To anticipate sensory experiences as precisely as possible, we must process beliefs that are not anticipations of sensory experience.
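(As a quick check of that arithmetic, here is a minimal sketch in Python; it takes the rough figures above at face value and ignores air resistance.)

```python
import math

g = 9.8    # Earth's surface gravity, meters per second per second (figure above)
h = 120.0  # approximate building height in meters (figure above)

# Ignoring air resistance, a dropped ball satisfies h = (1/2) * g * t^2,
# so the fall time is t = sqrt(2 * h / g).
t = math.sqrt(2 * h / g)

print(f"fall time: {t:.2f} s")                       # about 4.95 s
print(f"numerals the second hand advances: {t / 5:.1f}")  # about 1.0 (12 -> 1)
```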

It is a great strength of Homo sapiens that we can, better than any other species in the world, learn to model the unseen. It is also one of our great weak points. Humans often believe in things that are not only unseen but unreal.

The same brain that builds a network of inferred causes behind sensory experience can also build a network of causes that is not connected to sensory experience, or poorly connected. Alchemists believed that phlogiston caused fire—we could simplistically model their minds by drawing a little node labeled “Phlogiston,” and an arrow from this node to their sensory experience of a crackling campfire—but this belief yielded no advance predictions; the link from phlogiston to experience was always configured after the experience, rather than constraining the experience in advance.

Or suppose your English professor teaches you that the famous writer Wulky Wilkinsen is actually a “retropositional author,” which you can tell because his books exhibit “alienated resublimation.” And perhaps your professor knows all this because their professor told them; but all they're able to say about resublimation is that it's characteristic of retropositional thought, and of retropositionality that it's marked by alienated resublimation. What does this mean you should expect from Wulky Wilkinsen’s books?

Nothing. The belief, if you can call it that, doesn’t connect to sensory experience at all. But you had better remember the propositional assertions that “Wulky Wilkinsen” has the “retropositionality” attribute and also the “alienated resublimation” attribute, so you can regurgitate them on the upcoming quiz. The two beliefs are connected to each other, though still not connected to any anticipated experience.

We can build up whole networks of beliefs that are connected only to each other—call these “floating” beliefs. It is a uniquely human flaw among animal species, a perversion of Homo sapiens’s ability to build more general and flexible belief networks.

The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict—or better yet, prohibit. Do you believe that phlogiston is the cause of fire? Then what do you expect to see happen, because of that? Do you believe that Wulky Wilkinsen is a retropositional author? Then what do you expect to see because of that? No, not “alienated resublimation”; what experience will happen to you? Do you believe that if a tree falls in the forest, and no one hears it, it still makes a sound? Then what experience must therefore befall you?

It is even better to ask: what experience must not happen to you? Do you believe that élan vital explains the mysterious aliveness of living beings? Then what does this belief not allow to happen—what would definitely falsify this belief? A null answer means that your belief does not constrain experience; it permits anything to happen to you. It floats.

When you argue a seemingly factual question, always keep in mind which difference of anticipation you are arguing about. If you can’t find the difference of anticipation, you’re probably arguing about labels in your belief network—or even worse, floating beliefs, barnacles on your network. If you don’t know what experiences are implied by Wulky Wilkinsen’s writing being retropositional, you can go on arguing forever.

Above all, don’t ask what to believe—ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry. Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.

266 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Richard_Pointer2 · 2007-07-29T01:41:37.000Z · LW(p) · GW(p)

Great post. As always.

comment by michael_vassar3 · 2007-07-29T04:51:34.000Z · LW(p) · GW(p)

I assume that most of math is being ignored for simplicity's sake?

Replies from: David_Allencourt
comment by David_Allencourt · 2021-08-26T06:18:55.098Z · LW(p) · GW(p)

I think his point isn't so much that what you're saying WILL have a practical impact on your sensory experiences, just that it has the potential to do so: what you would "expect" to experience as a result. In real life we can't weld a pair of trillion-pound bars of gold to each other and then see how much they weigh, but because of mathematics, we know that if we were to place them on an accurate scale we would see a weight of two trillion pounds.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-07-29T05:31:18.000Z · LW(p) · GW(p)

What good is math if people don't know what to connect it to?

Replies from: VKS, jirkazr, army1987, Pineapple264, TheAncientGeek
comment by VKS · 2012-03-17T15:35:25.858Z · LW(p) · GW(p)

All math pays rent.

For all mathematical theorems can be restated in the form:

If the axioms A, B, and C and the conditions X, Y and Z are satisfied, then the statement Q is also true.

Therefore, in any situation where the statements A, B, C and X, Y, Z are true, you will expect Q to also be verified.

In other words, mathematical statements automatically pay rent in terms of changing what you expect, which is the very thing it was required to show. ■


In practice:

If you demonstrate Pythagoras's Theorem, and you calculate that 3^2+4^2=5^2, you will expect a certain method of getting right angles to work.

If you exhibit the aperiodic Penrose Tiling, you will expect Quasicrystals to exist.

If you demonstrate the impossibility of solving the Halting Problem, you will not expect even a hypothetical hyperintelligence to be able to solve it.

If you understand why you can't trisect an angle with an unmarked ruler and a compass (not both used at the same time), you will know immediately that certain proofs are going to be wrong.

and so on and so forth.

Yes, we might not immediately know where a given mathematical fact will come in handy when observing the world, but by their nature, mathematical facts tell us exactly when to expect them.
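(To make the first of those examples concrete, here is a minimal sketch in Python, offered as an illustration rather than as part of VKS's comment: if the theorem holds, a triangle measured out with sides of 3, 4, and 5 units should contain a right angle, and the law of cosines says exactly where.)

```python
import math

# Sides of the classic "rope-stretcher's" triangle.
a, b, c = 3.0, 4.0, 5.0

# Law of cosines: the angle C opposite side c satisfies
# c^2 = a^2 + b^2 - 2*a*b*cos(C).
C = math.degrees(math.acos((a**2 + b**2 - c**2) / (2 * a * b)))

print(C)  # 90.0 -- the experience you anticipate when you measure that corner
```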

Replies from: Daniel Clayton
comment by Daniel Clayton · 2021-03-18T08:05:53.952Z · LW(p) · GW(p)

Is this to say that one of the purposes of mathematics is to prove something new, even without knowing what it might be used for, with the awareness that it might be useful at a later point? Or that it might form part of a proof for something else that is also currently unknown? 

Replies from: MIN0010
comment by MIN0010 · 2024-01-27T11:16:43.249Z · LW(p) · GW(p)

Yes, there are numerous cases where mathematicians in "pure" mathematics proved interesting theorems simply because of their challenging and elegant nature (certain theorems possess generality and elegance), and those results were later found to be practically useful, at which point they get called "applied" mathematics. Frankly, the distinction is blurred: pure mathematics is so useful (see Eugene Wigner's "The Unreasonable Effectiveness of Mathematics in the Natural Sciences") that its abstract nature gives it huge extensibility and general applications in multiple domains [LW · GW]. For instance, Einstein's GR was based on the pure mathematics of Riemannian manifolds, an abstract structure not initially tied to reality in any way. Or consider how algebraic topology is used for data mining, how number theory is used for cryptography, how linear algebra is used for machine learning, how group theory is used for particle physics... and even how Bayesian probability theory is used for LW rationality.

Stephen Wolfram has great resources on rulial spaces and the nature of computation for the universe's fundamental ontology (the territory not the map) in which these networks of theorems can correspond to our empirical reality. (psst I am a very new LW user, and I am deciding if I should do a Sequence for this idea of "rulial cover" which is how rulial deduction can be applied to Solomonoff induction and Bayesian abduction, would be great if someone thinks this is interesting to explore so I can be motivated [LW · GW])

To link back to Eliezer's post, "floating beliefs" in a Bayesian net can be connected by adjusting the "weights" of the edges that connect that belief, using Bayesian inference, and mathematics makes these inferences robust from axioms (deductive validity as 100% in weight and 0% in prior). Therefore, anticipation becomes certain under a set of idealized axioms.

Replies from: bruno-vieira
comment by Bruno Vieira (bruno-vieira) · 2024-04-20T22:54:04.985Z · LW(p) · GW(p)

'I am deciding if I should do a Sequence for this idea of "rulial cover" which is how rulial deduction can be applied to Solomonoff induction and Bayesian abduction'
 

I don't really know what you mean, but if it's something unseen you can expect it to be useful!

comment by jirkazr · 2012-08-23T14:51:29.095Z · LW(p) · GW(p)

Is it not the purpose of math to tell us "how" to connect things? At the bottom, there are some axioms that we accept as the basis of the model, and using another formal model we can infer what to expect from anything whose behavior matches our axioms.

Math makes it very hard to reason about models incorrectly. That's why it's good. Even parts of math that seem particularly outlandish and disconnected just build a higher-level framework on top of more basic concepts that have been successfully utilized over and over again.

That gives us a solid framework on which we can base our reasoning about abstract ideas. Just a few decades ago most people believed the theory of probability was just a useless mathematical game, disconnected from any empirical reality. Now people like you and me use it every day to quantify uncertainty and make better decisions. The connections are not always obvious.

comment by Pineapple264 · 2013-12-22T03:48:44.465Z · LW(p) · GW(p)

That's exactly how I felt in high school. I'm glad I changed that, because it wouldn't be useful to me if I'd never learned algebra. The first part of the class is hard to use and discouraging to new students.

comment by TheAncientGeek · 2015-02-22T18:32:50.595Z · LW(p) · GW(p)

Is pure math a set of beliefs that should be evicted?

Replies from: g_pepper
comment by g_pepper · 2015-02-22T19:12:39.432Z · LW(p) · GW(p)

Is pure math a set of beliefs that should be evicted?

No, for reasons expressed above by VKS.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-02-22T21:42:06.369Z · LW(p) · GW(p)

Note the word "pure". By definition, pure maths doesn't pay off in experience. If it did, it would be applied.

Replies from: g_pepper
comment by g_pepper · 2015-02-22T22:12:05.837Z · LW(p) · GW(p)

IMO the distinction between pure and applied math is artificial, or at least contingent; today's pure math may be tomorrow's applied math. This point was made in VKS's comment referenced above:

Yes, we might not immediately know where a given mathematical fact will come in handy when observing the world, but by their nature, mathematical facts tell us exactly when to expect them

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-02-22T22:41:03.664Z · LW(p) · GW(p)

The question is whether anyone should believe pure maths now. If you are allowed to believe things that might possibly pay off, then the criterion excludes nothing.

Replies from: lalaithion, g_pepper, Epictetus
comment by lalaithion · 2015-02-22T23:15:06.930Z · LW(p) · GW(p)

Metabeliefs! Applied math concepts that seem useless now have, in the past, become useful. Therefore, the belief that "believing in applied math concepts pays rent in experience" itself pays rent in experience, so you should believe it.

comment by g_pepper · 2015-02-22T23:58:10.023Z · LW(p) · GW(p)

Unlike scientific knowledge or other beliefs about the material world, a mathematical fact (e.g. that Z follows from X1, X2, ..., Xn), once proven, is beyond dispute; there is no chance that such a fact will be contradicted by future observations. One is allowed to believe mathematical facts (once proven) because they are indisputably true; that these facts pay rent is supported by VKS's argument.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-02-23T11:02:44.169Z · LW(p) · GW(p)

Truths of pure maths don't pay rent in terms of expected experience. EY has put forward a criterion of truth (correspondence) and a criterion of believability (expected experience), and pure maths fits neither. He didn't want that to happen, and the problem remains, here and elsewhere, of how to include abstract maths and still exclude the things you don't like. This is old ground that the logical positivists went over in the mid 20th century.

Replies from: Richard_Kennaway, g_pepper
comment by Richard_Kennaway · 2015-02-23T13:21:27.154Z · LW(p) · GW(p)

Truths of pure maths don't pay rent in terms of expected experience.

Here is a truth of pure mathematics: every positive integer can be expressed as a sum of four squares.

Expected experiences: there will be proofs of this theorem, proofs that I can follow through myself to check their correctness.

Et voilà!
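(The proof is the primary anticipated experience here, but a minimal sketch in Python, added purely as an illustration, shows a second one: brute-force search never turns up a counterexample, however far it runs.)

```python
import itertools

def is_sum_of_four_squares(n):
    """Brute-force check of the four-square claim for a single integer n."""
    roots = range(int(n ** 0.5) + 1)
    return any(a*a + b*b + c*c + d*d == n
               for a, b, c, d in itertools.combinations_with_replacement(roots, 4))

# Anticipated experience: the assertion never fails, no matter how far we look.
assert all(is_sum_of_four_squares(n) for n in range(1, 501))
print("1 through 500 are all sums of four squares")
```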

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-02-23T14:24:02.443Z · LW(p) · GW(p)

Truth of astrology: Mars in conjunction with Jupiter is dangerous for Leos.

Expected experience: there will be astrology articles saying Leos are in danger when Mars is in conjunction with Jupiter.

Replies from: Richard_Kennaway, polymathwannabe, paolo-falabella
comment by Richard_Kennaway · 2015-02-23T14:56:23.663Z · LW(p) · GW(p)

Of course astrological claims pay rent. The problem with astrology is not that it's meaningless but that it's false, and the problem with astrologers is that they don't pay the epistemological rent.

Also, a proof is a different thing from a mathematician saying so. The rent that is being paid there is not merely that the theorem will be asserted but that there will be a proof.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-02-23T15:56:57.205Z · LW(p) · GW(p)

Of course astrological claims pay rent.

Try telling Eliezer

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-02-23T16:41:40.852Z · LW(p) · GW(p)

The original post does not mention astrology. If you want to spy out some place where Eliezer has said that astrological claims are meaningless, go right ahead. I am not particularly concerned with whether he has or not.

Here and now, you are talking to me, and as I pointed out, the belief can pay rent, but astrologers are not making it do so. Those who have seriously looked for evidence, have, so I understand, generally found the beliefs false.

comment by polymathwannabe · 2015-02-23T14:56:33.813Z · LW(p) · GW(p)

From that belief, the expected experience should be Leo people being less fortunate during those days.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-02-23T15:50:16.322Z · LW(p) · GW(p)

That was the point. It's a cheat to expect astrology truths to produce experiences of reading written materials about astrology, so it's a cheat to expect pure maths truths ...

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-02-23T16:49:17.572Z · LW(p) · GW(p)

That was the point. It's a cheat to expect astrology truths to produce experiences of reading written materials about astrology, so it's a cheat to expect pure maths truths ...

Let me complete the ellipsis with what I actually said. A mathematical assertion leads me to expect a proof. Not merely experiences of reading written materials repeating the assertion.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-02-24T10:37:47.533Z · LW(p) · GW(p)

And a proof still isn't an experience in the relevant sense. It's not like predicting an eclipse.

Replies from: wizzwizz4
comment by wizzwizz4 · 2019-04-21T18:48:56.178Z · LW(p) · GW(p)

What's the difference between behaviours of non-sentient objects and behaviours of sentient people that makes one an experience and the other not?

comment by Paolo Falabella (paolo-falabella) · 2019-05-17T09:39:09.220Z · LW(p) · GW(p)

I think this is both right and not in contradiction with the post.

The belief that pays the rent here is that there is going to be a high correlation between Mars being in conjunction with Jupiter and astrology believers born around August experiencing heightened feelings of being in danger.

That does not say anything on the "truth" of astrology itself.

Same applies to the article's example on Wulky Wilkinsen. The belief that alienated resublimation justifies the fictional author's retropositionality does not pay rent. The belief that failing to mention retropositionality correlates with higher chances of failing a literature test on Wilkinsen does probably pay rent.

comment by g_pepper · 2015-02-24T01:07:30.213Z · LW(p) · GW(p)

I think I see where you are going with this.

My initial interpretation of EY's original post is that he was explicating a scientific standard of belief that would make sense in many situations, including in reasoning about the physical world (EY's initial examples were physical phenomena - trees falling, bowling balls dropping, phlogiston, etc.). I did not really think he was proposing the only standard of belief. This is why I was baffled by your insistence that unless a mathematical fact had made successful predictions about physical, observable phenomena, it should be evicted.

However, later in the original post EY used an example out of literary criticism, and here he appears to be applying the standard to mathematics. So, you may be on to something - perhaps EY did intend the standard to be universally applied.

It seems to me that applying EY's standard too broadly is tantamount to scientism (which I suspect is more-less the point you were making).

comment by Epictetus · 2015-02-23T06:07:49.324Z · LW(p) · GW(p)

If you believe in applied math, what are the grounds for excluding "pure" math? Most of the time "pure" just means that the mathematician makes no explicit reference to real-world applications and that the theorems are formulated in an abstract setting. Abstraction usually just boils down to figuring out exactly which hypotheses are necessary to get the conclusion you want and then dispensing with the rest.

Let's take the theory of probability as an example. There's nothing in the general theory that contradicts everyday, real-world probability applications. Most of the time the general theory does little other than make precise our intuitive notions and avoid the paradoxes that plague a naive approach. This is an artifact of our insistence on logic. A thorough, logical examination of just about any piece of mathematics will quickly lead to the domain of "pure" math.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-02-23T10:01:13.576Z · LW(p) · GW(p)

I am not making the statement "exclude pure math", I am posing the question "if pure math stays, what else stays?"

Maybe post-utopianism is an abstract idealisation that makes certain concepts precise.

Replies from: Epictetus
comment by Epictetus · 2015-02-23T16:22:00.123Z · LW(p) · GW(p)

There are beliefs that directly pay rent, and then there are beliefs that are logical consequences of rent-paying beliefs. The same basic principles that give you applied math will also lead to pure math. We can justify spending effort on pure math on the grounds that it may pay off in the future. However, our belief in pure math is tied to our belief in logic.

If you asked whether this can be applied to something like astrology, I'd ask whether astrology was a logical consequence of beliefs that do pay rent.

comment by Doug_S. · 2007-07-29T06:08:50.000Z · LW(p) · GW(p)
What good is math if people don't know what to connect it to?

Allow me to answer your question with a question: What good is music?

Replies from: MikeKrebsbach, Rixie
comment by MikeKrebsbach · 2011-06-21T13:47:13.354Z · LW(p) · GW(p)

Lacking a point of reference, the word 'music' is interchangeable with 'noise'. Consider your query as if read by one who had been deaf right out of the womb.

Forgive the presumption; I'm new to this whole thing in many ways, but I have a feeling you either did not read or did not understand the 'map and territory' sequence. Perhaps this would help you answer your own question.

http://wiki.lesswrong.com/wiki/Map_and_Territory

comment by Rixie · 2013-01-25T01:16:14.232Z · LW(p) · GW(p)

That's not the right question. In order for facts to be useful, they must connect to something. Music is not a fact, except in the sense that "Music exists, therefore I expect to hear it under this circumstance."

Math is an expression of what is constantly happening around us. Music is a thing, not a belief. "What good is Music?" Well, what good are trees? What good is love?

You put music into the wrong category. Music is a thing, not a belief.

Replies from: Rixie, shminux
comment by Rixie · 2013-01-25T01:16:55.033Z · LW(p) · GW(p)

The "what good is love" thing wasn't meant to be philosophical; it was just an example.

comment by Shmi (shminux) · 2013-01-25T03:51:28.908Z · LW(p) · GW(p)

Music is a thing, not a belief.

Define music.

Replies from: None, Rixie
comment by [deleted] · 2013-01-25T04:03:00.132Z · LW(p) · GW(p)

'e probably means the set of sounds that have certain structural properties, and that humans find enjoyable to listen to.

comment by Rixie · 2013-01-25T04:14:55.398Z · LW(p) · GW(p)

The question is, what definition of music could change the meaning of my explanation?

(Not implying that there isn't one, just wondering, and making you wonder too.)

comment by michael_vassar3 · 2007-07-29T06:54:35.000Z · LW(p) · GW(p)

In practice, most of the time people figure out what to connect it to later. More precisely, most of it probably doesn't connect to anything, but what does connect to stuff usually isn't found to do so until much later than it is invented/discovered.

comment by Vladimir_Nesov2 · 2007-07-29T10:01:17.000Z · LW(p) · GW(p)

Some ungrounded concepts can produce your own behavior, which in itself can be experienced, so it's difficult to draw the line just by requiring concepts to be grounded. You believe that you believe in something, because you experience yourself acting in a way consistent with believing in it. It can define an intrinsic goal system, a point in mind design space as you call it. So one can't abolish all such concepts, only resist acquiring them.

comment by Robin_Hanson2 · 2007-07-29T15:00:35.000Z · LW(p) · GW(p)

For any instrumental activity, done to achieve some other end, it makes sense to check that specific examples are in fact achieving the intended end.

Most beliefs may have as their end the refinement of personal decisions. For such beliefs it makes sense not only to check whether they affect your personal experience, but also whether they affect any decisions you might make; beliefs could affect experience without mattering for decisions.

On the other hand, some beliefs may have as their end affecting the experiences or decisions of other creatures, such as in the far future. And you may care about effects that are not experienced by any creatures.

Replies from: None
comment by [deleted] · 2015-09-20T06:20:05.273Z · LW(p) · GW(p)

For any instrumental activity, done to achieve some other end, it makes sense to check that specific examples are in fact achieving the intended end.

Only if you have reason to believe your naive pattern matching of expectations to observation isn't already updating your expectations about instrumental activity.

Otherwise, you're "privileging the hypothesis" that you are in fact wrong.

It's kind of like smoothing in machine learning. It will have costs and benefits.

comment by Michael_Rooney · 2007-07-29T18:14:09.000Z · LW(p) · GW(p)

Eliezer, your post above strikes me, at least, as a restatement of verificationism: roughly, the view that the truth of a claim is the set of observations that it predicts. While this view enjoyed considerable popularity in the first part of the last century (and has notable antecedents going back into the early 18th century), it faces considerable conceptual hurdles, all of which have been extensively discussed in philosophical circles. One of the most prominent (and noteworthy in light of some of your other views) is the conflict between verificationism and scientific realism: that is, the presumption that science is more than mere data-predictive modeling, but the discovery of how the world really is. See also here and here.

Replies from: fburnaby
comment by fburnaby · 2011-10-22T21:52:17.179Z · LW(p) · GW(p)

Maybe I'm inferring from too little data, but I suspect that most readers at this site aren't too interested in scientific realism.

Our favourite mantra ("the map is not the territory") acknowledges and then gracefully side-steps the issues that you're raising.

(I just realized that Eliezer answers this below. Comment retracted. Is there some way for me to delete this?)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-07-29T18:38:10.000Z · LW(p) · GW(p)

Rooney, as discussed in The Simple Truth I follow a correspondence theory of truth. I am also a Bayesian and a believer in Occam's Razor. If a belief has no empirical consequences then it could receive no Bayesian confirmation and could not rise to my subjective attention. In principle there are many true beliefs for which I have no evidence, but in practice I can never know what these true beliefs are, or even focus on them enough to think them explicitly, because they are so vastly outnumbered by false beliefs for which I can find no evidence.

Replies from: Perplexed, mendel, Ty-Guy9
comment by Perplexed · 2010-07-22T03:18:43.430Z · LW(p) · GW(p)

I, too, am nervous about having anticipated experience as the only criterion for truth and meaning. It seems to me that a statement can get its meaning either from the class of prior actions which make it true or from the class of future observations which its truth makes inevitable. We can't do quantum mechanics with kets, but no bras. We can't do Gentzen natural deduction with rules of elimination, but no rules of introduction. We can't do Bayesian updating with observations, but no priors. And I claim that you can't have a theory of meaning which deals only with consequences of statements being true but not with what actions put the universe into a state in which the statement becomes true.

This position of mine comes from my interpretation of the dissertation of Noam Zeilberger of CMU (2005, I think). Zeilberger's main concern lies in Logic and Computer Science, but along the way he discusses theories of truth implicit in the work of Martin-Löf and Dummett.

Replies from: timtyler, George Noah Fitzgerald
comment by timtyler · 2010-11-30T20:08:50.355Z · LW(p) · GW(p)

I, too, am nervous about having anticipated experience as the only criterion for truth and meaning. It seems to me that a statement can get its meaning either from the class of prior actions which make it true or from the class of future observations which its truth makes inevitable.

That seems obviously correct. However, unless you pursue knowledge for its own sake, you should probably not be overly concerned with preserving past truths - unless they are going to impact on future decisions.

Of course, the decisions of a future superintelligence might depend on all kinds of historical minutae that we don't regard as important. So maybe we should preserve those truths we regard as insignificant to us for it. However, today, probably relatively few are enslaved to future superintelligences - and even then, it isn't clear that this is what they would want us to do.

comment by Peter Pehlivanov (George Noah Fitzgerald) · 2022-05-30T20:24:46.670Z · LW(p) · GW(p)

Perplexed, I'm not sure I understood what you meant by

you can't have a theory of meaning which deals only with consequences of statements being true but not with what actions put the universe into a state in which the statement becomes true.

Or if I agree with it at all. Wouldn't statements about what actions make certain statements true simply be part of the first category? I don't see a problem with only having statements and their consequences. I see you made this comment 12 years ago, so I don't know how you would stand on this today.

comment by mendel · 2011-05-19T13:22:14.444Z · LW(p) · GW(p)

An explicit belief that you would not allow yourself to hold under these conditions would be that the tree which falls in the forest makes a sound - because no one heard it, and because we can't sense it afterwards, whether it made a sound or not has no empirical consequence.

Every time I have seen this philosophical question posed on lesswrong, the two sophists that were arguing about it were in agreement that a sound would be produced (under the physical definition of the word), so I'd be really surprised if you could let go of that belief.

Replies from: Manfred
comment by Manfred · 2011-06-20T01:09:12.775Z · LW(p) · GW(p)

Hm, yeah. The trouble is how the doctrine handles deductive logic - for example, the belief that a falling tree makes vibrations in the air when the laws of physics say so is really a direct consequence of part of physics. The correct answer definitely appears to be that you can apply logic, and so the doctrine should be not to believe in something when there is no Bayesian evidence that differentiates it from some alternative.

comment by Ty-Guy9 · 2015-03-20T09:05:27.898Z · LW(p) · GW(p)

While I fully agree with the principle of the article, something stuck out to me about your comment:

In principle there are many true beliefs for which I have no evidence, but in practice I can never know what these true beliefs are, or even focus on them enough to think them explicitly, because they are so vastly outnumbered by false beliefs for which I can find no evidence.

What I noticed was that you were basically defining a universal prior for beliefs, as much more likely false than true. From what I've read about Bayesian analysis, a universal prior is nearly undefinable, so after thinking about it a while, I came up with this basic counterargument:

You say that true beliefs are vastly outnumbered by false beliefs, but I say, how could you know of the existence of all these false beliefs, unless each one had a converse, a true belief opposing it that you first had some evidence for? For otherwise, you wouldn't know whether it was true or false.

You may then say that most true beliefs don't just have a converse. They also have many related false beliefs opposing them. But I would say, those are merely the converses that spring from the connections of that true belief with its many related true beliefs.

By this, I hope I've offered evidence that a fifty-fifty universal T/F prior is at least as likely as one considering most unconsidered ideas to be false. (And I would describe my further thoughts if I thought they would be useful here, but, silly me, I'm replying to a post from almost 8 years ago.)

Replies from: CBHacking, gjm
comment by CBHacking · 2016-01-18T23:31:39.823Z · LW(p) · GW(p)

I don't think "converse" is the word you're looking for here - possibly "complement" or "negation" in the sense that (A || ~A) is true for all A - but I get what you're saying. Converse might even be the right word for that; vocabulary is not my forte.

If you take the statement "most beliefs are false" as given, then "the negation of most beliefs is true" is trivially true but adds no new information. You're treating positive and negative beliefs as though they're the same, and that's absolutely not true. In the words of this post, a positive belief provides enough information to anticipate an experience. A negative belief does not (assuming there are more than two possible beliefs). If you define "anything except that one specific experience" as "an experience", then you can define a negative belief as a belief, but at that point I think you're actually falling into exactly the trap expressed here.

If you replace "belief" with "statement that is mutually incompatible with all other possible statements that provide the same amount of information about its category" (which is a possibly-too-narrow alternative; unpacking words is hard sometimes) then "true statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category are vastly outnumbered by false statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category" is something the I anticipate you would find true. You and Eliezer do not anticipate a different percentage of possible "statements that are mutually incompatible with all other possible statements that provide the same amount of information about their category" being true.

As for universal priors, the existence of many incompatible possible (positive) beliefs in one space (such that only one can be true) gives a strong prior that any given such belief is false. If I have only two possible beliefs and no other information about them, then it only takes one bit of evidence - enough to rule out half the options - to decide which belief is likely true. If I have 1024 possible beliefs and no other evidence, it takes 10 bits of evidence to decide which is true. If I conduct an experiment that finds that belief 216 +/- 16 is true, I've narrowed my range of options from 1024 to 33, a gain of just less than 5 bits of evidence. Ruling out one more option gives the last of that 5th bit. You might think that eliminating ~96.8% of the possible options sounds good, but it's only half of the necessary evidence. I'd need to perform another experiment that can eliminate just as large a percentage of the remaining values to determine the correct belief.
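(A minimal sketch in Python of the arithmetic above, on the standard reading that evidence in bits is the base-2 logarithm of how much the space of equally likely hypotheses shrinks; the numbers are the ones used in this comment.)

```python
import math

def bits_of_evidence(hypotheses_before, hypotheses_after):
    """Bits gained when an observation shrinks a space of equally likely hypotheses."""
    return math.log2(hypotheses_before / hypotheses_after)

print(bits_of_evidence(1024, 1))    # 10.0 bits needed to single out one belief
print(bits_of_evidence(1024, 33))   # ~4.96 bits from narrowing to "belief 216 +/- 16"
print(bits_of_evidence(1024, 32))   # 5.0 bits once one more option is ruled out
```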

comment by gjm · 2016-01-19T11:36:55.187Z · LW(p) · GW(p)

If you have an arbitrary proposition -- a random sequence of symbols constrained only by the grammar of whatever language you're using -- then perhaps it's about equally likely to be true or false, since for each proposition p there's a corresponding proposition not p of similar complexity.

But the "beliefs" people are mostly interested in are things like these:

  • There is exactly one god, who created the universe and watches over us; he likes forgiveness, incense-burning, and choral music, and hates murder, atheism and same-sex marriage.
  • Two nearby large objects, whatever they are, will exert an attractive force on one another proportional to the mass of each and inversely proportional to the square of the distance between them.

and the negations of these are much less interesting because they say so much less:

  • Either there is no god or there are multiple gods, or else there is one god but it either didn't create the universe or doesn't watch over us -- or else there is one god, who created the universe and watches over us, but its preferences are not exactly the ones stated above.
  • If you have two nearby objects, whatever force there may be between them is not perfectly accurately described by saying it's proportional to their masses, inversely proportional to the square of the distance, and unaffected by exactly what they're made of.

So: yeah, sure, there are ways to pick a "random" belief and be pretty sure it's correct (just say "it isn't the case that" followed by something very specific) but if what you're picking are things like scientific theories or religious doctrines or political parties then I think it's reasonable to say that the great majority of possible beliefs are wrong, because the only beliefs we're actually interested in are the quite specific ones.

comment by Nick_Tarleton · 2007-07-31T03:03:22.000Z · LW(p) · GW(p)

It's amazing how many forms of irrationality failure to see the map-territory distinction, and the resulting reification of categories (like 'sound') that exist in the mind, causes: stupid arguments, phlogiston, the Mind Projection Fallacy, correspondence bias, and probably also monotheism, substance dualism, the illusion of the self, the use of the correspondence theory of truth in moral questions... how many more?

I think you're being too hard on the English professor, though. I suspect literary labels do have something to do with the contents of a book, no matter how much nonsense might be attached to them. But I've never experienced a college English class; perhaps my innocent fantasies will be shaken then.

Michael V, you could say that mathematical propositions are really predictions about the behavior of physical systems like adding machines and mathematicians. I don't find that view very satisfying, because math seems to so fundamentally underly everything else - mathematical truths can't be changed by changing anything physical, for instance - but it's one way to make math compatible with anticipation.

Replies from: TsviBT
comment by TsviBT · 2012-03-06T08:08:06.800Z · LW(p) · GW(p)

I suspect literary labels do have something to do with the contents of a book, no matter how much nonsense might be attached to them

I think Eliezer's point was about the student. "Wulky Wilkinsen is a 'post-utopian'" could be meaningful, if you know what a post-utopian is and is not (I don't, and don't care). The student who learns just the statement, however, has formed a floating belief.

We might even initially use propositional beliefs as indicators of meaningful beliefs about the world. But if we then discuss these highly compressed beliefs without referencing their meaning, we often feel like we are reasoning when really we have ceased to speak about the world. That is, grounded beliefs can become "floaty" and spawn further "floaty" beliefs.

In my sociology class, we talk about how "Man in his natural state has liberty because everyone is equal". "Natural state", "liberty", and "equal" could conceivably be linked to descriptions of social interaction or something. However, class after class we refrain from talking about specific behaviors. Concepts float away from their referents without much resistance - it's all the same to the student, who only needs to make a few unremarkable remarks to get his B+ for class participation. Compare:

"Man in his natural state has liberty because everyone is equal"

"Man in his natural state is equal because everyone has liberty"

"When everyone has liberty and is equal, man is in his natural state"

These statements should express very different beliefs about the world, but to the student they sound equally clever coming out of the professor's mouth.

(Edit for minor grammar and formatting)

comment by Nick_Tarleton · 2007-07-31T03:04:01.000Z · LW(p) · GW(p)

It's amazing how many forms of irrationality failure to see the map-territory distinction

Should have been "how many forms of irrationality result from failure...". Sorry.

comment by crasshopper · 2008-03-14T23:27:35.000Z · LW(p) · GW(p)

I agree with those who say it's okay to figure things out later. If my music professor says a certain composer favors the Aeolian mode, I may not be able to visualize that on the spot but who cares? I can remember that statement and think about it later. Likewise with phlogiston, I have a vague concept of what it is and someday the alchemists will discover more precisely what's going on there.

Too much cognitive effort would be spent if, every time I thought about linear algebra, I had to visualize the myriad concrete instances in which it will be applied. I bet thinking in abstractions results in way more economical use of thinking time and thinking-matter.

comment by Mark_Probst · 2008-04-07T13:37:13.000Z · LW(p) · GW(p)

In what way is the belief that beliefs should be grounded not a free-floating belief itself?

Replies from: adamisom, MarkusRamikin
comment by adamisom · 2012-04-14T06:52:13.495Z · LW(p) · GW(p)

One way of answering might be to say that there is no separate "belief" that beliefs should be grounded. But I'm not sure.

All I know is that the question annoys me, but I can't quite put my finger on it. It reminds me of questions like (1) the accusation that you can't justify the use of logic logically, or (2) the accusation that tolerance is actually intolerant - because it's intolerant of intolerance. There might be a level distinction that needs to be made here, as in (2) - and maybe in (1) though I think that's different.

Replies from: Danfly
comment by Danfly · 2012-04-14T11:23:45.381Z · LW(p) · GW(p)

(1) has come out of my mouth on a few occasions, albeit not in those exact words. It's normally after a few beers and I feel like playing the extreme skeptic a la David Hume, just to annoy everyone. I think the best way around it is to resort to the empirical argument and say that, in our experience, it is always right: Essentially the same thing Yudkowsky does with PA arithmetic here. Trying to find an argument against it which is truly "rationalist" in the continental sense has been a dead end in my experience.

(2) sort of depends on the pragmatics and what "tolerance" actually means to the persons involved in a given context. If you define tolerance as simply being tolerant of other viewpoints, then you can still be tolerant of the intolerant viewpoints. However, if you define it as freedom from bigotry, then that could indeed be called "intolerant" by the standards of the first definition.

I hope I'm making sense here.

comment by MarkusRamikin · 2012-04-14T07:27:03.248Z · LW(p) · GW(p)

I anticipate expressing free-floating beliefs would get me negative karma on Less Wrong.

More seriously:

I do not anticipate free-floating beliefs being useful in the same sense that maps of reality are useful. A map can turn out to be accurate or inaccurate, and insofar as it is accurate it can help me navigate and manipulate reality. My belief that "a proper belief should not be free-floating" prohibits free-floating beliefs from doing any of that.

Or one might as well see it as not a belief, but as a definition. There's BeliefType1 which is grounded in reality, and BeliefType2 which is not, and we happen to call BeliefType1 a "proper belief". (Of course we still do it for a reason, because we care about our sheep, or rather, we care about our beliefs being true and thus useful.)

Not sure which approach makes more sense.

Replies from: Klevador
comment by Klevador · 2012-04-14T09:08:01.397Z · LW(p) · GW(p)

The ability to anticipate experiences is one of our maximands because we have goals that are optimally achieved with this ability. To believe that beliefs should allow us to anticipate experiences is grounded in the desire to achieve our goals.

comment by Jan_Kanis · 2008-05-26T10:13:08.000Z · LW(p) · GW(p)

Mark: Believing that beliefs should be grounded anticipates that there is absolutely no change in anticipation if one were to change these free floating ideas. Of course this doesn't really answer your question because it just restates the definition of 'free floating beliefs' in different words. This belief actually follows from Eliezers belief in Occam's Razor, which predicts that when faced with unexplained events, if one creates a set of theories explaining these events, any predictions made by the simple theories are more likely to actually happen than predictions made by complex theories. I'm not quite sure if Occam's Razor is an axiom of science or just yet another belief. At least there is quite a bit of support for this belief, if you look into the history of science.

Another point: I think phlogiston is a bit of a poor example. Phlogiston actually corresponds very closely with something currently believed as real: phlogiston is the absence of oxygen. Seeing it this way, it's very well possible to build a theory of phlogiston explaining and predicting nearly all observations of fire, e.g. fire releases phlogiston, and if you burn something in a confined space the air gets saturated with phlogiston and cannot take in any more, so the fire goes out. A very important argument in the debate between phlogistonians and oxygants was when experiments were done to measure the weight of phlogiston and oxygen, and phlogiston turned out to have a negative weight.

comment by steve_roberts · 2008-07-05T15:27:19.000Z · LW(p) · GW(p)

Jan: Occam's razor is not so much a rule of science but an operating guideline for doing science. It could be reduced to "test simple theories first". In the past this has been very useful in keeping scientific effort productive, the 'belief' is that it will continue to be useful in this way.

comment by Hopefully_Anonymous · 2008-07-05T19:05:57.000Z · LW(p) · GW(p)

This led to a fun read of the "Occam's razor" Wikipedia entry. Hickam's dictum in particular was a great find (generalized beyond medicine, it could be that explanations for unexplained events can be as complex as they damn well please). As a practical corrective, it seems to me that probability theory suggests that the best accessible explanation to us for unexplained events is in the set of simpler theories, but is probably not one of the absolute simplest.

comment by James · 2008-07-29T23:22:34.000Z · LW(p) · GW(p)

Eliezer once wrote that "We can build up whole networks of beliefs that are connected only to each other - call these "floating" beliefs. It is a uniquely human flaw among animal species, a perversion of Homo sapiens's ability to build more general and flexible belief networks.

The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict - or better yet, prohibit."

I can't see how nearly all of the beliefs expressed in this post predict or prohibit any experience.

comment by DanielLC · 2009-12-28T23:54:23.743Z · LW(p) · GW(p)

"Alchemists believed that phlogiston caused fire"

How is that different than our current belief that oxygen causes fire?

Replies from: Jack, Sniffnoy, thomblake
comment by Jack · 2009-12-28T23:58:26.756Z · LW(p) · GW(p)

Uhhh... oxygen exists?

Replies from: DanielLC
comment by DanielLC · 2009-12-29T00:40:41.196Z · LW(p) · GW(p)

And so does the absence of oxygen, or, as they called it, phlogiston.

Replies from: Nick_Tarleton, Jack
comment by Nick_Tarleton · 2009-12-29T01:01:17.554Z · LW(p) · GW(p)

The absence of oxygen isn't much like a substance whose release is fire:

  • it doesn't have any consistent physical or chemical properties;
  • many things not containing oxygen fail to burn in air, and none burn in vacuum;
  • on the other hand, things do burn under oxidizers other than oxygen;
  • oxidized substances are very poorly modeled by mixtures of the original substance and oxygen;
  • things burned in open air can either gain or lose weight;

etc.

Replies from: DanielLC
comment by DanielLC · 2009-12-31T00:41:13.299Z · LW(p) · GW(p)

"it doesn't have any consistent physical or chemical properties;"

And oxides do? Or are you referring to pure phlogiston? It's not that big a deal that you can't get pure phlogiston. It's nigh impossible to purify fluorine. I think that under our current understanding of physics, it's totally impossible to isolate a single quark.

It moves because it's attracted to some things more than others. It's still attracted to everything more than itself.

"many things not containing oxygen fail to burn in air"

Hurts both theories equally. Presumably, it's strongly bonded to the phlogiston/it doesn't strongly bond to oxygen.

"...and none burn in vacuum;"

As I said, you can't get pure phlogiston.

"on the other hand, things do burn under oxidizers other than oxygen;"

Hurts both theories equally. The only way to solve it to my knowledge is that there are things that cause fire other than phlogiston/oxygen.

"things burned in open air can either gain or lose weight;"

Hurts both theories equally. Presumably, some of the matter escapes into the air sometimes.

Everything you listed either is only a very minor problem or is exactly as bad for the idea of oxygen.

comment by Jack · 2009-12-29T05:05:12.049Z · LW(p) · GW(p)

You're giving phlogiston qualities no one who held that theory gave it. If you want to call the absence of oxygen phlogiston, okay, but you aren't talking about the same phlogiston everyone else is talking about. Moreover, thinking about fire this way is clumsy and incompatible with the rest of our knowledge about physics and chemistry.

We already had a conception of matter when phlogiston was invented... and phlogiston was understood as a kind of matter. To say that phlogiston is really this other kind of thing, which isn't matter but a particular kind of absence of matter, is both unhelpful and a distortion of phlogiston theory. The whole point of the phlogiston theory was that they thought there was a kind of matter responsible for fire! But there isn't matter like that.

Now by defining phlogiston as the absence of oxygen you might be able to model combustion in a narrow set of circumstances-- but you couldn't fit that model with any of your other knowledge about physics and chemistry.

In short neither the original kind nor your kind of phlogiston exist.

Replies from: DanielLC
comment by DanielLC · 2009-12-31T00:32:22.997Z · LW(p) · GW(p)

It was at one point theorized to have negative mass. If it's matter, and you make everything else weigh more, it works out the same.

I fail to see why you think it can't fit it with other knowledge of physics and chemistry. You can think of electricity as positively charged particles moving around with virtually zero loss of predicting power.

Replies from: Jack
comment by Jack · 2009-12-31T01:19:21.281Z · LW(p) · GW(p)

For example, you can't use phlogiston in any model that also includes oxygen. Nor can you do any work at the molecular or sub-molecular level.

Similarly, thinking of electricity in terms of positively charged particles would be incompatible with atomic theory.

Replies from: DanielLC
comment by DanielLC · 2009-12-31T01:45:11.597Z · LW(p) · GW(p)

"you can't use phlogiston in any model that also includes oxygen"

You also can't use oxygen in any model that also includes phlogiston. Oxygen and phlogiston both describe the same phenomena. They've been looking at it from both ends, and found out they were the same thing. Oxygen was slightly more accurate than phlogiston, but they were both about the same accuracy.

"Nor can you do any work at the molecular or sub-molecular level."

It's also incompatible with much of quantum physics.

Every physical theory we've come up with, when examined close enough, is completely and utterly wrong. If we're going to have any useful definition of accuracy, you can't just throw it out of the window because of that.

It worked perfectly for almost everything they did at the time. For that matter, it works perfectly for almost everything we're doing now.

Replies from: Jack
comment by Jack · 2010-01-01T07:31:15.508Z · LW(p) · GW(p)

Sigh. Oxygen has properties that have nothing to do with fire. You need it to properly model cellular respiration, water electrolysis, air currents, buoyancy, the properties of compounds of which the element is a part etc. Give me a coherent periodic table of elements that includes phlogiston instead of oxygen and we can talk.

Every physical theory we've come up with, when examined close enough, is completely and utterly wrong. If we're going to have any useful definition of accuracy, you can't just throw it out of the window because of that.

Some theories are less wrong. So yes, you absolutely can throw a physical theory out the window if it is wrong. You might save the equations so you can make quick, approximate calculations (i.e. Newtonian mechanics) but that doesn't mean you include all the entities in the theory in your ontology.

It worked perfectly for almost everything they did at the time.

This is essentially a truism for all outdated scientific theories.

For that matter, it works perfectly for almost everything we're doing now.

Sure unless you want to make sense of combustion and anything that requires knowledge of modern chemistry or atomic theory at the same time!

comment by Sniffnoy · 2009-12-29T01:22:20.022Z · LW(p) · GW(p)

Because one of these allows you to make predictions, and the other doesn't. Saying "fire has a cause, and I'm going to call it 'phlogiston'!" doesn't tell you anything about fire, it's just a relabeling. Now, if you make enough observations, maybe you'll eventually conclude that "phlogiston is the absence of oxygen" (even though this isn't really correct), but at that point you can throw out the label "phlogiston". Contrariwise, if you say "oxidization causes fire", where "oxygen" is a previously known thing with known properties, then this allows you to actually make predictions about fire. E.g. the fact that a candle in a sufficiently small closed space will go out before it melts, but not necessarily if there's a plant in there too. One pays rent, the other doesn't.

Replies from: DanielLC, Nick_Tarleton
comment by DanielLC · 2009-12-29T02:17:02.782Z · LW(p) · GW(p)

You can make exactly the same predictions with phlogiston. If you burn coal next to iron, it will refine it. You could predict this with oxygen (oxygen is moving from the iron to the coal) or with phlogiston (phlogiston is moving from the coal to the iron).

It's like with electric charge. If you think of it as positive charge moving around, it has almost exactly the same predictive power as thinking of it as electrons moving around.

Replies from: Nick_Tarleton, Sniffnoy
comment by Nick_Tarleton · 2009-12-29T02:36:31.860Z · LW(p) · GW(p)

If you burn coal next to iron, it will refine it. You could predict this with oxygen (oxygen is moving from the iron to the coal) or with phlogiston (phlogiston is moving from the coal to the iron).

In this specific example and at that level of precision, yes; but only one of these models can be (easily) refined to make precise, correct quantitative predictions. Even at that qualitative level, though, they make different predictions about burning things in vacuum or in non-oxygen atmospheres.

comment by Sniffnoy · 2009-12-29T05:42:33.160Z · LW(p) · GW(p)

But you can only predict it if you already know that a gain of phlogiston refines iron; if you don't, you can only observe it afterward and write it down as a property of phlogiston.

If you don't know anything about oxygen or phlogiston beforehand, then, sure, they're pretty much equally predictive, i.e., not very much. But if "oxygen" is not in fact just an arbitrary label as "phlogiston" is, but in fact something you're already working with in other ways, then they're not symmetric.

Also as Nick Tarleton points out below there are other asymmetries, though those are not so much in the predictive power.

Replies from: DanielLC
comment by DanielLC · 2009-12-31T00:42:43.224Z · LW(p) · GW(p)

"But you can only predict it if you already know that a gain of phlogiston refines iron"

Same goes for oxygen.

Replies from: DanielLC, Sniffnoy
comment by DanielLC · 2009-12-31T01:47:12.032Z · LW(p) · GW(p)

Okay, I admit that that's not really a prediction, but until then, they couldn't even explain it.

If you're going to do it like this, what's one thing oxygen predicted?

By the way, I'm responding to the fact that I lost two karma points on that, not any actual post.

comment by Sniffnoy · 2009-12-31T01:59:17.297Z · LW(p) · GW(p)

That's what I just said.

Replies from: DanielLC
comment by DanielLC · 2009-12-31T02:25:24.698Z · LW(p) · GW(p)

Sorry. Too used to defending my position to realize you're not attacking it.

comment by Nick_Tarleton · 2009-12-29T02:43:28.260Z · LW(p) · GW(p)

Because one of these allows you to make predictions, and the other doesn't. Saying "fire has a cause, and I'm going to call it 'phlogiston'!" doesn't tell you anything about fire, it's just a relabeling.

The hypothesis went a little deeper than that. "Flammable things contain a substance, and its release is fire" lets you make many predictions — e.g., that things will burn in vacuum, or that things burned in open air will always lose mass (this is how it was falsified).

Replies from: Sniffnoy, DanielLC
comment by Sniffnoy · 2009-12-29T05:36:58.789Z · LW(p) · GW(p)

Ah, true.

comment by DanielLC · 2009-12-31T01:51:35.615Z · LW(p) · GW(p)

Always gain mass, once they realized it was negative mass.

The fact that things don't always gain mass doesn't falsify phlogiston any more than it falsifies oxygen, for the same reason.

Also, people didn't find the change in weight particularly useful, so this wasn't that big a problem.

Again, the vacuum thing isn't much of a problem either. It's not necessarily possible to purify phlogiston.

Replies from: bigjeff5
comment by bigjeff5 · 2011-01-27T17:44:27.724Z · LW(p) · GW(p)

I'm not sure I follow. Oxidation doesn't predict gaining or losing mass (not on any scale like phlogiston would, that is); it predicts an interaction of materials forming a new composite substance. Oxidation doesn't prevent material from being lost or changed in other ways, which could cause an overall greater or lesser mass than the original object. What it does predict, however, is that the total mass of all molecules in the equation, once accounted for, will be the same. This is consistent with observation.

If phlogiston has a negative mass, then anything that can burn must gain mass. I don't see any way around it. The theory states that it is a release of negative material, and there is no way to account for it once released.

One thing you would expect to find with phlogiston is an object that was primarily made up of phlogiston, giving it a negative mass. Explosives, for example, clearly have so much phlogiston that it literally rips the object (and anything nearby) apart when released. You would therefore expect all explosives to be relatively light in spite of the original weight of their components.

You could test this with black powder: saltpeter, charcoal, and sulfur each release a certain amount of phlogiston when burned. Combine them and significantly more phlogiston is clearly released. You would therefore expect more phlogiston to have flowed into the material when the three components were combined to make gunpowder. However, the weights actually stay essentially the same. The observation doesn't bear out the prediction, so the prediction is clearly wrong. If the prediction is wrong, the theory that made it is either wrong outright or flawed in some way. Since the only prediction phlogiston can make is wrong, the theory is at the very least flawed in some crippling way, and needs to be completely re-worked.

Its inability to predict expectations is what killed it. You can predict what will happen when you add oxygen to a reaction. You cannot predict what will add phlogiston to a material, thereby allowing it to burn.

A huge example is the sun. It is a giant ball of fire - therefore, a giant ball of phlogiston, or at least a very significant portion of its mass must be made up of phlogiston in order for it to burn that intensely for that long. So it should have a low mass, possibly even a negative mass. Yet this giant ball of mostly phlogiston is actually the heaviest thing in the solar system by a massive margin.

Phlogiston is incompatible with many, many theories that have been independently verified. Also, oxygen causing fire is not the theory. The theory is molecules and their chemical interactions, of which oxygen is just one part. The prediction that oxygen drives most of these exothermic reactions is consistent with all other chemical reactions, and follows from rules that hold whether a reaction is exothermic or endothermic, among a great many other things. It also predicts which objects will burn and which will not. This same chemical theory leads to atomic theory, which predicts fusion, which has absolutely nothing at all to do with oxygen, yet describes the behavior of the sun very accurately before you even start to measure the sun's output.

The way to test a theory is to predict first, then observe. This is basic science. Phlogiston cannot pass this test, chemical theory can.

comment by thomblake · 2010-05-03T14:18:54.428Z · LW(p) · GW(p)

Just because I haven't seen the link in this particular discussion, some more defense of phlogiston link

comment by simplicio · 2010-03-06T06:39:28.307Z · LW(p) · GW(p)

I loved this post, but I have to be a worthless pedant.

If you drop a ball off a 120-m tall building, you expect impact in t=sqrt(2H/g)=~5 s. But that would be when the second-hand is on the 1 numeral.
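A quick sanity check in Python, using the 9.8 m/s² and 120 m figures from the post and ignoring air resistance:

```python
import math

g = 9.8    # m/s^2, gravitational acceleration from the post
h = 120.0  # m, building height from the post

t = math.sqrt(2 * h / g)  # fall time, ignoring air resistance
print(round(t, 2))        # ~4.95 s, i.e. the second hand lands on the 1 numeral
```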

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-07T23:32:03.273Z · LW(p) · GW(p)

Heh. I got this right originally, then reread it just recently while working on the book, saw what I thought was an error (1 numeral? just one second? why?) and "fixed" it.

comment by Dpar · 2010-05-11T06:57:50.844Z · LW(p) · GW(p)

What about knowledge for the sake of knowledge? For instance I don't anticipate that my belief that The Crusades took place will ever directly affect my sensory experiences in any way. Does that then mean that this belief is completely worthless and on the same level as the belief in ghosts, psychics, phlogiston, etc.?

Wouldn't taking your chain of reasoning to its logical conclusion require one to "evict" all beliefs in everything that one has not, and does not anticipate to, personally see, hear, smell, taste, or touch? After all, how much personal sensory experience do you have that confirms the existence of atoms, for example?

DP

Replies from: RobinZ, RobinZ, MarsColony_in10years
comment by RobinZ · 2010-05-11T15:30:12.409Z · LW(p) · GW(p)

I think Eliezer's point is less strong than you think: for one thing, reading a history book is a sensory experience, and fewer history books would proclaim that The Crusades occurred in worlds where they had not than in worlds where they had.

Replies from: Dpar
comment by Dpar · 2010-06-07T11:27:21.040Z · LW(p) · GW(p)

I was going to write a more detailed reply, but then realized that any continued discussion will require us to debate what exactly the OP meant to say in his post, which is pointless since neither of us can read his mind. So let's just call it a day.

DP

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-07T12:25:34.943Z · LW(p) · GW(p)

I was going to write a more detailed reply, but then realized that any continued discussion will require us to debate what exactly the OP meant to say in his post, which is pointless since neither of us can read his mind. So let's just call it a day.

This is something of a fallacy of gray. Of course we can read his mind, through the power of human telepathy, by reading more on the same topic. We can't read minds perfectly, but perfect knowledge is never available anyway, and unless you can point out the specific uncertainty you have that decides the discussion, there is no sense in requiring more detail. You might want to stop the discussion for other reasons, but the reason you stated rings false.

Replies from: Dpar, anon895
comment by Dpar · 2010-08-09T17:33:17.296Z · LW(p) · GW(p)

First of all, calling speech "human telepathy" strikes me as a little pretentious, as well as inaccurate, since the word "telepathy" is generally accepted to have supernatural connotations. Speech is speech; no need to complicate the concept.

Secondly, the article you linked seemed a little rambling and without a clear point. All I was able to take away from it is that the meaning of words is relative. If that's the case then I respond with "well, duh!"; if I missed a deeper point, please enlighten me.

Finally, when you take it upon yourself to question another person's purely subjective reasoning, you're treading very close to completely indefensible territory. If I say that I wanted to stop the discussion because I believe that the author's intended meaning is ambiguous, it's a tall order to question that that is indeed what I believe. Unless you can come up with clear evidence of how my behavior contradicts my stated subjective opinion, you more or less have to take my word that that really is what I think.

DP

Replies from: thomblake
comment by thomblake · 2010-08-09T17:42:45.882Z · LW(p) · GW(p)

You misunderstand. Vladimir Nesov was not claiming that you don't believe that the author's intended meaning is ambiguous. Rather, he was claiming that your belief that "the author's intended meaning is ambiguous" is false, or at least not enough to constitute a good reason for stopping the discussion.

The point of calling speech 'human telepathy' in this instance is that you claimed there's no way to know what the author was thinking since we "can't read his mind". But there is a way to know what the author was thinking, to some extent; so, by reading your own reasoning backwards, we can therefore indeed read minds.

Replies from: Dpar
comment by Dpar · 2010-08-09T18:06:45.814Z · LW(p) · GW(p)

I stated that taking the OP's reasoning to its logical conclusion requires one to "evict" all beliefs in everything that one has not, and does not anticipate to, personally see, hear, smell, taste, or touch. RobinZ responded by saying that the OP's point is less strong than I think. Since two (presumably) reasonable people can disagree on what the OP meant, his point, as it is written, is by definition ambiguous.

Where do we go from here other than debate what he really meant? What is the point of such debate since neither of us has any special insight into his thought process that would allow us to settle this difference of subjective interpretations? I believe that to be sufficient reason for stopping the discussion. I'm not sure what specifically Vladimir takes issue with here.

As to your point of human telepathy -- comparing reading what someone wrote to reading his mind is a very big stretch. I can see how you could make that argument if you get really technical with word definitions, but I think that it is generally accepted that reading what a person wrote on a computer screen and reading his mind are two very different things.

DP

Replies from: thomblake
comment by thomblake · 2010-08-09T18:20:15.792Z · LW(p) · GW(p)

I stated that taking the OP's reasoning to its logical conclusion requires one to "evict" all beliefs in everything that one has not, and does not anticipate to, personally see, hear, smell, taste, or touch.

Right, but RobinZ was not arguing against this claim (depending on what you mean by 'personally' here) but rather pointing out that your reasoning was flawed.

For instance I don't anticipate that my belief that The Crusades took place will ever directly affect my sensory experiences in any way.

RobinZ pointed out that your belief that the crusades took place affects your sensory experience; if you believe they happened, then you should anticipate having the sensory experience of seeing them in the appropriate place in a history book, if you were to check.

If you thought that your belief that the crusades happened did not imply any such anticipated experiences, then yes, it would be worthless and on the same level as belief in an invisible dragon in your garage.

Replies from: Dpar
comment by Dpar · 2010-08-09T18:32:19.463Z · LW(p) · GW(p)

So reading about something in a book is a sensory experience now? I beg to differ. A sensory experience of The Crusades would be witnessing them first hand. The sensory experience of reading about them is perceiving patterns of ink on a piece of paper.

DP

Edit: Also, I think that RobinZ didn't state that as something that she believed; she stated it as something that she believed the OP meant. It's that subjective interpretation of his position that I didn't want to debate. If you wish to adopt that position as your own and debate its substance, we certainly can.

Replies from: Oligopsony, Vladimir_Nesov
comment by Oligopsony · 2010-08-09T18:40:37.526Z · LW(p) · GW(p)

What's important isn't the number of degrees of removal, but that the belief's being true corresponds to different expected sensory experiences of any kind at all than its being false. The sensory experience of perceiving patterns of ink on a piece of paper counts.

Now you could say: "reading about the Crusades in history books is strong evidence that 'the Crusades happened' is the current academic consensus," and you could hypothesize that the academic consensus was wrong. This further hypothesis would lead to further expected sensory data - for instance, examining the documents cited by historians and finding that they must have been forgeries, or whatever.

Replies from: Dpar
comment by Dpar · 2010-08-09T19:01:01.849Z · LW(p) · GW(p)

If you adopt that position, then the belief in ghosts for instance will result in the sensory experience of reading or hearing about them, no? Can you then point to ANY belief that doesn't result in a sensory experience, other than something that you make up yourself out of thin air?

If the concept of sensory experience is to have any meaning at all, you can't just extrapolate it as you see fit. If you can't see, hear, smell, taste, or touch an object directly, you have not had sensory experience with that object. That does not mean that that object does not exist though.

DP

Replies from: Unknowns
comment by Unknowns · 2010-08-09T19:11:15.796Z · LW(p) · GW(p)

Yes, ghost stories are evidence for the existence of ghosts. Just not very strong evidence.

There can be indirect sensory evidence as well as direct.

comment by Vladimir_Nesov · 2010-08-09T18:45:13.478Z · LW(p) · GW(p)

So reading about something in a book is a sensory experience now? I beg to differ.

You are disputing definitions. Reading something in a book is the sort of thing you'd change your expectations about depending on your model of the world, as with any other observation. If your beliefs influence your expectations about observations, they are part of your model of reality. On the other hand, if they don't, they are sometimes still part of your model of reality, but that's a more subtle point.

And returning to your earlier concerns, consider me as having special insight into the intended meaning, and thereby providing a counterexample to the impossibility of continuing the discussion. Reading something in a history book definitely counts as anticipated experience.

Replies from: Dpar
comment by Dpar · 2010-08-09T19:10:03.051Z · LW(p) · GW(p)

Very interesting read on disputing definitions. While the solution proposed there is very clever and elegant, this particular discussion is complicated by the fact that we're discussing the statements of a person who is not currently participating. Coming up with alternate words to describe our ideas of what "sensory experience" means does nothing to help us understand what he meant by it. Incidentally this is why I didn't want to get drawn into this debate to begin with.

Also -- "consider me having a special insight into the intended meaning" -- on what grounds shall I consider your having such special insight?

Replies from: Cyan, Vladimir_Nesov
comment by Cyan · 2010-08-09T19:18:59.961Z · LW(p) · GW(p)

Also -- "consider me having a special insight into the intended meaning" -- on what grounds shall I consider your having such special insight?

At the bottom of the sidebar, you will find a list of top contributors; Vladimir Nesov is on the list.

comment by Vladimir_Nesov · 2010-08-09T19:19:43.773Z · LW(p) · GW(p)

on what grounds shall I consider your having such special insight?

I've closely followed Yudkowsky's work for a while, and have a pretty good model of what he believes on topics he publicly discusses.

Replies from: Dpar
comment by Dpar · 2010-08-09T19:27:49.575Z · LW(p) · GW(p)

Fair enough. So if, on your authority, the OP believes that reading about something is anticipated experience, does that not then cover every rumor, fairy tale, and flat-out piece of nonsense that has ever been written? What then would be an example of a belief that CANNOT be connected to an "anticipated experience"?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-09T19:32:56.365Z · LW(p) · GW(p)

See this comment on the first part of your question and this page on the second (but, again, there are valid beliefs that don't translate into anticipated experience).

Replies from: Dpar
comment by Dpar · 2010-08-09T19:43:06.037Z · LW(p) · GW(p)

I agree wholeheartedly that there are valid beliefs that don't translate into anticipated experience. As a matter of fact what's written there was pretty much the exact point that I was trying to make with my very first response in this topic.

Does that not, however, contradict the OP's assertion that "Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it."? That's what I took issue with to begin with.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-09T20:05:55.861Z · LW(p) · GW(p)

It does contradict that assertion, but not to a first approximation, and not in the sense you took issue with. You have to be very careful if a belief doesn't translate into anticipated experience. Beliefs about historical facts that don't translate into anticipated experience (or don't follow from past experience, that is, observations) are usually invalid.

Replies from: Dpar
comment by Dpar · 2010-08-09T20:17:30.773Z · LW(p) · GW(p)

You seem to place a good deal of value on the concept of anticipated experience, but you give it a definition that's so broad that the overwhelming majority of beliefs will meet the criteria. If the belief in ghosts for instance can lead to the anticipated experience of reading about them in a book, what validity does the notion have as a means of evaluating beliefs?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-09T20:25:58.788Z · LW(p) · GW(p)

When a belief (hypothesis) is about reality, it responds to new evidence, or arguments about previously known evidence. It's reasonable to expect that as a result, some beliefs will turn out incorrect, and some certainly correct. Either way it's not a problem: you do learn things about the world as a result, whatever the conclusion. You learn that there are no ghosts, but there are rainbows.

The problem is the beliefs that purport to be speaking about reality, but really don't, and so you become deceived by them. Not being connected to reality through anticipated experience, they take your attention where there is no use for them, influence your decisions for no good reason, and protect themselves by ignoring any knowledge about the world you obtain.

It is a great heuristic to treat any beliefs that don't translate into anticipated experience with utmost suspicion, or even to run away from them in horror.

Replies from: Dpar
comment by Dpar · 2010-08-09T20:47:13.430Z · LW(p) · GW(p)

How would you learn that there are no ghosts? You form the belief "there are ghosts", which leads to the anticipated experience (by your definition of such) that "I will read about ghosts in a book", and then you go and read about ghosts in a book. Criterion met, belief validated. Same goes for UFOs, psychics, astrology, etc. What value does the concept of anticipated experience have if it fails to filter out even the most common fallacious beliefs?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-09T21:06:32.948Z · LW(p) · GW(p)

That there are books about ghosts is evidence for ghosts existing (but also for lots of other things). There are also arguments against this hypothesis, both a priori and observational. A good model/theory also explains why you'd read about ghosts even though there is no such thing.
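To put rough numbers on that (the figures are invented purely for illustration): because ghost stories get written whether or not ghosts exist, the likelihood ratio is close to 1 and the update is tiny.

```python
# Toy Bayesian update with invented numbers, just to show how weak the evidence is.
prior_ghosts = 0.01                 # P(ghosts exist), made-up prior
p_books_if_ghosts = 0.99            # P(ghost books exist | ghosts exist)
p_books_if_no_ghosts = 0.98         # stories get written either way

posterior = (p_books_if_ghosts * prior_ghosts) / (
    p_books_if_ghosts * prior_ghosts
    + p_books_if_no_ghosts * (1 - prior_ghosts)
)
print(round(posterior, 4))  # ~0.0101: technically evidence, but it barely moves the belief
```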

Replies from: Dpar
comment by Dpar · 2010-08-09T21:25:01.778Z · LW(p) · GW(p)

You're not addressing my core point though. If the criterion of anticipated experience as you define it is as likely to be satisfied by fallacious beliefs as it is by valid ones, what purpose does it serve?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-09T21:28:00.051Z · LW(p) · GW(p)

I addressed that question in this comment; if something is unclear, ask away. The difference is between a belief that is incorrect, and a belief that is not even wrong.

Replies from: Dpar
comment by Dpar · 2010-08-09T21:42:10.759Z · LW(p) · GW(p)

Alright, I think I see what you're getting at, but I still can't help but think that your definition of sensory experience is too broad to be really useful. I mean the only type of belief that it seems to filter out is absolute nonsense like "I have a third leg that I can never see or feel", did I get that about right?

Replies from: Vladimir_Nesov, jimrandomh
comment by Vladimir_Nesov · 2010-08-09T21:49:11.515Z · LW(p) · GW(p)

I mean the only type of belief that it seems to filter out is absolute nonsense like "I have a third leg that I can never see or feel", did I get that about right?

Yes. It happens all the time. It's one way nonsense protects itself, to persist for a long time in the minds of individual people and cultures.

(More generally, see anti-epistemology.)

Replies from: Dpar
comment by Dpar · 2010-08-09T21:55:46.373Z · LW(p) · GW(p)

So essentially what you and Eliezer are referring to as "anticipated experience" is just basic falsifiability then?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-09T22:03:10.680Z · LW(p) · GW(p)

With a Bayesian twist: things don't actually get falsified or become wrong with absolute certainty; rather, observations can adjust your level of belief.
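Sketched in code, with toy numbers of my own choosing: each observation multiplies your odds by a likelihood ratio instead of flipping a switch from "unfalsified" to "falsified".

```python
def update_odds(odds, likelihood_ratio):
    """One observation: multiply the odds by P(observation | H) / P(observation | not-H)."""
    return odds * likelihood_ratio

odds = 1.0                    # start at even odds (toy prior)
for lr in [0.5, 0.5, 0.2]:    # three observations, each disfavoring the hypothesis
    odds = update_odds(odds, lr)

print(odds)                         # 0.05: strongly disfavored, but never driven to exactly 0
print(round(odds / (1 + odds), 3))  # ~0.048 as a probability
```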

Replies from: SilasBarta, Dpar
comment by SilasBarta · 2010-08-09T22:31:02.644Z · LW(p) · GW(p)

Slightly OT, but this relates to something that really bugs me. People often bring up the importance of statistical analysis and the possibility of flukes/lab error in order to prove that "Popper was totally wrong, we get to completely ignore him and this outdated, long-refuted notion of falsifiability."

But the way I see it, this doesn't refute Popper, or the notion of falsifiability: it just means we've generalized the notion to probabilistic cases, instead of just the binary categorization of "unfalsified" vs. "falsified". This seems like an extension of Popper/falsifiability rather than a refutation of it. Go fig.

Replies from: Vladimir_Nesov, steven0461
comment by Vladimir_Nesov · 2010-08-09T23:11:25.716Z · LW(p) · GW(p)

I reached a much clearer understanding once I peeled away the structure of probability measure and got down to mathematically crisp events on sample spaces (classes of possible worlds). From this perspective, there are falsifiable concepts, but they usually don't constitute useful statements, so we work with the ones that can't be completely falsified, even though parts of them (some of the possible worlds included in them) do get falsified all the time, when you observe something.

comment by steven0461 · 2010-08-09T23:33:26.261Z · LW(p) · GW(p)

Isn't that like saying we've generalized the theory that "all is fire" to cases where the universe is only part fire? If falsification is absolute then Popper's insight that "all is falsification" is just plain wrong; if falsification is probabilistic then surely the relevant ideas existed before Popper as probability theory. It's not like Popper invented the notion that if a hypothesis is falsified we shouldn't believe it.

comment by Dpar · 2010-08-09T23:03:27.154Z · LW(p) · GW(p)

Ok, I understand what you mean now. Now that you've clarified what Eliezer meant by anticipated experience my original objection to it is no longer applicable. Thank you for an interesting and thought provoking discussion.

comment by jimrandomh · 2010-08-09T22:09:28.855Z · LW(p) · GW(p)

Falsifiability can be quantified, in bits. If the only test you have for whether something's true or not is something lame like whether it appears in stories or not, then you have a tiny amount of falsifiability. If there is a large supply of experiments you can do, each of which provides good evidence, then it has lots of falsifiability.

(This really deserves to be formalized, in terms of something along the lines of expected bits of net evidence, but I'm not sure how to do so, exactly. Expected bits of evidence does not work, because of scenarios where there is a small chance of lots of evidence being available, but a large chance of no evidence being available.)
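A toy illustration of that failure mode (numbers invented): two tests with the same expected bits of evidence, one of which almost never tells you anything.

```python
# (probability, bits of evidence) pairs for two hypothetical tests -- numbers are made up
test_a = [(1.0, 1.0)]                # always yields exactly 1 bit
test_b = [(0.1, 10.0), (0.9, 0.0)]   # rarely yields 10 bits, usually yields nothing

def expected_bits(test):
    return sum(p * bits for p, bits in test)

print(expected_bits(test_a), expected_bits(test_b))  # 1.0 1.0 -- the expectation can't tell them apart
```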

Replies from: SilasBarta, Dpar
comment by SilasBarta · 2010-08-09T22:20:53.555Z · LW(p) · GW(p)

Just a note about terminology: "expected bits of evidence" also goes by the name of entropy, and is a good thing to maximize in designing an experiment. (My previous comment on the issue.)

And if I understand you correctly, you're saying that the problem with entropy as a measure of falsifiability is that someone can come up with a crank theory that gives the same predictions in every single case, except one that is near impossible to observe, but which, if it happened, would completely vindicate them?

If so, the problem with such theories is that they have to provide a lot of bits to specify that improbable event, which would be penalized under the MML formalism because it lengthens the hypothesis significantly. That may be what you want to work into a measure of falsifiability.

But then, at that point, I'm not sure if you're measuring falsifiability per se, or just general "epistemic goodness". It's okay to have those characteristics you want as a separate desideratum from falsifiability.

comment by Dpar · 2010-08-09T23:09:43.428Z · LW(p) · GW(p)

Isn't it an essential criterion of falsifiability to be able to design an experiment that can DEFINITIVELY prove the theory false?

Replies from: RobinZ, JoshuaZ, satt
comment by RobinZ · 2010-08-10T00:49:25.143Z · LW(p) · GW(p)

That is the criterion which the Bayesian idea of evidence lets you relax. Instead of saying that "you need to be able to define experiments where at least one result would be completely impossible by the theory", a Bayesian will tell you that "you need to be able to define experiments where the probability of one result under the theory is significantly different from the probability of another result".

Look at, say, the theory that a coin is weighted towards heads. If you want to be pedantic, no result can "definitely prove" that it is not (unusual events can happen), but an even split of heads and tails (or a weighting towards tails) is much more unusual given that theory than a weighting towards heads would be.
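To put toy numbers on the coin case (taking "weighted towards heads" to mean, say, a 70% heads probability, which is my own assumption): an even split in 20 flips isn't impossible under that theory, just several times less probable than under a fair coin.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of k heads in n flips of a coin with heads-probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 20, 10                        # observed result: an even split
p_weighted = binom_pmf(k, n, 0.7)    # under the "weighted towards heads" theory (70% is my toy number)
p_fair = binom_pmf(k, n, 0.5)        # under a fair coin

print(round(p_weighted, 3), round(p_fair, 3), round(p_fair / p_weighted, 1))
# ~0.031 vs ~0.176: the even split is ~5.7x more likely under the fair coin,
# so it counts as evidence against the weighting without "definitively" disproving it.
```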

Edit PS: I am totally stealing the meme that "Bayes is a generalization of Popper" from SilasBarta.

Replies from: SilasBarta, thomblake
comment by SilasBarta · 2010-08-10T14:05:17.228Z · LW(p) · GW(p)

Steal the meme, and spread it as far and as wide as you possibly can! The sooner it beats out "Popper is so 70 years ago", the better. (Kind of ironic that Bayes long predated Popper, though the formalization of [what we now call] Bayesian inference did not.)

Example of my academically-respected arch-nemesis arguing the exact anti-falsificationist view I was criticizing.

comment by thomblake · 2010-08-10T14:18:50.485Z · LW(p) · GW(p)

Edit PS: I am totally stealing the meme that "Bayes is a generalization of Popper" from SilasBarta.

I'm pretty sure that was handily discussed in An Intuitive Explanation of Bayes's Theorem and A Technical Explanation of Technical Explanation.

Replies from: RobinZ, SilasBarta
comment by RobinZ · 2010-08-10T15:40:25.616Z · LW(p) · GW(p)

Ehhcks-cellent!

comment by SilasBarta · 2010-08-11T15:31:25.634Z · LW(p) · GW(p)

Fair point, and it was EY's essay that showed me the connection. But keep in mind, the point of the essay is, "Bayesian inference is right, look how Popper is a crippled version of it."

My point in saying "my" meme is different: "Popper and falsificationism are on the right track -- don't shy away from the concepts entirely just because they're not sufficiently general." It's a warning against taking the failures of Popper to mean that any version of falsificationism is severely flawed.

comment by JoshuaZ · 2010-08-10T14:15:05.964Z · LW(p) · GW(p)

As Robin's explained below, Bayesianism doesn't do that. You should also see the works of Lakatos and Quine, where they discuss the idea that falsification is flawed because all claims have auxiliary hypotheses and one can't falsify any hypothesis in isolation, even if you are trying to construct a neo-Popperian framework.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-10T14:42:18.981Z · LW(p) · GW(p)

Yes, but that still doesn't show falsificationism to be wrong, as opposed to "narrow" or "insufficiently generalized". Lakatos and Quine have also failed to show how it's a problem that you can't rigidly falsify a hypothesis in isolation: Just as you can generalize Popper's binary "falsified vs. unfalsified" to probabilistic cases, you can construct a Bayes net that shows how your various beliefs (including the auxiliary hypotheses) imply particular observations.

The relative likelihoods they place on the observations allow you to know the relative amount by which those various beliefs are attenuated or amplified by any particular observation. This method gives you the functional equivalent of testing hypotheses in isolation, since some of them will be attenuated the most.
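A minimal sketch of what I mean, with structure and numbers invented for illustration (hand-rolled enumeration, not a real Bayes net library): update the joint hypotheses (theory × auxiliary hypothesis) on one failed prediction and compare how much each marginal belief is attenuated.

```python
from itertools import product

# H = the main theory, A = an auxiliary hypothesis (e.g. "the instrument works").
# All numbers are toy values chosen for illustration.
prior_H = {True: 0.8, False: 0.2}
prior_A = {True: 0.9, False: 0.1}

def p_failed_prediction(h, a):
    """Likelihood of observing a failed prediction given (H, A)."""
    return 0.05 if (h and a) else 0.5   # the failure is only surprising if both hold

joint = {(h, a): prior_H[h] * prior_A[a] * p_failed_prediction(h, a)
         for h, a in product([True, False], repeat=2)}
total = sum(joint.values())
posterior = {k: v / total for k, v in joint.items()}

post_H = posterior[(True, True)] + posterior[(True, False)]
post_A = posterior[(True, True)] + posterior[(False, True)]
print(round(post_H, 3), round(post_A, 3))
# H: 0.8 -> ~0.43, A: 0.9 -> ~0.72 -- the observation attenuates the theory much more
# than the auxiliary hypothesis, which is the "testing in isolation" effect in practice.
```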

Replies from: JoshuaZ
comment by JoshuaZ · 2010-08-10T16:03:46.605Z · LW(p) · GW(p)

Right, I was speaking in a non-Bayesian context.

comment by satt · 2010-08-10T21:38:07.655Z · LW(p) · GW(p)

If I remember rightly, that's where poor old Popper came unstuck: having thought of the falsifiability criterion, he couldn't work out how to rigorously make it flexible. And as no experiment's exactly 100% uppercase-D Definitive, that led to some philosophers piling on the idea of falsifiability, as JoshuaZ said.

But more recent work in philosophy of science suggests a more sophisticated way to talk about how falsifiability can work in the real world.

The key idea is "severe testing", where a "severe test" is a test likely to expose a specific error in a model, if such an error is present. Those models that pass more, and more severe, tests can be regarded as more useful than those that don't. This approach also disarms the "auxiliary hypotheses" objection JoshuaZ paraphrased; one can just submit those hypotheses to severe testing too. (I wouldn't be surprised to find out that's roughly equivalent to the Bayes net approach SilasBarta mentioned.)

comment by anon895 · 2010-08-09T18:13:14.460Z · LW(p) · GW(p)

I was expecting the link to be Mundane Magic.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-09T18:32:13.534Z · LW(p) · GW(p)

The point is not that the ability is "magical", but that it's real, that we do have an ability to read minds, in exactly the same sense as Dpar appealed to the impossibility of.

comment by RobinZ · 2010-05-11T15:34:21.632Z · LW(p) · GW(p)

Belatedly: Welcome to Less Wrong! Please feel free to introduce yourself.

Replies from: Dpar
comment by Dpar · 2010-06-07T11:27:41.850Z · LW(p) · GW(p)

A belated thanks! :)

DP

comment by MarsColony_in10years · 2015-02-19T19:06:44.947Z · LW(p) · GW(p)

The LessWrong FAQ says that there is value in replying to old content, so I'm commenting in hopes that it is useful to someone in the future, and just for the sake of organizing my thoughts.

I would have phrased this differently than Yudkowsky, but I think I understand the concept he was getting at when he gave this example:

Or suppose your postmodern English professor teaches you that the famous writer Wulky Wilkinsen is actually a "post-utopian". What does this mean you should expect from his books? Nothing. The belief, if you can call it that, doesn't connect to sensory experience at all.

His point is that this is just semantics. It makes no difference to the world whether we label something "post-utopian" or "aegffsdfa eereraksrfa" or anything else. The words you read in the book will be the same. The reason I don’t like this example is that, if I actually knew some literary jargon, I might get some real verifiable information that does actually mean I should expect a specific kind of sensory experience. It’s just that the classification scheme is arbitrary, and so is my belief that one classification scheme is "correct".

The label is just a label, so arguing about classification schemes is just semantics. Using this definition, your belief that the crusades took place would affect what sorts of things you would expect to read, and what sorts of archeological finds you would expect to find if you went looking for them. However, if you believe that the crusades marked the beginning of the high middle ages, that would just be semantics. We could say that the middle ages started at the sacking of Rome, or we could make a label like "dark ages" to describe the intermediary period. What we call it and how we classify it makes no difference in the actual reality of history. It's just semantics.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-02-23T12:38:23.440Z · LW(p) · GW(p)

Semantic labels are part of the structure of an explicit model. For instance, the Chinese use the same word for both "rat" and "mouse". A model with a ratmouse vertex will behave differently from a model with separate rat and mouse vertices. The structure and function of a model affect what it predicts, what its users can notice, and how they behave. Agents do not passively receive a stream of predetermined experiences; they interact with the world, and the experiences they can expect depend on the structure and function of their models...

...and more besides. Models contain evaluative weightings as well as neutral structure. For instance, in the English-speaking world, mice have the connotation of being cute, rats of being vermin. The professor might not be failing to specify an empirically confirmable concept when describing the writer as a post-utopian: she might rather be succeeding in tweaking her students' evaluative model. She might be aiming at making a social or political point.

There is a long history of the political influence of language, ranging from the Greek rhetoricians to Orwell's essays. A STEM type might consider it pointless to focus on such issues rather than on what can be proved objectively. A humanities type might also consider it pointless to focus on objective, empirical claims with no social or political upshot. Neither complaint is really about meaningfulness or semantics, in the sense of the meaningfulness of the words; rather, they are both about the subjectively evaluated pointfulness of an activity.

By a convoluted meta-level irony, the way the term "semantics" is often used is itself a way of funneling the reader towards a conclusion. We have seen that there are circumstances where a semantic change would make a difference: where it makes a structural/functional change, and where it makes an evaluative/connotational difference. Since these circumstances don't always apply, there are circumstances where a semantic change really is trivial, really "just semantics". For instance, if the word cat were replaced by the word zeb, in a connotationally neutral way, that would be semantics of a pointless kind that doesn't change anything. But that situation is atypical. Although the standard rhetoric about what is "just semantics" suggests the opposite, most rewordings make a difference. Indeed, it is likely that people object to rewordings because they do make a difference, not because they don't.

Consider:

A: So you're pro-abortion?
B: I'm pro-choice.
A: That's just semantics.

A has spotted that B's rewording has strengthened his argument, by introducing a phrasing with a positive connotation, and so she objects to it... using the common apprehension that rewordings are just semantics, and don't change anything!

Replies from: MarsColony_in10years
comment by MarsColony_in10years · 2015-02-24T22:11:09.351Z · LW(p) · GW(p)

Thanks for breaching that topic. I considered pointing out that my "aegffsdfa eereraksrfa" example might be more difficult to pronounce than "post-utopian", and so actually would have an impact on the world in general. On reflection, I decided to make the assertion that it "makes no difference", since that would spare a lot of confusion. It's a good first order approximation. When introducing a topic, it's important to take the Bohr model view of the world before trying to explain quarks and leptons.

The entanglement of semantic language with our interpretation of reality clouds things. Scientific language is precise, but often dry and hard to understand. However, by de-coupling the two worlds, we study the underlying reality without those (or perhaps with only minimal) distorting effects from our language. That's what we are doing when we talk about Map and Territory here on LW. We get a better map from this, but if we also compare the collective maps of societies to the best maps of reality, we can look for systematic differences. Some of these are cognitive biases, which we tend to concentrate on here on LW. However, there are also many other interesting or useful things that we can learn about ourselves as mapmakers. For example, the Bouba/kiki effect might help us choose more intuitive vocabulary as we build a more and more extensive set of jargon.

Just studying the way languages evolve can be informative, whether it's rigorously using Computational Linguistics or informally by an author or artist. The mere existence of a formal scientific understanding of reality allows a poet or philosopher, if they are familiar only with the answers but not the underlying explanations, to look at some facet of human nature and ask "isn't it odd when people...". A great deal of social commentary is built from that one question.

comment by garethrees · 2010-05-12T16:24:53.103Z · LW(p) · GW(p)

You write, “suppose your postmodern English professor teaches you that the famous writer Wulky Wilkinsen is actually a ‘post-utopian’. What does this mean you should expect from his books? Nothing.”

I’m sympathetic to your general argument in this article, but this particular jibe is overstating your case.

There may be nothing particularly profound in the idea of ‘post-utopianism’, but it’s not meaningless. Let me see if I can persuade you.

Utopianism is the belief that an ideal society (or at least one that's much better than ours) can be constructed, for example by the application of a particular political ideology. It’s an idea that has been considered and criticized here on LessWrong. Utopian fiction explores this belief, often by portraying such an ideal society, or the process that leads to one. In utopian fiction one expects to see characters who are perfectible, conflicts resolved successfully or peacefully, and some kind of argument in favour of utopianism. Post-utopian fiction is written in reaction to this, from a skeptical or critical viewpoint about the perfectibility of people and the possibility of improving society. One expects to see irretrievably flawed characters, idealistic projects turn to failure, conflicts that are destructive and unresolved, portrayals of dystopian societies and argument against utopianism (not necessarily all of these at once, of course, but much more often than chance).

Literary categories are vague, of course, and one can argue about their boundaries, but they do make sense. H. G. Wells’ “A Modern Utopia” is a utopian novel, and Aldous Huxley’s “Brave New World” is post-utopian.

Replies from: NancyLebovitz, Jack, David_Gerard, BarbaraB
comment by NancyLebovitz · 2010-05-13T00:28:18.974Z · LW(p) · GW(p)

Would you consider Le Guin's The Dispossessed to be post-utopian? I think she intends her Anarres to be a good place on the whole, and a decent partial attempt at achieving a utopia, but still to have plausible problems.

Replies from: tog
comment by tog · 2011-10-21T06:44:42.986Z · LW(p) · GW(p)

Not to go off on a tangent, but I'd say it's more utopian than critical of utopia - I don't think we can require utopias to be perfect to deserve the name, and Anarres is pretty (perhaps unrealistically) good, with radical (though not complete) changes in human nature for the better.

comment by Jack · 2010-05-13T00:32:30.130Z · LW(p) · GW(p)

Brave New World is definitely dystopian, not post-utopian. Nancy's suggestion for post-utopian is exactly right. I definitely agree that we can meaningfully classify cultural production, though.

Replies from: garethrees
comment by garethrees · 2010-05-13T11:46:22.035Z · LW(p) · GW(p)

I think it's both. "Brave New World" portrays a dystopia (Huxley called it a "negative utopia") but it's also post-utopian because it displays skepticism towards utopian ideals (Huxley wrote it in reaction to H. G. Wells' "Men Like Gods").

I don't claim any expertise on this subject: in fact, I hadn't heard of post-utopianism at all until I read the word in this article. It just seemed to me to be overstating the case to claim that a term like this is meaningless. Vague, certainly. Not very profound, yes. But meaningless, no.

The meaning is easily deducible: in the history of ideas "post-" is often used to mean "after; in consequence of; in reaction to" (and "utopian" is straightforward). I checked my understanding by searching Google Scholar and Books: there seems to be only one book on the subject (The post-utopian imagination: American culture in the long 1950s by M. Keith Booker) but from reading the preview it seems to be using the word in the way that I described above.

The fact that the literature on the subject is small makes post-utopianism an easier target for this kind of attack: few people are likely to be familiar with the idea, or motivated to defend it, and it's harder to establish what the consensus on the subject is. By contrast, imagine trying to claim that "hard science fiction" was a meaningless term.

comment by David_Gerard · 2010-12-02T14:12:56.681Z · LW(p) · GW(p)

Indeed. Some rationalists have a fondness for using straw postmodernists to illustrate irrationality. (Note that Alan Sokal deliberately chose a very poor journal, not even peer-reviewed, to send his fake paper to.) It's really not all incomprehensible Frenchmen. While there may be a small number of postmodernists who literally do not believe objective reality exists, and some more who try to deconstruct actual science and not just the scientists doing it, it remains the case that the human cultural realm is inherently squishy and much more relative than people commonly assume, and postmodernism is a useful critical technique to get through the layers of obfuscation motivating many human cultural activities. Any writer of fiction who is any good, for instance, needs to know postmodernist techniques, whether they call them that or not.

Replies from: TheOtherDave
comment by TheOtherDave · 2010-12-02T15:46:53.915Z · LW(p) · GW(p)

Yes.

That said, it's not too surprising that postmodernists are often the straw opponent of choice.

The idea that the categories we experience as "in the world" are actually in our heads is something postmodernists share with cognitive scientists; many of the topics discussed here (especially those explicitly concerned with cognitive bias) are part of that same enterprise.

I suspect this leads to a kind of uncanny valley effect, where something similar-but-different creates more revulsion than something genuinely opposed would.

Of course, knowing that does not make me any less frustrated with the sort of soi-disant postmodernist for whom category deconstruction is just a verbal formula, rather than the end result of actual thought.

I also weakly suspect that postmodernists get a particularly bad rap simply because of the oxymoronic name.

Replies from: David_Gerard
comment by David_Gerard · 2010-12-02T15:51:29.817Z · LW(p) · GW(p)

That said, it's not too surprising that postmodernists are often the straw opponent of choice.

Oh yeah. While it's far from a worthless field, and straw postmodernists are a sign of lazy thinking, it is also the case that postmodernism contains staggering quantities of complete BS.

Thankfully, these are also susceptible to postmodernist analysis, if not by those who wish to keep their status ...

comment by BarbaraB · 2012-06-14T20:55:57.170Z · LW(p) · GW(p)

I played a mental game trying to make predictions based on the information that Wulky Wilkinsen is post-utopian and shows colonial alienation. I had never heard of either term before :-).

Wulky Wilkinsen is post-utopian: I expect to find a bunch of critically acclaimed authors who wrote their most famous books before Wulky wrote his most famous books (5-15 years ahead?), lived in the same general area as Wulky, and portrayed people who were more altruistic and prone to serve the general good than we normally see in real life. It does not say too much about Wulky's actual writing style: he could have written in a similar way to "the bunch" (the utopians), or just the opposite, having been fed up with the utopians' style and portraying people more evil than we normally see in everyday life. So my prediction does not tell what Wulky's books feel like, but it is still a prediction, right?

Colonial alienation: the book contains characters who have lived in a colony (e.g. India) for a long time (although they might have just arrived in the "maternal" colonial country, e.g. Britain). These characters are confronted with other characters who have lived in the "maternal" colonial country for a long time (although they might have just arrived in the colony :-) ). There are conflicts between these two groups of people, based on their backgrounds. They have different preferences when they are making decisions, probably involving other people. Thus they are alienated.

Do not tell me this was not the point of Eliezer's post, let me just have some fun!

comment by Leafy · 2010-05-13T12:56:52.808Z · LW(p) · GW(p)

How is this not just a simple argument about semantics (on which I believe a vast majority of arguments are based)?

They both accept that the tree causes vibrations in the air as it falls, and they both accept that no human ear will ever hear it. The argument appears to be based solely on the definition, and surrounding implications, of the word "sound" (or "noise" as it becomes in the article) - and is therefore no argument at all.

Replies from: bigjeff5
comment by bigjeff5 · 2011-01-27T18:12:59.271Z · LW(p) · GW(p)

I think that may have been the point:

The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them.

You can define a thing based on any criteria you like. It simply has to allow your expectations to agree with reality in order for it to be true.

One says "it is sound because it vibrates regardless of whether anyone hears it." This person believes that sound is the vibrations.

The other says "it is not sound because it is never processed in a mind." This person does not deny that the vibrations exist, he simply believes it isn't sound until someone hears it.

These two have different definitions of "sound", but within their definitions both allow expectations that are completely consistent with reality. The point is to make sure your beliefs "pay rent" - that they allow you to have expectations that match up with reality. If the second person had the same belief of what sound was as the first (i.e. vibrations in the air), yet also believed that vibrations in the air do not occur when there is nobody to hear them, that belief would not pay rent. When they recorded the sound with nobody around he would expect there to be nothing at all on the tape, yet there would be something on the tape. The only way to resolve this is to adjust your belief after the fact, which means your belief couldn't pay its rent.

comment by timtyler · 2010-08-21T10:07:09.688Z · LW(p) · GW(p)

See also the movie version of this post.

Replies from: Rain
comment by Rain · 2010-08-22T13:14:20.969Z · LW(p) · GW(p)

This video has sound problems which immediately turned me off wanting to try and parse what he's saying. I suggest using a microphone and properly syncing the sound if they intend to do many more of these.

comment by alexvermeer · 2011-01-04T19:07:01.485Z · LW(p) · GW(p)

"Or suppose your postmodern English professor teaches you that the famous Wulky Wilkinsen is actually a "post-utopian". What does this mean you should expect from his book? Nothing."

When I first read this I thought, "Huh? Surely it tells you something, because I already have beliefs about what 'utopian' probably means, and what the 'post' part of it probably means, and what context these types of terms are usually used in... That sounds like a whole bag of reasons to expect certain things/themes/ideas in his book!"

But I think this missed the point Eliezer is making; a point I suggest would be more clear if he said:

"Or suppose your postmodern English professor teaches you that the famous Wulky Wilkinsen is actually a "barnbeanbaggle". What does this mean you should expect from his book? Nothing."

Darn right. I have no idea what a "barnbeanbaggle" is. It creates no anticipations about what I'll find in his book; it's free-floating.

Replies from: ata
comment by ata · 2011-01-04T19:46:19.483Z · LW(p) · GW(p)

Free-floating beliefs have to at least feel like beliefs. You can't even think you have a belief about whether Wulky Wilkinsen is a barnbeanbaggle unless you think you have some idea of what "barnbeanbaggle" is being used to mean. The thing about using a made-up word is that it's too easy to notice that you don't know what to anticipate from it. The thing about "post-utopian" is that, even if you have some idea of what "post-utopian" is supposed to mean, being told (by someone you perceive as sufficiently authoritative) that a certain author is "post-utopian" is quite likely to just make you selectively interpret that author's works to fit that schema. Similar to how you can make professional wine tasters describe a white wine the way they usually describe red wines by dyeing it red.

Replies from: alexvermeer
comment by alexvermeer · 2011-01-04T21:20:07.236Z · LW(p) · GW(p)

The made-up word being too easy to notice is a good point.

  1. "I believe Wulky is a post-utopian."
  2. "The professor says Wulky is a post-utopian, and I expect to figure out what the term means and confirm or disconfirm this claim by reading his book."

When I first read this post I thought (2), and if I understand it right, the post is attacking (1).

I may be getting too tied-up with the labels being used...

Replies from: Will_Sawin
comment by Will_Sawin · 2011-01-05T22:29:01.645Z · LW(p) · GW(p)

You originally misunderstood Eliezer's point, and now understand it.

If many people will similarly misunderstand it, that is a reason for Eliezer to change it on lesswrong or if/when it appears in his book. If you are relatively unusual, it is only a weak reason.

Reasons not to change it would be a lack of viable alternatives. Can we think of an alternative better than "post-utopian" or "barnbeanbaggle"? For example, a less meaningful term from literary theory or another field?

Replies from: BarbaraB, BarbaraB
comment by BarbaraB · 2012-06-14T20:07:18.085Z · LW(p) · GW(p)

My boyfriend just suggested "metaspontaneity" !

comment by MoreOn · 2011-02-25T18:45:42.714Z · LW(p) · GW(p)

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

If some average Joe believes he’s smart and beautiful, and that gives him utility, is that necessarily a bad thing? Joe approaches a girl in a bar, dips his sweaty fingers in her iced drink, cracks a piece of ice in his teeth, pulls it out of his mouth, shoves it in her face for demonstration, and says, “Now that I’ve broken the ice—”

She thinks: “What a butt-ugly idiot!” and gets the hell away from him.

Joe goes on happily believing that he’s smart and beautiful.

For myself, the answer is obvious: my beliefs are means to an end, not ends in themselves. They’re utility producers only insofar as they help me accomplish utility-producing operations. If I were to buy stock believing that its price would go up, I better hope my belief paid its rent in correct anticipation, or else it goes out the door.

But for Joe? If he has utility-pumping beliefs, then why not? It’s not like he would get any smarter or prettier by figuring out he’s been a butt-ugly idiot this whole time.

Replies from: Spurlock, Manfred, TheOtherDave, jimrandomh, NancyLebovitz, JGWeissman, buybuydandavis, viktor-riabtsev-1
comment by Spurlock · 2011-02-25T19:40:26.417Z · LW(p) · GW(p)

It's sort of taken for granted here that it is in general better to have correct beliefs (though there have been some discussions as to why this is the case). It may be that there are specific (perhaps contrived) situations where this is not the case, but in general, so far as we can tell, having the map that matches the territory is a big win in the utility department.

In Joe's case, it may be that he is happier thinking he's beautiful than he is thinking he is ugly. And it may be that, for you, correct beliefs are not themselves terminal values (ends in themselves). But in both cases, having correct beliefs can still produce utility. Joe for example might make a better effort to improve his appearance, might be more likely to approach girls who are in his league and at his intellectual level, thereby actually finding some sort of romantic fulfillment instead of just scaring away uninterested ladies. He might also not put all his eggs in the "underwear model" and "astrophysicist" baskets career-wise. You can further twist the example to remove these advantages, but then we're just getting further and further from reality.

Overall, the consensus seems to be that wrong beliefs can often be locally optimal (meaning that giving them up might result in a temporary utility loss, or that you can lose utility by not shifting them far enough towards truth), but a maximally rational outlook will pay off in the long run.

comment by Manfred · 2011-02-25T19:54:04.349Z · LW(p) · GW(p)

The trouble is that this rationale leads directly to wireheading at the first chance you get - choosing to become a brain in a vat with your reward centers constantly stimulated. Many people don't want that, so those people should make their beliefs only a means to an end.

However, there are some people who would be fine with wireheading themselves, and those people will be totally unswayed by this sort of argument. If Joe is one of them... yeah, sure, a sufficiently pleasant belief is better than facing reality. In this particular case, I might still recommend that Joe face the facts, since admitting that you have a problem is the first step. If he shapes up enough, he might even get married and live happily ever after.

comment by TheOtherDave · 2011-02-25T21:04:38.274Z · LW(p) · GW(p)

Well, he might. Or, rather, there might be available ways of becoming smarter or prettier for which jettisoning his false beliefs is a necessary precondition.

But, admittedly, he might not.

Anyway, sure, if Joe "terminally" values his beliefs about the world, then he gets just as much utility out of operating within a VR simulation of his beliefs as out of operating in the world. Or more, if his beliefs turn out to be inconsistent with the world.

That said, I don't actually know anyone for whom this is true.

Replies from: MoreOn
comment by MoreOn · 2011-02-25T23:29:11.171Z · LW(p) · GW(p)

That said, I don't actually know anyone for whom this is true.

I don't know too many theist janitors, either. Doesn't mean they don't exist.

From my perspective, it sucks to be them. But once you're them, all you can do is minimize your misery by finding some local utility maximum and staying there.

comment by jimrandomh · 2011-02-25T22:12:36.640Z · LW(p) · GW(p)

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

They can. They just do so very rarely, and since accepting some inaccurate beliefs makes it harder to determine which beliefs are and aren't beneficial, in practice we get the highest utility from favoring accuracy. It's very hard to keep the negative effects of a false belief contained; they tend to have subtle downsides. In the example you gave, Joe's belief that he's already smart and beautiful might be stopping him from pursuing self-improvements. But there definitely are cases where accurate beliefs are detrimental; Nick Bostrom's Information Hazards has a partial taxonomy of them.

Replies from: HonoreDB
comment by HonoreDB · 2011-02-26T01:47:39.035Z · LW(p) · GW(p)

I don't think it's possible for a reflectively consistent decision-maker to gain utility from self-deception, at least if you're using an updateless decision theory. Hiding an unpleasant fact F from yourself is equivalent to deciding never to know whether F is true or false, which means fixing your belief in F at your prior probability for it. But a consistent decision-maker who loses 10 utilons from believing F with probability ~1 must lose p*10 utilons for believing F with probability p.

Replies from: jimrandomh
comment by jimrandomh · 2011-02-26T03:04:19.968Z · LW(p) · GW(p)

A consistent decision-maker who loses 10 utilons from believing F with probability ~1 must lose p*10 utilons for believing F with probability p.

No, this is not true. Many of the reasons why true beliefs can be bad for you involve information about your beliefs leaking out to other agents in ways other than through your actions, and there is no particular reason for this effect to be linear. For example, blocking communications from a potential blackmailer is good because knowing with probability 1.0 that you're being blackmailed is more than 5 times worse than knowing with probability 0.2 that you will be blackmailed in the future if you don't block.

Replies from: HonoreDB
comment by HonoreDB · 2011-02-26T17:12:04.486Z · LW(p) · GW(p)

Oh, sure. By "gain utility" I meant "gain utility directly," as in the average Joe story.

Replies from: jimrandomh
comment by jimrandomh · 2011-02-26T17:20:27.132Z · LW(p) · GW(p)

I don't think it's linear in the average Joe story, either; if there's one threshold level of belief which changes his behavior, then utility is constant for levels of belief on either side of that threshold and discontinuous in between.

Replies from: HonoreDB
comment by HonoreDB · 2011-02-26T17:47:07.379Z · LW(p) · GW(p)

A rational agent can have its behavior depend on a threshold crossing of belief, but if there's some belief that grants it utility in itself (e.g. Joe likes to believe he is attractive), the utility it gains from that belief has to be linear with the level of belief. Otherwise, Joe can get dutch-booked by a Monte Carlo plastic surgeon.

Replies from: jimrandomh
comment by jimrandomh · 2011-02-26T17:58:54.940Z · LW(p) · GW(p)

Otherwise, Joe can get dutch-booked by a Monte Carlo plastic surgeon.

This doesn't sound right. Could you describe the Dutch-booking procedure explicitly? Assume that believing P with probability p gives me utility U(p)=p^2+C.

Replies from: HonoreDB
comment by HonoreDB · 2011-02-26T19:33:13.180Z · LW(p) · GW(p)

An additive constant seems meaningless here: if Joe gets C utilons no matter what p is, then those utilons are unrelated to p or to P--Joe's behavior should be identical if U(p)=p^2, so for simplicity I'll ignore the C.

Now, suppose Joe currently believes he is not attractive. A surgery has a .5 chance of making him attractive and a .5 chance of doing nothing. This surgery is worth U(.5)-U(0)=.25 utilons to Joe; he'll pay up to that amount for it.

Suppose instead the surgeon promises to try again, once, if the first surgery fails. Then Joe's overall chance of becoming attractive is .75, so he'll pay U(.75)-U(0)=.75^2=0.5625 for the deal.

Suppose Joe has taken the first deal, and the surgeon offers to upgrade it to the second. Joe is willing to pay up to the difference in prices for the upgrade, so he'll pay .5625-.25=.3125 for the upgrade.

Joe buys the upgrade. The surgeon performs the first surgery. Joe wakes up and learns that the surgery failed. Joe is entitled to a second surgery, thanks to that .3125-utility purchase of the upgrade. But the second surgery is now worth only .25 utility to him! The surgeon offers to buy that second surgery back from him at a cost of .26 utility. Joe accepts. Joe has spent a net of .0525 utility on an upgrade that gave him no benefit.

As a sanity check, let's look at how it would go if Joe's U(p)=p. The single surgery is worth .5. The double surgery is worth .75. Joe will pay up to .25 utility for the upgrade. After the first surgery fails, the upgrade is worth .5 utility. Joe does not regret his purchase.
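
A minimal sketch of the arithmetic above, assuming Python; U(p) = p**2 with the additive constant dropped as argued, and all the figures are the ones in the comment, nothing else:

```python
# Dutch book against U(p) = p**2 (the constant C cancels in every difference).

def U(p):
    return p ** 2

single = U(0.5) - U(0.0)       # one 50% surgery: 0.25
double = U(0.75) - U(0.0)      # the try-again deal: 0.5625
upgrade = double - single      # price Joe will pay for the upgrade: 0.3125

# The first surgery fails; what remains is again a single 50% surgery.
remaining = U(0.5) - U(0.0)    # 0.25
buyback = 0.26                 # the surgeon's buy-back offer
net_loss = upgrade - buyback   # 0.0525 utilons spent for no benefit

# Sanity check with linear U(p) = p: the upgrade costs up to 0.25 and is
# worth 0.5 after the first failure, so linear Joe has no regret.
print(single, double, upgrade, remaining, net_loss)
```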

Replies from: jimrandomh
comment by jimrandomh · 2011-02-26T20:25:59.008Z · LW(p) · GW(p)

You're missing the fact that how much Joe values the surgery depends on whether or not he expects to be told whether it worked afterward. If Joe expects to have the surgery but to never find out whether or not it worked, then its value is U(0.5)-U(0)=0.25. On the other hand, if he expects to be told whether it worked or not, then he ends up with a belief-score of either 0 or 1, not 0.5, so its value is (0.5*U(1.0) + 0.5*U(0)) - U(0) = 0.5.

Suppose Joe is uncertain whether he's attractive or not - he assigns it a probability of 1/3. Someone offers to tell him the true answer. If Joe's utility-of-belief function is U(p)=p^2, then being told the answer is worth ((1/3)*U(1) + (2/3)*U(0)) - U(1/3) = ((1/3)*1 + (2/3)*0) - (1/9) = 2/9, so he takes the offer. If on the other hand his utility-of-belief function were U(p)=sqrt(p), then being told the information would be worth ((1/3)*sqrt(1) + (2/3)*sqrt(0)) - sqrt(1/3) = -0.244, so he plugs his ears.
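
A small sketch of this value-of-information calculation, assuming Python; the helper name value_of_learning is made up for illustration, and the two utility-of-belief functions are the ones in the comment:

```python
import math

def value_of_learning(U, p):
    """Expected change in utility-of-belief from being told the answer,
    starting from believing the pleasant proposition with probability p."""
    expected_after = p * U(1.0) + (1 - p) * U(0.0)
    return expected_after - U(p)

print(value_of_learning(lambda q: q ** 2, 1 / 3))  # ~0.222 > 0: takes the offer
print(value_of_learning(math.sqrt, 1 / 3))         # ~-0.244 < 0: plugs his ears
```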

Replies from: HonoreDB, HonoreDB
comment by HonoreDB · 2011-02-26T21:33:57.916Z · LW(p) · GW(p)

You're missing the fact that how much Joe values the surgery depends on whether or not he expects to be told whether it worked afterward.

Good point.

If on the other hand his utility-of-belief function were U(p)=sqrt(p), then being told the information would be worth ((1/3)*sqrt(1) + (2/3)*sqrt(0)) - sqrt(1/3) = -0.244, so he plugs his ears.

I agree here.

But I still suspect that if your U(p) is anything other than linear in p, you can get Dutch-booked. I'll try to come back with a proof, or at least an argument.

comment by HonoreDB · 2011-02-28T22:43:12.584Z · LW(p) · GW(p)

Okay, here we go. I've possibly reinvented the wheel here, but maybe I've come up with a simple, original result. That'd be cool. Or I'm interestingly wrong.


We wish to show that superlinear utility-of-belief functions, or equivalently ones that would cause an agent to prefer ignorance, lead to inconsistency.

Suppose Joe equally wants to believe each of two propositions, P and Q, to be true, with U(x) > x*U(1) for all probabilities x, and U(x) strictly increasing with x. Without loss of generality, we set U(0) to 0 and U(1) to 1. Both propositions concern events that will invisibly occur at some known future time.

Joe anticipates that he will eventually be given the following choice, which will completely determine P and Q:

Option 1: P xor Q. Joe won't know which one is true, so he believes each of them is true with probability 1/2. So he has U(1/2)+U(1/2)=2*U(1/2) utility. By assumption this is greater than 1. So let 2*U(1/2) - 1 = k.

Option 2: One proposition will become definitely true. The other will become true with probability p, where p is chosen to be greater than 0 but less than U-inverse(k). Joe will know which proposition is which. Joe's utility would be less than U(1) + U(U-inverse(k)), or less than 1 + 2*U(1/2) - 1, or less than 2*U(1/2).

Joe prefers Option 1. Therefore he anticipates that he will choose Option 1. Therefore, his current utility is 2*U(1/2). But what if he anticipated that he would choose Option 2? Then his current utility would be 2*U(1/2+p/2). So he wishes his k were smaller than U-inverse(k), meaning he wishes his U(x) were closer to x*U(1). If he were to modify his utility function such that U'(x) = x*U(1) for all x, the new Joe would not regret this decision since it strictly increases his expected utility under the new function.

Thus we can say that all superlinear utility functions are inherently unstable, in that an agent with U(x) > x*U(1) for all probabilities x, and U(x) strictly increasing with x, may increase its expected U by modifying to U'(x) = x*U(1) for all x.

The strongest possible constraint we can give for inherent stability of a utility-of-belief function is that, with utility-of-belief function U, an agent can never improve its U-utility by switching to any other utility function, except under cases wherein it anticipates being modeled by an outside entity. If we removed this exception, no non-degenerate utility-of-belief function could be called stable because we could always posit an outside entity that punishes agents modeled to have specific utility functions. The linear utility of belief function satisfies this condition, since it behaves identically whether it is maximizing the probability of P or its U(p(P)), so it always anticipates itself maximizing its own utility function. We have just shown that no superlinear function satisfies this constraint.

But by conservation of expected evidence, no agent with a linear or sublinear utility-of-belief function can increase its expected utility-of-belief by hiding evidence from itself.

Therefore, a rational agent with a stable utility function cannot make itself happier by hiding evidence from itself, unless it is being modeled by an outside entity.
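
A numeric illustration of the Option 1 / Option 2 tension above, assuming Python and taking U(x) = sqrt(x) as one example of a "superlinear" utility-of-belief function in the comment's sense (U(x) > x*U(1) for 0 < x < 1, strictly increasing, U(0)=0, U(1)=1); the particular p is just an arbitrary value below U-inverse(k):

```python
import math

U = math.sqrt                      # example: U(x) > x*U(1) for 0 < x < 1
def U_inverse(y):
    return y ** 2

k = 2 * U(0.5) - 1                 # ~0.414, positive by the superlinearity assumption
p = 0.5 * U_inverse(k)             # any p with 0 < p < U_inverse(k)

option1 = 2 * U(0.5)               # after choosing Option 1: believes P, Q each at 1/2
option2_after = U(1.0) + U(p)      # after choosing Option 2 and learning which is which
option2_now = 2 * U(0.5 + p / 2)   # now, while merely anticipating Option 2

print(option1 > option2_after)     # True: Joe will pick Option 1 when the time comes
print(option2_now > option1)       # True: yet anticipating Option 2 is worth more now
```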

Replies from: HonoreDB, nshepperd, jimrandomh
comment by HonoreDB · 2011-03-01T05:55:43.618Z · LW(p) · GW(p)

Apologies; I realize this is both not very clearly written, and full of holes when considered as a formal proof. I have a decent excuse in that I had to rush out the door to go to the HPMOR meetup right after writing it. Rereading it now, it still looks like a sketch of a compelling proof, so if neither jimrandomh nor any lurkers see any obvious problems, I'll write it up as a longer paper, with more rigorous math and better explanations.

Replies from: tog
comment by tog · 2011-10-21T07:13:17.911Z · LW(p) · GW(p)

if neither jimrandomh nor any lurkers see any obvious problems, I'll write it up as a longer paper, with more rigorous math and better explanations

Did you ever end up writing it up? I think I'd follow more easily if you went a little slower and gave some concrete examples.

comment by nshepperd · 2011-03-01T07:15:30.056Z · LW(p) · GW(p)

That's interesting. The one problem that I have is it's rather unclear when a belief is evaluated for the purposes of utility. Which is to say, does Joe care about his belief at time t=now, or t=now+delta, or over all time? It seems obvious that most utility functions that care only about the present moment would have to be dynamically inconsistent, whether or not they mention belief.

Replies from: HonoreDB
comment by HonoreDB · 2011-03-02T00:09:02.138Z · LW(p) · GW(p)

Thanks, that's a good point. In fact, it's possible we can reduce the whole thing to the observation that it matters when the utility-of-belief function is evaluated if and only if it's nonlinear.

comment by jimrandomh · 2011-03-01T13:21:30.453Z · LW(p) · GW(p)

Thanks for taking the time to try puzzling this out, but I suspect it's just interestingly wrong. The magic seems to be happening in this paragraph:

Joe prefers Option 1. Therefore he anticipates that he will choose Option 1. Therefore, his current utility is 2*U(1/2). But what if he anticipated that he would choose Option 2? Then his current utility would be 2*U(1/2+p/2). So he wishes his k were smaller than U-inverse(k), meaning he wishes his U(x) were closer to x*U(1). If he were to modify his utility function such that U'(x) = x*U(1) for all x, the new Joe would not regret this decision since it strictly increases his expected utility under the new function.

I don't see where U(1/2+p/2) comes from; should that be U(1)+U(p)? I'm also not sure it's possible for the agent to anticipate choosing option 2, given the information it has. Finally, what does it matter whether a change increases expected utility under the new function? It's only utility under the old function that matters - changing utility function to almost anything maximizes the new function, including degenerate utility functions like number of paperclips.

Replies from: HonoreDB
comment by HonoreDB · 2011-03-02T00:02:36.751Z · LW(p) · GW(p)

I don't see where U(1/2+p/2) comes from

Joe doesn't know yet which proposition would get 1 and which would get p, so he assigns the average to both. He anticipates learning which is which, at which point it would change to 1 and p.

I'm also not sure it's possible for the agent to anticipate choosing option 2, given the information it has.

Not sure what you mean here.

Finally, what does it matter whether a change increases expected utility under the new function?

It just shows the asymmetry. Joe can maximize U by changing into Joe-with-U', but Joe-with-U' can't maximize U' by changing back to U.

comment by NancyLebovitz · 2011-02-25T22:21:16.230Z · LW(p) · GW(p)

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

Is there a difference between utility and anticipated experiences? I can see a case that utility is probability of anticipated, desired experiences, but for purposes of this argument, I don't think that makes for an important difference.

Replies from: MoreOn
comment by MoreOn · 2011-02-25T23:19:03.824Z · LW(p) · GW(p)

"Smart and beautiful" Joe is being Pascal's-mugged by his own beliefs. His anticipated experiences lead to exorbitantly high utility. When failure costs (relatively) little, it subtracts little utility by comparison.

I suppose you could use the same argument for the lottery-playing Joe. And you would realize that people like Joe, on average, are worse off. You wouldn't want to be Joe. But once you are Joe, his irrationality looks different from the inside.

comment by JGWeissman · 2011-02-25T23:17:30.695Z · LW(p) · GW(p)

In this example, Joe's belief that he's smart and beautiful does pay rent in anticipated experience. He anticipates a favorable reaction if he approaches a girl with his gimmick and pickup line. As it happens, his inaccurate beliefs are paying rent in inaccurate anticipated experiences, and he goes wrong epistemically by not noticing that his actual experience differs from his anticipated experience and that he should update his beliefs accordingly.

The virtue of making beliefs pay rent in anticipated experience protects you from forming incoherent beliefs, maps not corresponding to any territory. Joe's beliefs are coherent, correspond to a part of the territory, and are persistently wrong.

Replies from: MoreOn
comment by MoreOn · 2011-02-25T23:24:56.584Z · LW(p) · GW(p)

If my tenants paid rent with a piece of paper that said "moneeez" on it, I wouldn't call it paying rent.

In your view, don't all beliefs pay rent in some anticipated experience, no matter how bad that rent is?

Replies from: JGWeissman, Steven_Bukal
comment by JGWeissman · 2011-02-25T23:32:24.396Z · LW(p) · GW(p)

In your view, don't all beliefs pay rent in some anticipated experience, no matter how bad that rent is?

No, for an example of beliefs that don't pay rent in any anticipated experience, see the first 3 paragraphs of this article:

Thus begins the ancient parable:

If a tree falls in a forest and no one hears it, does it make a sound? One says, "Yes it does, for it makes vibrations in the air." Another says, "No it does not, for there is no auditory processing in any brain."

Suppose that, after the tree falls, the two walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other? Though the two argue, one saying "No," and the other saying "Yes," they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them.

Replies from: MoreOn
comment by MoreOn · 2011-02-25T23:34:40.781Z · LW(p) · GW(p)

Two people have semantically different beliefs.

Both beliefs lead them to anticipate the same experience.

EDIT: In other words, two people might think they have different beliefs, but when it comes to anticipated experiences, they have similar enough beliefs about the properties of sound waves, falling trees, recorders, etc., that they anticipate the same experience.

Replies from: JGWeissman
comment by JGWeissman · 2011-02-25T23:53:11.836Z · LW(p) · GW(p)

Two people have semantically different beliefs.

Taboo "semantically".

See also the example of The Dragon in the Garage, as discussed in the followup article.

Replies from: MoreOn
comment by MoreOn · 2011-02-26T00:31:18.918Z · LW(p) · GW(p)

Taboo'ed. See edit.

Although I have a bone to pick with the whole "belief in belief" business, right now I'll concede that people actually do carry beliefs around that don't lead to anticipated experiences. Wulky Wilkinsen being a "post-utopian" (as interpreted from my current state of knowing 0 about Wulky Wilkinsen and post-utopians) is a belief that doesn't pay any rent at all, not even a paper that says "moneeez."

comment by Steven_Bukal · 2011-06-27T19:41:47.173Z · LW(p) · GW(p)

If my tenants paid rent with a piece of paper that said "moneeez" on it, I wouldn't call it paying rent.

Or they pay you with forged bills. You think you'll be able to deposit them at the bank and spend them to buy stuff, but what actually happens is the bank freezes your account and the teller at the store calls the police on you.

comment by buybuydandavis · 2011-09-21T09:43:35.946Z · LW(p) · GW(p)

But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?

I think you've hit on one of the conceptual weaknesses of many Rationalists. Beliefs can pay rent in many ways, but Rationalists tend to value only the predictive utility of beliefs, and pooh-pooh the other utilities of belief. Comfort utility - it makes me feel good to believe it. Social utility - people will like me for believing it. Efficacy utility - I can be more effective if I believe it.

Predictive truth is a means to value, and even if it is a value in itself, it's surely not the only value. Instead of pooh-poohing other types of utility, to convince people you need to use that predictive utility to analyze how the other utilities can best be fulfilled.

comment by Viktor Riabtsev (viktor-riabtsev-1) · 2018-10-12T13:30:37.196Z · LW(p) · GW(p)

I am going to try and sidetrack this a little bit.

Motivational speeches, pre-game speeches: these are real activities that serve to "get the blood flowing" as it were. Pumping up enthusiasm, confidence, courage and determination. These speeches are full of cheering lines, applause lights etc., but this doesn't detract from their efficacy or utility. Bad morale is extremely detrimental to success.

I think that "Joe has utility-pumping beliefs" in that he actually believes the false fact "he is smart and beautiful"; is the wrong way to think of this subject.

Joe can go in front of a mirror and proceed to tell/chant to himself 3-4 times: "I am smart! I am beautiful! Mom always said so!". Is he not, in fact, simply pumping himself up? Does it matter that he isn't using any coherent or quantitative evaluation methods with respect to the terms "smart" or "beautiful"? Is he not simply trying to improve his own morale?

I think the right way to describe this situation is actually: "Joe delivers self-motivational mantras/speeches to himself" and believes that this is beneficial. This belief does pay rent in anticipated experiences. He does feel more confident afterwards, and it does make him more effective in conveying himself and his ideas in front of others. It's a real effect, and it has little to do with a false belief that he is actually "smart and beautiful".

comment by rabidchicken · 2011-03-16T01:09:25.984Z · LW(p) · GW(p)

This post probably changed the way I regulate my own thoughts more than any other. How many arguments I have heard never would have happened if everyone involved read this...

comment by undermind · 2011-04-13T23:40:01.555Z · LW(p) · GW(p)

Based on this, I would very much like to make a variant of Monopoly, with beliefs/theories in place of properties, and evidence for money. Invest a large chunk to establish a belief, with its rent determined by sophistication and usefulness of prediction, ranging from Aristotelian physics to relativity, spermatists & ovists to Darwinian evolution, and so on. Other players would have to give you some credit when they land on your theories, and admit that they give results.
This would also be a great way to teach some history of science, if well designed.
Of course, the analogy becomes interesting when you consider what corresponds to cutthroat capitalism...

comment by mendel · 2011-05-19T13:34:21.282Z · LW(p) · GW(p)

I don't understand how the examples given illustrate free-floating beliefs: they seem to have at least some predictive power, and thus shape anticipation (some comments by others below illustrate this better).

  • The phlogiston theory had predictive power (e.g. what kind of "air" could be expected to support combustion, and that substances would grow lighter when they burned), and it was falsifiable (and was eventually falsified). It had advantages over the theories it replaced and was replaced by another theory which represented a better understanding. (I base this reading on Jim Loy's page on Phlogiston Theory.)

  • Literary genres don't have much predictive power if you don't know anything about them - if you do, then they do. Classifying a writer as producing "science fiction" or "fantasy" creates anticipations that are statistically meaningful. For another comparison, saying some band plays "Death Metal" will shape our anticipation; somewhat differently for those who can distinguish Death Metal from Speed Metal as compared to those who merely know that "Metal" means "noise".

I can imagine beliefs leading to false anticipations, and they're obviously inferior to beliefs leading to more correct ones. That doesn't mean they're free-floating.

One example of a free-floating belief is actually the tree falling in the forest: to believe that it makes a sound does not anticipate any sensory experience, since the tree falls explicitly where nobody is around to hear it, and whether there is sound or no sound will not change how the forest looks when we enter it later. However, letting go of the belief that the tree makes a sound does not seem to me to be very useful. What am I missing?

I understand that many beliefs are held not because they have predictive power, but because they generalize experiences (or thoughts) we have had into a condensed form: a sort of "packing algorithm" for the mind when we detect something common; and when we understand this commonality enough, we get to the point where we can make predictions, and if we don't yet, we can't, but may do so later. There is no belief or thought we can hold that we couldn't trace back to experiences; beliefs are not anticipatory, but formed from hindsight. They organize past experience. Can you predict which of these beliefs is not going to be helpful in organizing future experiences? How?

comment by allenpaltrow · 2011-06-03T17:43:50.027Z · LW(p) · GW(p)

I think that this is really a discussion of explanatory power, of which scientific causation is one example. All theories attempt to explain a set of examples. Scientific theories attempt to explain causation in natural phenomena, thus their "explanatory power" is proportional to their predictive power. A unified theory of forces at the planetary and subatomic levels would explain more examples than any do now, thus it would have great explanatory power.

Yet causation isn't the only type of explanatory relationship. Causation implies time and events, whereas these are only one type of explanation. For example, the Pythagorean theorem explains why physical right triangles in reality have the lengths that they do. It doesn't "cause" them to have the properties they do. It would be foolish to say that any property of physical triangles "explains" or "proves" the Pythagorean theorem, because mathematical truths exist independent of practicalities. Plato's dialogue The Euthyphro beautifully explains why even if the set of things which are x and the set of things which are y are equivalent (in that case, the set of pious actions and the set of god loved actions,) they are not the same quality if one (god loved) explains the other (piety) and not vice versa. Similarly, the total number of hydrogen atoms in a glass of water is always even, but it is the quality of evenness (any number which is a multiple of two must be even) that explains this, not any quality of hydrogen. The one "explains" (but does not "cause") the other.

Thus, I think some parts of this post would be better understood if stated thus: any theory which provides no additional explanatory power should be ignored.

So, looking at the case of Phlogiston, the OP is not saying it is "wrong," but that it lacks the explanatory power that justifies it as a useful theory. If I take the Niels Bohr model of the atom, and say that there are extra invisible subatomic particles, and that these particles are "god," you would be hard pressed to prove me wrong. But this theory does not predict any new phenomena, nor is it falsifiable, nor, most importantly, does it have an explanatory relationship with any other known truth about atoms: none of them explain this theory, and it explains none of them. It exists completely independently of any other aspect of atomic theory, thus it lacks any explanatory power as a theory.

Yet there are theories which have great explanatory power but not empirical predictive power. Let's say I'm a simplistic deontologist who says that killing is wrong because human life is good. Along comes a utilitarian who says, I have a theory which explains, in all the cases where you're right, why you are right, and in those cases where you aren't, why you aren't, according to your own first principle. In terms of my very simplistic ethical theory, the utilitarian would absolutely be "less wrong" than me, for he has provided a theory which better explains the hard cases my theory failed to explain (justified killings, kill 1 to save 2, etc.).

In the case of the post-utopian author, I think that we again are getting wrapped up in "prediction" when we should concern ourselves with explanation.

What is a plumber? Is it a man who comes to your house, sits on your couch, eats your food, watches your TV, and flirts with your wife? Even if this is true of all plumbers, it is not the definition of plumber. Definitions should be prescriptive, such that they give you the means to determine what counts as an x, and what a good x is. If a plumber fixes pipes, anyone who fixes pipes is a plumber, a good plumber fixes them well, and no one who doesn't fix pipes is a plumber.

Thus, hold literary labels to the same standard. Don't ask, "is this label true?" Because as we saw earlier with the god particle example, many theories cannot be proven false but still have greater or lesser explanatory power (see economics, ethical theories etc). The better standard is explanatory power. Is there a definition of the quality "post-utopian" such that any book with quality x is post-utopian, x explains why it counts as post-utopian, and the more x it is, the more post-utopian it is? Saying post-utopian is a,b,c,d,e,f,g,h, but failing to provide a single explanation of the aforementioned form is like calling the plumber a man who eats your food and flirts with your wife: it is a descriptive definition, not a prescriptive definition. It may be true of every plumber, but it is not the thing that makes plumbers count as plumbers.

I think the OP meant to say that literary labels like post-utopianism fail to meet this standard. Sure, you can come up with descriptive statements of the terms which may be true (post-utopian books do not portray utopian societies as possible) but this is not a definition because it is not this quality that a. makes post-utopian books count as post-utopian, b. without which a book cannot be post-utopian, and c. designates a clear set of books which either are, or are not, post-utopian. Textual analysis perhaps can be more wrong and "less wrong," but literary theories are just not the sorts of truth-bearing statements that mathematical, scientific, or philosophical theories are.

Compare "post-utopian" to "even". Even numbers are a set of specific numbers, but there is a single quality they have (being multiples of 2) which explains why they are in the set. Without that quality, they would, "by definition", not be even. This is the standard we should be looking for in definitions and theories. Not just that they are "true" (plumbers do steal your food, watch your tv, and flirt with your wife) but that they have the sort of explanatory power we've isolated.

Thus, I think the larger point of the post stands. There are better theories and worse theories, and we should prefer the better ones.

Replies from: Alicorn
comment by Alicorn · 2011-06-03T18:00:36.829Z · LW(p) · GW(p)

deontologist who says that killing is wrong because human life is good.

Aaaaaaaaugh.

Replies from: allenpaltrow
comment by allenpaltrow · 2011-06-03T18:44:18.343Z · LW(p) · GW(p)

I'm not trying to define the terms, just posit a very, very simple theory of the form "killing is wrong because human life is good." Such a theory would be inferior, on its own premises, to a very, very simple utilitarianism, regardless of whether either theory or the premise itself is true. As such I oversimplified utilitarianism just as much, but it doesn't matter for the scope of the example.

Edit: in fact, for the purposes of the example it is better if the "deontologist" is wrong about deontology, because it better illustrates how one theory can have greater explanatory power than another only on the grounds of the former's justification without reference to external verifiability. "human life is good" is a poor first principle, but if it is true, the utilitarian's principle applies it better than the "deontologist's" did.

Replies from: Alicorn
comment by Alicorn · 2011-06-03T18:53:14.581Z · LW(p) · GW(p)

Someone who believes that killing is wrong because human life is good is not a deontologist. See here.

Replies from: allenpaltrow
comment by allenpaltrow · 2011-06-03T19:32:25.556Z · LW(p) · GW(p)

Here the deontologist is arguing for the principle 'killing is wrong regardless of the consequences' (deontic) but uses a poor justification for which consequentialism is a more reasonable conclusion. So the 'deontologist' is wrong even though his principle cannot be externally verified. I was just (unclearly I see) using this strawman to illustrate how theories could be better and worse at explaining what they attempt to explain without being the sorts of things which can be proven. I will attempt to be clearer in future.

comment by Ronny Fernandez (ronny-fernandez) · 2011-06-15T10:50:22.731Z · LW(p) · GW(p)

Wonderful exposition of versificationism (I meant verificationism lol, but I won't change it cause I like the reply below). I do have a question though. You said:

It's tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly.

Well yes, we don't directly observe atoms (actually we do now but we didn't have to). But it is still safe to say that if a belief doesn't make predictions about future sensory experiences it is meaningless, or at least unverifiable. Those predictions may be about the shape of ink squiggles on a piece of paper after some rules are applied, or they may be a prediction about the pattern that a monitor's many pixels will form after reacting to some instrument in an experiment. In either case, the hypothesis is always linked to the world by the senses, or are you claiming something different?

Replies from: gjm
comment by gjm · 2011-06-20T11:10:07.698Z · LW(p) · GW(p)

Wonderful exposition of versificationism.

Versificationism is presumably the doctrine that the truth of a proposition should be evaluated on the basis of how easily it can be expressed in poetic form. Empirically, this seems to favour any number of probably-untrue beliefs, so I'm inclined to reject it. :-)

I have in fact seen something a little like this, in a more sophisticated form, maintained seriously. For instance, here's Dorothy L Sayers (the context is her series of radio plays "The man born to be king"). "From the purely dramatic point of view the theology is enormously advantageous, because it locks the whole structure into a massive intellectual coherence. It is scarcely possible to build up anything lop-sided, trivial or unsound on that steely and gigantic framework. [...] there is no more searching test of a theology than to submit it to dramatic handling; nothing so glaringly exposes inconsistencies in a character, a story, or a philosophy as to put it upon the stage and allow it to speak for itself. [...] As I once made a character say in another context: 'Right in art is right in practice'; and I can only affirm that at no point have I yet found artistic truth and theological truth at variance."

And, though I disagree with her entirely on the truth of the sort of theology she's writing about, I think she does actually have a point of sorts. But a professional writer of fiction like Sayers really ought to have known better than to suggest that truth can be distinguished from untruth by seeing how easily each can be formed into art.

Replies from: AspiringRationalist
comment by NoSignalNoNoise (AspiringRationalist) · 2012-07-19T20:27:00.141Z · LW(p) · GW(p)

A related epistemology that is popular in the business world is PowerPointificationism, which holds that the truth of a proposition should be evaluated by how easily it can be expressed in PowerPoint. Due to the nature of PowerPoint as a means of expression, this epistemology often produces results similar to those of Occam's sand-blaster, which holds that the simplest explanation is the correct one (note that unlike Occam's razor, Occam's sand-blaster does not require that the explanation be consistent with observation).

Replies from: TheOtherDave, fubarobfusco
comment by TheOtherDave · 2012-07-19T20:56:04.660Z · LW(p) · GW(p)

Occam's sand-blaster, which holds that the simplest explanation is the correct one (note that unlike Occam's razor, Occam's sand-blaster does not require that the explanation be consistent with observation).

...and I just spit coffee on my keyboard.

That's marvelous... is that original with you?

comment by fubarobfusco · 2012-09-15T17:45:14.491Z · LW(p) · GW(p)

I take it you're familiar with Edward Tufte's "The Cognitive Style of PowerPoint"?

comment by bibilthaysose · 2011-07-30T13:40:38.909Z · LW(p) · GW(p)

Good article. Some thoughts:

I probably constrain my experiences in lots of ways that I don't even know about, but I don't think there's always a way to know whether a belief will constrain your experiences, even if it is based on empirical (or even scientific) observation. Isaac Newton's beliefs constrained all of our beliefs for centuries. Scholars were so unwilling to question classical mechanics that they came up with this "ether" stuff that could never be observed directly, and thus didn't further constrain their experience, but had the nice side effect of resolving inconsistencies in their previously held theories. However, even though Einstein's theory was more correct than Newton's, without Newton's theory mechanical engineering wouldn't exist, and without Einstein's, the Bomb wouldn't exist. I mean this is obviously a gross oversimplification of the development of the Bomb, but I'm just saying there's not much use for relativity outside of a classroom/particle accelerator.

Replies from: army1987
comment by A1987dM (army1987) · 2011-09-16T11:04:46.945Z · LW(p) · GW(p)

there's not much use for relativity outside of a classroom/particle accelerator

Global Positioning System

comment by Ab3 · 2012-02-02T22:15:56.199Z · LW(p) · GW(p)

I understand that having beliefs that are falsifiable in principle and make predictions about experience is incredibly important. But I have always wondered if my belief in falsifiability was itself falsifiable. In any possible universe I can imagine it seems that holding the principle of falsifiability for our beliefs would be a good idea. I can't imagine a universe or an experience that would make me give this up.

How can I believe in the principle of falsifiability that is itself unfalsifiable?! I feel as though something has gone wrong in my thinking but I can't tell what. Please help!

Replies from: TheOtherDave, TimS, None
comment by TheOtherDave · 2012-02-04T04:25:01.833Z · LW(p) · GW(p)

Excellent question!

Excellent, because it illustrates the problem with "believing in" the principle of falsifiability, as opposed to using it and understanding how it relates to the rest of my thinking.

Forget that the principle of falsifiability is itself incredibly important. What sorts of beliefs does the principle of falsifiability tell me to increase my confidence in? To decrease my confidence in?

What would the world have to be like for the former beliefs to be in general less likely than the latter?

Replies from: Ab3
comment by Ab3 · 2012-02-04T21:51:24.589Z · LW(p) · GW(p)

Thanks for the reply Dave. Are you saying I should not look at falsifiability as a belief, but rather a tool of some sort? That distinction sounds interesting but is not 100% clear to me. Perhaps someone should do a larger post about why the principle should not be applied to itself.

I have also thought of putting the problem this way: Eliezer states that the only ideas worth having are the ones we would be willing to give up. Is he willing to give up that idea? I don't think so..., and I would be really interested to know why he doesn't believe this to be a contradiction.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-02-05T01:55:24.504Z · LW(p) · GW(p)

What I'm saying is that the important thing is what I can do with my beliefs. If the "principle of falsifiability" does some valuable thing X, then in worlds where the PoF doesn't do X, I should be willing to discard it. If the PoF doesn't do any valuable thing X, then I should be willing to discard it in this world.

Replies from: Ab3
comment by Ab3 · 2012-02-09T18:53:00.046Z · LW(p) · GW(p)

It seems we have empirical and non-empirical beliefs that can both be rational, but what we mean by “rational” has a different sense in each case. We call empirical beliefs “rational” when we have good evidence for them, we call non-empirical beliefs like the PoF “rational” when we find that they have a high utility value, meaning there is a lot we can do with the principle (it excludes maps that can’t conform to any territory).

To answer my original question, it seems a consequence of this is that the PoF doesn’t apply to itself, as it is a principle that is meant for empirical beliefs only. Because the PoF is a different kind of belief from an empirical belief, it need not be falsifiable, only more useful than our current alternatives. What do you think about that?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-02-09T23:28:16.880Z · LW(p) · GW(p)

I think it depends on what the PoF actually is.

If it can be restated as "I will on average be more effective at achieving my goals if I only adopting falsifiable beliefs," for example, then it is equivalent to an empirical belief (and is, incidentally, falsifiable).

If it can be restated as "I should only adopt falsifiable beliefs, whether doing so gets me anything I want or not" then there exists no empirical belief to which it is equivalent (and is, incidentally, worth discarding).

comment by TimS · 2012-02-04T04:50:31.207Z · LW(p) · GW(p)

For me the principle of falsifiability is best understood as a way of distinguishing scientific theories about the world from other theories about the world. In other words, falsifiability is one way of defining what science is and is not. A theory that does not constrain experience ("God works in mysterious ways") is not a scientific theory because it can explain any occurrence and is therefore not falsifiable.

Because falsifiability is a definition, not a theory about the world, there's no reason to think it can be falsified. The definition could be wrong by failing to accurately or usefully define scientific theory, but that's conceptually different.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-02-04T09:00:39.327Z · LW(p) · GW(p)

For me the principle of falsifiability is best understood as a way of distinguishing scientific theories about the world from other theories about the world. In other words, falsifiability is one way of defining what science is and is not. A theory that does not constrain experience ("God works in mysterious ways") is not a scientific theory because it can explain any occurrence and is therefore not falsifiable.

Because falsifiability is a definition, not a theory about the world, there's no reason to think it can be falsified. The definition could be wrong by failing to accurately or usefully define scientific theory, but that's conceptually different.

Falsifiability is a very bad way to define science (or scientific theories). If falsifiability was all it took for a theory to be scientific, then all theories known to be false would be scientific (after all, if something is known to be false, it must be falsifiable). Do we really want a definition of science that says astrology is science because it's false?

Replies from: JoachimSchipper, nshepperd
comment by JoachimSchipper · 2012-02-04T09:49:55.806Z · LW(p) · GW(p)

Astrology does seem to consist of scientific hypotheses.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-02-04T11:02:28.312Z · LW(p) · GW(p)

Astrology does seem to consist of scientific hypotheses.

I chose astrology because it has a reverse halo effect around here (and so would serve me rhetorically). Feel free to replace it with any other known to be false set of propositions.

Replies from: TimS
comment by TimS · 2012-02-04T17:31:25.242Z · LW(p) · GW(p)

I agree that falsifiability is not a complete definition. My point was only that falsifiability is not applicable to the principle of falsifiability, any more than it applies to mathematics.

That said, Newton's physics and geocentric theories are false. Are they not science simply for that reason?

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-02-05T06:21:48.187Z · LW(p) · GW(p)

I agree that falsifiability is not a complete definition. My point was only that falsifiability is not applicable to the principle of falsifiability, any more than it applies to mathematics.

Yes. Falsifiability is a poor definition of science and is self-undermining in the sense that it can't pass its own test.

That said, Newton's physics and geocentric theories are false. Are they not science simply for that reason?

Of course not. I'm not claiming a scientific theory must be true. I'm claiming that known falseness (which implies falsifiability) is not a sufficient condition for being scientific.

Replies from: TimS
comment by TimS · 2012-02-06T00:46:37.240Z · LW(p) · GW(p)

A theory that does not constrain experience ("God works in mysterious ways") is not a scientific theory because it can explain any occurrence and is therefore not falsifiable.

That statement does not itself constrain experience, but that's not a useful critique of the statement.

I'm claiming that known falseness (which implies falsifiability) is not a sufficient condition for being scientific.

Known falseness is not really the same thing as falsifiability. Known falseness is useless in deciding whether a theory is scientific. Both the Greek pantheon and geocentric theories are known to be false.

Falsifiability is simply the requirement that a scientific theory list things that can't happen under that theory. Falsifiability says scientific theories don't look for evidence in support; they look for evidence to test the theory.

The fact that no false statements appear doesn't mean that the scientific theory isn't falsifiable. The fact that every statement of a theory has been true does not mean that the theory is falsifiable.

Replies from: gwern, Jayson_Virissimo
comment by gwern · 2012-02-06T01:50:19.933Z · LW(p) · GW(p)

That statement does not itself constrain experience. That's not a useful critique of the statement.

That doesn't seem true. The statement seems to perfectly constrain experience: you will not experience situations where theories which do not constrain experience will still be falsified.

And indeed, watching the world go by over the years, I see theories like 'Christianity' or 'psychoanalysis' which do not constrain experience at all have yet to be falsified - exactly as predicted.

Replies from: TimS
comment by TimS · 2012-02-06T02:32:17.989Z · LW(p) · GW(p)

Fine, you want to be contrary. What experience would falsify the partial definition of scientific theory that I have labelled "the principle of falsifiability"? If no such experience exists, does this call into doubt the usefulness of the principle?

Replies from: gwern
comment by gwern · 2012-02-06T02:40:05.402Z · LW(p) · GW(p)

What experience would falsify the partial definition of scientific theory that I have labelled "the principle of falsifiability"?

Are you even trying here? Here's what would falsify falsifiability: observing superior predictions being made by unfalsifiable theories, theories which have no reason to work but which do. Imagine a Christianity which came with texts loaded with prophetic symbolism which could be interpreted any way and is unfalsifiable, but which nevertheless keep turning out literally true (writes my hypothetical self, as he is tormented by Satanic wasps with the faces of humans prior to the sea turning into blood or something like that). In such a universe, falsifiability would be pretty useless.

Replies from: TimS
comment by TimS · 2012-02-06T02:53:12.052Z · LW(p) · GW(p)

Isn't that essentially the best case for things like Nostradamus? Even assuming that the prophecies are accurate, they aren't useful because they are so vague. The moment that the predictions are specific enough to be useful, they could be falsified.

What use is it to call that science? How could it possibly produce superior predictions in a world in which science works at all?

Replies from: gwern
comment by gwern · 2012-02-06T02:54:52.206Z · LW(p) · GW(p)

What use is it to call that science? How could it possibly produce superior predictions in a world in which science works at all?

Yes, that is rather the question you should be answering if you want to criticize the desirability of falsifiability as being unfalsifiable itself...

Replies from: TimS
comment by TimS · 2012-02-06T03:14:28.390Z · LW(p) · GW(p)

I don't understand where we disagree, so let me clarify my position: A prophecy that is so vague that it can't be disproved is so vague that it doesn't tell you what will happen ahead of time. Calling that a prediction abuses the term to the point of incoherency.

Yes, that's almost entirely a definitional point. Definitions aren't necessarily empirical statements. They are either useful or not useful in thinking carefully. Thus, the fact that they cannot be falsified is not a relevant thing to say, in the same way that it isn't useful to object that the Pythagorean theorem can't be falsified.

If you intend to invoke some other critique of Popper and his use of falsifiability to distinguish science from non-science, please be more explicit, because I don't understand your argument.

comment by Jayson_Virissimo · 2012-02-06T08:56:28.520Z · LW(p) · GW(p)

Known falseness is not really the same thing as falsifiability. Known falseness is useless in deciding whether a theory is scientific. Both the Greek pantheon and geocentric theories are known to be false.

Falsifiability is simply the requirement that a scientific theory list things that can't happen under that theory. Falsifiability says scientific theories don't look for evidence in support; they look for evidence to test the theory.

The fact that no false statements appear doesn't mean that the scientific theory isn't falsifiable. The fact that every statement of a theory has been true does not mean that the theory is falsifiable.

Nothing in this reply contradicts anything I have asserted. I was merely claiming that if falsifiability is a sufficient condition for a hypothesis to be "scientific", then all theories known to be false are scientific (because if we know they are false, then they must be falsifiable). I'm not being contrarian; I'm pointing out a deductive consequence of the very definition of falsifiability that you linked to. Hopefully this closes the inferential distance:

  • If a hypothesis is falsifiable, then it is scientific.
  • If a hypothesis is known to be false, then it is falsifiable.
  • Therefore, if a hypothesis is known to be false then it is scientific.

I am merely denying the first premise via reductio ad absurdum, because the conclusion is obviously false (and the second premise isn't). If you took my claim to be something other than this, then you have simply misread me.

Replies from: TimS
comment by TimS · 2012-02-06T14:59:51.849Z · LW(p) · GW(p)

That's much clearer. I didn't intend to assert that falsifiability was a sufficient condition for a theory being scientific, only that it is a necessary condition. That's what I mean by saying it was a partial definition.

Thus, I don't intend to assert the first sentence of your syllogism. Instead, I would say, "If a hypothesis is not falsifiable, then it is not scientific." Adding the second statement yields: "If a hypothesis is known to be false, then it might be scientific." That's a true statement, but I don't claim it is very insightful.

comment by nshepperd · 2012-02-06T10:39:35.663Z · LW(p) · GW(p)

*shrug*

I don't think the current line of enquiry is particularly useful.

"Astrology works" is a scientific theory to the degree that it is, in fact, acceptable science to do an experiment to see whether or not astrology has predictive power. It's rhetorically inaccurate to say that means "astrology is science" though, because of course the practice of astrology is not. But sure, it's probably a good idea to include other conditions. Excessively unlikely (or non-reductionist?) hypotheses could be classified as non-scientific, for the simple reason that even considering them in the first place would be a case of privileging the hypothesis.

None of this contradicts falsifiability being "a way of distinguishing scientific theories about the world from other theories about the world", if we have other ways of distinguishing scientific from non-scientific, such as "reductionism".

comment by [deleted] · 2012-02-05T07:27:32.544Z · LW(p) · GW(p)

How can I believe in the principle of falsifiability that is itself unfalsifiable?! I feel as though something has gone wrong in my thinking but I can't tell what.

You have just refuted the contention that all warranted beliefs must be falsifiable in principle. Karl Popper, who introduced the falsifiability criterion and pushed it as far as, if not further than, it can go, never advocated that all beliefs should be falsifiable. Rather, he used falsifiability as the criterion of demarcation between science and non-science, while denying that all beliefs should be scientific. His contention that falsifiability demarcates science does imply, as he recognized, that the criterion of falsifiability is not itself a scientific hypothesis.

Rational beliefs are not necessarily scientific beliefs. Mathematics is rational without being falsifiable. The same is true of philosophical beliefs, such as the belief that scientific beliefs are falsifiable. But rational beliefs that are not scientific must be refutable, and falsifiable beliefs are a proper subset of refutable beliefs. Falsifiable beliefs are refutable in one particular way: they are refutable by observation statements, which I think are equivalent to EY's anticipations. Science is special because it is 1) empirical (unlike mathematics) and 2) has an unusual capacity to grow human knowledge systematically (unlike philosophy). But that does not imply that we can make do with scientific beliefs exclusively, one reason being the one that you mention about criteria for the acceptance of scientific theories.

The broader criterion of refutability doesn't necessarily involve refutation by observation statements. How would you refute the falsifiability criterion? It would be false if it were the case that scientists secured the advance of science by using some other criteria (such as verification).

It's a mistake to conflate the question of whether a theory is scientific with whether it's corroborated (by attempted falsifications). Or to conflate whether it's scientific with whether it's rationally believable. Theories aren't bad because they aren't science. They're bad because they're set up so they resist any form of refutation. Rational thought involves making your thinking vulnerable to potential refutation, rather than protecting it from any refutation. In science, the mode of refutation is observation, direct connection to sensory data. But it won't do (as you've realized by trying to apply falsifiability to itself) to limit one's thinking entirely to that which is falsifiable.

You later ask (in effect) whether the refutability criterion is itself even refutable. Would EY be willing, ever, to give it up? He should be, were someone to show that sheer dogmatism conduces to the growth of knowledge. That I can't conceive of a plausible argument to that end doesn't obviate the refutability of the contention.

I think that resolves your confusion, but I don't want to imply that Popper uttered the last word—there are problems with neglecting verification in favor of strict falsificationism.

Replies from: Ab3
comment by Ab3 · 2012-02-09T18:30:24.714Z · LW(p) · GW(p)

Thank you for your thoughts.

What are the criteria that we use for accepting or refuting rational non-empirical beliefs? You mention that falsifiability would be refuted if some other criteria “secured the advance of science.” You also mention that we should give up the refutability criterion if “sheer dogmatism conduces to the growth of knowledge.” It sounds like our criteria for the refutability of non-empirical beliefs are mostly practical; we accept the epistemic assumptions that make things “work best.” Is there more to it than this?

Replies from: None
comment by [deleted] · 2012-02-10T03:57:13.164Z · LW(p) · GW(p)

To be pedantic and Popperian, I'd have to correct your use of "empirical beliefs." The philosophical positions at issue aren't scientific, but they are empirical. "Empirical" claims, to serve as the basis for scientific observation statements, must be expressible in low-level observation sentences that all competent scientists agree on.

The belief in question is that science's crucial distinguishing feature, the one allowing it to advance, is the subjection of its claims to empirical testing, allowing strict falsification. We can't run an experiment or otherwise record observation statements, so we resort to philosophical debate aimed at refutation. Refutation is obtained by plausible argument. For instance, in the discussion about demarcation, an example of a potentially plausible argument goes: if we relied on falsification exclusively, we would never have evidence that a claim is true, only that it isn't false. But we rely on scientific theories and consider them close to the truth (or at least probably so). Therefore, falsifiability can't explain the distinctiveness of science.

This involves highly plausible claims, based on observation, about how we in fact use scientific theories. But although it is the result of observation, it can't be reduced to something everyone agrees on that is closely tied to direct perception, as an observation statement can.

comment by vinayak · 2012-05-15T04:26:41.358Z · LW(p) · GW(p)

I have read this post before and agreed with it. But reading it again just now, I have new doubts.

I still agree that beliefs should pay rent in anticipated experiences. But I am not sure any more that the examples stated here demonstrate it.

Consider the example of the tree falling in a forest. Both sides of the argument do have anticipated experiences connected to their beliefs. For the first person, the test of whether the tree makes a sound is to place an air-vibration detector in the vicinity of the tree and check it later; if it detected some vibration, the answer is yes. For the second person, the test is to monitor every person living on Earth and see whether their brains did the kind of auditory processing that the falling tree would cause. Since the first person's test has turned out positive and the second person's test has turned out negative, they say "yes" and "no" respectively as answers to the question, "Did the tree make any sound?"

So the problem here doesn't seem to be an absence of rent in anticipated experiences. There is some problem, true, because there is no single anticipated experience where the two people anticipate opposite outcomes even though one says that the tree makes a sound and the other says it doesn't. But it seems that's for a different reason.

Say person A has a set of observations X, Y, and Z that he thinks are crucial for deciding whether the tree made a sound. For example, if X is positive, he concludes that the tree did make a sound, otherwise that it didn't; if Y is negative, he concludes it did not make a sound; and so on. Here, X could be "caused air vibration," for example. For all other kinds of observations, A has a don't-care protocol, i.e., the other observations do not say anything about the sound. Similarly, person B has a set X', Y', Z' of crucial observations, and all other observations lie in his set of don't-cares. The problem here is just that X, Y, Z are completely disjoint from X', Y', Z'. Thus, even though A and B differ in their opinions about whether the tree made a sound, there is no single aspect where they would anticipate completely opposite experiences.

comment by prashantsohani · 2012-06-02T22:18:27.522Z · LW(p) · GW(p)

Suppose someone, on inspecting his own beliefs to date, discovers a certain sense of underlying structure; for instance, one may observe a recurring theme of evolutionary logic. Then, while deciding on a new set of beliefs, would it not be reasonable for him to anticipate and test for similar structure, just as he would use other 'external' evidence? Here, we are not dealing with direct experience so much as the mere belief in an experience of coherence within one's thoughts, which may be an illusion, for all we know. But then again, assuming that the existing thoughts came from previous 'external' evidence, could one say that the anticipated structure is indeed well-rooted in experience already?

comment by abbyjh · 2012-07-11T23:13:24.811Z · LW(p) · GW(p)

I was reading those 'what good is math?' and 'what good is music?' comments. You can determine whether any 'system' is good or bad based on the understanding or misunderstanding of the variables involved.

I.e., one has no use for math if one does not understand any of the vast number of variables associated with the concepts of math. Math cannot be any good to a person who doesn't understand it.

This principle applies to any 'system', whether it be math, music, love, life, etc.

comment by JohnEPaton · 2012-07-30T05:34:22.171Z · LW(p) · GW(p)

If a belief turns deadbeat, evict it.

This might be challenging because our beliefs tend to shape the world we live in, thus masking their error. Does anyone have any practical tips for discovering erroneous beliefs?

Replies from: Nectanebo, TheAncientGeek, ChristianKl
comment by Nectanebo · 2012-07-30T06:31:00.433Z · LW(p) · GW(p)

The post you replied to is helpful advice for doing just that.

Above all, don't ask what to believe—ask what to anticipate.

When what you specifically anticipate doesn't line up with what happens, that's discovering a possible erroneous belief.

comment by TheAncientGeek · 2015-02-22T18:29:59.120Z · LW(p) · GW(p)

If a belief encapsulates a value, if it's about how you want the world to be, why shouldn't it shape the world, and why should you evict it?

comment by ChristianKl · 2015-02-22T18:56:29.031Z · LW(p) · GW(p)

Does anyone have any practical tips for discovering erroneous beliefs?

Making predictions about the world based on your beliefs and seeing whether those predictions hold true.

comment by Mestroyer · 2013-06-22T13:40:50.164Z · LW(p) · GW(p)

What about things I remember from long ago, which no one else remembers and for which I can find no present evidence or record besides those memories themselves?

comment by christopherj · 2013-10-03T18:32:34.932Z · LW(p) · GW(p)

Then what does this belief not allow to happen—what would definitely falsify this belief? A null answer means that your belief does not constrain experience; it permits anything to happen to you.

What if I had the belief that a certain coin was unfair, with a 51% chance of heads and only a 49% chance of tails? Certainly I could observe an absurd number of coin flips, and each bunch of them could nudge my belief -- but short of an infinite number of flips, none would "definitely" falsify it. Certainly in this case, I could come to believe with an arbitrary level of certainty in the falsehood of the belief. But I don't believe that would apply in general -- what if, to test a belief to an arbitrary level of certainty, I'd need to think up and apply an indefinite number of unique tests? For example, a belief concerning the state of mind of another person -- I can't think of a definite test, nor can I repeat any test indefinitely to increase certainty.

On a related note, why abandon Bayes in this case for Popper, without any disclaimer? E.g., falsificationism is useful because it fights magic explanations and positive bias, but a belief is still predictive if observation causes you to slightly shift your probability for it.
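Here's a minimal sketch of that Bayesian picture (my own illustration, assuming only two candidate hypotheses, a fair coin versus the 51/49 coin): each flip nudges the posterior by a small likelihood ratio, and no finite number of flips drives it all the way to 0 or 1.

```python
import random

# Minimal sketch (illustrative assumption: only two hypotheses are in play).
# H_biased: P(heads) = 0.51   vs   H_fair: P(heads) = 0.50.
# Each flip multiplies the odds by a likelihood ratio close to 1, so the
# posterior drifts toward one hypothesis but never reaches exactly 0 or 1.

def posterior_biased(flips, prior=0.5, p_biased=0.51, p_fair=0.50):
    """Posterior probability of the 'biased' hypothesis after a sequence of flips."""
    odds = prior / (1 - prior)
    for flip in flips:                                 # flip is 'H' or 'T'
        if flip == 'H':
            odds *= p_biased / p_fair                  # likelihood ratio for heads
        else:
            odds *= (1 - p_biased) / (1 - p_fair)      # likelihood ratio for tails
    return odds / (1 + odds)

random.seed(0)
true_p = 0.51                                          # simulate a genuinely biased coin
flips = ['H' if random.random() < true_p else 'T' for _ in range(100_000)]
for n in (100, 1_000, 10_000, 100_000):
    print(n, round(posterior_biased(flips[:n]), 4))
```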

Replies from: tylerj
comment by tylerj · 2014-01-03T15:33:05.880Z · LW(p) · GW(p)

What caused you to believe a 51% chance of heads versus a 49% chance of tails?

comment by Mati_Roy (MathieuRoy) · 2013-10-14T02:17:33.350Z · LW(p) · GW(p)

Another example of these types of questions: "If a man who cannot count finds a four-leaf clover, is he lucky?" (Stanisław Jerzy Lec)

comment by tylerj · 2014-01-02T14:47:54.510Z · LW(p) · GW(p)

Or suppose your postmodern English professor teaches you that the famous writer Wulky Wilkinsen is actually a "post-utopian".

Suppose you, an invisible man, overheard 1,000,000 distinct individual humans proclaim "I believe that Velma Valedo and Wulky Wilkinsen are post-utopians based on several thorough readings of their complete bibliographies!"

Must there be some correspondence (probably an extremely complex connection) between the writings, and, quite possibly, between some of the 1,000,000 brains that believe this? The subjectively defined "post-utopian" does not hold much evidential weight when simply mentioned by one informed English professor, but when the attribute "post-utopian" is used to describe two distinct authors by many blind and informed subjects, does this (even a little bit) allow us to anticipate any similarities between (some of) the subjects' brains or between (some of) the authors' writings?

comment by 3p1cd3m0n · 2014-12-25T17:55:09.051Z · LW(p) · GW(p)

What evidence is there for floating beliefs being uniquely human? As far as I know, neuroscience hasn't advanced far enough to be able to tell if other species have floating beliefs or not.

Edit: Then again, the question of whether floating beliefs are uniquely human is practically a floating belief itself.

comment by Wenceslao · 2015-06-30T20:43:35.067Z · LW(p) · GW(p)

Interesting post. However, I do not completely agree with the conclusions at the end.

I am a student in the mathematical sciences, which puts me in an environment of researchers in this area. In this way, I am able to see that these people's work is based on beliefs about things that 'do not exist'; I mean, they work on abstract ideas that generally exist only in their minds. And now I wonder, do their efforts 'not pay rent'? They live from structures and stuff that, in most cases, cannot be found in 'real life', and so, according to the article's conclusion, this would not be worth thinking about, as it does not flow from a question of anticipation (what were we anticipating, if it does not exist?).

Maybe I'm misunderstanding the post, or maybe it is just focused on other life experiences.

Replies from: LawChan
comment by LawrenceC (LawChan) · 2015-06-30T21:18:47.647Z · LW(p) · GW(p)

You're definitely right that there are some areas where it's easier to make beliefs pay rent than others! I think there are two replies to your concern:

1) First, many theories from math DO pay rent (the ones I'm most aware of are statistics and computer-science related ones). For example, better algorithms in theory (say Strassen's algorithm for multiplying matrices) often correspond to better results in practice (see the small sketch below). Even more abstract stuff like number theory or recursion theory yields testable predictions.

2) Even things that can't pay rent directly can be logical implications of other things that pay rent. Eliezer wrote about this kind of reasoning here.
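As a concrete illustration of point 1 (a sketch of mine, not part of the original comment): Strassen's identities for a 2x2 multiply use seven multiplications instead of the naive eight, and the abstract result pays rent in a directly checkable anticipation, namely that both procedures give the same matrix.

```python
# Illustrative check: Strassen's seven-multiplication formulas reproduce the
# naive eight-multiplication 2x2 matrix product -- an abstract theorem cashing
# out as a concrete, checkable anticipation.

def naive_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    return ((a11*b11 + a12*b21, a11*b12 + a12*b22),
            (a21*b11 + a22*b21, a21*b12 + a22*b22))

def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))

A, B = ((1, 2), (3, 4)), ((5, 6), (7, 8))
assert naive_2x2(A, B) == strassen_2x2(A, B)   # the anticipated experience holds
print(strassen_2x2(A, B))
```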

comment by BenFRayfield · 2015-08-02T04:14:58.414Z · LW(p) · GW(p)

If we extend the concept of making beliefs pay rent to structures in computer memory, then AIs could better choose which structures are worth more than they cost when many objects are shared in an acyclic network. Each object at the bottom could cost 1, and the objects pointing at x would equally share the cost of x, plus 1 for themselves. If beliefs are stored in these memory structures, then a belief would be evicted when its cost exceeds some measure of its value, and total value would be in units of memory available. When some beliefs are evicted, the objects they depended on become more expensive to the others who depend on them, because the number of beliefs sharing the cost decreases. On the other hand, if many beliefs depend on a certain structure in memory, those many sharing the cost each pay less.
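A minimal sketch of one way to read that cost-sharing rule (the dependency network and names below are made up for illustration): every object pays 1 for itself plus an equal share of the cost of each object it points at, and evicting one belief raises the share its former co-renters pay.

```python
# Illustrative cost-sharing over a made-up acyclic belief network:
# each object pays 1 for itself plus an equal share of the cost of each
# object it points at. Evicting 'belief2' makes 'lemma' more expensive
# for 'belief1', since fewer beliefs now share that cost.

deps = {
    "axiom_a": [],
    "axiom_b": [],
    "lemma":   ["axiom_a", "axiom_b"],
    "belief1": ["lemma"],
    "belief2": ["lemma"],
}

def costs(deps):
    # Count how many objects point at each node (its cost-sharing "renters").
    refcount = {node: 0 for node in deps}
    for targets in deps.values():
        for t in targets:
            refcount[t] += 1

    memo = {}
    def cost(node):
        if node not in memo:
            shares = sum(cost(t) / refcount[t] for t in deps[node])
            memo[node] = 1 + shares
        return memo[node]
    return {node: cost(node) for node in deps}

print(costs(deps))        # belief1 and belief2 split lemma's cost
del deps["belief2"]       # evict a belief...
print(costs(deps))        # ...and belief1 now bears lemma's full cost
```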

comment by Raz989 · 2015-12-18T10:49:37.638Z · LW(p) · GW(p)

This is enlightening.

comment by nyeven · 2017-09-17T14:32:46.269Z · LW(p) · GW(p)

Wulky Wilkinsen is a “post-utopian.” What does this mean you should expect from his books? Nothing. The belief, if you can call it that, doesn’t connect to sensory experience at all.

I don't believe this is a good example. That information actually can change your anticipation.

By knowing that information, you can expect the book to be set in a post-utopian world. Anticipating that, you can maybe take better notice of the setting and of how exactly the world is post-utopian.

But a great article nevertheless.

comment by ADITHYA SRINIVASAN (adithya-srinivasan) · 2020-01-15T20:40:25.120Z · LW(p) · GW(p)

I don't get it. Any belief could be said to "pay rent" if you can conceive of a situation where it will be useful later on.

A general situation that I made up:

Given any belief X, if at least 2 people believe X, I always have utility in believing X (I think it should be knowing) as it helps me predict the actions of the other 2 people who believe X.

Even in the example where the student regurgitates it onto the upcoming quiz, the belief had utility for him, as he could use it to improve his grades (constraining reality in a way he wants it to be).

I believe you should judge your beliefs based on expected utility in the future (extremely hard to calculate).


PS: This is my first comment/post. Forgive me if it is a bit rough.

Replies from: jeronimo196
comment by jeronimo196 · 2020-02-01T20:05:26.389Z · LW(p) · GW(p)

Any belief could be said to "pay rent" if you can conceive a situation where it will be useful later on.

Just so. And a belief that leads to correct predictions will (generally) be more useful than a belief that doesn't.

A general situation that I made up was.

Given any belief X and at least 2 people believe X,I always have utility in believing X(I think it should be knowing) as it helps me predict the actions of the other 2 people that believe in X.

I think I see a confusion with the term "eviction" here. There is a difference between believing X exists (knowing about X) and believing X is true (believing X). So, "evicting X" should be understood as "no longer believing X", rather than "erasing all knowledge of X" (which happens involuntarily anyway).

I hope this was helpful, as this is my first comment, too. Anyway, I've lurked awhile and I don't think anyone here would begrudge you raising an honest question.

P.S. Welcome to less wrong :) !!!

Edit: formatting.

comment by maxa · 2021-08-27T03:47:19.045Z · LW(p) · GW(p)

Yes! And another way to think about the arguments about beliefs that aren’t predicting anything is that they are really about definitions. When I listen to people talk and argue, I often find myself thinking “well, this depends on how you define X”. For example, is sound something that a living creature perceives, or is it vibrations in the air?

comment by Mark Neyer (mark-neyer) · 2022-02-14T15:49:43.124Z · LW(p) · GW(p)

Why is 'constraining anticipation' the only acceptable form of rent?

What if a belief doesn't modify the predictions generated by the map, but it does reduce the computational complexity of moving around the map in our imaginations? It hasn't reduced anticipation in theory, but in practice it allows us to collapse anticipation fields more cheaply, because it lowers the computational complexity of reasoning about what to anticipate in a given scenario. I find concepts like the multiverse very useful here - you don't 'need' them to reduce your anticipation as long as you're willing to spend more time and computation to model a given situation, but the multiverse concept is very, very useful in quickly collapsing anticipation fields over spaces of possible outcomes.

Or, what if a belief just makes you feel really good and gives you a ton of energy, allowing you to more successfully accomplish your goals and avoid worrying about things that your rational mind knows are low probability, but which you haven't been able to un-stick from your brain? Does that count as acceptable rent? If not, why not?

Or, what if a belief just steamrolls over the prediction-making process and hardwires useful actions in a given context? If you took a pill that made you become totally blissed out, wireheading you, but it made you extremely effective at accomplishing the goals you had prior to taking the pill, why wouldn't you take it?

What's so special about making predictions, over, say, overcoming fear, anxiety, and akrasia?

comment by Martin Čelko (martin-celko) · 2022-08-16T22:20:04.790Z · LW(p) · GW(p)

Then what is the difference between belief and assumption in our mental maps?

What about imagination? Is that a belief, an assumption, or an incongruent map of reality?

Can imagination be part of mental processing without making us wrong about reality?

For instance, if I imagine that all buses in my city are blue, though they are red, can I then walk around with this model of reality in my head without a false belief? After all, it's just imagination.

Or is this model going to corrupt my thinking as I walk about thinking it, knowing full well it's not true?

Furthermore, what does the question really ask?

First question: does the tree fall? If it does, who is asking?

Who knows the tree? Who knows where it fell, and how far, and so on?

The question is nonsensical insofar as it assumes it can be asked without cognitive bias.

The question itself is cognitive bias.

If we tie down abstract thinking immediately to reality, there is no creative process to be had.

Imagination would then leave no room for us to abstract or use mental processes beyond those that bog us down in everyday life, and thus we would never form the connections that allow us to think otherwise.

It's either true or not, but the result of a sensory and thinking process such as logic is predictable, if done perfectly.

Even language can be a cognitive bias.

So then, if we translate the question of falling trees into reality - that is, you know what a falling tree looks like - the question is pointless. You have experienced a tree falling.

The question then makes zilch sense.

It's irrelevant.

You just know that there are no trees that fall and fail to make a sound.

There is no 'if'.

There is no logic to be used.

It's like walking around, seeing a tree fall, and asking people, 'Did you hear that? It made a sound?'

If, however, we word the question as: do all trees, all the time, under all conditions here on Earth, make a sound when they fall and hit the ground? Then the question is what to make of that.

For instance, do all matches burn? How can we know if we don't try them all out?

So in a strict abstract sense we can be sure that our model is true, as long as all trees make a sound as we see them falling; but there is a chance that a tree falls and we won't hear it make a sound.

comment by Gregory Holmes (gregory-holmes) · 2023-06-16T13:04:41.300Z · LW(p) · GW(p)

A couple of important limitations to the concept:

The concept assumes that beliefs should be tied to observable, testable phenomena. However, there are many important aspects of life and human experience (like emotions, subjective experiences, and certain philosophical or religious beliefs) that aren't easily observable or testable. The concept can be less applicable or useful in these areas. 

It also doesn't address truth value: The concept encourages beliefs to be tied to specific anticipations, but it doesn't necessarily address the truth value of those beliefs. A belief can generate specific anticipations and still be false, or not generate specific anticipations and still be true.

This concept doesn't explain why certain beliefs persist even when they don't lead to accurate anticipations. Factors such as cultural tradition, emotional comfort, cognitive biases, and lack of exposure to alternative viewpoints can all contribute to the persistence of beliefs, even when they don't "pay rent" in terms of generating accurate predictions.

There's a risk that people might selectively interpret their experiences to confirm their existing beliefs. This can lead to a situation where beliefs seem to generate accurate anticipations, even when they're not actually based on valid reasoning or evidence.

Replies from: Raemon
comment by Raemon · 2023-06-16T17:47:42.615Z · LW(p) · GW(p)

This concept doesn't explain why certain beliefs persist even when they don't lead to accurate anticipations. Factors such as cultural tradition, emotional comfort, cognitive biases, and lack of exposure to alternative viewpoints can all contribute to the persistence of beliefs, even when they don't "pay rent" in terms of generating accurate predictions

The post isn't meant to be an explanation for why beliefs exist; it's meant to highlight that, by default, people have a bundle of things-that-feel-like-beliefs that all seem to be a similar shape. But, if your goal is to figure out what's true and make good plans, it's very important to separate out which of your 'beliefs' are about predicting reality, and which are there for other reasons.

comment by GreenBeetle · 2023-10-03T08:33:24.350Z · LW(p) · GW(p)

It’s tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly. We don’t see the atoms underlying the brick, but the atoms are in fact there. There is a floor beneath your feet, but you don’t experience the floor directly; you see the light reflected from the floor, or rather, you see what your retina and visual cortex have processed of that light. To infer the floor from seeing the floor is to step back into the unseen causes of experience. It may seem like a very short and direct step, but it is still a step.

Do we know the atoms are in fact there? All "rationality" has to start from irrational beliefs or axioms in order to get anywhere. I assume people here believe in external reality and other minds, as do I; if not, well, that's a whole other can of worms. I doubt folks here are solipsists.

I would say you do experience the floor directly, as it takes more than just your eyes and brain to make the experience; like you said, you see the light reflected OFF something. It's also not really inferring the floor from seeing it: if I see a floor, there is a floor, unless something would cause me to doubt it. After all, illusions are things that end up being disproven through testing.

Though my original point still stands: rationality can't tell you everything. Some stuff you just gotta believe, and some things can't be determined rationally. External reality and atoms are just things you gotta believe in, since you cannot truly verify whether there is an external world or not. In matters of morality or taste, rationality does nothing either. Choosing a flavor of ice cream doesn't really have any rational basis, after all.

comment by Keifer Furzland (kfrz) · 2023-12-05T12:38:40.281Z · LW(p) · GW(p)

There is a floor beneath your feet, but you don’t experience the floor directly; you see the light reflected from the floor, or rather, you see what your retina and visual cortex have processed of that light
 


But indeed, I experience the floor directly; the experience of the floor is not limited to visual perception but also involves direct sensory inputs. The sensation caused by gravitational pull and the counter-pressure from the floor are experienced directly. Additionally, the sound produced when stepping on the floor and the anticipation of the floor's existence contribute to the direct experience of the floor. Therefore, the floor is experienced directly through a combination of sensory inputs, including but not limited to, visual, tactile, auditory, and proprioceptive sensations.