Rationality quotes: April 2010

post by wnoise · 2010-04-01T20:41:39.003Z · LW · GW · Legacy · 309 comments

This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions.

309 comments

Comments sorted by top scores.

comment by RobinZ · 2010-04-01T23:44:06.403Z · LW(p) · GW(p)

I listen to all these complaints about rudeness and intemperateness, and the opinion that I come to is that there is no polite way of asking somebody: have you considered the possibility that your entire life has been devoted to a delusion? But that’s a good question to ask. Of course we should ask that question and of course it’s going to offend people. Tough.

Daniel Dennett, interview for TPM: The Philosopher's Magazine

Replies from: Rain
comment by Rain · 2010-04-07T17:53:47.549Z · LW(p) · GW(p)

If the point is to get them to answer or reason about the topic, then I think we should reject the statement that "there is no polite way of asking." We should find a way of asking politely, such as teaching them to process our questions instead of answering with cached thoughts. Being offensive doesn't win.

I also think it's a poorly phrased question, since it's easily brushed off with "yes/no", avoiding any of the deeper implications in an apparent effort to make it catchy and instantly polarizing.

If the point is to upset people, to feel righteous, or to signal tribal affiliation, then go right ahead.

Replies from: RobinZ
comment by RobinZ · 2010-04-07T19:27:47.703Z · LW(p) · GW(p)

Being offensive doesn't win.

This is not universally true, but I would support trying to create non-offensive ways to deliver the message - the combination of direct and conciliatory methods is probably more powerful than either alone.

Replies from: Rain
comment by Rain · 2010-04-07T19:35:06.428Z · LW(p) · GW(p)

Yes, I considered that to be the primary statement under contention.

It's not a strategy I wish to use, so I decided to speak out against it, even while realizing that's kind of the point: to have purists who keep showing that there's further to go, and a spectrum of other positions to provide a more gradual path.

I recognize the potential usefulness of it even as I deride it; I am good cop.

comment by Nic_Smith · 2010-04-03T02:55:18.832Z · LW(p) · GW(p)

I recall, for example, suggesting to a regular loser at a weekly poker game that he keep a record of his winnings and losses. His response was that he used to do so but had given up because it proved to be unlucky.

-- Ken Binmore, Rational Decisions

A side note: All three of the quotes I've posted are from Binmore's Rational Decisions, which I'm about a third of the way through and have found very interesting. It makes a great companion to Less Wrong -- and it's also quite quotable in spots.

Replies from: gwern
comment by gwern · 2010-12-20T01:03:39.834Z · LW(p) · GW(p)

Wow - I think I felt real physical pain in my eyes as I read that one.

comment by RobinZ · 2010-04-01T23:44:53.869Z · LW(p) · GW(p)

My dad used to have an expression: "Don't tell me what you value. Show me your budget, and I'll tell you what you value."

Joe Biden, remarks delivered in Saint Clair Shores, MI, Monday, September 15, 2008

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2010-04-02T01:13:01.687Z · LW(p) · GW(p)

Of course, to really see what someone values you'd have to see their budget profile across a wide range of wealth levels.

comment by Scott Alexander (Yvain) · 2010-04-02T00:46:16.211Z · LW(p) · GW(p)

"Everyone thinks they've won the Magical Belief Lottery. Everyone thinks they more or less have a handle on things, that they, as opposed to the billions who disagree with them, have somehow lucked into the one true belief system."

-- R. Scott Bakker, Neuropath

Replies from: cousin_it
comment by cousin_it · 2010-04-02T00:48:25.815Z · LW(p) · GW(p)

You mean, like every Bayesian believes their prior is correct?

Replies from: Liron, Vladimir_Nesov
comment by Liron · 2010-04-02T04:27:39.464Z · LW(p) · GW(p)

Bayesians don't believe they lucked into their priors. They have a reflectively consistent causal explanation for their priors.

Replies from: Unknowns
comment by Unknowns · 2010-04-02T05:19:23.484Z · LW(p) · GW(p)

Even if their explanation were correct, they would still have lucked into them. Others have different priors and no doubt different causes for their priors. So those Bayesians would have been lucky, in order to have the causes that would produce correct priors instead of incorrect ones.

Replies from: Eliezer_Yudkowsky, Vladimir_Nesov
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-04-02T17:43:01.212Z · LW(p) · GW(p)

But that still doesn't need to be luck. I got my priors offa evolution and they are capable of noticing when something works or doesn't work a hundred times in a row. True, if I had a different prior, I wouldn't care about that either. But even so, that I have this prior is not a question of luck.

Replies from: Yvain, cousin_it
comment by Scott Alexander (Yvain) · 2010-04-03T22:30:13.230Z · LW(p) · GW(p)

It is luck in a sense - every way that your opinion differs from someone else, you believe that factors outside of your control (your intelligence, your education, et cetera) have blessed you in such a way that your mind has done better than that poor person's.

It's just that it's not a problem. Lottery winners got richer than everyone else by luck, but that doesn't mean they're deluded in believing that they're rich. But someone who had only weak evidence ze won the lottery should be very skeptical. The real point of this quote is that being much less wrong than average is an improbable state, and you need correspondingly strong evidence to support the possibility. I think many of the people on this site probably do have some of that evidence (things like higher than average IQ scores would be decent signs of higher than normal probability of being right) but it's still something worth worrying about.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-04-04T01:24:22.922Z · LW(p) · GW(p)

I think I agree with that: There's nothing necessarily delusive about believing you got lucky, but it should generally require (at least) an amount of evidence proportional to the amount of purported luck.

comment by cousin_it · 2010-04-03T11:50:51.021Z · LW(p) · GW(p)

Then it would make sense to use some evolutionary thingy instead of Bayesianism as your basic theory of "correct behavior", as Shalizi has half-jokingly suggested.

comment by Vladimir_Nesov · 2010-04-02T19:09:29.463Z · LW(p) · GW(p)

Priors can't be correct or incorrect.

(Clarified in detail in this comment.)

Replies from: PhilGoetz, Vladimir_Nesov, wnoise
comment by PhilGoetz · 2010-04-02T23:07:45.299Z · LW(p) · GW(p)

Sounds mysterious to me. Priors are not claims about the world?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-02T23:11:58.286Z · LW(p) · GW(p)

Not quite. They are the way you process claims about the world. A claim has to come in the context of a method for its evaluation, but a prior can only be evaluated by comparing it to itself...

comment by Vladimir_Nesov · 2010-04-03T09:35:44.412Z · LW(p) · GW(p)

This downvoting should be accompanied with discussion. I've answered the objections that were voiced, but naturally I can't refute an incredulous stare.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-04-03T22:14:11.963Z · LW(p) · GW(p)

The normal way of understanding priors is that they are or can be expressed as joint probability distributions, which can be more or less well-calibrated. You're skipping over a lot of inferential steps.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-03T22:19:33.621Z · LW(p) · GW(p)

Right. We could talk of the quality of an approximation to a fixed object that is defined as the topic of a pursuit, even if we can't choose the fixed object in the process, and thus there is no sense in having preferences about its properties.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-04-03T22:28:43.544Z · LW(p) · GW(p)

I can't tell what you're talking about.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-04T08:05:03.620Z · LW(p) · GW(p)

Say, you are trying to figure out what the mass of an electron is. As you develop your experimental techniques, there will be better or worse approximate answers along the way. It makes sense to characterize the approximations to the mass you seek to measure as more or less accurate, and to characterize someone else's wild guesses about this value as correct or not correct at all.

On the other hand, it doesn't make sense to similarly characterize the actual mass of an electron. The actual mass of an electron can't be correct or incorrect, can't be more or less well-calibrated -- talking this way would indicate a conceptual confusion.

When I talked about prior or preference in the above comments, I meant the actual facts, not particular approximations to those facts; the concepts that we might want to approximate, not the approximations. Characterizing these facts as correct or incorrect doesn't make sense for similar reasons.

Furthermore, since they are fixed elements of an ideal decision-making algorithm, it doesn't make sense to ascribe preference to them (more or less useful, more or less preferable). This is a bit more subtle than with the example of the mass of an electron, since in that case we had a factual estimation process, and with decision-making we also have a moral estimation process. With factual estimation, the fact that we are approximating isn't itself an approximation, and so can't be more or less accurate. With moral estimation, we are approximating the true value of a decision (event), and the actual value of a decision (event) can't be too high or too low.

Replies from: RobinZ
comment by RobinZ · 2010-04-04T14:14:55.915Z · LW(p) · GW(p)

I follow you up until you conclude that priors cannot be correct or incorrect. An agent with more accurate priors will converge toward the actual answer more quickly - I'll grant that's not a binary distinction, but it's a useful one.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-04T14:36:42.694Z · LW(p) · GW(p)

If you are an agent with a "less accurate prior", then you won't be able to recognize a "more accurate prior" as a better one. You are trying to look at the situation from the outside, but that's not possible when we're discussing your own decision-making algorithms.

Replies from: RobinZ, JGWeissman
comment by RobinZ · 2010-04-04T16:07:54.554Z · LW(p) · GW(p)

If I'm blind, I won't be able to recognize a sighted person by sight. That doesn't change the fact that the sighted person can see better than the blind person.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-04T16:22:34.729Z · LW(p) · GW(p)

There is no God's view to define the truth, and Faith to attain it. You only get to use your own eyes. If I predict a fair coin will come up "heads", and you predict it'll come up "tails", and it does come up "tails", who was closer to the truth? The truth of such a prediction is not in how well it aligns with the outcome, but in how well it takes into account available information, how well it processes the state of uncertainty. What should be believed given the available information and what is actually true are two separate questions, and the latter question is never asked, as you never have all the information, only some state of uncertainty. Reality is not transparent, it's not possible to glimpse the hidden truth, only to cope with uncertainty. Confuse the two at your own peril.

Replies from: RobinZ
comment by RobinZ · 2010-04-04T20:27:00.673Z · LW(p) · GW(p)

I'm so confused, I can't even tell if we disagree. What I am thinking of is essentially the argument in Eliezer Yudkowsky's "Inductive Bias":

The more inductive bias you have, the faster you learn to predict the future, but only if your inductive bias does in fact concentrate more probability into sequences of observations that actually occur. If your inductive bias concentrates probability into sequences that don't occur, this diverts probability mass from sequences that do occur, and you will learn more slowly, or not learn at all, or even - if you are unlucky enough - learn in the wrong direction.

Inductive biases can be probabilistically correct or probabilistically incorrect, and if they are correct, it is good to have as much of them as possible, and if they are incorrect, you are left worse off than if you had no inductive bias at all. Which is to say that inductive biases are like any other kind of belief; the true ones are good for you, the bad ones are worse than nothing. In contrast, statistical bias is always bad, period - you can trade it off against other ills, but it's never a good thing for itself. Statistical bias is a systematic direction in errors; inductive bias is a systematic direction in belief revisions.
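A toy numerical sketch of the learning-rate claim (the Beta priors and the 0.8 coin below are illustrative assumptions, nothing from the quoted post): a prior that concentrates probability near the truth converges quickly, while one aimed at the wrong value has to spend its early evidence unlearning.

```python
# Two Beta priors updating on flips of a coin whose true heads rate is 0.8.
import random

def posterior_mean(alpha, beta, flips):
    """Posterior mean of the heads rate under a Beta(alpha, beta) prior."""
    heads = sum(flips)
    return (alpha + heads) / (alpha + beta + len(flips))

if __name__ == "__main__":
    random.seed(0)
    flips = [1 if random.random() < 0.8 else 0 for _ in range(50)]
    for name, (a, b) in {"aimed near 0.8": (8, 2), "aimed near 0.2": (2, 8)}.items():
        print(f"prior {name}: mean after 10 flips = "
              f"{posterior_mean(a, b, flips[:10]):.2f}, "
              f"after 50 flips = {posterior_mean(a, b, flips):.2f}")
```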

comment by JGWeissman · 2010-04-04T16:26:39.531Z · LW(p) · GW(p)

If you can inspect and analyze your own prior (using your own prior, of course) you can notice that your prior is not reflectively consistent, that you can come up with other priors that your prior expects to get better results. Humans, who are not ideal Bayesians but have a concept of ideal Bayesians, have actually done this.

(Though reflective consistency does not guarantee effectiveness. Some priors are too ineffective to notice they are ineffective.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-04T16:32:06.045Z · LW(p) · GW(p)

This might be a process of figuring out what your prior is, but the approximations along the way are not your prior (they might be some priors).

Replies from: JGWeissman
comment by JGWeissman · 2010-04-04T16:49:45.966Z · LW(p) · GW(p)

I see three priors to track here:

  1. The prior I would counterfactually have had if I were not able to make this comparison.
  2. The ideal prior I am comparing my approximation of prior (1) to.
  3. My actual prior resulting from this comparison, reflecting that I try to implement prior (2), but cannot always compute/internalize it.

I have prior (3), but I believe prior (2) is better.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-04T16:58:00.638Z · LW(p) · GW(p)

If you have a concept of prior (2), and wish to get better at acting according to it over time, then (2) is your real prior. It is what you (try to) use to make your decisions. (3) is just a tool you employ in the meantime, and you may pick a better tool, judging with (2). I don't know what (1) means (or what (2) means when (1) is realized).

Replies from: JGWeissman
comment by JGWeissman · 2010-04-04T17:18:55.297Z · LW(p) · GW(p)

I don't know what (1) means

(1) is the prior I would have if I had never inspected and analyzed my prior. It is a path not taken from prior (3). The point of introducing it was to point out that I really believe (2) is better than (3), as opposed to (2) is better than (1) (which I also believe, but it isn't the point).

Does "your prior" refer to (A) the prior you identify with, or (B) the prior that describes your actual beliefs as you process evidence, or something else?

If (A), I don't understand:

This might be a process of figuring out what your prior is, but the approximations along the way are not your prior

If (B), I don't understand:

If you have a concept of prior (2), and wish to get better at acting according to it over time, then (2) is your real prior.

comment by wnoise · 2010-04-02T19:34:40.869Z · LW(p) · GW(p)

They can be more or less useful, though.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-02T20:35:51.021Z · LW(p) · GW(p)

According to what criterion? You'd end up comparing a prior to the prior you hold, with the "best" prior for you just being the same as yours. Like with preference. Clearly not the concept Unknowns was assuming -- you don't need luck to satisfy a tautology.

Replies from: Nick_Tarleton, wnoise
comment by Nick_Tarleton · 2010-04-02T21:15:50.329Z · LW(p) · GW(p)

According to what criterion?

Correspondence to reality.

(Do you realize how inferentially far the idea of prior as part of preference is from the normal worldview here?)

comment by wnoise · 2010-04-02T21:02:49.605Z · LW(p) · GW(p)

Of being better at predicting what happens, of course.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-02T21:04:20.190Z · LW(p) · GW(p)

You can't judge based on info you don't have. Based on what you do have, you can do no better than your current prior.

Replies from: PhilGoetz, wnoise
comment by PhilGoetz · 2010-04-05T15:29:53.207Z · LW(p) · GW(p)

But you can go and get info, and then judge, and say, "That prior that I held was wrong."

You're speaking as if all truth were relative. I don't know if you mean this, but your comments in this thread imply that there is no such thing as truth.

You've recently had other discussions about values and ethics, and the argument you're making here parallels your position in that argument. You may be trying to keep your beliefs about values, and about truths in general, in syntactic conformance. But rationally I hope you agree they're different.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-05T16:56:33.323Z · LW(p) · GW(p)

But you can go and get info, and then judge, and say, "That prior that I held was wrong."

It is only wrong not to update.

Replies from: wnoise
comment by wnoise · 2010-04-05T18:14:09.182Z · LW(p) · GW(p)

And, of course, the priors must be updated in the correct way.

Nonetheless, it is greatly preferable to have a prior that led to decisions that gave high utility, rather than one that led to decisions that gave low utility. Of course this can't be measured beforehand. But the whole point of updating is to get better priors, in this exact sense, for the next round of decisions and updates.

comment by wnoise · 2010-04-02T21:11:30.331Z · LW(p) · GW(p)

I am in violent agreement.

comment by Vladimir_Nesov · 2010-04-02T08:22:25.612Z · LW(p) · GW(p)

A prior can't be judged. It's not assumed to be "correct". It's just the way you happen to process new info and make decisions, and there is no procedure to change the way it is from inside the system.

Replies from: cousin_it, Jack, Yvain
comment by cousin_it · 2010-04-02T09:54:11.308Z · LW(p) · GW(p)

Locked in, huh? Then I don't want to be a Bayesian.

Replies from: neq1, Vladimir_Nesov
comment by neq1 · 2010-04-02T15:45:20.502Z · LW(p) · GW(p)

If someone was locked in to a belief, then they'd use a point mass prior. All other priors express some uncertainty.

comment by Vladimir_Nesov · 2010-04-02T11:29:37.952Z · LW(p) · GW(p)

Since you are already locked into some preference anyway, you should figure out how to compute within it best (build a FAI).

Replies from: cousin_it
comment by cousin_it · 2010-04-02T14:39:04.417Z · LW(p) · GW(p)

you are already locked into some preference anyway

What makes you say that? It's not true. My preferences have changed many times.

Replies from: Vladimir_Nesov, None
comment by Vladimir_Nesov · 2010-04-02T18:06:10.943Z · LW(p) · GW(p)

Distinguish formal preference and likes. Formal preference is like prior: both current beliefs and procedure for updating the beliefs; beliefs change, but not the procedure. Likes are like beliefs: they change all the time, according to formal preference, in response to observations and reflection. Of course, we might consider jumping to a meta level, where the procedure for updating beliefs is itself subject to revision; this doesn't really change the game, you've just named some of the beliefs changing according to fixed prior "object-level priors", and named the process of revising those beliefs according to the fixed prior "process of changing object-level prior".

When formal preference changes, it by definition means that it changed not according to (former) formal preference, that is something undesirable happened. Humans are not able to hold their preference fixed, which means that their preferences do change, what I call "value drift".

You are locked into some preference in a normative sense, not a factual one. This means that value drift does change your preference, but it is actually desirable (for you) for your formal preference to never change.

Replies from: cousin_it
comment by cousin_it · 2010-04-03T11:31:43.495Z · LW(p) · GW(p)

Formal preference is like prior: both current beliefs and procedure for updating the beliefs; beliefs change, but not the procedure.

I object to your talking about "formal preference" without having a formal definition. Until you invent one, please let's talk about what normal humans mean by "preference" instead.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-03T11:36:16.179Z · LW(p) · GW(p)

I'm trying to find a formal understanding of a certain concept, and this concept is not what is normally called "preference", as in "likes". To distinguish from the word "preference", I used the label "formal preference" in the above comment to refer to this concept I don't fully understand. Maybe the adjective "formal" is inappropriate for something I can't formally define, but it's not an option to talk about a different concept, as I'm not interested in a different concept. Hence I'm confused about what you are really suggesting by

Until you invent one, please let's talk about what normal humans mean by "preference" instead.

For the purposes of FAI, what I'm discussing as "formal preference", which is the same as "morality", is clearly more important than likes.

Replies from: cousin_it
comment by cousin_it · 2010-04-03T15:53:44.262Z · LW(p) · GW(p)

I'd be willing to bet money that any formalization of "preference" that you invent, short of encoding the whole world into it, will still describe a property that some humans do modify within themselves. So we aren't locked in, but your AIs will be.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-04-03T16:04:59.023Z · LW(p) · GW(p)

Do humans modify that property, or find it desirable to modify it? The distinction between factual and normative is very important here, since we are talking about preference, the pure normative. If humans prefer different preference from a given one, they do so in some lawful way, according to some preference criterion (that they hold in their minds). All such meta-steps should be included. (Of course, it might prove impossible to formalize in practice.)

As for the "encoding the whole world" part, it's the ontology problem, and I'm pretty sure that it's enough to encode preference about strategy (external behavior, given all possible observations) of a given concrete agent, to preserve all of human preference. Preference about external world or the way the agent works on the inside is not required.

comment by [deleted] · 2010-04-02T16:55:14.857Z · LW(p) · GW(p)

What makes you say that Bayesians are locked in? It's not true. If they're presented with evidence for or against their beliefs, they'll change them.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-04-02T23:06:44.304Z · LW(p) · GW(p)

You're talking about posteriors. They're talking about priors, presumably foundational priors that for some reason aren't posteriors for any computations. An important question is whether such priors exist.

Replies from: None
comment by [deleted] · 2010-04-03T04:10:57.286Z · LW(p) · GW(p)

But your beliefs are your posteriors, not your priors. If the only thing that's locked in is your priors, that's not a locking-in at all.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-04-05T15:37:30.481Z · LW(p) · GW(p)

That's not obvious. You'd need to study many specific cases, and see if starting from different priors reliably predicts the final posteriors. There might be no way to "get there from here" for some priors.

When we speak of the values that an organism has, which are analogous to the priors an organism starts with, it's routine to speak of the role of the initial values as locking in a value system. Why do we treat these cases differently?

Replies from: wnoise
comment by wnoise · 2010-04-05T17:51:34.292Z · LW(p) · GW(p)

There might be no way to "get there from here" for some priors.

That's obviously true for priors that initially assign probability zero somewhere. But as Cosma Shalizi loves pointing out, Diaconis and Freedman have shown it can happen for more reasonable priors too, where the prior is "maladapted to the data generating process".

This is of course one of those questionable cases with a lot of infinities being thrown around, and we know that applying Bayesian reasoning with infinities is not on fully solid footing. And much of the discussion is about failure to satisfy Frequentist conditions that many may not care about (though they do have a section arguing we should care). But it is still a very good paper, showing that non-zero probability isn't quite good enough for some continuous systems.

comment by Jack · 2010-04-06T03:57:49.095Z · LW(p) · GW(p)

I have heard some argue for adjusting priors as a way of dealing with deductive discoveries since we aren't logically omniscient. I think I like that solution. Realizing you forgot to carry a digit in a previous update isn't exactly new information about the belief. Obviously a perfect Bayesian wouldn't have this issue but I think we can feel free to evaluate priors given that we are so far away from that ideal.

comment by Scott Alexander (Yvain) · 2010-04-03T22:24:24.542Z · LW(p) · GW(p)

But one man's prior is another man's posterior: I can use the belief that a medical test is 90% specific when using it to determine whether a patient has a disease, but I arrived at my beliefs about that medical test through Bayesian processes - either logical reasoning about the science behind the test, or more likely trying the test on a bunch of people and using statistics to estimate a specificity.

So it may be mathematically wrong to tell me my 90% prior is false, but the 90% prior in the first case is the same number as the 90% posterior from the second case, and it's totally kosher to say that the 90% posterior from the second case is wrong (and by extension, that I'm using the "wrong prior").
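To make the point concrete, here is a minimal sketch (the counts, the 95% sensitivity, and the 1% base rate are illustrative assumptions, not figures from the comment): the specificity figure is first estimated as a posterior from calibration data, and then used as an input belief when interpreting an individual patient's result.

```python
# Illustrative sketch of "one man's prior is another man's posterior".
# All numbers below (counts, sensitivity, base rate) are made up.

def estimate_specificity(true_negatives, disease_free_tested):
    """Posterior mean of specificity under a uniform Beta(1,1) prior."""
    return (true_negatives + 1) / (disease_free_tested + 2)

def prob_disease_given_positive(base_rate, sensitivity, specificity):
    """Bayes' rule: P(disease | positive test result)."""
    p_positive = base_rate * sensitivity + (1 - base_rate) * (1 - specificity)
    return base_rate * sensitivity / p_positive

if __name__ == "__main__":
    # Step 1: the "90% specific" figure is itself a posterior from trial data.
    spec = estimate_specificity(true_negatives=90, disease_free_tested=100)
    # Step 2: that posterior is then used as the belief about the test when
    # reasoning about a single patient.
    print(f"estimated specificity: {spec:.3f}")
    print(f"P(disease | positive): {prob_disease_given_positive(0.01, 0.95, spec):.3f}")
```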

The whole reflective consistency thing is that you shouldn't have "foundational priors" in the sense that they're not the posterior of anything. Every foundational prior gets checked by how well it accords with other things, and in that sense is sort of a posterior.

So I agree with cousin_it that it would be a problem if every Bayesian believed their prior to be correct (as in - they got the correct posterior yesterday to use as their prior today).

Replies from: JGWeissman
comment by JGWeissman · 2010-04-06T04:20:38.053Z · LW(p) · GW(p)

Vladimir is using "prior" to mean a map from streams of observations to probability distributions over streams of future observation, not the prior probability before updating. Follow the link in his comment.

comment by anonym · 2010-04-04T01:43:41.148Z · LW(p) · GW(p)

Everything is vague to a degree you do not realize till you have tried to make it precise.

Bertrand Russell

Replies from: RobinZ
comment by RobinZ · 2010-05-28T22:16:47.359Z · LW(p) · GW(p)

Note: phaedrus has provided a citation to "The Philosophy of Logical Atomism", noting that this quote is only part of the sentence.

Replies from: phaedrus, anonym
comment by phaedrus · 2010-05-28T22:21:28.586Z · LW(p) · GW(p)

Thanks RobinZ, The full quote is "Everything is vague to a degree you do not realize till you have tried to make it precise, and everything precise is so remote from everything that we normally think, that you cannot for a moment suppose that is what we really mean when we say what we think."

But the partial quote is much more crisp.

comment by anonym · 2010-05-29T07:07:14.050Z · LW(p) · GW(p)

Oooh, thanks to RobinZ and phaedrus! I hadn't seen the second part, and didn't have the citation.

comment by Rain · 2010-04-01T20:48:19.814Z · LW(p) · GW(p)

The important work of moving the world forward does not wait to be done by perfect men.

-- George Eliot

comment by Rain · 2010-04-01T20:48:00.021Z · LW(p) · GW(p)

Any technique, however worthy and desirable, becomes a disease when the mind is obsessed with it.

-- Bruce Lee

comment by Richard_Kennaway · 2010-04-01T22:04:50.659Z · LW(p) · GW(p)

When I look around and think that everything's completely and utterly fucked up and hopeless, my first thought is "Am I wearing completely and utterly fucked up and hopeless-colored glasses?"

Crap Mariner (Lawrence Simon)

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2010-04-06T10:17:21.063Z · LW(p) · GW(p)

The opposite of rose-tinted spectacles: shit-tinted shades.

comment by RobinZ · 2010-04-05T15:16:59.587Z · LW(p) · GW(p)

You don't have to believe everything you think.

Seen on bumper sticker, via ^zhurnaly.

Replies from: Amanojack, phaedrus
comment by Amanojack · 2010-04-05T20:29:09.466Z · LW(p) · GW(p)

This is more important than it looks. Most people's beliefs are just recorded memes that bubbled up from their subconscious when someone pressed them for their beliefs. They wonder what they believe, their mind regurgitates some chatter they heard somewhere, and they go, "Aha, that must be what I believe." Unless they take special countermeasures, humans are extremely suggestible.

comment by phaedrus · 2010-11-22T23:51:17.634Z · LW(p) · GW(p)

"It is the mark of an educated mind to be able to entertain a thought without accepting it." --- Aristotle

comment by Thomas · 2010-04-02T16:52:20.739Z · LW(p) · GW(p)

Wandering in a vast forest at night, I have only a faint light to guide me. A stranger appears and says to me: 'My friend, you should blow out your candle in order to find your way more clearly.' The stranger is a theologian.

-- Denis Diderot

Replies from: Pfft
comment by Pfft · 2010-04-04T01:17:58.472Z · LW(p) · GW(p)

But blowing out the candle actually would make it easier to find your way (it ruins your night vision).

Replies from: James_K
comment by James_K · 2010-04-04T19:57:01.431Z · LW(p) · GW(p)

Not if the forest is sufficiently dark that your night vision doesn't have enough light to work with.

Replies from: Zubon
comment by Zubon · 2010-04-05T01:02:47.449Z · LW(p) · GW(p)

That seems like an easy case to test, provided you have some way to re-light the candle.

Replies from: roundsquare
comment by roundsquare · 2010-04-05T09:40:00.676Z · LW(p) · GW(p)

You need to make two assumptions for the analogy.

1) You can't re-light the candle.

2) If you do things exactly right, you'll get out just before starving to death (or dying somehow); otherwise, you are dead.

comment by MichaelGR · 2010-04-05T06:35:25.891Z · LW(p) · GW(p)

"Institutions will try to preserve the problem to which they are the solution."

-- Clay Shirky

comment by komponisto · 2010-04-01T21:39:48.638Z · LW(p) · GW(p)

What can be asserted without evidence can be dismissed without evidence.

-- Christopher Hitchens

Replies from: Oscar_Cunningham, Lightwave, Jonathan_Graehl
comment by Oscar_Cunningham · 2010-04-02T09:08:45.157Z · LW(p) · GW(p)

Well, clearly we can assert anything we want, so the quote becomes:

That without evidence can be dismissed without evidence.

And we notice that evidence doesn't change depending on whether you're considering something for belief or dismissal, so the quote becomes:

That without evidence can be dismissed.

So Hitchens is really telling us that prior probabilities tend to be small, which is true since there are almost always many possible hypotheses that the probability mass is split between.

Replies from: None
comment by [deleted] · 2010-04-02T16:45:33.408Z · LW(p) · GW(p)

You're assuming that probability mass tends to be split between stuff. This would be true, if all interesting statements were mutually exclusive or something. But consider the hypothesis that at least one statement in the Bible is true. This hypothesis is very complex, and yet its prior probability is very large.

comment by Lightwave · 2010-04-03T15:51:53.199Z · LW(p) · GW(p)

One thing that bugs me about this quote is that it isn't strong enough. It might give people the impression that it's up to the reader's opinion or personal preference to decide what to believe or not believe. They're allowed to believe in something they have no evidence for, you're allowed to dismiss it, everyone's happy.

comment by Jonathan_Graehl · 2010-04-01T22:30:17.494Z · LW(p) · GW(p)

Accuracy was sacrificed for a pleasant parallel construction. Anything can be so asserted.

Replies from: Strange7
comment by Strange7 · 2010-04-01T22:38:59.224Z · LW(p) · GW(p)

And, without supporting evidence, such assertions demonstrate nothing.

Replies from: jimrandomh, Psychohistorian
comment by jimrandomh · 2010-04-02T01:16:13.426Z · LW(p) · GW(p)

What can be asserted without evidence can be dismissed without evidence.

-- Christopher Hitchens

Accuracy was sacrificed for a pleasant parallel construction. Anything can be so asserted.

And, without supporting evidence, such assertions demonstrate nothing.

The mere fact that an assertion has been made is, in fact, evidence. For example, I will now flip a coin five times, and assert that the outcome was THHTT. I will not provide any evidence other than that assertion, but that is sufficient to conclude that your estimate of the probability that it's true should be higher than 1/2^5. Most assertions don't come with evidence provided unless you go looking for it. If nothing else, most assertions have to be unsupported because they're evidence for other things and the process has to bottom out somewhere.

Now, as a matter of policy we should encourage people to provide more evidence for their assertions wherever possible, but that is entirely separate from the questions of what is evidence, what evidence is needed, and what is demonstrated by an assertion having been made.
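One way to make that arithmetic concrete is the following toy model (a sketch only; the honesty probability and the uniform-lying assumption are illustrative, not part of the comment): if the asserter reports the true sequence with probability p and otherwise names one of the 32 sequences at random, the posterior that the coins really came up THHTT is p + (1 - p)/32, which exceeds 1/32 for any nonzero p.

```python
# Toy model for "an assertion is evidence" (illustrative assumptions only).

def posterior_given_assertion(p_honest, n_flips=5):
    """P(coins really came up as asserted | the sequence was asserted),
    assuming a dishonest asserter names one of the 2**n_flips sequences
    uniformly at random."""
    p_base = 0.5 ** n_flips  # prior probability of any particular sequence
    return p_honest + (1.0 - p_honest) * p_base

if __name__ == "__main__":
    for p in (0.0, 0.1, 0.5, 0.9):
        print(f"p_honest = {p:.1f} -> posterior = {posterior_given_assertion(p):.4f}")
    # Any nonzero chance of honesty pushes the posterior above 1/32 = 0.03125.
```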

Replies from: Jack, Jordan
comment by Jack · 2010-04-07T01:55:31.013Z · LW(p) · GW(p)

The mere fact that an assertion has been made is, in fact, evidence.

Well the evidence here isn't really "the fact that it has been asserted" but "the fact that it has been asserted in a context where truthfulness and authority are usually assumed". The assertion itself doesn't carry the weight. If we're playing poker and in the middle of a big hand I tell you "I have the best hand possible, you should fold." that isn't evidence of anything since it has been asserted in a context where assumptions about truthfulness have been flung out the window.

comment by Jordan · 2010-04-02T05:09:35.547Z · LW(p) · GW(p)

that is sufficient to conclude that your estimate of the probability that it's true should be higher than 1/2^5.

Or it's sufficient to conclude that one's estimate should be less than 1/2^5. Without providing additional evidence (such as "I saw the THHTT outcome"), your claim is rather dubious, and -- in the realm of humans -- is probably a good indicator that you are lying or crazy. I'm not sure how one should update one's posteriors.

Replies from: None
comment by [deleted] · 2010-04-02T16:52:25.039Z · LW(p) · GW(p)

Suppose I tell you that my password is D!h98+3(dkE4. Do you conclude that since I don't want you to know my password, I must be trying to mislead you as to what my password is, and so the probability that this is my password is actually less than 1/95^12?

If I assert that the outcome was THHTT, either I'm lying or I'm not lying, and there's little evidence either way. What little evidence there is probably doesn't push my probability of telling the truth below 3%, and surely the strength of the evidence has little, if anything, to do with the prior probability of the coin showing THHTT.

Replies from: Jordan
comment by Jordan · 2010-04-02T19:28:59.381Z · LW(p) · GW(p)

Do you conclude that since I don't want you to know my password, I must be trying to mislead you as to what my password is, and so the probability that this is my password is actually less than 1/95^12?

Good point. Thanks for batting down my idiocy here, much obliged =D

comment by Psychohistorian · 2010-04-02T00:40:52.913Z · LW(p) · GW(p)

"There are no married bachelors."

Replies from: SilasBarta, Tiiba, Psychohistorian
comment by SilasBarta · 2010-04-07T17:06:30.324Z · LW(p) · GW(p)

Tom and Sue, acquaintances through friends of theirs, got legally married, with no ceremony, in order for Tom to avoid being drafted to fight in a war. They barely know each other. They have not spoken to each other in a long time and (obviously) have no children. Neither wears a wedding ring. They plan to void the marriage as soon as the laws allow, with no further transfer of property between them.

Tom is a married bachelor.


There's a reason the term "bachelor" exists, and it's not to make Kant right.

Replies from: Jack, Psychohistorian
comment by Jack · 2010-04-07T17:20:49.100Z · LW(p) · GW(p)

This just looks like an instance of using contradictory language to indicate that Tom fits the conventional definitions of neither a bachelor nor a married man. You could also say Tom is a single spouse. Bachelor happens to have connotations of referring to lifestyle rather than legal status, which makes your meaning plainer. The fact that language is flexible enough to get around logic doesn't mean married bachelor isn't a logical contradiction or that Kant is wrong.

Replies from: SilasBarta
comment by SilasBarta · 2010-04-07T17:31:42.386Z · LW(p) · GW(p)

My point is that we have words because they call out a useful, albeit fuzzy, blob of conceptspace. We may try to claim that two words mean the same thing, but if there are different words, there's probably a reason -- because we want to reference different concepts ("connotations") in someone's mind.

It's important to distinguish between the concepts we are trying to reference, vs. some objective equivalence we think exists in the territory. The territory actually includes minds that think different thoughts on hearing "unmarried" vs. "bachelor".

ETA: My point regarding Kant was this: He should have seen statements like "All bachelors are unmarried" as evidence regarding how humans decide to use words, not as evidence for the existence of certain categories in reality's most fundamental ontology.

Replies from: Tyrrell_McAllister, Jack
comment by Tyrrell_McAllister · 2010-04-08T00:14:38.782Z · LW(p) · GW(p)

My point regarding Kant was this: He should have seen statements like "All bachelors are unmarried" as evidence regarding how humans decide to use words, not as evidence for the existence of certain categories in reality's most fundamental ontology.

By "certain categories in reality's most fundamental ontology", do you mean the synthetic/analytic distinction? He wouldn't consider that distinction to be part of reality's most fundamental ontology. He would disavow any ability to get at "fundamental reality", which he would consider to be intrinsically out of reach, locked away in the inaccessible numinous.

Actually, he would say something very close to what you wrote when you said that he "should have seen statements like 'All bachelors are unmarried' as evidence regarding how humans decide to use words". What he would say is that the statement is evidence regarding how humans have decided to build a certain concept out of other concepts.

If you affirm the assertion "All bachelors are unmarried" to yourself, then what you are doing, on Kant's view, is inspecting the concept "bachelor" in your own mind and finding the concept "unmarried" to be among its building blocks. The assertion is analytic because one confirms it to oneself in this way.

Analyticity doesn't have to do with what the things you call bachelors are like in and of themselves. So it's not about fundamental reality. Rather, analysis is the act of inspecting how a concept is put together in your mind, and analytic assertions are just assertions that analysis can justify, such as that one concept is part of another concept.

Kant would even allow that you could make a mistake while carrying out this inspection. You might think that "unmarried" was one of the original pieces out of which you had built "bachelor", when in fact you just now snuck in "unmarried" to form some new concept without realizing it. That is, you might have just unknowingly carried out an act of synthesis. Kant would say, though, that you can reach effective certainty if you are sufficiently careful, just as you can reach effective certainty about a simple arithmetical sum if you perform the sum with sufficient care.

[The above is just to clarify Kant's claims, not to endorse them.]

comment by Jack · 2010-04-07T17:36:18.365Z · LW(p) · GW(p)

I don't disagree with anything here.

Replies from: SilasBarta
comment by SilasBarta · 2010-04-07T17:43:10.077Z · LW(p) · GW(p)

Rockin'.

I'd tie the point back to the original quotation, but I'm losing interest now and actually kind of busy...

comment by Psychohistorian · 2010-04-08T15:51:24.890Z · LW(p) · GW(p)

This is just playing with connotations. A bachelor is an unmarried man, so one could say that Tom acts like a bachelor despite being married. He is not a bachelor, though. To show this has a practical implication, assume Tom met Mary: the two could not get married immediately. If he were a bachelor, they could. He therefore lacks necessary properties of bachelorness (most significantly, not being married), and cannot be a bachelor, even if he may live his life much as a bachelor would.

comment by Tiiba · 2010-04-02T12:54:20.805Z · LW(p) · GW(p)

My dad has a Bachelor's degree.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2010-04-02T18:53:40.529Z · LW(p) · GW(p)

Is he married?

Replies from: Tiiba
comment by Tiiba · 2010-04-02T21:09:28.016Z · LW(p) · GW(p)

Yes, to mom.

Replies from: Psychohistorian, None
comment by Psychohistorian · 2010-04-04T19:54:01.197Z · LW(p) · GW(p)

"There are no married unmarried men."

I add this grudgingly, as deliberately seeking ambiguity in a clear sentence is just being fatuous; it's not a valid objection.

comment by [deleted] · 2010-04-06T03:49:47.160Z · LW(p) · GW(p)

.

comment by Psychohistorian · 2010-04-08T16:37:55.364Z · LW(p) · GW(p)

I was wrong. On further reflection, this is a failed attempt to refute this point, though I don't think the ensuing discussion of Kant actually gets to why.

If you're familiar with the definition of bachelor, then this statement equates to, "There are no unmarried married men." Any statement of the form "No A are not-A" is completely uninformative. As it can be decided a priori for any consistent value of A, stating it demonstrates nothing.

If you aren't clear on the meaning of bachelor, then this statement would require a citation of the definition in order to be convincing. This would constitute supporting evidence, and it would serve to demonstrate the meaning of "bachelor."

Thus, this does not go to refute the claim that an assertion without supporting evidence demonstrates nothing, as that is clearly the case here.

comment by Rain · 2010-04-01T20:47:27.403Z · LW(p) · GW(p)

If trees could scream, would we be so cavalier about cutting them down? We might, if they screamed all the time, for no good reason.

-- Jack Handey's Deep Thoughts

comment by CaptainOblivious2 · 2010-04-03T02:23:54.233Z · LW(p) · GW(p)

"All things end badly - or else they wouldn't end"

-- Brian Flanagan (Tom Cruise), Cocktail, 1988. He was referring to relationships, but it's actually a surprisingly general rule.

Replies from: Zubon
comment by Zubon · 2010-04-05T00:52:35.787Z · LW(p) · GW(p)

Almost all relationships end in unhappiness or death. Or unhappiness leading to death.

comment by RobinZ · 2010-04-01T23:33:04.701Z · LW(p) · GW(p)

Blind alley, though. If someone's ungrateful and you tell him he's ungrateful, okay, you've called him a name. You haven't solved anything.

Robert Pirsig, Zen and the Art of Motorcycle Maintenance

comment by Rain · 2010-04-01T20:47:41.124Z · LW(p) · GW(p)

The word agnostic is actually used with the two distinct meanings of personal ignorance and intrinsic unknowability in the same context. They are distinguished when necessary with a qualifier.

WEAK agnosticism: I have no fucking idea who fucked this shit up.
STRONG agnosticism: Nobody has any fucking idea who fucked this shit up.

There is a certain confusion with weak atheism which could (and frequently does) arise, but that is properly reserved for the category of theological noncognitivists,

WEAK atheism: What the fuck do you mean with this God shit?
STRONG atheism: Didn't take any God to fuck this shit up.

which is different again from weak theism.

WEAK theism: Somebody fucked this shit up.
STRONG theism: God fucked this shit up.

An interesting cross-categorical theological belief not easily represented above is

DEISM: God set this shit up and it fucked itself.

-- Snocone, in a Slashdot post

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2010-04-12T12:25:35.721Z · LW(p) · GW(p)

Could someone explain why this has been voted up so much? I didn't find it particularly funny, or to have any non-trivial insight.

Replies from: Bo102010, Rain
comment by Bo102010 · 2010-04-12T12:30:20.761Z · LW(p) · GW(p)

It shoehorns the use of giggle-inducing curse words into an explanation of religious views. Someone who has only ever been exposed to Beavis and Butthead cartoons, and has never heard about "agnosticism," might be able to learn from this type of explanation.

comment by Rain · 2010-04-12T13:50:42.334Z · LW(p) · GW(p)

It presents a quick and easy, bullet-point spectrum of belief, which many people may not know exists.

An anecdotal data point: I linked to this quote when talking to a friend who was using me to vent their anti-theist ideas since they didn't have many other outlets for such thoughts. They laughed, and were able to properly categorize their beliefs (weak atheist) for the first time, rather than thinking themselves some kind of heretic (evil atheist).

That said, I didn't expect it to be this popular, either.

comment by AlexMennen · 2010-04-05T06:06:58.898Z · LW(p) · GW(p)

An atheist walked into a bar, but seeing no bartender he revised his initial assumption and decided he only walked into a room.

http://friendlyatheist.com/2008/02/29/complete-the-atheist-joke-1/

Replies from: Waldheri
comment by Waldheri · 2010-04-05T08:33:47.686Z · LW(p) · GW(p)

My initial response was to chuckle, but when my analytical capacities kicked in a moment later I was disappointed.

If his initial assumption was that he was walking into a bar, does that make him an atheist in this metaphor? Substitute "walked into a bar" with "believed there is a god", the thing I assume it is a metaphor for. You will see it makes no sense.

Replies from: AlexMennen, roundsquare
comment by AlexMennen · 2010-04-05T15:22:03.844Z · LW(p) · GW(p)

Many atheists were formerly theists.

Still, I suppose it might have been better as "A scientist walked into what he thought was a bar, but seeing no bartender, barstools, or drinks, he revised his initial assumption and decided he only walked into a room."

comment by roundsquare · 2010-04-05T09:02:08.198Z · LW(p) · GW(p)

I think it makes sense, as a poke at atheists.

Think about it this way. You walk into a bar, and you see no bartender. In your mind, you say "anything that is a bar will have a bartender. No bartender, not a bar." Of course, the best thing to do before revising your assumptions is to wait for a bartender. Maybe he/she is in the bathroom.

Similarly, if you claim "there is no evidence of god that I've seen in my lifetime," you are using the wrong measure. Why should god (if there is one) make itself obvious during the short period that is a human lifetime?

This is almost an "irrationality quote" instead of a rationality quote, but still enlightening.

Replies from: RobinZ
comment by RobinZ · 2010-04-05T12:40:27.474Z · LW(p) · GW(p)

I was with you up until the "similarly". After that you start privileging the hypothesis - you should expect a god to make itself obvious during a human lifetime, by any description of a god ever proposed in history.

Replies from: roundsquare
comment by roundsquare · 2010-04-07T10:45:50.989Z · LW(p) · GW(p)

I'm not sure I see how I'm privileging the hypothesis. Not saying that I'm not, but if you can explain how, I'd appreciate it.

Aside from that, I think you are using "god" to mean any of the gods discussed by any popular religion. By this definition, I'd probably agree with you.

I was using the word "god" in a much more general sense... not sure I can define it, though; probably something similar to: any "being" that is omnipotent and omniscient, or maybe: any "being" that created reality as we know it. In either definition, there is not really a reason to expect god to make itself obvious to us on any timescale that we consider reasonable. There is no reason to believe that we are special enough that we'd get that kind of treatment.

Replies from: RobinZ
comment by RobinZ · 2010-04-07T10:51:26.064Z · LW(p) · GW(p)

There is no reason to propose such a being - privileging the hypothesis is when you consider a hypothesis before any evidence has forced you to raise that hypothesis to the level of consideration.

Unless you have a mountain of evidence (and I'm guessing it'll have to be cosmological to support a god that hasn't visibly intervened in the world) already driving you to argue that there might be a god, don't bother proposing the possibility.

Replies from: roundsquare
comment by roundsquare · 2010-04-07T11:53:56.661Z · LW(p) · GW(p)

Ah, I see what you are saying. Thanks for the explanation. And you are indeed correct.

comment by gregconen · 2010-04-04T16:52:19.022Z · LW(p) · GW(p)

Do not imagine that mathematics is hard and crabbed, and repulsive to common sense. It is merely the etherealization of common sense.

William Thomson, Lord Kelvin

Replies from: gwern
comment by gwern · 2010-05-01T01:34:09.145Z · LW(p) · GW(p)

One I got while reading Jaynes's Probability Theory recently:

"Probability theory is nothing but common sense reduced to calculation."

-- Laplace

comment by djcb · 2010-04-02T12:20:39.758Z · LW(p) · GW(p)

The white line down the center of the road is a mediator, and very likely it can err substantially towards one side or the other before the disadvantaged side finds advantage in denying its authority.

-- Thomas Schelling, The Strategy of Conflict, p. 144

[The book was mentioned a couple of times here on LW, and is a nice introduction to the use of game theory in geopolitics]

comment by Richard_Kennaway · 2010-04-01T22:03:17.338Z · LW(p) · GW(p)

It is always advisable to perceive clearly our ignorance.

Charles Darwin, "The Expression of the Emotions in Man and Animals", ch. 3.

comment by MichaelGR · 2010-04-05T06:35:53.325Z · LW(p) · GW(p)

"Torture the data long enough and they will confess to anything."

--via The Economist, "a saying of statisticians".

Replies from: gwern, RobinZ
comment by gwern · 2010-05-01T01:31:37.340Z · LW(p) · GW(p)

"With 4 parameters I can fit an elephant, and with 5 I can make him wiggle his trunk."

--von Neumann

comment by RobinZ · 2010-04-05T12:41:31.041Z · LW(p) · GW(p)

I like it, but do you have an issue number?

Replies from: Mass_Driver, MichaelGR
comment by Mass_Driver · 2010-04-05T13:49:50.009Z · LW(p) · GW(p)

My father's been saying that as long as I can remember; he hasn't taken a statistics class since '82.

Replies from: RobinZ
comment by RobinZ · 2010-04-05T14:03:00.758Z · LW(p) · GW(p)

Never mind, then!

comment by MichaelGR · 2010-04-05T21:23:56.225Z · LW(p) · GW(p)

Here is the piece I got it from:

http://www.economist.com/specialreports/displaystory.cfm?story_id=15557465

Replies from: RobinZ
comment by RobinZ · 2010-04-05T21:31:06.450Z · LW(p) · GW(p)

"A different game: Information is transforming traditional businesses", Feb 25th 2010 - thanks!

comment by NancyLebovitz · 2010-04-04T17:30:22.560Z · LW(p) · GW(p)

Are the winners the only ones actually writing the history? We need to disabuse ourselves of this habit of saying things because they sound good.

-- Ta-Nehisi Coates

Coates runs a popular culture, black issues, and history blog with a very strong rationalist approach.

comment by Nic_Smith · 2010-04-03T02:07:05.383Z · LW(p) · GW(p)

[Discarding game] theory in favor of some notion of collective rationality makes no sense. One might as well propose abandoning arithmetic because two loaves and seven fish won't feed a multitude.

-- Ken Binmore, Rational Decisions

Replies from: cousin_it, Nanani
comment by cousin_it · 2010-04-03T11:37:56.136Z · LW(p) · GW(p)

I'm a big fan of Ken Binmore, and this quote captures a lot of my dissatisfaction with LW's directions of inquiry. For example, it's more or less taken for granted here that future superintelligent AIs should cooperate on the Prisoner's Dilemma, so some of us set out to create a general theory of "superintelligent AIs" (including ones built by aliens, etc.) that would give us the answer we like.

Replies from: Zubon
comment by Zubon · 2010-04-05T00:57:39.208Z · LW(p) · GW(p)

Would it be correct to say you mean "should" in the wishful thinking sense of "we really want this outcome," rather than something normative or probabilistic?

Replies from: cousin_it
comment by cousin_it · 2010-04-05T12:12:36.183Z · LW(p) · GW(p)

Good question. The answer's yes, but now I'm wondering whether we really should expect alien-built AIs to be cooperators. I know Eliezer thinks we should.

Replies from: Baughn
comment by Baughn · 2010-04-07T16:24:14.879Z · LW(p) · GW(p)

That is not the impression I got from the story.

The baby-eaters were cooperators, yes; they were also stated to be relatively similar to humanity except for their unfortunate tendency to eat preteens.

The other ones, though? I didn't see them do anything obviously cooperative, but I did see a few events that'd argue against it. The overall impression I got was that we really can't be sure, except that it might be unlikely for both sides of a contact to come out unscathed.

comment by Nanani · 2010-04-05T00:49:05.263Z · LW(p) · GW(p)

Typo-hunt: should read "abandoning arithMetic" (without the capital of course)

Replies from: Nic_Smith
comment by Nic_Smith · 2010-04-05T22:51:06.753Z · LW(p) · GW(p)

Fixed.

comment by aausch · 2010-04-09T03:37:43.765Z · LW(p) · GW(p)

Do not believe in anything simply because you have heard it. Do not believe in anything simply because it is spoken and rumored by many. Do not believe in anything simply because it is found written in your religious books. Do not believe in anything merely on the authority of your teachers and elders. Do not believe in traditions because they have been handed down for many generations. But after observation and analysis, when you find that anything agrees with reason and is conducive to the good and benefit of one and all, then accept it and live up to it.

-- Gautama Buddha

Replies from: ata
comment by ata · 2010-04-12T07:22:21.887Z · LW(p) · GW(p)

I like to point out that spreading this quote is an example of violating it: Buddha never said that. I'm not sure who did originally write it, but it's not found in any Buddhist primary source. "Do not believe in anything simply because it is spoken and rumored by many!"

I've heard it might be a rough paraphrase of a quote from the Kalama Sutta, but in its original form, it would not qualify as a "rationality quote"; it's more a defense of belief in belief, advising people to accept things as true based on whether believing it is true tends to increase one's happiness.

Edit: See RichardKennaway's reply; he is correct about this one. I think I was thinking of a different quote along similar lines.

Replies from: Richard_Kennaway, Jack
comment by Richard_Kennaway · 2010-04-12T09:24:26.456Z · LW(p) · GW(p)

it's not found in any Buddhist primary source

What is a Buddhist primary source? None of the discourses were written down until some centuries after the Buddha's time. The discourses that we have do themselves exist and whatever their provenance before the earliest extant documents, they are part of the canon of Buddhism. The canon has accreted layers over the centuries, but the Kalama Sutta is part of the earliest layer, the Tripitaka.

I've heard it might be

You've heard? That it might be? :-)

It is readily available online in English translation. It attributes these words directly to the Buddha:

Come Kalamas. Do not go upon what has been acquired by repeated hearing; nor upon tradition; nor upon rumor; nor upon what is in a scripture; nor upon surmise; nor upon an axiom; nor upon specious reasoning; nor upon a bias toward a notion that has been pondered over; nor upon another's seeming ability; nor upon the consideration, "The monk is our teacher." Kalamas, when you yourselves know: "These things are bad; these things are blamable; these things are censured by the wise; undertaken and observed, these things lead to harm and ill," abandon them.

and in another translation:

Now, Kalamas, don't go by reports, by legends, by traditions, by scripture, by logical conjecture, by inference, by analogies, by agreement through pondering views, by probability, or by the thought, 'This contemplative is our teacher.' When you know for yourselves that, 'These qualities are skillful; these qualities are blameless; these qualities are praised by the wise; these qualities, when adopted & carried out, lead to welfare & to happiness' — then you should enter & remain in them.

If I had the time, I'd be tempted to annotate the passage with LessWrong links.

ETA: For the second translation, the corresponding paragraph is actually the one preceding the one I quoted. The sutta in fact contains three paragraphs listing these ten faulty sources of knowledge. Buddhist scriptures are full of repetitions and lists, probably to assist memorisation.

ETA2: Rationalist version: Do not rest on weak Bayesian evidence, but go forth and collect strong.

comment by Jack · 2010-04-12T07:59:05.228Z · LW(p) · GW(p)

Great catch. Upvoted.

I've heard it might be a rough paraphrase of a quote from the Kalama Sutta, but in its original form, it would not qualify as a "rationality quote"; it's more a defense of belief in belief, advising people to accept things as true based on whether believing it is true tends to increase one's happiness.

I actually don't think this is right though. I'm pretty sure the original form is about the importance of personal knowledge from direct experience. I think the wikipedia article makes this clear, actually. I suppose you're taking your reading from:

Kalamas, when you yourselves know: "These things are good; these things are not blamable; these things are praised by the wise; undertaken and observed, these things lead to benefit and happiness," enter on and abide in them.'

But the emphasis here should be on "when you yourselves know", not "these things lead to benefit and happiness". Keep in mind the kind of teachings being addressed are often strategies for happiness so it makes sense to be concerned with whether or not a teaching really does increase happiness.

I don't see why we can't take it as an injunction to trust only experiment and observation. It seems about right to me.

(ETA: Except of course he's talking about meditation, not experiment, and ignores self-deception, the placebo effect, brain diversity, and the all-important intersubjective confirmation, but I'll take what I can get from the 5th century B.C.E.)

comment by beriukay · 2010-04-05T11:58:43.949Z · LW(p) · GW(p)

"It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence." ~William Kingdon Clifford

This is the quote that got me thinking about rationality as something other than "a word you use to describe things you believe so that you can deride those who disagree with you."

Replies from: RobinZ
comment by RobinZ · 2010-04-05T14:17:29.407Z · LW(p) · GW(p)

One of the most insidious sources of confusion, I find, is the distinction between the meaning of a word and its most frequent uses. It ties into the whole "Applause Lights" phenomenon, particularly "Fake Norms".

P.S. Belatedly: Welcome to Less Wrong! Feel free to introduce yourself in that thread.

comment by Richard_Kennaway · 2010-04-01T22:41:27.293Z · LW(p) · GW(p)

Gall's Law:

A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.

John Gall, "Systemantics"

Replies from: Peter_de_Blanc, Eliezer_Yudkowsky, Lightwave
comment by Peter_de_Blanc · 2010-04-02T18:02:21.089Z · LW(p) · GW(p)

The "inverse proposition" given is actually the contrapositive of (i.e. is equivalent to) the original statement.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-04-02T17:44:06.607Z · LW(p) · GW(p)

Counterexample: Space shuttle.

Replies from: NMJablonski, RolfAndreassen, cousin_it, kodos96
comment by NMJablonski · 2010-04-02T19:44:10.149Z · LW(p) · GW(p)

Evolved from both simpler winged aircraft and simpler rockets.

All the base components that went into the space shuttle still existed on a line of technological progress from the basic to the advanced. Actually, the space shuttle followed Gall's Law precisely.

The lift mechanism was still vertically stacked chemical rockets of the sort that had already flown for decades. The shuttle unit was built from components perfected by the Gemini and Apollo programs, and packed into an aerodynamic form based on decades of aircraft design.

Reducing technologically, the shuttle still depends on simple systems like airfoils, rockets and nozzles, gears, and other known quantities.

Replies from: Eliezer_Yudkowsky, soreff
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-04-02T23:03:59.594Z · LW(p) · GW(p)

Then if that qualifies, what would falsify Gall's Law?

Replies from: NMJablonski, NMJablonski, Tyrrell_McAllister
comment by NMJablonski · 2010-04-02T23:58:35.569Z · LW(p) · GW(p)

Further reply:

I was contemplating this exchange and wondering whether Gall's Law has any value (constrains expected experience).

I think it does. If an engineer today claimed to have successfully designed an Alcubierre engine, I would probably execute an algorithm similar to Gall's Law and think:

The technology does not yet exist to warp space to any degree, nor is there an existing power source which could meet the needs of this device. The engineer's claim to have developed a device which can be bound to a craft, controllably warp space, and move it faster than light is beyond existing technological capability. We are too many Gall Steps away for it to be probable.

comment by NMJablonski · 2010-04-02T23:20:35.839Z · LW(p) · GW(p)

The first development of the electronic circuit would have been a case of a complex technological system that worked, but was not based fundamentally upon existing simpler machines. The first use of chemical propulsion - gunpowder / rocketry - might have been a similar case.

(EDIT: Upon further consideration, chemical propulsion is based upon the simpler technologies of airtight confinement and incendiary materials. However, I still think the electronic circuit was effectively the rise of a new fundamental device with complex behavior unconnected to more basic technologies. If anyone thinks they can reduce the circuit to simpler working devices I would be fascinated to explore that.)

It's a good question. I'm turning over various possibilities in my mind.

Do you still hold that the space shuttle falsifies it?

If so, I'd be interested in hearing your reasoning, and other examples you consider similar.

Replies from: Strange7
comment by Strange7 · 2010-04-03T16:05:58.470Z · LW(p) · GW(p)

The first development of the electronic circuit would have been a case of a complex technological system that worked, but was not based fundamentally upon existing simpler machines.

Electroplating and electrolysis of water both involve a circuit, but aren't overwhelmingly complex. Samuel Thomas von Sömmerring's electrochemical telegraph was based on electrolysis. It's not like someone pulled doped silicon semiconductors straight out of the lightning-struck sand.

Replies from: NMJablonski
comment by NMJablonski · 2010-04-03T16:13:09.099Z · LW(p) · GW(p)

True, +1 for a thoughtful answer.

However, I still don't see the circuit as reducible to simpler working components. Regardless of the medium across which the current flows, it still seems to me that the circuit is a simple machine - a basic device like the pulley, joint, inclined plane, or lever.

In considering this, I also think that chemical fuels are simple machines and belong on that list, as they are ostensibly devices (can be used by an agent to do work) but also aren't reducible to simpler working components.

comment by Tyrrell_McAllister · 2010-04-02T23:38:31.052Z · LW(p) · GW(p)

Basically, the shuttle is a system of rockets carrying a space-worthy airplane as payload. Both of these components had predecessors. Had the shuttle been the first rocket or first space-worthy airplane, it would have falsified Gall's Law.

Replies from: NMJablonski
comment by NMJablonski · 2010-04-02T23:45:00.667Z · LW(p) · GW(p)

I'm not sure.

Isn't the first rocket or airplane also built on simple technologies?

Couldn't one continue to reduce components to simpler devices until you get to basic joints, inclined planes, tensors (springs), incendiary materials (fuel), etc - that all would have had to be developed and understood before an engineer could design the rocket / airplane?

(EDIT: I realize that I'm essentially positing that Gall's Law holds if all technology is reducible to simple machines, and that what we call "technology" is improving, refining, and combining those designs.)

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-04-02T23:57:53.811Z · LW(p) · GW(p)

I'm not sure.

Isn't the first rocket or airplane also built on simple technologies?

I'm not saying that the first rocket and first airplane falsified Gall's Law. I'm saying that, had the space shuttle, in the form in which it was actually built, been the first rocket or the first airplane, it would have falsified Gall's Law.

Replies from: gregconen, NMJablonski
comment by gregconen · 2010-04-03T00:22:50.068Z · LW(p) · GW(p)

Suppose a hyperintelligent alien race did build a space shuttle equivalent as their first space-capable craft, and then went on to build interplanetary and interstellar craft.

Alien 1: The [interstellar craft, driven by multiple methods of propulsion and myriad components] disproves Gall's Law.

Alien 2: Not at all. [Craft] is a simple extension of well-developed principles like the space shuttle and the light sail.

You can simply define a "working simple system" as whatever you can make work, making that a pure tautology.

Replies from: Emile, NMJablonski
comment by Emile · 2010-04-06T09:55:42.829Z · LW(p) · GW(p)

I would say that Gall's Law is about the design capacities of human beings (like Dunbar's Number), or is something like "there's a threshold to how much new complexity you can design and expect to work", with the amount of complexity being different for humans, superintelligent aliens, chimps, or Mother Nature.

(the limit is particularly low for Mother Nature - she makes smaller steps, but gets to make many more of them)

Replies from: gregconen
comment by gregconen · 2010-04-06T18:38:42.319Z · LW(p) · GW(p)

That's not my point. My point is that Gall's law is unfalsifiable by anything short of Omega converting its entire light cone into computronium/utilium in a single, Planck-time step.

Edit: Not to say that Gall's Law can't be useful to keep in mind during engineering design.

comment by NMJablonski · 2010-04-03T01:13:15.120Z · LW(p) · GW(p)

I agree.

All of these concepts are imprecisely connected to the real world. Does anyone have an idea for how we could more precisely define Gall's Law to more ably discuss real expected experience?

I'm considering a definition which might include the phrase:

"Reducible to previously understood components"

Replies from: gwillen
comment by gwillen · 2010-04-04T20:41:52.156Z · LW(p) · GW(p)

I think the key insight here is that you get a limited number of bits, in design space, to bridge between things that have already been shown to work, and things that have yet to be shown to do so.

For purposes of Gall's law, we are interested in the number of bits of design that went into the space shuttle without ever having been previously shown to work. So you have to subtract off the complexity of "the idea of an airplane", which we already had, and of the solid fuel booster rockets, which we already knew how to build; and also of any subassembly which got built and tested successfully in a lab first -- but perhaps leaving some bits or fraction of a bit to account for the unknown environment when using them on the real shuttle, versus in the lab.
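One crude way to make this concrete (a toy model of my own, with made-up numbers): treat each genuinely untested design decision as an independent chance of failure, and count the whole system as working only if every decision comes off. The probability of success then falls off geometrically with the number of untested bits:

```python
# Toy model: a from-scratch design works only if every untested design
# decision works. The per-decision probability of 0.9 is purely illustrative.
def p_success(p_per_decision: float, n_untested: int) -> float:
    """Probability the whole system works on the first try."""
    return p_per_decision ** n_untested

for n in (3, 10, 30, 100):
    print(f"{n:>3} untested decisions: {p_success(0.9, n):.5f}")

# Sample output:
#   3 untested decisions: 0.72900   (plausible to get right from scratch)
#  30 untested decisions: 0.04239   (almost certainly needs iteration)
# 100 untested decisions: 0.00003
```

On this reading, "start from a working simple system" just means: never let the number of simultaneously untested decisions grow large.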

Replies from: Tyrrell_McAllister, NMJablonski
comment by Tyrrell_McAllister · 2010-04-08T00:53:46.120Z · LW(p) · GW(p)

I think the key insight here is that you get a limited number of bits, in design space, to bridge between things that have already been shown to work, and things that have yet to be shown to do so.

That is a very helpful way to put it: "Gall's Law" is the claim that there is this limited number of bits.

Of course, put so clearly, it looks kind of trivial, so I think that we should read Gall as further saying that you can get a reasonable intuitive bound on this limit by just looking at the history of innovation, but that people often propose designs when a little reasonable reflection would have shown them that they are proposing to step far beyond this limit.

comment by NMJablonski · 2010-04-05T14:11:27.715Z · LW(p) · GW(p)

This is an excellent idea - quantifying design information in bits.

It would also demonstrate that if a designer started at the "space shuttle" level of complexity and laid out a rough design, that design would probably change drastically as the components were built and tested, and the designer collected more bits of information about how to make the complex system work.

comment by NMJablonski · 2010-04-02T23:59:21.348Z · LW(p) · GW(p)

Ah, I understand.

Total agreement.

comment by soreff · 2010-04-04T20:14:09.233Z · LW(p) · GW(p)

The lift mechanism was still vertically stacked chemical rockets of the sort that had already flown for decades.

The Columbia shuttle crew would still be with us if this were correct.

Replies from: NMJablonski
comment by NMJablonski · 2010-04-05T14:19:37.594Z · LW(p) · GW(p)

True, the space shuttle was not completely contained on its vertical axis, but I was talking about the boosters themselves. I said the lift mechanism was a vertically stacked chemical rocket, not that the entire shuttle was a uniform tower, as it obviously wasn't.

The boosters are components of the space shuttle, which is what we were talking about: simpler working components evolving into complex systems.

Simple working component: Rocket booster

Complex system: Shuttle with a crew module, fuel tanks, and multiple boosters

comment by RolfAndreassen · 2010-04-02T21:10:30.551Z · LW(p) · GW(p)

In addition to NMJablonski's point, it is perhaps arguable just how well the Space Shuttle worked. In hindsight it seems that the same amount of orbital lift capacity could have been achieved rather more cheaply.

Replies from: JulianMorrison
comment by JulianMorrison · 2010-04-06T13:27:22.722Z · LW(p) · GW(p)

It works for a job it isn't used for: launching into a polar orbit to emplace secret military satellites, and gliding a very long distance back to base without a need for a splashdown recovery that might risk its secrecy.

That's what gave it the wings, and once you have the wings the rest of the design follows.

comment by cousin_it · 2010-04-03T12:05:52.695Z · LW(p) · GW(p)

It doesn't qualify 100%, because there were little prototype shuttles. Still, you have a point. If we have good theories, we can build pretty big systems from scratch. Gall's law resonates especially strongly with programmers because much of programming doesn't have good theories, and large system-building endeavors fail all the time.

Replies from: NMJablonski
comment by NMJablonski · 2010-04-03T14:16:59.268Z · LW(p) · GW(p)

Even if there hadn't been prototype shuttles, the shuttle is still reducible to simpler components. Gall's Law just articulates that before you can successfully design something like the space shuttle, you have to understand how all of its simpler components work.

If an engineer (or even a transhuman AI) had sat down and started trying to design the space shuttle without knowledge of rocketry, aerodynamics, circuits, springs, or screws, it would be pulling from a poorly constrained section of the space of possible designs, and would be unlikely to get something that works.

The way this problem is solved is to work backwards until you get to simple components. The shuttle designer realizes his shuttle will need wings, so starts to design the wing, realizes the wing has a materials requirement, so starts to develop the material. He continues to work back until he gets to the screws and rivets that hold the wing together, and other simple machines.

In engineering, once you place the first atom in your design, you have already made a choice about atomic mass and charge. Complex patterns of atoms like space shuttles will include many subdivisions (components) that must be designed, and Gall's Law illustrates that they must be designed and understood before the designer has a decent chance of the space shuttle working.

Replies from: cousin_it
comment by cousin_it · 2010-04-03T15:43:18.282Z · LW(p) · GW(p)

I think you completely miss the point of Gall's law. It's not about understanding individual components. Big software projects still fail, even though we understand if-statements and for-loops pretty well.

Replies from: NMJablonski
comment by NMJablonski · 2010-04-03T15:58:47.979Z · LW(p) · GW(p)

I know that.

It's about an evolution from simpler systems to more complex systems. Various design phases of the space shuttle aren't what falsify that example. It's the evolution of rocket propulsion, aircraft, and spacecraft, and their components.

(EDIT: Also, at no point was I suggesting that understanding of components guarantees success in designing complex systems, but that it is necessary. For a complex system to work it must have all working components, reduced down to the level of simple machines. Big software projects would certainly fail if the engineers didn't have knowledge of if-statements and for-loops.)

comment by kodos96 · 2010-04-06T17:37:53.246Z · LW(p) · GW(p)

Counterexample: Space shuttle.

Really? I think only 6 of them were built, and 2 of those suffered catastrophic failure with all hands lost.

comment by Lightwave · 2010-04-03T16:03:30.582Z · LW(p) · GW(p)

Counterexample: a complex computer program designed and written from scratch.

Replies from: Richard_Kennaway, pjeby
comment by Richard_Kennaway · 2010-04-03T19:43:43.783Z · LW(p) · GW(p)

I've written some of those. And every time, I test everything I write as I go, so that at every stage from the word go I have a working program. The big bang method, of writing everything first, then running it, never works.

Replies from: sketerpot
comment by sketerpot · 2010-04-05T21:00:17.336Z · LW(p) · GW(p)

The "big bang" sometimes happens to me when I write in Haskell. After I fix all the compiler errors, of course. I just wish there were a language with a type system that can detect almost as many errors as Haskell's without having quite such a restrictive, bondage-fetish feel to it.

But yeah, in general, only trivial programs work the first time you run them. That's a good definition of trivial, actually.

comment by pjeby · 2010-04-03T16:32:45.930Z · LW(p) · GW(p)

Counterexample: a complex computer program designed and written from scratch.

...and that worked the very first time? How often does that happen?

The quote is a rule of thumb and an admonition to rational humility, not a law of the universe.

Replies from: Lightwave, NMJablonski
comment by Lightwave · 2010-04-04T08:26:55.663Z · LW(p) · GW(p)

Well "never works and cannot be made to work" does sound a bit strong to me.

comment by NMJablonski · 2010-04-03T16:40:29.212Z · LW(p) · GW(p)

I agree it's probably not a law of the universe, as I cannot rule out possible minds that could falsify it. However, I cannot from within my mind (human capabilities) see a case where a complex system could work before each of its parts had been made to work.

comment by Piglet · 2010-04-05T15:14:01.448Z · LW(p) · GW(p)

"Face the facts. Then act on them. It's the only mantra I know, the only doctrine I have to offer you, and it's harder than you'd think, because I swear humans seem hardwired to do anything but. Face the facts. Don't pray, don't wish, don't buy into centuries-old dogma and dead rhetoric. Don't give in to your conditioning or your visions or your fucked-up sense of... whatever. FACE THE FACTS. THEN act."

--- Quellcrist Falconer, speech before the assault on Millsport. (Richard Morgan, Broken Angels)

Replies from: Eneasz
comment by Eneasz · 2010-04-05T16:49:49.804Z · LW(p) · GW(p)

My personal favorite from this trilogy is the whole "They say it's not personal, it's business. Well it's personal for us. And we must make it personal for them." (paraphrased)

comment by NancyLebovitz · 2010-04-04T17:40:11.853Z · LW(p) · GW(p)

"In the animal kingdom, the rule is, eat or be eaten; in the human kingdom, define or be defined."

Thomas Szasz

comment by anonym · 2010-04-04T01:07:36.856Z · LW(p) · GW(p)

The scientist is not a person who gives the right answers, he's one who asks the right questions.

Claude Lévi-Strauss

comment by MichaelGR · 2010-04-05T06:36:34.515Z · LW(p) · GW(p)

"A lie is a lie even if everyone believes it. The truth is the truth even if nobody believes it."

-David Stevens

comment by Amanojack · 2010-04-03T18:16:57.603Z · LW(p) · GW(p)

We originally want or desire an object not because it is agreeable or good, but we call it agreeable or good because we want or desire it.

-- Ludwig von Mises, Epistemological Problems of Economics

Replies from: AlanCrowe
comment by AlanCrowe · 2010-04-03T21:45:41.259Z · LW(p) · GW(p)

This reminds me of B. F. Skinner's criticism of William James

A long time ago William James corrected a prevailing view of the relation between feelings and action by asserting, for example, that we do not run away because we are afraid but are afraid because we run away. In other words, what we feel when we feel afraid is our behaviour -- the very behaviour which in the traditional view expresses the feeling and is explained by it. But how many of those who have considered James's argument have noted that no antecedent event has in fact been pointed out? Neither 'because' should be taken seriously. No explanation has been given as to why we run away and feel afraid.

Before he can add something of substance to the discussion of the epistemological problems of economics, Ludwig von Mises must look back in time, to previous events, and offer them as the explanation of why we want or desire things and why we also call those things agreeable or good.

Replies from: Amanojack
comment by Amanojack · 2010-04-03T22:30:21.918Z · LW(p) · GW(p)

I think Mises's point is rather that concepts like "good," "bad," "evil," "right," "wrong," "ought to" and "rights" all reduce back down to variations on "I desire it"/"It brings me pleasure" and the opposite. In other words, all ethical systems are dressed up (subjective) consequentialism and they only appear otherwise due to semantic confusion.

Replies from: None
comment by [deleted] · 2010-04-06T03:35:40.492Z · LW(p) · GW(p)

.

Replies from: Amanojack, ata
comment by Amanojack · 2010-04-06T14:27:03.712Z · LW(p) · GW(p)

Imagine that you got no satisfaction at all from bringing pleasure to others, but you did it anyway. What would be the reason?

Replies from: None
comment by [deleted] · 2010-04-09T00:43:55.418Z · LW(p) · GW(p)

.

Replies from: None
comment by [deleted] · 2010-04-09T01:04:33.188Z · LW(p) · GW(p)

.

comment by ata · 2010-04-06T04:13:49.141Z · LW(p) · GW(p)

The response to that would be that you only do things that give others pleasure because the feeling of helping others is pleasurable to you or because you expect something in return, and that if neither of those were the case, you wouldn't do it. (I don't necessarily agree with that — I'm pretty sure I don't — but I do believe that's how they'd reduce it.)

comment by Rain · 2010-04-01T20:47:09.994Z · LW(p) · GW(p)

As you can easily imagine we often ask ourselves here despairingly: "What, oh, what is the use of the war? Why can't people live peacefully together? Why all this destruction?"

The question is very understandable, but no one has found a satisfactory answer to it so far. Yes, why do they make still more gigantic planes, still heavier bombs and, at the same time, prefabricated houses for reconstruction? Why should millions be spent daily on the war and yet there's not a penny available for medical services, artists, or for poor people?

Why do some people have to starve, while there are surpluses rotting in other parts of the world? Oh, why are people so crazy?

I don't believe that the big men, the politicians and the capitalists alone, are guilty of the war. Oh no, the little man is just as guilty, otherwise the peoples of the world would have risen in revolt long ago! There's in people simply an urge to destroy, an urge to kill, to murder and rage, and until all mankind, without exception, undergoes a great change, wars will be waged, everything that has been built up, cultivated, and grown will be destroyed and disfigured, after which mankind will have to begin all over again.

-- Anne Frank, 3 May 1944, aged 14

Replies from: wedrifid
comment by wedrifid · 2010-04-01T21:06:13.131Z · LW(p) · GW(p)

I don't believe that the big men, the politicians and the capitalists alone, are guilty of the war. Oh no, the little man is just as guilty, otherwise the peoples of the world would have risen in revolt long ago!

Rising in revolt tends to mean civil war. Perhaps if she thought that through a little more she would find at least one answer. One way to stop other crazy people from destroying things you value is to kill them.

Replies from: Rain
comment by Rain · 2010-04-02T16:33:22.606Z · LW(p) · GW(p)

Defense is necessary, I agree. But perhaps the revolt she was looking for was one of peaceful protest on both sides. The leaders can't do much damage without followers and supporters, armies and engineers. This site has already covered many of the biases which would lead one to support war, regardless of cause.

The other side seems to agree:

What luck for rulers that men do not think.

-- Adolf Hitler

Replies from: gwern, wedrifid
comment by gwern · 2011-04-09T19:00:50.184Z · LW(p) · GW(p)

Haven't been able to find a cite for this. It's listed as unsourced on Wikiquote, no real sources in a Google search, Google Books can only attribute it to a random quote collection book, and volume 106 of U.S. News & World Report attributes it to Yassir Arafat!

comment by wedrifid · 2010-04-05T08:19:23.729Z · LW(p) · GW(p)

What luck for rulers that men do not think.

-- Adolf Hitler

Too true!

comment by JamesPfeiffer · 2010-04-24T06:21:57.987Z · LW(p) · GW(p)

Evolving a threat response over a half-million years on the African savannah hasn't really left me with any good mechanisms for dealing with a threatening number.

PartiallyClips

comment by anonym · 2010-04-04T01:18:52.871Z · LW(p) · GW(p)

Fundamental progress has to do with the reinterpretation of basic ideas.

Alfred North Whitehead

Replies from: anonym
comment by anonym · 2010-04-04T01:34:58.298Z · LW(p) · GW(p)

The great advances in science usually result from new tools rather than from new doctrines.

Freeman Dyson

comment by billswift · 2010-04-03T04:16:09.312Z · LW(p) · GW(p)

If you can be sure of being right only 55 percent of the time, you can go down to Wall Street and make a million dollars a day. If you can't be sure of being right even 55 percent of the time, why should you tell other people they are wrong?

Dale Carnegie, How to Win Friends and Influence People
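Taking the 55 percent literally for a moment: on even-money bets about market direction, with stakes sized by the Kelly criterion (both of these are my illustrative assumptions, not anything Carnegie specified), the edge really would compound ferociously:

```python
from math import exp, log

# Even-money bets on market direction, won with probability p.
# Kelly criterion for even-money bets: stake the fraction f* = 2p - 1 of bankroll.
p = 0.55
f = 2 * p - 1                                  # 0.10 of bankroll per bet
g = p * log(1 + f) + (1 - p) * log(1 - f)      # expected log-growth per bet
print(f"edge per bet: {f:.2f}, log-growth per bet: {g:.4f}")
print(f"typical compounded growth over 250 daily bets: {exp(250 * g):.1f}x")

# edge per bet: 0.10, log-growth per bet: 0.0050
# typical compounded growth over 250 daily bets: 3.5x
```

Not quite a million dollars a day, but enough to show how extraordinary a genuine, persistent 55% edge would be; the replies below get at the real problem, which is that being 55% sure of ordinary propositions is not the same as having that edge.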

Replies from: sketerpot, JGWeissman, MichaelVassar
comment by sketerpot · 2010-04-05T20:38:50.055Z · LW(p) · GW(p)

Rephrase that and it sounds nonsensical: "If you can't outperform the stock market, then how can you be sure of anything?" I think Carnegie was just looking for a glib rationalization for his advice to avoid contradicting people whom you want to like you.

comment by JGWeissman · 2010-04-04T19:48:09.237Z · LW(p) · GW(p)

What if I am right 9 times out of 10 when I say I am 90% sure of something, but I am never or very rarely more than 50% sure of propositions of the form "This stock's price will go up/down, over a relevant time frame"?

comment by MichaelVassar · 2010-04-11T00:10:36.121Z · LW(p) · GW(p)

Death by Löb's Theorem to this quote.

comment by [deleted] · 2010-04-02T13:56:15.131Z · LW(p) · GW(p)

O Lord, make our enemies quite ridiculous!

--Voltaire

comment by Richard_Kennaway · 2010-04-01T22:02:39.440Z · LW(p) · GW(p)

A final goal of any scientific theory must be the derivation of numbers. Theories stand or fall, ultimately, upon numbers.

Richard Bellman, "Eye of the Hurricane"

Replies from: Liron, Waldheri
comment by Liron · 2010-04-02T04:30:18.269Z · LW(p) · GW(p)

Sounds like a traditional-rationality precursor to "hypotheses are expectation-constrainers".

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-04-02T06:33:49.526Z · LW(p) · GW(p)

It doesn't sound like that to me. Can you elaborate?

Replies from: None
comment by [deleted] · 2010-04-02T17:00:29.466Z · LW(p) · GW(p)

Well, the quote could be interpreted as "Any scientific theory must ultimately produce some numbers, so that reality can be measured and we can see whether the numbers match."

Another interpretation is "A scientific theory ultimately isn't a scientific theory at all unless it's essentially a set of equations."

Replies from: RobinZ
comment by RobinZ · 2010-04-05T14:07:50.154Z · LW(p) · GW(p)

I agree, although I think the second sentence ("Theories stand or fall, ultimately, upon numbers") is sufficient to justify the former interpretation.

comment by Waldheri · 2011-04-03T14:13:27.467Z · LW(p) · GW(p)

Counterexample: the theory of evolution is one of the most successful scientific theories, yet it contains no equations, nor numbers. It is rather a framework of ideas in which observations can be made sense of.

Replies from: Richard_Kennaway, Perplexed, PhilGoetz
comment by Richard_Kennaway · 2011-04-03T17:12:55.423Z · LW(p) · GW(p)

Price's Equation? Fisher's fundamental theorem? Hardy-Weinberg law?

comment by Perplexed · 2011-04-03T17:44:03.775Z · LW(p) · GW(p)

I tend to agree with you that numbers are inessential in a scientific theory, and that Darwin's theory is a good example of this. But your critics also have a point that some nice math has been added to the theory since Darwin's time. (Not enough of a point to justify downvoting you, though, IMHO).

As a smaller scale example of a non-numerical scientific theory, consider the theory that the historical branching order of the Great Ape family tree is "First orangutan, then gorilla, then man, leaving the two species of chimp." That is a meaningful and testable scientific theory as it stands, even though there are no numbers involved. But what spoils my example a little is the observation that this theory is improved by adding numbers. "Orangutan branched ~12M years ago, gorilla 6M, man 5M, bonobo 0.5M."

Replies from: Sniffnoy, None
comment by Sniffnoy · 2011-04-03T20:31:23.845Z · LW(p) · GW(p)

This does highlight a problem in the insistence on numbers, though. What's required is not numbers but mathematics, something we can formalize. Classical theories dealt largely in real numbers and functions of real numbers but there's nothing wrong with a theory we get trees out of instead of numbers. (Of course, we can then use numbers in describing those - numbers are enormously useful - but they don't need to be the direct result.)

comment by [deleted] · 2011-04-03T18:09:59.138Z · LW(p) · GW(p)

the historical branching order of the Great Ape family tree is "First orangutan, then gorilla, then man, leaving the two species of chimp." That is a meaningful and testable scientific theory as it stands, even though there are no numbers involved.

Well, the numbers 1,2,3 do show up implicitly here, in the ordering.

comment by PhilGoetz · 2011-04-03T17:30:07.114Z · LW(p) · GW(p)

The theory of evolution was discredited around, IIRC, 1900, because the math didn't work out, because people didn't know genes were discrete, and thought they were analog. It was resurrected after people learned genes were discrete, and found the math worked.

(I haven't looked at this math myself, so I could be wrong.)

Replies from: Perplexed, None
comment by Perplexed · 2011-04-03T18:45:02.317Z · LW(p) · GW(p)

There is some minor confusion here that it may be worth clearing up. The mathematical 'disproof of Darwin' you seem to be thinking of was the work of Fleeming Jenkin who wrote to Darwin with his objections around 1870.

Jenkin's argument was based on the reasonable-at-the-time assumption of 'blending inheritance', the idea that the features of an organism (height, say) should simply be the average of the features of its parents, plus or minus a random perturbation. Jenkin showed that if this were how heredity worked, then natural selection would be almost completely ineffective.

Darwin was troubled by Jenkin's argument, in part because he did not understand the math. One of his correspondents commiserated:

The mathematicians must be a singularly happy race, seeing that they alone of men are competent to think about the facts of the cosmos. ... Mathematics are ... the sciences of number and measurement, and as such, one is at a loss to perceive why they should be so essentially necessary to enable a man to think fairly and well upon other subjects. But it is, as you once said, that when a man is to be killed by the sword mathematical, he must not have the satisfaction of even knowing how he is killed.

Jenkin's objections were never all that influential, because no one else understood the math either. But the rediscovery of Mendel around 1900 provided the needed correction to the 'blending inheritance' assumption. Fisher 'did the math' refuting Jenkin around 1920, and republished his argument as the first chapter of his book in 1930. Available online and definitely worth a read.

Unfortunately, the Wikipedia article on Jenkin confuses his 'refutation' of Darwin with that of William Thomson (Lord Kelvin), who wrote in 1897 that the earth "was more than 20 and less than 40 million years old, and probably much nearer 20 than 40". His argument was based on how long it would take for the core of the earth to cool down to its current (roughly calculable) level. Lord Kelvin's math was right, but he failed to take into account the heating effects of radioactive decay. Radioactivity was first discovered in 1896 and was still not well understood in Kelvin's time. Rutherford finally disposed of this 'mathematical refutation of Darwin' around 1910.

Replies from: timtyler
comment by timtyler · 2011-04-07T16:32:49.935Z · LW(p) · GW(p)

Unfortunately, the Wikipedia article on Jenkin confuses his 'refutation' of Darwin with that of William Thomson (Lord Kelvin), who wrote in 1897 that the earth "was more than 20 and less than 40 million years old, and probably much nearer 20 than 40".

OMG! I checked. It is true! What a mess!

comment by [deleted] · 2011-04-03T18:41:34.740Z · LW(p) · GW(p)

The theory of evolution was discredited around, IIRC, 1900

The only source I found semi-supporting that comes from a document which reads strangely, as if it is teaching the "controversy". Look at the wording:

By 1900, natural selection had been so discredited that few scientists accepted it as the mechanism of evolution. By all accounts, however, they all accepted the so-called “fact” that species evolve.

The so-called "fact"!

comment by wnoise · 2010-04-01T20:45:00.943Z · LW(p) · GW(p)

True Knowledge:

Life is a process of breaking down and using other matter, and if need be, other life. Therefore, life is aggression, and successful life is successful aggression. Life is the scum of matter, and people are the scum of life. There is nothing but matter, forces, space and time, which together make power. Nothing matters, except what matters to you. Might makes right, and power makes freedom. You are free to do whatever is in your power, and if you want to survive and thrive you had better do whatever is in your interests. If your interests conflict with those of others, let the others pit their power against yours, everyone for theirselves. If your interests coincide with those of others, let them work together with you, and against the rest. We are what we eat, and we eat everything.

All that you really value, and the goodness and truth and beauty of life, have their roots in this apparently barren soil.

This is the true knowledge.

We had founded our idealism on the most nihilistic implications of science, our socialism on crass self-interest, our peace on our capacity for mutual destruction, and our liberty on determinism. We had replaced morality with convention, bravery with safety, frugality with plenty, philosophy with science, stoicism with anesthetics and piety with immortality. The universal acid of the true knowledge had burned away a world of words, and exposed a universe of things.

Things we could use.

--Ken MacLeod, The Cassini Division

Replies from: wedrifid, Mass_Driver, Strange7, NMJablonski, Richard_Kennaway
comment by wedrifid · 2010-04-01T21:11:31.705Z · LW(p) · GW(p)

Life is a process of breaking down and using other matter, and if need be, other life. Therefore, life is aggression, and successful life is successful aggression. Life is the scum of matter, and people are the scum of life.

'True Knowledge'? Only if you include the capital 'T' and 'K'!

This is not 'the universal acid of the true knowledge burning away a world of words'. It's just a world of words.

Replies from: wnoise
comment by Mass_Driver · 2010-04-02T05:28:07.034Z · LW(p) · GW(p)

Currently, humans don't work that way. I mean, sure, we want to survive, and will do a lot of nasty things for it, but if you actually internalize nihilism, crass self-interest, and convention as your moral foundation, then the result will NOT be goodness or truth or beauty. To win, you have to be aware of the mundane roots of things without celebrating them.

See, e.g., Gall's Law and/or Goodhart's Law.

Replies from: wnoise
comment by wnoise · 2010-04-02T06:54:28.997Z · LW(p) · GW(p)

No, currently we don't. If we want our values to survive, then we must win. If we want to win, we have nothing else to place our values on besides this "apparently barren soil".

Think of it as the converse of the following Terry Pratchett dialog between Susan and Death in Hogfather:

"All right," said Susan. "I'm not stupid. You're saying humans need... fantasies to make life bearable."

"REALLY? AS IF IT WAS SOME KIND OF PINK PILL? NO. HUMANS NEED FANTASY TO BE HUMAN. TO BE THE PLACE WHERE THE FALLING ANGEL MEETS THE RISING APE"

"Tooth fairies? Hogfathers? Little- "

"YES. AS PRACTICE. YOU HAVE TO START OUT LEARNING TO BELIEVE THE LITTLE LIES"

"So we can believe the big ones?"

"YES. JUSTICE. MERCY. DUTY. THAT SORT OF THING"

"They're not the same at all!"

"YOU THINK SO? THEN TAKE THE UNIVERSE AND GRIND IT DOWN TO THE FINEST POWDER AND SIEVE IT THROUGH THE FINEST SIEVE AND THEN SHOW ME ONE ATOM OF JUSTICE, ONE MOLECULE OF MERCY. AND YET-- " Death waved a hand. "AND YET YOU ACT AS IF THERE IS SOME IDEAL ORDER IN THE WORLD, AS IF THERE IS SOME... SOME RIGHTNESS IN THE UNIVERSE BY WHICH IT MAY BE JUDGED"

"Yes, but people have got to believe that, or what's the point---"

"MY POINT EXACTLY"

Replies from: Mass_Driver
comment by Mass_Driver · 2010-04-02T07:12:00.116Z · LW(p) · GW(p)

I enjoyed the Pratchett dialogue, but I am not sure I learned from it -- I wind up empathizing with both characters. Are you agreeing with me? Disagreeing with me? What is the converse of a dialogue? I'm confused.

I think part of what bothers me about your Cassini quote is that the claims in the first paragraph are overstated, especially coming from a character who is (presumably) a metaethical nihilist/egoist.

Life is the scum of matter, and people are the scum of life.

Why, is it so wrong to eat things? Eating seems normal and natural to me; an activity to be celebrated. "Scum" is a kind of life that prevents our usual foods from being healthy for us -- it is thus an odd insult for a carnivore.

Nothing matters, except what matters to you. Might makes right, and power makes freedom.

Why? If I firmly estimate that other minds exist, does the existence of those minds depend upon my estimation? If other minds exist, why should what matters to them be irrelevant? What does it even mean to say that "might makes right" except that I plan to ignore the concept of "right"? When, in the course of human events, has the power to ignore morality left people truly free?

You are free to do whatever is in your power, and if you want to survive and thrive you had better do whatever is in your interests.

Really? All the time? Is the world so grim that I must spend all my time eating or face extinction? Surely species and individuals with a significant advantage can spend some of the resulting surplus on frivolous pursuits; what evidence is there that the fate of the world hangs by a razor-thin thread?

Replies from: wnoise, Rain
comment by wnoise · 2010-04-02T17:49:49.823Z · LW(p) · GW(p)

I wind up empathizing with both characters.

You're supposed to, or at least I did. Both are right.

What is the converse of a dialogue?

The converse of a logical statement is another statement with the antecedent and consequent swapped. I was using it metaphorically for "another similar take on the same subject". Both these quotes emphasize that there is no morality inherent in the universe. If we want a moral universe, we have to build it ourselves.

The Cassini Division quote actually seems rather cheerful to me. Even from cynicism that deep, we can build a good life full of all the things we cherish.

I think part of what bothers me about your Cassini quote is that the claims in the first paragraph are overstated, especially coming from a character who is (presumably) a metaethical nihilist/egoist.

I think that's because they're not coming from a unitary viewpoint. They're bridging between something approximating normal morality, and utter amorality.

life is the scum of matter, and people are the scum of life.

Why, is it so wrong to eat things?

The point is not "it's wrong to eat things". The point is that life is what's survived, and it does anything it can to survive. People are much the same, though they're better at it.

If I firmly estimate that other minds exist, does the existence of those minds depend upon my estimation?

Of course not.

If other minds exist, why should what matters to them be irrelevant?

First ask why should what matters to them be relevant?

Well, because:

  1. You want to live under conditions such that they are.
  2. They're useful to you, and you to them, and cooperation can make you both better off than a bitter fight to the death.

But neither of these is fundamental.

What does it even mean to say that "might makes right" except that I plan to ignore the concept of "right"?

It means that the concept of right is not fundamental, is not baked into the fabric of the universe. Right only means something relative to the minds that hold it. And they can only enforce that with might. Try reading it as "might effects right".

When, in the course of human events, has the power to ignore morality left people truly free?

Well, the simplest answer is when people have the power to ignore morality forced upon them by others that they don't agree with. If a gay man is free to ignore the moral judgements of an Imam in a Sharia country, he is freer to have sex with whom he pleases, how he pleases. A slave that has the power to escape is freer. A person is freer when they can do something that pleases them rather than the high-paying stressful job that their parents tell them is what they should do.

if you want to survive and thrive you had better do whatever is in your interests.

Really? All the time?

All the time.

can spend some of the resulting surplus on frivolous pursuits;

The "frivolous pursuits" are both the thriving and what is in your interests. You interests include both accumulating the surplus and spending it on what matters to you.

The times where it is survival on the line, rather than thriving, can be much rarer.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-04-02T19:19:32.709Z · LW(p) · GW(p)

All right, all of that is interesting. I would use some of the words you use differently, but none of your definitions are unreasonable, and now that I understand what you're really saying, I agree with most of it.

I still disagree that the interests of others are non-fundamental; there are causes I would die for, which your philosophy seems to forbid. Perhaps I still don't understand your stance on that point.

Also, this may be nitpicky, but at this point in history, life is not "what survived." The ocean, the moon, the molten core of the Earth, the Sun, and, so far as we know, much of the rest of the galaxy are made of nonliving matter that is roughly as enduring as life. Life has not yet succeeded in eating everything else.

:-)

Replies from: wnoise
comment by wnoise · 2010-04-02T19:51:52.159Z · LW(p) · GW(p)

there are causes I would die for,

You're free to do so, should you decide that's what you value.

which your philosophy seems to forbid.

It's not my philosophy, or at most only a minor part. I like seeing what this viewpoint illuminates, and thought others here would as well. Judging by the karma swings on the post, it has proven controversial. Hopefully it's provoked some thought in doing so.

Also, this may be nitpicky, but at this point in history, life is not "what survived."

Nitpicks are good. That's an entirely fair point. I wavered between this formulation and a statement that life is the only thing that uses other matter, which I think is closer to expressing a violation of the Kantian categorical imperative (second formulation), and hence a common formulation of evil. (Or as Pratchett expressed it: "And sin, young man, is when you treat people as things. Including yourself. That's what sin is.")

Replies from: Mass_Driver
comment by Mass_Driver · 2010-04-03T15:21:01.098Z · LW(p) · GW(p)

I like seeing what this viewpoint illuminates

Me too!

Life is the only thing that uses other matter, which I think is closer to expressing a violation of the Kantian categorical imperative (second formulation), and hence a common formulation of evil.

As long as you're accepting nitpicks, I don't think your Kantian formulation holds much water either. Kant teaches that you can't use people as means to an end, but he would probably encourage you to use things as means to an end; certainly he chides people who want to leave well enough alone (thereby saving resources) for not developing their latent talents.

A hookworm might be evil under a modernish Kantian framework because it is life treating other life as a means to an end; ditto fire, which is matter treating other matter as a means to an end. A lichen, though, is well within its rights (as life) to treat the rock it sits on (matter) as means to an end, and people (as the only known Kantian-rational agents) are well within their rights to treat wheat (mere life) as means to an end.

comment by Rain · 2010-04-02T12:07:49.666Z · LW(p) · GW(p)

Mastication is only one form of eating. As a Westerner, I consume a large portion of our world's resources in the form of energy, household goods, large appliances, transportation, gadgets, taxes to fund war efforts, etc.

As for imposing our will upon life, just look at factory farms, algae farms, dead zones in the sea, global warming, and war. Might is truly the final arbiter, and unless part of what we care about is the other, we show a good track record of trampling them for our own uses.

Our present (relative) peace was brought about by people who felt the rights of others mattered, and had the might to back it up and impose it on those who felt differently.

comment by Strange7 · 2010-04-01T21:38:13.809Z · LW(p) · GW(p)

It disappoints me that this kind of thing is still news to some people. I value survival (that is, the continued existence of things similar to myself) first and foremost, partly because it's the one thing my ancestors have had in common ever since the invention of phospholipid membranes. The state of Existence is, metaphorically, engaged in ongoing skirmishes with its various neighbors in possibility-space, so I'd rather stay away from the border, just to avoid getting caught on the wrong side if it shifts.

comment by NMJablonski · 2010-04-02T19:31:45.899Z · LW(p) · GW(p)

I'm puzzled that this has such low karma.

Perhaps many people were turned off by the poetic imagery, but it seemed that many responders failed to understand it. It can be very difficult to step outside the human frame, and truly consider reality as matter, forces, space and time. A lot of the replies seemed to find this statement repugnant, without realizing that it's just an explanation of facts.

Also, for those who think this excerpt is an exhortation to nihilism: read the final paragraph again. We build upon the basics. We form societies because they meet our individual preferences, we establish moral conventions because we desire something from our individual experiences.

Replies from: RobinZ
comment by RobinZ · 2010-04-02T20:29:54.365Z · LW(p) · GW(p)

I didn't find it engaging, so I didn't bother finishing it when I saw it. Reading the entire thing in reaction to your comment, I don't find it interesting.

Replies from: NMJablonski
comment by NMJablonski · 2010-04-02T21:06:38.082Z · LW(p) · GW(p)

Understood.

It may be a basic difference of interests. Less Wrong attracts engineers, economists, scientists, and philosophers, who will find value in different kinds of comments / topics.

I was just surprised, given the reaction to similar material here in the past, that this wasn't heavily upvoted.

Replies from: RobinZ
comment by RobinZ · 2010-04-02T22:18:12.518Z · LW(p) · GW(p)

As PhilGoetz said, "Peanut butter gets more karma than caviar."

In recognition of your opinion, I will devote further analysis to the quote at a later time. My opinion may be overly shallow.

comment by Richard_Kennaway · 2010-04-02T09:03:16.390Z · LW(p) · GW(p)

I have not read The Cassini Division. Are these MacLeod's views? That is, in context, is it written to present the speaker's philosophy as good or evil? MacLeod himself, I believe, is something of a socialist, which would make these views rather odd in his own mouth.

Replies from: wnoise
comment by wnoise · 2010-04-02T17:04:34.692Z · LW(p) · GW(p)

His views are ... eclectic. He has described himself as a Libertarian Trotskyist. Don't ask me how that works. I think it would be an exaggeration to say these are MacLeod's views.

They are presented as a very distorted reading of philosophy and politics shaped by the founders' horrible pre-revolution life:

The true knowledge…the phrase is an English translation of a Korean expression meaning 'modern enlightenment'. Its originators, a group of Japanese and Korean 'contract employees' (inaccurate Korean translation, this time, of the English term 'bonded labourers') had acquired their modern enlightenment from battered, ancient editions of the works of Stirner, Nietzsche, Marx, Engels, Dietzgen, Darwin, and Spencer, which made up the entire philosophical content of their labour-camp library. (Twentieth century philosophy and science had been excluded by their employers as decadent or subversive – I forget which.) With staggering diligence, they had taken these words – which they ironically treated as the last word in modern thought – and synthesized from them, and from their own bitter experiences, the first socialist philosophy based on totally pessimistic and cynical conclusions about human nature.

They are also, however, the views of one of the largest political entities in the solar system, indicating that whatever else, they work (in that fictional universe). They're also the views of the main character. She's designed pretty explicitly to be something of a Rorschach test, entirely ambiguous between a monster and the saviour of the human race.

ROT13: Fur (nggrzcgf gb?) pbzzvgf trabpvqr ntnvafg n cbfguhzna pvivyvmngvba yvivat va bar bs gur tnf tvnagf. Guvf cbfguhzna pvivyvmngvba unf znqr fbzr ntterffvir zbirf ntnvafg uhznavgl, ohg gur erprag barf unir orra ragveryl bs gur sbez bs vasbezngvba jnesner nggnpxf -- gnxvat bire pbzchgref.

comment by MattPrather · 2010-04-13T00:04:25.565Z · LW(p) · GW(p)

"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell

Replies from: mattnewport
comment by mattnewport · 2010-04-13T00:11:32.861Z · LW(p) · GW(p)

I prefer Yeats' phrasing:

The best lack all conviction, while the worst
Are full of passionate intensity.

Replies from: MattPrather
comment by MattPrather · 2010-04-14T21:13:48.182Z · LW(p) · GW(p)

By the way, I am uncertain as to how to think about the quantification (number / proportion / "ballpark estimate") of real people who fit the concept of Russell's "wiser people", or Yeats' "best".

How far off would I be if I were to estimate the quantity of such wiser and better people as "less than one third of the population of any given tribe" ?

Is anyone brave enough to say it should be thought of as a drastically smaller quantity?

Is anyone brave enough to realize how much they themselves actually fit the description for the "fools and fanatics" or "worst" -- and then, after realizing it, actually become the better?

Or am I perhaps better off to not pick at the idea?

Replies from: mattnewport
comment by mattnewport · 2010-04-14T21:18:46.627Z · LW(p) · GW(p)

I'm quite comfortable to ballpark < 5%.

Replies from: wedrifid
comment by wedrifid · 2010-04-14T21:23:23.534Z · LW(p) · GW(p)

That is about my impression too. I'm less sure about the 'worst'. I'd go with up to a third but perhaps symmetry is intended.

comment by Vladimir_Golovin · 2010-04-06T10:35:21.604Z · LW(p) · GW(p)

"You miss 100% of the shots you don't take."

-- Wayne Gretzky (but I've seen it attributed to Michael Jordan and Joe Ledbetter, HS coach)

Replies from: Jack
comment by Jack · 2010-04-06T22:16:19.027Z · LW(p) · GW(p)

Except that actually isn't right. You miss exactly 0% of the shots you don't take. And I'm not just being pedantic. In basketball this attitude can cost teams games. Any game of possessions (of which basketball is one) is won with efficiency. Shooting the ball means there is some chance of scoring but also some chance of missing and the ball being rebounded by the other team. When the latter happens you've lost your opportunity to score and you will never get it back.

So the key to winning is to take high-efficiency shots -- this means shots that are likely to go in and shots that are worth a lot of points. Now not shooting does increase the likelihood of a turnover, and one can't go on not shooting forever. Moreover, quick shots before the defense is ready can often be very efficient shots. But the key is that the game is not about scoring a lot of points -- it's about scoring a lot of points efficiently. And getting good at that means cultivating a skill of waiting for the best shot, creating a better shot, or deferring to more efficient teammates.
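To put some numbers on the efficiency point (the shooting percentages here are made up for illustration, not real data): what matters is expected points per possession, and "shooting more" can easily lose to "waiting for a better shot":

```python
# Expected points per possession for three hypothetical shot choices.
shots = {
    "contested 3-pointer early in the clock": (3, 0.28),
    "open mid-range jumper":                  (2, 0.42),
    "layup created by one more pass":         (2, 0.62),
}

for name, (points, p_make) in shots.items():
    print(f"{name}: {points * p_make:.2f} expected points")

# contested 3-pointer early in the clock: 0.84 expected points
# open mid-range jumper: 0.84 expected points
# layup created by one more pass: 1.24 expected points
```

Passing up the first two is "not taking a shot", and it is exactly how games of possessions are won.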

It might be that these aren't concerns in hockey: if all shots are more or less equally efficient, or if a lot of points are scored off offensive rebounds, "keep shooting it" might be a good message. I don't know a lot about the sport. But even hockey players aren't shooting from the other side of the rink.

Outside sports, there are occasions where 'missing' is worse than 'not shooting', and if the chances of 'missing' are high enough, or the cost of 'missing' sufficiently high, it can be a really bad idea to 'shoot'.

comment by Thomas · 2010-04-05T07:07:10.387Z · LW(p) · GW(p)

There are strange flowers of reason to match each error of the senses.

-Louis Aragon

comment by Nic_Smith · 2010-04-03T01:59:05.260Z · LW(p) · GW(p)

Since a gene is just a molecule, it can't choose to maximize its fitness, but evolution makes it seem as though it had.... Why, for example, do songbirds sing in the early spring? The proximate cause is long and difficult. This molecule knocked against that molecule. This chemical reaction is catalyzed by that enzyme. But the ultimate cause is that birds are signalling territorial claims to each other in order to avoid unnecessary conflict. They just do what they do. But the net effect of an immensely complicated evolutionary process is that songbirds behave as though they had rationally chosen to maximize their fitness.

Laboratory experiments on pigeons show that they sometimes honor various consistency requirements of rational choice theory better than humans (Kagel, Battalio, and Green 1995). We don't know the proximate explanation. Who knows what goes on inside the mind of a pigeon? Who knows what goes on in the minds of stockbrokers for that matter? -- Ken Binmore, Rational Decisions

comment by orthonormal · 2010-04-03T01:43:19.438Z · LW(p) · GW(p)

This is not to say that M. Legrandin was anything but sincere when he inveighed against snobs. He could not (from his own knowledge, at least) be aware that he himself was one, since it is only with the passions of others that we are ever really familiar, and what we come to discover about our own can only be learned from them. Upon ourselves they react only indirectly, through our imagination, which substitutes for our primary motives other, auxiliary motives, less stark and therefore more seemly. Never had Legrandin's snobbishness prompted him to make a habit of visiting a duchess as such. Instead, it would encourage his imagination to make the duchess appear, in his eyes, endowed with all the graces. He would gain acquaintance with the duchess, assuring himself that he was yielding to the attractions of mind and heart which the vile race of snobs could never understand. Only his fellow-snobs knew that he was of their number, for, owing to their inability to appreciate the intervening efforts of his imagination, they saw in close juxtaposition the social activity of Legrandin and its primary cause.

-- Marcel Proust, In Search of Lost Time

Replies from: gwern
comment by gwern · 2010-05-01T01:35:51.319Z · LW(p) · GW(p)

'George Eliot's villainous character Grandcourt "did not care a languid curse for anyone's admiration"; but this state of non-caring, just as much as desire, required its related object - namely, a world of admiring and envying spectators: for if you are fond of looking stonily at smiling persons, the persons must be there and they must smile.'

--Finite and Infinite Games

comment by XFrequentist · 2010-04-02T18:14:19.130Z · LW(p) · GW(p)

Such sad statistical situations suggest that the marriage of science and math may be desperately in need of counseling. Perhaps it could be provided by the Rev. Thomas Bayes.

Tom Siegfried, Odds Are, It's Wrong, on the many failings of traditional statistics in modern science.

comment by neq1 · 2010-04-02T01:59:15.553Z · LW(p) · GW(p)

""Not evil, but longing for that which is better, more often directs the steps of the erring"

Theodore Dreiser, Sister Carrie

comment by Lightwave · 2010-04-07T17:57:53.215Z · LW(p) · GW(p)

I never guess. It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.

Sir Arthur Conan Doyle, The Sign of Four and A Scandal in Bohemia

comment by Zubon · 2010-04-05T00:51:16.694Z · LW(p) · GW(p)

I could be wrong, but I'd like to see some evidence.

--- Mark Liberman

comment by JenniferRM · 2010-04-02T19:22:56.889Z · LW(p) · GW(p)

Social scientists are experts at having an explanation for the results, no matter how they come out, so long as they are statistically significant. It is easy to turn the statistical crank. With sufficiently powerful statistical tools, you can find a significant statistical relationship between just about anything and anything else.

Psychologists see no real problem with the current dogma. They are used to getting messy results that can be dealt with only by statistics. In fact, I have now detected a positive suspicion of quality results amongst psychologists. In my experiments I get relationships between variables that are predictable to within 1 percent accuracy. The response to this level of perfection has been that the results must be trivial! It was even suggested to me that I use procedures that would reduce the quality of the results, the implication being that noisier data would mean more.

After some recovery period I realized that this attitude is to be expected from anyone trying to see the failure of the input-output model as a success. Social scientists are used to accounting for perhaps 80% (at most) of the variance in their data. They then look for other variables that will account for more variance. This is what gives them future research studies. The premise is that behavior is caused by many variables. If I account for all the variance with just one variable, it’s no fun and seems trivial.

If psychologists had been around at the time that physics was getting started, we’d still be Aristotelian, or worse. There would be many studies looking for relationships between one physical variable and another—e.g., between ball color and rate of fall, or between type of surface and the amount of snow in the driveway. Some of these relationships would prove statistically significant. Then when some guy comes along and shows that there is a nearly perfect linear relationship between distance traveled and acceleration, there would be a big heave of “trivial” or “too limited”—what does this have to do with the problems we have keeping snow out of the driveway?

Few psychologists recognize that, whatever their theory, it is based on the open-loop input-output model. There is no realization that the very methods by which data are collected imply that you are dealing with an open-loop system. To most psychologists, the methods of doing research are simply the scientific method—the only alternative is superstition. There is certainly no realization that the input-output model is testable and could be shown to be false. In fact, the methods are borrowed, in caricature, from the natural sciences, where the open-loop model works very well, thank you. Progress in the natural sciences began dramatically when it was realized that the inanimate world is not purposive.

Psychologists have mistakenly applied this model of the inanimate world to the animate world, where it simply does not apply.

This was a forgivable mistake in the days before control theory, because before 1948 there was no understanding of how purposive behavior could work. Now we know, but the social sciences have their feet sunk in conceptual concrete. They simply won’t give up what, to them, simply means science.

The author was transformed by reading "Behavior: The Control of Perception" (1973) and began a research program whose early years(?) seem to have been summarized in "Mind Readings: Experimental Studies of Purpose" (1992).
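As a side note on the "sufficiently powerful statistical tools" claim in the quote: a minimal sketch (the effect size and sample size are arbitrary assumptions) of how a practically meaningless correlation becomes statistically significant once the sample is large enough.

```python
# Illustration (not from the quoted author): with a large enough sample, a
# tiny true correlation comes out "statistically significant" while explaining
# almost none of the variance. Effect size and n are arbitrary assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.normal(size=n)
y = 0.005 * x + rng.normal(size=n)   # true correlation of roughly 0.005

r, p = stats.pearsonr(x, y)
print(f"r = {r:.4f}, p = {p:.2g}")
# Typical output: r is around 0.005 (about 0.0025% of variance explained),
# yet p is far below 0.05, so the relationship is "significant" but trivial.
```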

Replies from: orthonormal, PhilGoetz
comment by orthonormal · 2010-04-03T01:34:37.645Z · LW(p) · GW(p)

This has been discussed here before.

The problem is that Marken's models don't actually have predictive power; he just fits a function to the data using as many free parameters as he has data points, and marvels at the perfect fit thus derived. One doesn't need to think highly of the current state of psychology to realize that Marken is a crank, and that any recognition Marken has in the PCT community is a sign that they are bereft of actual experimental support if not basic scientific reasoning skills.

Replies from: JenniferRM
comment by JenniferRM · 2010-04-04T07:36:59.598Z · LW(p) · GW(p)

The interaction you linked to was interesting. I didn't realize there was already a back story within this community, with positions staked out and such. I offered the quote because it seemed like a beautifully mathematical objection to existing work that was "up this community's alley", but I haven't worked through the actual mathematics or experiments themselves. For example, I hadn't purchased either of the books that I linked to, nor have I studied them -- I simply assigned them high EV given the quality of the author's text.

Your comments, in the interaction you linked to, seem like good arguments against Marken's theory (specifically, the claim that his work involves at least as many free parameters as data points appears to be a good argument against the theory, if true). However, in all of that back and forth, I noticed many links to "lesswrong heuristics" but I didn't notice any outside links to actual research papers detailing methodology.

I'm substantially more ignorant on the subject than either you or your previous interlocutor, and it took me a while even to understand that "PCT" was the theory Marken supports, that you two were arguing pro and con of it, and that your text was mostly addressed to each other with a substantial amount of knowledge assumed. I wish you had both linked more, because it would have been educational.

That said, I'd like to see such links if you know of any. If I can swiftly dismiss Marken's work without further thought, that would be a very efficient use of time. Can you direct me to the links showing an example of his experimental work so I can verify that his research program is crippled by mathematical overfitting? The best I could find was Perceptual organization of behavior: A hierarchical control model of coordinated action but it was pay-walled so I can't access it now to look into it myself.

Replies from: orthonormal, RobinZ
comment by orthonormal · 2010-04-06T07:18:49.810Z · LW(p) · GW(p)

The paper discussed in that interaction can be found here without a paywall.

As stated then (the conversation can be taken up from about here if not earlier), I think it's quite likely that simple control circuits can be found in facets of motor response; but Powers, Marken and Eby had been talking about control theory in cognitive domains (like akrasia) as if they could isolate simple circuits there, and my search for any kind of evidence turned up only this sort of embarrassing tripe.

And really, the math here is important— it's not a matter of disagreeing with interpretation, it's the plain fact that a generic model with 4 free parameters can be tweaked to precisely fit 4 data points, and it's clear from the paper that this is what Marken did. You simply need more data points than free parameters in order to generate any evidence in favor of a model; the fact that he never mentioned this, and instead crowed about the impressive fit of his model to the data, indicate either gross ignorance of how mathematical models work, or outright intent to mislead (coupled with an utterly incompetent peer review process.)
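A minimal illustration of the free-parameter point, using made-up numbers: a model with four free parameters can be made to pass exactly through four data points, so a perfect fit by itself carries no evidence about how the model generalizes.

```python
# Four made-up data points and a four-parameter (cubic) model: the fit is
# exact by construction, so a "perfect fit" here is not evidence for anything.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.3, 0.7, 5.1, 1.9])            # arbitrary "observations"

coeffs = np.polyfit(x, y, deg=3)              # 4 free parameters for 4 points
residual = np.max(np.abs(np.polyval(coeffs, x) - y))

print("max absolute residual:", residual)     # ~1e-13: machine-precision "fit"
print("prediction at x = 5:", np.polyval(coeffs, 5.0))   # untested extrapolation
```

Only predictions on held-out data (more data points than parameters) would actually test such a model.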

The gauntlet remains thrown, if anyone wants to point to an experimental study which demonstrates a discernible control circuit in a cognitive task (apart from tasks, like tracking a dot, which have an obvious motor component— in these, I do expect control circuits to be a good model for certain behavior). I would be surprised, but it would suffice to give credence to the theory in my eyes.

comment by RobinZ · 2010-04-04T13:33:54.201Z · LW(p) · GW(p)

Through judicious abuse of my employer's resources, I have acquired a copy of the PDF - PM me an email address and I'll send it to you.

Replies from: JenniferRM
comment by JenniferRM · 2010-04-05T22:40:08.223Z · LW(p) · GW(p)

Thanks Robin! I have read this paper now, but it still doesn't seem to address the arguments that orthonormal linked to :-/

The 1986 study appeared to me to be basically well done, offering a fascinating paradigm that could be extended in many directions for further research with a reasonably strong result by itself. It basically confirmed the positive claims of Marken that hierarchical arrangements of negative feedback loop systems (designed, with a handful of optimized parameters, and then left alone) can roughly reproduce trained human behavior in a variety of dynamically changing toy domains, supporting the contention that whatever is operating in the human nervous system after a period of training is doing roughly the same effective computations as the model.

In the text, Marken addresses the "motor control literature" as making claims whose refutation was partly the purpose of his experiments.

It required a little more googling to figure out the claims he was trying to reject... but basically he seems to be objecting to the claim that mammals work as open-loop controllers (that is, generating action signals based on an internal model of the world and sending them into the world with no verification loop or secondary corrections). This claim appears to have been founded mostly on things called "deafferentation experiments"... which turned out to be aesthetically horrifying, and also turned out not to actually prove the general case of the "open loop" claims.

The most infamous of these experiments (warning: somewhat disturbing pictures) was basically:

a psychologist, who had cut afferent ganglia that supplied sensation to the brain from their arms and legs, then used restraint, electric shock, and withholding of food to force [the monkeys] to use the limbs they could not feel.

The ability of monkeys mutilated in this fashion to (eventually?) move around purposively was taken as evidence that there was not a hierarchically arranged set of negative-feedback motor control systems implemented in their nervous system. In practice (after the scientist was arrested for animal cruelty, PETA's request for custody was denied, and the monkeys were brain-scanned, euthanized, and autopsied) it turned out that the monkeys' brains had been massively re-wired by the experience. The practical upshot of the experiments seems to have been primarily to serve as dramatic evidence of adult primate brain plasticity (which they didn't believe in, back then?) rather than as evidence against a negative-feedback theory of motor control. (Probably there's more to it than that, but this is my first-draft understanding.)

Marken dismisses these experiments in part by pointing out the difficulty of preventing negative-feedback control processes if there are many sub-controllers that can use measurements partially correlated with the measure being optimized, and he concludes with falsification examples and criteria, for both the general theory and the particular model, that are not subject to this objection:

Control theory does not rule out the possibility that some of the behavior produced by organisms is open loop. If a behavior is open loop then the effects of disturbances will not be resisted by system outputs. There is evidence, for example, that the saccadic eye positioning system is open loop. However, open-loop behavior is likely to be exceptionally rare ...The two-level model could be tested by other means. One suggestion is to look for differences in the time to react to disturbances applied to controlled variables that are presumed to be at different levels—the higher the level of the variable, the slower the response to the effect of a disturbance. Another approach would be to trace out the required connections anatomically. The present study shows only that the two-level model is sufficient, not that it is necessary.

In short, I'm still impressed by Marken. His reasoning seems clean, his experiment robust, and his criticisms of motor-control and trait-theory well reasoned. My very broad impression is that there may be an over-arching background argument here between "an accurate model in the head producing aim-and-fire success" versus "incremental goal accomplishment via well-tuned reflexes and continuous effort"? If that back story is operative, then I guess my posterior probability was just pushed a little more in the direction of "reflexes and effort" over "models and plans".

If there is some trick still lurking here, Orthonormal, that you could point me to and spell out in detail, rather than by reference to assertions and hand-waving rationality heuristics, that would be appreciated. The more time I spend on Marken's work, the more I find to appreciate. At this point, I've spent a day or two on this and I think the burden of proof is on you. If you take it up successfully I would be in your debt for rubbing a bit of sand out of my eyes :-)

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-04-08T01:12:00.462Z · LW(p) · GW(p)

Jennifer, here is where orthonormal seems to say where exactly Marken overfit the data.

(Orthonormal might not have seen your comment because you didn't post it in reply to one of his/hers.)

[ETA: Nevermind. Looks like the date of orthonormal's last comment is after yours, so he/she probably saw it.]

comment by PhilGoetz · 2010-04-02T23:04:11.615Z · LW(p) · GW(p)

I don't understand what the quote is trying to say. What are the unrecognized consequences of the open-loop model?

It sounds like the author is upset that psychologists don't believe he has a model of behavior that explains 99% of the variance in some output variable using only one input variable. I'd have a hard time believing it too.

comment by James_Miller · 2010-04-02T03:04:24.227Z · LW(p) · GW(p)

"Hypocrisy and dissimulation are what keeps social systems strong; it is intellectual honesty that destroys them."

Theodore Dalrymple- The New Vichy Syndrome p. 26.

Replies from: wnoise, Nanani
comment by wnoise · 2010-04-05T02:35:09.096Z · LW(p) · GW(p)

If the rationality quotes are intended to illustrate rationality, rather than themselves necessarily be rational, I think this is a fine quote.

comment by Nanani · 2010-04-05T01:16:05.727Z · LW(p) · GW(p)

This is true when the social systems in question are built on dishonest foundations. Observing whether or not intellectual honesty has this effect on a system has predictive value wrt the eventual fate of the society employing the system.

Voted up.

comment by Richard_Kennaway · 2010-04-01T22:01:35.044Z · LW(p) · GW(p)

At this point I reveal myself in my true colours as a stick-in-the-mud. I hold a number of beliefs that have been repudiated by the liveliest intellects of our time. I believe that order is better than chaos, creation better than destruction. I prefer gentleness to violence, forgiveness to vendetta. On the whole I think that knowledge is preferable to ignorance, and I am sure that human sympathy is more valuable than ideology. I believe that in spite of recent triumphs of science, men haven't changed much in the last two thousand years. And in consequence, we must still try to learn from history. History is ourselves.

Sir Kenneth Clark, "Civilisation" (Excerpt on YouTube.)

Replies from: None
comment by [deleted] · 2012-11-20T08:28:27.330Z · LW(p) · GW(p)

These seem mostly like applause lights.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-11-20T09:10:21.799Z · LW(p) · GW(p)

These seem mostly like applause lights.

"Applause light" is the applause light here.

Replies from: None
comment by [deleted] · 2012-11-20T09:24:44.100Z · LW(p) · GW(p)

I noticed that after writing it, but I'm standing by my original assessment. To the LW community, "applause light" has a relatively specific meaning and is a useful and compact way of expressing a sense in which I don't think the statements above are useful statements to make, and I don't want to taboo that word just because it looks like I'm trying to signal affiliation with the LW tribe or whatever. (I'll happily taboo the word if it interferes with actual communication, but technical vocabulary exists for a reason.)

If the point you're trying to make is that "applause light" is a fully general counterargument, then doesn't that point also apply to itself (in that both "applause light" and "fully general counterargument" are both applause lights and fully general counterarguments)?

comment by whpearson · 2010-04-17T22:02:44.064Z · LW(p) · GW(p)

True, I did not escape some unpleasant conversations with political figures. To them I repeated the same thing: all contemporary conflicts I considered to be temporary phenomena, as the reigns of Alexander the Great and Napoleon were temporary. Every world crisis could be viewed in strategic terms, as long as the consequence of that approach was not our potential destruction as a biological species. But when the fate of the species became one of the members of the equation, the choice had to be automatic, a foregone conclusion, and appeals to the American way, the patriotic spirit, to democracy or anything else lost all meaning.

  • In Stanislaw Lem's His Master's Voice (fiction)
comment by Tehom · 2010-04-10T22:35:17.818Z · LW(p) · GW(p)

Reality is that which, when you stop believing in it, doesn't go away.

Phillip K. Dick

Replies from: Jack, Jack
comment by Jack · 2010-04-11T03:49:20.929Z · LW(p) · GW(p)

Welcome to Less Wrong, though! Introduce yourself.

comment by NancyLebovitz · 2010-04-07T11:25:48.852Z · LW(p) · GW(p)

We must be careful who we let define what is sustainable.

Jason Stoddard in Shine, an anthology of near-future optimistic science fiction.

comment by gregconen · 2010-04-04T17:07:43.146Z · LW(p) · GW(p)

Deleted as a repeat.

Replies from: RobinZ
comment by RobinZ · 2010-04-04T20:35:04.753Z · LW(p) · GW(p)

A repeat, but a good one.

comment by gwern · 2010-05-01T00:38:07.753Z · LW(p) · GW(p)

"O my friend, if we, leaving this war, could escape from age and death, I should not here be fighting in the van; but now, since many are the modes of death impending over us which no man can hope to shun, let us press on and give renown to other men, or win it for ourselves."

--Sarpedon, The Iliad, as quoted in Eric Drexler's Engines of Creation

comment by [deleted] · 2010-04-19T03:42:15.117Z · LW(p) · GW(p)

You can speed up your math reading skill by practicing, but be careful. Like any skill, trying too much too fast can set you back and kill your motivation. Imagine trying to do an hour of high-energy aerobics if you have not worked out in two years. You may make it through the first class, but you are not likely to come back.

Shai Simonson and Fernando Gouvea, "How to Read Mathematics"

comment by [deleted] · 2010-04-02T02:47:36.642Z · LW(p) · GW(p)

The capacity of the human mind for swallowing nonsense and spewing it forth in violent and repressive action has never yet been plumbed.

--Robert A. Heinlein

Sad, but true.

comment by NancyLebovitz · 2010-04-18T00:09:25.597Z · LW(p) · GW(p)

I hold that information is not neutral. Never give a (fallible) human sterile information. He will not ignore it. These models led to an increase of risk in society, period. The providers are responsible.

Nassim Taleb

comment by wedrifid · 2010-04-14T21:34:15.016Z · LW(p) · GW(p)

We lost the documentation on quantum mechanics. You'll have to decode the regexes for yourself.

-- The Gods, XKCD

comment by CronoDAS · 2010-04-04T04:39:09.593Z · LW(p) · GW(p)

"If A=B and B=C and C=D, then do not get a job proofreading." - Quid's Theorem

comment by [deleted] · 2010-04-23T03:44:21.626Z · LW(p) · GW(p)

"Wow! That seems…incredibly hard to believe. I’m not saying that just because it sounds crazy means its not true. Plenty of crazy things are true. But this claim is based on the results of just one study, conducted with the help of a biased author." --Jason Swett (my older brother)

comment by Tehom · 2010-04-10T22:33:50.345Z · LW(p) · GW(p)

The object of opening the mind, as of opening the mouth, is to close it again on something solid

G. K. Chesterton

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2010-12-06T00:33:10.572Z · LW(p) · GW(p)

That quote had already been posted by Eliezer Yudkowsky on 2/2/2010.

comment by simplyeric · 2010-04-08T20:35:25.325Z · LW(p) · GW(p)

not trying to be glib here, but:

"• Do not quote yourself." -4wnoise

Replies from: RobinZ
comment by RobinZ · 2010-04-08T20:39:07.013Z · LW(p) · GW(p)

...what?

Replies from: simplyeric
comment by simplyeric · 2010-04-09T02:14:07.173Z · LW(p) · GW(p)

Well, I'm being a little postmodern, but how often have you heard people refer to themselves? Not quite 'I told you so', but in a similar vein. Pundits do this a lot: "Well, if you recall, last year I said x, y & z, and look what happened." It's the fallacy related to the fact that, of all the possible outcomes, at least ONE person will probably be right. But that fact is purely casual/trivial. I just found it poetic that one of the rules of the thread is 'do not quote yourself'... Clearly that's an issue that not all people recognize.

Replies from: wedrifid, RobinZ
comment by wedrifid · 2010-04-09T02:23:38.501Z · LW(p) · GW(p)

That is insightful and all, but now falls under:

  • Do not quote comments/posts on LW/OB.

;)

Replies from: simplyeric
comment by simplyeric · 2010-04-09T13:56:55.085Z · LW(p) · GW(p)

fair enough. Rats.

"Like any dogma, it is honored far more in the breach than in the observance."

-Benoit Mandelbrot

Replies from: wedrifid
comment by wedrifid · 2010-04-11T13:10:25.984Z · LW(p) · GW(p)

"Like any dogma, it is honored far more in the breach than in the observance."

As is appropriate when the dogma in question is trivially observed yet distracting when breached. I would expect a dogma to be honored for observance more when the reverse is the case -- for example, when it pertains to a particularly noble or self-sacrificial action that is far rarer than the 'null' breach.

comment by RobinZ · 2010-04-09T03:16:28.373Z · LW(p) · GW(p)

If we were to have designated threads for self-quoting, I would imagine there would have to be some effective restrictions to keep the quality high -- a waiting period that would have to expire before a quote became eligible, for example.

comment by Strange7 · 2010-04-08T06:39:11.566Z · LW(p) · GW(p)

If I let everyone I thought was an idiot die, there wouldn't be many people left.

-Gilgamesh Wulfenbach

comment by [deleted] · 2010-04-07T23:06:29.606Z · LW(p) · GW(p)

A witty saying proves nothing.

Voltaire

Edit: All right, then, here's another one:

Never underestimate the power of human stupidity.

Robert Heinlein

Replies from: gwern, Rain
comment by gwern · 2010-05-01T01:26:57.860Z · LW(p) · GW(p)

"It is unbecoming for young men to utter maxims."

--Aristotle

comment by Rain · 2010-04-07T23:09:30.132Z · LW(p) · GW(p)

Repeat.

comment by Thomas · 2010-04-07T19:26:29.107Z · LW(p) · GW(p)

Economists did something even better than predict the crisis. We correctly predicted that we would not be able to predict it.

-William Easterly

Replies from: Clippy
comment by Clippy · 2010-04-07T20:03:37.934Z · LW(p) · GW(p)

How is that better? That doesn't make sense. Predicting that you won't be able to predict something is equivalent to a maximum entropy probability distribution over the outcome. That's a state of zero knowledge. What is -William Easterly attempting to establish with -William Easterly's claim?

Also, what crisis?
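A minimal arithmetic check of the maximum-entropy point above, assuming an arbitrary four-outcome space: the uniform distribution has the highest entropy, which is what claiming "we predicted we couldn't predict it" amounts to.

```python
# Entropy (in bits) of a uniform "can't predict it" distribution versus an
# actual prediction, over four hypothetical outcomes (the numbers are made up).
import math

def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]     # maximum entropy: zero knowledge claimed
confident = [0.85, 0.05, 0.05, 0.05]   # a real prediction

print(f"uniform:   {entropy_bits(uniform):.2f} bits")    # 2.00 bits, the maximum for 4 outcomes
print(f"confident: {entropy_bits(confident):.2f} bits")  # about 0.85 bits
```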

Replies from: mattnewport, cupholder, Matt_Simpson, Thomas
comment by mattnewport · 2010-04-07T20:21:24.986Z · LW(p) · GW(p)

What is -William Easterly attempting to establish with -William Easterly's claim?

He is attempting to establish that William Easterly and other mainstream academic economists do not suck at their jobs and that modern macroeconomics has not been thoroughly discredited by the recent (ongoing) financial crisis. He attempts to do this by claiming that their failure to predict anything correctly is not an indictment of their intellectually bankrupt field but rather a ringing endorsement. In so doing he conveniently ignores those economists and investors who correctly predicted the crisis and explained in detail what was going to happen and why it was going to happen in the years before the crisis.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-04-07T20:47:03.274Z · LW(p) · GW(p)

In so doing he conveniently ignores those economists and investors who correctly predicted the crisis and explained in detail what was going to happen and why it was going to happen in the years before the crisis.

There's always someone predicting a financial crisis, and when it inevitably happens (and one will eventually come), someone probably predicted it. Was there anyone who predicted the crisis based on reliable methods that we could use to predict another crisis?

Easterly does have a point though - there are two ways to predict a crisis. Infer the implicit market prediction, or predict it yourself. The latter is extremely hard because as soon as you find some reliable method of predicting financial crises and tell the world about it, market prices will change to reflect this knowledge. On the other hand, as soon as the market knows about the crisis, the crisis is beginning (if people know the price of X is going to fall soon, then the price of X will fall now as they all sell it). So in some sense, a crisis has to come out of the blue.

It sounds like Easterly was being sarcastic - taking a jab at macroeconomists who DO try to predict crises.

Replies from: mattnewport
comment by mattnewport · 2010-04-07T22:27:06.475Z · LW(p) · GW(p)

Was there anyone who predicted the crisis based on reliable methods that we could use to predict another crisis?

The Greatest Trade Ever describes how John Paulson's hedge fund identified the coming sub-prime collapse and made $15 billion betting on it. It also covers several other investors who identified the same issues and made money, though most were not as lucky/smart with their timing as Paulson.

The crisis also looks a lot like a classic example of a credit crunch as described by Austrian business cycle theory. Peter Schiff is one of the best known commentators who predicted the broad outlines of the crisis before it really hit.

Now, I'm not saying that Austrian economists have all the answers or that there isn't some element of 'even a stopped clock tells the right time twice a day' with the predictions of disaster panning out but there were people out there telling a coherent story about why the economy faced major problems and how the crisis would play out. Some of them were quite accurate on the timing as well. You wouldn't know it from the pronouncements of most economists, bankers and politicians because they look much better if they can proclaim that 'nobody' saw or could have seen the problems coming. I'm a lot more impressed with the likes of Andrew Lahde bowing out with a 'f*ck you' and millions of dollars in profits from betting on disaster and being right than by William Easterly smugly proclaiming vindication of mainstream economics when his profession largely failed at making predictions or even understanding what was going on in the real economy.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2010-04-07T23:30:50.409Z · LW(p) · GW(p)

I think the ABC theory, at least in the form that I understand it, is onto something, but I don't think it's quite right. I think there should be less attention on the fed and more attention on the decision making of investors. And someone should just mathematicize the damn thing already.

Some Austrian-influenced economists (but not Austrians) are convinced that the housing market is doomed to bubbles due to its structure and the structure of the human mind. Basically, once prices start rising for an extended period of time, the human mind treats these prices as if they will continue rising EVEN IF people know that the prices are way higher than they should be, given the actual value of the asset. Many experiments have borne this out. Here's a link.

ETA: Just to clarify a bit, I do think it's possible - though difficult - to predict that a crisis is going to happen, and even have a decent idea of the magnitude. The timing, on the other hand, I think is nearly impossible to get right with any precision.

comment by cupholder · 2010-04-07T22:58:37.807Z · LW(p) · GW(p)

Also, what crisis?

I had never previously considered the possibility of James Callaghan uploading into an internet-connected paperclip maximizer, but I guess there's a first time for everything.

comment by Matt_Simpson · 2010-04-07T20:47:58.305Z · LW(p) · GW(p)

Also, what crisis?

The great paperclip crash of 07

Replies from: Jack
comment by Jack · 2010-04-07T21:12:39.590Z · LW(p) · GW(p)

The great paperclip crash of 07

This happened much earlier than 2007. The advent of electronic databases and office networks surely led to rough times in the paperclip industry. Some analysts say that if we had paid more attention to the lesson of the '98 Paperclip Crisis we could have avoided all the problems we have today.

comment by Thomas · 2010-04-07T20:12:22.899Z · LW(p) · GW(p)

If it is a real fact that a crisis is unpredictable (which it might be, I don't know), then the knowledge of this fact is more valuable than an accidental prediction of one crisis.

At least, that is how I understand W. Easterly.

comment by faithlessgod · 2010-04-07T10:47:37.507Z · LW(p) · GW(p)

Do not sacrifice truth on the altar of comfort

comment by steven0461 · 2010-04-05T17:03:41.017Z · LW(p) · GW(p)

"The most perfidious way of harming a cause consists of defending it deliberately with faulty arguments." - Nietzsche

comment by MichaelGR · 2010-04-05T06:37:16.638Z · LW(p) · GW(p)

“It is not the mountain we conquer, but ourselves.”

--Sir Edmund Hillary (1919-2008) - New Zealand Mountaineer and First man to Climb Mt. Everest

Replies from: Amanojack
comment by Amanojack · 2010-04-05T11:57:43.677Z · LW(p) · GW(p)

Interesting in light of pjeby's distinction between "you" and "yourself."