[SEQ RERUN] Science Doesn't Trust Your Rationality

post by MinibearRex · 2012-05-05T06:37:31.042Z · LW · GW · Legacy · 24 comments

Today's post, Science Doesn't Trust Your Rationality, was originally published on 14 May 2008. A summary (taken from the LW wiki):

 

The reason Science doesn't always agree with the exact, Bayesian, rational answer is that Science doesn't trust you to be rational. It wants you to go out and gather overwhelming experimental evidence.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Dilemma: Science or Bayes?, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

24 comments

Comments sorted by top scores.

comment by Aharon · 2012-05-05T08:10:03.839Z · LW(p) · GW(p)

Science is built around the assumption that you're too stupid and self-deceiving to just use Solomonoff induction. After all, if it was that simple, we wouldn't need a social process of science... right?

Seeing how often overconfidence bias is brought up as a problem, and how rationality camps and the like are set up to battle this bias among others, this assumption doesn't seem to be a bad starting point.

comment by shminux · 2012-05-05T17:52:22.981Z · LW(p) · GW(p)

In the beginning came the idea that we can't just toss out Aristotle's armchair reasoning and replace it with different armchair reasoning. We need to talk to Nature, and actually listen to what It says in reply. This, itself, was a stroke of genius.

If you do a probability-theoretic calculation correctly, you're going to get the rational answer.

How does one make sure that this "probability-theoretic calculation" is not a "different armchair reasoning"?

Science doesn't trust your rationality, and it doesn't rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment. [...] Science is built around the assumption that you're too stupid and self-deceiving to just use Solomonoff induction.

This seems like a safe assumption. On the other hand, trusting in your powers of Solomonoff induction and Bayesianism doesn't seem like one: what if you suck at estimating priors and are too unimaginative to account for all the likely alternatives?

So, are you going to believe in faster-than-light quantum "collapse" fairies after all? Or do you think you're smarter than that?

Again a straw-collapse. No one believes in faster-than-light quantum "collapse", except for maybe some philosophers of physics.

Replies from: JGWeissman, private_messaging
comment by JGWeissman · 2012-05-05T18:17:48.096Z · LW(p) · GW(p)

Again a straw-collapse. No one believes in faster-than-light quantum "collapse", except for maybe some philosophers of physics.

Speed-of-light or slower collapse, applied to spatially separated measurements of entangled particles, seems even more ridiculous.

Replies from: shminux
comment by shminux · 2012-05-05T21:19:42.661Z · LW(p) · GW(p)

The collapse model says that after performing a local measurement, the wavefunction locally evolves from the eigenstate that has been measured, nothing else. For a local observer, spacelike-separated events do not exist until they come into causal contact with it. That's the earliest time that can be called a measurement time.

Replies from: JGWeissman
comment by JGWeissman · 2012-05-05T21:53:37.594Z · LW(p) · GW(p)

For a local observer, spacelike-separated events do not exist until they come into causal contact with it.

That sounds like the mind projection fallacy. That the observer does not know about the events doesn't mean they don't exist.

That's the earliest time that can be called a measurement time.

That would imply that whatever measurements we make locally, from the perspective of an observer who hasn't yet interacted with those measurements, our wave function hasn't collapsed yet, and we remain in superposition.

So, how does it make sense that the wave function has collapsed from our perspective?

Replies from: shminux
comment by shminux · 2012-05-06T02:35:19.558Z · LW(p) · GW(p)

That sounds like mind projection fallacy. That the observer does not know about the events doesn't mean they don't exist.

"Exist" should be a taboo word, until you can explain it in terms of other QM concepts.

Replies from: JGWeissman
comment by JGWeissman · 2012-05-06T03:20:56.087Z · LW(p) · GW(p)

For a thing to exist, it means that thing is part of the reality that embeds our minds and our experience, whether or not that thing has an effect on our minds and our experience. Of course, when I say something exists, it is a prediction of my model of reality. And you might ask how I can defend my model over an alternative that says different things about events with no effect on my experience; my answer would be that I prefer models that use the same rules whether or not I am looking, in which my reducible mind is not treated as ontologically fundamental.

Replies from: shminux
comment by shminux · 2012-05-06T05:10:22.872Z · LW(p) · GW(p)

For a thing to exist, it means that thing is part of the reality that embeds our minds and our experience,

"Reality" is another taboo word. We have no direct QM experience.

I prefer models that use the same rules whether or not I am looking, in which my reducible mind is not treated as ontologically fundamental.

If there is a single lesson from QM, it is that looking (=measurement) affects what happens. This has nothing to do with minds.

Replies from: JGWeissman
comment by JGWeissman · 2012-05-06T05:29:08.674Z · LW(p) · GW(p)

"Reality" is another taboo word.

Reality is the thing that produces our experience and which we are trying to describe with our models. Stop playing dumb.

If there is a single lesson from QM, it is that looking (=measurement) affects what happens. This has nothing to do with minds.

Yes, looking affects what happens, but that is fully accounted for by the physical process of looking. That is, the effect of looking can be predicted and explained by treating the observer with the same laws of physics as whatever is observed. This does not mean you can make stuff up about unobserved events, or claim that events haven't really happened until they are observed ("For a local observer, spacelike-separated events do not exist until they come into causal contact with it.").

Replies from: shminux
comment by shminux · 2012-05-06T06:07:27.094Z · LW(p) · GW(p)

Reality is the thing that produces our experience and which we are trying to describe with our models.

I'm perfectly happy with the models being testable experimentally, without introducing this untestable thing you call reality.

Stop playing dumb.

I guess this concludes our exchange.

Replies from: JGWeissman
comment by JGWeissman · 2012-05-06T07:15:18.947Z · LW(p) · GW(p)

Sorry for having been rude, but I really believe you should have understood my normal usage of "reality" from context, and I was already annoyed that you asked me to taboo "exist", which you introduced to the conversation.

Nonetheless, I have made substantial criticisms of your position, which you have not responded to. Whether or not you continue this exchange, you should take them into account as you continue your complaints about MWI advocacy.

comment by private_messaging · 2012-05-07T16:39:55.171Z · LW(p) · GW(p)

... Solomonoff induction ...

Totally agreed. The thing is in general incomputable; how much more reason do you need not to trust yourself to do it correctly? Clearly you can't have a process that relies on computing incomputable things correctly. I'm becoming increasingly convinced, either via confirmation bias or via proper updates, that Eliezer skipped a hell of a lot of fundamentals.

comment by shminux · 2012-05-05T18:03:09.124Z · LW(p) · GW(p)

For those interested in the current state of the Born rule research, there is a review in the latest Foundations of Physics.

A note of warning: this journal is heavily skewed towards philosophy and sometimes publishes complete crankery. Even its new chief editor, Nobel prize winner Gerard 't Hooft, writes stuff like this on occasion.

comment by asr · 2012-05-06T16:32:38.363Z · LW(p) · GW(p)

I think Yudkowsky's analysis here isn't putting enough weight on the social aspects. "Science", as we know it, is a social process, in a way that Bayesian reasoning is not.

The point of science isn't to convince yourself -- it's to convince an audience of skeptical experts.

A large group of people with different backgrounds, experiences, etc. aren't going to agree on their priors. As a result, there won't be any one probability for a given idea. Different readers will have different background knowledge, and that can make a given hypothesis seem more or less believable.

(This isn't avoidable, even in principle. The Solomonoff prior of an idea is not uniquely defined, since encodings of ideas aren't unique. You and the reviewers are not necessarily wrong in putting different priors on an idea even if you are both using a Solomonoff prior. The problem wouldn't go away, even if you and the reviewers did have identical knowledge, which you don't.)
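(A rough formal sketch of that non-uniqueness, for reference: relative to a universal prefix machine U, the Solomonoff prior of a hypothesis encoded as a string x is

\[ M_U(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|}, \]

the sum running over programs p that output x. The invariance theorem only guarantees that for two universal machines U and V there is a constant c_{UV}, independent of x, with M_U(x) \le c_{UV} \, M_V(x). That constant can be enormous, so two reasoners who both "use the Solomonoff prior" but pick different reference machines can legitimately assign very different priors to the same idea.)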

Yudkowsky is right that this makes science much more cautious about updating than a pure Bayesian would be. But I think that's desirable in practice. There is a lot of value to having a scientific community all use the same theoretical language and have the same set of canonical examples. It's expensive (in both human time and money) to retrain a lot of people. Societies cannot change their minds as quickly or easily as their members can, so it makes sense to move more slowly if the previous theory is still useful.

Replies from: private_messaging
comment by private_messaging · 2012-05-07T16:45:38.545Z · LW(p) · GW(p)

Another issue is that the process should be difficult to subvert maliciously (or non-maliciously, by rationalization of an erroneous belief). That results in a boatload of features that may be frustrating to those wanting to introduce unjustified, untestable propositions for fun and profit (or to justify erroneous beliefs).

Replies from: asr
comment by asr · 2012-05-07T18:14:20.984Z · LW(p) · GW(p)

Hrm. My impression is that science mostly isn't organized to catch malicious fraud. It's comparatively rare for outsiders to do a real audit of data or experimental method, particularly if the result isn't super exciting. In compensation, the penalties for being caught falsifying data are ferocious -- I believe it's treated as an absolute career-ending move.

I agree that the process is pretty good at squelching over-enthusiastic rationalization. That's an aspect I thought Yudkowsky captured quite well.

Replies from: private_messaging
comment by private_messaging · 2012-05-08T07:23:37.619Z · LW(p) · GW(p)

It is part of the difficulty of subverting it: it is difficult to arrange a scheme with positive expected utility for falsifying data. At the same time, there are plenty of subtle falsifications, such as discarding negative results. And when it comes to rationality: if you have a hypothesis X that is supported by arguments A, B, C, D and is debunked by arguments E, F, G, H, you can count on rational, self-interested agents to put more effort into finding the first four than the last four, since the payoff for the former is bigger. (A real agent's reasoning costs utility, and it is expensive to find those arguments.)

Consider some issue like AI risk. If you can pick out the few reasons why AI would kill everyone, even very bad reasons that rely on some oracular stuff that is not implementable, you are set for life (and you don't even have to invent them; you can pick them out of fiction and simply collect and promote them together). If you can make a few equally good reasons why it would not, that's a pure waste of your time as far as self-interest is concerned. Of course science does not trust you to put in equal effort when doing so is clearly irrational for anyone but the true angels (and for the true angels it is also rational to grab as much money, which would otherwise be ill spent, as easily as they can and donate it to charities etc., so for the purpose of fact-finding you can't trust even the selfless angels).

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-05-09T04:04:55.569Z · LW(p) · GW(p)

It is part of the difficulty of subverting it: it is difficult to arrange a scheme with positive expected utility for falsifying data.

Given that one gets fame for "spectacular" discoveries, not at all, especially in fields like biology, where there are frequently lots of confounding variables that you can use to provide cover.

Replies from: private_messaging
comment by private_messaging · 2012-05-09T06:24:09.884Z · LW(p) · GW(p)

That has always been the problem with experimental science: sometimes you can't really protect against falsification.

Actually, the thing is, given the list of biases, one shouldn't trust one's own rationality, let alone the rationality of other people (if a rationalist trusts his own rationality while knowing of the biases, that's just a new kind of irrationalist). Another issue is that introducing novel hypotheses with "correct priors" lets one introduce a cherry-picked selection of hypotheses that leads to a new hypothesis getting undue confidence it wouldn't have had if all possible hypotheses were considered (i.e. if you want to introduce hypothesis A with undue confidence, you introduce hypotheses B, C, D, E, F... which would raise the probability of A, but not G, H, I, J... which would lower it). A fully rational, even slightly selfish agent would do such a thing. It is not enough to converge once all hypotheses are considered; one has to provide the best approximation at any time. That pretty much makes most methods that sound great in abstract unbounded theory entirely inapplicable.

Also, BTW, science does trust your rationality and does trust your ability to set up a probabilistic argument. But it only does so when it makes sense for you to trust your probabilistic argument: when you are actually doing bulletproof math with no gaps where errors can creep in.

comment by FeepingCreature · 2012-05-05T15:00:15.382Z · LW(p) · GW(p)

I disagree with his statements on the effects of state power. Regulation seems to work well enough over here; I don't know where he gets the unsourced assumption that it doesn't.

comment by Alerus · 2012-05-07T14:22:27.018Z · LW(p) · GW(p)

I disagree with the quoted part of the post. Science doesn't reject your Bayesian conclusion (provided it is rational); it's simply unsatisfied by the fact that it's a probabilistic conclusion. That is, probabilistic conclusions are never knowledge of truth. They are estimations of the likelihood of truth. Science will look at your Bayesian conclusion and say "99% confident? That's good! But let's gather more data and raise the bar to 99.9%." Science is the constant pursuit of knowledge. It will never reach it, but it will demand we never stop trying to get closer.
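As a toy sketch of that "raise the bar" dynamic (the likelihood numbers and the experiment count below are made up purely for illustration, not taken from the post): each additional independent experiment pushes a Bayesian posterior closer to 1, but never all the way there.

```python
# Toy illustration with made-up numbers: repeated Bayes updates push the
# posterior from 50% past 99% and on toward 99.9%, but never to exactly 1.

def update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayes update: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / denominator

posterior = 0.5  # start undecided about hypothesis H
for n in range(1, 6):
    # each experiment's outcome is 4x as likely if H is true as if it is false
    posterior = update(posterior, likelihood_if_true=0.8, likelihood_if_false=0.2)
    print(f"after {n} experiments: P(H) = {posterior:.4f}")
```

After five such experiments the posterior is above 99.9%, yet no finite amount of data makes it exactly 1, which is the sense in which probabilistic conclusions remain estimations of the likelihood of truth rather than knowledge of it.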

Beyond that, I think in a great many cases (not all) there are also some inherent problems in using explicit Bayesian (or other) reasoning for models of reality, because we simply have no idea what the space of hypotheses could be. As such, the best a Bayesian can ever do in this context is give an ordering of models (e.g., this model is better than that model), not definitive probabilities. This doesn't mean science rejects correct Bayesian reasoning, for the reason previously stated, but it would mean that you can't get definitive probabilistic conclusions with Bayesian reasoning in the first place in many contexts.

comment by DanielLC · 2012-05-05T18:51:14.196Z · LW(p) · GW(p)

Libertarianism secretly relies on most individuals being prosocial enough to tip at a restaurant they won't ever visit again.

Libertarianism might rely on individuals generally being that prosocial, but that specific thing isn't necessary. Most jobs don't get tips. There's no reason waiters need them.

Replies from: ZankerH
comment by ZankerH · 2012-05-06T15:00:15.772Z · LW(p) · GW(p)

As I understand it, in the USA waiting staff get paid below minimum wage and are expected to live off tips.

Replies from: nawitus
comment by nawitus · 2012-05-06T20:59:48.785Z · LW(p) · GW(p)

If tipping stopped, waiting staff wages would increase and so would food prices (to pay for the wage increases).