Parapsychology: the control group for science
post by AllanCrossman · 2009-12-05T22:50:06.821Z · LW · GW
Parapsychologists are constantly protesting that they are playing by all the standard scientific rules, and yet their results are being ignored - that they are unfairly being held to higher standards than everyone else. I'm willing to believe that. It just means that the standard statistical methods of science are so weak and flawed as to permit a field of study to sustain itself in the complete absence of any subject matter.
— Eliezer Yudkowsky, Frequentist Statistics are Frequently Subjective
Imagine if, way back at the start of the scientific enterprise, someone had said, "What we really need is a control group for science - people who will behave exactly like scientists, doing experiments, publishing journals, and so on, but whose field of study is completely empty: one in which the null hypothesis is always true.
"That way, we'll be able to gauge the effect of publication bias, experimental error, misuse of statistics, data fraud, and so on, which will help us understand how serious such problems are in the real scientific literature."
Isn't that a great idea?
By an accident of historical chance, we actually have exactly such a control group, namely parapsychologists: people who study extra-sensory perception, telepathy, precognition, and so on.
There's no particular reason to think parapsychologists are doing anything other than what scientists would do; their experiments are similar to those of scientists, they use statistics in similar ways, and there's no reason to think they falsify data any more than any other group. Yet despite the fact that their null hypotheses are always true, parapsychologists get positive results.
This is disturbing, and must lead us to wonder how many positive results in real science are actually wrong.
The point of all this is not to mock parapsychology for the sake of it, but rather to emphasise that parapsychology is useful as a control group for science. Scientists should aim to improve their procedures to the point where, if the control group used these same procedures, they would get an acceptably low level of positive results. That this is not yet the case indicates the need for more stringent scientific procedures.
Acknowledgements
The idea for this mini-essay and many of its actual points were suggested by (or stolen from) Eliezer Yudkowsky's Frequentist Statistics are Frequently Subjective, though the idea might have originated with Michael Vassar.
This was originally published at a different location on the web, but was moved here for bandwidth reasons at Eliezer's suggestion.
Comments / criticisms
A discussion on Hacker News contained one very astute criticism: that some things which may once have been considered part of parapsychology actually turned out to be real, though with perfectly sensible, physical causes. Still, I think this is unlikely for the more exotic subjects like telepathy, precognition, et cetera.
188 comments
Comments sorted by top scores.
comment by AlexMennen · 2009-12-06T17:55:21.467Z · LW(p) · GW(p)
Parapsychologists make a poor control group of scientists because part of their job is collecting evidence that parapsychology works. In science, that step is already done. Biologists do not need to prove that life works, because life exists. Physicists do not need to prove that physics works, because physics, by definition, IS the way the universe works. Einstein did not dream up relativity and then start looking for evidence to support it. He looked at the evidence that was available, and came up with relativity as a way to explain it. Parapsychologists do it the other way around.
Replies from: brazil84, AllanCrossman↑ comment by brazil84 · 2009-12-06T23:14:10.832Z · LW(p) · GW(p)
"Parapsychologists make a poor control group of scientists because part of their job is collecting evidence that parapsychology works."
Why is that their job? In theory, they are just studying the question of psychic phenomena. If a parapsychologist found strong evidence against psychic phenomena, he would be doing his job.
Of course, your real point is that such a parapsychologist would be working himself out of a job. But the same danger is there for biologists and physicists. And of course climatologists.
Replies from: ciphergoth, prase↑ comment by Paul Crowley (ciphergoth) · 2009-12-09T14:53:50.160Z · LW(p) · GW(p)
What, they're going to discover there's no such thing as climate?
Replies from: Curiouskid, brazil84↑ comment by Curiouskid · 2011-09-18T17:30:49.903Z · LW(p) · GW(p)
Why is research into climate funded? I imagine if the scare of global warming were eliminated, there would be much less funding.
Replies from: gjm↑ comment by gjm · 2011-09-18T18:49:52.919Z · LW(p) · GW(p)
Some people have conjectured in the past that we might be in danger of a sort of global cooling. (Note: some things said about this by those who profess to disbelieve in anthropogenic climate change are false.) Understanding large-scale climate phenomena may help to predict natural disasters like hurricanes. The climate is extremely important for life on earth (human life included). I think there would be plenty for climate scientists to do "if the scare of global warming were eliminated".
(In fact, there is near-total agreement among climate scientists about global warming. So probably most changes in climate-scientist opinion on this topic would make for greater uncertainty and therefore more funding...)
[EDITED to add: At least one person has downvoted this; no one has replied to it. I would be interested to know what about it is downvote-worthy; it looks OK to me even on rereading. If I've made some idiotic mistake, I can't fix my brain unless someone points it out to me...]
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2011-10-08T16:17:50.477Z · LW(p) · GW(p)
I suspect the issue is the vague nature of the 'some things said about this' comment. Yes, some things said about it are undoubtedly false, but some quite incisive things said about the conjecture are true!
Replies from: gjm↑ comment by gjm · 2011-10-08T20:40:50.288Z · LW(p) · GW(p)
Oh yes, for sure. That comment was a lazy shorthand for something like this: "Some unscrupulous people have played up past conjectures about global cooling in order to discredit what is now said about global warming, and a great deal of what they have said on this subject is bullshit. By referring in this context to global cooling, I am not endorsing any of that bullshit." But that seemed like too much to cram into a comment that was actually about something else, and no way of making it much shorter sprang to mind.
↑ comment by brazil84 · 2009-12-13T22:01:39.964Z · LW(p) · GW(p)
They might discover that global warming is not a serious threat after all.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2009-12-14T08:38:46.018Z · LW(p) · GW(p)
Yes, I know that's what you meant, but the point of my comment is that unlike with parapsychology, the object of study for climatologists definitely exists, whether or not AGW is real. Any individual climatologist who started to think that AGW might not be real wouldn't think "this is the end for climatology", because climate exists either way.
And of course if they worry that funding for climate research might drop as a result of what they've found, they can console themselves with the knowledge that their new belief allows them to earn twice as much money working for an oil-industry funded body such as the CEI.
Replies from: brazil84↑ comment by brazil84 · 2009-12-14T09:45:17.578Z · LW(p) · GW(p)
I understand your point, but the analogy still has a good deal of validity since the essential point is that the practical consequences of a negative result are to damage the careers of the scientists in question.
"And of course if they worry that funding for climate research might drop as a result of what they've found, they can console themselves with the knowledge that their new belief allows them to earn twice as much money working for an oil-industry funded body such as the CEI."
I disagree. If it turns out that global warming was wildly exaggerated, why would the oil-industry fund any climatology work at all?
Replies from: ciphergoth, taryneast↑ comment by Paul Crowley (ciphergoth) · 2009-12-14T13:13:15.277Z · LW(p) · GW(p)
A researcher who thought it exaggerated would know that they would not personally be in a position to change the long-term trends whether or not they report their conclusions, but they could make a lot of money from the CEI in the short term. Denialists whose credentialed academic specialty is specifically climatology are few (I know of none), so they would be very much in demand.
Replies from: brazil84↑ comment by brazil84 · 2009-12-14T14:01:43.579Z · LW(p) · GW(p)
Ok I understand your point. It seems to me that you are distinguishing between long term and short term consequences. Without getting into an argument about the availability of funding for skeptical global warming research, I will concede the possibility that a climate researcher who is publishing skeptical results may do better in the short term than a parapsychologist who is publishing skeptical results.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2009-12-14T16:34:47.351Z · LW(p) · GW(p)
More than that: a fully credentialed climate researcher who is prepared to back the position of an organisation like the CEI can line their own pockets far more effectively than one who backs the scientific consensus. You'd be celebrated in many quarters as the world's leading authority on the subject, you'd get speaking engagements aplenty, lots of media attention, and more. It's an extraordinary testament to personal integrity that so few go for it - as I say, I know of none who have.
Replies from: brazil84↑ comment by taryneast · 2011-03-26T18:05:49.388Z · LW(p) · GW(p)
If it turns out that global warming was wildly exaggerated, why would the oil-industry fund any climatology work at all?
To reduce their currently extortionate insurance premiums.
Replies from: brazil84↑ comment by brazil84 · 2011-04-02T01:50:55.720Z · LW(p) · GW(p)
Assuming the world discovers and accepts that global warming was wildly exaggerated, how would oil-industry-funded climatology research reduce insurance premiums?
Replies from: taryneast↑ comment by taryneast · 2011-04-03T09:03:07.508Z · LW(p) · GW(p)
I'm told by somebody in the industry that premiums in cyclone areas (notably the rigs in the Gulf of Mexico) are going through the roof right now, as climate change predictions mean that cyclone activity is likely to keep increasing. If climate change were shown to be wildly exaggerated, that could be used to argue for reduced insurance premiums in those regions.
Replies from: brazil84↑ comment by brazil84 · 2011-04-03T09:30:23.830Z · LW(p) · GW(p)
Assuming for the sake of argument that that's true, so what? After that happens, what's the incentive to fund further research?
Replies from: taryneast↑ comment by taryneast · 2011-04-03T10:07:31.611Z · LW(p) · GW(p)
Um, I was responding to "why would the oil industry fund climatology work at all" - my answer is "they might do it if it would reduce their insurance premiums".
I do not postulate that they would continue to fund further research after that.
Replies from: brazil84↑ comment by brazil84 · 2011-04-03T10:15:07.129Z · LW(p) · GW(p)
"Um, I was responding to 'why would the oil industry fund climatology work at all'"
Ok, that's not the question I asked.
Replies from: taryneast↑ comment by taryneast · 2011-04-03T14:08:53.876Z · LW(p) · GW(p)
Looking back at your comment above, that is a word-for-word copy of what you asked. How have I misunderstood your question? Have I taken it out of context? If so - my apologies - and can you supply the correct context?
Replies from: brazil84↑ comment by brazil84 · 2011-04-03T17:51:59.412Z · LW(p) · GW(p)
Lol, yes you took it out of context. Here is the first part of my question:
"If it turns out that global warming was wildly exaggerated"
So the question is about what happens after anthropogenic CO2 triggered global warming is (hypothetically) debunked as a serious threat.
Replies from: taryneast↑ comment by prase · 2009-12-09T20:01:10.708Z · LW(p) · GW(p)
In theory, they are just studying the question of psychic phenomena.
What they do in theory isn't very important. Parapsychologists have their whole discipline at stake. They can't take one well-established effect, gain experience and status by studying it, and then turn to the investigation of a more controversial phenomenon. It's a serious difference from climatology.
Replies from: brazil84↑ comment by brazil84 · 2009-12-13T22:00:07.767Z · LW(p) · GW(p)
I don't think the difference is all that big. If climatologists discover that global warming is not a serious threat after all, it will seriously damage their ability to get funding and prestige.
↑ comment by AllanCrossman · 2009-12-07T18:50:18.548Z · LW(p) · GW(p)
In science, that step is already done.
Only in general, but not for specific questions like: does compound XYZ affect tumour growth?
Replies from: AlexMennen↑ comment by AlexMennen · 2009-12-08T03:30:01.970Z · LW(p) · GW(p)
True, but the people studying whether compound XYZ affects tumour growth are not preselected to believe that it does.
Replies from: wedrifid
comment by AndrewKemendo · 2009-12-06T02:28:04.839Z · LW(p) · GW(p)
In no way do I think that the parapsychologists have good hypotheses or reasonable claims. I also am a firm adherent to the ethos: Extraordinary claims must have extraordinary proofs. However, to state the following:
one in which the null hypothesis is always true.
is making a bold statement about your level of knowledge. You are going so far as to say that there is no possible way that there are hypotheses which have yet to be described which could be understood through the methodology of this particular subgroup. This exercise seems to me to be rejecting these studies intuitively (without study), just from the ad hominem approach to rejection - well, they are parapsychologists, therefore they are wrong. If they are wrong, then proper analysis would indicate that, would it not?
I have never seen a parapsychology study, so I will go look for one. However, does every single study have massive flaws in it?
Replies from: Kaj_Sotala, Blueberry, billswift, AllanCrossman, CronoDAS↑ comment by Kaj_Sotala · 2009-12-06T08:57:40.286Z · LW(p) · GW(p)
I have never seen a parapsychology study, so I will go look for one. However, does every single study have massive flaws in it?
Damien Broderick's Outside the Gates of Science summarizes a number of parapsychology studies, noting that several of the studies do indeed seem quite solid. It doesn't come to any definite conclusion over whether psi phenomena are actually real or if there's just something wrong with our statistical techniques, but it does seem like there might be enough to warrant more detailed study. See also e.g. Ben Goertzel's review of the book.
↑ comment by Blueberry · 2009-12-06T08:34:14.193Z · LW(p) · GW(p)
You are going so far as to say that there is no possible way that there are hypotheses which have yet to be described which could be understood through the methodology of this particular subgroup. This exercise seems to me to be rejecting these studies intuitively (without study), just from the ad hominem approach to rejection - well, they are parapsychologists, therefore they are wrong. If they are wrong, then proper analysis would indicate that, would it not?
This is exactly the point. Parapsychology is one of the very few things we can reject intuitively, because we understand the world well enough to know that psychic powers just can't exist. We can reject them even when proper analysis doesn't indicate that they're wrong, which tells us something about the limitations of analysis.
ETA: Essentially, if the scientific method can't reject parapsychology, that means the scientific method isn't strong enough, not that parapsychology might be legitimate.
Replies from: Yvain, Mitchell_Porter, LauraABJ, Neil↑ comment by Scott Alexander (Yvain) · 2009-12-07T16:46:17.030Z · LW(p) · GW(p)
There are many other things that people have claimed can be rejected intuitively without study through the years.
In the 18th century, everyone knew that real scientific physics only permitted a body to act upon another body through direct contact. When Newton proposed his theory of gravity, many people rejected it as pseudoscientific or magical because it claimed the stars and planets could exert action at a distance, without saying how they did it.
In the 19th century, everyone knew that life was on a different order than mere matter, because obviously you couldn't produce the self-moving and self-regenerating qualities of life with just stuff like you get in rocks and sand.
In the 20th century, everyone knew that the mind was more than just the brain, since simple introspection could determine the existence of a consciousness inexplicable in simple material terms.
The absurdity heuristic is an okay heuristic, but I'd be really really careful before saying something is so absurd we can throw away any contradictory experimental evidence without a glance.
The possibility I give to some sort of psi effect existing (in a nice, scientific way that we can study once we figure out what form of matter/energy forms its substrate) is pretty low, but not zero. I'm not even willing to give it a tiny one in a bajillion probability - remember that people who say they're 99% sure of something are wrong 20% of the time, and that since this issue is "politically" charged, vaguely defined, and possibly affected by knowledge we don't have, this is exactly the sort of thing we'd be likely to be overconfident on. If this was some calibration test, I wouldn't feel too good about placing more than 95% or so on the nonexistence of psi.
And if you're a Bayesian, a couple of good studies should be able to start manipulating that 5% number upwards.
Replies from: MichaelVassar, Sebastian_Hagen↑ comment by MichaelVassar · 2009-12-08T03:38:08.970Z · LW(p) · GW(p)
I used to think that way before I knew about Bayesianism. Once I learned about it I realized that the prior probability for psi was very VERY low - e.g. it's complex and there's no reason to expect it, so one in a bajillion - while the probability of the observed evidence for psi, given what we know about psychology, was well in excess of 50% in the absence of psi, so the update couldn't justify odds greater than two in a bajillion.
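A minimal odds-form sketch of that bound in Python ("a bajillion" is stood in for by an arbitrary placeholder, not a real estimate; the "well in excess of 50%" likelihood is the figure claimed above):

```python
# Odds-form Bayes update for the bound above.
BAJILLION = 10 ** 15                  # placeholder stand-in, not an estimate

prior_odds = 1 / BAJILLION            # odds of psi : no psi
# P(evidence | psi) is at most 1, and P(evidence | no psi) is claimed to be
# "well in excess of 50%", so the likelihood ratio is at most 1 / 0.5 = 2.
max_likelihood_ratio = 1 / 0.5

posterior_odds = prior_odds * max_likelihood_ratio
print(posterior_odds)                 # 2e-15: at most "two in a bajillion"
```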
Replies from: alexflint, Yvain, externalmonologue↑ comment by Alex Flint (alexflint) · 2009-12-08T10:16:40.267Z · LW(p) · GW(p)
One in a bajillion? Guys, the numbers matter. 10^-9 is very different from 10^-12, which is very different from 10^-15. If we start talking about some arbitrarily low number like "one-in-a-bajillion" against which no amount of evidence could change our mind, then we're really just saying "zero" but not admitting to ourselves that we're doing so.
Other than that, I agree with Yvain and have found this to be perhaps the most belief-changing thing so far on LW!
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-11T02:13:27.267Z · LW(p) · GW(p)
It takes only 332 pieces of evidence with likelihood ratios of 2:1 to promote to 1:1 odds a hypothesis with prior odds of 1:googol, that is 10^-100, which would be the appropriate prior odds of something you could describe with around 70 symbols from a 26-letter equiprobable alphabet.
"A bajillion to one" are odds that Bayesian updating can overcome surprisingly quickly - it isn't anything remotely like "no amount of evidence can change my mind". Now odds of one to a googolplex - that might as well be zero, relative to the amount of evidence you could acquire over a human lifetime. But the prior probability of any possibility you can describe over a human lifetime should be much higher than that.
Replies from: roystgnr↑ comment by roystgnr · 2012-02-09T18:52:49.504Z · LW(p) · GW(p)
A nitpick: it takes 332 pieces of mutually independent evidence to perform that level of update.
More confusingly, for these purposes the independence of the evidence depends on what hypotheses you're trying to distinguish with it. E.g. if you're trying to distinguish between "that subject has ESP powers" and "that experiment was random luck", then 332 repetitions of the same experiment will do. If you're trying to distinguish between "that subject has ESP powers" and "that experimenter's facial expressions differ based on what cards he was looking at", then you can't just repeat it; you've got to devise new and different experiments.
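A quick Python check of the arithmetic above (the 10^100 prior, the 2:1 likelihood ratios, and the 26-letter alphabet are the figures given in the parent comment; the rest is just logarithms):

```python
import math

# How many independent 2:1 (one-bit) updates does it take to bring
# prior odds of 1 : 10^100 up to roughly even odds?
bits_needed = 100 * math.log2(10)
print(f"2:1 updates needed: {bits_needed:.1f}")             # ~332.2

# And a random string of how many symbols from a 26-letter equiprobable
# alphabet carries about that much description length?
symbols = bits_needed / math.log2(26)
print(f"symbols from a 26-letter alphabet: {symbols:.1f}")  # ~70.7
```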
↑ comment by Scott Alexander (Yvain) · 2009-12-09T18:50:16.987Z · LW(p) · GW(p)
You're right that I completely missed the Bayesian boat, and I'm going to have to start thinking more before I speak and revise my estimates down to <1%.
But I'm still reluctant to put them as low as you seem to. The anthropic principle combined with large universe says that whatever complexity is necessary for the existence of conscious observers, we can expect to find at least that level of complexity. Questions like consciousness, qualia, and personal identity still haven't been resolved, and although past experience suggests there is probably a rational explanation to this question, it isn't nearly dissolved yet. If consciousness really is impossible without some exotic consciousness-related physics (Penrosean or otherwise), then our universe will have exotic consciousness-related physics no matter how complex they need to be. And since evolved beings have been so proficient at making use of normal physics to gain sensory information, it's a good bet they'd do the same with exotic consciousness-related physics too if they had them...
...is a somewhat hokey argument I just invented on the spot, and I'm sorry for it. But the ease with which I can put something like that together is itself evidence that there are enough possible sides of the issue that hadn't been considered (at least I hadn't considered that one; maybe you've been thinking about it for years) that it needs at least a little more room for error than two in a bajillion (sorry, Alex).
I also disagree with your assessment of the amount of evidence. Have you ever read any good books by intelligent believers in the subject? It's not all John Edward psychic chat shows. I also think you might be double-counting evidence against psi here - psi doesn't exist so we know any apparent evidence must come from human psychology, therefore there never was any apparent evidence in the first place. Or have you read the studies and developed separate explanations for each positive result?
Anyway, let's settle this the LW way. Give me your odds that psi exists, and we can make a bet at them. If it's one in a million, then I'll give a cent to your favorite charity on the condition that you give $10,000 to my favorite charity if psi's shown to exist within our lifetimes (defined however you want; possibly as evidence sufficient to convince any two among Randi, Dawkins, and Eliezer that psi is >50% likely).
Replies from: MichaelVassar, MixedNuts, CarlShulman, FeepingCreature↑ comment by MichaelVassar · 2009-12-10T19:20:33.902Z · LW(p) · GW(p)
One problem with this argument is that if psi exists, we are very bad at using it, and we don't see other organisms using it well either. The world we see appears to be almost completely described by normal physics at worst.
I don't think that I'm double-counting evidence. I certainly know that there can be intelligent believers; after all, MANY intelligent people believe that one is compelled to accept the conclusions of the scientific method over those of the scientific community. Also, beliefs can be compelling for any variety of irrational reasons. The evidence I have seen, though, looks to me like exactly the evidence you would expect given known psychology and no psi. We can surely agree that there is a LOT of evidence that human psychology would create belief in psi in the absence of psi, can't we?
I would set my odds at "top twenty most astoundingly surprising things ever discovered but maybe not top ten". That seems to me like odds of many billions to one against, but not trillions. Unfortunately, the odds for almost any plausible winning conditions occurring without psi being real are much higher, making the bet difficult to judge. I have a standing 10,000 to one bet against Blacklight Power's "Hydrino Theory" with Brian Wang, based on a personal estimate of odds MUCH less than 1-in-10K for "Hydrino Theory", and I'm happy to extend those odds when the odds are still more favorable - but psychotic breaks by two people in a group of three? If the odds per person are 1%, that gives odds of about 1:3300. I'm happy to give those odds on the Dawkins, Randi, Yudkowsky bet and count "psi is actually real" as a rounding error.
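The 1:3300 figure checks out under the stated assumption of independent 1% risks; a small Python sketch:

```python
from math import comb

# Probability that at least two of three independent people each suffer
# a psychotic break, given a 1% chance per person (the figure above).
p = 0.01
p_at_least_two = comb(3, 2) * p**2 * (1 - p) + p**3
print(p_at_least_two)        # ~0.000298
print(1 / p_at_least_two)    # ~3356, i.e. roughly 1:3300 odds
```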
Replies from: Yvain↑ comment by Scott Alexander (Yvain) · 2009-12-11T14:39:15.162Z · LW(p) · GW(p)
Have donated $10 to SIAI (seemed less likely to lose you guys money in transaction fees than $1) with a public comment about the bet. Will decide where you can donate your $33,000 in the unlikely event it proves necessary.
↑ comment by MixedNuts · 2011-07-13T13:22:17.626Z · LW(p) · GW(p)
I'd feel ridiculously overconfident stating a probability of less than 1e-6, yet I don't have the slightest hesitation to take that bet. (Brain sucks at small probabilities.) Condition is any two among {Randi, Dawkins, Eliezer, Vassar, me}, but if one is reported to have developed a new mental illness at least two months before they say psi is real, they don't count.
Also, let's make it purchasing power as of 2011, not dollar amount. Assuming scarcity lasts long enough.
↑ comment by CarlShulman · 2009-12-11T15:03:31.794Z · LW(p) · GW(p)
What if they're dead?
Replies from: Yvain↑ comment by Scott Alexander (Yvain) · 2009-12-11T15:06:22.768Z · LW(p) · GW(p)
Well, then I lose the bet...unless someone contacts their ghosts...in which case I win the bet!
↑ comment by FeepingCreature · 2012-02-12T15:10:54.379Z · LW(p) · GW(p)
Psi doesn't even explain consciousness or qualia.
[edit] Oops, necro. Disregard me.
[edit edit] okay! nevermind that then :D
Replies from: MC_Escherichia↑ comment by MC_Escherichia · 2012-02-12T15:19:14.242Z · LW(p) · GW(p)
I don't think there's a prejudice against replying to old posts around here...
↑ comment by externalmonologue · 2010-11-27T17:14:04.778Z · LW(p) · GW(p)
One in a bajillion? You are saying you actually know how complex psi is without even saying what aspect of psi you are talking about.
We know biology is very complex. So when testing a supplement like creatine, the pseudoskeptic could say "biology is extremely complex. We do not know the mechanism that makes creatine work, so I assign a very low Bayesian probability. Today I feel like a hundred trillion to one".
Keep in mind this is after several studies have shown an effect in the predicted direction whose odds are not easily explained by chance. Indeed, there is nothing wrong with your assertion about complexity, just the subjective part where you assign a number to a phenomenon you are not very familiar with.
Replies from: wedrifid↑ comment by wedrifid · 2010-11-27T17:33:03.747Z · LW(p) · GW(p)
One in a bajillion? You are saying you actually know how complex psi is without even saying what aspect of psi you are talking about.
Bajillion isn't exactly a precise measure. In this context it means 'lots'. That isn't hard to assign to all aspects of psi.
↑ comment by Sebastian_Hagen · 2009-12-07T17:23:26.174Z · LW(p) · GW(p)
In the 20th century, everyone knew that the mind was more than just the brain, since simple introspection could determine the existence of a consciousness inexplicable in simple material terms.
No, they didn't. Superficial research indicates that serious materialism goes back to at least the Enlightenment in the 18th century. And the 20th century? That's not even plausible.
Replies from: nwthomas↑ comment by nwthomas · 2011-07-04T18:18:23.156Z · LW(p) · GW(p)
Good point, with the qualifier that many people (including professional philosophers) presently find themselves unable to wrap their heads around the idea that they have no non-material consciousness. The "argument from absurdity" against materialism is alive and kicking.
↑ comment by Mitchell_Porter · 2009-12-06T08:47:58.596Z · LW(p) · GW(p)
Parapsychology is one of the very few things we can reject intuitively, because we understand the world well enough to know that psychic powers just can't exist.
Do you think it is possible that we are "living in the Matrix"? If so, then you should consider something functionally indistinguishable from psychic powers to be possible.
Replies from: bigbad, Pablo_Stafforini, Blueberry↑ comment by bigbad · 2009-12-07T23:24:43.048Z · LW(p) · GW(p)
If we are, in fact, living in the Matrix, then science has already characterized the rules of the simulation rather well. Barring further interference by the sysadmin/God/whatever, it should continue to operate by mechanistic, semipredictable rules. Science has little to say about one-time interventions from outside observable reality, whether you call them "Matrix hacks", "miracles", or what you will. Regarding such matters, the null hypothesis has yet to be convincingly falsified, but absence of proof is not proof of absence.
Replies from: Strange7↑ comment by Pablo (Pablo_Stafforini) · 2009-12-06T17:43:39.059Z · LW(p) · GW(p)
The hypothesis that we are living in the Matrix is best understood as a metaphysical hypothesis. The various claims made by parapsychologists, however, are not metaphysical claims about the nature of reality, but "scientific" claims about what goes on in reality. It is therefore unclear why such claims would be more probable on the assumption that the Matrix hypothesis is true.
Replies from: Psychohistorian, Jack↑ comment by Psychohistorian · 2009-12-07T04:08:19.498Z · LW(p) · GW(p)
I am not surprised when a video game character consistently summons balls of fire out of nothingness. I would be absolutely astounded to see an actual person do this. This is because the system of rules governing a video game and the system governing a deterministic universe appear to be very, very different.
If we were living in the matrix, this would not be the case. It would not mean that we are necessarily in the kind of video game where there are psychic powers, but it would provide a very clear mechanism through which psychic powers could act. Such a mechanism does not appear possible in a deterministic universe, or at least in the one we seem to occupy.
Replies from: Vladimir_Nesov, wedrifid↑ comment by Vladimir_Nesov · 2009-12-07T05:48:27.081Z · LW(p) · GW(p)
The real world is uncaring, unsupervised. Magic is not just about the world being "complex"; it's about the world containing mechanisms targeting humans specifically, and understanding the situation much like a human would. Being "deterministic" doesn't preclude anything; it's more of a way of seeing things than the way things are.
↑ comment by wedrifid · 2009-12-07T04:25:40.270Z · LW(p) · GW(p)
This is because the system of rules governing a video game and the system governing a deterministic universe appear to be very, very different.
An artificial dichotomy.
Replies from: Zack_M_Davis, Psychohistorian, Psychohistorian↑ comment by Zack_M_Davis · 2009-12-07T04:46:22.272Z · LW(p) · GW(p)
I don't think so. Video games are specifically programmed to create a particular experience for the user. If something goes over the horizon and won't be needed again, it just doesn't get computed. Whereas the real universe seems to be---just the same physics. Everywhere. No complicated ad hoc programming describing levels or characters or points, or translating keypresses into useful actions---no user input at all, come to think of it.
Replies from: SilasBarta, Baughn, NancyLebovitz↑ comment by SilasBarta · 2009-12-07T19:59:36.904Z · LW(p) · GW(p)
If something goes over the horizon and won't be needed again, it just doesn't get computed. Whereas the real universe seems to be---just the same physics. Everywhere.
Not quite. That's what we assume happens -- justifiably! -- because it would be a far more complicated hypothesis to disbelieve in the implied invisible.
However, failing to see these implied invisibles is not itself independent evidence of universal law, just an inference from an Occamian prior. You would fail to see implied invisibles with equal probability whether or not the laws were fully universal.
Interestingly, I explored the question of whether it's possible, if the universe is a simulation, to shut it down by forcing it to do more and more computational work in order to keep fooling us. But, I argue, it turns out that the 2nd law of thermodynamics implies that no matter what observations observers choose to make, it requires no more storage capacity to continue fooling them.
Replies from: gwern, Jack↑ comment by gwern · 2009-12-08T04:22:58.166Z · LW(p) · GW(p)
But, I argue, it turns out that the 2nd law of thermodynamics implies that no matter what observations observers choose to make, it requires no more storage capacity to continue fooling them.
I read this, but I'm a little confused. Conceptually, as a closed system, the demands of the universe are constant, sure, when I imagine it as something like the Game of Life. Are you assuming that any simulator will be a full and perfect emulator, with no optimizations like caches?
Because if optimizations are applied, then it seems you can expand the necessary power by doing things that defeat the optimizations. Caches are ineffective if you keep generating intricately linked cryptographic junk, etc. One might think that no simulating agent would run a simulator whose worst-case requirements are beyond its abilities; but then, we humans routinely use QuickSort and don't mind our kernels over-committing memory...
(Incidentally, I made an estimation of my own for how small our substrate could be: http://www.gwern.net/Simulation%20inferences.html . I concluded that the simulating computer could be as small as a Planck cube.)
Replies from: SilasBarta↑ comment by SilasBarta · 2009-12-08T17:02:33.946Z · LW(p) · GW(p)
Are you assuming that any simulator will be a full and perfect emulator, with no optimizations like caches?
It doesn't rely on that assumption. It's just based on the fact that any time you destroy entropy by forcing some system, from your perspective, to be in fewer possible states, you also allow another system, from your perspective, to be in proportionally more possible states.
The more states something could be in, from your perspective, the less information the simulator has to store to consistently represent it for you.
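A toy bookkeeping sketch of that claim in Python (the two subsystems, the 40-bit sizes, and the 2nd-law rule encoded in `observe` are all invented for illustration, not taken from the argument above):

```python
# The simulator only has to pin down the bits the observer actually knows.
SYSTEM_BITS = 40
known = {"ball": 0, "chemical_bonds": 40}   # bits the observer has pinned down

def storage_needed(known_bits):
    # The simulator stores exactly what the observer could check.
    return sum(known_bits.values())

def observe(known_bits, target, k, waste_sink):
    # Toy 2nd-law rule: learning k bits about `target` makes at least k
    # previously-known bits of `waste_sink` uncertain (waste heat, etc.).
    known_bits[target] = min(SYSTEM_BITS, known_bits[target] + k)
    known_bits[waste_sink] = max(0, known_bits[waste_sink] - k)

before = storage_needed(known)
observe(known, target="ball", k=10, waste_sink="chemical_bonds")
after = storage_needed(known)
print(before, after)   # 40 40 -- the simulator's storage bill is unchanged
```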
Replies from: gwern, Tyrrell_McAllister↑ comment by gwern · 2009-12-08T19:00:49.418Z · LW(p) · GW(p)
I vaguely see what you're getting at - every observation or interaction forces the simulator to calculate what you see, but also allows it to cheat in other areas. But I'm not sure how exactly this would work on the level of bits and programs?
Replies from: Vladimir_Nesov, SilasBarta↑ comment by Vladimir_Nesov · 2009-12-08T20:13:42.471Z · LW(p) · GW(p)
This is a very conceptually interesting question.
↑ comment by SilasBarta · 2009-12-08T19:27:25.393Z · LW(p) · GW(p)
Bah! Implementation issue! :-P
At the level you're asking about (if I understand you correctly), the program can just reallocate the memory for whatever gained entropy, to whatever lost entropy.
Like in the comments section of my blog: if you learn the location of a ball, the program now has to store it as being in a definite location, but you also powered your brain to learn that, so the program doesn't have to be as precise in storing information about the chemical bonds, which were moved to a higher-entropy state.
Replies from: gwern↑ comment by gwern · 2009-12-08T19:37:08.790Z · LW(p) · GW(p)
Spoken like a true theoretician. But it's hard to see an implementation that is optimal in exploiting this memory bound.
I mean, imagine that we have a pocket universe where we can have many numbers (particles?) which all must add up to 1000, and we have your normal programming types like bit, byte/int, integer etc.
If we start out with a single 1000, and then the 'laws of physics' begin dividing it by 10 (giving us 10 100s), how is the simulator going to be smart enough to take its fixed section of RAM and rewrite the single large 1000 integer into 10 smaller ints, and so on down to 1000 1s, which could be single bits?
Is there any representation of the universe's state which achieves these tricks automatically, or does the simulation really just have to include all sorts of conditionals like 'if (changed? x), then if x > 128, convert x Integer; x <= 128 && > 1, convert x int; else convert x bit' in order to preserve the constant-memory usage?
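To make the worry concrete, here is a toy Python calculation using the numbers above (the encoding is just Python's bit_length, not a claim about how a real simulator would store things):

```python
def naive_storage_bits(state):
    # Give each number just enough bits to represent it (at least one),
    # with no smarter joint encoding.
    return sum(max(1, n.bit_length()) for n in state)

states = [
    [1000],        # one big integer
    [100] * 10,    # after one division step
    [1] * 1000,    # fully divided: a thousand single bits
]
for s in states:
    print(sum(s), naive_storage_bits(s))
# The sums stay at 1000, but the space a naive encoding needs goes
# 10, 70, 1000 bits -- the reallocation problem that the hand-written
# conditionals above would have to manage.
```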
Replies from: SilasBarta↑ comment by SilasBarta · 2009-12-08T20:12:39.180Z · LW(p) · GW(p)
I don't think this hypothetical universe is comparable in the relevant ways: it must be capable of representing the concept of an observer, and what that observer knows (has mutual information with), and adhere to the 2nd law of thermodynamics. Which I don't think is the case here.
Replies from: gwern↑ comment by gwern · 2009-12-08T21:45:38.635Z · LW(p) · GW(p)
Wait, there has to be an observer? I thought you were really just talking about entangled wave-functions etc.
Replies from: SilasBarta↑ comment by SilasBarta · 2009-12-09T23:20:26.173Z · LW(p) · GW(p)
No, that's the point Jack brought up. I was only discussing the issues that arise in the hypothetical scenario in which the universe is simulated in an "overworld" and must successfully continue to fool us.
↑ comment by Tyrrell_McAllister · 2009-12-08T19:29:38.504Z · LW(p) · GW(p)
You make an interesting observation. I'm still trying to think it through, so I might not yet be making sense. But, right now, I have the following difficulty with accepting your argument.
Any simulation has "true" physical laws. These are just the rules that govern how in fact the simulation's algorithm unfolds, including all optimizations, etc.
However, we expect, a priori, the ultimate laws of reality to satisfy certain invariances. For example, perhaps we expect the ultimate laws to work identically at different points in real physical space. The true laws of the simulation might not satisfy such invariances with respect to the simulation. For example, the simulation's laws might not work identically at different points in the simulated physical space. [ETA: Optimization makes this likely. The simulation could evolve in a "chunkier" way far from us than it does close to us.]
So maybe this is how we can define what it means to hide the simulated nature of our universe from us: "Hiding the simulation" means "making our universe appear to us as though its laws satisfy all the expected invariances, even though they don't".
Here's the issue that I hope you address:
I'm convinced by your argument that "any time you destroy entropy by forcing some system, from your perspective, to be in fewer possible states, you also allow another system, from your perspective, to be in proportionally more possible states."
Say that, when I start out, system A could be in any one of the states in some state-set X. Then I learn about system B, and so, as you point out, system A could now be in any one of the states in some larger state-set Y, as far as I know.
But what if the larger state-set Y includes states that do not obey the expected invariances? And what if, as I learn more about the universe, the state-set that A's state must be in grows, all right, but eventually consists almost entirely of states that violate our expected invariances?
Wouldn't that amount to discovering the simulated nature of our universe? To avoid this discovery, wouldn't the simulators have to put more resources into making sure that A's set of possible states includes enough states that obey the expected invariances?
Replies from: SilasBarta↑ comment by SilasBarta · 2009-12-08T20:22:32.082Z · LW(p) · GW(p)
Good point -- I've struggled with the same problem, in different terms. Let me know if my statement of the problem matches the point you're making here:
"It's possible to discover, not just particulars about individual systems, but universal laws. These universal laws put a constraint on all future observations, thus reducing the subjective entropy of the universe, without (apparently) needing any corresponding gain of entropy."
It's something I was wondering about when going over the E. T. Jaynes papers and Yudkowsky's Engines of Cognition.
I haven't gotten it resolved in terms of 2nd law and the "subjective entropy" idea, but I think I know how to resolve it in the context of the simulated universe question: basically, if the simulation starts out adhering to the invariances that have to be obeyed (even though they might be more than necessary to fool observers), then it is no additional burden for the observers to notice these invariances.
Though the observers have (apparently) violated the 2nd law -- and this is an area for further research -- the simulator was already expending the computational resources necessary to make the invariances hold. It is an exception to the general principle I derived, in that it's a case where net destruction of entropy requires no additional RAM.
I'm still working on how to resolve the remaining problems, but it shows how discovery of universal physical laws needn't be a problem for the simulator.
Replies from: pengvado, Tyrrell_McAllister↑ comment by pengvado · 2009-12-08T22:07:34.842Z · LW(p) · GW(p)
I'll try to bring your solution back to thermodynamics terms:
The universe always has and always will obey certain invariances, and those are a redundancy in your observations, which (along with any other redundancy that could possibly be derived) is already taken into account when computing information-theoretic entropy. If you had plenty of data already to derive the invariance but just hadn't previously noticed it, that lack of logical omniscience is why the 2nd law is an inequality. Including the invariance into your future predictions isn't a net reduction in entropy. It just removes some of the slack between the exact phase-volume preserving transforms of physics and the upper bounds that a computationally bounded agent has to use.
↑ comment by Tyrrell_McAllister · 2009-12-08T21:32:09.749Z · LW(p) · GW(p)
Your restatement looks exactly right, and your solution would resolve the issue I raised.
One question is, how much optimization can the simulators do if the true laws are as invariant as they "ought to be"? For example, if the universe has to evolve according to the same rules everywhere, that would seem to keep it from evolving in a chunkier way far away from us, which closes off a potential way to save on computation.
Replies from: SilasBarta↑ comment by SilasBarta · 2009-12-09T23:32:09.969Z · LW(p) · GW(p)
The simulator can maintain conservation of e.g. mass, while not churning through the computations required for e.g. gravity until people see enough that they can check if gravity isn't holding.
This would save on having to do the gravity calculations. Then, when people, armed with their knowledge of gravity, start looking in more places, the universe must pick a configuration and stick with it -- but at that point, all of their observations have the original problem of freeing up memory somewhere else in the form of higher entropy.
On second thought, that doesn't work either, since discovery of gravitational laws will constrain their existing predictions of where the planets will be, and this destruction of entropy is unrelated to the entropy needed to create it, which was your objection to begin with.
My best guess at this point is that any resolution will ultimately hinge on a finer-grained information-theoretic analysis of the discovery of universal laws. That is, as you gain evidence pointing to the validity of laws you notice, you assign a high-but-not-unity probability to the laws continuing to hold. Each time your probability goes up, that corresponds to a particular reduction in the entropy of your probability distribution.
But, as they say, "to make inferences you have to make assumptions". There is some entropic cost to making the assumptions necessary for the model with invariants to work, and this must be properly accounted for. I'll continue to research this.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-12-10T11:40:23.850Z · LW(p) · GW(p)
This would save on having to do the gravity calculations. Then, when people, armed with their knowledge of gravity, start looking in more places, the universe must pick a configuration and stick with it -- but at that point, all of their observations have the original problem of freeing up memory somewhere else in the form of higher entropy.
This is wrong (even assuming that previous coarse-grained observations don't matter). If you are changing the model by refining it, choosing one option of more detailed data arbitrarily, then this process on the world-model isn't reversible: you can't "un-choose" that arbitrary data and remain able to reconstruct it (unless the data is not arbitrary after all and only depends on the world model that is already there). As a result, no magical increase in entropy occurs, and no resources get saved: it's not an operation on the subsystems within the modeled world, it's an operation on the system of whole-world model within the world of modelers.
Also, consider the fact that ultimate laws can never be discovered, strictly speaking: there will always be uncertainty, and maybe there won't even be asymptotically certain candidates, only turtles always deeper and deeper.
↑ comment by Jack · 2009-12-07T21:30:18.313Z · LW(p) · GW(p)
When I was first introduced to quantum mechanics my professor taught us the Copenhagen Interpretation. I was immediately reminded of occasional moments in video games where features of a room aren't run until the player gets to the room. It seemed to me that only collapsing the wave function when it interacted with a particular kind of physical system (or a conscious system!) would be a really good way to conserve computing power and that it seemed like the kind of hack programmers in an fully Newtonian universe might use to approximate their universe without having to calculate the trajectories of a googolplex (ed) subatomic particles.
Can anyone tell me if this actually would save computing power/memory?
Replies from: SilasBarta, pengvado, matt↑ comment by SilasBarta · 2009-12-07T21:48:49.856Z · LW(p) · GW(p)
The answer basically comes down to the issue of saving on RAM vs. saving on ROM. (RAM = amount of memory need to implement the algorithm, ROM = amount of memory needed to describe the algorithm)
Video game programmers have to care about RAM, while the universe (in its capacity as a simulator) does not. That's why programmers generate only what they have to, while the universe can afford to just compute everything.
However, I asked the same question, which is what led to the blog post linked above, where I concluded that you wouldn't save memory by only doing the computations for things observers look at: first, because they check for consistency and come back to verify that the laws of physics still work, forcing you to generate the object twice.
But more importantly (as I mentioned) because the 2nd law of thermodynamics means that any time you gain information about something in the universe, you necessarily lose just as much in the process of making that observation (for a human, it takes the form of e.g. waste heat, higher-entropy decomposition of fuels). So by learning about the universe through observation, you simultaneously relieve it of having to store at least as much information (about e.g. subatomic particles).
(This argument has not been peer-reviewed, but was based on Yudkowsky's Engines of Cognition post.)
↑ comment by pengvado · 2009-12-08T06:59:36.079Z · LW(p) · GW(p)
Assuming they don't make any approximations other than collapse, yes a classical computer simulating Copenhagen takes fewer arithmetic ops than simulating MWI. At least until someone in the simulation builds a sufficiently large coherent system (quantum computer), at which point the simulator has to choose between forbidding it (i.e. breaking the approximation guarantee) or spending exponentially many arithmetic ops.
Copenhagen (even in the absence of large coherent subsystems) does not take significantly less memory than MWI: both are in PSPACE.
Otoh, if the simulator is running on quantum-like physics too, then there's no asymptotic difference in arithmetic either. And if you're not going to assume that the simulator's physics is similar to ours, who says it's less rather than more computationally capable?
↑ comment by matt · 2009-12-07T22:53:46.283Z · LW(p) · GW(p)
googleplex = Google Inc's HQ
googolplex = 10^(10^100)
Replies from: Blueberry↑ comment by Baughn · 2009-12-07T19:43:44.822Z · LW(p) · GW(p)
If you implemented the laws of physics on a computer, using lazy evaluation, then whatever is "over the horizon" from the observer process(es) would not be computed.
However, this would not in the least be observable from inside the system. If the observer moved to observe you, your past would be "retroactively" computed.
I'm not claiming this is very likely to be the case, since at the very least it requires an additional agent - the observer process - to cause anything to happen at all, but lazy evaluation isn't some weird ad-hoc concept; it's a basic concept in computer science that also happens to make programs shorter, a lot of the time.
Hopefully not sufficiently shorter that a universe using lazy evaluation, with one random point in space somewhere as the observer, is less complex than one using strict evaluation. That... would be impossible for us to detect, of course, but I believe it'd still have consequences.
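Lazy evaluation of this sort is easy to sketch; here is a minimal, purely hypothetical world model in Python where a region is only computed when an observer first looks at it, and memoised so that revisits stay consistent:

```python
import random

class LazyWorld:
    """Regions are generated only on first observation, then cached so
    later observations stay consistent with earlier ones."""

    def __init__(self, seed=0):
        self._cache = {}
        self._rng = random.Random(seed)

    def observe(self, region):
        if region not in self._cache:
            # "Retroactively" compute the region the first time anyone looks.
            self._cache[region] = self._rng.choice(["plains", "forest", "ocean"])
        return self._cache[region]

world = LazyWorld()
print(world.observe((3, 7)))   # computed on demand
print(world.observe((3, 7)))   # same answer: consistency preserved
# Regions "over the horizon" cost nothing until someone actually looks.
```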
↑ comment by NancyLebovitz · 2009-12-07T10:58:03.870Z · LW(p) · GW(p)
If the universe we're living in is a work of art or a game, it's made for minds with much greater processing power than we've got. It isn't obvious that they'd be satisfied with something as crude as a video game.
Replies from: Baughn↑ comment by Baughn · 2009-12-07T19:46:07.742Z · LW(p) · GW(p)
How about a video game where you attempt to control a pre-singularity global civilization by directly playing a few thousand randomly selected humans simultaneously, while not letting this fact be noticed by the NPCs?
It's interesting to wonder what sort of games post-humans might play, though I hope it won't be anything quite that ethically objectionable.
Replies from: wedrifid↑ comment by wedrifid · 2009-12-08T01:47:50.531Z · LW(p) · GW(p)
It's interesting to wonder what sort of games post-humans might play, though I hope it won't be anything quite that ethically objectionable.
Or, from the perspective of a pre-post-human, quite that dull. If I am going to play that kind of sim I'm going to pick the 'elves' faction.
Replies from: Baughn, Lightwave↑ comment by Baughn · 2009-12-10T12:35:03.584Z · LW(p) · GW(p)
Considering that there exist fork-lift simulation games, I hesitate to claim that anything is too dull to be made.
Replies from: wedrifid↑ comment by wedrifid · 2009-12-10T12:54:58.656Z · LW(p) · GW(p)
Considering that there exist fork-lift simulation games, I hesitate to claim that anything is too dull to be made.
You're serious? That scares me.
Replies from: Baughn↑ comment by Psychohistorian · 2009-12-07T05:20:12.644Z · LW(p) · GW(p)
If you can understand how the two are truly the same, you are far wiser than anyone I've ever met, and I would very much like to subscribe to your newsletter. I hope the first issue explains how this dichotomy is invalid.
Replies from: wedrifid↑ comment by wedrifid · 2009-12-07T05:37:59.429Z · LW(p) · GW(p)
A video game can be deterministic or not in the same way any other kind of universe can. "Video game" vs "deterministic" is just a silly comparison. I don't know what word to use in place of 'deterministic', I just don't think that one is the right one.
Replies from: Blueberry↑ comment by Blueberry · 2009-12-07T06:40:08.582Z · LW(p) · GW(p)
I'm thinking "algorithmic". That is, the universe, or a video game, follows a certain algorithm to determine what happens next, whether the algorithm is the laws of physics or a computer program. Algorithms aren't necessarily deterministic: we could have a step for "generate a truly random (quantum) number".
↑ comment by Psychohistorian · 2009-12-07T05:16:20.863Z · LW(p) · GW(p)
An artificial dichotomy.
Just plain, "no."
There is, to my knowledge, exactly zero evidence indicating that the creation and execution of the laws governing the universe resembles that of video games in any way. There's some sense in which the term "system" applies to both, I admit, but that's about it, and "system" is a pretty broad word.
Replies from: Pavitra↑ comment by Pavitra · 2009-12-07T05:22:15.863Z · LW(p) · GW(p)
You mean, besides the predictive power of the mathematical formalizations of Occam's Razor, as opposed to a linguistic or pathetic formulation?
The universe looks very falsifiably like a computer program.
↑ comment by Jack · 2009-12-06T19:58:20.035Z · LW(p) · GW(p)
Huh? "Metaphysics" refers to an incredibly wide variety of claims. But I'd say that metaphysics tries to answer questions about reality that aren't the kind of questions that can be answered by experimental science. Since we lack a good method for answering these questions, our confidence in metaphysical claims is usually substantially lower than it is for empirical claims. But why should we think all metaphysical questions are radically different from scientific questions such that the answer to one can't influence our estimations of the other? Offhand I can think of a number of metaphysical hypotheses that have been greatly affected by scientific knowledge and vice versa -- materialism, substance dualism, determinism and indeterminism, free will, eternalism and philosophies of time, etc.
In this case it seems rather obvious that if we are "living in the Matrix" the probability that the basic laws of physics are complicated rather than simple is dramatically higher.
Replies from: Pablo_Stafforini↑ comment by Pablo (Pablo_Stafforini) · 2009-12-06T23:25:32.642Z · LW(p) · GW(p)
I never denied that our assessment of an empirical claim may be influenced by the metaphysical views we hold. I simply noted that, once the Matrix hypothesis is understood as a metaphysical hypothesis, it is unclear why believing that we live in the Matrix should increase our credence in the various claims of parapsychology.
Replies from: Jack↑ comment by Jack · 2009-12-06T23:45:37.135Z · LW(p) · GW(p)
I have no idea what your argument actually is. Why does it matter whether or not the Matrix hypothesis is a metaphysical hypothesis?
Replies from: Pablo_Stafforini↑ comment by Pablo (Pablo_Stafforini) · 2009-12-07T00:24:17.543Z · LW(p) · GW(p)
My original comment was a reply to Mitchell Porter, who suggested that parapsychology would somehow receive support from the Matrix hypothesis. I replied by saying that this would not be true, or at least not clearly, if that hypothesis is understood as a claim about the ultimate nature of reality.
To take another example, suppose someone argued Berkeleyan idealists should be more open to psychic phenomena, since we are all ideas in the mind of God. I would reply that this is not so, since the fact that the world is ultimately made of mind has in itself no implications about whether certain kinds of mental phenomena take place within that world.
Replies from: Jack↑ comment by Jack · 2009-12-07T01:04:20.680Z · LW(p) · GW(p)
The ability of certain collections of atoms to communicate large amounts of information to other collections of atoms over vast distances without there being any detectable emissions is an incredibly complex power. Complex entities are a priori improbable compared to simple entities. You need some kind of creation mechanism to make them probable: with biological systems we have evolution, with pocket watches and jet planes we have human inventors. If you accept a metaphysical hypothesis that involves an intelligence creating the universe -- programmers or God -- you have a mechanism for making complex entities probable. That is why the Matrix hypothesis makes psychic phenomena more likely.
Replies from: wedrifid↑ comment by wedrifid · 2009-12-07T01:39:59.230Z · LW(p) · GW(p)
Complex entities are a priori improbable compared to simple entities.
This remains the case no matter what the universe is made of. All evidence suggests that psychic mechanisms are not available to us in our current mode of existence, whatever that may be. That evidence doesn't change until you get more.
At least, that seems to me to be the point Ben is making.
Replies from: Jack↑ comment by Jack · 2009-12-07T01:58:16.932Z · LW(p) · GW(p)
Complex entities are a priori improbable compared to simple entities.
This remains the case no matter what the universe is made of.
Yes, but not no matter how the universe was created. The Matrix hypothesis includes a claim that the universe was created by some intelligence and that makes psychic phenomena substantially more plausible.
That doesn't mean all religious people have to believe in psychic phenomena or even that they should. If there is no evidence for psychic phenomena then there is no evidence for psychic phenomena. But if you think the universe was created, claims of psychic phenomena should be less absurd on their face.
Replies from: wedrifid↑ comment by wedrifid · 2009-12-07T02:16:26.643Z · LW(p) · GW(p)
Yes, but not no matter how the universe was created. The Matrix hypothesis includes a claim that the universe was created by some intelligence and that makes psychic phenomena substantially more plausible.
Another position that could be taken is "the evidence suggests that if we are living in a matrix scenario then it is probably one of the ones without matrix psychic powers". That is, assuming rational reasoning without granting a counter-factual premise. The evidence can then be considered to have a fixed effect on the probability of psychic powers. Whether it causes you to also lower your probability for a Matrix or to alter your description of probable Matrix type would be considered immaterial.
Again, this is just my impression of Ben's position. He'll correct me if I'm wrong. For my part I don't care about Matrixes (especially No. 2. I walked out of that one in disgust! Actually, I do care about the 'dodge this!' line. It's infuriating.)
Replies from: Jack↑ comment by Jack · 2009-12-07T05:23:32.843Z · LW(p) · GW(p)
This all started when Mitchell Porter responded to Blueberry's claim that we know, intuitively, that psychic phenomena are just not possible. I'm not quite sure I know just what Blueberry was talking about. But his estimate of the probability that psychic phenomena exist was zero, and not just because of parapsychology's failure to provide convincing evidence but because of our understanding of the world. Mitchell provides Blueberry with a hypothesis that is consistent with what we know about the world but under which the existence of psychic phenomena is not prohibitively improbable.
None of this changes the fact that finding evidence of psychic phenomena should cause us to revise our probability of their existence up, and that not finding evidence should cause us to revise it down. But if your probability is zero, and especially if your probability is zero for reasons other than the failure of parapsychology, a hypothesis with P>0 under which P(psi) is >0 looks like information you need to update on.
Ben says it isn't clear why this is so. Well, creation makes complex, unselected entities more probable. But maybe I should wait to have this argument with him.
As far as the movie goes, it is all downhill right after Neo wakes up in the gooey pink tub and sees all the other people hooked into the Matrix. The whole movie should have taken place in the Matrix and kept us in the dark about what it really was until the very end. Would have been way cooler that way.
Replies from: wedrifid↑ comment by wedrifid · 2009-12-07T05:43:26.904Z · LW(p) · GW(p)
But if your probability is zero, and especially if your probability is zero for reasons other than the failure of parapsychology, a hypothesis with P>0 under which P(psi) is >0 looks like information you need to update on.
Once your probability is zero is it even possible to update away? That'd more be 'completely discarding your entire understanding of the universe for reasons that cannot be modelled within that understanding'. If something is impossible then something else, no matter how unlikely, must be the truth. This includes the hypothesis "every thought I have that suggests the p(0) hypothesis must be true is the product of cosmic rays messing with my brain."
Replies from: Jack, Blueberry↑ comment by Jack · 2009-12-07T07:44:56.127Z · LW(p) · GW(p)
I've always been really confused by this, but it isn't clear that an event with P=0 is an impossible event unless we're talking about the probability of an event in a finite set of possible events. (Edit again: You can skip the rest of this paragraph and the next if you are smarter than me and already get continuous probability distributions. I'm obviously behind today.) This is how it was explained to me: Think of a dart board with a geometric line across it. That line represents probability space. An event with P=.5 is modeled by marking the middle of the line. If someone throws a dart at the line, there is an equal chance that it lands at any point along the line. However, at any given point the probability that the dart lands there is zero.
I think any particular complex entity, event or law can be said to have a probability of zero of existing, absent a creator or natural selection or some other mechanism for enabling complexity. Of course this is really counterintuitive, since our evolved understanding of probability deals with finite sets of possibilities. Also it means that 'impossible' can't be assigned a probability. (Edit: Also, the converse is true. The probability that the dart lands anywhere other than the spot you pick is 1, so certainty can't be mapped as 1 either.)
Also, imperfect Bayesians will sometimes assign less than ideal probabilities to things. A perfect Bayesian would presumably never wrongly declare something impossible because it could envision possible future evidence that would render the thing possible. But regular people are going to misinterpret evidence and fail to generate hypotheses, so they might sometimes think something is impossible only to later have its possibility thrown in their faces.
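A small numerical illustration of the dart-board point (a sketch assuming the landing position is uniform on [0, 1]; the function name is made up for the example):

```python
# Sketch: for a continuous uniform variable, every exact point has probability 0,
# yet every interval of positive width has positive probability.
from fractions import Fraction

def interval_probability(a, b):
    """P(a <= X <= b) for X ~ Uniform(0, 1)."""
    a, b = max(Fraction(a), Fraction(0)), min(Fraction(b), Fraction(1))
    return max(b - a, Fraction(0))

print(interval_probability(Fraction(1, 4), Fraction(3, 4)))  # 1/2
print(interval_probability(Fraction(1, 2), Fraction(1, 2)))  # 0: a single exact point
# "Impossible" (landing outside [0, 1]) and "probability zero" (landing exactly
# at 1/2) get assigned the same number, which is the point being made above.
```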
Replies from: Eliezer_Yudkowsky, wedrifid↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-07T08:25:49.377Z · LW(p) · GW(p)
Interesting point. Since physics does appear on the surface to be continuous, I can't rule out continuous propositions. Perhaps the amended saying should read "0 and 1 are not probability masses, and 0 is not a probability density."
Replies from: Steve_Rayhawk, wedrifid↑ comment by Steve_Rayhawk · 2009-12-07T16:22:41.837Z · LW(p) · GW(p)
Oh. I was expecting your belief to be as with infinite-set atheism: that we never actually see an infinitely precise measurement.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-07T19:34:27.150Z · LW(p) · GW(p)
We don't, but what if there are infinitely precise truths nonetheless? The math of Bayesianism would require assigning them probabilities.
↑ comment by wedrifid · 2009-12-07T11:50:04.828Z · LW(p) · GW(p)
I've always been really confused by this but it isn't clear that an event with P=0 is an impossible event unless we're talking about the probability of an event in a finite set of possible events. This is how it was explained to me: Think of a dart board with a geometric line across it. That line represents probability space. An event with P=.5 is modeled by marking the middle of the line. If someone throws a dart at the line there is an equal chance that it lands at any point along the line. However, at any given point the probability that the dart lands there is zero.
And, having assigned p(A) = 0 to such an event A I will not be able to rationally update away from zero without completely discarding my former model. There is no evidence that can cause me to rationally update p(A) = 0 to something else ever. Discard it and overwrite with something completely unrelated perhaps, but never update.
↑ comment by wedrifid · 2009-12-07T11:53:36.064Z · LW(p) · GW(p)
I've always been really confused by this but it isn't clear that an event with P=0 is an impossible event unless we're talking about the probability of an event in a finite set of possible events. This is how it was explained to me: Think of a dart board with a geometric line across it. That line represents probability space. An event with P=.5 is modeled by marking the middle of the line. If someone throws a dart at the line there is an equal chance that it lands at any point along the line. However, at any given point the probability that the dart lands there is zero.
And, having assigned p(A) = 0 to such an event A I will not be able to rationally update away from zero without completely discarding my former model. There is no evidence that can cause me to rationally update p(A) = 0 to something else ever. Discard it and overwrite with something completely unrelated perhaps, but never update. p(A) is right there as the numerator!
(But yes, I take your point and tentatively withdraw the use of 'impossible' to refer to p=0.)
ETA: Well, maybe you're allowed to use some mathematical magic to cancel out the 0 if p(B) = 0 too. But then, the chance of that ever happening is, well, 0.
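In symbols (a minimal sketch, using nothing beyond Bayes' theorem):

```latex
P(A \mid E) \;=\; \frac{P(E \mid A)\,P(A)}{P(E)} \;=\; \frac{P(E \mid A)\cdot 0}{P(E)} \;=\; 0
\qquad \text{for any evidence } E \text{ with } P(E) > 0.
```

So no observation that itself has positive probability can move a prior of exactly zero; the model can only be discarded and replaced from outside, which is the point above.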
Replies from: Jack↑ comment by Jack · 2009-12-07T13:13:08.154Z · LW(p) · GW(p)
Er, my bad. I missed your point. I see it now, duh.
So my friend thinks something S has a probability of zero, but I know otherwise and point out that it is possible given an assumption which I know my friend believes has a .1 chance of being true. He says "Oh right. I guess S is possible after all." What has just happened? What do we say when we see the dart land at a specific point on the line?
Replies from: pengvado, wedrifid↑ comment by pengvado · 2009-12-07T13:46:37.161Z · LW(p) · GW(p)
What has just happened?
Your friend had incorrectly computed the implications of his prior to the problem in question. On your prompting he re-ran the computation, and got the right answer (or at least a different answer) this time.
Perfect Bayesians are normally assumed to be logically omniscient, so this just wouldn't happen to them in the first place.
What do we say when we see the dart land at a specific point on the line?
In order to specify a point on the line you need an infinite amount of evidence, which is sufficient to counteract the infinitesimal prior. (The dart won't hit a rational number or anything else that has a finite exact description.)
Or if you only have a finite precision observation, then you have only narrowed the dart's position to some finite interval, and each point in that interval still has probability 0.
↑ comment by wedrifid · 2009-12-07T15:08:08.955Z · LW(p) · GW(p)
So my friend thinks something S has a probability of zero, but I know otherwise and point out that it is possible given an assumption which I know my friend believes has a .1 chance of being true. He says "Oh right. I guess S is possible after all." What has just happened?
You wasted a great gambling opportunity.
Pengvado gives one good answer. I'll add that your friend saying something has a probability of zero most likely means a different thing than what a Bayesian agent means when it says the same thing. Often people give probability estimates that don't take their own fallibility into account without actually intending to imply that they do not need to. That is, if asked to actually bet on something they will essentially use a different probability figure that incorporates their confidence in their reasoning. In fact, I've engaged with philosophers who insist that you have to do it that way.
What do we say when we see the dart land at a specific point on the line?
"Did not! Look closer, you missed by 1/infinity miles!"
↑ comment by Blueberry · 2009-12-07T06:45:46.326Z · LW(p) · GW(p)
Well, if you believe in Cartesian skepticism (that is, that we might be in the Matrix), then your probability for anything can't ever be zero. Say the probability that this world is an illusion within another world is epsilon: in that case anything could be true. So the lowest probability we can assign to anything is epsilon. EY has a post on how 1 and 0 aren't probabilities.
If you observed psychic powers with no possibility of cheating, you should probably conclude that something metaphysically weird is going on: you're in a dream, you're insane, or there's a hole in the Matrix.
Replies from: wedrifid↑ comment by wedrifid · 2009-12-07T12:03:45.530Z · LW(p) · GW(p)
Well, if you believe in Cartesian skepticism (that is, that we might be in the Matrix), then your probability for anything can't ever be zero. Say the probability that this world is an illusion within another world is epsilon: in that case anything could be true. So the lowest probability we can assign to anything is epsilon. EY has a post on how 1 and 0 aren't probabilities.
Find the probability p(A | B) where B = "nothing weird like what the Cartesian sceptics talk about is going on". That gets rid of the Matrix issue. Then it is just a matter of whether or not you want to let dart players divide by infinity all willy-nilly.
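To spell out the relationship between this move and Blueberry's epsilon (a one-line sketch, writing A for the psi claim and B for "nothing Cartesian-weird is going on"):

```latex
P(A) \;=\; P(A \mid B)\,P(B) \;+\; P(A \mid \neg B)\,P(\neg B) \;\ge\; P(A \mid \neg B)\,\varepsilon
```

So even if P(A | B) is set to zero, the unconditional P(A) stays positive whenever the skeptical scenarios keep probability ε > 0 and grant psi any chance at all.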
↑ comment by Blueberry · 2009-12-06T09:11:39.848Z · LW(p) · GW(p)
Do you think it is possible that we are "living in the Matrix"? If so, then you should consider something functionally indistinguishable from psychic powers to be possible.
Yes, but then we would have to give up on the scientific method anyway, because the laws of physics would be whatever the administrators of the Matrix felt like changing them to.
Replies from: SilasBarta, Jack↑ comment by SilasBarta · 2009-12-06T17:14:35.380Z · LW(p) · GW(p)
What Jack said -- you can still notice regularities in the Matrix even if it's sometimes capriciously tampered with. That is, these overlords do not automatically make our universe a high-entropy white noise bath.
I discussed some of the implications of a simulated universe and the implications of the 2nd law of thermodynamics on it here.
↑ comment by LauraABJ · 2009-12-06T12:55:36.001Z · LW(p) · GW(p)
Exactly! I guess Allan needs to explain further why parapsychology is bunk. As an example, a person 'reading the mind' of another person a mile away without the emission of any kind of detectable electromagnetic wave or signal capable of traveling that far is in violation of the laws of physics as we know them (and if people did emit such signals, it would be intensively studied). For this to be true, physics itself would need to be complicated on a level that would specifically allow this phenomenon to occur, which seems very, very unlikely. To quote Michael Vassar, "Magic is the hypothesis that physics is complicated."
We should expect positive results from the field of parapsychology, since so many people (in total over the years) are trying to prove it exists, and there is an extreme positive results bias. Thus by chance positive results will be obtained and published, while negative results are largely ignored or not even submitted (I assume a 'scientist' trying to prove parapsychology wants to do so, and so may only bother submitting a paper on the topic if the results are positive).
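A toy illustration of this point (hypothetical numbers, not a model of the actual parapsychology literature):

```python
# If every null hypothesis is true and each study uses a 5% significance
# threshold, positive results still appear; publishing only the positives
# then makes the field look successful.
import random

random.seed(0)
ALPHA = 0.05
N_STUDIES = 1000   # hypothetical number of studies attempted over the years

# Under the null, each study's p-value is uniform on [0, 1].
p_values = [random.random() for _ in range(N_STUDIES)]
positives = [p for p in p_values if p < ALPHA]

print(f"{len(positives)} of {N_STUDIES} studies are 'significant' by chance alone")
# Expected count: ALPHA * N_STUDIES = 50. If only these get written up and
# submitted, the published record consists entirely of false positives.
```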
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2009-12-07T09:51:56.244Z · LW(p) · GW(p)
...Charles Honorton and his colleagues drew together all the forced-choice experimental precognition experiments reported in English between 1935 and 1987, publishing their findings in the December 1989 Journal of Parapsychology. The combined results were impressive: 309 studies contributed to by 62 senior authors and their associates, nearly two million individual trials made by more than 30,000 subjects. (In a properly conservative culling, all the experimental work of both Rhine's chosen but subsequently disgraced successor, Walter J. Levy, and S.G. Soal, once a famous specialist in time-displacement psi tests, was excluded; both were known to have cheated in at least some experiments.) Overall, the cumulation is highly significant - 30 percent of studies provided by 40 investigators were independently significant at the 5 percent level. Yet this was not due to a suspicious handful of successful researchers: 23 of the 62 (37 percent) found overall significant scoring.
By the same token, admittedly, this means 63 percent failed to show significant psi. But [...] [i]f one hundred studies are done, averaging as many as thirty-eight correct calls instead of the twenty-five due to chance, then, surprisingly, we should only expect to find among that one hundred "about 33 [statistically] significant studies ... and a 30% chance that there would be 30 or fewer!" Here's why: The scattergun variance that arises simply from chance would mask most of the extra correct calls. This fact would remain in force even if the responders were picking up their extra hits through hidden radio receivers rather than psi! It's just what happens with the statistics of phenomena that have low power. [...]
Well, could this 37 percent success rate be due to the "file drawer"? Hardly. Honorton's estimate required forty-six unreported chance-level experiments for each of those in the meta-study, including those that themselves gave no significant support for the paranormal hypothesis. It seems highly unlikely that such a trove of dull experiments exists [...] Nor were the results due to an excessive contribution from a few specialist parapsychologists doing so many precognition studies that their non-scoring rivals were swamped. Strikingly, if all the investigators "contributing more than three studies are eliminated, leaving 33 investigators, the combined z [number of standard deviations found] is still 6.00" - with an associated probability of chance coincidence of somewhat more than one in a billion.
The individual effect sizes were all over the place, so Honorton and his coauthor, Diane C. Ferrari, unceremoniously threw out all the studies with unusually large deviations from the mean. [...] "Outcomes remain highly significant. Twenty-five percent of the studies (62/248) show overall significant hitting at the 5% level." Maybe the quality of studies explains the persistence of apparent anomalies? [...] if anything, the significance of the results climbed as quality improved. [...] What's more, the "effect size" had persisted over more than fifty years. This measure compensates for the different sample sizes in various studies: technically, it divides the z score by the square root of the number of trials in each study.
-- Damien Broderick, Outside the Gates of Science
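As a quick check of the quoted figure (a sketch assuming the combined z of 6.00 is referred to a standard normal distribution; whether the test was one- or two-tailed is not stated in the excerpt):

```python
# Tail probability corresponding to a combined z of 6.00.
from scipy.stats import norm

print(norm.sf(6.00))      # one-tailed: ~9.9e-10, just under one in a billion
print(2 * norm.sf(6.00))  # two-tailed: ~2.0e-9, roughly one in five hundred million
```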
Replies from: Jack, CarlShulman, LauraABJ↑ comment by Jack · 2009-12-07T10:14:25.001Z · LW(p) · GW(p)
Would it really surprise anyone here if, say, 10 percent of parapsychologists are either rigging experiments, hiding negative results or falsifying data? 20%?
Thirty-seven percent.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2009-12-07T16:13:47.283Z · LW(p) · GW(p)
What would be the incentive? Forging results for highly public performances that allowed you to make money off people, sure. But for results published in obscure journals, when even academics in well-respected fields may need to fight tooth and claw for their next yearly funding? In a field that won't even get you the respect of most other academics, and might very well ruin your scientific reputation? Trying to prove a view that doesn't have powerful ideological backers pouring money into it the way creationists do? And with the number of fake researchers apparently staying roughly even for a period of fifty years, judging from the way the effect size hasn't changed?
Replies from: Jack, CarlShulman↑ comment by Jack · 2009-12-07T17:50:18.494Z · LW(p) · GW(p)
And with the number of fake researchers apparently staying roughly even for a period of fifty years, judging from the way the effect size hasn't changed?
That right there is a really good point I didn't think of. As for motive, my impression is that a lot of parapsychologists are trying to demonstrate the truth of beliefs that are incredibly significant to them-- their new age spirituality is at stake. For that matter, they've dedicated their lives to the subject: if there are no psychic phenomena, they have literally spent their lives studying nothing. You might as well ask why theologians never come up with arguments disproving the existence of God. But your point about consistency makes this all moot. I'll check out the book.
Replies from: AllanCrossman↑ comment by AllanCrossman · 2009-12-07T18:43:12.147Z · LW(p) · GW(p)
why theologians never come up with arguments disproving the existence of God
Well if they do they get called philosophers of religion instead...
↑ comment by CarlShulman · 2012-03-16T00:49:31.324Z · LW(p) · GW(p)
What would be the incentive?
To get more funding for their work, more fame within the parapsychology community, and to make it more likely that the world at large will realize the truth via "fake-but-accurate" experiments. Some parapsychologists pay for their own experiments, using resources garnered from a "day job" in some other field, but many rely on donations from wacky psi-enthusiasts (people who also get excited about ghosts, "subtle energies" and so forth), or selling psi-controlled meditation lamps. Many others think that it's critically important for mainstream funding sources to provide grants to parapsychologists (such as themselves) to do the work they find interesting and important.
Under those circumstances, a psychic believer could come up with all sorts of justifications:
I have to publish these "fake but accurate" experiments to convince others of the effects that I KNOW are really there, and thus gain enough resources to get definitive proof. After all, surely those dishonest skeptics and materialists (who regularly misrepresent the existing literature, and deceive the broader scientific community about the great work done in parapsychology) are doing the same thing, and if only one side 'enhances' its data then the truth will lose out.
↑ comment by CarlShulman · 2012-03-16T01:24:02.103Z · LW(p) · GW(p)
Honorton's estimate required forty-six unreported chance-level experiments for each of those in the meta-study, including those that themselves gave no significant support for the paranormal hypothesis.
Note that this is a bogus calculation: it says that if there was no publication bias, so that unpublished studies were just as likely to show positive results as published ones, then adding the stated number of chance studies would "dilute" the results below a threshold significance level. But of course the whole point of publication bias is the enrichment of the file-drawer with negative results. See this paper by Scargle. You need far fewer studies in the file-drawer given the presence of bias. Further, various positive biases will be focused in the published literature, e.g. people doing outright fraud will normally do it for an audience.
The number of studies needed also collapses if various questionable research practices (optional stopping, post hoc reporting of subgroups as separate experiments, etc) are used to concentrate 'hits' into some experiments while misses can be concentrated in a small file drawer.
Parapsychologists counter that the few attempts to audit for unpublished studies (which would not catch everything) have not found large skew in the unpublished studies, but these inflated "fail-safe" statistics are misleadingly large regardless.
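For concreteness, here is a rough sketch of the kind of "fail-safe" calculation being criticised (this is Rosenthal's classic version built on Stouffer's combined z, not necessarily the exact formula Honorton used, and the example numbers are invented):

```python
# Rosenthal-style "fail-safe N": how many unpublished null studies (mean z = 0)
# would be needed to drag a Stouffer combined z below the 5% one-tailed cutoff.
# It assumes the file drawer averages exactly zero effect -- precisely the
# assumption Scargle criticises, since selective publication implies the file
# drawer is enriched with negative results, so far fewer hidden studies suffice.
import math

def stouffer_z(z_scores):
    return sum(z_scores) / math.sqrt(len(z_scores))

def fail_safe_n(z_scores, z_crit=1.645):
    k = len(z_scores)
    return max(0.0, (sum(z_scores) / z_crit) ** 2 - k)

# Hypothetical example: 20 published studies, each with z = 2.0
zs = [2.0] * 20
print(stouffer_z(zs))   # ~8.94: a "highly significant" combined result
print(fail_safe_n(zs))  # ~571 zero-mean studies needed under the stated assumption
```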
↑ comment by LauraABJ · 2009-12-07T14:24:47.349Z · LW(p) · GW(p)
"Honorton's estimate required fourty-six unreported chance-level experiments for each of those in the meta-study, including those that themselves gave no significant support for the paranormal hypothesis."
Why is this at all unlikely? This is a 52 year span of time, and who knows how many times each of these (only 62) 'scientists' ran the trials or tweaked the procedure before they decided they had a set of data worth submitting. Who knows how many people looked for these phenomena, didn't find them, and gave up without submission? Even without outright fraud (which I wouldn't doubt), people lie to themselves. I've worked with scientists who had evidence that their previously obtained results were bunk and submitted them anyway... 'maybe the retest was flawed...' The significant effect that was found may just be the threshold at which an investigator needs to see (or fake) results to submit a paper. There's the answer to the question Allan originally posed...
Also, on another note, not all 'forced choice' tests are conducted in the same way. Some of them involve the person looking at the card being in the same room as the guesser, and well, it's not hard to imagine ways of getting a score above chance like that.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2009-12-07T16:05:01.929Z · LW(p) · GW(p)
Why is this at all unlikely? This is a 52 year span of time
309 times 46 is 14,214, which divided by 52 equals approximately 273 unpublished studies per year. I haven't seen any figures on how many studies were conducted for e.g. a specific experimental paradigm in psychology during that time, so I can't say for certain how plausible this is or isn't. It does sound a bit high considering that parapsychology hasn't exactly been the best-funded field around, though it might have had more money in the 1930s. Does anyone have numbers?
Replies from: LauraABJ↑ comment by LauraABJ · 2009-12-07T19:13:53.555Z · LW(p) · GW(p)
I'm not saying there were 14k unpublished completed full studies; I'm suggesting that what got published was already biased. There is room for selection bias at every level of a study, including which trials and which methods are finally taken, written up, and submitted. If the 'scientists' are trying to prove that psi exists, they can find it, one way or another. Fraud isn't even required, just wishful thinking. The consistency of the effect is interesting, but may only be measuring the psychological phenomenon of deliberate self-deception -- ah, we've discovered the threshold deviation from chance at which people will believe their own crap hasn't been tampered with by their own meddling.
Think about the alternative explanation: if the forced-choice test is run properly, the subject guesses the order in which the 5 symbols in a 25-card deck will appear before the deck has been shuffled. The deck is then shuffled by machine (or associate) in a different room, and the order of the cards is examined. Now, how do you propose the subject is entangled with the card-shuffling machine and deck, without violating current physical law, such that he can predict the order? This is magical thinking, with no basis in reality as we know it. Unless there is a pattern in the card-shuffling machine and some people are very aware of it due to practice with it... but that is hardly psi.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2009-12-07T20:39:28.974Z · LW(p) · GW(p)
Think about the alternative explanation
I'm not saying that psi must be real, only that it seems to merit a closer look than most people in this thread have been implying. Yes, it does seem rather unlikely that psi would exist, which is why I'm still undecided myself. But the fact that we can't come up with any physical explanation for it doesn't mean that it couldn't be real. As Yvain pointed out, Newton's theories may at the time have seemed like magical thinking as well. There could be some physical mechanism we're just not aware of, but which the brain has nonetheless evolved to take advantage of.
Or then it might just be a sign of our statistical methods being flawed, made worse by psi researchers being insufficiently rigorous in their methods.
Replies from: LauraABJ, scav, Vladimir_Nesov↑ comment by LauraABJ · 2009-12-08T01:30:23.567Z · LW(p) · GW(p)
"I'm not saying that psi must be real, only that it seems to merit a closer look than most people in this thread have been implying."
I strongly disagree. Psi has been looked at very intensively for a very long time, and the best it can yield is that it's not completely statistically insignificant. No theories have been posed as to how it works, it hasn't been quantified (ie, how far away, in what time frame can the subject predict the future), and it cannot be demonstrated reliably and repeatedly from even a few individuals who could then be studied more elaborately. Even one person who could always predict the order of a deck of cards would be fascinating. At some point, you just have to say a line of research does not merit further study.
In the meantime, giving these theories credence wastes time and resources, and leads people to think they can believe anything they want about the world, including the outstanding religious dogma, since, hey, you never know.
↑ comment by scav · 2009-12-09T13:30:52.015Z · LW(p) · GW(p)
Here's a closer look: to accept psi, you would have to reject evolutionary biology.
It would be such a humungous advantage to communicate telepathically, see the future or remote locations, or manipulate the physical world by thinking, that there's no way evolution wouldn't have optimised the f* out of it by now.
We don't wonder whether birds have wings, or whether dogs have a sense of smell. That we can wonder whether we might have psychic powers means we DON'T, to a very high probability indeed.
Replies from: timtyler, Kaj_Sotala↑ comment by Kaj_Sotala · 2009-12-09T16:00:52.455Z · LW(p) · GW(p)
Just because an ability would be useful doesn't mean that evolution could (or would, if reaching that ability required several intermediate steps with very low fitness advantages) optimize it without limit.
The ability to digest literally everything we put in our mouths would be useful as well, but the fact that we don't have that doesn't mean we need to reject evolution.
Replies from: scav↑ comment by scav · 2009-12-11T16:07:06.287Z · LW(p) · GW(p)
Voted up for making me think harder.
I'm not talking about an ability (like digesting cellulose) which would be really advantageous but we don't have and would require a lot of unlikely steps. The non-null hypothesis of human psychic powers is that we do already have them and ancient humans did too. Yet we don't seem to have evolved psychic abilities that are even detectable by now.
Compare: the abilities to cope with milk and beer in our diet have been evolving in humans since the invention of dairy farming and brewing (a few thousand years ago?). There is large population variation in these digestive abilities after that short time.
Would the selection pressure in favour of telepathy be that much less than for drinking beer?
Replies from: Blueberry, Strange7↑ comment by Blueberry · 2009-12-13T07:38:06.579Z · LW(p) · GW(p)
The problem here is that you're assuming a) psychic abilities would have some degree of heritability, instead of being random accidents that aren't passed on genetically, and b) that psychic abilities can vary in degree, so that there could be selection pressure to make them larger, instead of being binary.
Also consider that psychic abilities in small amounts could have detrimental effects on fitness: for instance, they could make you more sensitive to bad moods, more temperamental, or even insane.
↑ comment by Strange7 · 2014-02-05T05:42:47.527Z · LW(p) · GW(p)
Beer consumption has all sorts of implications for social interaction and waterborne disease, and in some environments, there are no close substitutes. Digestive efficiency is a major factor in survival, one way or another; not being able to cope with the food and drink you've got can kill you, and synthesizing a lot of tricky enzymes you don't need (or, equivalently, hosting intestinal flora which aren't pulling their weight) can also kill you.
Telepathy, on the other hand, doesn't seem to involve enough of the body to have significant metabolic effects one way or another, and is unreliable and vague even for the best performers. What life-or-death/reproduce-or-don't outcomes would it be exerting selection pressure through?
Replies from: CCC, Jiro↑ comment by CCC · 2014-02-05T06:59:52.818Z · LW(p) · GW(p)
What life-or-death/reproduce-or-don't outcomes would it be exerting selection pressure through?
The ability to find someone willing to reproduce with you? Or a heightened ability to persuade someone to do so? Earlier warning of malicious intent from a potential murderer?
↑ comment by Jiro · 2014-02-05T06:48:22.760Z · LW(p) · GW(p)
Small statistical effects accumulate over evolutionary timescales. If telepathy is unreliable and vague, but is more reliable than chance and less vague than making stuff up, it will be selected for, even if the effects of telepathy are very difficult to detect on an individual scale.
↑ comment by Vladimir_Nesov · 2009-12-08T10:33:13.395Z · LW(p) · GW(p)
I'm not saying that psi must be real, only that it seems to merit a closer look than most people in this thread have been implying.
This does mean estimating it to be much more probably real than seems reasonable at this point.
↑ comment by Neil · 2009-12-06T13:59:55.687Z · LW(p) · GW(p)
If parapsychology is studying the patently non-existent, then the fact that parapsychologists don't typically spend their time debunking their own subject might suggest they are not up to par in some way, as a group, with "the rest of" science - unless you concede that other branches of science would also carry on in the face of total collapse in the credibility of their subject.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-06T15:49:00.766Z · LW(p) · GW(p)
The nonexistence of psychic powers is less patently obvious than the truth of many-worlds in physics, so there is no proof that parapsychologists are less rational than average physicists. They are studying a widely despised subject, but that if anything should raise our estimate of their level.
That said, it's entirely possible that, in reality, parapsychologists are lower-level. But we should not be so quick to assume this. And it remains that other sciences may also tend to contain some low-level people. Scientific protocols for saying when a theory has been verified are not supposed to rely on such things.
Replies from: alexflint↑ comment by Alex Flint (alexflint) · 2009-12-08T10:25:14.780Z · LW(p) · GW(p)
What is this "level" attribute you refer to? Does it mean intelligence or something more?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-08T10:31:30.106Z · LW(p) · GW(p)
Those little numbers that appear above people's heads. You can't see them?
Replies from: Strange7↑ comment by billswift · 2009-12-06T04:07:33.073Z · LW(p) · GW(p)
This http://www.susanblackmore.co.uk/Articles/si87.html isn't a study, it's Susan Blackmore's article discussing 10 years of research attempting to demonstrate psi phenomena.
↑ comment by AllanCrossman · 2009-12-06T10:28:37.208Z · LW(p) · GW(p)
However to state the following: "one in which the null hypothesis is always true" is making a bold statement about your level of knowledge.
OK. But the point about what we can conclude about regular science stands even if this is only mostly correct.
comment by Morendil · 2011-01-08T10:11:54.425Z · LW(p) · GW(p)
See also Cosma Shalizi's The Neutral Model of Inquiry.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2017-04-18T04:43:01.640Z · LW(p) · GW(p)
moved here
comment by prase · 2009-12-09T20:30:06.615Z · LW(p) · GW(p)
There's no particular reason to think parapsychologists are doing anything other than what scientists would do; their experiments are similar to those of scientists, they use statistics in similar ways, and there's no reason to think they falsify data any more than any other group. Yet despite the fact that their null hypotheses are always true, parapsychologists get positive results.
This seems self-contradictory. If they get positive results by methods we approve of (as you write, nothing other than what scientists do), what entitles us to dismiss those results? If we admit flaws in the methods of science (publication bias etc.) which allow us to explain the positive results of parapsychology away, why do we need a control group to test whether science works well?
In short: if scientific method is what is being tested, by what method do we establish the falsity of parapsychology in the first place?
Replies from: timtyler↑ comment by timtyler · 2009-12-09T21:11:45.232Z · LW(p) · GW(p)
Parapsychology is obviously a load of old nonsense based on wishful thinking.
Replies from: prase↑ comment by prase · 2009-12-10T19:44:27.087Z · LW(p) · GW(p)
Not that I disagree, but when one wants to make a serious inquiry with control groups and formal analysis of evidence, "obviously" is a word which should rather be omitted.
Replies from: HiddenTruth↑ comment by HiddenTruth · 2009-12-25T20:30:28.249Z · LW(p) · GW(p)
I agree with prase and would like to point out that Academia and Science are not the same thing. Academia is the establishment based on commonly and popularly accepted scientific truth, while Science is a method to achieve objective truth from empirical evidence. The use of the words "nonsense" and "absurdity" is common to Academia as well as to propagators of all faith-based establishments. The use of parapsychology as a control group would need more than a faith-based assumption of the null hypothesis being true in every case. Serious inquiry would have to be made into every case of parapsychology.
Replies from: erniebornheimer↑ comment by erniebornheimer · 2011-11-30T21:12:31.583Z · LW(p) · GW(p)
I agree with HiddenTruth and prase. The original post is flawed, because it starts with a perfectly good idea: "if there were a group that 'did science' but was always wrong, it would be a good control group to compare to 'real science'", but then blows it by assuming parapsychologists are indeed always wrong.
FWIW, I too believe parapsychologists are probably almost always wrong, but so what? Who cares what I believe? No one does, and no one should (without evidence), and that's the point.
comment by PhilGoetz · 2009-12-10T18:40:56.643Z · LW(p) · GW(p)
The idea is good. I'm afraid that it may be interpreted as meaning that we need to increase our publication standards from 95% confidence intervals to 98% confidence intervals. I think scientists already have a dangerously strong bias to reject anything that fails to meet a 95% confidence interval. If someone has a good idea, with good theoretical reasoning behind it; and they run some experiments but don't hit 95%, it's still worth considering.
There are also all sorts of data-collection tasks which are routinely thrown out if they fall below 95% confidence, when they shouldn't be. People doing any sort of genomics work routinely fail to report gene associations at less than 95% confidence. The fact is that, when we're taking millions of pieces of data and putting them into a computer program to compute reliability scores, ALL data should be saved and used. Most of the information scientists produce is in the large mass of low-confidence predictions. There is much more information in 100,000 50%-confidence predictions than in a dozen 95%-confidence predictions.
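One way to make the "more information" claim concrete (a sketch with an assumed base rate; the numbers are illustrative, not from any real genomics data set): score a calibrated prediction by its KL divergence from the prior.

```python
# Information (in bits) carried by a calibrated prediction of confidence q about
# a binary event whose prior base rate is p: the KL divergence D(q || p).
# With a low base rate (here, an assumed 1% of candidate gene associations being
# real), even 50%-confidence predictions carry a couple of bits each, so a large
# pile of them can contain far more total information than a handful at 95%.
import math

def kl_bits(q, p):
    return q * math.log2(q / p) + (1 - q) * math.log2((1 - q) / (1 - p))

BASE_RATE = 0.01  # assumed prior probability that a candidate association is real

per_50 = kl_bits(0.50, BASE_RATE)   # ~2.3 bits per prediction
per_95 = kl_bits(0.95, BASE_RATE)   # ~6.0 bits per prediction
print(100_000 * per_50)  # ~230,000 bits in total
print(12 * per_95)       # ~72 bits in total
```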
Replies from: Cyan↑ comment by Cyan · 2009-12-10T18:53:36.183Z · LW(p) · GW(p)
I agree that all data should be saved, and that there's much more information in 100,000 50%-confidence predictions than in a dozen 95%-confidence predictions. But ask a biologist which they'd prefer (ETA: I have actually done this, more or less) and they'll take the dozen 95%-confidence predictions, because they're just going to turn around and use bog-standard low-throughput experimental techniques to dig deeper. From the biologists' decision theory perspective, false positives are a lot more costly than false negatives.
Replies from: PhilGoetz↑ comment by PhilGoetz · 2009-12-16T03:22:48.282Z · LW(p) · GW(p)
That's why we need to replace biologists with robots. Like this one.
Replies from: Cyan↑ comment by Cyan · 2009-12-16T04:13:13.739Z · LW(p) · GW(p)
That approach only works because yeast has been subjected to intense investigation by low-throughput techniques, providing a huge knowledge base that constrains and guides the automated investigation. (It also helps that yeast doesn't do alternative splicing.) So it's not so much "replacing" as "building upon".
comment by Alex Flint (alexflint) · 2009-12-08T11:12:22.413Z · LW(p) · GW(p)
I would like to see more discussions on LW that, like this one, concern the rational conclusion that can be drawn from a particular body of research (as opposed to discussion of rationality itself). I will endeavour to start some such discussions.
comment by timtyler · 2009-12-09T15:30:31.295Z · LW(p) · GW(p)
Re: "There's no particular reason to think parapsychologists are doing anything other than what scientists would do"
Sure there is. They are studying something that doesn't exist - so they are probably stupider than most scientists, more likely to believe nonsense, have a relatively poor history of understanding experimental results - and so on.
comment by arundelo · 2009-12-07T00:02:24.980Z · LW(p) · GW(p)
A discussion on Hacker News contained one very astute criticism: that some things which may once have been considered part of parapsychology actually turned out to be real, though with perfectly sensible, physical causes.
[What gwern said.](http://news.ycombinator.com/item?id=978927)
Replies from: bigbad, Douglas_Knight, ciphergoth↑ comment by bigbad · 2009-12-07T23:11:36.475Z · LW(p) · GW(p)
As Feynman said, one of the characteristics of the truth is that, as you look more closely at it, it gets clearer. Most of the parapsych crowd tends to report results that have a 1% probability of occurring randomly, after having done hundreds of experiments and failing to report the rest. The difference is that, for a real effect, the level of confidence in the best experiment doesn't simply scale with the number of experiments: a real effect should show millions-to-one odds in a few trials, once solid experimental procedures have been devised.
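A rough illustration of how fast evidence should pile up for a real effect (a sketch with hypothetical hit rates loosely echoing the figures quoted upthread):

```python
# Suppose the chance hit rate is 25% but the true hit rate is 38%. The one-sided
# binomial p-value against chance falls below one in a million within a few
# hundred trials, without needing hundreds of separate experiments.
from scipy.stats import binomtest

CHANCE, TRUE = 0.25, 0.38
for n in (100, 300, 600):
    hits = round(TRUE * n)
    p = binomtest(hits, n, CHANCE, alternative="greater").pvalue
    print(n, hits, p)
```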
↑ comment by Douglas_Knight · 2009-12-07T03:10:46.134Z · LW(p) · GW(p)
A discussion on Hacker News contained one very astute criticism: that some things which may once have been considered part of parapsychology actually turned out to be real, though with perfectly sensible, physical causes.
Susan Blackmore is an example of someone who did research on both psi and out-of-body experiences, because she saw them as connected.
Replies from: arundelo↑ comment by arundelo · 2009-12-14T05:40:27.109Z · LW(p) · GW(p)
This was interesting, but it isn't quite what I'm looking for, which is cases of something being considered paranormal, and rejected by the scientific community, and then later being accepted by the scientific community and explained in a scientific way.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2011-07-17T16:43:32.729Z · LW(p) · GW(p)
Or even more, something that meets those criteria and was actually investigated by parapsychologists.
↑ comment by Paul Crowley (ciphergoth) · 2011-07-17T16:42:06.851Z · LW(p) · GW(p)
FTR and for convenience, gwern said:
Hold on. This is a historical question; electromagnetism explains stuff that could be considered paranormal as well, but that stuff wasn't. Were the mirror neuron phenomena actually noticed, classified under parapsychology, and investigated by parapsychologists, producing results that had any connection to their eventual uptake by mainstream psychology/neurology?
The link doesn't mention anything like that; if anything, the example of the woman who thought mirror-touching was perfectly normal & universal suggests the opposite.
comment by LauraABJ · 2009-12-06T02:03:10.618Z · LW(p) · GW(p)
If we assume that approximately p% of results are false positives, and that only positive results are published, then the question becomes how many scientists are trying to prove (and disprove) the same hypothesis. If 1000 scientists are trying to prove that Drug Y slows the progression of Alzheimer's disease, and a p of 0.01 is required for publication, then we need to see more than 10 independent publications supporting this result before we should believe it. Things would be so much easier if negative results were given as much weight as positive ones... Can anyone think of a good way of calibrating the publication bias towards positives?
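A quick check of that threshold (a sketch assuming 1000 independent teams testing a drug that in fact does nothing, with a p < 0.01 publication cutoff):

```python
# Probability of seeing at least k false-positive publications out of 1000
# null studies, each "significant" with probability 0.01.
from scipy.stats import binom

N, ALPHA = 1000, 0.01
for k in (5, 10, 15, 20):
    print(k, binom.sf(k - 1, N, ALPHA))   # P(at least k false positives)
# At least 10 false positives happens more often than not, while at least 20
# is rare (well under 1%), so a handful of "independent confirmations" of a
# null effect is roughly what chance plus publication bias predicts.
```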
Replies from: rps↑ comment by rps · 2009-12-07T16:05:37.379Z · LW(p) · GW(p)
This is what they do in the wretched hive of scum and villainy that is medical research: http://www.cochrane-net.org/openlearning/HTML/mod15-3.htm
comment by RobinHanson · 2009-12-06T22:03:27.199Z · LW(p) · GW(p)
We could make more productive use of this control if we could compare some other methods as applied to both it and some more standard topic. So far we see that standard academic and "scientific" institutions find results even when there are none. Are there some other institutions that might do better?
Replies from: wedrifid↑ comment by wedrifid · 2009-12-07T01:06:20.291Z · LW(p) · GW(p)
Are there some other institutions that might do better?
You would seem to be well qualified to make such a proposal. I am sure you have put thought into how a truth-seeking institution could be constructed that relied significantly upon prediction markets, or perhaps something similar but more specialised to the context. Ideas, and/or links to some suggestions you have already made?
comment by Curiouskid · 2011-09-18T17:27:12.328Z · LW(p) · GW(p)
Just out of curiosity, has anybody read into the evidence for parapsychology? The best book I've found is "The Conscious Universe" by Dean Radin. There is a critical review of the book here: http://www.skepticreport.com/sr/?p=537
comment by onthefence · 2011-01-10T15:46:27.948Z · LW(p) · GW(p)
Since when is pre-judging the validity or importance of a subject--in the spirit of "it's obviously nonsense so why even bother studying it?"--considered a "scientific" stance to take? It's dogmatic comments like these that sadly lead many non-scientists to have a less than favorable view on the seeming "objectivity" of the field and its researchers.
Replies from: David_Gerard, TheOtherDave, jimrandomh↑ comment by David_Gerard · 2011-01-10T16:36:26.984Z · LW(p) · GW(p)
The big problem with parapsychology as a field is that science is all of a piece. Thus, physics is consistent with chemistry, biology and so on. So the question is not "what knowledge can we derive on the assumption that we know nothing?" - but "what knowledge can we derive given what we know already?" And we know really quite a lot about areas that directly impinge on this question.
Basic physics leaves it not looking good for parapsychology as a field in any way. Sean M. Carroll points out that both human brains and the spoons they try to bend are made, like all normal matter, of quarks and electrons; everything else they do is properties of the behaviour of quarks and electrons. And normal matter, made of quarks and electrons, interacts through the four forces: strong, weak, electromagnetic and gravitational. Thus either it's one of the four known forces or it's a new force, and any new force with range over 1 millimetre must be at most a billionth the strength of gravity or it will have been captured in experiments already done. So either it's electromagnetism, gravity or something weaker than gravity.
This leaves no force that could possibly account for telekinesis, for example. Telepathy would require a new force much weaker than gravity and a detector in the brain evolved to use it for signaling. Precognition, the receipt of information transmitted back in time, would violate quantum field theory.
What this means is that these ideas have pretty much no chance of being right even before we test them directly.
Treating parapsychology as having zero chance of working rather than "but there's still a chance, right?" of working does have the philosophical problem that it would require dismissing out of hand any positive results, rather than properly evaluating them as merely ridiculously unlikely. However, this is unlikely to be a practical problem while well-designed tests show no positive results, and the only tests showing any positive results tend to exhibit the experimental design skills of Daryl J. Bem.
(The above is large chunks of the RationalWiki article, but I wrote those chunks too ;-) )
↑ comment by TheOtherDave · 2011-01-10T17:06:17.045Z · LW(p) · GW(p)
It's dogmatic comments like these that sadly lead many non-scientists to have a less than favorable view on the seeming "objectivity" of the field and its researchers.
Do you actually believe that?
That is, if a majority of scientists started instead saying "Actually, we've looked into this, here's a calculation of the expected frequency of non-fraudulent positive results from properly run parapsychological experiments given an assumption of no actual parapsychological phenomena, and here's a survey of results in the field. Notice that the actual positive results are not exceeding the expected positive results given that assumption?" (with the associated responsibility for maintaining such things instead of working on something else), you're suggesting that a majority of the folks who dismiss the objectivity of scientists would go "Oh! Well, all right, then," and decide the scientists really are objective after all?
That would really surprise me, if it happened. I expect instead that the majority of those folks are far more likely to continue dismissing scientists, they'll just have some other reason for doing it.
Replies from: deeb↑ comment by deeb · 2011-07-16T18:38:21.769Z · LW(p) · GW(p)
actually, this is precisely how I would like people to discuss parapsychology.
What, are you going to defend science or rationalism using unscientific or irrational tactics just because you think that is going to work better? Even if that wasn't detrimental to your own agenda in the long run, you would need to ask yourself at that point what makes you different from any politician defending any ideology at all. Parapsychology isn't "wrong" because it is obvious to the bigwigs in your camp (the "rationalists") that it is wrong. It is "wrong" (or, unsubstantiated) because and only because positive results are not exceeding the positive results expected assuming the null hypothesis. If positive results DID exceed these, we WOULD need to recognize there is an effect. Actually, most people here would probably just see this as proof that we do indeed live in a simulation and would actually be pretty cool with that as they had half-hoped that we did all along.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-07-16T18:58:29.230Z · LW(p) · GW(p)
You're making a lot of assumptions about me on the basis of, as far as I can tell, no data. (Either that, or you're using "you" to refer to someone other than me.)
For what it's worth, I agree that this is an excellent way to discuss unsubstantiated theories, although I would also say that after a certain point the onus is on those presenting the theory to show that their methodology and results are meaningfully different (and better) than previously disproved attempts to do the same. Otherwise, each new re-presentation of the same theory becomes, not part of the process of discovery, but rather just a tedious nuisance.
What I was doubting (and still doubt) is that doing so would change the way science is thought about among those who dismiss it out of hand.
↑ comment by jimrandomh · 2011-01-10T16:29:51.626Z · LW(p) · GW(p)
The consensus belief that parapsychology is nonsense is not a pre-judgment; it is an informed judgment based on overwhelming evidence.
Replies from: deeb↑ comment by deeb · 2011-07-16T18:46:46.032Z · LW(p) · GW(p)
...this overwhelming evidence coming from parapsychology studies, and parapsychology studies only.
Before people did these, all we had was overwhelming anecdotal evidence in favour of parapsychology. Every culture, nay, every family is chock-full of reliable witnesses that give accounts of how they personally experienced paranormal phenomena. In the face of such persistent, recurring reports, you can hardly blame people for wanting to investigate. It is only after you do studies under laboratory conditions that you can begin to show that this anecdotal evidence is a product of selection bias.
While I am personally quite convinced that selection bias is all that is needed to explain the phenomena, this doesn't take away the immense cultural significance of the phenomena that were selected in this way. In this sense, parapsychology is not "wrong", it's just cultural (as opposed to supernatural). At the end of the day, science doesn't attach value to anything. It is just capable of describing what arises from what. Meaning arises from subjective choice alone, and as humans we are much more interested in meaning and made-up patterns than in a full list of all hydrogen atoms in the biosphere, no matter how "objective".
Replies from: khafra↑ comment by khafra · 2014-02-24T02:16:53.665Z · LW(p) · GW(p)
The consensus belief that parapsychology is nonsense is not a pre-judgment; it is an informed judgment based on overwhelming evidence.
...this overwhelming evidence coming from parapsychology studies, and parapsychology studies only.
No, the evidence against precognition comes from overwhelming evidence in favor of a model of physics in which the arrow of time doesn't reverse. The evidence against telepathy comes from studies of communication channels between remote humans that don't show anything outside sound waves and visual-frequency electromagnetic radiation.
It's the constraints imposed by an underlying model we're extremely certain of; not the direct experiments on the parapsychological theory in question.
comment by erica · 2009-12-18T12:29:52.869Z · LW(p) · GW(p)
If you start with Darwin, add Jung, Sheldrake and Dawkins, parapsychology becomes interesting. How do cultures evolve? What is a mentality? Why do prodigies talk of 'catching a moment in time'? Do they catch a moment in space or a happy coincidence of chemical patterning in the brain?
Replies from: wedrifid↑ comment by wedrifid · 2009-12-18T12:41:24.076Z · LW(p) · GW(p)
Do they catch a moment in space?
Yes, it is one of humanity's favourite pastimes.
Replies from: erica↑ comment by erica · 2009-12-18T13:28:49.931Z · LW(p) · GW(p)
So, why is that individual able to catch the moment and not another? Because they have the receptor? How did they get the receptor - was it a random mutation or an hereditary bias towards reception?
Replies from: wedrifid↑ comment by wedrifid · 2009-12-18T14:09:34.781Z · LW(p) · GW(p)
So, why is that individual able to catch the moment and not another?
Pardon me. In the absence of knowing which brain state (or, possibly non existent phenomenon) the parapsychologists are describing with the 'catching a moment in time' I was going for a more literal interpretation. Referring to the human (particularly masculine) drive to capture as much space as possible for as much time as possible.
Replies from: erica↑ comment by erica · 2009-12-18T15:12:45.263Z · LW(p) · GW(p)
The parapsychologists aren't describing it, but musicians often talk as if their compositions are somehow external and they are able to tap into them.
The prodigy I was thinking of said, in response to 'Where do you get your ideas from?', 'It's like catching a split second in time and if I catch that, all the rest (i.e. the full composition) follows'.
I asked my son, who's reading maths, if there could be a formula to explain this description and he said, 'Mum, to be honest, I don't know what you're on about.'
But there was a very good Horizon programme not long after, I think presented by a mathematician, and he came to the conclusion that one day we will have mathematical formulae for consciousness.
Replies from: Morendil, whpearson↑ comment by whpearson · 2009-12-18T15:33:55.531Z · LW(p) · GW(p)
My brief attempt to outline one possible explanation for the phenomenon.
If you're not familiar with search algorithms/spaces, read the Wikipedia article on them.
Imagine trying to find a good piece of music. You could just create random notes and see how good they are, but that wouldn't be very interesting music. So instead you have to have an algorithm to generate interesting pieces of music. The algorithm might take a small piece of work and build upon it.
I didn't say where the algorithm came from; likely we build it in some way as we get experience. This building of skills is in turn another search problem, that of finding good specialist searches. So in answer to your question, the gifted people might be those that are lucky and find good search algorithms (at a variety of different levels).
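A toy sketch of the kind of search being described (everything here, the scoring function and the move set, is invented purely for illustration, not a claim about how composers actually work):

```python
# Toy hill-climbing search over melodies: start from a small seed phrase and
# keep whichever single-note tweak scores best. The "goodness" function is a
# stand-in; the point is only the build-on-what-you-have structure of the search.
import random

SCALE = [0, 2, 4, 5, 7, 9, 11, 12]   # one major scale, as pitch offsets

def goodness(melody):
    # Hypothetical score: reward small steps between notes, penalise immediate repeats.
    steps = [abs(b - a) for a, b in zip(melody, melody[1:])]
    repeats = sum(a == b for a, b in zip(melody, melody[1:]))
    return -sum(steps) - 3 * repeats

def improve(melody, iterations=200, seed=0):
    rng = random.Random(seed)
    best = list(melody)
    for _ in range(iterations):
        candidate = list(best)
        candidate[rng.randrange(len(candidate))] = rng.choice(SCALE)
        if goodness(candidate) > goodness(best):
            best = candidate
    return best

motif = [0, 4, 7, 12, 0, 0, 0, 0]    # a small starting phrase to build upon
print(improve(motif))
```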
Replies from: erica