Personal Evidence - Superstitions as Rational Beliefs
post by OrphanWilde · 2013-03-22T17:24:27.265Z · LW · GW · Legacy · 138 comments
I'll start with a confession:
The evidence I have personally seen suggests haunted houses are, in fact, real, without giving any particular credence to any particular explanation of what the haunting is. In particular, I own a house in which bizarre crap has happened since I first moved into it. Persistently. I've since moved into another house, and have been making repairs on the first in preparation to sell it; most recently, in a room with almost no furniture, in a space with absolutely none, I dropped a key. Four people searched the area for significant periods of time on three different occasions with no luck. I found it on the floor a week or two ago, on top of something that wasn't there when it fell. That was the straw that broke the camel's back for my skepticism.
Other bizarre things that have happened include my waking up to discover my recently-purchased bottle of key lime juice had been placed in the oven, and the oven turned on; the plastic bottle had just started to melt when I made the discovery. Another situation involved my sister, who one morning (while home alone) walked into the living room and discovered on a previously empty floor three sonograms of the previous occupant's baby. (There were -many- other things; I'm choosing for the purposes of this post the most unusual and least prone-to-outside-explanation occurrences. Night terrors, for example, are easily explained.)
Up until the last incident, the key, I was inclined to attribute the events to, say, sleepwalking and confirmation bias. At this point, I do not think the evidence really supports that conclusion anymore. My skepticism has been broken by personal experience; I'm not going to attribute anything to any -particular- explanation, but there is definitely something -not normal- about that house, whatever it may be; it has been the (nearly) sole repository of such experiences in my life. (The only other such experience was the day my grandfather (with whom I was extremely close) died, and given the mental turmoil I was experiencing, I'm disinclined to give that particular experience too much credit. For the curious, I was taking a shower, and the hot water repeatedly (3 times) turned off. As in, the knob was completely rotated to shut off the flow of hot water to the faucet.)
A key point of rationality is that evidence can in fact change your mind. Well, the evidence has changed my mind.
From a reader's perspective, this is all anecdotal evidence. So I don't expect to change anybody -else's- mind - indeed, you're probably making a mistake if you -do- change your mind, because out of millions of people, you -should- expect to see a few weird things being related by other people. The odds of somebody else relating an entirely factual series of anecdotes that suggest something unlikely are probably significantly higher than the odds of that unlikely thing being true. However, the odds of such things happening to you personally are considerably -lower- than the odds of hearing about the events from somebody else. Which all leads into a central conclusion: It's possible for the evidence to support one person believing something, while at the same time -not- supporting that anybody else believe that thing. If you win the lottery, that may be evidence for you believing you're living in a simulation or that some other mechanism "forced" the outcome - while at the same time the evidence doesn't suggest anything for somebody -else- winning the lottery.
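A minimal sketch of that asymmetry in Python, with made-up numbers (the population size and the per-person rate of "haunting-grade" coincidences are assumptions for illustration only):

```python
# Toy numbers only: how likely is a rare event for *you* versus for *somebody*?
N = 10_000_000   # hypothetical number of people whose stories could reach you
p = 1e-6         # hypothetical chance any one person has a "haunting-grade" streak

p_you = p
p_somebody = 1 - (1 - p) ** N   # at least one person out of N

print(f"P(it happens to you):      {p_you:.1e}")      # ~1e-06
print(f"P(it happens to somebody): {p_somebody:.5f}")  # ~0.99995
```

On these numbers, hearing one such story from a population that size is nearly guaranteed and carries almost no information, while living one yourself is a one-in-a-million event.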
I have a different purpose in mind: making the claim that objectively irrational beliefs can, in fact, be subjectively rational. Prior to these experiences, my view of the idea of a haunted house - I use the idea without prejudice as to what "haunted" is or refers to - was that it was just superstitious people scaring themselves. At this point I'm forced by the evidence I've seen to conclude that there's something to the idea, even if it's not what people think it is. Maybe EMFs subtly messing with my brain (there is some weak evidence for the idea that electromagnetic fluctuations can induce metabolic changes in neurons - see http://jama.jamanetwork.com/article.aspx?articleid=645813 ), maybe something else.
If a pattern-recognition algorithm doesn't produce false positives, it's probably producing false negatives, and given that we can test false positives but never even notice false negatives, pattern-recognition should favor false positives over false negatives. What does this have to do with anything? Well, it means superstitions aren't the product of a poor mind; only -untested- superstitions are. A good intelligence should develop superstitions. It should, when capable, discard them.
But it should discard a superstition only when it has the evidence to do so.
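As a toy illustration of that asymmetry (the costs and distributions here are invented for the example), a detector that can cheaply double-check its alarms but never finds out about its misses does best with a trigger-happy threshold:

```python
from statistics import NormalDist

noise = NormalDist(0, 1)     # background: no pattern present
signal = NormalDist(2, 1)    # a real pattern, when present
base_rate = 0.1              # assumed frequency of real patterns
cost_fp, cost_fn = 1, 50     # assumed: a miss is 50x worse than a checkable false alarm

def expected_cost(threshold):
    false_alarm = (1 - noise.cdf(threshold)) * (1 - base_rate) * cost_fp
    miss = signal.cdf(threshold) * base_rate * cost_fn
    return false_alarm + miss

for t in (0.0, 1.0, 2.0):
    print(f"threshold {t:.1f}: expected cost {expected_cost(t):.3f}")
# 0.0 -> ~0.56, 1.0 -> ~0.94, 2.0 -> ~2.52: the liberal threshold wins.
```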
Now, the skeptical reader might ask what odds I place on each of these events occurring. My answer is as follows: Each event was highly unlikely in itself, explainable as an independent event only by positing pretty unlikely circumstances (what odds would I place on me or my housemate sleepwalking multiple times when neither of us have any history of such behavior, and such behavior has entirely ceased since leaving that house? Keep in mind that neither I nor my sister were initially inclined to regard such events as even needing explanation; it's only with the most recent episode that I've decided the evidence suggests anything at all, so the possible explanation that the sleepwalking was a product of disturbance at the first few unusual events seems unlikely). Further evidence has rendered each event less likely as an independent phenomenon - since moving to a different house, the occurrences have ceased. When returning to the house, occurrences resume within its context. My control, while hardly blind, is controlling. But meaningfully, the same evidence doesn't mean the same thing if it is coming from somebody else; out of millions of people, I would expect such things to occur. I simply cannot expect them to occur -to me-. (And I wasn't the only one who found the house to be... off. There's a sense of not-quite-rightness to one basement room which I cannot explain without resorting to Lovecraftian cliches about alien geometries. The house was burgled several times; the only room that was left completely untouched, even when the copper piping was stolen (and subsequently the water meter - I got a waterfall in my basement!), was that room, which is conveniently where I left a thousand or so dollars' worth of building materials for a project I hadn't finished yet.)
Evidence is personal. The odds of something happening are not equal to the odds of that something happening to you. Therefore, while we should not be surprised if miracles (that is, really unlikely and contextually significant events) occur, it is still legitimate to be surprised when they occur to us individually. The qualitative rationality of an individual belief is not equal to the qualitative rationality of the same belief on a social scale; individuals get different evidence than society, even when the same evidence is apparently present both for the individual and the community.
And just as it is a mistake for people to judge the beliefs of others by the community standard of evidence rather than the individual standard, it is likewise a mistake for an individual to judge society by the individual standard of evidence rather than the community standard. Just as it is possible for the individual to rationally believe something that society should not rationally believe as a whole, it is possible for society to rationally reject something the individual has overwhelming personal evidence for.
Aumann was, in short, wrong, because Aumann Updating is based on the belief that two individuals -can- share evidence. Evidence is incompletely transferable.
(Note: Anthropic reasoning can potentially remedy this at least to some extent for -past- experiences; reproducible and continuing experiences somewhat less.)
138 comments
Comments sorted by top scores.
comment by jimrandomh · 2013-03-22T19:04:18.327Z · LW(p) · GW(p)
Forget the whole philosophy and meta side of this. If the events you describe really happened, then there is a sneaky, insane and probably dangerous human around. If this were a B-movie horror, you would search the house alone and unarmed, find a concealed door in the "alien geometry" basement room, open it and be ambushed. Tread carefully. If a key goes missing and then reappears, it has been copied. If a previous occupant's possessions turn up mysteriously, then the lurker has been around for a while. Hidden cameras are a useful tool. In any case, it is dangerous to tip him off that you know, unless you are ready to act (as in, bring in police, conduct a thorough search with special equipment, and change all locks) immediately.
↑ comment by Elithrion · 2013-03-22T19:14:00.470Z · LW(p) · GW(p)
Either that or maybe either OrphanWilde or his sister or someone else close to him really enjoys messing with everyone and making it seem that the house is haunted.
↑ comment by A4FB53AC · 2013-03-24T06:03:28.288Z · LW(p) · GW(p)
Especially to mess with one of those people intolerant of our beliefs in the supernatural, who always have to go on about how this or that can easily be dismissed if only you were rational. How ironic would it be, then, to get one to believe in a haunted house because it was the rational thing to do given the "evidence"?
↑ comment by ModusPonies · 2013-03-22T21:38:40.834Z · LW(p) · GW(p)
That sounds like a restatement of the parent comment, with different connotation.
↑ comment by Elithrion · 2013-03-23T00:41:29.489Z · LW(p) · GW(p)
I meant "someone close to him" in a relationship, not a spatial, sense (so, "other family member or friend he knows about"). Which I guess is still kind of just a different connotation, but I think one worth noticing separately from the "crazy lurker who's been around for a while" hypothesis.
↑ comment by ModusPonies · 2013-03-23T02:32:41.073Z · LW(p) · GW(p)
I see. Thanks for clarifying.
↑ comment by ModusPonies · 2013-03-23T02:32:14.231Z · LW(p) · GW(p)
Ah. Thanks for clarifying.
↑ comment by OrphanWilde · 2013-03-22T19:26:34.350Z · LW(p) · GW(p)
Heh. There -was- a semi-hidden door in the "alien geometries" room going outside, actually, although it had been boarded up from the other side, and I've since replaced it with well-reinforced masonry. And the original door was spray-painted in red with bizarre symbols. (I'm not making that up. Seriously, imagine this: First, there's a room that doesn't seem to join together properly. Now there's this apparent wooden wall, in a room mostly made of cinderblock walls, covered in weird symbols in red spray paint. The wall swings open if you can get a fingertip grip on the edges of the wood - into a featureless wooden wall behind it. -Nobody- liked going into that room.)
The only way a key retrieval would have been possible is if the sneaky person had reached out from a heating duct and grabbed it. Which is conceivably possible - the ductwork is loose and could be pulled down from the basement.
...considering my girlfriend refused to set foot in the house for more than a month after I showed her a haiku inspired by the noises inside the walls - which I'd imagined as rats with broken necks being thrown down from the attic by a creature living there (the house was -awesome- writing inspiration) - I don't think I'll relate that particular explanation.
↑ comment by Viliam_Bur · 2013-03-22T21:08:04.230Z · LW(p) · GW(p)
You could invite some rationalists there. Nice place for a meetup! :D
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-23T01:30:25.904Z · LW(p) · GW(p)
Eventually only the most rational visitor is left alive.
↑ comment by OrphanWilde · 2013-03-31T01:33:10.762Z · LW(p) · GW(p)
The most rational visitor would flee as soon as gunshots erupted from the house next door... (It's... not the nicest neighborhood.)
↑ comment by drethelin · 2013-03-22T21:30:34.607Z · LW(p) · GW(p)
yeah I want to hang out in/explore this house
↑ comment by atomliner · 2013-03-28T07:21:55.845Z · LW(p) · GW(p)
Honestly. Let's investigate! OrphanWilde, under which circumstances would it be possible for you to divulge the location of this anomaly?
↑ comment by OrphanWilde · 2013-03-31T01:31:53.596Z · LW(p) · GW(p)
It's in Lansing, Michigan. Which I suspect would be a -long- drive for most rationalists here.
The frequency of odd events was about once every month or so (including the rather-less-odd events like night terrors or the dream in which I was shot to death), and considerably less frequent (as in, once in two or so years) since I moved into another house and started visiting the house only in preparation to sell it.
↑ comment by Manfred · 2013-03-23T00:09:08.249Z · LW(p) · GW(p)
The only way a key retrieval would have been possible is if the sneaky person had reached out from a heating duct and grabbed it.
Nope. I'm just going to defy you here, since this sort of restatement is occasionally bad. Four different people searched for this key (that you know of >:D ), and I doubt you said "don't bother, I searched well enough to exclude me having overlooked it" to every one of them.
comment by James_Miller · 2013-03-22T19:21:09.465Z · LW(p) · GW(p)
You should assign a much higher probability to your being a victim of gaslighting than to your having experienced supernatural events.
↑ comment by IlyaShpitser · 2013-03-23T16:27:29.994Z · LW(p) · GW(p)
There is one other obvious explanation besides gaslighting: schizophrenia.
↑ comment by coffeespoons · 2013-03-23T20:12:52.569Z · LW(p) · GW(p)
↑ comment by elharo · 2013-03-29T14:43:46.041Z · LW(p) · GW(p)
The most obvious suggestion, which is called out in the article, is that the author is simply lying; that none of this happened as described. Personally I find that explanation far more likely than gaslighting, schizophrenia, sleepwalking, or haunting. Absent independent and trustworthy verification of the events, that is my default assumption. It's not unfalsifiable, but it is where I start with stories like this.
Isaac Asimov wrote a very amusing Black Widower tale along these lines--The Obvious Factor--that significantly improved my own rationality. Briefly, that story taught me and to this day reminds me that you cannot simply accept first person reports as evidence of implausible phenomena. It may be impolite to state this so boldly, but surprisingly often it is useful to at least say to yourself silently, "I do not believe this person is telling the truth." Even more so on the Internet.
While most people are truthful--i.e. describe reality as they remember and recall it--amongst that fraction of people who tell wild and exotic stories, the percentage of outright fabulators is much higher. Given the prior of a ridiculous tale, I adjust my probability that the teller is a liar way upwards; and I'll continue operating under that assumption until I see independent evidence of the claims.
I'll put forth a hypothesis here: this story is simply a thought experiment to see if it's possible to envision a scenario in which "objectively irrational beliefs can, in fact, be subjectively rational."
↑ comment by OrphanWilde · 2013-03-22T19:39:26.082Z · LW(p) · GW(p)
I don't know the incidence of gaslighting. I'd be more inclined to blame something like EMFs screwing with me, given that the house did have bizarre electrical issues - lights that are turned off flicker on irregularly, and appliances with safety features have to be reset regularly owing to line voltage fluctuations.
↑ comment by James_Miller · 2013-03-22T20:07:00.084Z · LW(p) · GW(p)
Since everything you describe is highly plausible given gaslighting, you only have a right to assign a high probability to something you should have assigned a very low prior to (such as supernatural events) if you can almost completely rule out gaslighting.
↑ comment by gothgirl420666 · 2013-03-25T00:18:20.846Z · LW(p) · GW(p)
How likely an explanation is that, really? How often does this happen? I feel like giving this phenomenon a name and a wiki page has artificially increased your probability estimate of it. If you had just said "this could easily be explained by someone with a powerful grudge against you sneaking into your house and waging a several-month-long campaign of psychological warfare on you", that would sound absurd.
↑ comment by OrphanWilde · 2013-03-22T20:13:23.842Z · LW(p) · GW(p)
I don't blame supernatural events. I believe the house is "haunted," but purely as a description of the events. (Smallpox is still smallpox even after we discover that it is caused by microorganisms rather than, say, angry spirits, or an imbalance in our body fluids.)
↑ comment by Qiaochu_Yuan · 2013-03-22T22:01:07.594Z · LW(p) · GW(p)
I think this was misleading.
↑ comment by Kawoomba · 2013-03-22T21:52:32.711Z · LW(p) · GW(p)
Ok, you win. Now I'm confused. I don't interpret "haunted" in such a ... mundane way, and rereading your post with your connotations in mind, little of controversy remains.
↑ comment by OrphanWilde · 2013-03-28T16:38:46.403Z · LW(p) · GW(p)
It will probably help if you frame the post with my previous skepticism that even mundane hauntings existed. (With some extremely exceptional cases, like the guy living in the crawl spaces of somebody's home.)
↑ comment by Intrism · 2013-03-24T16:09:19.692Z · LW(p) · GW(p)
If lights that are turned off are flickering, I recommend getting an electrician in to look at them. That's clearly not supposed to happen (should be impossible, actually), and might be an indication of a potential electrical fire hazard. Just curious, does this house often blow breakers?
↑ comment by OrphanWilde · 2013-03-28T16:20:06.416Z · LW(p) · GW(p)
It uses fuses, and not unusually often, no, just the usual "Too many things plugged in to an outdated electrical system" situations. If they blew more frequently I'd have yanked the fuse box and replaced it with breakers, as fuses are expensive.
I've fiddled with the electrical system, replacing light switches and a few fixtures - it predates aluminum wires quite considerably, the biggest problem being that many of the fixtures predate grounding wires (which started becoming standardized around the same time as aluminum wires, actually), and there don't seem to be issues with the lines themselves, nor with the fixtures (even new fixtures exhibit this behavior, at least until they stop working). My father, an electrical technician, also took a look, and threw up his hands.
Electromagnetic flux is the only explanation I can come up with for the behavior. (The lights don't flicker to full brightness - you can only tell that it's happening if the room is very dark, and it certainly isn't bright enough to see by. CFL bulbs actually exhibit the behavior significantly worse than incandescent bulbs, another suggestion that it's EMF.)
↑ comment by A1987dM (army1987) · 2013-03-23T01:15:00.416Z · LW(p) · GW(p)
I'd be more inclined to blame something like EMFs screwing with me,
It sounds unlikely to me that EMFs would screw with you that badly. (But I'm not an expert about these things.)
↑ comment by ikrase · 2013-03-22T19:38:42.825Z · LW(p) · GW(p)
Or, alternatively, other forms of deliberate or non-deliberate monkey business.
Or, alternatively, insanity caused by some specific and temporary cause (Beware of interference between this and gaslight defense).
↑ comment by fubarobfusco · 2013-03-23T17:45:16.112Z · LW(p) · GW(p)
Or, people forgetting that they moved stuff around.
↑ comment by Will_Newsome · 2013-03-22T22:45:25.625Z · LW(p) · GW(p)
Putting it that way artificially excludes supernatural gaslighting.
↑ comment by fubarobfusco · 2013-03-23T17:42:37.967Z · LW(p) · GW(p)
Constructing intelligible hypotheses at all excludes self-contradictory and meaningless or nonsense causes. For instance, that the weird behavior could be caused by blue lizards who are not blue, or that it could be caused by f̢ish wish wa҉l̵l͜op ̡ba̷z̶i̸n̷g͜a déa̶t̢h spįral ̛cr͠o͜t͠hf͢allá ́o͢o͠kmi̶sch͝ ͜g̴a̵x̛̀a̸̢̧x̢͏̸a͘͝l̴͟a͢x̴i̡a̵͜x ̸̛f͠r̴op͝͡.̶͏͏
↑ comment by A1987dM (army1987) · 2013-03-23T18:01:04.502Z · LW(p) · GW(p)
I think it's obvious that those phenomena are due to colourless green ideas sleeping furiously.
Jokes aside, “supernatural gaslighting” may be extremely unlikely, but I don't see how it (a supernatural (in WN's sense) entity trying to trick OrphanWilde and others into believing he's insane) is self-contradictory or meaningless.
↑ comment by Will_Newsome · 2013-03-22T22:00:12.886Z · LW(p) · GW(p)
Whence your prior?
↑ comment by James_Miller · 2013-03-23T00:34:07.391Z · LW(p) · GW(p)
If either a ghost or a sociopath could have caused an event, I would give vastly more weight to the likelihood of its having been done by the sociopath.
comment by moridinamael · 2013-03-22T23:29:53.049Z · LW(p) · GW(p)
It sounds like an old house, based on the structural abnormalities. Is it possible that there is a gas leak? One prime candidate for the "haunted house" phenomenon is hallucinations brought on by exposure to carbon monoxide.
I think it worth pointing out that it's very difficult to remove selective perception and confirmation bias here. For example, if I experienced the "dropped key" scenario you described, I would simply assume that the key had slid or been kicked out of the room, and then someone else had later found it and set it down somewhere off the ground. However, if I believed that I lived in a Haunted House, the first hypothesis that would spring to mind would be that the key's teleportation was part of the haunting. So when you say such incidents have not occurred since you lived in that house, I believe that you think that, but it's likely that you just don't view incidents outside the house as existing in the same magisterium, in some sense.
Additionally, I happened to be pondering today why people seem to take Aumann's Agreement Theorem so literally around here. It is a mathematical idealization. We're not talking about the mathematics of particle physics, we're talking about the cognition and interactions of humans, who only barely do reasoning in the first place and barely succeed in communicating even the simplest concepts without signal loss.
↑ comment by CronoDAS · 2013-03-24T03:07:10.358Z · LW(p) · GW(p)
It sounds like an old house, based on the structural abnormalities. Is it possible that there is a gas leak? One prime candidate for the "haunted house" phenomenon is hallucinations brought on by exposure to carbon monoxide.
I'd also be wondering about toxic mold.
↑ comment by NancyLebovitz · 2013-03-24T02:30:06.732Z · LW(p) · GW(p)
It's plausible that the house should be checked for carbon monoxide, regardless of issues about ghosts.
A fast google turns up CO as a possible cause of hauntedness, but I didn't see anything specific about what sort of hallucinations CO is known to cause.
comment by Will_Newsome · 2013-03-22T22:13:15.692Z · LW(p) · GW(p)
(Commenters: talking about the 'supernatural' in terms of metaphysics is metaphysically interesting but phenomenologically speaking it just clouds the issue unnecessarily. The way most people actually use the concept is just 'weird things happening that would require human or transhuman agency, in situations where there's no good reason to suspect human agency'. Talking about reductionism &c. is missing the point---it doesn't matter whether the agency comes from an engineered superintelligence or an "ontologically fundamental" god, what matters is there's non-human agency around. Note that all reports of supernatural phenomena can be explained "naturally" by superintelligences, simulators, highly advanced aliens, &c., all of which seem not-unlikely in a big universe. The improbability stems from the necessity of their having seemingly bizarre motivations; the mechanisms themselves, however, aren't fantastically improbable.)
↑ comment by Kawoomba · 2013-03-23T13:29:15.845Z · LW(p) · GW(p)
Hew-mons have a long-standing tendency to see agency where it does not belong, from hearth gods, to lightning bolts thrown by Zeus, to suspecting a saber tooth behind every rustling tree. That is an a priori reason not to assume agency prematurely.
There are conditions (arbitrarily chosen example: prodromal schizophrenia ...) that amplify that bias; from hearing voices to seeing hidden messages. While a big (infinite) universe may contain all sorts of curious phenomena, from cosmic planetaria to Hubble volumes in which a deity reigns, the question becomes not "who can list the most agenty hypotheses", but instead "why privilege agenty hypotheses, why be invested in them in the first place", a reason for which has historically been the human psyche.
↑ comment by yli · 2013-11-06T06:56:33.950Z · LW(p) · GW(p)
I agreed with this at first, but actually, no. Belief in the supernatural doesn't require belief in gods, spirits or any non-human agents. You could just believe that humans have some supernatural abilities like reading each other's minds. When trying to explain these abilities, only reductionists will conclude that there's some third party agent like a simulator setting things up. Non-reductionists will just accept that being able to read minds is part of how this ontologically fundamental mind stuff works.
comment by Manfred · 2013-03-22T19:16:53.599Z · LW(p) · GW(p)
If this were a mystery story, the prime suspects would be the people who helped you search for the key :)
If you have a spare computer and some spare time, you could surreptitiously set up a few sneaky cameras. Just don't tell those people. If you think this is supernatural with, say, p>10%, this sounds like a fun thing to investigate. Well, fun now that you're out of a house that was for one reason or another correlated with you having night terrors.
comment by Qiaochu_Yuan · 2013-03-22T22:44:16.063Z · LW(p) · GW(p)
I used to experience periodic attacks of sleep paralysis. The first few times it happened to me I had no idea what was happening; I had vivid auditory hallucinations as well as the strong sensation that someone else was in my bedroom. Occasionally I would feel an enormous pressure on my chest.
I later learned that sleep paralysis may have inspired a lot of folklore, e.g. incubi. If a thousand years ago I reported these symptoms and someone told me "a demon is sitting on you," I would have been hard-pressed to come up with a better hypothesis.
comment by [deleted] · 2013-03-23T02:37:15.996Z · LW(p) · GW(p)
I second Manfred. Set up cameras. Don't worship your (partial) ignorance, fix it!
↑ comment by Viliam_Bur · 2013-03-23T11:10:14.196Z · LW(p) · GW(p)
The cameras should see each other (so a person cannot manipulate one camera from its blind spot without being seen by another camera), and should upload their data to the cloud in real time.
comment by Xachariah · 2013-03-23T21:26:53.050Z · LW(p) · GW(p)
Woah woah woah, back up here. Why would you want to move out of this goldmine? Let's assume that you're correct and it's a real haunted house and not schizophrenia/people-in-the-walls/mental-abuse-by-your-sister.
You were in a house where objects were repeatedly, purposefully moved without human intervention. Just take a look at the incident where key lime juice got into the oven: either the haunted house teleported it from your fridge to your oven, or the haunted house generated action-without-reaction and floated it from the fridge to the oven.
- In case A, you've just broken the Theory of Relativity and can help humans colonize the stars with instant teleportation. Please collect your 100 Nobel Prizes, 100 billion dollars, and the knowledge that you've protected all of humanity from any possible existential risk.
- In case B, you've just broken the Laws of Thermodynamics and can help humanity defeat entropy itself. Please collect your 200 Nobel Prizes, 500 billion dollars, and the knowledge that you've literally saved all of existence for all time.
You've gotten the most valuable thing that has ever existed on this planet living in your house, and you want to give up ownership.
Either that or your sister is crazy and hates key lime juice. 50/50 equally likely.
↑ comment by Elithrion · 2013-03-23T22:49:17.575Z · LW(p) · GW(p)
What if the house merely floated the thing over there with reaction (pushing back on the floors/walls), and its floor rotted slightly (accumulating entropy, losing chemical energy) in proportion to the necessary force? In that case, he's only discovered ghostly energy transfer at small distances, which may be completely impractical (only one or two Nobels).
↑ comment by A1987dM (army1987) · 2013-03-23T23:18:43.347Z · LW(p) · GW(p)
He explicitly made clear that he's using a broader definition of “haunted” than usual, so I guess someone other than himself or his sister messing with him would count.
comment by Vaniver · 2013-03-23T17:48:41.183Z · LW(p) · GW(p)
The evidence I have personally seen suggests haunted houses are, in fact, real, without giving any particular credence to any particular explanation of what the haunting is.
So, it seems like there's a strong and weak interpretation of this sentence.
The weak interpretation (that I endorse) is "reported physical evidence is often physical" - crop circles aren't photoshopped, they're actually there in the fields. Throwing out the evidence with the interpretation is rarely wise. Talking with someone who believes in alien abduction reports, I get the sense that there are a number of them where it's reasonable to believe that weird things were really noticed (like someone's lawn having an abnormal amount of radiation, and so on). Under this interpretation, you probably ought to move out of the house / set up hidden cameras / etc.; if any of these are the result of hallucinations or the actions of others or the unremembered actions of yourself, then setting up systems to counteract that is a good idea.
In this interpretation, a "superstition" is more of an "unarticulated causal model." It isn't that "ghosts live in the house, and move objects around," it's "something unknown about this house is bad, and I should avoid it." (That particular model doesn't even articulate why it's bad- it accepts both teleporting keys and your memory being faulty.)
The strong interpretation (that I don't endorse) is "physical evidence is anywhere near close enough to overcome a sensible prior against the supernatural." If there really are crop circles in your fields, it's still many, many times more likely that you're doing it while sleepwalking, or other humans are doing it to you, or some other mundane explanation, than that aliens are doing it to you. I will note that it is often useful to take the outside view on your own evidence, especially if you have or are developing a mental disorder. "If someone told me that there were invisible ants crawling all over them, how likely is it that they're hallucinating? Very likely, and from that I should expect that the invisible ants crawling all over me aren't real." I wrote a post about this a while back that may be interesting.
↑ comment by A1987dM (army1987) · 2013-03-24T10:12:19.492Z · LW(p) · GW(p)
The evidence I have personally seen suggests haunted houses are, in fact, real, without giving any particular credence to any particular explanation of what the haunting is.
So, it seems like there's a strong and weak interpretation of this sentence.
The part after the last comma, and other things he says in the post, make it overwhelmingly clear to me that the weak interpretation is what is meant.
↑ comment by Vaniver · 2013-03-24T16:53:21.624Z · LW(p) · GW(p)
In general, it seems valuable to me to attempt to rewrite things more precisely, both as an aid to my understanding and for the purposes of communication.
The last clause was ambiguous to me, because it could suggest giving a comparable amount of credence to supernatural explanations. It seems reasonable to me to give particular anti-credence to supernatural explanations, because they seem strictly dominated by explanations that involve mental illness. Further, the parts about incommunicable evidence strike me as pushing towards the strong interpretation.
comment by [deleted] · 2013-03-22T18:05:20.526Z · LW(p) · GW(p)
Aumann was, in short, wrong, because Aumann Updating is based on the belief that two individuals -can- share evidence. Evidence is incompletely transferable.
I don't care to respond to the rest of your post, but I feel I should point out that saying a theorem is wrong because the hypotheses are not true is bad logic.
↑ comment by gwern · 2013-03-22T18:10:12.489Z · LW(p) · GW(p)
I'm interested in whether the axioms or theorem are even wrong in this case.
Why isn't this covered under the general observation "your observations [of haunting] are very little information and move an outsider's beliefs by [very small amount], and if your own beliefs don't converge, you're just demonstrating your irrationality by overweighting your experience and ignoring how many thousands of people throughout history have felt equally freaked out by 'haunted houses' only for detailed investigation to find nothing."?
↑ comment by Will_Newsome · 2013-03-22T23:26:27.743Z · LW(p) · GW(p)
I'm curious now whether and how the agreement theorem holds in cases where the environment includes agents that are selectively presenting different evidence to different rational observers. You'd think that'd ruin the result along the same lines as the no free lunch theorems.
↑ comment by gwern · 2013-03-23T02:29:07.189Z · LW(p) · GW(p)
If they're presenting false evidence and are otherwise indistinguishable from truth-tellers, then I would guess that agreement would fall a lot or cease to happen; if they're the equivalent of random noise, then I'm not sure what would happen, but probably bad stuff if we go by Hanson's paper on communicating rare evidence; and if they're merely being selective about evidence, you can still infer stuff from their reports (the Bullock thesis in my backfire effect page would be relevant here).
↑ comment by Will_Newsome · 2013-03-29T02:12:33.976Z · LW(p) · GW(p)
(This is obvious, but it took me a bit to explicitly notice: deceptive agents in the environment is exactly the same formally speaking as irrational agents in the notionally Bayesian community, so of course the agreement theorem doesn't apply.)
↑ comment by OrphanWilde · 2013-03-22T18:22:05.651Z · LW(p) · GW(p)
Imagine, for a moment, a society of N people, N being currently undefined but large. Every year 1 person out of this population is randomly selected for an award.
For what value of N does it become more likely that the process is nonrandom, provided you are chosen?
You're acting as though evidence should be evaluated strictly objectively. If this is truly the case, you shouldn't update your beliefs upon winning the award -for any value of N-, because no matter what happens to you personally, it had to happen to -somebody-. However, for a sufficiently large value of N, there comes a point at which a person who wins the award is more likely being -simulated- than actually winning it. You expect -somebody- to win the award, therefore if somebody wins the award nothing unusual has happened. However, you cannot expect -yourself- to win the award, and if you do, you should update your priors to reflect this fact.
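To put rough numbers on it (the prior below is an arbitrary stand-in, not a considered estimate): a fixed prior on "the process is rigged in my favor" gets multiplied by a Bayes factor of N once -I- win, while an outside observer who merely learns that -somebody- won gets a Bayes factor of 1.

```python
prior_rigged = 1e-9   # arbitrary prior on "the award is rigged for me"

def p_rigged_given_i_won(N):
    # P(I win | rigged for me) = 1; P(I win | fair draw) = 1/N
    odds = prior_rigged / (1 - prior_rigged) * N
    return odds / (1 + odds)

for N in (10**3, 10**6, 10**9, 10**12):
    print(f"N = {N:>14,}: P(rigged | I won) = {p_rigged_given_i_won(N):.6f}")
# Crosses 50% around N = 1e9 on this prior; "somebody won" never moves it.
```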
↑ comment by gwern · 2013-03-22T19:26:59.506Z · LW(p) · GW(p)
However, for a sufficiently large value of N, there comes a point at which a person who wins the award is more likely being -simulated- than actually winning it. You expect -somebody- to win the award, therefore if somebody wins the award nothing unusual has happened. However, you cannot expect -yourself- to win the award, and if you do, you should update your priors to reflect this fact.
Huh? What does simulation have to do with this?
I'll use your example against you: Mary Panick recently won $250k from a lottery. Should she increase her belief that someone in the lottery commission crooked the lottery to favor her? How much, exactly?
↑ comment by OrphanWilde · 2013-03-22T19:33:51.177Z · LW(p) · GW(p)
Objectively, no; as previously mentioned, it shouldn't surprise us that somebody won the lottery. Subjectively, yes; I would certainly update my odds that something other than pure chance is at work if I happened to win the lottery.
And simulation is coming from Robin Hanson's assertion that if you're an important person in the world, you should probably update your priors to suggest you are being simulated; it's a related argument. If the world is ever capable of simulating individual people, any given important person is more likely a simulation than the real thing - so, given that I'm not particularly important, I can probably assume I'm not simulated, unless something exceptionally unlikely happens to me. But if I were, say, Obama, maybe I -should- think I'm living in a simulation. From the outside, there's a president of the United States, so it's not particularly unusual that -somebody- is the president of the United States. From the inside, it would be unusual that -I- am the president of the United States. Same thing.
↑ comment by gwern · 2013-03-22T19:50:37.569Z · LW(p) · GW(p)
Objectively, no; as previously mentioned, it shouldn't surprise us that somebody won the lottery. Subjectively, yes; I would certainly update my odds that something other than pure chance is at work if I happened to win the lottery.
Again, why? Suppose we are comparing two models: in one world, there are 1000 haunted houses which are all explained by gaslamping and sleepwalking etc; in the second world, there are 1000 haunted houses and they are all supernatural etc. Upon encountering a haunted house, would you update in favor of 'I am in world two and houses are supernatural'? Would someone reading your experience update? I propose that neither would update, because the evidence is equally consistent with both worlds; so far so good.
Now, if in world 1 there are 1000 frightening houses with the mundane explanations mentioned, and in world 2 there are 1000 frightening houses with the mundane explanations (human biology and mentality and the laws of probability etc having not changed) plus 1000 frightening houses due to supernatural influences, upon encountering a frightening house would you update?
Of course; in world 2 there are more frightening houses and you have encountered a frightening house, which is twice as likely in world 2 as in world 1 (2000 houses versus 1000 houses), and so you are now more inclined than before to think you are in world 2. But so would an observer reading your experience!
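The arithmetic of that update, for concreteness (starting from even odds between the two worlds, purely for illustration):

```python
prior_odds_w2 = 1.0             # assume indifference between world 1 and world 2
likelihood_ratio = 2000 / 1000  # P(frightening house | w2) / P(frightening house | w1)

posterior_odds = prior_odds_w2 * likelihood_ratio
posterior_w2 = posterior_odds / (1 + posterior_odds)
print(posterior_w2)  # 0.667 - and a reader of the report makes the same update
```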
So where does this unique unconveyable evidence (that your post claims your experience has given you) come from?
And simulation is coming from Robin Hanson's assertion that if you're an important person in the world, you should probably update your priors to suggest you are being simulated; it's a related argument.
Ah. It's coming from anthropics. You're making the claim that Aumannian agreement cannot convey anthropic information.
You realize that both the SIA and SSA are hotly debated because either seems to lead to absurd conclusions, right? While Aumann just leads to the conclusion 'people are irrational', which certainly doesn't seem absurd to me.
And since one man's modus ponens is another man's modus tollens, why isn't your post just further evidence that anthropic reasoning as currently understood by most people is completely broken and cannot be trusted in anything?
↑ comment by Elithrion · 2013-03-23T02:18:29.512Z · LW(p) · GW(p)
I think there are non-anthropic problems with even rational!humans communicating evidence.
One is that it's difficult to communicate that you're not lying, and it is also difficult to communicate that you're competent at assessing evidence. A rational agent may have priors saying that OrphanWilde is an average LW member, including the associated wide distribution in propensity to lie and competence at judging evidence. On the other hand, rational!OrphanWilde would (hopefully) have a high confidence assessment of himself (herself?) along both dimensions. However, this assessment is difficult to communicate, since there are strong incentives to lie about these assessments (and also a lot of potential for someone to turn out to not be entirely rational and just get these assessments wrong). So, the rational agent may read this post and update to believing it's much more likely that OrphanWilde either lies to people for fun (just look at all those improbable details!) or is incompetent at assessing evidence and falls prey to apophenia a lot.
This might not be an issue were it not for the second problem, which is that communication is costly. If communication were free, OrphanWilde could just tell us every single little detail about his life (including in this house and in other houses), and we could then ignore the problem of him potentially being a poor judge of evidence. Alternatively, he could probably perform some very large volume evidence-assessment test to prove that he is, in fact, competent. However, since communication is costly, this seems to be impractical in reality. (The lying issue is slightly different, but could perhaps be overcome with some sort of strong precommitment or an assumption constraining possible motivations combined with a lot of evidence.)
This doesn't invalidate Aumann agreement as such, but certainly seems to limit its practical applications even for rational agents.
↑ comment by OrphanWilde · 2013-03-22T20:11:24.328Z · LW(p) · GW(p)
I don't rule out mundane explanations. Hence my repeated disclaimers on each use of the word "haunted." If anything "supernatural" exists, it isn't supernatural, it's merely natural, and we simply haven't pinned down what's going on yet. Empiricism and reductionism don't get broken.
And anthropic reasoning is -unnecessary- to the logic, it simply provides the simplest examples. I could construct related examples without any anthropic reasoning at all:
You're shipwrecked on a deserted island with a friend, Johnny. You see a ship in the distance; Johnny's eyesight is not as good as yours, and he cannot. You've been trying to cheer him up for the past three days, because he's fallen into depression. He doesn't believe you when you tell him there's a ship; he cannot see it, and he believes it's just another attempt by you to cheer him up. -You cannot share the true evidence, because you cannot show him the ship he cannot see-.
Or, to put it in terms of framing:
You flip a coin ten times. It comes up heads each time.

versus

You say a coin will land heads-up ten times. You flip it ten times, and it comes up heads each time.
Even though the two outcomes are, strictly speaking, equally likely, the framing of the first proposition in fact makes it less significant; you would have been similarly impressed if the coin had come up tails each time. So the second scenario is twice as significant as the first scenario.
The fact that it is happening to me, rather than another person, is a kind of contextual framing, in much the same sense that calling heads first frames the coin-flipping event.
↑ comment by gwern · 2013-03-22T20:34:32.996Z · LW(p) · GW(p)
I don't rule out mundane explanations. Hence my repeated disclaimers on each use of the word "haunted." If anything "supernatural" exists, it isn't supernatural, it's merely natural, and we simply haven't pinned down what's going on yet. Empiricism and reductionism don't get broken.
Fine, replace 'mundane causes' with 'mundane causes minus cause X' and 'supernatural' with 'cause X' in my examples. -_-
I could construct related examples without any anthropic reasoning at all:
And they fail. In the desert island case, Aumann is perfectly applicable: if you have more evidence than he does, then this will be incorporated appropriately; in fact, the desert island case is a great example of Aumann in practice: that you've been lying to him merely shows that 'disagreements are not honest' (you are the dishonest party here).
The fact that it is happening to me, rather than another person, is a kind of contextual framing, in much the same sense that calling heads first frames the coin-flipping event.
How so? You didn't predict it would be a haunted house before you went, to point out the most obvious disanalogy.
↑ comment by OrphanWilde · 2013-03-22T21:02:28.649Z · LW(p) · GW(p)
I feel like your disagreement is getting a little slippery here.
My rejection of Aumann is that there is no common knowledge of our posteriors. It's not necessary for me to have lied to him before, after all; I could have been trying to cheer him up entirely honestly.
If I -had- predicted it would be a haunted house, I'd be suspicious of any evidence that suggested it was. The point isn't the prediction - prediction is just one mechanism of framing an outcome. The point is in the priors; my prior of -somebody- experiencing a series of weird events in a given house is pretty high, there's a lot of people out there to experience such weird events, and some of them will experience several. My prior odds of -me- experiencing a series of weird events in a given house should be pretty low. It's thus much more significant for -me- to experience a series of weird events in a given house than for some stranger who I wouldn't have known about except for their reporting such. If I'm not updating my priors after being surprised, what am I doing?
↑ comment by gwern · 2013-03-22T21:29:30.125Z · LW(p) · GW(p)
It's not necessary for me to have lied to him before, after all; I could have been trying to cheer him up entirely honestly.
Then why does he distrust you? If you have never lied and will never lie in trying to cheer him up, then he is wrong to distrust you and this is simply an example of irrationality and not uncommunicable knowledge; if he is right to suspect that you or people like you would lie in such circumstances, then 'disagreements are not honest' and this is again not uncommunicable knowledge.
The point is in the priors; my prior of -somebody- experiencing a series of weird events in a given house is pretty high, there's a lot of people out there to experience such weird events, and some of them will experience several. My prior odds of -me- experiencing a series of weird events in a given house should be pretty low. It's thus much more significant for -me- to experience a series of weird events in a given house than for some stranger who I wouldn't have known about except for their reporting such. If I'm not updating my priors after being surprised, what am I doing?
And you talk about me being slippery. We're right back to where we began:
- to the extent you have any knowledge in this case of observing a rare but known event, the knowledge is communicable to an outsider who can also update; it is no more 'significant' to you than to a stranger, and any significance may just reflect known cognitive biases like anecdotes, salience, base-rate neglect, etc. (and fall under the irrationality rubric)
- to the extent that this knowledge is anthropic, one can argue that this is uncommunicable but anthropic arguments are so divergent and unreliable that it's not clear you have learned uncommunicable knowledge rather than found yet another case in which anthropic arguments are unreliable and give absurd conclusions.
You have not shown any examples which simultaneously involve uncommunicable knowledge which does not involve anthropics (what you are claiming is possible) and rationality and honesty on the part of all participants.
↑ comment by OrphanWilde · 2013-03-28T17:22:08.384Z · LW(p) · GW(p)
You're presupposing that trustworthiness is communicable, or that rationality demands trusting somebody absent evidence to do otherwise. There's definitely incommunicable knowledge - what red looks like to me, for example. You're stretching "rationality" to places it has no business being to defend a proposition, or else adding conditions (honesty) to make the proposition work.
What, exactly, are you calling anthropics? What I'm describing doesn't depend on either SIA or SSA. If you're saying that I'm depending upon an argument which treats any given observer as a special case - yes, yes I am; that is in fact the thrust of my argument. Your argument against anthropics was that it leads to "absurd" results. However, your judgment of "absurd" seems tautological; you seem to be treating two observers' being unable to arrive at the same posterior odds as absurd in itself. That's not an argument. That's begging the question.
So - where exactly am I absurd in the following statements:
1.) My prior odds of some stranger experiencing highly unlikely circumstances (and me hearing about them - assuming such circumstances are of general interest) should be relatively high
2.) My prior odds of me experiencing highly unlikely circumstances should be low
3.) Per Bayesian inference, given that the likelihood of the two distinct events is different, the posterior distribution is different
4.) Therefore, there is a different amount of information in something happening to me personally than in something happening to a stranger
Or, hopefully without error, as it's been a while since I've mucked about with this stuff:
M is the event happening to me, O is the event happening to somebody else, X is some idea for which M and O are evidence, and Z is the population size (assuming random distribution of events):
p(X|M) = p(M|X) * p(X) / p(M)
Assuming X guarantees M and O, we get:
p(X|M) = p(X) / p(M)
p(X|O) = p(X) / p(O)
where p(M) = p(O) / Z
Which means
p(X|M) = p(X|O) * Z
Which is to say, M is stronger evidence than O for X:
p(X|M, M1) = p(M|X) * p(X|M1) / p(M|M1)
p(X|O, O1) = p(O|X) * p(X|O1) / p(O|O1)
Using the above assumption that X guarantees M and O:
p(X|M, M1) = p(X|M1) / p(M|M1)
p(X|O, O1) = p(X|O1) / p(O|O1)
Substituting, again where p(O1) = p(M1) * Z, and where p(M1|M) and p(O1|O) are both 1:
p(X|M, M1) = p(X|M1) / p(M|M1)
p(X|O, O1) = (p(X) / (Z * p(M1))) / ((Z * p(M)) / (Z * p(M1)))
= p(X) / (Z * p(M))
Or, in short - the posteriors are different. The information is different. There is a piece of incommunicable evidence when something happens to me as opposed to somebody else.
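A quick numeric sanity check of the direction of this claim (a toy generative model with invented numbers, not the exact algebra above, whose assumptions gwern questions below):

```python
p_X = 1e-6      # assumed prior on hypothesis X
q = 0.01        # assumed chance, given not-X, that the event strikes somebody
Z = 1_000_000   # assumed population size; given not-X it hits *me* with chance q/Z

# Take X to guarantee the event (P(event | X) = 1) wherever we look.
p_M = p_X + (1 - p_X) * q / Z   # P(it happens to me)
p_O = p_X + (1 - p_X) * q       # P(it happens to somebody)

print(f"p(X | happened to me):       {p_X / p_M:.4f}")   # ~0.99
print(f"p(X | happened to somebody): {p_X / p_O:.2e}")   # ~1e-04
```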
↑ comment by gwern · 2013-03-28T17:40:51.316Z · LW(p) · GW(p)
You're presupposing that trustworthiness is communicable, or that rationality demands trusting somebody absent evidence to do otherwise. There's definitely incommunicable knowledge - what red looks like to me, for example. You're stretching "rationality" to places it has no business being to defend a proposition, or else adding conditions (honesty) to make the proposition work.
The conditions are right there in the Aumann proofs, are they not? I'm not adding anything, I'm dividing up the possible outcomes: anthropics (questionable), communicable knowledge (contra you), or Aumann is inapplicable (honesty etc).
What I'm describing doesn't depend on either SIA or SSA.
I'd be interested to see if you could prove that the result holds independently of them.
That's not an argument. That's begging the question.
That's the point of the modus tollens vs modus ponens saying. You claim to derive a result, but using premises more questionable than the conclusion, in which case you may have merely disproven the premises via reductio ad absurdum. If this is begging the question (which it isn't, since that's when your premise contains the conclusion), then every proof by contradiction or reductio ad absurdum is question-begging.
Or, in short - the posteriors are different. The information is different. There is a piece of incommunicable evidence when something happens to me as opposed to somebody else.
Correct me if I am wrong, but in your example, M is not increased when O fails to happen - more concretely, you assume the number of spooked people you will hear of is constant - when it would be more appropriate to increase the number of observations of O by 1, since if you don't go into the spooky house someone else does. Then you are merely deriving the uninteresting observation that if there are more events (by 1) consistent with X, X will be more likely. Well, yeah. But there is nothing special about one's own observations in this case; if someone else went into the house and reported observations, you would update a little more, just like if you went into the house, and in both cases, more than if no one went into the house (or they went in and saw nothing).
Also, your equations are messed up. I think you need to escape some stuff.
↑ comment by OrphanWilde · 2013-03-28T18:18:57.088Z · LW(p) · GW(p)
Aha! I think the issue here is that you're thinking of it in terms of two identical observers. The observers aren't identical in my arguments - one is post-hoc. I've realized where the discrepancy between our arguments is coming from: the way I keep framing problems as being about the observer. Suppose I and a friend, Bob, are arguing about who goes in the house. In this case, there's no practical difference between our evidence. The difference isn't between me and other; the difference is between me and (other whom I wouldn't have known about except for said experience).
Bob and I are identical (I did say this wasn't necessarily anthropic!) for the purposes of calculation. Bob is included in p(M).
Steve, who wrote a post on a rationality forum describing his experiences, is -not- identical with me for the purposes of calculation. Steve is included in p(O).
Does my argument make more sense now? Bob's information is fully transferable - in terms of flipping coins, he called heads before flipping ten heads in a row. Steve's information is -not- - he's the guy who flipped ten heads in a row without calling anything.
(ETA: I have no idea how to make them look right. How do you escape stuff?)
↑ comment by arundelo · 2013-03-28T18:50:34.186Z · LW(p) · GW(p)
In certain contexts an asterisk is a magic character and you need to precede it with a backslash to keep it from turning into <em> or </em>. To get
p(X|M, M1) = p(M|X)*P(X|M1)/p(M|M1)
do
p(X|M, M1) = p(M|X)\*P(X|M1)/p(M|M1)
Or you can just put equations in their own paragraphs that are indented by four spaces, in which case no characters will have their magic meaning. (This is how I did the above paragraph where the backslash is visible.)
↑ comment by OrphanWilde · 2013-03-28T18:57:24.234Z · LW(p) · GW(p)
Is there a reference somewhere on LessWrong or the Wiki for the mark-up used in the comments?
↑ comment by arundelo · 2013-03-28T19:37:44.335Z · LW(p) · GW(p)
There's a "Show help" button on the right underneath comment fields. The quick reference it reveals includes a link to the wiki page.
The formatting language used is a (not totally bug-free) subset of Markdown.
↑ comment by OrphanWilde · 2013-03-28T19:45:29.321Z · LW(p) · GW(p)
*Laughs* I'm so used to useless help screens that I ignored that button, looked for it manually on the wiki, couldn't find it, and gave up. Thanks!
↑ comment by private_messaging · 2013-03-26T09:48:05.273Z · LW(p) · GW(p)
Again, why?
Suppose I type 693012316 693012316. Maybe I typed the same number twice; maybe I used a quantum random number generator and got them separately on the first try. You use the equality as evidence for the former, even if you believe in many worlds, where it is basically a lottery played by parallel yous. Likewise, the winner of the lottery observes the same number twice, which is some evidence for various crazy hypotheses where the selection of "I" necessarily coincides with the winner. edit: you're totally correct, though, that such crazy hypotheses are quite improbable to begin with.
Replies from: gwern↑ comment by gwern · 2013-03-26T14:02:57.669Z · LW(p) · GW(p)
Likewise, the winner of the lottery observes the same number twice, which is some evidence for various crazy hypotheses where the selection of "I" necessarily coincides with the winner.
In my example of two worlds, the odds of observing the observed evidence are the same in both worlds, and so there is no update.
What set of worlds are you postulating for your "two numbers" example? Because your example, as far as I understand it, doesn't seem at all analogous.
Replies from: private_messaging, Eugine_Nier↑ comment by private_messaging · 2013-03-26T17:12:43.505Z · LW(p) · GW(p)
I'm talking specifically about supernatural explanations for you winning the lottery; likewise, I don't see why people opt for supernatural explanations for hauntings.
Suppose we do something like Solomonoff induction, dealing with codes that match observations verbatim. There's a theory that reads bits off the tape to produce the ticket number, then more bits to produce the lottery draw, and there's a theory that reads bits off the tape and produces both numbers as equal. Suppose the lottery has size 2^20, about 1 million. Then the former theory will need 40 lucky bits to match the observation, whereas the latter will need only 20 lucky bits. For nearly everyone the latter theory will be eliminated; for the lottery winner it will linger, and now, with the required lucky bits paid for, the difference in length between the theories will decrease by 20 bits. An S.I.-using learning agent (AIXI and variations of it) which won the lottery will literally expect a higher probability of victory in the next lottery, because it didn't eliminate various "I always win" hypotheses. edit: and indeed, given a sufficiently big N, the extra code required for the "I always win" hack will be smaller than log2(N), so it may well become the dominant hypothesis after a single victory. Things like S.I. are only guaranteed to be eventually correct for almost everyone; if there are enough instances, the wrongmost ones can be arbitrarily wrong.
At the end of the day it's just how the agents learn - if you were constantly winning lotteries, at some point you would start believing you had supernatural powers, or that MWI is true and consciousness preferentially transfers to the happy winner, or the like. Any learning agent is subject to the risk of learning wrong things.
edit: a more concise explanation: if you choose a person by some unknown method, and they then win the lottery, that's distinct from not choosing anyone and then someone winning the lottery. Namely, in the former case you got evidence in favour of the hypothesis that the "unknown method" picks lottery winners. For a lottery winner, their place in the world was chosen by some unknown method.
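A back-of-the-envelope rendering of that bookkeeping (my sketch; the 20-bit lottery is from the comment, and the 5-bit code-length difference between the theories is an invented stand-in):

    lottery_bits = 20      # lottery of size 2^20, about a million
    extra_code_for_eq = 5  # assumed: the "both numbers equal" program is a bit longer

    # Prior weight of a theory ~ 2^-(code length + lucky input bits consumed).
    log2_w_ind = -(2 * lottery_bits)                 # independent numbers: 40 lucky bits
    log2_w_eq = -(extra_code_for_eq + lottery_bits)  # equal numbers: 20 lucky bits

    # Everyone but the winner sees the two numbers differ, which eliminates
    # the "equal" theory.  For the winner both theories survive, and "equal"
    # now outweighs "independent" by:
    print(2.0 ** (log2_w_eq - log2_w_ind))  # 2**15 = 32768.0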
Replies from: gwern↑ comment by gwern · 2013-03-29T20:54:03.554Z · LW(p) · GW(p)
So let's see if I'm understanding you here.
You treat a lottery output as a bitstring and ask about SI on it. We can imagine a completely naive agent with no previous observations; what will this ignoramus predict? Well, it seems reasonable that one of the top predictions will be for the initial bitstring to be repeated; this seems OK by Occam's razor (repeating events are necessary for induction), and I understand from empirical investigation of simple Turing machines that many (most? all?) terminating programs repeat their output. It will definitely rank the 'sequence repeats' hypotheses above possible PRNGs, or very complex physical theories encompassing atmospheric noise and balls dropping into baskets, etc.
So far, so good.
I think I lose you when you go on to talk about inferring that you will always win and stuff like that. The repeating hypotheses aren't contingent on who they happen to. If the particular bitstring emitted by the lottery had also included '...and this number was picked by Jain Farstrider', then SI would seem to predict that this Jain will win the next one as well, by the same repeating logic. It certainly will not predict that the agent will win, and the hypothesis 'the agent (usually) wins' will drop.
Remember that my trichotomy was that you need to either 1) invoke anthropics; 2) break Aumann via something like dishonesty/incompetence; or 3) you actually do have communicable knowledge.
These SI musings don't seem to invoke anthropics or break Aumannian requirements, and, looking at them, they seem communicable. 'AIXI-MC-MML*, why do you think Jain will win the lottery a second time?' '[translated from minimum-message-length model+message] Well, he won it last time, and since I am ignorant of everything in the world, it seems reasonable that he will win it again.' 'Hmm, that's a good point.' And ditto if AIXI-MC-MML happened to be the beneficiary.
* I bring up minimum message length because Patrick Robotham is supposed to be working on a version of AIXI-MC using MML, so one would be able to examine the model of the world(s) a program has devised so far, and one could potentially ask 'why' it is making the predictions it is. Having a comprehensible approximation of SI would be pretty convenient for discussing what SI would or would not do.
Replies from: private_messaging↑ comment by private_messaging · 2013-03-30T17:55:00.335Z · LW(p) · GW(p)
It will definitely rank the 'sequence repeats' hypotheses above possible PRNGs,
It doesn't need PRNGs. The least confusing description of S.I. is as follows: the probability of a sequence S is the probability that a universal prefix Turing machine with 3 tapes - an input tape which can only be read from, with its head advancing in one direction only; a work tape which can be read from and written to, initialized with zeroes; and an output tape that can only be written to - will output the sequence S when fed a never-ending string of random bits on the input tape.
The head's rule set is such that a program can be loaded via the input tape, and the program can then use the input tape as a source of data. This is important because a program can set up an interpreter emulating another Turing machine (which ensures a constant bound on the difference between code lengths for different machines).
(We predict using conditional probability: given that the machine outputs a sequence matching the previous observations, what is the probability that it will produce specific future observations?)
So if we are predicting, for example, perfect coin flips, an input string which begins with code that sets up the machine to relay random bits from the input to the output does the trick. This code requires the bits on the input tape to match the observations, meaning that for each observed bit, the length of the input string which has to be correct grows by 1 bit.
Meanwhile, code that sets up the machine to output repeating zeroes does not require any more bits on the input tape to be correct. So when you are getting repeated zeroes, the code relaying random bits is lowered in weight by a factor of 2 with each observed bit, whereas the theory outputting zeroes stays the same (until, of course, you encounter a nonzero bit and it is eliminated).
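A toy rendering of that weight bookkeeping (mine; normalization omitted):

    relay_weight = 1.0  # program that copies random input bits to the output
    zeros_weight = 1.0  # program that just prints zeroes
    for bit in [0, 0, 0, 0, 0, 1]:
        relay_weight *= 0.5     # one more input bit had to come out right
        if bit != 0:
            zeros_weight = 0.0  # a nonzero observation eliminates the theory
        print(bit, relay_weight, zeros_weight)
    # Until the 1 arrives, "all zeroes" gains a factor of 2 on the relay
    # theory with every observed bit.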
For more information, see referenced papers in
http://www.scholarpedia.org/article/Algorithmic_probability
I think I lose you when you go on to talk about inferring that you will always win and stuff like that. The repeating hypotheses aren't contingent on who they happen to. If the particular bitstring emitted by the lottery had also included '...and this number was picked by Jain Farstrider', then SI would seem to predict that this Jain will win the next one as well, by the same repeating logic.
You scratched your ticket and saw a number. Correct codes have to match both the number on the ticket and the number winning the lottery. Some use the same string of input bits to match both; some use different pieces of the input string.
(I am assuming that S.I. cannot precisely predict the lottery. Even assuming a completely deterministic universe, light from distant stars, incoming cosmic rays - all of that incoming information ends up mixed into the grand hash of thermal noise and thermal fluctuations.)
edit: to make it clearer. Suppose that the lottery number has 1000 decimal digits; you scratch one ticket; then, later, the winning number is announced, and it matches your ticket. You will conclude that the lottery was rigged, with very good confidence, won't you? In the absence of some rather curious anthropic reasoning, the existence or non-existence of 10^1000 - 1 other tickets, or of other conscious players, is entirely irrelevant (and in the presence of anthropics you have to figure out which ancestors of h. sapiens will change your answer and which won't). With regards to Aumann's agreement theorem, other people would agree that if they were in your shoes (shared your data and priors) they'd arrive at the same conclusions, so it is not at all violated.
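Spelling the 1000-digit example out in logarithms (my sketch; the one-in-a-million prior on rigging is an assumption for illustration):

    log10_prior_odds_rigged = -6  # assumed prior odds the lottery is rigged for you
    log10_like_fair = -1000       # P(your exact number wins | fair draw)
    log10_like_rigged = 0         # P(it wins | rigged in your favor) ~ 1

    log10_posterior_odds = log10_prior_odds_rigged + (log10_like_rigged - log10_like_fair)
    print(log10_posterior_odds)  # 994, i.e. posterior odds of ~10^994 for "rigged"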
↑ comment by Eugine_Nier · 2013-03-28T04:33:43.331Z · LW(p) · GW(p)
The point is that if the lottery is biased it's more likely to be biased in such a way that the same number repeats.
↑ comment by CronoDAS · 2013-03-24T03:14:38.258Z · LW(p) · GW(p)
If the world is ever capable of simulating individual people, any given important person is more likely a simulation than the real thing - so, given that I'm not particularly important, I can probably assume I'm not simulated, unless something exceptionally unlikely happens to me.
"Important" people in most MMOs tend to be NPCs. You can't have every PC be King of Orgrimmar or whatever...
↑ comment by OrphanWilde · 2013-03-22T18:13:31.688Z · LW(p) · GW(p)
If my interpretation of your complaint is correct, it's probably a good thing I didn't do that, then.
ETA: My interpretation being that you're complaining that I rejected the theorem as an incorrect derivation. The theorem is a perfectly fine derivation from its assumptions; I can't find anything wrong with its logic, and have no interest in trying to. My statement was simply meant to reflect the fact that the conclusion is wrong here, which follows from my rejection of the assumption that all pertinent evidence can in every case be shared.
↑ comment by AlexSchell · 2013-03-22T23:28:19.175Z · LW(p) · GW(p)
I don't really know much about this, but from what I recall the theorem doesn't require the hypothesis that info can be shared. The theorem says that two Bayesians with common priors and common knowledge of their posteriors have the same posteriors. They don't actually need to communicate their evidence at all, so the evidence need not be communicable.
Replies from: Creutzer↑ comment by Creutzer · 2013-03-23T10:56:31.083Z · LW(p) · GW(p)
In practice, though, how are they going to attain knowledge of each other's posteriors without communicating?
Replies from: AlexSchell↑ comment by AlexSchell · 2013-03-23T15:59:02.650Z · LW(p) · GW(p)
Actually, to agree on a proposition, they only need to have common knowledge of their posteriors for that proposition. (At least this is how Aumann describes his result.) And they can communicate those posteriors without communicating their evidence.
Replies from: Creutzer↑ comment by handoflixue · 2013-03-22T18:15:03.572Z · LW(p) · GW(p)
saying a theorem is wrong because the hypotheses are not true is bad logic.
If the objection is true, and the hypothesis is false, that seems like a great objection! If, on the other hand, he provided no evidence for his objection, then it seems the bad logic is in not offering evidence, not in attacking the hypothesis directly.
Am I missing something, or just reading this in an overly pedantic way?
Replies from: Kindly↑ comment by Kindly · 2013-03-23T16:09:35.860Z · LW(p) · GW(p)
You're missing something by reading this in an insufficiently pedantic way.
The pedantic way is as follows. The theorem's claim is "If A, then B", where A is the hypothesis. Claiming that A is false does not invalidate the theorem; in fact, if A could be proven to be false, then "If A, then B" would be vacuously true, and so in a way, arguing with the hypotheses only supports the theorem.
You could, however, claim that the theorem is useless if the hypothesis never holds. One example of this is "If 2+2=5, then the moon is made of green cheese". This is a true statement, but it doesn't tell us anything about the moon because 2+2 is not 5.
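The vacuous case can even be formalized in a line (Lean 4, my addition): from the false hypothesis 2 + 2 = 5, any proposition P follows.

    -- Vacuous truth: a proof of an arbitrary P from the false hypothesis.
    example (P : Prop) (h : 2 + 2 = 5) : P :=
      absurd h (by decide)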
comment by Shmi (shminux) · 2013-03-24T04:08:36.513Z · LW(p) · GW(p)
Since no one has mentioned this famous excerpt from Feynman's biography yet:
He was at work in the computing room when the call came from Albuquerque that Arline was dying. He had arranged to borrow Klaus Fuchs’s car. When he reached her room she was still. Her eyes barely followed him as he moved. He sat with her for hours, aware of the minutes passing on her clock, aware of something momentous that he could not quite feel. He heard her breaths stop and start, heard her efforts to swallow, and tried to think about the science of it, the individual cells starved of air, the heart unable to pump.
Finally he heard a last small breath, and a nurse came and said that Arline was dead. He leaned over to kiss her and made a mental note of the surprising scent of her hair, surprising because it was the same as always.
The nurse recorded the time of death, 9:21 P.M. He discovered, oddly, that the clock had halted at that moment—just the sort of mystical phenomenon that appealed to unscientific people.
Then an explanation occurred to him. He knew the clock was fragile, because he had repaired it several times, and he decided that the nurse must have stopped it by picking it up to check the time in the dim light.
Gleick, James (2011-02-22). Genius: The Life and Science of Richard Feynman (Kindle Locations 3604-3612).
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-03-24T04:44:09.824Z · LW(p) · GW(p)
As has been observed elsewhere, still more likely is that the clock had stopped prior to her death and the nurse didn't realize it was stopped while recording the time of death.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-03-24T05:25:44.553Z · LW(p) · GW(p)
Right, still a perfectly non-mysterious explanation.
comment by Kindly · 2013-03-23T22:54:37.582Z · LW(p) · GW(p)
Okay, I'll bite the lottery bullet. If I happen to win the lottery, then this does not make it more likely that something unusual happened (e.g. the lottery being rigged). The hypothesis that I naturally won the lottery is unlikely, true; but the hypothesis that the lottery was rigged so that I would win is equally unlikely for the exact same reasons: why was it rigged for me to win, and not someone else?
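To put rough numbers on that symmetry (my sketch; every figure is invented): the "why me?" factor divides both hypotheses equally, so my winning leaves the odds of "rigged" essentially unchanged.

    n_players = 10_000_000
    p_rigged = 1e-4                          # assumed prior that the lottery is rigged at all
    p_rigged_for_me = p_rigged / n_players   # rigged for me specifically
    p_win_fair = (1 - p_rigged) / n_players  # fair draw that I happen to win

    odds_before = p_rigged / (1 - p_rigged)
    odds_after = p_rigged_for_me / p_win_fair
    print(odds_before, odds_after)  # equal (~1e-4): my win is no evidence of rigging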
Similarly, in the haunted house situation, you should in fact consider it unlikely that these events would happen to you. However, this penalty should apply equally to just about any explanation: it's equally unlikely that psychopaths would choose you, specifically, to play tricks on, as it is that you, specifically, would happen to find yourself in a haunted house.
The only way to break symmetry is if you have reason to believe that some kind of rare event is more likely to happen to you. Is there something about you that makes you more likely to find yourself in a haunted house? (e.g. are you a ditzy blonde girl between the ages of 16 and 19.5?) Then you should update on that. Similarly, if you're related to someone running a lottery, and you find yourself winning it, you should suspect that foul play was involved.
Note that all these explanations carry over: if the lottery winner is related to the lottery organizer, you should suspect foul play whether or not the lottery winner is you.
You might argue that supposing you live in a simulation breaks this: specifically, if we consider simulations of the "everyone you don't know is fake" flavor. In that case, the hypothesis "the simulation is rigged so one of the people being simulated in detail wins" means you have maybe a 1:70 chance of winning, compared to a much smaller chance if the world is big.
But here the improbability is just redirected to a higher level: what are the odds that someone would simulate you, specifically? Once we factor this in, the simulation hypothesis is back to "odds that an arbitrary person finds themselves in a lottery-based simulation". Calculating these odds is beyond the scope of this comment.
comment by Juxtaposition · 2013-03-23T20:22:11.473Z · LW(p) · GW(p)
This is a textbook example of getting carried away with irrational fascination, and letting one's beliefs drift along lines defined by no more than a disorienting attempt to jam defunct terminology into a new situation. It's reminiscent of when notable academics profess their 'belief in God' by reference to plausibly scientifically defensible positions concerning possible alien intelligences, potential future singularity states, etc. That's not what they meant by "God"!
Why call it "superstition"? Why refer to this as an instance of you updating on anything? You now believe in haunted houses? Really? Tell me, why did you write this post as you did? What would this post have looked like if you did nothing but describe the events that you witnessed? For one, it probably wouldn't have been nearly as fun to write. Nor would it have seemed as on-topic for this forum as it currently does.
You claim you don't believe in supernatural events. You say you believe the house is "haunted", but as no more than a description of these seemingly bizarre occurrences that have yet to be explained. But if you take this seriously, you should be able to rewrite this whole post as just that--a mere description of the events. If by "haunted", you only mean what happened, then there's no reason why you couldn't just rewrite it without that word. If all I mean by "TV" is "television", then I should be able to replace all instances of the first word with the second, and retain my meaning completely.
But then where's the update? What did you change your mind about? Where's the insight? All that's left is a catalog of strange things that happened in your old house. All the interesting philosophy vanishes, and the responses devolve into nothing more engaging than suggestions about what may be going on, or action that may be taken. Gaslighting, EMFs, sleepwalking, a gas leak, etc. Perhaps you should set up some cameras, or get the place inspected. Or maybe you should just get the hell out of there and forget about the whole thing.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2013-03-28T16:34:11.248Z · LW(p) · GW(p)
The update is on the idea that houses, rather than individuals, may be the locus of events. I previously regarded haunted houses as the product of superstitious people scaring themselves, or perhaps a series of coincidences which, though unlikely, will occur in a large enough population. I can no longer reasonably do so.
I don't subscribe to a particular causal explanation because I don't have one.
comment by maia · 2013-03-23T02:37:03.872Z · LW(p) · GW(p)
Interestingly, my father, a moderately respected scientist, has cited similar reasoning to me when discussing why he believes in supernatural phenomena. He believes he has encountered overwhelmingly convincing evidence, but says he understands that I shouldn't necessarily believe him. This is... a pleasant way to deal with disagreement, if not faultless reasoning.
After reading your thread with gwern, I think you and he are probably wrong about this reasoning in general, and you are probably wrong in your case specifically.
I think it should be possible to encounter supernatural phenomena in such a way that it is extremely convincing to you and not to anyone else. If you were a highly rational agent who encountered real supernatural phenomena, and told (even perfectly rational) people about it, their first reaction would be not to believe you. And this likely makes sense on their part unless you're able to produce extremely good evidence that (you are highly rational AND you are very unlikely to be lying), OR you have reproducible evidence of a particular phenomenon that you can show them.
But you should be able to produce such evidence... if it's not convincing to them, why is it convincing to you?
Replies from: Eugine_Nier, drethelin, jooyous↑ comment by Eugine_Nier · 2013-03-23T23:35:57.862Z · LW(p) · GW(p)
But you should be able to produce such evidence... if it's not convincing to them, why is it convincing to you?
Because they don't have enough evidence of your rationality.
Replies from: maia↑ comment by maia · 2013-03-24T17:24:41.961Z · LW(p) · GW(p)
Then why can't you produce that instead? If you don't have any outside-view evidence of your rationality, why do you believe you are rational?
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-03-24T23:08:33.338Z · LW(p) · GW(p)
Most people in the world believe in the supernatural. What's your outside view argument that it is they and not you who are irrational?
↑ comment by drethelin · 2013-03-24T19:06:10.893Z · LW(p) · GW(p)
If you see something vanish before your eyes, that's evidence in favor of weird stuff happening, but it's a lot less convincing to anyone else. Without a recording, it's impossible to convey all the attached data in a reliable way - e.g., you KNOW you bought it, you know where and when and why, and you know that at one moment it was there and the next it wasn't. But to them, it may never have "existed" at all, if they never saw it before it "vanished". It's easier for them to assume you misplaced it or are lying about ever having whatever it was. The same is true for things appearing or moving around. There's also the problem of sample size. If you live in a "haunted" house, you spend orders of magnitude more time in it than anyone you try to explain the problem to. If on average something happens once a week, this can be convincing and frightening to someone who lives there for a year but might never be seen by anyone else.
Replies from: maia↑ comment by maia · 2013-03-24T21:07:14.842Z · LW(p) · GW(p)
Disclaimer: I'm going to set aside the issue of lying here, and assume you can convince people you are telling the truth, because this seems like a less interesting line of discussion. If you want to talk about that, feel free to say so.
If you see something vanish before your eyes, that's evidence in favor of weird stuff happening, but it's a lot less convincing to anyone else.
If they believe you are telling the truth, but think it is more likely that you are crazy than that this actually happened, why should you prefer the latter hypothesis? Do you trust your senses more just because they are your own? That doesn't make sense.
It's easier for them to assume you misplaced it
Why shouldn't you assume this if it makes more sense? People forget moving things all the time.
if on average something happens once a week, this can be convincing and frightening to someone who lives there for a year
If it's really a repeating phenomenon, you should be able to get someone else to come over to your house and witness it at least once. And after a few times of the two of you seeing the same thing at the same time (ideally with some safeguards like writing down what you saw and when before exchanging information to avoid bias), that person can safely demote the "you are crazy" hypothesis.
Replies from: drethelin↑ comment by drethelin · 2013-03-25T06:24:20.404Z · LW(p) · GW(p)
Why doesn't it make more sense to trust your own senses than other people's? You have a LOT more evidence of them being accurate than you do of anyone else's. Different brainstates correlate with different actual events with various degrees of reliability, but you'll always have a better sample size for gathering correlational data about your own brainstates than about someone else's.
Before you hornswoggle someone into hanging out at your spooky mansion for a few weeks to make sure they see something weird, you are in "possession" of evidence (e.g. a temporary occurrence that created a certain brainstate, such as thinking you saw something move or appear) that is convincing to you but not to anyone else. Once you actually start witnessing stuff with someone else, that's further, different evidence.
↑ comment by jooyous · 2013-03-23T19:02:56.534Z · LW(p) · GW(p)
I think there's a state of mind associated with stuff like this where you just feel bad and you don't even want to know why anymore, you just want the bad feeling to stop? So it's not really based on evidence. I think there are some brain states that might be built out of confirmation bias and bad reasoning, but sometimes the only way to flush them and make your brain work again is to just move out of the surroundings that caused the badness in the first place.
comment by NancyLebovitz · 2013-03-22T23:05:49.691Z · LW(p) · GW(p)
"A stone cannot fall from the sky - there ARE no stones in the sky." -- Lavoisier (quote found at the link above)
When you hear something really weird from a number of independent sources, what can you conclude?
Replies from: gwern, Manfred, Luke_A_Somers↑ comment by gwern · 2013-03-23T02:35:44.096Z · LW(p) · GW(p)
So two professors make a tiger? Let us keep in mind that in the annals of Charles Fort, I have little doubt that we could find things we currently believe false with as much testimony.
↑ comment by Luke_A_Somers · 2013-03-23T22:59:43.152Z · LW(p) · GW(p)
Anyone with eyes can tell that there's at least one rock in the sky: the MOON. And they've seen shooting stars. I'm sure both Lavoisier and Jefferson were aware of Newton's explanation of planetary motion as a continuation of earthly laws, and of Galileo's observations of the moons of Jupiter. There are also legends of rocks that fell from the sky (e.g. Excalibur was said to have been forged from meteoric iron).
With all of this, it's a bit of a mystery to me how it could be a stretch to suppose that some rocks do fall from the sky.
All this is of course aside from your general question, but really.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-03-23T23:45:49.759Z · LW(p) · GW(p)
An interesting question is what ideas are we discounting today in an equally irrational manner.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-03-24T02:26:17.130Z · LW(p) · GW(p)
Bingo. That's roughly what I was going to say.
It's a lot easier to be smart after the facts are in.
comment by [deleted] · 2013-03-23T04:11:28.292Z · LW(p) · GW(p)
This should have been two posts. First, "My house is spooking me, what's up with that? Spooky things, amirite?". Second, "There's some distinct information content between experiencing an event and being told about someone else experiencing that event. Therefore AAT doesn't work, amirite?"
To the second point, I haven't read any write-ups of AAT. Does it say you have to speak messages that have the same evidential impact on others as your accumulated experience has on you? That sounds like a pretty terrible theorem. I thought there was some wiggle room where each agent could regard the other as an imperfect updater whose words nevertheless carry some information about reality, and from that they eventually reach close posteriors, given priors that were close once upon a time. So the end effect is like an evidence swap, but it works even if you can't quite convey the messages you want, so long as your statements are informative about your present belief state at each round of communication and you can each update a bit.
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2013-03-23T10:58:15.568Z · LW(p) · GW(p)
Here's how AAT works:
- You have two perfect Bayesian agents.
- They have "common knowledge" of each others rationality. (This means that they're rational, they each know that the other is rational, they each know the other knows that they are rational, they each know that the other knows that they know they are rational, and so on.)
- They each have the same prior for some proposition "A".
- They then get some evidence. What evidence they each get depends on the agent, so they end up with different beliefs.
- They then communicate to each other their current beliefs. (They don't communicate anything about the evidence they saw or their reasoning. They only trade one number: the probability they currently assign to A.)
- Because the belief of the other agent tells them something about what the other agent saw, they must update their current beliefs.
- Theorem: After repeating this enough times, they end up with the same beliefs. (A toy simulation of this procedure is sketched below.)
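Here is that toy simulation (my sketch, in the spirit of Geanakoplos and Polemarchakis's "We can't disagree forever"; the state space, partitions, and proposition are invented). The agents trade only posteriors, never raw evidence, and still converge.

    from fractions import Fraction

    def posterior(A, cell, prior):
        # P(A | the true state lies in this cell)
        return sum(prior[s] for s in cell if s in A) / sum(prior[s] for s in cell)

    def cell_of(partition, s):
        return next(c for c in partition if s in c)

    def refine(partition, announcer, A, prior):
        # Hearing a posterior splits each of my cells by what the announcer
        # would have said in each state.
        said = {s: posterior(A, c, prior) for c in announcer for s in c}
        return [frozenset(s for s in c if said[s] == v)
                for c in partition for v in set(said[s] for s in c)]

    states = [1, 2, 3, 4]
    prior = {s: Fraction(1, 4) for s in states}
    A = {1, 4}                                   # the proposition in question
    p1 = [frozenset({1, 2}), frozenset({3, 4})]  # agent 1's evidence partition
    p2 = [frozenset({1, 2, 3}), frozenset({4})]  # agent 2's evidence partition
    truth = 1                                    # the actual state

    for step in range(10):
        q1 = posterior(A, cell_of(p1, truth), prior)
        q2 = posterior(A, cell_of(p2, truth), prior)
        print(step, q1, q2)
        if q1 == q2:
            break
        p2 = refine(p2, p1, A, prior)  # agent 1 announces, agent 2 updates
        p1 = refine(p1, p2, A, prior)  # agent 2 announces, agent 1 updates

In this run the agents print 1/2 vs 1/3 for two rounds (each announcement refines the other's information partition), then agree on 1/2.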
↑ comment by magfrump · 2013-03-24T01:05:52.928Z · LW(p) · GW(p)
The first two of these hypotheses I think pretty clearly don't apply to this context; all of the uncertainty that I subjectively feel comes from not trusting that you are rational. If I heard someone close to me say something like this, then my first instinct would be to think of them as being less rational, as this seems a more likely explanation than the explanation they've given.
However there are a small number of people that I feel like, if they came to me with this evidence, they would be able to present it in a way that could convince me.
So at least my introspection says that bullet point 2 is the failing hypothesis, and correcting towards this (by having the evidence come from people I trust more) will actually result in more updating. This seems consistent with your post, since people generally trust themselves the most.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-03-24T02:35:33.209Z · LW(p) · GW(p)
Is there some point at which AAT suggests that people are disagreeing because they have different experiences, and something needs to be checked on?
My example is a time when I was with people who were arguing about how hot the hot and sour soup was, and eventually some sampling established that one side of the table had been given hotter soup.
This is an easy case, of course - everyone's nervous systems were similarly calibrated for capsaicin.
Replies from: magfrump, magfrump↑ comment by magfrump · 2013-03-24T09:45:45.920Z · LW(p) · GW(p)
The other comment is maybe less in the spirit of your comment, so here's a more direct reply:
If different agents communicate their evidence to one another continually, and keep having different evidence that draws their beliefs apart, the simplest conclusion should end up being that they are in different reference classes. I think this ends up being a question of specific evidence and updates, and isn't really relevant to AAT.
As an example, it is easy for me to believe that my friend is allergic to peanuts and still eat peanuts myself. We both eat a mystery food, independently, then talk about our experiences. He went to the hospital, and I thought it was tasty. We both conclude the food had peanuts; we can completely Aumann despite our different experiences.
↑ comment by magfrump · 2013-03-24T09:40:08.325Z · LW(p) · GW(p)
I am rereading your question as: "When do circumstances become different enough that evidence from one situation doesn't apply to the other situation?" and this sounds like the fundamental question of reference class tennis, which I believe does not have a good answer.
comment by Will_Newsome · 2013-03-22T21:34:51.227Z · LW(p) · GW(p)
I've had many similar experiences. You might want to search for J. E. Kennedy, capricious psi. And keep in mind it's dangerous to be around human or transhuman intelligences with unknown motivations. Also, yay updating on evidence! Whether the explanation is supernatural or not, the important thing is to keep many hypotheses in mind and not disregard evidence or hypotheses just because they're uncomfortable.
comment by drethelin · 2013-03-22T20:44:35.148Z · LW(p) · GW(p)
My prior for "weird geographically located feelings and occurrences happen" is a lot higher (as it obviously should be, but it's worth stating) than for any of the possible explanations put forward for it. I think EMF-like explanations are the most plausible, though. Tricks of memory and feeling seem pretty plausible too - e.g., you didn't actually lose the key except into your pocket, and then dropped it later than you thought you did.
comment by Dagon · 2013-03-23T21:19:38.043Z · LW(p) · GW(p)
The question of "probability of occurring" is only half the question. For you, the probability that those experiences are in your memory is 1 - they're real. The more interesting probability to explore is the relative likelihood of causes for these memories.
For the three main possibilities - 1) "supernatural" or non-human action; 2) believable hallucinations (including hallucination of other people's reactions, or environmental causes of hallucinations in multiple people); 3) devious human agency - or any others you find more likely: what were your prior probabilities of encountering them? Having experienced this strangeness, what are your current probabilities?
My estimates that I will experience something along these lines are around 0.000001, 0.005, and 0.001 respectively (note that there is some overlap between 2 and 3, and I count "someone inducing hallucinations" as #3 for this). If I did experience things that could be explained by any of them, they would all increase massively - they'd sum to something under 1 to account for options I haven't listed. But they'd retain their proportions, so figure 0.0001, 0.5, and 0.1. It would take a lot of evidence to reduce the RELATIVE probability of 1 vs. 2 and 3, not just the absolute probability of experiencing weird shit.
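A two-line check (mine) that those numbers really do retain their proportions - conditioning on "weird stuff happened" multiplies every hypothesis by the same factor:

    priors = [0.000001, 0.005, 0.001]  # supernatural, hallucination, human agency
    posts = [0.0001, 0.5, 0.1]
    print([q / p for p, q in zip(priors, posts)])  # approximately [100.0, 100.0, 100.0]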
So, before your experiences, what were your priors for your main causal hypotheses? And what are your estimates now?
For myself as a reader of the post, I must admit that I add a sub-possibility of #3: the poster is misrepresenting something, intentionally or un-. And that my prior (for experiencing reports from strangers of strange happenings) gives this a higher likelihood than the others by more than an order of magnitude.
Sorry for that, but don't let it stop you from examining your own experiences in a rational way, and finding ways to distinguish among possibilities in order to make correct decisions.
Replies from: OrphanWilde↑ comment by OrphanWilde · 2013-03-28T16:28:38.737Z · LW(p) · GW(p)
I can't really rule out any of those options, but the discrepancy in situations and people present makes it difficult to blame any one person for all of the events. (The key event in particular occurred with my girlfriend and brother present, while my sister was out of state. And the event with the sonograms occurred when I wasn't there, so not even I am a common element in all the weird events.)
I'm leaning towards a mixture of #2 and a fourth option - that there's some environmental condition in the house which is causing people to behave oddly.
An important qualifier in my post is that I previously regarded the idea of haunted houses (again, without prejudice for the word "haunted") as purely fiction, entirely in the heads of the occupants. I'm now forced to conclude that there are houses where - for whatever reason - strange events are indeed common.
comment by knb · 2013-03-26T07:05:42.737Z · LW(p) · GW(p)
The evidence I have personally seen suggests haunted houses are, in fact, real, without given any particular credence to any particular explanation of what the haunting is.
You should clarify what you mean by "haunted houses." You seem to be using the term to mean "houses where people have experienced spooky things." If that is all you mean by the phrase, then your belief in them is not unusual in any sense. I believe that many people have had spooky-house experiences that they don't know how to explain.
A sensible way to react to this is to assume that "hauntings" have a variety of causes. One haunted person was hallucinating, one person was being pranked by someone, another had someone secretly living in her root cellar, etc. It seems like you are arguing that "haunted houses" exist as a phenomenon with a common cause (as yet unknown, possibly EMFs). That is an unreasonable way to react.
Also, please post a photo of the alien-geometry room, and tell us what you did with the ultrasound photos.
comment by Richard_Kennaway · 2013-03-25T13:11:27.634Z · LW(p) · GW(p)
You mention a housemate, apparently your sister. What's more likely: ghosts, hand-wavy stuff about electric fields, or your sister? Well, you know your sister; I know nothing about her.
Whatever the probabilities, the scariest hypothesis is that it's your sister doing all this. Poltergeists, whatever they are, seem to just do random weird annoying stuff. People, on the other hand, have purposes; crazy people can have crazy purposes.
comment by MrMind · 2013-03-25T09:29:32.795Z · LW(p) · GW(p)
Well, you notice you're confused: good! Now the best way to proceed is to form hypotheses about what could cause the strange phenomena and test for those. Is there a ghost in the house? Is gaslighting/EMF messing with your head? Is someone pranking you? There are various ways to falsify all of those... just test and update.
comment by MugaSofer · 2013-03-24T19:06:38.533Z · LW(p) · GW(p)
The evidence I have personally seen suggests haunted houses are, in fact, real, without given any particular credence to any particular explanation of what the haunting is. In particular, I own a house in which bizarre crap has happened since I first moved into it. Persistently. I've moved into another house, and have been making repairs in preparation to sell it; most recently, in a room with almost no furniture, in a space with absolutely no furniture, a key was dropped by myself. Four people searched the area for significant periods of time on three different occasions with no luck. I found it on the floor a week or two ago on top of something that wasn't there when it fell. Which is the straw that broke the camel's back in terms of my skepticism.
Other bizarre things that have happened include such things as my waking up to discover my recently-purchased bottle of key lime juice had been placed in the oven, and the oven turned on; the plastic bottle had just started to melt when I made the discovery. Another situation involved my sister, who one morning (while home alone) walked into the living room and discovered on a previously empty floor three sonograms of the previous occupant's baby. (There were -many- other things; I'm choosing for the purposes of this post the most unusual and least prone-to-outside-explanation occurrences. Night terrors, for example, are easily explained.)
Well that ... sure is weird. Are you planning to set up hidden cameras? Cameras seem like they should deal with most of the competing hypotheses here.
I must say, this seems like it would take a human - whether hiding in the walls or sleepwalking and high on mold. Even if you couldn't rule out ghosts, they don't seem to fit.
comment by Lauryn · 2013-03-22T18:53:07.628Z · LW(p) · GW(p)
I just have to point out that just because it's anecdotal evidence doesn't mean we shouldn't take it as evidence - albeit with a good amount of salt. Especially from someone who we have evidence (being on this site in the first place) is at least mildly rational. (And I'm not even going to mention the ghosts thing.)
Replies from: maia↑ comment by maia · 2013-03-23T02:13:45.278Z · LW(p) · GW(p)
who we have evidence (being on this site in the first place)
Careful. Lots of people can use the Internet. (It bothers me a little bit when people on LessWrong say things like 'using LessWrong is evidence of rationality' and suchlike, mostly because I feel it encourages complacency.)
His reputation here overwhelms any evidence from the mere fact that he's here.
Replies from: army1987, None↑ comment by A1987dM (army1987) · 2013-03-23T09:32:23.425Z · LW(p) · GW(p)
I took “being on this site” to mean “being a regular” in this context.
↑ comment by [deleted] · 2013-03-23T03:39:53.121Z · LW(p) · GW(p)
Oh wow, I totally misread Lauryn. I thought they were saying it's mildly rational to take anecdotes as evidence from someone who was on "the site", as in at the site of the spooky house. Like a general listens to a scout who has just surveyed some terrain and doesn't say, "Haha, your mission was pointless! This report is nothing but worthless anecdotes! Come back when you have a decent effect size!"
comment by [deleted] · 2015-06-14T06:15:36.566Z · LW(p) · GW(p)
I recently started carrying a plastic water bottle around with me. I used to carry a metal one, because I was worried about plastic leaching into the water and that being toxic or something. I decided to investigate the risk recently, and came across a Quora post that's reassured me that if I wash the bottle with soap, then dry it intermittently, it should be alright.
There is a recycling stamp on every plastic bottle, usually placed at the bottom, with a digit inside. This can help you pretty much understand the rules of recycling/reusing bottles. There are basically 7 codes; an explanation of each is below. ... Safely Reusing Plastic Number 1
The major concern with the reuse of plastic bottles isn't that the plastic will leach out harmful chemicals, but rather that bacteria will grow in the bottles. According to PlasticsInfo.org, in plastic labeled number 1, PET plastic itself is sanitary, but when warmed it becomes susceptible to bacteria. When washing bottles for reuse, the key is to thoroughly dry the bottle before refilling it with water or another liquid.
PET plastic bottles are designed and sold for one-time use so they are not shaped with a wide opening for easy cleaning. Consumers must take extra care when washing these bottles in hot soapy water, allowing enough time before refilling for the bottle to completely dry.
comment by Alrenous · 2013-04-09T03:55:51.965Z · LW(p) · GW(p)
Was the alien geometry visible from outside the room? Or would the burglar have had to open the door and thus see the expensive materials before deciding to leave it be?
Replies from: OrphanWilde↑ comment by OrphanWilde · 2013-05-10T17:01:38.727Z · LW(p) · GW(p)
Update on the alien geometries thing:
There's a reason the room looked like it didn't fit together right: it didn't fit together right. That part of the house was sagging heavily, and each angle was within visual tolerance of square while the whole was still visibly off - effectively an optical illusion. It's been fixed; it's still a very creepy room, but the corners are at least square now.