Dreams with Damaged Priors
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-08T22:31:22.994Z
Dreaming is the closest I've gotten to testing myself against the challenge of maintaining rationality under brain damage. So far, my trials have exhibited mixed results.
In one memorable dream a few years ago, I dreamed that the Wall Street Journal had published an article about "Eliezer Yudkowsky", but it wasn't me, it was a different "Eliezer Yudkowsky", and in the dream I wondered if I needed to write a letter to clarify this. Then I realized I was dreaming within the dream... and worried to myself, still dreaming: "But what if the Wall Street Journal really does have an article about an 'Eliezer Yudkowsky' who isn't me?"
But then I thought: "Well, the probability that I would dream about a WSJ article like that, given that a WSJ article like that actually exists in this morning's paper, is the same as the probability that I would have such a dream, given that no such article is in this morning's paper. So by Bayes's Theorem, the dream isn't evidence one way or the other. Thus there's no point in trying to guess the answer now - I'll find out in the morning whether there's an article like that." And, satisfied, my mind went back to ordinary sleep.
I find it fascinating that I was able to explicitly apply Bayes's Theorem in my sleep to correctly compute the 1:1 likelihood ratio, but my dreaming mind didn't notice the damaged prior - didn't notice that the prior probability of such a WSJ article was too low to justify raising the hypothesis to my attention.
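(Spelling out the bedside arithmetic - the shorthand here is added for clarity, with D for "I had such a dream" and A for "such an article exists in this morning's paper": posterior odds equal prior odds times the likelihood ratio,

    P(A|D) / P(~A|D) = [P(D|A) / P(D|~A)] × [P(A) / P(~A)] = 1 × [P(A) / P(~A)].

With a likelihood ratio of 1, the posterior odds just are the prior odds - which is exactly why everything hinges on the prior term my dreaming mind never inspected.)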
At this point even I must concede that there is something to the complaint that, in real-world everyday life, Bayesians dispense too little advice about how to compute priors. With a damaged intuition for the weight of evidence, my dreaming mind was able to explicitly compute a likelihood ratio and correct itself. But with a damaged intuition for the prior probability, my mind didn't successfully check itself, or even notice a problem - didn't get as far as asking "But what is the prior probability?"
On July 20 I had an even more dramatic dream - sparking this essay - when I dreamed that I'd googled my own name and discovered that one of my OBLW articles had been translated into German and published, without permission but with attribution, in a special issue of the Journal of Applied Logic to commemorate the death of Richard Thaler (don't worry, he is in fact still alive)...
Then I half woke up... and wondered if maybe one of my OBLW articles really had been "borrowed" this way. But I reasoned, in my half-awake state, that the dream couldn't be evidence because it didn't form part of a causal chain wherein the outside environment impressed itself onto my brain, and that only actual sensory impressions of Google results could form the base of a legitimate chain of inferences.
So - still half-asleep - I wanted to get out of bed and actually look at Google, to see if a result turned up for the Journal of Applied Logic issue.
And several times I fell back asleep and dreamed I'd looked at Google and seen the result; but each time on half-awaking I thought: "No, I still seem to be in bed; that was a dream, not a sense-impression, so it's not valid evidence - I still need to actually look at Google." And the cycle continued.
By the time I woke up entirely, my brain had fully switched on and I realized that the prior probability was tiny; and no, I did not bother to check the actual Google results. Though I did Google to check whether Richard Thaler was alive, since I was legitimately unsure of that when I started writing this post.
If my dreaming brain had been talking in front of an audience, that audience might have applauded the intelligent-sounding sophisticated reasoning about what constituted evidence - which was even correct, so far as it went. And yet my half-awake brain didn't notice that at the base of the whole issue was a big complicated specific hypothesis whose prior probability fell off a cliff and vanished. EliezerDreaming didn't try to measure the length of the message, tot up the weight of burdensome details, or even explicitly ask, "What is the prior probability?"
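(To make the totting-up concrete - the decomposition below is just the chain rule applied to the dream's details: write H1 = "an article of mine was republished without permission", H2 = "it was translated into German", H3 = "it ran in the Journal of Applied Logic", H4 = "in a special memorial issue for Richard Thaler". Then

    P(H1 & H2 & H3 & H4) = P(H1) × P(H2|H1) × P(H3|H1,H2) × P(H4|H1,H2,H3),

and since each factor is at most 1 - and for details this specific, far below 1 - the prior of the full scenario decays roughly exponentially in the number of burdensome details. That is the same intuition as a complexity prior shrinking like 2^-(message length).)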
I'd mused before that the state of being religious seemed similar to the state of being half-asleep. But my recent dream made me wonder if the analogy really is a deep one. Intelligent theists can often be shepherded into admitting that their argument X is not valid evidence. Intelligent theists often confess explicitly that they have no supporting evidence - just like I explicitly realized that my dreams offered no evidence about the actual Wall Street Journal or the Journal of Applied Logic. But then they stay "half-awake" and go on wondering whether the dream happens to be true. They don't "wake up completely" and realize that, in the absence of evidence, the whole thing has a prior probability too low to deserve specific attention.
My dreaming brain can, in its sleep, reason explicitly about likelihood ratios, Bayes's Theorem, cognitive chains of causality, permissible inferences, strong arguments and non-arguments. And yet still maintain a dreaming inability to reasonably evaluate priors, to notice burdensome details and sheer ridiculousness. If my dreaming brain's behavior is a true product of dissociation - of brainware modules or software modes that can be independently switched on or off - then the analogy to religion may be more than surface similarity.
Conversely, it could just be a matter of habits playing out in my dreaming self; that I habitually pay more attention to arguments than priors, or habitually evaluate arguments deliberately but priors intuitively.
61 comments
Comments sorted by top scores.
comment by Scott Alexander (Yvain) · 2009-08-08T23:21:40.516Z · LW(p) · GW(p)
I've heard it said somewhere (sorry, can't come up with a citation) that this is the rationale behind the brain's tendency to actively erase dreams, the bane of dream-diary keepers everywhere. Because we're somewhat source-blind, we risk keeping our attachment to conclusions we made in dreams even after we realize we were dreaming. To prevent that, the brain erases most of the five or so dreams we have each night, and even the ones we remember on waking tend to be mostly erased in a few hours unless you consciously think about them very hard.
↑ comment by gwern · 2009-08-09T01:08:52.427Z · LW(p) · GW(p)
Yes, and thank god! I not too infrequently have false memories from dreams, because my dreams are so humdrum. So far it hasn't caused me any irreparable damage, but I live in dread that someday I'll dream about filing my taxes or something and never get around to doing it in real life.
(One strange dream anecdote: a few days ago I was reading the Wiktionary entry for spannungsbogen - a good word for LWers to know, incidentally - and I was annoyed at the talk page's skepticism. So I went to the New York Times and looked it up, finding one article defining the term, and adding it to the talk page as an example from somewhere other than Dune.
A few hours later, I abruptly realize that I never actually did any of that inasmuch as I was sound asleep at the time. For the heck of it, I go to nytimes.com & run the search. I find one article defining the term, and add it to the talk page as an example from somewhere other than Dune...)
↑ comment by SilasBarta · 2009-08-09T03:38:45.174Z · LW(p) · GW(p)
I think you're doing what Eliezer_Yudkowsky just warned against: how long is that hypothesis?
"The brain 'decides' to erase dream memories in such a way that we lose ~80% of our short-term memory of them and all of the long-term memory of them that we don't deliberately try to remember, on the basis of sufficient Bayes-structure in the brain, and its having observed (over period ___?) the lack of entanglement between dream-based conclusions and the real world, and its prediction of sufficiently bad consequences ..."
Also, isn't that brain architecture a pretty narrow target for such a dumb process as evolution to hit? It's smart enough to exterminate dream conclusions (not to mention identify what is a dream, before you wake up) with extreme prejudice but not e.g. astrology?
FWIW, here's the theory I prefer and would defend:
"Dreams are random neuron firings" + "The feeling of dream coherency is the result of the brain's normal procedure for mapping sense data to best hypothesis."
↑ comment by orthonormal · 2009-08-09T17:37:39.779Z · LW(p) · GW(p)
That is indeed complicated, but that hypothesis isn't what Yvain was suggesting. The proposed adaptation is just that memories don't get stored as usual during REM sleep, which is a relatively simple thing for the brain to do. (Also, it's pretty clear that this actually happens.) It's then argued that this is a good adaptation for evolutionary reasons, because if we lacked it (and kept the rest of our tendencies of believing every conclusion we remember, context notwithstanding) we'd have some problems.
(E.g. an ex-girlfriend of mine who would stay angry at people who had been mean to her in her dreams, despite knowing that it had just been a dream.)
↑ comment by NancyLebovitz · 2012-12-02T17:29:25.657Z · LW(p) · GW(p)
(E.g. an ex-girlfriend of mine who would stay angry at people who had been mean to her in her dreams, despite knowing that it had just been a dream.)
I don't have that bad a case, but sometimes I notice an emotion from a dream continuing into the next day. I drop it when I realize it's from a dream -- but now I realize that I only do that when something reminds me of some detail from the dream and I understand the source of the emotion. Scary thought -- how much of my emotional life is literally dream-based?
↑ comment by gwern · 2012-12-02T19:37:53.157Z · LW(p) · GW(p)
I've noticed sometimes that I think I've done something or have something, but it was only in a dream. I worry that this will one day affect something important, rather than, like last week, my plan to have grilled cheese for lunch (I had already eaten all my cheddar cheese).
↑ comment by Douglas_Knight · 2009-08-09T05:00:14.363Z · LW(p) · GW(p)
Your theory is largely consistent with Yvain's theory. Maybe it's competing with the short-term part of his theory, but your theory simply doesn't address why dreams fade after waking up.
But I see his theory rather differently than you do: it is evolution, not the brain, that has made the observation. Of course, if dreaming is about rewriting long-term memory, then other effects on memory could be side-effects.
↑ comment by SilasBarta · 2009-08-09T16:35:22.655Z · LW(p) · GW(p)
Your theory is largely consistent with Yvain's theory.
Yvain says the brain eliminates dreams because of a noticed property of dreams as such. My theory (not really mine, just endorsing without remembering where I first read it, but I'll keep the terminology) says that the brain is just applying normal hypothesis update procedures, with no need to identify the category "dreams".
Maybe it's competing with the short-term part of his theory, but your theory simply doesn't address why dreams fade after waking up.
I think it does. To wake up is to be bombarded with overwhelming evidence that one's most recent inferences ("dreams") are false. So, whatever neural mechanism (synaptic strengths + firing patterns) represented these inferences is crowded out, if not replaced outright, by a radically different one.
You might say, "But when I change my mind after believing something stupid, I remember that I used to believe it." Sure, because that belief was there much longer and developed more inertia compared to a dream, and you "self-stimulated" that belief, which, lo, helps you remember dreams too.
"But when I briefly believe something stupid and then correct it, I remember it." Compare the set of all beliefs you've held for under twenty minutes, to the set of all your dreams. Do you think you remember a higher fraction of one than the other?
But I see his theory rather differently than you do: it is evolution, not the brain, that has made the observation.
I wasn't claiming Yvain left out the possibility of evolution doing the learning -- that's what I meant by "over period ___?" Was this entanglement noticed over the person's life, evolutionary history (the Baldwin effect), or what? But I didn't know how to say that any more clearly while staying concise.
Of course, if dreaming is about rewriting long-term memory, then other effects on memory could be side-effects.
True, and that would be a parsimonious way to handle the phenomenon of dreaming, but that wasn't Yvain's theory.
↑ comment by Jonathan_Graehl · 2009-08-09T05:11:41.730Z · LW(p) · GW(p)
That theory is too short.
comment by Bo102010 · 2009-08-08T23:05:32.169Z · LW(p) · GW(p)
Rationality in dreams, fun topic...
Since early adolescence, I've experienced episodes of "sleep paralysis" just before waking a few times per month. The experience differs between individuals, but most people dream/hallucinate waking up and being unable to move at least one part of their body. It can be very disturbing, especially if you can see all the normal things associated with waking up (like your alarm clock on the nightstand, your spouse next to you/talking to you, etc.).
When it first started happening to me regularly, each occurrence really freaked me out. I'd hallucinate waking up to storm winds breaking out my windows, but being unable to move, or being awake and trying to get up, but having my vision frozen in one spot. I would wake up sweating, breathing heavily, and very disturbed.
After several years, I've developed a sort of dream rationality, in which I "wake up" and experience some sort of paralysis (a lot of times I dream that my neck is forced into some terrible position), and then consider how likely the scenario is to be real before I get upset. I recall recently "waking up" to a burglar going through my closet, and being unable to move anything but my eyelids. I started to get a little excited, but then I considered: "How likely is it that a burglar silently defeated my deadbolt AND I spontaneously became paralyzed?" I considered this conjunction exceedingly improbable, so I sat back and let the scene play out, and a minute (probably not really) or so later I woke up for real.
If only I could apply this type of reasoning to dreams about sitting in high school classrooms with unfinished homework.
↑ comment by AndrewKemendo · 2009-08-09T01:00:51.571Z · LW(p) · GW(p)
I recall recently "waking up" to a burglar going through my closet, and being unable to move anything but my eyelids. I started to get a little excited, but then I considered: "How likely is it that a burglar silently defeated my deadbolt AND I spontaneously became paralyzed?"
Similarly enough, a few years back I was "woken up" in the same manner to the sound of breaking glass outside of my window. In the half-awake state, I worked through the probability that someone was actually breaking into my car outside my window, and decided that if this was indeed the case and I wasn't just dreaming, I would hear further evidence (a door closing, the car starting, etc.) and then take action. I did not, and so I went back to sleep.
My prior probability was low that my car would get broken into. The evidence should have updated it; however, I erred on the side of the prior and came to the conclusion that I was just dreaming.
Wrong. Sure enough, the next morning my car had a broken window, and my CD player and CDs were gone (car thieves like Tchaikovsky and Art of Noise, apparently). I am a much lighter sleeper now.
↑ comment by divia · 2009-08-10T04:19:17.479Z · LW(p) · GW(p)
I used similar reasoning during an episode of sleep paralysis about a week ago. My sleep paralysis episodes are always very similar: I hear someone calling out to me from the next room, but I can't respond because I'm paralyzed. I have them often enough that I usually realize what's going on. In this one, I heard my brother (who had been visiting earlier in the day, but who doesn't live with me) calling out to me from the other room. I knew I was experiencing sleep paralysis, but at first I tried desperately to wake myself up to go answer him anyway. Then I remembered that he probably wasn't there, and that hearing people call out to me who aren't there often happens when I have sleep paralysis. I ended up converting the experience into the longest lucid dream I've ever had, which I'd highly recommend if you can pull it off.
Amusingly enough, the experience almost came full circle, since near the end of my lucid dream I actually encountered my brother and my first thought was that I needed to let him know that it was just a dream so that he could be lucid too. It took me a good minute or two to realize the problem with that line of thought, and as it was I told him anyway.
As far as applying that reasoning to dreams about sitting in high school classrooms with unfinished homework, I think with enough practice it's entirely possible! I haven't fully mastered the art of doing so, but most of the lucid dreams I had as a kid, I had because really awful things were happening, and I'd trained myself to realize that it's pretty rare for real life to be as bad.
↑ comment by jimmy · 2009-08-14T22:48:13.067Z · LW(p) · GW(p)
I'd trained myself to realize that it's pretty rare for real life to be as bad.
That reminds me of one of those "You gotta see this!" type shows, where a motorcycle racer crashed at about 110mph and did a few flips in the air before coming down. In the interview, he said "I just remember thinking 'I hate these kinds of dreams.'"
I had a personal experience that was very scary during which I was questioning whether it was really happening or if I was just dreaming.
I kept on doing what needed to be done (and never really believed it wasn't happening), and the guy on (err.. off, I guess) the motorcycle didn't really have a chance to do anything, but it seems worth mentioning that you need to be pretty darn sure that you're dreaming before you decide to do something else.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-08T23:07:09.894Z · LW(p) · GW(p)
"How likely is it that a burglar silently defeated my deadbolt AND I spontaneously became paralyzed?"
Interesting! So you're explicitly evaluating priors in your dreams, then? That makes it more likely that it is indeed a matter of habit.
↑ comment by Scottbert · 2012-08-18T21:07:11.690Z · LW(p) · GW(p)
This was during sleep paralysis, not during dreaming. Perhaps the prior-evaluating inhibition is absent during sleep paralysis but not dreaming?
They are obviously related states, but from personal experience I have had a much easier time realizing what's going on when sleep-paralyzed (including recognizing that the voices and people I hear in the room with me almost certainly aren't actually there, because they weren't any of the other times this happened).
↑ comment by teageegeepea · 2009-08-09T21:43:09.945Z · LW(p) · GW(p)
I don't know how relevant it is here, but for those interested Mind Hacks had a recent post on sleep paralysis.
comment by [deleted] · 2009-08-10T01:59:24.442Z · LW(p) · GW(p)
It's funny, my dad is somewhat of a "crackpot" (I am actually glad to have been raised by one)... from my youth he always talked about how he has had experiences which lead him to believe he is not fully awake, and speculates that everyone else is not fully awake either. I don't take it entirely seriously, but as I've heard it over and over again, I've done a lot of thinking on the hypothesis... though no gathering of research, so it's probably of no use yet. Still, it's good to be reminded about it.
The other thing I want to say is that I am an extremely gullible person in my natural state. Even though I've been reading this blog for a bit over a year and trying to understand how one applies probability to life, this has been one of the more useful entries. The idea of calculating priors makes much more sense to me - and I think my failure to consistently do so is why I have been so gullible! Post-prior calculation, I seem to be a pretty decent reasoner (like you are in dreams, haha). So I just want to say thanks for bringing this up; it really helped me understand, and will help me explain to other people. I also hope that a lot more posts are done on helping calculate priors.
comment by SilasBarta · 2009-08-10T00:01:58.631Z · LW(p) · GW(p)
Sorry, the snark potential here is too high, so I need to get these off my chest.
I dreamed that the Wall Street Journal had published an article about "Eliezer Yudkowsky", but it wasn't me, it was a different "Eliezer Yudkowsky", and in the dream I wondered if I needed to write a letter to clarify this.
Um, the advantage of your name -- and mine -- is that you pretty much never have to worry about this happening. Low prior indeed!
the dream couldn't be evidence because ... only actual sensory impressions of Google results could form the base of a legitimate chain of inferences.
Yeah, I can see why you're worried people might quote you without permission! I mean, I thought I'd seen the worst Google fanboys, but never before did I see anyone claim that Google was the genesis of all valid inferences!
comment by SoullessAutomaton · 2009-08-08T23:11:32.969Z · LW(p) · GW(p)
I've often suspected that dreaming consists mostly of the brain's "confabulation" module being fed semi-random noise. Thus, the dreaming brain would be able to look for explanations and arguments, but not generate new hypotheses.
Under this hypothesis, I suppose it actually speaks well for Eliezer that his rationalizing mind prefers to throw its hands up and opt for indecisive confusion rather than accept a silly explanation.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-08T23:35:39.420Z · LW(p) · GW(p)
Hm... I hadn't thought to track the degree to which my dreams involve silly explanations for ridiculous events, as opposed to just acceptance of the events themselves.
Note also that this is an application of Occam's Imaginary Razor.
comment by RolfAndreassen · 2009-08-09T22:50:43.444Z · LW(p) · GW(p)
But with a damaged intuition for the prior probability, my mind didn't successfully check itself, or even notice a problem - didn't get as far as asking "But what is the prior probability?"
Ok, but in this case it seems that you took the correct action - don't worry about it - and so it is not clear if this is a valid criticism. Even in terms of application of scarce brainpower, it seems to me that dismissing further worry by means of "low prior probability" is about the same amount of work as dismissing by the likelihood ratio, so once your dreaming mind has started considering the problem, there is no course more optimal than dismissing it and the path taken is not relevant. No?
comment by JulianMorrison · 2009-08-09T22:42:02.594Z · LW(p) · GW(p)
I was just thinking "they already do apply the 'too implausible' heuristic to space aliens and bigfoot", but considering further, no they don't. They apply the silliness heuristic. Things with p = tear up all of physics (ghosts) get bundled with p = an unusual animal good at hiding (Loch Ness Monster) and things with p = near certainty unless civilization destroys itself (AGI at more than human level). Those things are silly, dismiss them.
I suspect this applies to everyone, including you. Even you don't routinely use prior calculations as a heuristic; you dismiss things as silly. The difference being that you try to calibrate your sense of silliness. Sleeping switches it off, leaving even you defenseless.
Also explains how religious people are consistently almost-atheist. The other religions have silly characteristics (how many arms?!). Ours is pre-defined as sensible, so we never get as far as searching it for silly characteristics, indeed it would be silly to try.
↑ comment by [deleted] · 2009-08-11T04:55:52.126Z · LW(p) · GW(p)
things with p = near certainty unless civilization destroys itself (AGI at more than human level)
I have to object to this - you're applying the awesomeness heuristic, in which things that would be awesome are judged to have higher probabilities.
It would be awesome if axions existed (at this point, generally speaking, anything outside the Standard Model would be awesome), especially if axions allowed us to construct an Epic Space Drive, so it's nearly certain that axions exist!
More mundanely, it would be awesome if we could spew carbon dioxide into the atmosphere without deadly serious consequences (it would!), so it's nearly certain that global warming is nonsense!
The unspoken assumption behind your claim is: "It would be awesome if superhuman artificial general intelligence was possible (it would!), and the universe is awesome (it is!), therefore the possibility is nearly certain."
I suppose there are other names for the awesomeness heuristic, but the phrase "awesomeness heuristic" is itself awesome, QED.
We don't know whether superhuman intelligence is possible. Perhaps it isn't. Perhaps, just as anything that can be computed, can be computed by a Turing machine with enough time and memory, anything that can be understood, can be understood by a human with enough time and scratch paper. Arguments that human-level artificial general intelligence is impossible are clearly nonsense ("Data is awesome" is true but not relevant; the existence of human-level humans is an applicable counterexample), but the same can't be said for superhuman-level.
Or, more disturbingly, perhaps it's possible, but we're too stupid to figure it out, even in a billion years. Ignoring evolution, cats aren't going to figure out how to prove Fermat's Last Theorem, after all.
I personally believe that superhuman artificial general intelligence is probably possible, and that we'll probably figure it out (so we should figure out ahead of time how to make sure that AI isn't the last thing we ever figure out), but I certainly wouldn't say that it's "nearly certain". That's a very strong claim to make.
↑ comment by JulianMorrison · 2009-08-11T08:58:05.242Z · LW(p) · GW(p)
I don't call it near certainty because it would be awesome. I call it near certainty because it would be implausible given what we know for mildly superhuman intelligence to be impossible, and even implausible for it to be supremely hard, and because we are on the technological cusp of being able to brute-force a duplicate (via uploads) that could easily be made mildly superhuman, even as we are also on the scientific cusp of being able to understand how the internal design of the brain works.
Hugely superhuman intelligence is of course another ball of wax, but even that I would rate as "hard to argue against".
comment by SilasBarta · 2009-08-09T16:59:42.274Z · LW(p) · GW(p)
One spooky experience for me was when a dream had a high prior probability, and was backed up afterward by unlikely supporting evidence.
My family was living in a rental house for a few months before moving to a new home (I was about 9 at the time). I had noticed an uncomfortably large number of scary-looking pests around the house.
Then I had a dream that I had woken up in my bed (likely) and saw a little dark mouse (likely) at the doorway, which then charged toward my bed, jumped up on it (not likely, but I didn't realize mouse limitations then), ran over to the other side, lingered by the pillow for a second, and then jumped off and ran away.
I told my parents the next morning that "It's getting really bad because now I'm seeing mice." And they didn't believe me. So I said, no, it was there, it must have left evidence. So we went back to the scene and looked in the side of the pillow where it had lingered. There was a small opening to the pillow's feather-like down, and right at the opening, there was a black strand of down, while every other piece was white. The black piece looked like it could pass as having been some animal's fur.
In retrospect, the black piece was probably not fur anyway, and the mouse probably wasn't there (again, can't jump that high), but that was pretty freaky.
comment by Gavin · 2009-08-09T02:54:07.508Z · LW(p) · GW(p)
Why are you so concerned with being able to maintain rationality even in altered or brain-damaged states? This has come up in different contexts in several recent posts.
It seems like an unnecessary bit of rationalist machismo. It's enough of a challenge to create rational, intelligent work when at normal functionality, optimizing for ability to withstand brainwashing or brain damage doesn't seem to me to be an important priority.
Does it have something to do with wondering whether you are now in a diminished state, simply due to humanity's inherent biases and frailties?
↑ comment by anonym · 2009-08-09T17:14:38.665Z · LW(p) · GW(p)
Practicing in dreams and other altered states is like practicing with a self-imposed handicap. Just as a tennis player might practice under conditions and self-imposed constraints so severe that they would never come up in a real game, a rationalist might seek to hone their skills in unrealistically harsh environments.
If one could notice and counteract cognitive biases, accurately assess priors and calculate probabilities, etc. -- all while dreaming, drunk, tripping on LSD, sick with a 104 F fever, etc. (not that I'm recommending all those) -- then those skills would have become second nature and effortless in less severe environments. Just as a tennis player who practices playing 7-set matches at heat-wave temperatures against multiple practice partners is much better prepared to play a 5-set match at a less severe temperature against a single player.
↑ comment by [deleted] · 2009-08-10T01:49:58.366Z · LW(p) · GW(p)
Heh anonym, it happens to be the case that I myself attempted to do so after ingesting psilocybin. I think I did admirably... (though I'm not actually a full Bayesian [or even a frequentist] about my rationality; working on it...). THC or alcohol seems all about mustering up the strength to do it, which I often fail to do. Of course, now that I've been there and done that, to be safe I won't do it much more in my life.
↑ comment by wedrifid · 2009-08-28T18:59:56.397Z · LW(p) · GW(p)
Heh anonym, it happens to be the case that I myself attempted to do so after ingesting psilocybin.
Interesting! I've been wondering how that would play out. Any noticeable after effects or particular challenges to rational thinking worth noting?
↑ comment by [deleted] · 2009-09-20T21:03:27.670Z · LW(p) · GW(p)
It's an interesting experience, and I would say that it is of at least some interest to the rationalist, given that I don't think it has been shown to have negative effects on the brain or anything like that. A while back, some people in this community were pondering whether it "altered the personality forever," which I haven't seen be the case for any of my companions; so unless that came from scientific studies rather than anecdote, I think that isn't a concern for the rationalist either.
I have some audio tapes of the conversations we had during it, but I haven't listened to them yet. I need to because my perception of what happened is probably a bit off from what actually happened. My companions put pretty much all their trust in me during the trip because I seemed to be making the best decisions (even though I was the n00b in the group). I never lost the knowledge that I was on a drug, though I hear that happens, and that would be quite a challenge to the rationalist indeed!
One interesting thing was that we would have to plan to do things several times. A lot of conversations had to happen over and over again because whenever you change your relationship with other objects in the room it can be so distracting that you forget what your goal was. Super Akrasia to the tenth. I guess similar things happen with THC, but I don't think it's for the same reasons.
Another interesting thing is that the idea of play-acting seemed to be really potent. Having been named rational, the impulse to merely ACT rational (rather than be rational) was nigh-overwhelming. In-group feelings were really strong, my companions had WAY more xenophobia than I experienced (I could remember how little people tend to care about what other people are doing, whereas my friends seemed to think every person was a secret police officer or something). I never hallucinated anything... but it is interesting to operate with a general skepticism of everything you think or see.
Different levels of the drug will cause widely different things. I was doing about the average dose. My friends were probably at a place where I'm not sure rationality was possible... that can happen and it's interesting to think about (I never want to be there though).
↑ comment by wedrifid · 2009-09-21T07:29:30.964Z · LW(p) · GW(p)
A while back, some people in this community were pondering whether it "altered the personality forever," which I haven't seen be the case for any of my companions; so unless that came from scientific studies rather than anecdote, I think that isn't a concern for the rationalist either.
Studies I have seen reported a positive and long-term alteration of mood.
↑ comment by achiral · 2012-11-25T06:55:16.304Z · LW(p) · GW(p)
Indeed they do. However, the dose they use in the psilocybin research equates to a much greater dose of mushrooms than the "average" dose (I'll assume 3.5 grams of dry P. cubensis) goldfishlaser speaks of.
The whole point with psychedelic drugs is that one must take a high, overwhelming dose in order to experience the full gamut of experiential states possible.
I have an excellent cognitive psychology book published by OUP called The Antipodes of the Mind:
http://www.amazon.com/Antipodes-Mind-Phenomenology-Ayahuasca-Experience/dp/0199252939
The book takes an empirical, phenomenological approach. The author has gathered data from around 2,500 experiences with the plant-based tea Ayahuasca (in effects it is rather like mushrooms, yet typically stronger). He himself has taken the brew well over 100 times. This data is then analyzed in various ways: semantic content of visions, progression and stages of the experience, structural topology of visions, and so on. Please take a moment to browse the table of contents in the Amazon online book preview to get a feel for both the academic seriousness of this book as well as its quite fascinating contents. Best of all, the author, Benny Shanon, includes numerical tables and a whole appendix devoted to explaining his research methodology.
↑ comment by Bo102010 · 2009-08-09T17:25:22.873Z · LW(p) · GW(p)
I was going to joke that the next practice session is going to be getting really drunk and then going to a casino to count cards.
↑ comment by teageegeepea · 2009-08-09T21:44:09.100Z · LW(p) · GW(p)
Actually, dart players who normally play/practice drunk perform better when intoxicated than sober. It won't help your game if you normally play sober to start playing drunk though.
↑ comment by anonym · 2009-08-09T17:35:55.533Z · LW(p) · GW(p)
Yeah, that would be taking it too far ;-).
I'm not advocating putting yourself into net-loss situations like intentionally getting drunk to practice, but if you happen to find yourself in a state (like in a dream or having a high fever) that presents special challenges and there are no negative consequences to practicing rationality, I think it would be good to do so. When it's easy to do in a dream or with a fever, it will be effortless in real life under "normal" circumstances.
↑ comment by JulianMorrison · 2009-08-09T22:46:02.223Z · LW(p) · GW(p)
Waking states of healthy normal humans are brain damaged (relative to an ideal reimplementation not slapped together hodge-podge by iteratively mutating a monkey). And yet we must be rational.
↑ comment by Kaj_Sotala · 2009-08-09T15:27:13.772Z · LW(p) · GW(p)
Why are you so concerned with being able to maintain rationality even in altered or brain-damaged states? This has come up in different contexts in several recent posts.
Maybe because one never knows if their brain is going to become damaged, or whether it is in fact damaged right now. (That would be my explanation, I don't know if it's the same as Eliezer's.)
↑ comment by [deleted] · 2009-08-11T05:10:06.090Z · LW(p) · GW(p)
My father, who has brain cancer, said after coming out of surgery that he felt he was at about 75% of capacity. It's funny, as far as deadly serious cancer goes, that immediately before he said that, I thought to myself - but didn't say aloud - that he was at about 70%, compared to 10% before going into surgery.
Anecdotal evidence is anecdotal, but not all brain damage renders one unaware of its presence.
comment by gwern · 2009-08-09T01:12:20.931Z · LW(p) · GW(p)
Conversely, it could just be a matter of habits playing out in my dreaming self; that I habitually pay more attention to arguments than priors, or habitually evaluate arguments deliberately but priors intuitively.
From what I've read of lucid dreaming, one of the standard techniques is to periodically & habitually do 'dream checks' in the hopes that while asleep one will out of habit do a dream check, realize it's a dream, and then do whatever. This seems like a fairly complex mental argument, and similar to what you did.
↑ comment by anonym · 2009-08-09T02:00:54.014Z · LW(p) · GW(p)
Reality testing. I've noticed text changing in dreams as mentioned in that technique.
comment by A1987dM (army1987) · 2012-08-13T23:07:52.147Z · LW(p) · GW(p)
Well, the probability that I would dream about a WSJ article like that, given that a WSJ article like that actually exists in this morning's paper, is the same as the probability that I would have such a dream, given that no such article is in this morning's paper. So by Bayes's Theorem, the dream isn't evidence one way or the other.
Very weak evidence, but not quite zero evidence, I'd say. Maybe there was something you saw/heard subliminally but didn't consciously notice...
comment by CAE_Jones · 2012-11-07T04:39:42.548Z · LW(p) · GW(p)
Over this past summer, I decided to do some tests with my dreams. Overall this has gone nowhere (it doesn't help that I've had difficulty keeping myself motivated enough to keep it going for long). I didn't put it in Bayesian terms at the time (despite having been reading LessWrong), but ignoring weak priors is one of the biggest rationality fails I've experienced in dreams (I've documented quite a few incidents).
There have been some cases where I have done something resembling updating dream priors, specifically as it relates to elevators: there was a period from 2004-2006 in which elevators in dreams would frequently behave in an unrealistic manner (traveling more floors than would make sense, usually down, or having the door or floor become structurally unsound). It got to the point that I actively avoided elevators in dreams if I was interested in the situation I was experiencing.
I've sort of half-tried for lucid dreaming, and consequently have only been sort of half-successful. I've found that anxiety almost always outweighs desire when it comes to influencing the experience. I did try a little test where I would try to will the time on a display to be a particular number, then check to see if it worked. I tried this several times, but by the time I remembered to write down my results, I only remembered two of the trials (though I do remember them being pretty representative of the rest). I was able to get the time to appear close to the target (no more than five minutes off, and usually much closer).
I'd like to do an experiment involving my vision, but that's proven much more difficult than I'd like. My left eye has been nonfunctional since birth; my right eye started scarring around age 3, stabilized at "I could read subtitles if you pause the video, give me inch-thick glasses, and let me sit close to the screen for a couple minutes," then dropped rapidly toward useless after I visited Las Vegas at age 14 (there are several factors that probably came to a head with that incident). Vision in my dreams took much longer to decline, but gradually got to the point where clear visuals are a novelty even while asleep.
↑ comment by CAE_Jones · 2012-11-07T13:12:22.979Z · LW(p) · GW(p)
Update: Finding this article got me to try again, and while overall it was a vague mess (though I attempted some more number experiments involving other people/NPCs), things got interesting near the end. I did have limited success working with visuals, and within moments of noticing this, observed a dog run up a slide and consciously decided that the probability of this having happened in reality and not having been made up just for the dream was pretty low. It's worth noting that I was fully aware it was a dream by that point.
↑ comment by CAE_Jones · 2012-12-02T08:29:06.355Z · LW(p) · GW(p)
I should probably have mentioned what happened the day after my previous comment on this post, but was worried I was getting annoying.
I decided to test the teleportation method I used in dreams while awake, fully realizing that this was a very silly idea.
At first, I selected a destination that would be distinct enough from where I was at the time. With the target a mobile home, and me in a place with a solid foundation, all I had to do was take one step to be convinced it hadn't worked. I then decided to try a destination more similar to my actual location. Such a destination quickly came to mind, and I moved to the place I considered most similar (the hallway). Though I couldn't help but go over in my head all of the little details that gave away how different the two locations were in spite of this: acoustics, differences in the rooms that would be clear as soon as I left the hallway, and especially the ambient odors.
Thoroughly primed to expect nothing, I stepped out of the hallway... and experienced genuine surprise at where I was. It seems that, even though I was intellectually focusing on everything that should have kept me from confusing the two places, the part of my brain aware of the setting had been convinced I was already at the destination! I've actually tried self-deception regarding setting in the past (more so around ages 12-13), without any success; that tearing the idea apart in detail somehow actually made it work was surprising, and something I kinda wish I could design experiments for and actually find some use for.
Improving my updating on priors regarding setting in dreams has been less exciting. I'll need to try and remember to ask myself about the setting and how I got there if ever I find myself wondering.
comment by SforSingularity · 2009-08-13T19:27:39.044Z · LW(p) · GW(p)
They don't "wake up completely" and realize that, in the absence of evidence, the whole thing has a prior probability too low to deserve specific attention.
"a prior probability too low to deserve specific attention" is an advanced argument, and it is not a generally accepted or well-known principle of reasoning, and it is my impression that humans rarely use it.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-13T19:45:05.867Z · LW(p) · GW(p)
As an explicit principle, yes, it is rarely known.
As an informal principle, most people don't spend a lot of time wondering, after they fully wake up, whether the pink elephant might really be hidden inside their pillow.
↑ comment by SforSingularity · 2009-08-13T20:14:50.483Z · LW(p) · GW(p)
most people don't spend a lot of time wondering, after they fully wake up, whether the pink elephant might really be hidden inside their pillow.
Yes, most people have a "that's clearly ridiculous" filter that emulates the work that a complexity prior should do. It seems to work much like the way modern antivirus works - there's a blacklist of "ridiculous" ideas that you reject out of hand, and popular culture acts like the "update virus database" function. Simple hypotheses like evolution often end up on this blacklist, and horrifically complex ones like the Abrahamic God often end up off it.
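To caricature the difference in code (a toy sketch; the function names and the bit-length stand-in for hypothesis complexity are invented for illustration, not taken from any actual system):

    # Toy contrast: a cached "silliness" blacklist vs. the complexity prior
    # it is supposed to emulate. Everything here is illustrative.

    silly_blacklist = {"ghosts", "astrology", "loch ness monster"}

    def dismiss_by_blacklist(idea: str) -> bool:
        # How the filter actually works: reject anything on the cached list,
        # wave through anything that isn't, regardless of complexity.
        return idea.lower() in silly_blacklist

    def complexity_prior(description_length_bits: int) -> float:
        # What the filter is emulating: P(H) ~ 2^-K(H), so every extra bit
        # of burdensome detail halves the prior probability.
        return 2.0 ** -description_length_bits

    # The failure mode: a simple hypothesis can sit on the blacklist while a
    # far more complex one stays off it, even though the prior says the
    # opposite - e.g. complexity_prior(100) is vastly smaller than
    # complexity_prior(20).
    assert complexity_prior(100) < complexity_prior(20)

The lookup is cheap and mostly tracks the prior, but the two come apart exactly in the cases the comment above describes.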
↑ comment by Vladimir_Nesov · 2009-08-13T20:20:41.911Z · LW(p) · GW(p)
Related concepts: Absurdity heuristic, Antiprediction.
comment by [deleted] · 2009-08-09T15:38:06.105Z · LW(p) · GW(p)
It's dangerous to take any lesson from your own dreams, except a general warning that the mind plays tricks.
My dreams are disjointed, incoherent, frustrating sequences of events that could never happen in real life. For example, wandering through an indoor park full of acoustic shells and bumping into a killer whale, walking on its tail.
The closest thing to what you describe was a dream in which I kept "waking up" just enough that, within the dream, I collapsed with fatigue and teleported into a bed (not my own).
Despite my actual dreams being bizarre, I've several times had false memories of banal dreams in the form of déjà vu.
comment by Douglas_Knight · 2009-08-09T05:12:18.404Z · LW(p) · GW(p)
dreamers vs theists:
Without an explanation for the origin of a hypothesis, its mere existence is evidence. If the dreamer believes dreams are random, that's enough. The theist needs a psychological or sociological explanation for the belief. (But the theist acknowledging lack of evidence should at least stop favoring the birthright religion and go the Unitarian route.)
↑ comment by [deleted] · 2009-08-10T02:15:36.393Z · LW(p) · GW(p)
Why is the existence of a hypothesis evidence of its existence? Isn't that exactly what we're talking about- the probability of priors?
For me to accept that the existence of a hypothesis is evidence of its truth, I would have to believe that: 1) our brains are naturally rational abstract thinkers (which can be counter-proved by several examples from the book Blink by Gladwell, though outside of these examples it is a tad pop-sci, so I don't actually recommend it and didn't finish it), and 2) I'd have to unbelieve the knowledge that humans anthropomorphize the universe.
Perhaps you meant that this psychological or sociological explanation, being absent for many believers, is why the hypothesis is adequate evidence (I see now that that is a valid interpretation of your comment). If so, okay, I get it, and you have a good point there.
So, theists are handicapped in a very similar way to the half-asleep, even though half-asleep isn't an accurate description of their condition. Instead, both the half-asleep and the theist need the ability to better assess priors.
↑ comment by Douglas_Knight · 2009-08-10T02:55:47.586Z · LW(p) · GW(p)
Why is the existence of a hypothesis evidence of its [truth]?
The information in the complex hypothesis had to come from somewhere.
But it's mainly evidence for the particular religion against similar non-existent religions.
Yes, the psychological explanation is that we're biased to see minds too much. You might say that our brains have the wrong priors, systematically weighting minds. But that merely predicts that everyone makes up separate beliefs about gods, which might not be so far from animism. An organized religion in which large numbers of people have the same catechism means that a particular belief has outcompeted other beliefs. You have to explain what competition it won, why it spread. Showing that what people usually call evidence is not evidence doesn't show that it spread for non-epistemic reasons. But it certainly decreases the chance.
↑ comment by [deleted] · 2009-08-10T03:58:10.656Z · LW(p) · GW(p)
It has been my experience in organized religion that extremely few have the same catechism. In rare cases where people agree down to the details, it is rarely by logical process... it is emotional agreement from some sort of need to agree.
edit: I realized my message was unclear. What I mean is, people disagree with, or hold a nonbelief in, many of the catechisms presented by most religions - but just don't talk about it because of the desire for community.
comment by Vladimir_Nesov · 2009-08-09T00:01:31.363Z · LW(p) · GW(p)
The next level's feat must be the ability to maintain sane thinking in one's sleep.
↑ comment by MichaelVassar · 2009-08-09T05:24:44.967Z · LW(p) · GW(p)
I'd prefer that he take skill focus in "Friendly AI", or possibly "Craft FAI", whichever works. For me, "Negotiator" seems a better choice.
↑ comment by UnholySmoke · 2009-08-09T21:25:09.875Z · LW(p) · GW(p)
But where's the fun in that?
Also, I find that my rationalist-module and my weird-stuff-generation-module are very interlinked. If I start running dream experiences through logic, weird stuff tends to stop happening. In fact, I generally wake up. Now, training yourself to become lucid but maintain the brain's penchant for throwing nonsense at you; that would be pretty cool.
↑ comment by infotropism · 2009-08-10T20:57:33.404Z · LW(p) · GW(p)
So yes, you'd likely lose the fun of normal dreaming - experiencing weird stuff, letting the insane flow of your dreams carry you like a leaf on a mad wind and not even feeling confused by it, but rather feeling like it was plain normal and totally making sense, having lots of warm fuzzy feelings and partway formed thoughts about your experiences in that dream.
Yet you might on the other hand gain the fun of being able to, for instance, capitalize on your dreaming time to learn and do some thinking. Not to mention the pleasure and sense of security derived from knowing your rational mind can work even under (some) adverse conditions.
↑ comment by UnholySmoke · 2009-08-11T22:17:59.330Z · LW(p) · GW(p)
Good points, but:
I have plenty of time to think and don't feel like I'm in any rush. Sleeping is time off.
I've been pleased by my mind's acuity under some very adverse conditions indeed. And terrified by its fragility on other occasions (forest rave a couple of months back, not pretty). But dreams are too far removed from reality to be much use in training myself. Like I said - as soon as I become lucid (or at least aware that I'm dreaming) things stop being interesting.