Comments
I'll be there.
"Why does reality exist?"
I think the problem with this question is the use of the word "why." It is generally either a quest for intentionality (e.g., "Why did you do that?") or for earlier steps in a causal chain (e.g., "Why is the sky blue?"). So the only type of answer that could properly answer this question is one that introduced a first cause (which is, of course, a concept rife with problems) or one that supposed intentionality in the universe (like, the universe decided to exist as it is, or something equally nonsensical). This is probably (part of) why answering this question with the non-explanation "God did it" feels so satisfactory to some--it supposes intentionality and creates a first cause. It makes you feel sated without ever having explained anything. But the question was a wrong one in the first place, because any answer would necessarily lead to another question, since the crux of the question is that of a causal chain.
I think a better question would be "How does reality exist?" as that seems a lot more likely to be answerable.
I can feel pain in dreams. I'm not sure if I can self-inflict pain in dreams (I've never tried), but I've definitely felt pain in dreams.
I'm not sure if I've experienced sleep paralysis before, but I've had experiences very similar to it. I will "wake up" from a dream without actually waking up. So I will know that I'm in bed, my mind will feel conscious, but my eyes will be closed and I'll be unable to move. Usually I try to roll around to wake myself up, or to make noise so someone else will notice and wake me up. But it doesn't work, 'cause I can't move or make noise, even though it feels like I am doing those things (and yet I'm aware that I'm not, because I can feel the lack of physical sensation and auditory perceptions). But when I actually wake up and can move, it feels like waking up, rather than just not being paralyzed any more. And sometimes when I'm in that "think I'm awake and can't move" state, I imagine my environment being different than it actually is. Like, I might think I'm awake and in my own bed, and then when I wake up for real, I realize I'm at someone else's place. Which makes me think I wasn't actually awake when I felt like I was. But it feels awfully similar to sleep paralysis, so I'm not sure if it is sleep paralysis or just something very similar.
They usually recommend things like seeing if light switches work normally
Do other people have the same problem I do, then? When I'm dreaming, I often find that it's dark and that light switches don't work. I'm always thinking that the light bulbs are burnt out. It's so frustrating, 'cause I just want to turn on the light and it's like I never can in a dream.
When I dream about being underwater, I can breathe in the dream, but I also am under the impression that I'm holding my breath somehow, even though I'm breathing. Like, I'll "hold my breath," only I've just made the mental note to do it and not actually done it. But it won't be clear to me in the dream whether or not I'm holding my breath, even though I'm aware that I'm still breathing. It's weird and contradictory, but dreams are capable of being like that. It's like how in a dream, you can see someone and know who they're supposed to be, even though they may look and act nothing like the person they supposedly are. Or how you can be in both the first and third person perspective at the same time.
That's exactly my method too.
Those who dream do not know they dream, but when you wake you know you are awake.
I actually use this fact to enable lucid dreaming. When I'm dreaming, I ask myself, "am I dreaming?" And then I answer yes, without any further consideration, as I've realized that the answer is always yes. Because when I'm awake, I don't ask that question, because there's never any doubt to begin with. So when I'm dreaming and I find myself unsure of whether or not I'm dreaming, I therefore know that I'm dreaming, simply because the doubt and confusion exists. It's a method that's a lot simpler (and more accurate) than trying to analyze the contents of the dream to see if it seems real.
That may be true, but you've given no evidence to support your claim. Can you give some examples?
Well, that really depends on what the decision is and what the circumstantial factors are. As I said in my last comment, decisions are made by a combination of emotion and reason. Emotions tell you where you want to go, and reason tells you how to get there. Whether or not a decision is reasonable depends on (1) was it an effective (and efficient, though that's somewhat less important) way of achieving your goal? Did it actually produce the outcome desired by your emotions? And (2) was it consistent with reality and the facts? Was the decision based on accurate information?
Taking the example you gave, of a family member being hurt by someone else in an accident, your emotions in reaction to this event are likely to be very charged. You just lost someone that was important to you, and you're bound to feel hurt. It's also very common to feel angry and to want revenge on (or justice for) the person that was responsible. It's not clear to me why the human default is to assign guilt without evaluating the situation first to see whether or not the person actually is guilty, but that does seem to be the common response. In this case, it would be up to a jury to decide whether this constituted manslaughter. It's most probable that the jury, having no vested interests besides ensuring justice, would be able to come to the most rational conclusion.
That said, if you are being truly rational about it and if your emotions are telling you your goal is to find out who (if anyone) was responsible, then your conclusion should be no different from the jury's. Of course, most people do allow their emotions to bias them, and aren't rational (thus the need for the jury). But if you are being rational about it, and your goal truly is about discovering the guilt or innocence of the parties involved, then how you feel about the situation is what motivates your search, and reason and evidence should be what determine your answer. If you really don't have enough evidence, and the evidence you do have doesn't point more in one direction than the other, then yes, the rational conclusion would be simply to admit that you don't know.
One should be careful to inspect what exactly that emotional motivation actually is--whether it's to determine guilt or innocence and learn the truth about the situation, or to find someone to blame so that you can feel better about it. (Although, how it would make you feel better to condemn a potentially innocent person when it will do nothing to bring back your family member nor help anyone else is a mystery to me. Alas, human beings have a lot of nonsensical intuitions.)
That said, if you're honest about your intentions, and what you really want is to blame someone else, and not to find the truth, and the possibility of blaming someone innocent isn't inconsistent with other explicit or implicit pro-social goals of yours, then to point the finger without basing your conclusion on the examination of the evidence isn't strictly irrational, since it would be consistent with your goals, to which the facts aren't relevant. However, that sort of approach would be pretty anti-social, and I doubt anyone having that goal would be honest enough to admit it. If your stated goal is to find the truth, then the only honest thing to do is look at the evidence, follow it, and be prepared that it might go either way.
It does no good to write in the bottom line before you start if your goal is to find out the truth. You won't arrive at the truth that way, and if your emotions tell you the truth is what you want, then that behavior would be irrational. In the words of Eliezer Yudkowsky, "Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts." I strongly recommend you read Eliezer's posts The Bottom Line as well as Rationalization, as they address the issue you seem to be struggling with.
It doesn't depend on N if N is consistent between options A and B, but it would if they were different. It would make for an odd hypothetical scenario, but I was just saying that it's not made completely explicit.
I'm making a point about human psychology. The value of a life obviously does not change.
Although, I suppose theoretically, if the concern is not over individual lives, but over the survival of the species as a whole, and there are only 500 people to be saved, then picking the 400 option would make sense.
I think that rephrasing improves it.
Or, for all we know, there are only 400 lives to be saved in the first instance. Saving 400 out of 400 is different than saving 400 out of 7 billion. The context of the proposition makes a difference, and it's always best to be clear and unambiguous in the parameters which will necessarily guide one's decision as to which choice is the best.
- Save 400 lives, with certainty.
- Save 500 lives, 90% probability; save no lives, 10% probability.
I think it ought to be made explicit in the first scenario that 100 lives are being lost with certainty, because it's not necessarily implied by the proposition. I know a lot of people inferred it, but the hypothetical situation never stated it was 400/500, so it could just as easily be 400/400, in which case choosing it would certainly be preferable to the second option. I think it's important you make your hypothetical situations clear and unambiguous. Besides, a 100% probability of 100 deaths explicitly stated will influence the way people perceive the question. If you leave out the fact that 100 people are dying, you're subtly encouraging your readers to forget about those people, so it comes as little surprise that some would prefer option 1.
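For what it's worth, the two options differ in expected value as well as in framing. Here's a minimal sketch of the comparison (the 400/500 figures and probabilities come from the scenario above; the helper function and names are my own):

```python
def expected_saved(outcomes):
    """Expected number of lives saved, given (lives, probability) pairs."""
    return sum(lives * p for lives, p in outcomes)

option_a = [(400, 1.0)]            # save 400 lives with certainty
option_b = [(500, 0.9), (0, 0.1)]  # save 500 lives, 90% probability

# Option A has an expected value of 400 lives; option B, 450.
print(expected_saved(option_a))
print(expected_saved(option_b))
```

So a pure expected-value reasoner prefers option B either way; the 400/400 vs. 400/500 ambiguity matters to anyone who also cares about how many die, not just how many are saved.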
Rationality doesn't require that you not feel the emotions, it just requires that you avoid letting them bias you towards one conclusion over another. You should follow the evidence to determine the level of guilt of the perpetrator. There is no causal link from how you feel about the event to how it actually happened. I'd have to say that in terms of interpreting the event, there is no room to "agree to disagree" if all the facts are understood and agreed upon. Certainly there's room to feel differently about it based on your own relative situation, but it has no bearing on the interpretation of the event.
The way I see it, emotions and reason serve two complementary functions. Emotions tell you your goal, what you want. Reason tells you how to get there. Your emotions may say "Go south!" and reason may say "There is an obstacle in my path. In order to reach my destination, I need to first make a detour." If you allow your emotions to override your sense of reason, you'll try to go south and plow straight into the wall, and that lack of reason will hinder your ability to achieve your desired ends. If you think that reason is the way and emotions are the enemy and thus undervalue your emotions, you'll wander around aimlessly, as you'll have no sense of where it is you actually want to go. If one were truly Spock-like, and bad events failed to result in negative affect, there would be no reason to think of them as bad and thus no logical reason to avoid them. (Here you could argue that, well, if it affects other people negatively, that would be reason. But in that case you're assuming empathy--that when bad things happen to others, it makes you feel bad, which is an emotional response.)
To some extent this is true. Strong emotions do have the power to shut down activity in the executive centers of the brain. There's a physiological basis for the idea of "seeing red" when you're angry. However, you can also train yourself to stop your emotional reactions in their tracks, think about them, and change them. You can choose not to be angry, but you likely need education and training to do so, and you may not be successful 100% of the time. But you can certainly improve dramatically.
It does benefit you to feel sad because your brother died, though not exactly directly. The reason you feel sad is because you were attached to him. You would not feel sad if he were a random, nameless (to you) stranger. Having that attachment is beneficial, even if the consequent emotion is not. But the two are inextricably tied together, and the prospect of sadness at the loss is part of what keeps you wanting to look after each other.
The question of rationality in emotions is better considered in the framework of Rational Emotive Behavior Therapy. An emotion is irrational if it results from an irrational belief--one that is dogmatic, rigid, inflexible. When you recognize this and replace the irrational belief with a rational one, the irrational emotion tends to be replaced by a rational one.
In the example of the goblin, the anger is not a direct result of the goblin tying your shoes together, but of your beliefs about the goblin tying your shoes together. Common anger-inciting beliefs are "he shouldn't have done that" or "I can't stand that he did that." But why shouldn't he have done that? Is there some law stating goblins can't tie shoes together that was violated? Can you not stand that? Will you expire on the spot if that happens? No, what you really ought to realize is that "it's unfortunate and inconvenient that the goblin tied my shoes together." And when you think that thought, the anger typically turns into mild irritation or disappointment.
In the case of losing a brother, being sad and mourning is a normal, natural, and healthy response. If you went around thinking "I can't live without him" or "I can't stand that he died" you're going to upset yourself irrationally and likely end up unduly depressed. If you replace those thoughts with "It's very sad that my brother died, but I can tolerate it and life will still go on" you will likely be sad and mournful, but then move on with your life as most people do when they lose loved ones.
Yes. This is called Rational Emotive Behavior Therapy, and it was developed by Albert Ellis.
My point was that it's misleading to those trying to interpret it directly into a logical statement, which is what Eliezer seemed to be trying to do. I'm sure there are lots of people who could read that sentiment and understand the meaning behind it (or at least a meaning; some people interpret it differently than others). It's certainly possible to comprehend (obviously, otherwise I wouldn't have been able to explain it), but the meaning is nevertheless in an ambiguous form, and it did confuse at least some people.
Sure, that's true. I suppose you could have a split-brain person who is happy in one hemisphere and not in the other, or some such type of situation. I guess it just depends on what you're looking for when you ask "is someone happy?" If you want a subjective feeling, then self-report data will be reliable. If you're looking for specific physiological states or such, then self-report data may not be necessary, and may even contradict your findings. But it seems suspect to me that you would call it happiness if it did not correspond to a subjective feeling of happiness.
I think when you parse this out you realize that there are a lot of other factors at play here, it's not just a "belief in belief" thing.
Treating someone nicely has an influence on how they subsequently treat you and others. So it's not so much that you're believing someone is nice when they're not, it's that you're believing that they do not have a fixed property state of "niceness", that it is variable dependent on conditions, which you can then manipulate to promote niceness, for the benefit of yourself and others.
None of this is belief in belief. When you look closer you see that you are comparing two different things: how nice Bob has been in the past and how nice Bob will be in the present/future, dependent on what type of environment he is in, and you are thus modifying your behavior on the assumption that your contribution to the environment can make it such that Bob will be nice, or at least nicer. And there is evidence to support this assumption, so it's not irrational to expect Bob to be(come) nice when you treat him nicely.
It's just misleading to phrase it as "I benefit from believing people are nicer than they are," because what you mean by the first "are" (will be) is not the same as what you mean by the second "are" (have been).
I think the idea behind this is that he wishes reality played out in such a way that, to a rational observer, it would engender belief. It's a roundabout way of saying "I wish reality were such that..."
Only retroactively. Our memories are easy to corrupt. But no, I don't think you can be happy or unhappy at any given moment and simultaneously believe the opposite is true. There's probably room for the whole "belief in belief" thing here, though. That is, you could want to believe you're happy when you're not, and could maybe even convince yourself that you had convinced yourself that you were happy, but I don't think you'd actually believe it.
True. Still, the method of measuring serotonin and dopamine levels would offer no benefit over a self-evaluation, since you can't implement it retroactively.
Not everyone can be hypnotized. About a quarter of people can't be hypnotized, according to research at Stanford.
I've tried to be hypnotized before and it didn't work. I think I'm just not capable of making myself that open to suggestion, even though I would have liked to have been hypnotized.
I heard from one of my psychology professors that those on the extreme ends of the IQ spectrum (both high and low) have more trouble being hypnotized, but I'm not sure if this is actually true. The Stanford research showed that hypnotizability wasn't correlated with any personality traits, but I probably wouldn't consider IQ a personality trait.
I think it's not underconfident because our over-confidence is so high that it really is hard to be pessimistic enough to match reality. Depressed people seem to have just enough pessimism to compensate (but not overcompensate) for this bias. I don't think that necessarily makes them have more common sense. Even just in terms of being more realistic, this is only one bias that they compensate for. It's not like depression magically cures any of the other biases.
Depressed people also have a tendency to have an external locus of control, and that is not necessarily rational. You may not be able to control the situations you're in, but it's often the case that your actions do have a significant impact on them, so believing that you have very little or no control is often not rational.
I'd heard of the connection between depression and more accurate perceptions (notably, more accurate predictions due to less overconfidence), but I wasn't aware of the causal direction. It had been portrayed to me as being that the improved perception of reality was the cause of the depression. Or maybe I just mistakenly inferred it and didn't notice. I didn't know it actually went the other way, though now that I think about it, that actually makes a lot of sense.
Personally, I find that improved map-territory correspondence leads to more happiness, at least the improved rationality which results from learning Rational Emotive Behavior Therapy. It's not just losing illusions that helps. It's better understanding yourself, better understanding what is actually causing your emotions, and realizing that you have a more internal locus of control rather than external regarding your emotions. It's liberating to be able to stop an emotional reaction in its tracks, analyze it, recognize it as following from an irrational belief, and consequently replace the irrational emotion with a rational one. It helps especially with anger and anxiety, as those have a tendency to result from irrational, dogmatic beliefs.
How would you calibrate a brain scan machine to happiness except by comparing it to self-evaluated happiness? You only know that certain neural pathways correspond to happiness because people report being happy while these pathways are activated. If someone had different brain circuitry (like, say, someone born with only half a brain), you wouldn't be able to use this metric except by first seeing how their brain pattern corresponded to their self-reported happiness. It seems to me that happiness simply is the perception of happiness. There is no difference between "believing you're happy" and "being happy." You can't be secretly happy or unhappy and not know it, 'cause that wouldn't constitute happiness.
Hm... You make a good point. I'm not sure I understand this conceptually well enough to have any sort of coherent response.
I think knowing a prior constitutes evidence. If you know that the lottery has one million numbers, that is a piece of evidence.
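To make that concrete (the one-million-number lottery is from the comment above; the "winning number is even" update is a hypothetical of my own, just to show that further evidence shifts the same kind of probability the prior does):

```python
num_tickets = 1_000_000

# Merely knowing the lottery's structure is evidence: it fixes the
# prior probability that any particular ticket wins.
p_prior = 1 / num_tickets

# Learning a further fact -- say, the winning number is even and so
# is yours -- halves the space of candidates and updates the probability.
even_tickets = num_tickets // 2
p_posterior = 1 / even_tickets

print(p_prior)      # 1e-06
print(p_posterior)  # 2e-06
```

In both cases the number comes from the same move: counting the possibilities your knowledge leaves open. That's why treating the prior as "no evidence" seems off.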
I like your description of yourself. You remind me a bit of myself, actually. I think I'd enjoy conversing with you. Though I have nothing on my mind at the moment that I feel like discussing.
Hm, I kind of feel like my comment ought to have a bit more content than "you seem interesting" but that's really all I've got.
Hi,
My name is Hannah. I'm an American living in Oslo, Norway (my husband is Norwegian). I am 24 (soon to be 25) years old. I am currently unemployed, but I have a bachelor's degree in Psychology from Truman State University. My intention is to find a job working at a day care, at least until I have children of my own. When that happens, I intend to be a stay-at-home mother and homeschool my children. Anything beyond that is too far into the future to be worth trying to figure out at this point in my life.
I was referred to LessWrong by some German guy on OkCupid. I don't know his name or who he is or anything about him, really, and I don't know why he messaged me randomly. I suppose something in my profile seemed to indicate that I might like it here or might already be familiar with it, and that sparked his interest. I really can't say. I just got a message asking if I was familiar with LessWrong or Harry Potter and the Methods of Rationality (which I was not), and if so, what I thought of them. So I decided to check them out. I thought the HP fanfiction was excellent, and I've been reading through some of the major series here for the past week or so. At one point I had a comment I wanted to make, so I decided to join in order to be able to post the comment. I figure I may as well be part of the group, since I am interested in continuing reading and discussing here. :-)
As for more about my background in rationality and such, I like to think I've always been oriented towards rationality. Well, when I was younger I was probably less adept at reasoning and certainly less aware of cognitive biases and such, but I've always believed in following the evidence to find the truth. That's something I think my mother helped to instill in me. My acute interest in rationality, however, probably occurred when I was around 18-19 years old. It was at this point that I became an atheist and also when I began Rational Emotive Behavior Therapy.
I had been raised as a Christian, more or less. My mother is very religious, but also very intelligent, and she believes fervently in following the evidence wherever it leads (despite the fact that, in practice, she does not actually do this). The shift in my religious perspective initially occurred around when I first began dating my husband. He was not religious, and I had the idea in my head that it was important that he be religious, in order for us to be properly compatible. But I observed that he was very open-minded and sensible, so I believed that the only requirement for him to become a Christian was for me to formulate a sufficiently compelling argument for why it was the true religion. And if this had been possible, it's likely he would have converted, but alas, this was a task I could not succeed at. It was by examining my own religion and trying to answer his honest questions that I came to realize that I didn't actually know what any good reasons for being a Christian were, and that I had merely assumed there must be good reasons, since my mother and many other intelligent religious people that I knew were convinced of the religion. So I tried to find out what these reasons were, and they came up lacking.
When I found that I couldn't find any obvious reasons that Christianity had to be the right religion, I realized that I didn't have enough information to come to that conclusion. When I reflected on all my religious beliefs, it occurred to me that I didn't even know where most of them came from. So I decided to throw everything out the window and start from scratch. This was somewhat difficult for me emotionally, since I was honestly afraid that I was giving up something important that I might not get back. I mean, what if Christianity were the true religion and I gave it up and never came back? So I prayed to God (whichever god(s) he was, if any) to lead me on a path towards the truth. I figured if I followed evidence and reason, then I would end up at the truth, whatever it was. If that meant losing my religion, then my religion wasn't worth having. I trusted that anything worth believing would come back to me. And that even if I was led astray and ended up believing the wrong thing, God would judge me based on my intent and on my deeds. A god who is good will not punish me for seeking the truth, even if I am unsuccessful in my quest. And a god who is not good is not worth worshipping. I know this idea has been voiced by many others before me, but for me this was an original conclusion at the time, not something I'd heard as a quote from someone else.
Another pertinent influence of rationality on my life occurred during my second year of college. I had decided to see a counselor for problems with anxiety and depression. The therapy that counselor used was Rational Emotive Behavior Therapy, and we often engaged in a lot of meaningful discussions. I found the therapy and that particular approach extremely helpful in managing my emotions and excellent practice in thinking rationally. I think it really helped me become a better thinker in addition to being more emotionally stable.
So it's been sort of a cumulative effect, losing my religion, going to college, going through counseling, etc. As I get older, I expose myself to more and more ideas (mostly through reading, but also through some discussion) and I feel that I get better and better at reasoning, understanding biases, and being more rational. A lot of the things I've read here are things that I had either encountered before or seemed obvious to me already. Although, there is plenty of new stuff too. So I feel that this community will be a good fit for me, and I hope that I will be a positive addition to it.
I have a lot of unorthodox ideas and such that I'd be happy to discuss. My interests are parenting (roughly in line with Unconditional Parenting by Alfie Kohn), schooling/education (I support a Sudbury type model), diet (I'm paleo), relationships (I don't follow anyone here; I've got my own ideas in this area), emotions and emotional regulation (REBT, humanistic approach, and my own experience/ideas) and pretty much anything about or related to psychology (I'm reasonably educated in this area, but I can always learn more!). I'm open to having my ideas challenged and I don't shy away from changing my mind when the evidence points in the opposite direction. I used to have more of a problem with this, in so far as I was concerned about saving face (I didn't want to look bad by publicly admitting I was wrong, even if I privately realized it), but I've since reasoned that changing my mind is actually a better way of saving face. You look a lot stupider clinging to a demonstrably wrong position than simply admitting that you were mistaken and changing your ideas accordingly.
Anyway, I hope that wasn't too long an introduction. I have a tendency to write a lot and invest a lot of time and effort into my writing. I care a lot about effective communication, and I like to think I'm good at expressing myself and explaining things. That seems to be something valued here too, so that's good.
The fascinating thing about priming is that it occurs at such a low level—priming speeds up identifying letters as forming a word, which one would expect to take place before you deliberate on the word's meaning.
I would not expect this to take place before deliberating on a word's meaning. Think about it. How would you know if a string of letters is a word? If it corresponds to a meaning. Thus you have to search for a meaning in order to determine if the string of letters is a word. If it were a string of letters like alskjdfljasdfl, it would be obvious sooner, since it's unpronounceable and visually jarring, but something like "banack" could be a word, if it only had a meaning attached to it. So you have to check to see if there is a meaning there. So it doesn't seem all that strange to me that if you prime the neural pathways of a word's meaning, you'd recognize it as a word sooner.