Comments
I don't understand the point of this post. I mean, I understand its points, but why is this post here? Is it trying to point out that: (a) intent and reality are not always -- and usually aren't -- entangled? (b) Reality happened and our little XML-style purpose tags are added after the fact?
It seems odd to spend so much time saying, "Humans reproduced successfully. Anger exists in humans." If the anger part is correlated with the reproduction part, it seems fair to ask, "Why did anger help reproduction?" This is a different question than, "What is the purpose of anger?" Is this difference what the article was pointing out?
To reason correctly about evolutionary psychology you must simultaneously consider many complicated abstract facts that are strongly related yet importantly distinct, without a single mixup or conflation.
How is this different from any other topic?
To reason correctly about computer science you must simultaneously consider many complicated abstract facts that are strongly related yet importantly distinct, without a single mixup or conflation.
To reason correctly about Starcraft II you must simultaneously consider many complicated abstract facts that are strongly related yet importantly distinct, without a single mixup or conflation.
The idea of special-casing evolutionary psychology is where I feel I am losing the plot.
No, actually, not Wisconsin. Sorry :P
You wrote that disbelieving in God is not going to turn someone into a murderer because there are still plenty of good reasons to not be a murderer.
This was intended to be a counterexample -- not a description of how all people work. I can imagine that someone out there would very much become a murderer if they lost religion.
...but I don't see why they would be afraid of losing one system other than because they are afraid they will lose their morality (and become murderers). What is the other reason for being afraid?
Introspection is scary. Dismantling any large area of your belief system is also scary. I would expect that knocking over one's central morality system would (and should!) have drastic effects that would filter down throughout particular behaviors.
My only point was that pointing at the fear of becoming a murderer (or any other particular thing) does not imply an external moral system which is what I read out of the original post.
I guess each religion is the result of the developed moral intuitions of some group of thinkers, if not just one person, and if their versions of the God-source morality ring true to more people that religion will grow. In tiny towns one pastor can influence a bunch of people to buy into their version through charisma, but that religion will outlast them only if their version teaches itself to some extent thereafter without too much alteration.
The adaptability of a meme is related to truth but people often follow what they think sounds nice. Is there anything that makes religious beliefs immune to the dilemma of advertising or political rhetoric?
A central God-source morality would imply a deeper, er, source. But is it theoretically possible that some other system of morality is at work that is just as (or appears as) common as what a God-source morality provides?
(These are honest questions, but somewhat rhetorical.)
Would you be willing to summarize your view in a couple sentences, even if doing so would result in a caricature of your position? The main idea I drew from your comment is that when we think about how murder is immoral, this feels like something different than just that murder is not in our best interest (even after folding in that we have self-interests in being altruistic).
Someone making a choice to do X is not necessarily making this choice for moral reasons. If (a) they are doing X for moral reasons and (b) you suddenly take away those moral reasons but (c) they continue doing X, it does NOT imply that (d) there are more moral reasons lurking behind those mentioned in (a).
Furthermore, if you replace (b) with "they fear suddenly taking away those moral reasons", (d) becomes less likely.
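To make the shape of the argument explicit, here is a rough propositional sketch (my own notation, not anything from the comment above):

```
M = "they have moral reasons to do X"
N = "they have non-moral reasons to do X"
D = "they do X"

Observed:  M held, then M was removed, yet D still holds.
Invalid inference (d):  hidden moral reasons M' must exist behind M.
Sufficient alternative:  N alone can sustain D after M is gone.
```

The point is simply that the persistence of the behavior D underdetermines its causes: non-moral reasons are enough to explain it, so no further moral system is implied.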
Sometimes people are motivated to murder. Presumably I could be motivated to murder, and in that case, why shouldn't I? If there were a higher moral authority, I might find that moral authority compelling enough to tip things in favor of not murdering. However, without that moral authority I'm free after all.
I don't understand this comment. Some people do murder. Do these people consider themselves immoral? To be clear, I was only talking about murder because the article did.
I think the effects of the absence of a moral authority are more obvious in more mundane aspects of life, especially in cases where you are making a choice and one choice is not obviously more moral.
Sure. My point was that the quasi-pragmatic behavior causer-thingy kicks in here, too. So does a complicated morality system. I don't have a problem with either of these things coming into play at a mundane level. What gets interesting is when the two systems collide.
For instance, if a cashier accidentally gives me five dollars extra in change, is it more or less moral to return the change? Is it more or less pragmatic? This seems to touch the same topic as EY's last bit about the two philosophers. But I don't consider this terribly relevant to my original point (even though it is interesting).
This is slightly different from what you referred to as two moral compasses. While that is also interesting, I am currently fascinated by what happens when a moral compass disagrees with a non-moral decision making system. How does the contention get resolved? But this is mostly unrelated to my point. My point revolves around the idea that the moral and non-moral decision systems can -- and often do -- work in tandem. Removing the moral system and noting no behavior change implies more about the non-moral system (or the alternative compasses) than it does about the removed system.
This is similar to EY's point but I think the distinction between a moral reason to not-murder and a non-moral reason to not-murder is key.
There is [a compass] which feels quite distinctly different, which may actually point to an action that is not immediately intuitively moral but which nevertheless feels most like the right choice. Religious training causes us to recognize this different compass, call it "God", and trust in it.
I referred to this as God-source style morality. This obviously differs drastically from person to person in terms of details (and values; and scope) but my actual point was that you cannot use the idea that someone would not become a murderer after throwing away the God-source morality as evidence against the God-source morality. There are too many other things affecting the behavior of not-murder.
That being said, the opposite is also true. You cannot (necessarily) use the idea that God-source morality systems result in not-murder as evidence for the God-source morality. In other words, the causes behind a particular behavior are complicated. Sifting through them isn't as simple as saying, "We fear becoming murderers if God stops existing; therefore there is an external morality system" which is how I interpreted the article.
That isn't to say there aren't issues or problems with God-source morality. I think the idea that all morality "comes" from God is either misleading or inaccurate. But this delves into the greater discussions around ethics which wasn't what I was intending. My point was to show why I do not consider this statement necessarily true:
The very fact that a religious person would be afraid of God withdrawing Its threat to punish them for committing murder, shows that they have a revulsion of murder which is independent of whether God punishes murder or not. If they had no sense that murder was wrong independently of divine retribution, the prospect of God not punishing murder would be no more existentially horrifying than the prospect of God not punishing sneezing.
A religious upbringing could be largely about developing a feel for this nuanced, more reliable compass within ourselves. Without a reason to elevate it, I'm afraid we might never develop a reason to 'trust' it -- especially in cases where it seems contradictory -- and instead we would follow more immediate and pragmatic compasses that aren't really reflecting the full morality we're capable of, and on top of that not have the security of following a compass that has God's approval behind it.
Except most religious upbringings are filled with drastic moral differences. Visiting a friend's house or a different church will shift all of the moral teachings. In my opinion, God-source morality is ridiculously difficult to measure. How do we externally determine who has a developed feel for the nuances and who is off their nut? The best I can tell, the answer is to compare their actions with those in the Bible. (This is assuming Christianity, since we've been talking about God this whole time.) Namely, look at the results of the Fruit of the Spirit.
But at the end of the day, you can fake that. Fakers are the wolves in sheep's clothing but... how do you know? How do you study it and poke it and walk away with an answer? Is it possible to walk up to someone and challenge their actions from moral grounds using rationality?
By the way, hello! I remember you from the last time I posted things here.
Which is to say: The very fact that a religious person would be afraid of God withdrawing Its threat to punish them for committing murder, shows that they have a revulsion of murder which is independent of whether God punishes murder or not. If they had no sense that murder was wrong independently of divine retribution, the prospect of God not punishing murder would be no more existentially horrifying than the prospect of God not punishing sneezing.
If someone built a complicated morality system around the morality of God and suddenly changed it, they should be afraid. This fear doesn't necessarily stem from an, "Oh crap, I will now murder!" vibe. The idea that everything one believed about morality was wrong (or, at the very least, right for the wrong reasons) should shock them to the core. If it doesn't... then I find this statement severely misleading:
God, say the religious fundamentalists, is the source of all morality; there can be no morality without a Judge who rewards and punishes. If we did not fear hell and yearn for heaven, then what would stop people from murdering each other left and right?
It takes time to put everything back into place. If a moral system was built with a non-God-source but the person thought it was a God-source, sure, then your post makes sense. But what if that isn't actually what is happening? What if the question of wanton murder is actually just a different phrasing of the question "If not God, what is the source of morality?"
The answer to this question begins nonsensically. The issue of a religious God-sourced morality isn't that there is a real morality system behind the curtain acting as if it were God-sourced. The issue is that this external non-God-source system isn't being identified as a morality system at all and, in the extreme cases you have labeled religious fundamentalists, this external system cohabits the same control structure. "Thou shalt not murder" is not equivalent to the statement "I do not want to murder." But both thou-shalt-not-murder and I-do-not-want-to-murder result in the behavioral pattern of not-murder.
A good example of the split between these systems is the simple answer, "Because it is illegal and I will be incarcerated." Made even simpler, it is the equivalent of a knee-jerk reaction to touching a hot stove. In practice, this has little to do with morality (unless you want it to). The idea that a hot stove hurts is a fundamental cause but I don't personally consider it a relevant indicator of a moral compass. This will certainly bleed into a morality system at some point and it makes sense that murder is closer to that bleed than eating pork. But it is my opinion that there is a distinct difference between a religious claim of a God-source morality and their pragmatic reaction to reality.
This could be a great segue into a handful of interesting topics about the definitions of morality and behavioral patterns and where the lines cross and so on. Instead, I am choosing to focus on using the claims of "murder is wrong" and "I do not want to murder" to distinguish between a moral reason to not murder and a pragmatic reason to not murder.
Stripping out the morality system doesn't (necessarily) change the other system -- nor does it necessarily change one's behavior. The idea that a behavior survives the first system (the God-source-morality system) does not imply that the second system performs the exact same role. In other words, a second system reinforcing behavior patterns in the first system does not imply that the first system isn't really there.
Likewise, being scared of opening up, working on, and potentially dismantling or replacing the God-source-morality system is justifiable. A rebuilding of the morality system with a new source will (or should) have drastic behavioral effects. But these effects can be supplemented and carried by what I am calling the second system until the new morality system gets up and running. Disbelieving in God is not going to turn someone into a murderer because there are still plenty of good reasons to not be a murderer.
Eventually tying this back into a morality system isn't likely to be as difficult as it appears to the religious fundamentalist. In my opinion, it is likely that the replacement system is built independently of the God-source system, and when the non-God-source morality system provides a plausible alternative, one can actually begin considering a switchover. In the meantime, there are a lot of uneasy sounding questions like, "But what about murder?" Fortunately, there are answers to those questions.
But the actual point of my comment is directed at this statement:
Even the notion that God threatens you with eternal hellfire, rather than cookies, piggybacks on a pre-existing negative value for hellfire.
Negative values are not necessarily morally negative values. "Hellfire" is likely to rack up negative points in nearly any value system -- that is sort of the point. Noting that the fear of hellfire exists outside of a God-source morality is not, in my opinion, a strong argument against God-source morality. It could simply mean that another value system is in play.
That being said, one could proffer the idea that all value is moral value and that all moral value is from the God-source. Then I would agree that such a system could not explain the intrinsic fear of hellfire. But such a system would also describe the pain from touching a hot stove as a God-source evil. In essence, "God" would just be the answer to everything which isn't really an interesting problem to solve as a rationalist. But addressing a weakness of that system as an argument against a more typical God-source morality system seems misplaced.
I could, of course, be completely missing the point... in which case, oops. :) All thoughts, corrections, what-have-you are welcome. If I am wrong, I want to know.
Eh. I guess I don't see a problem with how the language works here. "Correction as question" probably takes longer but if people are getting confused by the process then I consider that a weakness of the particular implementation.
For example:
- Each of the apples are green.
- Should "each" be used with "are" or "is"? As in, "Each of the apples is green."
Your challenge is that this is now ambiguous with regards to whether or not I know the answer. Except, the point isn't what I know. The point is that there is a contention of each+are and the two separate goals of "teach correct grammar" and "learn correct grammar" can move forward in the conversation:
- Each of the apples are green.
- Should "each" be used with "are" or "is"? As in, "Each of the apples is green."
- It should be used with are.
If you needed to know the answer, you got as much information as you could from this speaker. If you have questions you can continue down that path.
If, on the other hand, you happen to know the answer, the conversation now forks into a direct confrontation:
- Each of the apples are green.
- Should "each" be used with "are" or "is"? As in, "Each of the apples is green."
- It should be used with are.
- I was told that each is always treated as singular and, therefore, it should be "each of the apples is green."
So, other than the inconvenience of having to insert a few sentences into the conversation, we haven't lost anything. There isn't any ambiguity and this transition was much smoother than simply saying:
- Each of the apples are green.
- "Each" should be used with "is": "Each of the apples is green."
There are plenty of reasons why phrasing the correction as a question is helpful — your point (as I understand it) was that the ambiguity between "correction as question" and "query for information" makes the former not worth it. My counterpoint is that the ambiguity isn't a necessary component of "correction as question".
For what it is worth, I am mostly thinking of corrections that are not direct claims of fact. For instance:
- Swedish is not the official language of Sweden.
I don't see any advantage to responding to this with a question and, personally, favor the more direct approach:
- Swedish is not the official language of Sweden.
- Yes it is.
If I felt obligated to take the less direct approach, I would do as such:
- Swedish is not the official language of Sweden.
- Oh? I thought it was.
This can stall out if the other person doesn't offer anything useful in response. (And, by the way, Swedish is the official language according to Wikipedia.)
Also of note, this all changes depending on who is making the mistake. If I happened to be talking with someone I knew favored a direct approach, I would just point out the error because they are more likely to consider that polite than beating around the bush.
Oh, okay. I guess my form of "correction as question" is more like:
Is correct?
Also tangential: Have you tried simply getting up to get another drink or go to the bathroom? Chances are high that (a) others will join you, (b) the conversation will experience a natural segue, and/or (c) the people who still care about the subject will stay behind to continue on their own.
Just a thought. I don't really know what environment you were referring to.
That's annoying. What do you do if you're genuinely unsure if they're making a specific mistake and want to know?
How does phrasing a correction as a question limit your options? I don't understand how the specific mistake part ties into the correction as question part.
I immediately took the title to imply both meanings and assumed it was deliberate. I did not think this was all that terribly boastful. So... I guess I agree halfway?
I selfishly voted you up because this is what I want to hear. ;)
Yes, that is what I meant. I guess I should have put that in the post somewhere... I edited it in.
Aha! Thank you much! I figured something was up. :) I won't bother copying this over there, however.
Also, apparently there are some spammers about.
I used to post here on LessWrong and left for various reasons. Someone recognized my name earlier today from my activity here and I just so happened to have thought of LessWrong during a conversation I had with a friend of mine. The double hit was enough to make me curious.
So how's it going? I am just stopping by to say a quick, "Hello." It seems that Open Threads are no longer the way things work but I didn't notice anything else relevant in the Recent Posts. The community seems to be alive and thriving. Congratulations. :)
If a theist is a good person, that theist already is a good person, whether God is real or not.
The relevant question is whether the good person would remain good after they discover God is not real. My hunch is that most people who are good would stay that way.
But I like this point:
Whatever the truth is, the hypothetical frightened father - and the very real frightened theists, such as yourself - already are living under whatever conditions actually hold.
And I will take it with me.
Not so much ideas as things that I think would be helpful but will be buried in obscurity at the level I am.
That being said, this is helped more by learning the basics through reading the sequences than playing status games. My thoughts on status should be taken with the clarification that I am primarily seeking to learn and am thinking about status because I find it interesting.
If I can take my comments and change my behavior so as to be looking forward and increase my status without hampering my ability to learn... why shouldn't I? When I look at the various status levels here at LessWrong I notice the highest level I can see, which is the one I mentioned in my previous comment.
These comments are meant to be taken as observations for people who are curious or interested. I am not nearly as concerned with status as these comments may imply. I just know status is there and wonder about it in the same way I wonder about most things.
Experience can provide an excellent sanity check to make sure there isn't an obvious counter-argument or flaw in something that you are unable to see because you haven't seen it yet. There is much to be said for simply going out there and trying the theory; the results of trying are experience. When you can translate your experience into Bayesian terms you have succeeded.
I have no problem with deferring to someone who has more experience than I do as long as I trust their methodology. Once that trust is gone I start doubting the truth of their experience. I don't think their experience says what they think it says; they haven't translated it correctly.
Sometimes in an argument, an older opponent might claim that perhaps as I grow older, my opinions will change, or that I'll come around on the topic. Implicit in this claim is the assumption that age or quantity of experience is a proxy for legitimate authority. In and of itself, such "life experience" is necessary for an informed rational worldview, but it is not sufficient.
If there is a high rate of conversion as people grow older, then it makes sense to predict that you will come around eventually. People here tell me the same thing about my religious beliefs. The consensus is that as I grow older in the Way of Bayes I will eventually identify as atheist. I don't think this implies the proxy that you mention. Quantity of experience isn't legitimate authority, but if I (a) predict you will change and (b) predict that I am unable to effect the change but rather (c) the change will happen of your own accord sometime in the future, then I have no reason to talk to you. Telling you my prediction is halting the conversation, but the real conversation halter is whatever is causing you not to switch now.
But really, in the end, I do agree with you. I ran out of time and had to cut this comment short. Sorry.
There is a slight difference between being a top contributor and being famous as I am mentioning it here.
My current karma experiment is deliberately not posting comments I think are worth less than 2 karma unless I have a compelling reason to do so (such as asking for help or information). My goal is to increase the quality of my comments to the point that someone could think, "What has MrHen posted recently?" and the answer is more impressive than a series of one-liners and nitpicky comments.
Ideally, this will increase the weight of my words to the point that when I speak, people will listen. It is a pure, straightforward status grab, but one that doesn't involve gobbling up karma. The pinnacle of the status tree at LessWrong is to have someone think, "I, the reader, am wrong" instead of, "they, the writer, are wrong."
I am compiling a mental list for LW status games, rewards, and penalties that is similar to the karma list above. I am not much for status games but the gaming here seems to be harder to avoid than in real life. ("Avoid" is the wrong term but conveys the right intent. Status games are hard to "avoid.")
My biggest regret with the above karma list is that I have no good way to verify or catalogue my comments and their karma. LW just doesn't have any tools to make such a thing easy. I worry that the status list will be even further removed from reality -- possibly to the point of being unable to predict anything at all.
In any case, I like to think that I am getting better in regards to comment quality and topical knowledge. My karma rating keeps going up, so that's a good sign.
EDIT: Replaced "I think will be rated lower than 2 karma" with "I think are worth less than 2 karma."
Yeah, I was way off. I didn't think people would be that interested in karma theory. I think the big oops was the first bullet point.
The atoms of a screwdriver don't have tiny little XML tags inside describing their "objective" purpose. The designer had something in mind, yes, but that's not the same as what happens in the real world. If you forgot that the designer is a separate entity from the designed thing, you might think, "The purpose of the screwdriver is to drive screws" - as though this were an explicit property of the screwdriver itself, rather than a property of the designer's state of mind. You might be surprised that the screwdriver didn't reconfigure itself to the flat-head screw, since, after all, the screwdriver's purpose is to turn screws.
After someone points this out, the incorrect response is to start adding clauses:
The screwdriver's purpose is to turn Phillips-head screws.
Or:
The screwdriver's purpose is to turn screws designed to be turned by the screwdriver.
People are more likely to do this to something other than screwdrivers, obviously.
"The purpose of love is..."
"Eyebrows are there so that..."
It is easy to misinterpret the point of this post as claiming that the purpose assigned to an object is wrong or inadequate or hopelessly complex. That isn't what is being said.
What is the point of this post? I seem to have missed it entirely. Can anyone help me out?
The mirror challenge for decision theory is seeing which option a choice criterion really endorses. If your stated moral principles call for you to provide laptops to everyone, does that really endorse buying a $1 million gem-studded laptop for yourself, or spending the same money on shipping 5000 OLPCs?
Is the point that predicting the end result of a particular criterion is difficult because bias gets in the way? And, because it is difficult, should we start small with stuff like gene fitness and work up to bigger problems like social ethics?
Where is the pure moral reasoner [...] whose trustworthy output we can contrast to human rationalizations of the same utility function? [...] Why, it's our old friend the alien god, of course! Natural selection is guaranteed free of all mercy, all love, all compassion, all aesthetic sensibilities, all political factionalism, all ideological allegiances, all academic ambitions, all libertarianism, all socialism, all Blue and all Green.
Or... is the point that natural selection is a great way to expose the biases at work in our ethics choice criterion?
I am not tracking on something here. This is a summary of the points in the post as I see them:
We are unable to accurately study how closely the results of our actions match our own predictions of those results.
The equivalent problem in decision theory is that we are unable to take a set of known choice criteria and predict which choice will be made given a particular environment. In other words, we think we know what we would/should do in event X but we are wrong.
We possess the ability to predict any particular action from all possible choice criteria.
Is it possible to prove that a particular action does or does not follow from certain choice criteria, thereby avoiding our tendency to predict anything from everything?
We need a bias free system to study that allows us to measure our predictions without interfering with the result of the system.
Natural selection presents a system whose only "goal" is inclusive genetic fitness. There is no bias.
Examples show that our predictions of natural selection reveal biases in ourselves. Therefore, our predictions were biased.
To remove our bias with regards to human ethics, we should use natural selection as a calibration tool.
I feel like the last point skipped over a few points. As best as I can tell, these belong just before the last point:
When our predictions of the bias-proof system are accurate, they will be predictions without bias.
Using the non-biased predictors we found to study the bias-proof system, we can study other systems with less bias.
Using this outline, it seems like the takeaway is, "Don't study ethics until after you have studied natural selection because there is too much bias involved in studying ethics."
Can someone tell me if I am correct? A simple yes or no is cool if you don't feel like typing up a whole lot. Even, "No, not even close," will give me more information than I have right now.
As a counterpoint, my highest rated comments are huge walls of text. This could be because (a) I don't make a lot of jokes or (b) I make crappy jokes or (c) people like my walls of text more than the typical wall of text or (d) something else.
I keep an eye on my karma and have noticed these things that I believe are related to your post:
Talking about the karma system has fallen out of favor. I think people are getting tired of it.
Asking why something was downvoted usually brings more upvotes unless you really, really deserved the downvotes. This latter case will probably be swarmed with downvotes.
Some (many?) people vote with an end score in mind. These meta-voters have more effect on threshold comments that fluctuate between -2 and 2.
Long conversations will generally pull between -1 and +2 per post. Most of my karma comes from lengthy discussions in the comments. Even if the top level post was only rated +2 I will net almost 100 karma points from the post and comments. (Note: I haven't actually added this up. It may be closer to 75 karma.)
Quick responses pointing out third alternatives or simple problems get upvoted and usually roam between +2 and +14. If you want to get your karma higher, this is the easiest way. Comment immediately after a post is submitted and point out the most obvious flaw respectfully and concisely. Don't try to make a point, just note an error. If you get in before the rest of us, you will probably get +4 or higher.
Long responses pointing out serious problems get upvoted but have a lower chance of pulling large amounts of karma than quick responses. However, after the first wave of quick responses, only the longer comments are true candidates for higher karma. I suspect that a long response to a quick comment has a good chance to do well, but haven't really watched those comments yet.
Jokes are upvoted when they are either extremely funny or solidly funny and on topic. Randomness is upvoted if it is an inside joke; otherwise it stays around +0. Sarcasm is appreciated but has the problem of being mistaken for non-sarcasm.
Extending the point or conversation of a top-level post gets upvoted. Most of mine get between +2 and +4. Examples would be almost every comment I have made while reading the sequences.
Aggressiveness is generally poorly received on technical topics and easier to get away with on fuzzy topics. I attribute this mostly to margin of error: it is harder to be bulletproof on a technical topic. I am still having a hard time predicting which non-technical top level posts get voted up. I suspect this is because I don't know what has already been discussed, or the issue is a technical topic that I have misidentified.
Self-deprecation is a huge karma pull. Both my highest rated post and my highest rated comment were essentially me slamming myself over and over. Each was voted higher than +20.
Bullet-point lists of extensions, ideas, questions, and so on seem to do about as well as or better than long paragraphs of text. Perhaps walls of text are harder to skim for goodies, or perhaps the bullet-point lists are better organized?
Non-aggressive requests for clarification or information are not generally downvoted. Mine seem to roam between 0 and +2 karma. If the question and response delve into a lengthy but friendly conversation, I seem to get between 0 and +2 karma for each of my comments. If a solid agreement or conclusion is reached, the capping comment gets about double whatever the individual comments were getting.
Posting near "famous" people amps up the karma action. Replying to EY, Alicorn, Vladimir_Nesov, pjeby, et al. will increase the number of people who read your comments. The reasons for this are varied. The four I used here are just names that popped into my head. Also, some people seem to vote up conversations they are in while others do not. A few downvote anyone disagreeing with them. The people who matter generally fit these criteria: (a) top contributor (b) easily recognizable name (c) frequent poster/commenter (d) abnormal amount of recent activity (e) holds atypical beliefs for the community (f) a troll.
Better grammar, spelling, and language increases the likelihood that your comment will move upward faster.
Comments quickly upvoted higher than typical seem to either (a) go through the roof or (b) get meta-voted back to between +1 and +3.
Telling people to vote in a particular way tends to produce easy to predict results but not in a manner that is easy to describe.
And oh wow did that get long. Do note that this is all being typed off the top of my head, using myself and the comments I read as examples. Naturally, the above does not dictate how people vote.
EDIT: I guess for fun, I predict that this comment gets (rot13) orgjrra cbfvgvir gjb naq cyhf fvk xnezn.
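For anyone who doesn't want to decode the prediction by hand: rot13 is its own inverse, so a one-liner suffices. (Python shown purely as an illustration; the original comment contains no code.)

```python
import codecs

# rot13 shifts each letter 13 places; applying it twice returns the original,
# so decoding and encoding are the same operation.
print(codecs.decode("orgjrra cbfvgvir gjb naq cyhf fvk xnezn", "rot13"))
# → between positive two and plus six karma
```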
EDIT 2: I would normally downvote a post such as this, but elsewhere in the comments you seemed to have received the message and were wondering about deleting it, so I just left it as it is. Also relevant: I generally do not upvote jokes unless they are truly amazing.
Darwinian Evolution is irrelevant to the whole discussion.
I think I understand the point you are trying to make with this. The questions I have in response are these:
- When does Darwinian Evolution become relevant to the discussion of life as we know it?
- Where does your theory of supernatural creation stop and natural cause and effect take over?
- If I were able to study, examine, and see the original supernatural creation of life, would I be able to explain it naturally? In other words, did the supernatural creator use already existing natural components and processes? Or did it create new components and processes? Or... ?
- If I were able to replicate the supernatural phenomenon of creation using natural components, would this be evidence for or against your theory of supernatural creation?
You talk about DNA, replication, bacterium and other complicated terms. I don't know anything about these terms so I am not able to debate you on the particulars. The questions above are not a challenge. They are intended to clarify what you meant in terms I can understand.
Rearranging the cards in a deck has no statistical consequence. Cheating on your spouse significantly alters the odds of certain things happening.
If you add the restriction that there are no consequences, there wouldn't really be much point in doing it, because it's not like you get sex as a result. That would be a consequence.
The idea that something immoral shouldn't be immoral if no one catches you and nothing bad happens as a result is an open problem as far as I know. Most people don't like such an idea but I hear the debate surface from time to time. (Usually by people trying to convince themselves that whatever they just did wasn't wrong.)
In addition, cutting a deck of cards does have an obvious effect. There is no statistical consequence but obviously you are not going to get the card you were originally going to be dealt.
Taking two seconds to click on the Collapse Postulate link, it appears that the article was originally posted on Overcoming Bias. Also, it appears to be part of a larger sequence on quantum mechanics.
I haven't read that sequence or that article so I cannot compare them to yours, but all of those links in the block you quoted presumably enhance the discussion to make the conclusion more obvious. Your article has one link.
Good point. I will do this in the future.
EDIT: For historical purposes, this comment reported two typos that have since been fixed. I was intending to delete this comment when they were fixed but a valuable discussion occurred below.
I'd suppose that the heuristic is along the lines of the following: Say there's an agreed-upon fair procedure for deciding who gets something, and then someone changes that procedure, and someone other than you ends up benefiting. Then it's unfair, and what's yours has probably been taken.
Everything else you've said makes sense, but I think the heuristic here is way off. First, they object before the results have been produced, so the benefit is unknown. Second, the assumption of an agreed-upon procedure is only really valid in the poker example. Other examples don't have such an agreement and seem to display the same behavior. Finally, the change to the procedure could be made by a disinterested party with no possible personal gain to be had. I suspect that the reaction would stay the same.
So, whatever heuristic may be at fault here, it doesn't seem to be the one you are focusing on. The fact that my friends didn't say, "You're cheating" or "You broke the rules" is more evidence against this being the heuristic. I am open to the idea of a heuristic being behind this. I am also open to the idea that my friends may not be aware of the heuristic or its implications. But I don't see how anything is pointing toward the heuristic you have suggested.
† Thought experiment: we have to decide a binary disagreement by chance, and instead of flipping a coin or playing Rock-Paper-Scissors, I suggest we do the following: First, you roll a 6-sided die, and if it's a 1 or 2 you win. Otherwise, I roll a 12-sided die, and if it's 1 through 9 I win, and if it's 10 through 12 you win.
Hmm... 1/3 I win outright... 2/3 enters a second roll where I win 1/4 of the time. Is that...
1/3 + 2/3 * 1/4 =
1/3 + 2/12 =
4/12 + 2/12 =
6/12 =
1/2
Seems right to me. And I don't expect to feel uneasy about such an experiment at all, since the odds are the same. If someone offered me a scenario and I didn't have the math prepared, I would work out the math and decide if it is fair.
If I do the contest and you start winning every single time I might start getting nervous. But I would do the same thing regardless of the dice/coin combos we were using.
I would actually feel safer using the dice because I found that I can strongly influence flipping a fair quarter in my favor without much effort.
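The arithmetic above can also be checked empirically. Here is a minimal Monte Carlo sketch (the function name, seed, and trial count are my own illustrative choices, not anything from the thread):

```python
import random

def you_win(rng):
    """One run of the proposed procedure, from the d6-roller's perspective."""
    if rng.randint(1, 6) <= 2:        # 1 or 2 on the six-sided die: win outright
        return True
    return rng.randint(1, 12) >= 10   # 10-12 on the twelve-sided die: win the second stage

rng = random.Random(0)                # fixed seed so the run is reproducible
trials = 100_000
rate = sum(you_win(rng) for _ in range(trials)) / trials
print(rate)  # hovers near 0.5, matching 1/3 + 2/3 * 1/4 = 1/2
```

With 100,000 trials the sampling error is well under a percentage point, so the printed rate should land very close to the 1/2 computed by hand.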
I loved that book. I still have moments when I pull some random picture from that book out of my memory to describe how an object works.
EDIT: Apparently the book is on Google.
She was talking to students at Harvard.
Okay. Nothing I have will help you. My problems are generally OCD-based procrastination loops or modifying bad habits and rituals. Solutions to these assume impulses to do things.
I have nothing that would provide you with impulses to do.
All of my interpretations of "I can't do X" assume what I mean when I tell myself I can't do X.
Sorry. If I were actually there I could probably come up with something but I highly doubt I would be able to "see" you well enough through text to be able to find a relevant answer.
Possible solutions:
Increase the amount of effort it takes to do the low-effort things you are trying to avoid. For instance, it isn't terribly hard to set your internet on a timer so it automatically shuts off from 1 to 3 pm. While it isn't terribly hard to turn it back on, if you can scrounge up the effort to turn it back on, you may be able to put that effort into something else.
Decrease the amount of effort it takes to do the high-effort things you are trying to accomplish. Paying bills, for instance, can be done online and streamlined. Family and friends can help tremendously in this area.
Increase the amount of effort it takes to avoid doing the things you are trying to accomplish. If you want to make it to an important meeting, try to get a friend to pick you up and drive you all the way over there.
These are somewhat complicated and broad categories and I don't know how much they would help.
That works for me. I am not convinced that the rule-changing heuristic was the cause but I think you have defended your position adequately.
A digression: But hopefully at this point, you'll realize the difference between the frequentist and Bayesian instincts in this situation. [...]
Yep. This really is a digression which is why I hadn't brought up another interesting example with the same group of friends:
One of my friends dealt hearts by giving each player a pack of three cards, the next player a pack of three cards, and so on. The number of cards being dealt was the same, but we all complained that this actually affected the game, because shuffling isn't truly random and it was mucking with the odds.
We didn't do any tests on the subject because we really just wanted the annoying kid to stop dealing weird. But, now that I think about it, it should be relatively easy to test...
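Here is one way the test could be sketched. Everything below is an assumption on my part: a crude clump-based riffle model, four players, three shuffles per deal, and "how often do originally adjacent cards land in the same hand" as the yardstick for whether the dealing style matters.

```python
import random

def riffle(deck, rng):
    """One sloppy riffle: cut near the middle, then interleave in clumps of 1-3."""
    cut = len(deck) // 2 + rng.randint(-3, 3)
    left, right = deck[:cut], deck[cut:]
    out = []
    while left or right:
        for pile in (left, right):
            k = rng.randint(1, 3)
            out.extend(pile[:k])
            del pile[:k]
    return out

def deal(deck, players, pack):
    """Deal round-robin in packs of `pack` cards (pack=1 is normal dealing).
    Note: 52 cards in packs of 3 split slightly unevenly; fine for this toy."""
    hands = [[] for _ in range(players)]
    for i, start in enumerate(range(0, len(deck), pack)):
        hands[i % players].extend(deck[start:start + pack])
    return hands

def adjacency_rate(pack, trials, rng):
    """How often do cards that started out adjacent end up in the same hand?"""
    hits = total = 0
    for _ in range(trials):
        deck = list(range(52))        # identity order stands in for last hand's clumping
        for _ in range(3):            # a poorly shuffled deck: only three riffles
            deck = riffle(deck, rng)
        owner = {c: p for p, hand in enumerate(deal(deck, 4, pack)) for c in hand}
        for c in range(51):
            total += 1
            hits += owner[c] == owner[c + 1]
    return hits / total

rng = random.Random(0)
print(adjacency_rate(pack=1, trials=1000, rng=rng))  # dealt one at a time
print(adjacency_rate(pack=3, trials=1000, rng=rng))  # dealt in packs of three
```

If the two printed rates differ noticeably, the dealing style really does interact with the imperfect shuffle; with a thoroughly shuffled deck the two rates should roughly converge.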
Also related, I have learned a few magic tricks in my time. I understand that shuffling is a tricksy business. Plenty of more amusing stories are lurking about. This one is marginally related:
At a poker game with friends of friends, there was one player who shuffled by cutting the cards. No riffles, no complicated cuts, just take a chunk from the top and put it on the bottom. The mathematician friend from my first example and I told him to knock it off and shuffle the cards. He tried to convince us he was randomizing the deck. We told him again to knock it off and shuffle the cards. He obliged while claiming that it really doesn't matter.
This example is a counterpoint to the original. Here is someone claiming that it doesn't matter when the math says it most certainly does. The aforementioned cheater-heuristic would have prevented this player from doing something Bad. I honestly have no idea if he was just lying to us or was completely clueless but I couldn't help but be extremely suspicious when he ended up winning first place later that night.
I would say that they implement the rule-changing heuristic, which is not automatically thought of as an instance of the cheater heuristic, even if it evolved from it. Changing the rules makes people feel unsafe; people who do it without good reason are considered dangerous, but not automatically cheaters.
This behavior is repeated in scenarios where the rules are not being changed or there aren't "rules" in the sense of a game and its rules. These examples are significantly fuzzier which is why I chose the poker example.
The lottery ticket example is the first that comes to mind.
EDIT: And also, from your description it seems that you have deliberately broken a rule without giving any reason for that. It is suspicious.
Why wouldn't the complaint then take the form of, "You broke the rules! Stop it!"?
Why can I override mine? What makes me different from my friends? The answer isn't knowledge of math or probabilities.
What do you do when you aren't doing anything?
EDIT: More questions as you answer these questions. Too many questions at once is too much effort. I am taking you dead seriously so please don't be offended if I severely underestimate your ability.
I don't know how to respond to this. I feel like I have addressed all of these points elsewhere in the comments.
A summary:
- The poker game is an example. There are more examples involving things with less obvious rules.
- My reputation matters in the sense that they know I wasn't trying to cheat. As such, when pestered for an answer, they are not secretly thinking, "Cheater." This should imply that they are avoiding the cheater-heuristic or are unaware that they are using it.
- I confronted my friends and asked for a reasonable answer. Heuristics were not offered. No one complained about broken rules or cheating. They complained that they were not going to get their card.
It seems to be a problem with ownership. If this sense of ownership is based on a heuristic meant to detect cheaters or suspicious situations... okay, I can buy that. But why would someone who knows all of the probabilities involved refuse to admit that cutting the deck doesn't matter? Pride?
One more thing of note: They argued against the abstract scenario. This scenario assumed no cheating and no funny business. They still thought it mattered.
Personally, I think this is a larger issue than catching cheaters. People seemed somewhat attached to the anti-cheating heuristic. Would it be worth me typing up an addendum addressing that point in full?
I am more likely to be considered OCD than any of my friends in the example. I don't care if you cut the deck.
But the upshot is that they were irrational as a side effect of usually rational heuristics.
So, when I pester them for a rational reason, why do they keep giving an answer that is irrational for this situation?
I can understand your answer if the scenario was more like:
"Hey! Don't do that!"
"But it doesn't matter. See?"
"Oh. Well, okay. But don't do it anyway because..."
And then they mention your heuristic. They didn't do anything like this. They explicitly understood that nothing was changing in the probabilities and they explicitly understood that I was not cheating. And they were completely willing to defend their reaction in arguments. In their mind, their position was completely rational. I could not convince them with math that it was irrational. Something else was the problem.
"Heuristics" is nifty, but I am not completely satisfied with that answer. Why would they have kept defending it when it was demonstrably wrong?
I suppose it is possible that they were completely unaware that they were using whatever heuristic they were using. Would that explain the behavior? Perhaps this is why they could not explain their position to me at the time of the arguments?
How would you describe this heuristic in a few sentences?
I agree with your comment and this part especially:
However, the same thought process doesn't occur on winning; people aren't inclined to analyze their successes in the same way that they analyze their failures, even if they are both random events.
Very true. I see a lot of behavior that matches this. This would be an excellent source of the complaint if it happened after they lost. My friends complained before they even picked up their cards.
When you deal Texas Hold'em, do you "burn" cards in the traditional way? Neither I nor most of my friends think that those cards are special, but it's part of the rules of the game. Altering them, even without (suspicion of) malicious intent breaks a ritual associated with the game.
We didn't until the people on TV did it. The ritual was only important in the sense that this is how they were predicting which card they were going to get. Their point was based entirely on the fact that the card they were going to get is not the card they ended up getting.
As a reminder to the ongoing conversation, we had arguments about the topic. They didn't say, "Do it because you are supposed to do it!" They said, "Don't change the card I am supposed to get!"
One arena in which I see less need for that is when our superstitious and pattern-seeking behaviors let us enjoy things more. I have a ritual for making coffee. I enjoy coffee without it, but I can reach a near-euphoric state with it. Faulty wiring, but I see no harm in taking advantage of it.
Sure, but this isn't one of those cases. In this case, they are complaining for no good reason. Well, I guess I haven't found a good reason for their reaction. The consensus in the replies here seems to be that their reaction was wrong.
I am not trying to say you shouldn't enjoy your coffee rituals.
How do you want your organism to react when someone else's voluntary action changes who receives a prize?
I want my organism to be able to tell the difference between a cheater and someone making irrelevant changes to a deck of cards. I assume this was a rhetorical question.
Evolution is great but I want more than that. I want to know why. I want to know why my friends feel that way but I didn't when the roles were reversed. The answer is not "because I knew more math." Have I just evolved differently?
I want to know what other areas are affected by this. I want to know how to predict whatever caused this reaction in my friends before it happens in me. "Evolution" doesn't help me do that. I cannot think like evolution.
As much as, "You could have been cheating" is a great response -- and "They are conditioned to respond to this situation as if you were cheating" is a better response -- these friends know the probabilities are the same and know I wasn't cheating. And they still react this way because... why?
I suppose this comment is a bit snippier than it needs to be. I don't understand how your answer is an answer. I also don't know much about evolution. If I learned more about evolution would I be less confused?
Ah, okay. That makes more sense. I am still experimenting with the amount of predictive counter-arguing to use. In the past I have attempted to do so by adding examples that would address the potential objections. This hasn't been terribly successful. I have also directly addressed the points and people still brought them up... so I am pondering how to fix the problem.
But, anyway. The topic at hand still interests me. I assume there is a term for this that matches the behavior. I could come up with some fancy technical definition (perceived present ownership of a potential future ownership) but it seems dumb to make up a term when there is one lurking around somewhere. And the idea of labeling it an ownership problem didn't really occur to me until my conversation with you... so maybe I am answering my own question slowly?
EDIT: Wow, this turned into a ramble. I didn't have time to proof it so I apologize if it doesn't make sense.
I'm not sure our guesses (I presume you have not tested the lottery ticket swap experimentally) are actually in conflict. My thesis was not "they think you're cheating", but simply, straightforwardly "they object to any alteration of the dealing rules", and they might do so for the wrong reason - even though, in their defense, valid reasons exist.
Okay, yeah, that makes sense. My instinct is pointing me in the other direction, namely because I have the (self-perceived) benefit of knowing which friends of mine were objecting. Of note, no one openly accused me of cheating or anything like that. If I accidentally dropped the deck on the floor or knocked it over, the complaints would remain. The specific complaint, which I specifically asked for, is that their card was put into the middle of the deck.
(By the way, I do not think that claiming arrival at a valid complaint via the wrong reason is offering much defense for my friends.)
Your thesis, being narrow, is definitely of interest, though. I'm trying to think of cases where my thesis, interpreted naturally, would imply the opposite state of objection to yours. Poor shuffling (rule-stickler objects, my-cardist doesn't) might work, but a lot of people don't attend closely to whether cards are well-shuffled, stickler or not.
Any pseudo random event where people can (a) predict the undisclosed particular random object and (b) someone can voluntarily preempt that prediction and change the result tends to receive the same behavior.
(I presume you have not tested the lottery ticket swap experimentally)
I have not tested it in the sense that I sought to eliminate any form of weird contamination. But I have lots of anecdotal evidence. One such, very true, story:
My grandfather once won at bingo and was offered to choose a prize from a series of stuffed animals. Each animal was accompanied by an envelope containing some amount of cash. Amongst the animals were a turtle and a rhinoceros. Traditionally, he would always choose the turtle because he likes turtles but this time he picked the rhinoceros because my father happens to like rhinos. The turtle contained more money than the rhino and my dad got to hear about how he lost my grandfather money.
Granted, there are a handful of obvious holes in this particular story. The list includes:
- My grandfather could have merely used it as an excuse to jab his son-in-law in the ribs (very likely)
- My grandfather was lying (not likely)
- The bingo organizers knew that rhinos were chosen more often than turtles (not likely)
- My grandfather wasn't very good at probability (likely, considering he was playing bingo)
- Etc.
More stories like this have taught me to never muck with pseudo-random variables whose outcomes affect things people care about, even if the math behind the mucking doesn't change anything. People who had a lottery ticket and traded it for one with an equal chance will get extremely depressed because they actually "had a shot at winning." These people could completely understand the probabilities involved, but somehow this doesn't help them avoid the "what if" depression that tells them they shouldn't have traded tickets.
People do this all the time with things like when they left for work. Decades ago, my mother-in-law put her sister on a bus and the sister died when the bus crashed. "What if?" has dogged her ever since. The random chance of that particular bus crashing on that particular day has become associated with her completely independent choice to put her sister on the bus. While they are mathematically independent, that doesn't change the fact that her choice mattered. For some reason, people take this mattering and do things with it that make no sense.
This topic can branch out into really weird places when viewed this way. The classic problem of someone holding 10 people hostage and telling you to kill 1 or all 10 die matches the pattern with a moral choice instead of random chance. When asking if it is more moral to kill 1 or let the 10 die people will argue that refusing to kill an innocent will result in 9 more people dying than needed. The decision matters and this mattering reflects on the moral value of each choice. Whether this is correct or not seems to be in debate and it is only loosely relevant for this particular topic. I am eagerly looking for the eventual answer to the question, "Are these events related?" But to get there I need to understand the simple scenario, which is the one presented by my original comment.
(Incidentally, if you had made a top-level post, I would want to see this kind of prediction-based elimination of alternative hypotheses.)
I am having trouble understanding this. Can you say it again with different words?
Hm. Interesting, I don't think I ever realized those two words had slightly different meanings.
*Files information under vocab quirks.*
I was tempted to add this comment:
Vote this comment up if you have no idea what Alicorn's metaphor of luminosity means.
But figured it wouldn't be nice to screw with your poll. :)
The point, though, is that I really don't understand the luminosity metaphor based on how you have currently described it. I would guess the following:
A luminous mental state is a mental state such that the mind in that state is fully aware of being in that state.
Am I close?
Edit: Terminology
For a casual game, where it is assumed no one is cheating, then, unless you're a stickler for tradition, who cares? Your friends are wrong.
Sure, but the "wrong" in this case couldn't be shown to my friends. They perfectly understood probability. The problem wasn't in the math. So where were they wrong?
Another way of saying this:
- The territory said one thing
- Their map said another thing
- Their map understood probability
- Where did their map go wrong?
The answer has nothing to do with me cheating and has nothing to do with misunderstanding probability. There is some other problem here and I don't know what it is.
I don't think this is relevant. I responded in more detail to RobinZ's comment.