Open Thread, Feb. 2 - Feb. 8, 2015

post by Gondolinian · 2015-02-02T00:28:25.757Z · LW · GW · Legacy · 256 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Previous Open Thread


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, not in Main.

4. Open Threads should start on Monday and end on Sunday.

256 comments

Comments sorted by top scores.

comment by Parmenides · 2015-02-02T16:52:12.482Z · LW(p) · GW(p)

Posting for the first time because I feel I could maybe use some help. [And yes, I know of the Welcome Thread, but I think the Open Thread gets more attention so I'm posting first here. Maybe later I'll post in the Welcome Thread.]

I come from a very religious family and community, but I'm a closet atheist. (More accurately, I'd label myself an agnostic leaning toward atheism with regard to the existence of one or more intelligent world-designer(s), but I give almost no credence to any religious claims beyond that. In any case, for simplicity I'm just going to refer to myself here as an atheist.)

I have only a single very close friend who knows of my atheism. Five or six other people know I disagree with all the standard religious arguments, but they think that I've opted for "blind faith" and that I'm still religious. Most of my family and friends, however, although they know that I'm unusually open-minded and intellectual for my close-minded religious community (and they look at me a bit strangely for that), still think that I'm fully religious.

A bit of background: I started doubting in high school, but it didn't turn into a full-fledged crisis of faith until I was about 18 or 19. Eventually a religious mentor pointed me to Pascal's Wager, and I leaned on that for many years. I got married to a wonderful religious girl and went on to study advanced religious studies. Shortly before the birth of my third child, however, I finally took another critical look at Pascal's Wager. I read numerous scholarly works, went through a bunch of articles on the internet (including several from LessWrong), and did a lot of heavy thinking. In the end I decided that I couldn't rely on the Wager any longer. For the next few months I nonetheless forced myself to believe by pure force of will (whether this was Belief in Belief or real belief is a different question), but eventually the cognitive dissonance grew too great and I gave up.

The problem is that I can't tell anyone. My wife would probably decide to follow me - but there's a chance she might not, and I love her way too much to risk losing her. Even if she did follow me it would cause her a tremendous amount of mental anguish which I really don't want to impose on her. She'd also likely not be able to keep that kind of secret from her friends and family for too long, and the pain of trying to keep it secret would likely be even worse for her than it is for me. And if it did get out, then we'd risk losing virtually all of our (close-knit, wonderful, highly supportive) families and friends. And that's besides the terrible emotional effects that a revelation of this sort would have on my parents, kids, siblings, and friends.

I do have a few vague tentative plans for eventually being able to maneuver myself into a position where I can reveal my beliefs without too much of a risk, but that's only for the long term. For the short term I'm stuck with only a single friend who knows my true position.

The problem is that it's so hard! I hate keeping secrets from my wife. I hate having to bottle up my intellectual arguments (particularly because I'm the type whose favorite activity is a good intellectual discussion with friends). I hate having to fake prayers and fake interest in my friends' and family's religious discussions. But what am I to do? I'm stuck with no alternatives.

So what do I want from you, fellow readers of LessWrong? I don't know. Emotional support? Advice? Maybe a link to an organization I could contact (secretly, of course) or to some relevant online resources? Whatever you can think of, I guess. Or maybe I'm just venting my emotions.

ETA: Maybe I should be a bit more specific. My situation closely parallels this. I do not want to end up like that!

Replies from: Vaniver, Gram_Stone, Viliam_Bur, gjm, mwengler, Gram_Stone, Alicorn, Unknowns, torekp, ChristianKl, Squark, polymathwannabe
comment by Vaniver · 2015-02-02T19:52:08.509Z · LW(p) · GW(p)

Paul Graham wrote an article called What You Can't Say that seems somewhat relevant to your position, and in particular engages with the instrumental value of epistemic rationality. I bring that one up specifically because his conclusion is mostly "figure out what you can't say, and then don't say it." But he's also a startup guy, and is well aware of the exception that many good startup ideas seem unacceptable, because if they were acceptable ideas they'd already be mature industries. So many heresies are not worth endorsing publicly, even if you privately believe them, but some heresies are (mainly, if you expect significant instrumental gains from doing so).

I grew up in a Christian household and realized in my early teens that I was a gay atheist; I put off telling people for a long time and I'm not sure how much I got from doing so. (Both of my parents were understanding.) Most of my friends were from school anyway, and it was easy to just stop going to church when I left town for college, and then only go when visiting my parents, out of family solidarity.

My suspicion is that your wife would prefer knowing sooner rather than later. I also predict that it is not going to get easier to tell her or your children as time goes on; if anything, the more your children age and absorb religious memes and norms, the more your public deconversion will affect them.

comment by Gram_Stone · 2015-02-03T10:09:34.829Z · LW(p) · GW(p)

I think that your edit clarified things for me substantially. I read the entire article that you linked. I regret my earlier post for reasons that you will hopefully see.

I have a relevant anecdote about a simpler situation. I was with two friends. The One thought that it would be preferable for there to be less and/or simpler technology in the world, and the Other thought that the opposite was true. The One believed that technology causes people to live meaningless lives, and the Other conceded that he believed this to be true but also believed that technology has so many other benefits that this is acceptable. The One would always cite examples of how technology was used for entertainment, and the Other, examples of how technology was used for work. I stepped in and pointed out the patterns in their respective examples. I said that there were times when I had wasted time by using technology. I pointed out that if a person were like the One, and thus felt that they were leading a less meaningful life by the use of technology, then they should stop. It would be harmful were I to prescribe that a person like the One indiscriminately use technology. I then said that, through technology, I was able to meet people similar to me, people whom I would be far less likely to meet in physical life, and with whom I could hold conversations that I could not hold in physical life. In this way, my life had been made more meaningful by technology. And so it would be harmful for someone to prescribe that I indiscriminately do not use technology.

I learned three things from this event:

1) I should look for third alternatives.

I definitely did not consider this enough in my original response to you, and I apologize. Just like it is not a matter of less technology vs. more technology, it is not necessarily a matter of 'Keep your old life,' vs. 'Start a new life.' Honestly, your 'vague tentative plans' sound like potential third alternatives. I would say keep thinking about those, and also feel good for thinking of and about them. I'd love to hear about them, however vague and tentative. Vaniver touched on this. I would say that he found a third alternative in his own life. I'm bisexual; in physical life, I'm selective about whom I tell, and I don't feel outraged that this is pragmatic or feel inauthentic for doing it. Others would feel like they were in a prison of their own making. I picked the best alternative that I could live with.

2) I should remember that humans are never 'typical.'

There are people who feel like their skin is on wrong when they use technology that they consider undesirably advanced. I love technology. The One thought that people who used technology were suffering from a sense of meaninglessness, and they were simply unaware of this, or actively ignoring it. This was not true for me: Technology makes my life more meaningful. For either of us to act otherwise would be for us to act against our preferences. Likewise, it may have been more important for Shulem to act authentically than it was for him to keep his social relationships. Maryles has a sneaking suspicion that this is false. Yet, Shulem may really be more lonely and really not regret it.

3) I should remember that humans do things for more than just happiness.

People value other things besides happiness. The One saw that some people were happy playing mobile games all of the time, their reward centers firing away, but didn't think that it was worth it because their happiness was meaningless. The One valued meaning more than entertainment, and perhaps even more than happiness in general. People forget this easily. I see this in the article when Maryles says:

Not that I have a right to tell people how to live their lives. I just wish that he would have made choices that would have kept his family intact, and given him a better, more meaningful life. Shulem says that he has no regrets. And yet I wonder if he has had similar thoughts? So I am sad for Shulem who still seems to live a very lonely life. I am sad for his children who lost a father they once loved. And yet I am hopeful that those with similar leanings that read his book will realize that the kind of radical change Shulem Deen made - even as he felt it was the right one based on being true to oneself - may not be the best solution for individual happiness.

He wishes that Shulem had made decisions to give himself a more meaningful life. He wishes that Shulem had made decisions to give himself a happier life. He wishes that Shulem had made decisions to give himself a less lonely life. He thinks that, ultimately, Shulem has made decisions to give himself a more authentic life at the price of forgoing these other possibilities. About this, he may be right. Another possibility is that there was no more preferable alternative. Maryles suggests otherwise: He seems to think either that authenticity, meaning, community, and happiness are all the same; or that all are reducible to one; or that all necessarily follow from one. I cannot glean which he believes from context. It is entirely possible that Shulem feels that his life is less happy, less meaningful, more lonely, and more authentic, and that he prefers all and regrets none of this. On the other hand, you, it seems, would not prefer this and would regret this, because you are not typical, as said above. I keep the complexity of value in mind when evaluating potential third alternatives.

Lastly, because things are often about that which they explicitly are not, I feel obliged to touch on this:

I was sad not so much about his erroneous (in my view) conclusions about God and Judaism. Although I am in no way minimizing the importance of that - this post isn’t about that.

If this is true, then 'The Lonely Man of No Faith' is a bad title, in the sense that it isn't representative of the article's implication. (It does, however, make for excellent link bait.) No one is thinking, "Surely his lack of faith is merely a coincidence. There must be other reasons that this man is lonely." Maryles has to say that the post is not about 'that' precisely because everyone has assumed that it's about that.

The general implication is that the so-called truth-seekers are worse off even though the opposite should be true. On this, I will say that any time that I have seen someone become less satisfied with their life by reading about the sorts of things that are posted here, it's because they have experienced a failure of imagination, or their new beliefs have not fully propagated. The failure modes that I've seen the most are:

You've given no indication that you believe any of these things, but I had to address that because of the article's implication, and you or others very well may believe these things, explicitly or implicitly, without indication. You identify as an open-minded person; you seem to take pride in it. As such, you may not really believe that there is no God; rather, you might believe that you ought to believe that there is no God, because perhaps that is what you believe open-minded people do, and you want to do what open-minded people do. (I had this very problem. Belief in belief goes both ways!) Saying that one atheist is less happy because he has been separated from his loved ones is very different from saying that atheists are universally dissatisfied because theism is essentially preferable. Though the author attempts to make that distinction, I think that he fails.

I'm also not saying that I deductively concluded that truth-seeking is preferable to ignorance. I inductively concluded it. Truth-seeking could have been horrible: It turns out it generally isn't.

Replies from: maxikov, Torello
comment by maxikov · 2015-02-05T07:04:12.515Z · LW(p) · GW(p)

The general implication is that the so-called truth-seekers are worse off even though the opposite should be true.

The opposite should be true for a rational agent, but humans aren't rational agents, and may or may not benefit from false beliefs. There is some evidence that religion could be beneficial for humans while being completely and utterly false:

http://www.tandfonline.com/doi/abs/10.1080/2153599X.2011.647849

http://www.colorado.edu/philosophy/vstenger/Folly/NewSciGod/De%20Botton.pdf

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1361002/

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0003679

Of course, this is not "checkmate, atheists", and doesn't mean we should all convert to Christianity. There are ways to mitigate the negative impact of false beliefs while preserving the benefits of letting the wiring of the brain do what it wants to do. Unitarian Universalists from the religious side, and Raemon's Solstice from the atheist side, are trying to approach this nice zone with the amount of epistemological symbolism and ritual optimal for real humans, until we find a way to rewire everyone. But in general, unless you value truth for its own sake, you may be better off in life with certain false beliefs.

Replies from: Gram_Stone
comment by Gram_Stone · 2015-02-05T16:00:29.332Z · LW(p) · GW(p)

Good point, maxikov. I agree that instrumental rationality > epistemic rationality once you have enough epistemic rationality to understand why and not have it backfire and inadvertently make you less rational in both senses. As I said before, life is always lived in practice.

comment by Torello · 2015-02-03T15:35:26.590Z · LW(p) · GW(p)

Your discussion of failure modes at the bottom of this comment is excellent.

Do you have any recommended books or articles on the topic?

Has there already been a post about these failure modes on the main page? If not, please expand this into a main post.

To all other readers, please feel free to share books or articles on the topic.

Replies from: Gram_Stone
comment by Gram_Stone · 2015-02-03T15:51:17.440Z · LW(p) · GW(p)

Thanks, Torello. Like many good things, they're really short and sweet summaries of things that Eliezer and others have been saying for years. The list is by no means exhaustive. I'm not very far into the Sequences, and this is just what I've pieced together, so someone else would probably be able to point you to relevant LW posts. I know far less than I appear to know.

I haven't read it, but my guess is that Gary Drescher's Good and Real: Demystifying Paradoxes from Physics to Ethics would be what you're looking for. I know for a fact that it explains why no absolute morality != moral relativism or moral nihilism, and why determinism != fatalism. As for the second, from what I understand, reductionism is the key to solving most of our Old Hard Unsolved Problems, so he'll talk about that, but I don't know if he'll talk about people weirdly losing all hope when they see that reductionism is the way to go. I don't know about the fourth item, but I don't see Drescher successfully avoiding it. The fifth item in the list probably did not merit discussion in Drescher's book.

I don't think it merits its own post, even in discussion. It's not really novel here, except perhaps in presentation.

comment by Viliam_Bur · 2015-02-03T09:07:50.202Z · LW(p) · GW(p)

I have never been in a situation similar to yours, so my advice may be wrong, but here it is anyway.

When people change their opinion, they sometimes go from one extreme to the opposite extreme, as if to make sure they would not drift back to their old position. But there is no need for sudden large changes. Unlike religious people, atheists do not have a duty to proselytize everyone to their non-belief. To put it bluntly, you are allowed to lie and deceive, if it is necessary for your survival. I do not support lying in general, because it has its cost, but sometimes telling the truth (at the wrong moment) has a much greater cost. The cost of lying is weakening the relationship with people you lie to. So I think you should try to be open with your wife (but be careful about your coming out), but lying to everyone else is an option.

When explaining how you feel, focus on the positive parts, not the negative parts. Rejecting religion is the negative part. It is not your terminal value to be non-religious. You probably still like some aspects of the religious culture; and that's okay. (Atheists are free to celebrate Christmas, if they choose to.) It's just that your positive values are understanding the world, being honest, etc., and religion happens to be incompatible with that. You are throwing religion away because the alternative would be throwing your curiosity or sanity away. If you are going to explain the negative part to someone, you should explain the positive part first (without even mentioning religion at the beginning). Only when they value the positive part should you show them the conflict; then they may empathise.

Specifically, I think you should show your wife all the cool things you are interested in, starting with the noncontroversial ones. She does not have to like them all; different people have different preferences; but you may find something that is interesting for both of you. Then you have an enjoyable topic to talk about which is unconnected to religion. The more such topics you have, and the more time you spend debating them, the less time you spend debating religion, and the smaller the role religion plays in keeping you together. Then the impact of abandoning religion will be smaller. Just start with the simple stuff; do not go into the "adversarial intellectual debate mode" you probably sometimes use with your friends. Instead, be a guide in an intellectual adventure. For example, find some noncontroversial TED talk videos (not about religion, politics, evolution, global warming, or whichever topics are controversial in your religious community) and watch them together (maybe even with your children). Be the one who brings positive value, not the one who causes conflict.

You should be strategic about your social circle. I do not know the people around you, but I have read stories where people lost their whole religious community after coming out. You may have a few loyal friends who will stay with you regardless, but even those friends may be under pressure from their friends and families. You prepare strategically for this by making new friends in advance. Preferably ones that your wife will like too. Every new friend who does not share your religion is a friend who will not abandon you when you come out. To some degree, friendship is a question of spending time together, and having experiences in common. Essentially, you should manage your time to spend more of it with people outside your religious community. (I hope they are available.) Again, bringing in new, nice people as friends is a positive step. Finding new interesting activities you and your wife could enjoy together, outside of your religious community, is also a positive step. You could take a family vacation outside of your community, with the new friends.

In short, build new bridges before you burn down the old ones. Treat everything related to your religious community as something you may lose, as something that may be used to blackmail you in the future, so do not invest in those things. Plan to minimize possible damage in the future.

Also, if you want your wife to support you, you also have to support her. Support her in all her dreams, help her explore the world. Be a team together. Make it obvious you would support her even where your religious community wouldn't.

Replies from: Jiro
comment by Jiro · 2015-02-03T17:08:22.201Z · LW(p) · GW(p)

Specifically, I think you should show your wife all the cool things you are interested in, starting with the noncontroversial ones.

Assuming that you don't already do this, doing this signals "I am trying to convince you of something which I don't want to talk about". People notice when you act in ways that you haven't before.

comment by gjm · 2015-02-03T01:17:26.455Z · LW(p) · GW(p)

(I take it "follow me" means "stay married to me despite the overt religious difference" rather than "deconvert along with me".)

Keeping secrets from your wife seems like a really bad idea. Are there ways for you to test the waters a little? (Admit to having serious doubts about your religion, maybe?) Perhaps there's something you can do along those lines that will both (1) give you some indication of what you can tell her without hurting her / making her leave you / ... and (2) prepare her mind so that when you tell her more it isn't such a shock.

My situation somewhat parallels yours -- formerly quite seriously religious, now very definitely (and openly) atheist, married to someone who is still seriously and actively religious. But my guess, from how you describe the situation, is that your family and friends are likely to be more bothered by irreligion than mine. (In particular, both I and my wife have plenty of friends and family who are not religious.) So I can tell you that it's all worked out OK for me so far, but I wouldn't advise you to take that as very strong evidence that openness about your (ir)religious opinions would work out well for you.

Even so, my guess is that it wouldn't be as terrible as you think it would. But, again, I don't think there's any reason for you to trust my guesses.

comment by mwengler · 2015-02-05T09:57:05.571Z · LW(p) · GW(p)

Lie.

Maybe you'll lie for the whole rest of your life. Maybe you will lie until your kids are out of the house. Maybe you'll lie for another few weeks or years and then decide the truth is important enough to you that Shulem's story no longer seems worse to you than living with the lie.

People lie all the time, and I think it would be foolish to try to craft a life in which you never lie, or in which you feel horribly guilty about lying. Maybe there is some society in which it makes sense not to lie for everybody, but maybe there isn't, either. Certainly a society such as your own is NOT that society. Your society enforces an appearance of conformity of agreement on certain matters of "fact" which are not obviously matters of fact at all. For you to fall foul of this enforcement is a purely voluntary action on your part. I suppose if there were a magical creature who could read your mind and who would punish you for lying, one might make the case that your best bet would be to tell the truth and take the societal consequences which are less severe than the consequences imposed by the magical creature. In some sense, this is analogous to choosing to one-box in the Newcomb's box problem: rationality means winning. For you to take societal consequences for telling the truth when the truth you are telling is that there is no magical creature reading your mind and enforcing rules about what it must contain, well, that is irrational to the extent that it involves making a choice to lose.

To the extent I can imagine being in your situation, my main concern would be getting my kids out. In my own personal lying, I never lie to my kids except if I think it is for their own good, not mine. Of course, you obviously love your Hasidic life so much that you might believe that lying to your kids to keep them in theirs is for their own good, and far be it from me to tell you you would be wrong. I am very aware that for me, an intelligent physicist engineer, the "cost" of false belief in the supernatural is much higher than it is for the clerk in my department who lives her entire life at her Jehovah's Witness church. She witnessed an atheist discussion between myself and someone else once and sent me fairly naive reasons she should stay in her belief, and I responded, and I meant it, that she should believe if that is what she needed to make her life work.

Honestly, I think your real difference from your peers is not that you found the reasons not to believe, but that you couldn't convince yourself to ignore them! For myself, I give you great credit for being like that, which is small consolation I imagine for risking the loss of your family and your life. I was lucky to come from a family which was already fairly liberal (compared to hasidism anyway) about religion and in which about half of them in my parents' generation leaned towards atheism anyway. I have the luxury of living in a society which barely has the energy to even complain about my atheism, in which my atheism is as vibrant and powerful as their religiosity. If I lived in a society that punished atheism, I would lie about it. I would go only as far as I could go publicly without risking the things I found important. My own version of one-boxing: I do NOT sacrifice myself for abstract beliefs.

Ironically, I will close by suggesting you have faith. Don't be more publicly atheistic now than you safely can. Chances are that if you hide it now, you will find over the coming years that your trade-off point moves towards more exposure, more openness. Enjoy your life: we ALL live in medieval mind-controlling societies; the differences are matters of degree rather than matters of kind. Enjoy the one you are in and make a difference on the margin. In real life, we are not truth-seeking machines, we are life-seeking machines. Our brains evolved to serve our lives; to invert that, and have a life which serves your brain, is hardly required, especially once you understand that magical mind-reading controlling creatures probably do not exist.

Mazel Tov, Mike

comment by Gram_Stone · 2015-02-02T21:20:30.796Z · LW(p) · GW(p)

Hey there, Parmenides. I am totally cool with you venting at me.

And if it did get out, then we'd risk losing virtually all of our (close-knit, wonderful, highly supportive) families and friends.

I take this especially seriously. Leaving the tribe is hard, especially when it has tangible benefits and costs. I think this is the biggest thing that the rationalist community has yet to fully address insofar as it seeks to compete with other communities in traditional domains, but certainly not for lack of awareness. I think I'll link this video I found of William and Divia Eden's wedding ceremony.

I kind of feel like a creep for doing that, but this is a great example of how rationalists are making their own communities and institutions and rituals. Eliezer makes a bunch of science jokes and implicitly jabs traditional everything, as he is wont to do; the spouses agree that they totally love each other and are in it for as long as they both think they should be and that both of those things are cool; they keep the usual wedding trappings because wedding trappings are fun, and fun is cool; they change their last name to Eden because Eden is a cool last name; and there's a general feeling in the air that being cool about most stuff when it's cool to do so is generally the coolest way to go. Basically they do everything possible to avoid the kind of shitty problem that you're in now. (That is not to say that you could have avoided it through some superior exercise of personal integrity.)

You might also dig these Skepticon panels on how rationalists deal with relationships and death. I highly recommend the one on relationships because there's an atheist on the panel who to my knowledge is a former fundamentalist Christian and is in a relationship with a woman whose entire family are devout Christians.

I say all of this because you can find a new community or have a hand in making a new one. LessWrong is one such community. I have said before that most LessWrongians are 'super smart and super ethical.' They make good company. ChristianKl says something important as well:

Without knowing the social environment in which you are operating it's hard to tell, but are you really sure you would lose all your friends?

You might be overestimating the probability that your tribe will abandon you. After all, that wouldn't be a very Christian thing to do, would it?

(More accurately, I'd label myself an agnostic leaning toward atheism with regard to the existence of one or more intelligent world-designer(s), but I give almost no credence to any religious claims beyond that. In any case, for simplicity I'm just going to refer to myself here as an atheist.)

I used to say something really similar to this. I would say, "Nominally, I'm agnostic, but practically, I'm atheist." Then I thought about other, less important beliefs in which I could make a distinction between 'the nominal and the practical.' Say a person with whom I live leaves the house and goes to the store, and it has been some time, and another person asks where they are. Usually, I say, simply, "He is at the store." But this is not necessarily true. It is entirely possible that on the way to the store he was diverted from his usual route and out of kindness stopped to help a troubled motorist, and so he is nowhere near the store; or that the store has become the site of a hostage situation, and so no one may enter the store; or that he has already, as we speak, been killed in a traffic accident, and so he may never enter a store again; etc. Yet, I do not tell the other resident, "Nominally, I am agnostic as to the whereabouts of our fellow resident, but practically, he is at the store." My veiled belief is that he is at the store, so this is how I act. It is undesirable to act contrary to this belief because the consequences are obvious and completely negative. It is easier with religious beliefs because the consequences are not as obvious and are short-term positive (but long-term negative). Nominal beliefs are useless in life because life is always lived in practice. All to say, I have learned here that very little is certain and that that is far less important than one would initially think. Near-certainty is more than enough, and you and everyone else rely on that fact every day.

My wife would probably decide to follow me - but there's a chance she might not, and I love her way too much to risk losing her.

I think it's funny (funny-strange, not funny-haha) that you say that you're not willing to risk losing her, but you go on for another paragraph after this about other reasons that you should not do this even if you are willing to risk losing her. It sounds to me like you, in fact, are willing to risk it, and rightly so in my opinion, and like that fact scares the shit out of you, and rightly so in my opinion.

Even if she did follow me it would cause her a tremendous amount of mental anguish which I really don't want to impose on her.

Realistically consider your ability to be exactly what is desirable to your wife for the rest of your life. Ask yourself if you think you can really avoid resenting her (and you do not have to be evil or lacking in character to be resentful) for the rest of your life. Never have I successfully willed myself to meet the expectations of others.

And that's besides the terrible emotional effects that a revelation of this sort would have on my parents, kids, siblings, and friends.

I really don't see how momentary 'grief' from the loss of a tribe member, even a community's worth, is worse than you feeling what you currently feel for a lifetime. And if you don't tell the kids, then you could perpetuate the cycle.

Talking to you is a moral hazard for me. I want to make more evangelizing atheists. I tried to be a counterpoint to your gloom more than an impartial advisor, and hopefully that resulted in a more thorough overview of the risks and payoffs of this decision. I say this because I see a lot of talk of the risks of coming out of the closet, but not a lot of talk about the payoffs, and when you do talk about them, you bury them in implications about risks. You're definitely reasoning in a motivated fashion. Like everyone else, you also have an overwhelming compulsion to maintain the status quo.

One last piece of advice, since I see a lot of 'all about their feelings, and not mine': Learn that making sure that the rest of your life does not suck at the cost of some hurt feelings is totally okay, and that learning that will make the rest of your life not suck.

comment by Alicorn · 2015-02-04T07:21:23.214Z · LW(p) · GW(p)

You don't say how old your children are. Is the timing on this revelation to your wife, if it occurs, likely to affect whether they are brought up religious, or has that ship already sailed?

comment by Unknowns · 2015-02-03T16:16:45.865Z · LW(p) · GW(p)

Like many of the others, I would advise you to tell your wife, but not necessarily others, at least until it seems more convenient to do so. But it is important that you make it clear to her that you are expressing your own position, and not attempting to convince her of it. As long as that is clear, I think there is no significant danger of losing her. Consider the one friend who already knows; if they did not abandon you over your beliefs, why would your wife do so? On the other hand coming out and openly trying to convert her to atheism is almost certainly a bad idea, and would definitely result in a significant risk of losing her.

Also, I think this situation is quite common in social groups which are strongly religious, and that while you may overestimate the harm that would be done by simply being open with everyone, many of the comments here dramatically underestimate that harm, because most of the commenters were never in such situations in the first place. And I think it is very, very wrong and harmful to suggest "well, if they would react badly, then ** them all, abandon everyone you know and join a new community."

comment by torekp · 2015-02-03T01:40:59.858Z · LW(p) · GW(p)

My wife would probably decide to follow me - but there's a chance she might not, and I love her way too much to risk losing her.

I'm just going to focus on this, because if I were in your position it would just loom over everything.

My wife made me swear not to keep secrets from her, because of her personal history with an ex. But even if she hadn't ... that's just too big and too relevant to your relationship. Having a secret like that damages your relationship, even apart from your own painful awareness. It just flies in the face of core values of marriage, or even friendship. It's disrespectful to her.

You have a lot to lose. But you also have a lot to gain, if you can repair this break. Are you (ex-)Christian? If so, she should at least be able to stay married, given what the New Testament says about divorce. Being in open disagreement would feel worse, but I don't think it would actually be worse; it would in fact be a closer relationship. And as you imply, that could be temporary. Which means you'd have to listen to her attempts to bring you back into the fold, with a mind as open as you can stretch it, and go over the whole religion question all over again. An ordeal, and a steal at twice the price.

comment by ChristianKl · 2015-02-02T19:04:42.274Z · LW(p) · GW(p)

Without knowing the social environment in which you are operating it's hard to tell, but are you really sure you would lose all your friends?

People don't have to follow you. It's quite okay to believe different things than the people around you.

comment by Squark · 2015-02-08T19:42:26.995Z · LW(p) · GW(p)

Parmenides, hello.

I am deeply touched by your story. I can't imagine how hard it must be in your place, for which reason I feel I have no right to tell you anything. However, you asked for advice, so here are my 5 cents.

I think that your most urgent moral obligation is towards your children. You shouldn't let them be raised believing in blatant falsehoods. I don't know how old they are which obviously makes a big difference. But I would make deconverting them a priority.

I would seriously consider telling my wife. I'm almost physically incapable of keeping secrets from my wife. I know it would be killing me if I did. But then, I don't know you, your wife or your relationship.

Make atheist friends. I don't know where you live so it's hard to be specific. Is there a LessWrong meetup nearby? Some other atheist community? Atheist people you know from other places: work, schools you went to?

If you want an e-mail friend, feel welcome to write me any time: top.squark@gmail.com.

I wish you the best of luck. I think I don't speak only for myself when I say LessWrong is rooting for you.

Replies from: gjm
comment by gjm · 2015-02-08T19:57:53.533Z · LW(p) · GW(p)

I'm almost physically incapable of keeping secrets from my wife.

Clearly you are a super partner.

comment by polymathwannabe · 2015-02-02T19:28:33.668Z · LW(p) · GW(p)

Movements like the Brights can give you ideas for your current situation. For an online community of like-minded people (of any faith or none), I recommend Beliefnet.

comment by gjm · 2015-02-04T13:15:29.177Z · LW(p) · GW(p)

Large multipurpose charities like Oxfam are difficult to evaluate and (perhaps mostly for that reason, perhaps not) don't get recommendations from organizations like Givewell.

Is there anything resembling a consensus on the effectiveness of any of these charities? Better still, a comparison of them with (one or more of, or a crude estimate of the effectiveness of) Givewell's top charities?

This seems like it might be useful for at least four reasons.

Firstly, for reasons similar to Holden Karnofsky's grounds for skepticism about questionable high-EV causes, some givers might prefer to give to a charity that does lots of obviously-probably-valuable things rather than one that does a single thing that seems to be very valuable but where some single error (e.g., it turns out that distributing mosquito nets just results in mosquitos evolving resistance and after a couple of years the nets no longer do much good and other ways of dealing with the mosquitos have become less effective) could make it hugely less valuable or even harmful. So if it turns out that Oxfam is half as effective (in expectation) as AMF, you might still prefer Oxfam on these grounds. (A toy numerical sketch of this point follows the list below.)

Secondly, some givers may be uneasy about weird unfamiliar charities doing weird unfamiliar things, as opposed to household names feeding the starving and funding infrastructure projects in the developing world. My guess is that most people inclined towards "effective altruism" won't feel much unease of this sort but, e.g., other members of your family might. If it turns out that Oxfam is half as effective (in expectation) as AMF but you can much more easily persuade your spouse to give to Oxfam than to AMF, giving to Oxfam might be the best available outcome.

Thirdly, the truth might actually be that Oxfam is 1% as effective as AMF (in which case, some not-particularly-EA folks might be persuaded to switch away to something more effective) or that actually it's probably 2x as effective but harder to measure (in which case, some EA folks might choose to switch away from the smaller more easily evaluated charities preferred by Givewell).

Fourthly, a comparison might give more insight into how it comes about that Givewell's top charities manage to be more effective (e.g., maybe the best bits of Oxfam are as good as anything else, but there's a lot of much less effective stuff in there too and they're hard to separate; or maybe the kinds of project Oxfam does are just systematically really hard; or maybe it's just that Oxfam is really big and diminishing returns set in for any given kind of work; etc.).
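To make the first point concrete, here is a minimal sketch, in Python and with entirely made-up numbers (they are not estimates for AMF, Oxfam, or any real charity), of how a single-model failure risk eats into a charity's headline effectiveness:

```python
# Toy model of the robustness argument above. All numbers are invented
# for illustration; they are not estimates of any real charity.

def expected_value(per_dollar_value, p_model_failure, value_if_failed=0.0):
    """Expected value per dollar, discounted by the chance that the
    charity's single core model fails (e.g. mosquitoes evolve net
    resistance and the intervention stops working)."""
    return (1 - p_model_failure) * per_dollar_value + p_model_failure * value_if_failed

# Single-intervention charity: high headline value, one big failure mode.
single_intervention = expected_value(per_dollar_value=10.0, p_model_failure=0.3)

# Diversified charity: lower headline value, but spread across many
# programs, so no single error can zero it out.
diversified = expected_value(per_dollar_value=6.0, p_model_failure=0.02)

print(f"single-intervention: {single_intervention:.2f}")  # 7.00
print(f"diversified:         {diversified:.2f}")          # 5.88
```

With these particular numbers the single-intervention charity still wins in expectation, but the gap is much narrower than the headline figures suggest, which is the sense in which a risk-averse giver might reasonably prefer the diversified one.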

[EDITED formatting only, to give more prominence to the central question amid all the other blather I wrote.]

comment by ZT5 · 2015-02-02T14:05:35.877Z · LW(p) · GW(p)

How would you respond if I said I'm a rationalist, however I don't feel a strong motivation to make the world a better place?

To be clear, I do recognize making the world a better place is a good thing, I just don't feel much intrinsic motivation to actually do it.

I guess in part it's because I expect genuinely trying to improve things (rather than making a token effort) to be a rather difficult and thankless task.

Also, as far as I can tell, my psychological makeup is such that feeling, thinking or being told that I'm "obligated" to do something actually decreases my motivation. So the idea that "I'm supposed to do that because it's the ethical thing to do" doesn't work for me either.

I do like the idea of making the world a better place as long as I can do that while doing something that inspires me or that I feel good about doing. Part of the reason, I think, is that I don't see myself being able to do something I really don't enjoy for long enough that it produces meaningful results. So in order for it to work, it pretty much has to be something I actually like doing.

In the end, I estimate that I'm more likely to accomplish things with social benefit if I focus on my own needs and wait until I feel inspired to do something for others (or until there's an overlap between meeting my needs and doing things for others), rather than trying to force an intention to do things for others (and then feel I'm not being honest with myself and that I don't actually have that intention).

I don't know how to feel about that.

Replies from: RowanE, Lumifer, mwengler, Viliam_Bur, None, None, None, adamzerner, Richard_Kennaway, MathiasZaman
comment by RowanE · 2015-02-02T14:53:03.598Z · LW(p) · GW(p)

The standard pledge for people in the rationalist sphere trying to make the world a better place is 10% of income to efficient charities, which, if you're making the typical kind of money for this site's demographics, is closer to "token" than "difficult and thankless task", even if it's loads more than most people do.

Personally, my own response was to notice how little guilt I felt for not living up to moral obligations and decide I was evil and functionally become an egoist while still thinking of utilitarianism as "the true morality".

Replies from: ZT5
comment by ZT5 · 2015-02-02T19:16:23.143Z · LW(p) · GW(p)

That's interesting, and I can relate to some of what you said. Thank you for sharing.

comment by Lumifer · 2015-02-02T19:24:22.537Z · LW(p) · GW(p)

I'm a rationalist, however I don't feel a strong motivation to make the world a better place?

There is no connection between being a rationalist and trying to make the world a better place.

What is a "better place" is a function of your values, anyway. People tend to disagree about that and occasionally go to war to figure out their disagreement :-/

comment by mwengler · 2015-02-06T13:29:30.609Z · LW(p) · GW(p)

My own desire to "make the world a better place" is rather attenuated, rather local, generally restricted to people I know and like.

In my own case, I have concluded that human morality is purely inherited sentiment. So I do stuff that feels good to me and skip the rest. So I gave $5 and a hamburger to a homeless guy I saw at a fast food place I frequent, but feel no particular desire to identify a charity which is effective at feeding other homeless people. The guy I supported made it to a position in front of my face, which is all I need to get sentimental.

I love my family and my children and my friends. I'll help them with stuff in interesting ways. If you want my help, figure out how to become my friend. Don't try to convince me abstractly that you "deserve" it or that helping you is more "effective" than helping my already fairly well off family and friends.

So yeah, I think it is quite possible to be rational in the sense of wanting to figure out truth from falsehood, and to not be particularly altruistic in an abstract sense.

comment by Viliam_Bur · 2015-02-03T09:46:45.960Z · LW(p) · GW(p)

What you feel is perfectly normal. Humans are not automatically strategic; we use adaptations instead of maximizing values. Think about your brain as a machine built with some heuristics... it works okay on average, in the ancient jungle. Do not overestimate it; it does not have the magical power of doing the right thing. As a rationalist, you should see the limitations of your own mind.

If we want to achieve more, we have to be strategic (or have luck). Find out what realistically motivates you: (1) punishments and rewards, (2) peer pressure. This is your environment. It may support you in your goals, it may actively work against your goals, or it may just move you in a random direction. And you do not have a magical power to overcome that pressure.

All you can do is find a few moments of extraordinary willpower and clearness of mind, and use those moments strategically to (a) steer your life towards a better future, and (b) increase the probability of having these lucid moments in the future. For example, if your environment works against your goals, you may change your environment so it works less against you in the future. Or try to create a habit that would push you in the direction you want to be pushed. If you do it strategically for a longer time, these small changes may add together, and your life may change.

I do recognize making the world a better place is a good thing, I just don't feel much intrinsic motivation to actually do it.

This is what a human brain does when it does not receive social rewards (and possibly receives social punishments) for thinking about making the world a better place.

thinking or being told that I'm "obligated" to do something actually decreases my motivation

I guess in the past "being told you are obligated to do something" was probably a good predictor of coming punishment (if you fail to fulfill your obligation). Also "obligation" often means that if you do it successfully, you will not receive a reward because, hey, you merely did your duty. Of course you hate these all-pain-no-gain obligations.

I don't see myself being able to do something I really don't enjoy for long enough that it produces meaningful results

That's how the human brain is built. You can't enjoy something you don't receive rewards for. The difference between humans is that some of them were trained to give themselves internal rewards for doing some stuff; then they can enjoy doing that stuff even without visible results.

I estimate that I'm more likely to accomplish things with social benefit if I focus on my own needs and wait until I feel inspired to do something for others

...or you could try to create some social reward system. Which is easier said than done, but maybe you could find a group of people with similar goals, tell each other about the good stuff you did, and then provide each other with social rewards.

The human brain is designed to work according to some rules. You cannot overcome these rules, but you can try to change your environment so that these rules start working for you instead of against you.

Replies from: ZT5
comment by ZT5 · 2015-02-04T00:34:02.637Z · LW(p) · GW(p)

I think your analysis is largely correct.

A lot of this is very accurate, and a little depressing since I probably do need a social reward system, or a support network - and I don't see an easy way to create one right now. :/

I do like having more clarity though, and understanding of what actually is the problem here.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2015-02-04T08:55:12.699Z · LW(p) · GW(p)

As an example, I want to make a computer game. Programming has the advantage of providing quick feedback, if you are doing it well. I decide to add a new feature, I write it, then I run the game, and I see the feature is there. I get some reward in the form of seeing the new feature that works.

(And "doing it well" in this context means developing the program in small steps, where each step gives you some visible outcome. Small iterations. As opposed to doing some complex step that would take a lot of time while providing you no results until it is completed. Note that "visible outcome" does not necessarily mean something that is displayed on the screen during the normal run of the program. It is something that you as a programmer can see, for example a successful unit test result of a function that usually does not interact with the screen. I suspect that the impact of unit test on programmer's morale is more important than its impact on the correctness of the code.)

But this is still just feedback from a computer. There is no social feedback here. So I need another support layer to get that. I have friends who are also computer programmers. So whenever I add some new feature to the program, I send them the program along with the source code by e-mail. I do not expect them to inspect the source code too much; usually they just start the program and click on the new feature I have added. But I know they are programmers, and that the possibility of looking at the source code is there. Also, as programmers they can better understand and appreciate the features I have added. (To a non-programmer, trivial stuff often seems very hard, but with the hard stuff they sometimes don't even understand why it had to be done.) So now my programming has a social dimension, long before the program is finished. And we do it by e-mail (and a Skype talk once in a while, and meeting in person once a month), so even everyday geographical proximity is not needed. Of course meeting more frequently in person would be even better.

You could try to find this kind of support here. Or anywhere else.

One important detail about this kind of "observer support" is that it works best if it provides you only positive feedback. That is, when you do something and send it, you get a "that's nice!" reaction, and when you do nothing for a longer time, you only get a gentle reminder. (As opposed to people criticizing you: "hey, it was five days and you did nothing, man, wake up", or even criticizing your progress as insufficient: "all you did in three days was this lousy green rectangle, this way you will not complete it in a thousand years".) Any progress = good. Any lack of progress = neutral. There is nothing negative. (As a general rule, punishments are way overrated. They usually bring more harm than good, especially in the long term.) Sometimes it is difficult to find people who give this kind of feedback; some people are not interested at all, some people are too eager and switch to slavemaster mode.

So, what would you like to have a social reward system for?

Replies from: ZT5
comment by ZT5 · 2015-02-05T23:06:48.713Z · LW(p) · GW(p)

That's interesting. Thank you for a detailed explanation of this.

I can agree a lot with the "only positive/neutral feedback" rule.

So, what would you like to have a social reward system for?

I'm not sure, but this got me thinking in a good way. I like this question.

comment by [deleted] · 2015-02-03T05:58:38.437Z · LW(p) · GW(p)

My own position is closer to 'a human making the world a better place is only reliable in incremental local ways and inevitably goes wrong at large scales, because our world map is inevitably horribly horribly flawed no matter how hard we try to perfect it outside extraordinarily narrow areas' than 'not much motivation to do it'. Totally get what you are saying though, and it can lead to similar results. There are a few simple ways to throw a bit of money around at exactly such incremental local things, if the money exists (e.g. GiveWell).

Incidentally I would actually call for dissensus on how to make the world a better place. The more things people are trying the better the odds that something will actually work and then get picked up on.

comment by [deleted] · 2015-02-02T17:47:39.593Z · LW(p) · GW(p)

If I were to take a reductionist approach, what's the connection between rationality and making the world a better place?

Replies from: ZT5
comment by ZT5 · 2015-02-02T18:33:27.762Z · LW(p) · GW(p)

I understand that a rationalist can potentially have any kind of goals, not necessarily altruistic ones.

The reason for bringing this up is that I want to see if this kind of topic can be discussed here on LW, at all. And me being an (aspiring) rationalist is very relevant information to this.

Replies from: None
comment by [deleted] · 2015-02-03T16:07:50.738Z · LW(p) · GW(p)

Asking questions is one of the most rational things you can do. So screw "LessWrong". If some people aren't willing to discuss an issue with you like adults then you can't really call them rational. They should just quit to a photography blog or something.

comment by [deleted] · 2015-02-05T02:38:17.236Z · LW(p) · GW(p)

A few thoughts here:

  1. There's a concept called "Right Action" - acting by using your logic to fulfill your values. We all have things that scare us, bore us, etc., but ultimately you can make the choice to act on what you value. Sometimes, you just choose to do what you think is right, regardless of how you feel.

  2. One thing that could help is to remove the word "should" from your mental vocabulary - as per above, every moment is a choice. You get to choose whether to act on what you value. This takes "saving the world" from something that is repelling because of obligation to something that is compelling because of choice.

  3. One other thing that might help is to remove any thoughts of "making the world a better place" from your mind. This is a huge goal, it's daunting, and it's not actionable. Instead, what might work is to focus on a particular project, and even then, only on the very next action to take. I have a long term plan to make the world a better place, but "making the world a better place" almost never enters my day to day thoughts except as a reminder of WHY I'm taking those small, individual actions.

  4. Finally, something that's helped me is to think about emotional and willpower sustainability (which you talk about at the very bottom). There are a few things you can do in that regard. Firstly, find a project to focus on that excites you and is mostly work that you enjoy. Secondly, if you're doing something that is boring/scary/unfulfilling to you (as every project sometimes requires), see if you can delegate it. Thirdly, if you can't delegate it, make sure to take breaks and give yourself permission to do things that recharge you.

comment by Adam Zerner (adamzerner) · 2015-02-04T03:56:27.644Z · LW(p) · GW(p)

Human beings derive joy from doing good. Studies on happiness find that this is one of the bigger correlates of happiness. If you're at all normal, there's probably a lot of room for you to do more good and be happier.

As for intrinsic motivation and System 1... it's difficult; updating your System 1 isn't as straightforward as updating your System 2 (aka using evidence to update your beliefs). One day I plan on writing a post about this...

However, there are some things I'd like to note:

I guess in part it's because I expect genuinely trying to improve things (rather than making a token effort) to be a rather difficult and thankless task.

I don't think it's that difficult or thankless (although I'm definitely in the minority here and I don't know anyone as optimistic on this front as I am, so take that for what you will). For example, take this very website/community. There are tons of relatively simple and straightforward improvements that could be made that I think would have a relatively high impact, like making the website easier to use and including new features. For example, adding a section that makes it easy for LWers to brainstorm and collaborate on projects. That's a high-level action that I could see trickling down and having a big impact. And if you're talking "genuinely" as in making fundamental changes to the way things work... I've got some thoughts here.

Also, as far as I can tell, my psychological makeup is such that feeling, thinking or being told that I'm "obligated" to do something actually decreases my motivation. So the idea that "I'm supposed to do that because it's the ethical thing to do" doesn't work for me either.

Me too :/. I think that it's easy to give this spite too much weight as you make decisions. To some extent, I think it's ok to "let the spite be". Trying to exert complete control over these sorts of emotions is too stressful. Whatever marginal gains you make in making your emotions "more accurate", it's probably outweighed by the stress it causes. Finding the right balance is difficult though.

I do like the idea of making the world a better place as long as I can do that while doing something that inspires me or that I feel good about doing. Part of the reason, I think, is that I don't see myself being able to do something I really don't enjoy for long enough that it produces meaningful results. So in order for it to work, it pretty much has to be something I actually like doing.

I think that you'd be more motivated if a) you thought you had a better chance at succeeding and b) recognized how big an impact altruism probably has on your happiness.

I don't know how to feel about that.

For the record, I admire your honest attempts at introspection and truth.

comment by Richard_Kennaway · 2015-02-03T08:49:56.292Z · LW(p) · GW(p)

That's pretty much my attitude as well.

comment by MathiasZaman · 2015-02-02T15:53:38.250Z · LW(p) · GW(p)

How would you respond if I said I'm a rationalist, however I don't feel a strong motivation to make the world a better place?

With just this information, I'd likely say that being an aspiring rationalist doesn't really have anything to do with your goals, as it's mostly about methods of reaching your goals, rather than telling you what your goals should be.

Following it up with this:

To be clear, I do recognize making the world a better place a good thing, I just don't feel much intrinsic motivation to actually do it.

Confuses me a bit, however.

If one of your goals is making the world a better place (that's how I'd rephrase the statement "I do recognize making the world a better place is a good thing," seeing as saying things like "X is good" generally means "X is a desirable state of the world we should strive for"), your intrinsic motivation shouldn't matter one bit.

I have little intrinsic motivation for eating healthy. Preparing food is boring to me and I don't particularly enjoy eating most healthy things. I still try to eat healthy, because one of my goals is living for a very, very long time.

I guess in part it's because I expect genuinely trying to improve things (rather than making a token effort) to be a rather difficult and thankless task.

On the one hand: How difficult is it to give 10% (or even 5 or 1 percent, if your income is very low) to an effective charity?

On the other hand: So fucking what? You know how the world becomes a better place? By people doing things that are difficult and thankless because those things need to be done. The world doesn't become a better place by people sitting around waiting for the brief moment of inspiration in which they sorta want to solve a local problem.

Part of the reason, I think, is that I don't see myself being able to do something I really don't enjoy for long enough that it produces meaningful results. So in order for it to work, it pretty much has to be something I actually like doing.

This is one of the many reasons why effective altruism works. It allows you to contribute to big problems, while you're doing something you enjoy and are good at.

(Or we can wait for /u/blacktrance to come in and try to convince you that egoism is the right way to go.)

Replies from: emr, ZT5
comment by emr · 2015-02-02T17:00:29.150Z · LW(p) · GW(p)

On the other hand: So fucking what? You know how the world becomes a better place? By people doing things that are difficult and thankless because those things need to be done. The world doesn't become a better place by people sitting around waiting for the brief moment of inspiration in which they sorta want to solve a local problem.

Historically, isn't that exactly how the world became a better place? Better technology and better institutions are the ingredients of reduced suffering, and both of these seem to have developed by people pursuing solutions to their own (very local) problems, like how to make money and how to stop the government from abusing you. Even scientists who work far upstream of any application seem to be more motivated by curiosity and fame than by a desire to reduce global suffering.

Of course, modern wealth disparities may have changed the situation. But we should be clear about whether we think we've entered a new historical phase in which the largest future reductions in suffering are going to come from globally-altruistic motivations.

Replies from: Lumifer
comment by Lumifer · 2015-02-02T19:26:00.808Z · LW(p) · GW(p)

modern wealth disparities may have changed the situation

Compared to what, medieval Europe?

Replies from: emr
comment by emr · 2015-02-02T20:39:34.099Z · LW(p) · GW(p)

Yes. Richer states can afford to transfer more wealth. We see this in the size of modern (domestic) welfare states, which could not have been shouldered even a century ago.

Replies from: alienist
comment by alienist · 2015-02-08T05:18:32.373Z · LW(p) · GW(p)

Well, Rome was basically a welfare state two millennia ago.

comment by ZT5 · 2015-02-02T18:02:57.116Z · LW(p) · GW(p)

If one of your goals is making the world a better place (that's how I'd rephrase the statement "I do recognize making the world a better place is a good thing," seeing as saying things like "X is good" generally means "X is a desirable state of the world we should strive for"), your intrinsic motivation shouldn't matter one bit.

That's not exactly what I meant, but nevertheless this is a good point.

On the other hand: So fucking what? You know how the world becomes a better place? By people doing things that are difficult and thankless because those things need to be done. The world doesn't become a better place by people sitting around waiting for the brief moment of inspiration in which they sorta want to solve a local problem.

Ok, let's play this out.

As I already said, I have good reason to believe that "should-based" motivation wouldn't work for me.

So what I'm wondering is, am I allowed to say "due to the way my mind currently works I'm choosing to optimize X by not actively committing to doing X" without running into the "you're not trying hard enough" kind of argument?

Just because some people do things in a particular way doesn't mean I can or should try to do things the same way. It may simply not work for me. This may include thinking in a certain way or having a particular mindset.

Replies from: MathiasZaman
comment by MathiasZaman · 2015-02-02T19:18:21.962Z · LW(p) · GW(p)

So what I'm wondering is, am I allowed to say "due to the way my mind currently works I'm choosing to optimize X by not actively committing to doing X" without running into the "you're not trying hard enough" kind of argument?

I'd say yes, even if it would only be to prevent worse things.

To quote one of Yvain's recent posts:

The rationalist community tends to get a lot of high-scrupulosity people, people who tend to beat themselves up for not doing more than they are. It’s why I push giving 10% to charity, not as some kind of amazing stretch goal that we need to guilt people into doing, but as a crutch, a sort of “don’t worry, you’re still okay if you only give ten percent”. It’s why there’s so much emphasis on “heroic responsibility” and how you, yes you, have to solve all the world’s problems personally.

This might be a similar situation. If your choice is doing nothing vs. doing something, doing something is pretty much always better. (Assuming you do useful things, but let's take that for granted for now.)

If you follow the standard Less Wrong interpretation of utilitarianism, you're pretty much never doing enough to improve the world. Of course no one actually holds you to such unreasonable standards, because doing so would be pretty insane. If you tried to be a perfect utility maximizer, you'd end up paralyzed with decision fear, anxiety and/or depression, and that doesn't get us anywhere at all.

Since I'm quoting people, here's a useful quote to have come out of the tumblr rationalists:

[Considering yourself a bad person because utilitarianism] is like saying Usain Bolt is slow because he runs at such a tiny fraction of the speed of light.

To make that more specific to your own situation:

Maybe saying "Alright, I'll give 10% of my income and we call it that," doesn't work for you, for whatever reason. Of course you're allowed to figure out something else that does work for you. That's what rationality is all about. Reaching your goals, even if the standard approach doesn't work for me.

That being said, it might still be interesting to see if changing the way your mind works isn't easier. (It probably isn't, but just in case...) From what you describe, it sounds like a form of akrasia, which you might be able to work around in other ways than a variant of planned procrastination.

comment by Adam Zerner (adamzerner) · 2015-02-03T21:02:17.604Z · LW(p) · GW(p)

To any of you football fans out there, I think the outrage over the Seahawks' decision to throw on the goal line is a classic example of hindsight bias. Throwing on the goal line is hardly unheard of, and they couldn't have run it three times anyway. This FiveThirtyEight article explains why throwing actually was a good decision. Anyway, everyone thinks that the decision to throw was terrible, and I think that they're falling victim to hindsight bias.

Replies from: Ander, Salemicus, mwengler
comment by Ander · 2015-02-04T00:46:18.382Z · LW(p) · GW(p)

I agree. Given that they had only one remaining timeout, the sequence of pass, then run (timeout), gave them three chances to score instead of two.

Still, it's quite possible that a less risky throw might have been superior, even if it had a lower chance of success.

As it was, that throw was inches away from being the game-winning touchdown instead of the game-losing interception.

comment by Salemicus · 2015-02-06T14:12:56.282Z · LW(p) · GW(p)

The 538 article is exactly the kind of context-free argument that justly gives 'statistics' and 'rationalism' a bad name. Yes, in game-theoretic terms you want to pass a certain amount of the time. Yes, there is a good argument for calling pass on that specific down. But the issue is not whether 'pass' in the abstract was a good decision, but whether the specific play-calling actions in their particular context were good. And they were not. They were indefensible.

  • Firstly, Seattle went out in a 3 WR group, thinking that this would get New England out of goal-line defense (and thus make it easier for the run play). This didn't work, and they were foolish to think it would. This put Seattle in an awkward position where they would either have to run without enough blockers, or put the ball in the hands of poor players.
  • Secondly, Seattle ran the play out of the shotgun, making clear their intention to pass. There was no play-action or roll-out. This gives up all the game-theoretic part. If you are going to inform the opponent of your choice, you need to go with your strongest possible choice, whereas...
  • Thirdly, Seattle ignored their most favourable matchup. Seattle were the second-best offense at power-rushing, playing against the worst defense at stopping power runs. And instead...
  • Fourthly, Seattle went with a very unfavourable matchup in very unfavourable circumstances. They asked a receiver known for his downfield speed (and not much else) to fight for the ball on the goal-line. With the centre of the field cluttered, and all the defenders short, they called a quick slant into the centre of the field.

Yes, that play normally doesn't result in an interception. Yes, there's an element of bad luck there. But it's also an example of really poor decision-making that ended up with Seattle essentially running the play New England would have chosen for them. They screwed up, and it's embarrassing seeing the lengths people go to to defend the indefensible.

I would recommend this article highly for more depth.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-02-06T16:08:06.029Z · LW(p) · GW(p)

The 538 article is exactly the kind of context-free argument that justly gives 'statistics' and 'rationalism' a bad name.

Are you sure? That sounds exaggerated. It definitely wasn't context-free. Some examples:

  • "Let’s spot the Pats some yards, then, and assume the Patriots win1 about as often as a typical team in the AFA model would2 if they started on the 40-yard line. That would give them a 14 percent chance. Maybe that’s generous, but we’re looking for an upper bound."

  • "But the Seahawks don’t have an average rusher; they have Beast Mode..."

Firstly, Seattle went out in a 3 WR group, thinking that this would get New England out of goal-line defense (and thus make it easier for the run play). This didn't work, and they were foolish to think it would. This put Seattle in an awkward position where they would either have to run without enough blockers, or put the ball in the hands of poor players.

An awkward position? They couldn't have run on all three plays, and if you have to have a pass play, 3 WR against a goal-line defense is a good place to use your passing play.

Replies from: Salemicus
comment by Salemicus · 2015-02-06T17:13:23.336Z · LW(p) · GW(p)

Are you sure? That sounds exaggerated. It definitely wasn't context-free. Some examples:

None of which consider the specific pass play that they chose to run, nor the specifics of the personnel matchup.

An awkward position? They couldn't have run on all three plays, and if you have to have a pass play, 3 WR against a goal-line defense is a good place to use your passing play.

In a vacuum, it could be a good place! But in the specific context it wasn't, because New England still easily overmatched the Seattle receivers. Neither Baldwin nor Kearse was ever going to get open against that coverage (and they didn't), which meant that Wilson had one viable target - a downfield specialist, not a possession receiver, covered by a specialist corner, running an inside slant (possibly the riskiest possible route in that situation). And all this from the shotgun, meaning there was no worry about a run. Carroll and Bevell knew all this before the snap, but they still chose to run that play. I think they must have known they'd been out-thought, but didn't want to call a timeout there, and so went ahead anyway.

Suppose Seattle had done the kind of thing teams normally do when they pass from the 1-yard-line - come out showing run, then run a play-action, say with Wilson rolling out, with one tight end and one receiver to look for plus the chance of running it in himself, plus the easy option of throwing the ball away. Then 538's analysis would make sense. But that's not at all what happened. 538 doesn't mention the passing numbers in that situation from shotgun formation. It's like putting your money in penny stocks, and then defending your decision with the generic claim that equities are a good investment.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-02-06T19:22:29.913Z · LW(p) · GW(p)

I see. I thought that you meant context as in Seattle/NE but it seems that you mean the formation and stuff. I think that what you're saying makes sense now.

Personally I'd give more weight to:

  • The threat of running it from the shotgun.
  • The chances that a SEA receiver gets open vs. that goal line defense.

... and so I still don't think it's an awful decision. I think Wilson should have understood the situation and only made a really safe throw, and so the play call wasn't that risky. But I do agree with you that play action would have been better, especially with a roll out.

comment by mwengler · 2015-02-06T13:17:58.005Z · LW(p) · GW(p)

I have always thought that any discussion of sports was sort of a playground for human bias and human error. So much passion for no real purpose. Affiliating with a team? The opposite of taking a principled position.

I guess it never occurred to me before that actually making this thought explicit might be valuable. But since discussion of the pass has reached Less Wrong, here it is.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-02-06T15:41:24.221Z · LW(p) · GW(p)

So much passion for no real purpose.

What does have "real purpose"? Could you elaborate on this?

My thoughts are that most things we do for fun don't really have "purpose", that sports are no different, and that they're an underrated way (amongst this community and most of society) to accomplish the goals of having fun, being in good shape and being happy.

Replies from: emr
comment by emr · 2015-02-08T06:21:02.395Z · LW(p) · GW(p)

Ah! I may have a meta-contrarian position to contribute:

This is not useful -> This is useful for having fun -> Fun is a valid goal, but this is a fairly ineffective way to have fun.

In the same way that people are routinely in error about how to improve everything else, they are routinely in error about what things are good at actually providing fun. And there is a familiar resistance to the direct application of thought to the problem, which relies on the normal excuses ("Isn't it all subjective?", "But thinking is incompatible with feeling! Haven't you seen Spock?").

Playing sports looks really good from an "effective hedonism" standpoint, even up to several hours a week. But for most people, I'm skeptical that regularly watching sports provides a decent long-term return, when done for more than a few hours every month or year.

Tangentially related: My local baseball team is far more fun to watch than the top teams, because they make more mistakes, which leads to more unpredictable and exciting plays, but at the same time they're still athletic enough that you're not just watching children flounder around. In the same way, I really enjoyed the last Super Bowl.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-02-08T15:48:18.177Z · LW(p) · GW(p)

Good points. Particularly about watching vs. playing. I'm a lot more skeptical about the value of watching.

comment by Richard_Kennaway · 2015-02-02T09:50:09.677Z · LW(p) · GW(p)

On LessWrong, or on blogs by LWers, advice has been given on how to become bisexual, or polyamorous.

However, there is no advice on LessWrong for how to stop liking something. Yet there are many stories of people having great difficulty giving up such things as video games and internet distractions. It seems to be easier to acquire a taste than to relinquish it.

All the advice on resisting video games and the like (internet blockers, social support) has been on using tricks of one sort or another to restrict the act, not the desire. Even when experimenting with specific deeds, it is easier to try something in spite of aversion than to forego it in spite of attraction.

Are there effective methods of ceasing to enjoy some activity, or of refraining from enjoyable things? What presently enjoyable activities would you use them on?

Replies from: emr, gjm, ChristianKl, RomeoStevens, None, None
comment by emr · 2015-02-02T17:06:18.140Z · LW(p) · GW(p)

All the advice on resisting video games and the like (internet blockers, social support) has been on using tricks of one sort or another to restrict the act, not the desire.

Some advice is about substitution, i.e. you identify the emotional need driving a stubborn behavior, and find a more approved behavior that satisfies the same need.

Replies from: hesperidia
comment by hesperidia · 2015-02-03T18:59:17.047Z · LW(p) · GW(p)

Interesting concept. I read about something similar in the book Homeward Bound: Why Women Are Embracing The New Domesticity - the author recounts that, when working at a dead-end job with no challenge, her impulse for creativity got shunted into "DIY" projects of questionable value, like stenciling pictures of frogs onto her microwave, and that once she got into a job that stretched her abilities, the desire for "DIY" evaporated.

comment by gjm · 2015-02-03T15:11:21.909Z · LW(p) · GW(p)

For me, becoming able to like a new thing seems like a much more positive change than stopping liking an old thing. The latter -- even if it would be beneficial overall -- feels like an impairment, a harm.

If others feel the same way -- I don't know whether they do -- then they would be less inclined to offer advice on how to impair yourself than on how to enlarge your range of pleasures. And if others are expected to feel the same way, advice-givers might refrain from offering advice that would be perceived as "how to impair yourself".

(A perfectly rational agent would scarcely ever want to lose the ability to like something, since that would always lower their utility. The exceptions would be game-theory-ish ones where being known not to like something would help others not fear that they'd seize it. Of course, we are very far from being perfectly rational agents and for many of us it might well be beneficial overall to lose the ability to enjoy clickbait articles or sugary desserts or riding a motorcycle at 100mph.)

Replies from: Lumifer, Richard_Kennaway
comment by Lumifer · 2015-02-03T18:09:02.599Z · LW(p) · GW(p)

I concur with gjm.

The difference between "I like X" and "I am addicted to X" might be relevant here.

comment by Richard_Kennaway · 2015-02-03T20:15:43.943Z · LW(p) · GW(p)

A perfectly rational agent would scarcely ever want to lose the ability to like something, since that would always lower their utility.

What is a perfectly rational self-modifying agent? I don't think anyone has an answer to that, although surely it is something that MIRI studies. The same argument that proves that it is never rational to cease liking something, proves that it must always be rational to acquire a liking for anything. You end up with wireheading.

comment by ChristianKl · 2015-02-02T10:17:04.558Z · LW(p) · GW(p)

For food items you can create distaste by mixing the food item with something that makes you throw up.

Replies from: Viliam_Bur, Mollie
comment by Viliam_Bur · 2015-02-03T09:22:45.403Z · LW(p) · GW(p)

Or just start eating Soylent all day long. And have no other food at home. For a month.

It is easier to avoid eating something, if you simply do not have it at home. And if you live on Soylent, you don't even go to food shops.

This may be generalizing from one example, but it works for me. When I am on Soylent, my cravings for other food just somehow disappear.

comment by Mollie · 2015-02-03T01:03:51.922Z · LW(p) · GW(p)

This comment made me wonder if trigger warnings might have a place on Less Wrong. Probably not, because I suspect that the utility gains would not be worth the controversy of trying to change norms in that direction.

Replies from: JoshuaZ, ChristianKl, MathiasZaman
comment by JoshuaZ · 2015-02-03T01:07:56.041Z · LW(p) · GW(p)

This seems if anything like an argument against it: it isn't considered a commonly triggering issue. This shows one of the fundamental problems with trigger warnings: it is unclear and often highly subjective what should get such a warning.

Replies from: Mollie
comment by Mollie · 2015-02-03T16:57:16.571Z · LW(p) · GW(p)

I agree that "unclear and often highly subjective" are downsides to categories of content that warrant trigger warnings, but this exchange (below) would pretty clearly warrant a trigger warning for eating disorders if it was on a site that used trigger warnings.

Are there effective methods of ceasing to enjoy some activity, or of refraining from enjoyable things?

For food items you can create distaste by mixing the food item with something that makes you throw up.

Replies from: JoshuaZ
comment by JoshuaZ · 2015-02-03T17:17:18.029Z · LW(p) · GW(p)

But if anything that actually shows how subjective this is and how much of an issue it is. It is one thing to say that trigger warnings should apply to issues that may involve PTSD. It is quite another thing to suggest that they should involve mentions of every possible mental health issue.

comment by ChristianKl · 2015-02-04T13:02:47.073Z · LW(p) · GW(p)

Did the comment trigger you in a bad way?

Replies from: Mollie
comment by Mollie · 2015-02-04T21:21:54.905Z · LW(p) · GW(p)

No, my eating disorder hasn't been an active problem for ~8 years. Thank you for your concern.

comment by MathiasZaman · 2015-02-03T07:23:00.130Z · LW(p) · GW(p)

Content warnings/notes for threads might be worth it (and not that hard to do, seeing as threads already support tags), but doing so for individual comments would be mostly annoying.

comment by [deleted] · 2015-02-03T02:56:01.881Z · LW(p) · GW(p)

On LessWrong, or on blogs by LWers, advice has been given on how to become bisexual, or polyamorous.

That seems like bad advice. Your preferences are what they are. "Giving advice on how to become bisexual, or polyamorous" seems just as bad as "giving advice on how to become heterosexual, or monogamous."

However, there is no advice on LessWrong for how to stop liking something... Are there effective methods of ceasing to enjoy some activity, or of refraining from enjoyable things? What presently enjoyable activities would you use them on?

This does seem like an issue that needs discussion, however. I took the hard route myself, but maybe my story is interesting. Perhaps later, when I have time, I can be prodded to give an overview of how I transformed my preferences over the last 15 years.

Replies from: Unknowns, JoshuaZ, Richard_Kennaway, ChristianKl
comment by Unknowns · 2015-02-03T16:20:19.215Z · LW(p) · GW(p)

What's wrong with "giving advice on how to become heterosexual, or monogamous" to someone who wants to become heterosexual or monogamous?

Replies from: pianoforte611
comment by pianoforte611 · 2015-02-05T01:13:39.792Z · LW(p) · GW(p)

Nothing if the advice worked, but it doesn't.

Replies from: Unknowns, alienist
comment by Unknowns · 2015-02-05T01:19:35.405Z · LW(p) · GW(p)

It may not always work, or even usually, but it worked for someone I know.

Replies from: Izeinwinter, NancyLebovitz
comment by Izeinwinter · 2015-02-07T10:15:11.503Z · LW(p) · GW(p)

Eh, it's not that it has a 100% failure rate; the main issue is that it very frequently has utterly catastrophic mental health consequences. Trying to change your sexuality is dangerous. As in "has a significant chance of killing you".

There are reasons the LGBT community is so down on attempts at curing the gay - "suicides and mental breakdowns".

I'm not aware of any statistics on the results of people trying to become gay, but a) I would be surprised if enough people have tried this to make a valid sample, and b) I do not recommend the experiment for obvious reasons of safety.

There are safe-ish ways to turn sexuality off entirely, but just being gay is not generally enough for people to want to volunteer for those.

I've met enough people who reported their sexuality changing over time that I wouldn't be shocked if tomorrow a pharma company announced a novel side effect / off-label use of the latest anti-depressant that resets your sexuality to "healthy adult human", but the history of attempts at deliberate intervention in this field is horrifying.

Replies from: alienist
comment by alienist · 2015-02-07T22:51:20.340Z · LW(p) · GW(p)

There are reasons the LGBT community is so down on attempts at curing the gay - "suicides and mental breakdowns".

As opposed to, you know, ordinary tribal feelings against defection. There are elements in the deaf community that oppose attempts to cure deafness as well.

Replies from: Izeinwinter
comment by Izeinwinter · 2015-02-08T08:29:36.703Z · LW(p) · GW(p)

Those too, but the negative impact and severe paucity of efficacy are quite real enough. About the only people still trying this today are religiously motivated quacks, with predictably depressing results, but even the historical attempts by people honestly trying to help, as opposed to following the mandates of their imaginary friends in the sky, had very bad results. Sometimes sexuality shifts over time. We have nothing even resembling a clue why, or how to do that deliberately.

If you tell me you know people conversion therapy worked for, I will not doubt you. People given chalk tablets for treatment routinely get better from very fatal diseases in double-blind studies. Not often, but it happens.

This does not mean chalk tablets are a panacea. Or, you know, medicine at all.

comment by NancyLebovitz · 2015-02-07T09:47:35.344Z · LW(p) · GW(p)

Details? What exactly did they do, and how large was the change? How long ago was it?

comment by alienist · 2015-02-07T06:50:23.983Z · LW(p) · GW(p)

Or rather, anyone who claims it does is branded an "evil homophobe", thus no one would dare publish a study claiming it does.

Replies from: gjm
comment by gjm · 2015-02-07T13:32:08.017Z · LW(p) · GW(p)

People have been trying to "cure" homosexuality since times when attitudes to homosexuality were very different from what they are now. If it's curable then there should (at least) be credible studies from earlier years saying so. Are there?

(Robert Spitzer published a study as recently as 2001 claiming to find evidence that some homosexual people can become heterosexual, so evidently it was possible to dare to do that then. He has since publicly changed his mind, which of course can be interpreted in different ways.)

comment by JoshuaZ · 2015-02-03T03:16:17.509Z · LW(p) · GW(p)

That seems like bad advice. Your preferences are what they are. "Giving advice on how to become bisexual, or polyamorous" seems just as bad as "giving advice on how to become heterosexual, or monogamous."

Why? That might make sense if a preference is part of a terminal value. But if it isn't, this may not be that different from advice on, say, how to enjoy eating healthy foods (in my own case the answer for spinach was to eat it frequently with tasty cheese). For that matter, there might well be circumstances where it would make sense to try to adjust one's preferences to become closer to monogamous (say one is dating someone who is strongly monogamous).

comment by Richard_Kennaway · 2015-02-03T09:28:40.986Z · LW(p) · GW(p)

Your preferences are what they are.

Preferences change: sexual development is an obvious example. Preferences can be changed: "cultivating a taste" is a thing. Although in line with my original question, the only stock phrase I can think of that comes close to the opposite of "cultivating a taste" is "overcoming temptation". A taste, once acquired, is seen as something that can only be suppressed by a continuing effort, never removed.

An alternative approach might be described as "enlightening one's self-interest": learning to perceive the harm of something clearly enough that one is no longer inclined to indulge it.

comment by ChristianKl · 2015-02-04T13:17:25.306Z · LW(p) · GW(p)

That seems like bad advice. Your preferences are what they are. "Giving advice on how to become bisexual, or polyamorous" seems just as bad as "giving advice on how to become heterosexual, or monogamous."

Preferences can be quite complex.

Most people do like the idea of having sex with multiple people, but might at first dislike the idea that their partner has sex with multiple people. Being polyamorous requires specific skills, such as dealing with jealousy, that aren't needed to the same extent by people who aren't poly.

Someone who is in love with a person who's poly might want to become poly themselves to be in that relationship.

comment by [deleted] · 2015-02-05T05:00:47.047Z · LW(p) · GW(p)

This post reads like a horny teen trying to stop watching porn. Why stop watching porn? Remember to include the middle. Why does the solution that you try new things and play the occasional video game sound so, uh, what'd-you-call-it, not-a-solution?

Just write down a bunch of stuff that comes into your head and you'll sooner rather than later have way too much stuff.

comment by emr · 2015-02-02T23:50:07.917Z · LW(p) · GW(p)

A thought about heritability and malleability:

The heritability of height has increased, because the nutritional environment has become more uniform. To be very specific, "more uniform" means both that people have more similar sets of options, and that they exercise similar preferences among these options.

This is interesting, because the increased heritability has coincided exactly with an increased importance of environmental factors from a decision-making standpoint. In other words, a contemporary parent picking from {underfeed kids, don't underfeed kids} can exert more influence over the absolute height of their children than a parent with only the option to underfeed. Of course, modern parents overwhelmingly opt for the same choice. At the same time, these parents don't have much influence on the relative height advantage of their child, given a uniformity in options and preferences in the population.

This can happen whenever options and preferences are aligned in a population. For example, no matter how heritable a positive trait is, it will usually be trivial to influence it ... in a downward direction. So if you're looking at a twin study on something like subjective well-being, I've found it clarifying to explicitly note the options and preferences available to the population. I'm currently reading up on positive psychology, and I keep seeing, even from domain experts, statements like "X percentage of your happiness is genetically determined", as if the population they studied were picking actions at random.
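To make the variance decomposition concrete, here is a minimal sketch in Python (a toy model with made-up numbers, not anything from an actual twin study; the function name is my own). It uses the standard formula: heritability is Var(G) / (Var(G) + Var(E)) when the genetic and environmental components are independent, so making the environment more uniform pushes heritability toward 1 without the genes changing at all.

    import random

    def toy_heritability(env_sd, n=100000):
        # Toy model: trait = independent genetic + environmental components.
        genes = [random.gauss(0, 1.0) for _ in range(n)]  # genetic SD fixed at 1
        env = [random.gauss(0, env_sd) for _ in range(n)]
        trait = [g + e for g, e in zip(genes, env)]

        def var(xs):
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / len(xs)

        return var(genes) / var(trait)

    # As the environment grows more uniform (smaller env_sd), heritability
    # rises: roughly 0.5 at env_sd = 1.0, 0.8 at 0.5, and 0.99 at 0.1.
    for env_sd in (1.0, 0.5, 0.1):
        print(env_sd, round(toy_heritability(env_sd), 2))

Nothing in the simulation constrains how much an individual deviation (a parent choosing to underfeed, say) can move the trait; high heritability only reflects how uniform the environment happens to be in the sampled population.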

Replies from: mwengler
comment by mwengler · 2015-02-05T01:08:46.902Z · LW(p) · GW(p)

I heard something years ago that stuck with me: in an optimal environment, 100% of human variation on everything would be genetic. So if you do everything you can environmentally to improve your kids' intelligence, 100% of the variation left must be genetic. Similarly with height, musical ability, etc.

So whenever one finds that less than 100% of the variation in some positive trait is genetic, it means at least some of the population is not optimizing the environment to bring out that trait.

Not obviously relevant to the comment above, but on the same topic so I stuck it here.

comment by NancyLebovitz · 2015-02-03T02:54:31.026Z · LW(p) · GW(p)

Might it be reasonable to think of the anti-vaccination movement as people trying to take heroic responsibility without having good judgement?

Replies from: JoshuaZ, Strangeattractor, ChristianKl, Viliam_Bur, None, None, polymathwannabe, Emily
comment by JoshuaZ · 2015-02-03T03:16:49.049Z · LW(p) · GW(p)

Is there some reason you consider the anti-vaccination movement as closer to this than any other alternative health movement?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2015-02-03T03:39:13.157Z · LW(p) · GW(p)

The level of urgency seems a lot higher.

Replies from: JoshuaZ
comment by JoshuaZ · 2015-02-03T03:46:49.862Z · LW(p) · GW(p)

One sees similar urgency claims in the extreme end of the organic food movement and similar purity focused food ideas.

Replies from: Torello
comment by Torello · 2015-02-03T15:25:07.061Z · LW(p) · GW(p)

I think she means urgency from the perspective of the general population; many people are at risk if a growing number of people stop getting vaccines.

I think members of the organic food movement feel that their cause is urgent, but members of the general population are not put in danger by their decision to eat organic food and therefore don't have urgent feelings about it.

comment by Strangeattractor · 2015-02-04T09:57:57.144Z · LW(p) · GW(p)

In Pakistan, people are suspicious of health workers because the CIA used vaccination programs as cover stories for their agents.

Some people in Africa say that being vaccinated wreaked havoc on their psychic perception, and advise others not to do it.

Some people are allergic to ingredients used to make vaccines.

Some people object to having a medical procedure forced upon them.

It is tough to track down primary sources of information on this issue. Even if you go to a university library, many of the scientific papers are from an era that has not been digitized. Vaccine manufacturers do not release all of the information that is relevant. Getting enough good information to develop an informed opinion is not as straightforward and easy as one might expect.

People encounter problems in the medical system and do not have their concerns adequately addressed.

There are a variety of reasons that people have reservations about getting vaccinated. I think that to understand this in more depth, thinking of a monolithic "anti-vaccination movement" is probably not going to help.

In other words, it is possible that some people who object to vaccines could be more or less described as "trying to take heroic responsibility without having good judgement" but I don't think that description would be applicable to people who object to vaccines as a whole.

comment by ChristianKl · 2015-02-04T12:49:22.956Z · LW(p) · GW(p)

I think most of the people in the anti-vaccination movement have peers that are also in the movement. Going with peer opinion isn't taking heroic responsibility.

comment by Viliam_Bur · 2015-02-03T10:08:41.310Z · LW(p) · GW(p)

I can imagine different people in the anti-vaccination movement having different psychological motives. For some of them, it may be just the "purity" instinct. Others may have studied the topic a lot, unfortunately from bad sources or with bad understanding. (The difference is that the latter could have reached an opposite conclusion if presented with different literature and/or peer pressure, while the former would always opt for "not doing anything against the nature".)

Then it is an empirical question of which ones are how frequent.

comment by [deleted] · 2015-02-03T08:28:14.854Z · LW(p) · GW(p)

Temporal discounting plus low contingency seems like a strong candidate. Parents see a strong immediate negative effect when they give their child the shots. There's a low probability long-term positive effect from receiving the shot. It's a fairly typical reaction to incentives.

comment by [deleted] · 2015-02-05T03:03:32.264Z · LW(p) · GW(p)

This seems reasonable to me for many of the people joining. I'd put religious proselytizers in the same camp.

comment by polymathwannabe · 2015-02-03T19:01:39.901Z · LW(p) · GW(p)

Two other possible explanations.

comment by Emily · 2015-02-04T12:51:46.965Z · LW(p) · GW(p)

I think the anti-abortion movement fits this description quite well in many ways (though obviously this is an even more politically-charged view).

PS. Not in the mood for an abortion debate here/now; sorry in advance for not replying to any comments along debating lines.

comment by ZT5 · 2015-02-04T04:21:10.545Z · LW(p) · GW(p)

I've recently had a discussion about ethics here in this thread, and the conclusion I've arrived at is that a big reason for my lack of motivation is lack of social support.

I don't know if this is the right place to post this, nor am I fully clear on what kind of response I am expecting. I guess I would like advice and emotional support with this issue.

I have basically been in shutdown mode for the past year because I'm not getting the kind of support I need, and I have my doubts I will ever get the kind of support I need.

I am in my mid-twenties, highly intelligent and have nonconformist opinions - I also have had personal difficulties and not lived a very happy life so far. I find myself unable to connect to other people when it comes to personal stuff because most people, even well-meaning ones, can't understand what's going on for me. That goes for mental health professionals as well. And unfortunately people, even mental health professionals, can be surprisingly mean if I point out that their well-meaning opinions or advice aren't working for me - which usually ends up with them going into a death spiral of self-justification and/or hurt feelings.

I doubt therapy would work for me (based on previous experiences), because the personal connection with the therapist doesn't work, and the information and strategies offered tend to be of rather mediocre quality - mostly things I already know, things that are rather obvious, things that aren't generally true, and things that don't apply to me. I doubt anything would actually work for me, except actually solving the underlying problem, which is that I don't have a support network, "tribe", or whatever you want to call it. Or at least having a possible solution in sight.

It's not so much about receiving support (although that's a part of it) - I guess I would like to have something meaningful to do. Right now I have an at least partially altruistic mindset, and nothing to direct it at, because I have a hard time liking "people" at large. I would like to have a personal connection to a person or people who I actually feel that I like - because otherwise, when I feel like doing something positive either for specific people or for the world, I don't even have an accessible example of someone who I would want to benefit from the results of that.

So yeah. I'm not sure what to do about this, because I'm not feeling very hopeful at the moment. I find that, for whatever reason, even things I imagine would be very basic (like being understood by other people) are really hard to find in reality.

...I notice I am feeling confused about this, because the particular set of experiences I just described seems to be extremely non-typical among people in general, and I'm not expecting it to be

Edit: I don't live in the US (I feel this is worth sharing because it affects the advice/options available).

Replies from: MrMind, MathiasZaman, ChristianKl, None, ChristianKl
comment by MrMind · 2015-02-04T09:02:46.261Z · LW(p) · GW(p)

What have you already tried? What hasn't worked in those approaches?

Replies from: None
comment by [deleted] · 2015-02-05T03:07:14.008Z · LW(p) · GW(p)

This is the most relevant question I think. What specific suggested strategies did you experiment with that didn't work?

Replies from: MrMind
comment by MrMind · 2015-02-05T16:53:04.590Z · LW(p) · GW(p)

This is the most relevant question I think

... and the only one which won't be answered.

comment by MathiasZaman · 2015-02-04T08:07:14.285Z · LW(p) · GW(p)

The obvious suggestion is going to (or starting) a local Less Wrong meetup. They're a good way to meet people who can become your "tribe."

Another option (and one that worked very well for me for quite a while) was to move most of my in-group needs online. I don't make a strong distinction between cyberspace and meatspace friendships, so this worked out pretty well. The "bonobo rationalists" of tumblr have a skype group that has general conversation, if you need something to try out.

What is important to keep in mind is that "having a tribe," means that most of your interactions will (and maybe should?) be trivial and banal. You need to build a rapport with the people, so that your brain will more readily accept their praise and advice.

Replies from: ZT5
comment by ZT5 · 2015-02-04T14:06:23.537Z · LW(p) · GW(p)

The obvious suggestion is going to (or starting) a local Less Wrong meetup. They're a good way to meet people who can become your "tribe."

I agree, and I would definitely visit one if there was one nearby (note: I don't live in the US. Edited the original post to reflect that).

As for trying to start a meetup - intuitively I feel there are several reasons why that might be problematic. And I don't know if there are enough (or any) rationalists in my area.

Another option (and one that worked very well for me for quite a while) was to move most of my in-group needs online. I don't make a strong distinction between cyberspace and meatspace friendships, so this worked out pretty well.

Thank you for sharing.

The "bonobo rationalists" of tumblr have a skype group that has general conversation, if you need something to try out.

Thanks for the suggestion. I believe I have found the contact information for that.

What is important to keep in mind is that "having a tribe," means that most of your interactions will (and maybe should?) be trivial and banal. You need to build a rapport with the people, so that your brain will more readily accept their praise and advice.

I don't know if I need that. Maybe other people need to have trivial interactions with me to see me as in-group, I don't know. My experience with that is that the trivial interactions are not a reliable indicator for the quality of non-trivial interactions.

...

I have a feeling that the implication here is that the way to form connections is to have a bunch of casual interactions (that's my prior expectation for how many kinds of connections work, anyway).

Maybe that's not really the implication, so I might be going on a tangent here... but I'd like to share this anyway.

Casual interactions work very poorly for me, and I have a feeling that that way of connecting selects against my particular mindset.

The problem with casual interactions (the way I see it) is that they put too much weight on similarity and agreement about relatively unimportant things.

It's signalling "I'm similar to you in a lot of ways" as a proxy for signalling "I'm not crazy, trustworthy, have reasonable values, etc etc...".

I think it's kind of like using "academic achievement" as a proxy for "learning", because trying to measure "learning" directly is too inconvenient. (I don't know what the term here is, lost purpose?).

I'd rather have people directly tell me what they expect, so I can tell them whether I think I can live up to that - rather than having to signal that indirectly. The problem with signalling is that a lot of the standard signals are simply not true for me (and there's dishonesty and self-deception involved in signalling, because that obviously allows for stronger signals). For example, my thinking patterns are different from most people's, so I can't use "yeah, I've had exactly the same experience" as a bonding thing.

And, generally speaking, I suspect there might not be enough self-awareness in people for me to be able to say "you know, we're sitting here talking about X but I suspect you're actually interested in Y. How about we talk about that directly?". (or maybe there's some kind of taboo against doing exactly that, I don't know).

Replies from: Lumifer
comment by Lumifer · 2015-02-04T16:44:05.084Z · LW(p) · GW(p)

Casual interactions work very poorly for me, and I have a feeling that that way of connecting selects against my particular mindset.

I understand what you mean, but think of casual interactions as a fast, cheap filter.

Finding people you'd really like to connect to will necessarily involve a lot of trial and error. You would like to minimize the costs (in time and effort) of the trials and the errors. Casual interactions basically allow you to do this: you have a limited, surface contact with a person, and in the majority of cases that will be enough for you to filter that person out and continue looking.

Don't think of small talk as a way to bond -- think of it as ritualized low-effort behavior one engages in while evaluating the other person.

Replies from: ZT5
comment by ZT5 · 2015-02-04T19:36:30.031Z · LW(p) · GW(p)

Don't think of small talk as a way to bond -- think of it as ritualized low-effort behavior one engages in while evaluating the other person.

I was in fact referring to casual interactions as a way to bond and build rapport, because a lot of people do it that way, and I also think that's what MathiasZaman suggested (though maybe he meant it in a different way?).

Oh wait. Is that what you mean by small talk? I think my understanding of the concept just shifted. I was thinking of small talk as "that boring thing people do when they don't want to talk about serious stuff". But of course I use it in the fashion that you described, and it's actually quite fun when done that way.

Replies from: Lumifer
comment by Lumifer · 2015-02-04T19:51:34.285Z · LW(p) · GW(p)

casual interactions as a way to bond and build rapport

If you actually want to bond, you don't want casual interactions -- you want highly emotional shared experiences.

Replies from: ZT5
comment by ZT5 · 2015-02-04T20:12:26.666Z · LW(p) · GW(p)

If you actually want to bond, you don't want casual interactions -- you want highly emotional shared experiences.

That sounds right. Thank you for pointing out the distinction.

comment by ChristianKl · 2015-02-04T13:12:48.724Z · LW(p) · GW(p)

A mental health professional that gets angry at you for pointing out that some advice doesn't work is either unskilled or is using anger as an alternative strategy to create pressure to change.

I'm not a mental health professional myself, but I do have quite a bit of coaching training, and I would never get angry at someone for finding advice not useful. It's not even in my repertoire of choices, even if I thought it would be helpful.

Unfortunately I don't think that a majority of academically trained psychologists have enough control over their own emotions to not get angry for bad reasons and go into self-justification.

I don't know whether your state reaches depression, but to the extent that it does, exercise is very important. Do you do exercise?

After exercise, the second-highest-rated intervention on curetogether is to spend time with a pet. In the absence of human interaction, a dog can fill some of that niche. It can give you the feeling that there is somebody who accepts you as you are.

Otherwise find a tribe. LW meetups are good. Joining a sports team is also good.

Replies from: ZT5
comment by ZT5 · 2015-02-04T14:50:43.977Z · LW(p) · GW(p)

Unfortunately I don't think that a majority of academically trained psychologists have enough control over their own emotions to not get angry for bad reasons and go into self-justification.

In my experience that is accurate.

To be fair, as long as people stick to the psychologist-client script, and have more-or-less typical problems, they probably will get acceptable treatment.

However, pointing out that what the mental health person is doing isn't working for me, for reasons that person doesn't immediately recognize as valid, isn't sticking to the script (and probably just being more intelligent than that person and having genuinely non-standard opinions isn't sticking to the script either).

I don't know whether your state reaches depression

That varies. To some extent, yes.

Do you do exercise?

I do regular exercise.

After exercise, the second-highest-rated intervention on curetogether is to spend time with a pet. In the absence of human interaction, a dog can fill some of that niche. It can give you the feeling that there is somebody who accepts you as you are.

That's interesting. I think that might work for me, but I have doubts about my ability to arrange for that to happen.

LW meetups are good

Don't have one in my area (in responding to MathiasZaman's comment, I edited my original post to reflect that I'm not located in the US).

I would go if there was a meetup in my area.

Joining a sports team is also good.

Merely doing things alongside other people is typically not enough for me to form connections. And it doesn't sound interesting or fun enough to me to be worth doing for its own sake.

comment by [deleted] · 2015-02-04T05:18:12.518Z · LW(p) · GW(p)

Why do you find it hard to meet like-minded people? Have you tried meetup.com? The only solution to not having a group of people you like is to meet more people. You're certainly not lacking for options. It sounds like you just need a better searching method.

Replies from: ZT5, ZT5
comment by ZT5 · 2015-02-04T16:14:27.144Z · LW(p) · GW(p)

I like having reasonable suggestions - at the very least it's a good idea to consider these things if I haven't tried them before.

I don't know why you seem to think it would be easy to find like-minded people, though. Inferential distance?

That seems to be hard by default, unless you're living in an area with a high density of like-minded people.

Anyway, I am familiar with meetup.com. I have some meetups I could potentially participate in, though they seem to be mostly for people who want to socialize rather than specific groups for things I am interested in.

And simply meeting people at random seems like a poor way for me to try and find like-minded people. I might do that anyway for the social experience, but it seems to be a rather low return-on-investment strategy.

Replies from: None, ChristianKl
comment by [deleted] · 2015-02-04T22:57:49.416Z · LW(p) · GW(p)

Randomness generally has to be deliberately planned for. Any kind of search is likely to be non-random, and there are multiple methods for increasing filtering even before you meet the person. A chess club will result in very different encounters than a soccer team. You could also change your culling methods when evaluating people to improve hits. It's possible that you're over- or under-filtering in casual encounters, or that your search parameters are poorly tuned. Studying personality types can help with that. Still, there's nothing better than just increasing your number of interactions.

comment by ChristianKl · 2015-02-06T16:10:06.015Z · LW(p) · GW(p)

What are you interested in?

comment by ZT5 · 2015-02-04T15:42:39.528Z · LW(p) · GW(p)

Why do you find it hard to meet like-minded people? Have you tried meetup.com?

I do have some meetup groups available nearby (though I'd have to commute for quite a bit). There isn't much choice of what kind of people I can meet via meetup.com.

The base rate for people I would consider like-minded is really low, so trying to meet people randomly (or by applying a simple filter) seems like a low-value strategy.

The only solution to not having a group of people you like is to meet more people.

That doesn't automatically imply it is optimal or even reasonable for me to try and maximize the amount of people I meet short-term.

You're certainly not lacking for options.

I think everyone has options. That doesn't mean that the options are viable.

I do also think it's a good idea to go over the possible strategies I might be using.

I'm not seeing any options that I'm willing to use immediately. So I think the best thing I can do right now is simply to think more about this - and see if I can find a reasonable way around my objections to using these options, or if I can find new options I like better.

comment by ChristianKl · 2015-02-04T19:35:44.576Z · LW(p) · GW(p)

Edit: I don't live in the US (I feel this is worth sharing because it affects the advice/options available).

Where are you from?

Replies from: ZT5
comment by ZT5 · 2015-02-04T19:41:11.896Z · LW(p) · GW(p)

Where are you from?

I'm located in Sweden.

Replies from: Izeinwinter
comment by Izeinwinter · 2015-02-06T18:29:15.979Z · LW(p) · GW(p)

Based on that, the most obvious moves, depending on what you are currently doing:

Universal: Take up a sport or other hobby. The clubs associated with them are a pretty ready-made social network compatible with most any lifestyle. To work, this requires you to pick one you enjoy, with a culture you enjoy too. There is a lot of variety on offer here - I've been (briefly) in soccer teams that were essentially an excuse to get drunk after the game, in hiking clubs that were quasi-military in their dedication to proper planning and preparation, and once, rather memorably, in a cooking club that unofficially doubled as a dating mixer. (12 people; many more pairings than were at all reasonable before it imploded.)

If you want to hit reboot on your life in total, sign up for university. You're a Swede, so it's free, but before you do, go hang out. Different courses of study have very different cultures. You should be able to find one which is a match. The next step is important: make sure to join or create a good study group. This also combines pretty well with the first option.

Final option, if you simply want structure above all else: the military will do that for you. It's not a lifetime solution unless you make it a lifetime career, but giving aim to the aimless is something it has a lot of practice at.

comment by Capla · 2015-02-04T03:27:28.418Z · LW(p) · GW(p)

Since neither is listed on the best textbooks thread, can anyone recommend good textbooks for

1) Social psychology

2) Cognitive psychology

?

comment by naeserum · 2015-02-03T02:40:29.704Z · LW(p) · GW(p)

I think polyamory is big in the rationalist community; what is the consensus on the effects of experimenting with it on later satisfaction with monogamy?

Replies from: MathiasZaman, Adele_L, None, None, fubarobfusco
comment by MathiasZaman · 2015-02-03T07:31:00.932Z · LW(p) · GW(p)

I'm not sure how useful the question is (but I'm still curious how people with that particular experience might answer). From my discussions on polyamory with people who are polyamorous, it seems to be rather like an orientation. Some people are only happy in polyamorous relationships, others are only happy in monogamous relationships, while still others don't have a strong preference. A person with a strong preference for polyamory would likely be unsatisfied in a monogamous relationship, while someone with monogamous preferences would be happy to not be polyamorous anymore.

What I'm trying to say is: this is a bit like asking: what are the effects of experimenting with sexual intercourse with people of the same sex on later satisfaction with sexual intercourse with people of a different sex? The answer's going to vary widely depending on whether people are homo-, bi-, or heterosexual.

comment by Adele_L · 2015-02-03T23:15:43.859Z · LW(p) · GW(p)

I don't think there are enough people who have tried it and then gone back to being monogamous for there to be a consensus - but there are a few people who have, for example, Patri Friedman.

I would guess that these people will find monogamy more satisfying after going back to it.

comment by [deleted] · 2015-02-06T02:06:40.804Z · LW(p) · GW(p)

For me, it's made me better able to cope with all sorts of issues, a better communicator, etc., but it didn't change overall satisfaction except secondarily through those effects.

comment by [deleted] · 2015-02-03T05:51:11.639Z · LW(p) · GW(p)

Question seconded.

comment by fubarobfusco · 2015-02-04T03:05:16.619Z · LW(p) · GW(p)

Something to think about:

Although the rate of polyamory within the LW-space is higher than in the general populace (IIRC, the last survey had ~15% poly, ~30% unsure, >50% monogamous within LW), the rate of cheating among ostensible monogamists is quite high in the general population — and possibly within LW as well.

(We shouldn't assume that LWers are more fundamentally honest than everyone else.)

comment by maxikov · 2015-02-02T06:05:46.205Z · LW(p) · GW(p)

Disclaimer: the identity theory that I actually alieve is the most common intuitive one, and it's philosophically inconsistent: I regard teleportation as death, but not sleeping. This comment, however, is written from a System 2 perspective, which can operate even with concepts that I don't alieve.

The basic idea behind timeless identity is that "I" can only be meaningfully defined inductively as "an entity that has experience continuity with my current self". Thus, we can safely replace "I value my life" with "I value the existence of an entity that feels and behaves exactly like me". That allows us to be OK with quite useful (although hypothetical) things like teleportation, mind uploading, mind backups, etc. It also seems to provide an insight into why it's OK to make a copy of me on Mars, and immediately destroy Earth!me, but not OK to destroy Earth!me hours later: the experiences of Earth!me and Mars!me would diverge, and each of them would value their own lives.

However, here is the thing: in this case we merely replace the requirement "to have an entity with experience continuity with me" with "to have an entity with experience continuity with me, except this one hour". They're actually pretty interchangeable. For example, I forget most of my dreams, which means I'm nearly guaranteed to forget several hours of experience every day, and I'm OK with that. One might say that the value of genuine experiences exceeds that of hallucinations, but I would still be pretty OK with taking a suppressor of RNA synthesis that would temporarily give me anterograde amnesia, and doing something that I don't really care about remembering - cleaning the house or something. Heck, even retroactively erasing my most cherished memories, although extremely frustrating, is still not nearly as bad as death.

That implies that if there are multiple copies of me, the badness of killing any of them is no more than the increase in the likelihood of all of them being destroyed (which is not a lot, unless there's an Armageddon happening around) plus the value of the memories formed since the last replication. Also, every individual copy should alieve that being killed is no worse than forgetting what happened since the last replication, which also sounds not nearly as horrible as death. That also implies that simulating time travel by discarding time branches is a pretty OK thing to do, unless the universes diverge strongly enough to create uniquely valuable memories.

Is that correct or am I missing something?

Replies from: Manfred
comment by Manfred · 2015-02-02T12:28:31.530Z · LW(p) · GW(p)

Depends on how you feel about anthropically selfish preferences, and about altruistic preferences that try to satisfy other people's selfish preferences. I, for instance, do not think it's okay to kill a copy of me even if I know I will live on.

In the earth-mars teleporter thought experiment, the missing piece is the idea that people care selfishly about their causal descendants (though this phrase is obscuring a lot of unsolved questions about what kind of causation counts). If the teleporter annihilates a person as it scans them, the person who gets annihilated has a direct causal descendant on the other side. If it waits ten minutes, gives the original some tea and cake, and then annihilates them, the person who gets annihilated has no direct causal descendant - they really are getting killed off in a way that matters more to them than before.

Replies from: maxikov
comment by maxikov · 2015-02-02T22:40:29.751Z · LW(p) · GW(p)

I, for instance, do not think it's okay to kill a copy of me even if I know I will live on

Not OK in what sense - as in morally wrong to kill sapient beings or as terrifying as getting killed? I tend to care more about people who are closer to me, so by induction I will probably care about my copy more than any other human, but I still alieve the experience of getting killed to be fundamentally different and fundamentally more terrifying than the experience of my copy getting killed.

From the linked post:

The counterargument is also simple, though: Making copies of myself has no causal effect on me. Swearing this oath does not move my body to a tropical paradise. What really happens is that I just sit there in the cold just the same, but then later I make some simulations where I lie to myself.

If I understand correctly, the argument of timeless identity is that your copy is you in absolutely every meaningful sense, and therefore prioritizing one copy (the original) over the others isn't just wrong, but meaningless, and cannot be defined very well. I'm totally not buying that on a gut level, but at the same time I don't see any strong logical arguments against it, even if I operate with 100% selfish, 0% altruistic ethics.

When there is a decision your original body can make that creates a bunch of copies, and the copies are also faced with this decision, your decision lets you control whether you are the original or a copy.

I don't quite get this part - can you elaborate?

If it waits ten minutes, gives the original some tea and cake, and then annihilates them, the person who gets annihilated has no direct causal descendant - they really are getting killed off in a way that matters more to them than before

What about the thought experiment with erasing memories, though? It doesn't physically violate causality, but from the experience perspective it does - suddenly the person loses a chunk of their experience, and they're basically replaced with an earlier version of themselves, even though the universe has moved on. This experience may not be very pleasant, but it doesn't seem to be nearly as bad as getting cake and death in the Earth-Mars experiment. Yet it's hard to distinguish them on the logical level.

Replies from: Manfred
comment by Manfred · 2015-02-03T02:40:47.293Z · LW(p) · GW(p)

Not OK in what sense - as in morally wrong to kill sapient beings or as terrifying as getting killed?

The first one - they're just a close relative :)

I don't quite get this part - can you elaborate?

TDT says to treat the world as a causal diagram that has as its input your decision algorithm, and outputs (among other things) whether you're a copy (at least, iff your decision changes how many copies of you there are). So you should literally evaluate the choices as if your action controlled whether or not you are a copy.

As to erasing memories - yeah, I'm not sure either, but I'm leaning towards it being somewhere between "almost a causal descendant" and "about as bad as being killed and a copy from earlier being saved."

Replies from: maxikov
comment by maxikov · 2015-02-03T18:33:57.973Z · LW(p) · GW(p)

OK, I'll have to read deeper into TDT to understand why that happens; currently it seems counterintuitive as heck.

comment by passive_fist · 2015-02-02T02:20:59.843Z · LW(p) · GW(p)

In a previous thread, I brought up the subject of entropy being subjective and got a lot of interesting responses. One point of contention was that if you know the positions and velocities of all the molecules in a hot cup of tea, then its temperature is actually at absolute zero (!). I realized that the explanation of this in usual terms is a bit clumsy and awkward. I'm thinking maybe if this could be explained in terms of reversible operations on strings of bits (abstracting away from molecules and any solid physical grounding), it might be easier to precisely see why this is the case. In other words, I'm looking for a dynamical systems interpretation of this idea. I googled a bit but couldn't find any accessible material on this. There's a book about dynamical systems approaches to thermodynamics but it's extremely heavy and does not seem to have been reviewed in any detail so I'm not even sure of the validity of the arguments. Anyone know of any accessible materials on ideas like this?
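
To gesture at the kind of formulation I mean, here is a toy sketch I put together (my own construction, not from any published treatment): microstates are bit strings, a reversible "law of physics" is a bijection on them, and entropy is the Shannon entropy of an observer's distribution over strings. Any bijection leaves that entropy unchanged, so an exactly known state stays at zero entropy forever:

```python
import random
from collections import Counter
from math import log2

def shannon_entropy(dist):
    """Shannon entropy (bits) of a distribution {bit string: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

N_BITS = 8
STATES = list(range(2 ** N_BITS))  # microstates = 8-bit strings

# A reversible "law of physics": some fixed permutation of the state space.
rng = random.Random(0)
image = STATES[:]
rng.shuffle(image)
step = dict(zip(STATES, image))

def evolve(dist):
    """One deterministic, reversible time step applied to a distribution."""
    out = Counter()
    for state, p in dist.items():
        out[step[state]] += p
    return dict(out)

known = {42: 1.0}                               # observer who knows the microstate
uniform = {s: 1 / len(STATES) for s in STATES}  # maximally ignorant observer

for t in range(5):
    print(t, shannon_entropy(known), shannon_entropy(uniform))
    known, uniform = evolve(known), evolve(uniform)
# Both entropies are constant: 0 bits and 8 bits respectively. Entropy is
# a property of the observer's distribution, not of the bit string itself.
```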

Replies from: Pfft, mwengler, Manfred, Epictetus, None
comment by Pfft · 2015-02-02T15:04:09.493Z · LW(p) · GW(p)

Isn't all this just punning on definitions? If the particle velocities in a gas are Maxwell-Boltzmann distributed for some parameter T, we can say that the gas has "Maxwell-Boltzmann temperature T". Then there is a separate Jaynes-style definition of "temperature" in terms of the knowledge someone has about the gas. If all you know is that the velocities follow a certain distribution, then the two definitions coincide. But if you happen to know more about it, it is still the case that almost all interesting properties follow from the coarse-grained velocity distribution (the gas will still melt ice cubes and so on), so rather than saying that it has zero temperature, should we not just note that the information-based definition no longer captures the ordinary notion of temperature?

Replies from: passive_fist
comment by passive_fist · 2015-02-02T18:45:29.969Z · LW(p) · GW(p)

You are essentially right. The point is that 'average kinetic energy of particles' is just a special case that happens to correspond to the Jaynes-style definition, for some types of systems. But the Jaynes-style definition is the 'true' definition that is valid for all systems.

But if you happen to know more about it, it is still the case that almost all interesting properties follow from the coarse-grained velocity distribution (the gas will still melt icecubes and so on)

Again, as I mentioned in my previous replies, the gas will melt ice cubes, but is only in thermal equilibrium with 0 K ice cubes.

Replies from: Pfft
comment by Pfft · 2015-02-03T00:07:28.718Z · LW(p) · GW(p)

Again, as I mentioned in my previous replies, the gas will melt ice cubes, but is only in thermal equilibrium with 0 K ice cubes.

This claim seems dubious to me.

Like, the "original, naive" definition of thermal equilibrium is that two systems are out of equilibrium if, when put them in contact with each other, heat will flow from one to the other. If you have a 0K icecube one one hand and a gas and piece of RAM encoding that state of the gas on the other, then they certainly do not seem to be in equilibrium in this sense: when you remove the partitioning wall, the gas atoms will start bouncing against the cube, the ice atoms will start moving, and the energy of the ice cube atoms increases. Heat energy was transferred from one system to the other.

I am not claiming that there is some other temperature T such that an icecube at T would be in equilibrium with the system; rather, it seems the gas+RAM system is itself not in thermal equilibrium, and therefore does not have a temperature?

My more general point is that one can not just claim by fiat that the Jaynes-style definition is the "true" one; if there are multiple ones in play and they sometimes disagree, then one has to see which one is more useful. Thermodynamics was originally motivated by heat energy flowing between different gases. It seems that in these (highly artificial) examples, the information-based definition no longer describes heat flow well, which would be a mark against it...

Replies from: passive_fist
comment by passive_fist · 2015-02-03T00:14:44.431Z · LW(p) · GW(p)

If you have a 0K icecube on one hand, and a gas plus a piece of RAM encoding the state of the gas on the other, then they certainly do not seem to be in equilibrium in this sense:

gas+RAM is not in thermal equilibrium with the ice cube, because a large enough stick of RAM to hold this information would itself have entropy, and a lot of it (far, far larger than the information it is storing). This is actually the reason why Maxwell's demons are impossible in practice - storing the information becomes a very difficult problem, and the entropy of the system becomes entirely contained within the storage medium. If the storage medium is assumed to be immaterial (an implicit assumption which we are making in this example), then the total system entropy is 0 and it's at 0 K.

My more general point is that one can not just claim by fiat that the Jaynes-style definition is the "true" one;

It is true for the same reason that Bayesian updating is the only true method for updating beliefs; any other method is either suboptimal or inconsistent or both. In fact it is the very same reason, because the entropy of a physical system is literally the entropy of the Bayesian posterior distribution of the parameters of the system according to some model.
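
To make that concrete, a toy calculation (my own illustration, with made-up state counts): the Gibbs entropy S = -k_B Σ p ln p of the posterior drops to zero as the posterior sharpens, whatever the system's energy is.

```python
from math import log

K_B = 1.380649e-23  # Boltzmann constant, J/K

def gibbs_entropy(probs):
    """Gibbs entropy S = -k_B * sum(p * ln p) of a posterior over microstates."""
    return -K_B * sum(p * log(p) for p in probs if p > 0)

n = 1000  # number of candidate microstates in the model (made up)

print(gibbs_entropy([1 / n] * n))  # maximum ignorance: k_B * ln(1000)
print(gibbs_entropy([0.1] * 10))   # partial knowledge: k_B * ln(10)
print(gibbs_entropy([1.0]))        # exact knowledge of the microstate: 0
# The last line is the precise sense in which a fully known cup of tea
# has zero entropy.
```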

comment by mwengler · 2015-02-05T10:48:47.224Z · LW(p) · GW(p)

At least if you are talking about physics, whether you know the position and velocity of every atom in a system is irrelevant to what its temperature is. The point of thermodynamics is that under a broad range of conditions, there are statistical quantities which are predictable, such as the average kinetic energy of components after "enough" time has passed (enough time to reach thermal equilibrium in a given experiment) - and of course it is not only the average kinetic energy, but the whole distribution of energies, which is known.

There may be interesting or even amusing information-theoretic senses in which tracking the microscopic details can be said to give zero entropy, but these do not impact the physics of the system. If the system is one in which the conditions for reaching thermal equilibrium are present, then we will be able to predict the same distribution of kinetic energies in the system whether or not we are tracking every single molecule's velocity and position.

As to whether or not a 0 K ice cube will melt: it will melt if you put it in contact with a gas that has enough kinetic energy in it that when that energy is divided by all the molecules in the system, the average energy per molecule is greater than k*274 K, where k is Boltzmann's constant. Your detailed knowledge of every molecule's position and velocity will not stop the ice cube from transitioning to its liquid state as kinetic energy from the gas is transferred into the ice cube.

No matter how entertaining alternative definitions of entropy and temperature might be, they are completely irrelevant to the time-evolution of liquids, gases, and solids interacting under the conditions in which they are described to good accuracy by thermodynamics. There is nothing arbitrary or "in the mind" about thermodynamics: it is a simplified map of a large range of real situations, a map whose accuracy is not affected by having additional knowledge of the terrain being mapped.

Replies from: passive_fist
comment by passive_fist · 2015-02-05T19:47:48.033Z · LW(p) · GW(p)

There may be interesting or even amusing information theoretic senses in which tracking the microscopic details can be said to have zero entropy, but these do not impact the physics of the system.

Yes, this is the entire point; entropy seems to be dissociated from what's "out there."

As to whether or not a 0 K ice cube will melt, it will melt if your put it in contact with a gas that has enough kinetic energy in it such that when that energy is divided by all the molecules in the system, the average energy per molecule is greater than k*274 K where k is boltzmann's constant.

And no one has said otherwise. But if you consider gas+information together, you can no longer consistently say it's at anything other than 0 K.

There is nothing arbitrary or "in the mind' about thermodynamics, it is a simplified map of a large range of real situations,

I think you're misunderstanding what "in the mind" means. It does not mean that our thoughts can influence physics. Rather, it means that quantities like entropy and temperature depend (to you) on the physical model in which you're viewing the system.

Replies from: mwengler
comment by mwengler · 2015-02-05T20:53:22.044Z · LW(p) · GW(p)

I think you're misunderstanding what "in the mind" means. It does not mean that our thoughts can influence physics. Rather, it means that quantities like entropy and temperature depend (to you) on the physical model in which you're viewing the system.

I don't think I am misunderstanding anything. But it is possible that I am merely not misunderstanding the physics, I suppose. But I participated in the other threads, and I am pretty sure I know what we are talking about.

To the extent that you want to define something that allows you to characterize a boiling pot of water as having either zero entropy or zero temperature, define away. I will "merely" point out that the words entropy and temperature have already been applied to that situation by others who have come before you and in a way which is not altered by any knowledge you may have beyond the extensive quantities of the boiling pot of water.

I will point out that your quantities of "entropy" and "temperature" break the laws of thermodynamics in probably every respect. In your system, energy can flow from a colder object to a hotter object. In your system, entropy can decrease in a closed system. In summary, not only are your definitions of entropy and temperature confusing a rather difficult but unconfused subject, but they are also violating all the relationships that people versed in thermodynamics carry around about entropy and temperature.

So what is the possible point of calling your newly defined quantities entropy and temperature? It seems to me the only point is to piggyback your relatively useless concepts on the well-deserved reputation of entropy and temperature in order to get them attention they do not deserve.

No matter how much information I have about a pot of boiling water, it is still capable of turning a turbine with its steam, cooking rice, and melting ice cubes. If you redefine temperature so that the boiling water is at 0 K but still melts ice cubes by transferring energy to ice that is at a much hotter 250 K, then I sure wish you would call this thing - which has nothing to do with average kinetic energy or with which direction energy will flow - something else.

Replies from: passive_fist
comment by passive_fist · 2015-02-06T21:56:17.248Z · LW(p) · GW(p)

To the extent that you want to define something that allows you to characterize a boiling pot of water as having either zero entropy or zero temperature, define away.

It's not an arbitrary definition made for fun. It is - as I've pointed out - the only definition that is consistent. Any other set of definitions will lead to 'paradoxes', like Maxwell's demon or various other 'violations' of the 2nd law.

I will point out that your quantities of "entropy" and "temperature" break the laws of thermodynamics in probably every respect.

On the contrary, they are the only consistent way of looking at thermodynamics.

In your system, energy can flow from a colder object to a hotter object.

And why not? Every time a battery powers an (incandescent) flashlight, energy is flowing from a colder object to a hotter object.

It seems to me the only point is to piggyback your relatively useless concepts on the well-deserved reputation of entropy and temperature in order to get them attention they do not deserve.

The point is to put thermodynamics on a rigorous and general footing. That's why Jaynes and others proposed MaxEnt thermodynamics.

No matter how much information I have about a pot of boiling water, it is still capable of turning a turbine with its steam, cooking rice, and melting ice cubes

These things you speak of are due to the energy in the boiling water, not the temperature, and energy is not changed no matter how much you know about the system. A system at 0 K can still carry energy. There is nothing in the laws of physics that prevents this.

Replies from: mwengler
comment by mwengler · 2015-02-08T10:01:03.184Z · LW(p) · GW(p)

And why not? Every time a battery powers an (incandescent) flashlight, energy is flowing from a colder object to a hotter object.

Actually, no. The temperature of the electrons moving in the current is quite high. At least according to the uncontroversial definitions generally used. These electrons have a lot of kinetic energy.

A system at 0 K can still carry energy. There is nothing in the laws of physics that prevents this.

Actually there is. 0 K is the state where no further energy can be extracted from the system. So a 0 K system can't do work on any system, whether the other system is at 0 K also, or not.

Do you have in mind that a motor could be cooled down to 0 K and then run, or that a battery could be cooled down to 0 K and then run? It could be that parts of a battery or motor are at 0 K - perhaps the metal rods or cylinders of a motor are at 0 K - while the motor still turns to produce energy. But the motor itself is not at 0 K: it has motion, kinetic energy, which could be reduced by stopping it from running.

By the way, do you have any links to anything substantial that puts the temperature of microscopically known boiling water at 0 K? So far I've been contradicting your assertions without seeing the details that might lie behind them.

Replies from: passive_fist
comment by passive_fist · 2015-02-08T10:45:00.895Z · LW(p) · GW(p)

The temperature of the electrons moving in the current is quite high. At least according to the uncontroversial definitions generally used.

I have to say, that definition is quite new to me. The electron temperature in a piece of copper is pretty much the same as the rest of the copper, even when it's carrying many amps of current.
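
A back-of-the-envelope for that (standard textbook constants; a classical estimate, and the electron gas in a metal is actually degenerate, which only strengthens the point):

```python
I = 10.0         # current, A
A = 1e-6         # wire cross-section, m^2 (1 mm^2)
n = 8.5e28       # conduction electrons per m^3 in copper
e = 1.602e-19    # electron charge, C
m_e = 9.109e-31  # electron mass, kg
k_B = 1.381e-23  # Boltzmann constant, J/K

v_drift = I / (n * e * A)          # ~7e-4 m/s: slower than a snail
ke_drift = 0.5 * m_e * v_drift**2  # ~2.5e-37 J per electron
ke_thermal = 1.5 * k_B * 300       # ~6e-21 J at room temperature

print(v_drift, ke_drift / ke_thermal)
# The drift kinetic energy is ~16 orders of magnitude below even the
# classical thermal energy, so carrying current adds nothing measurable
# to the electrons' "temperature".
```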

But to give an even more straightforward example, think of a cold flywheel turning a hot flywheel. I suppose you're going to say that the cold flywheel is 'hot' because it's turning. I'm sorry but that's not how thermodynamics works.

Actually there is. 0 K is the state where no further energy can be extracted from the system. So a 0 K system can't do work on any system, whether the other system is at 0 K also, or not.

What is the exact law that says this? I'd really like to see it. The thermodynamics you're talking about seems drastically different from the thermodynamics I learned in school.

But the motor itself is not at 0 K, it has motion, kinetic energy, which can be lower by its stopping running.

Forget a motor, just imagine an object at 0 K moving linearly through outer space.

By the way, do you have any links to anything substantial that puts the temperature of microscopically known boiling water at 0 K?

EY gives plenty of references in his linked sequences on this.

Replies from: mwengler
comment by mwengler · 2015-02-09T07:51:49.328Z · LW(p) · GW(p)

But to give an even more straightforward example, think of a cold flywheel turning a hot flywheel. I suppose you're going to say that the cold flywheel is 'hot' because it's turning. I'm sorry but that's not how thermodynamics works.

The equipartition theorem says that a system in thermal equilibrium has energy k*T/2 per degree of freedom. Consider a rigid flywheel weighing 1 kg and spinning at "around" 1 m/s, so that its kinetic energy from its rotation is 1 J. I'd like to say this system has 1 degree of freedom, the spinning of the flywheel, and so its temperature is 2/k ≈ 1.4e23 K. But in case you point out that the flywheel can be flying through space as well as spinning on any one of three axes, let's say its temperature is that divided by 6, about 2e22 K.
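
Spelling out that arithmetic (rough numbers; a one-line equipartition estimate, nothing more):

```python
k_B = 1.381e-23  # Boltzmann constant, J/K

E = 1.0  # J of macroscopic kinetic energy in the flywheel's rotation

# Equipartition assigns k_B*T/2 per degree of freedom, so for n degrees
# of freedom holding E joules: T = 2*E / (n * k_B).
for n in (1, 6):
    print(n, 2 * E / (n * k_B))  # ~1.4e23 K for n=1, ~2.4e22 K for n=6
```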

A macroscopic rigid system has massively more weight than molecules in a gas but not very many degrees of freedom. If temperatures can be assigned to these at all, they are MASSIVE temperatures.

But it is not a rigid body, you say; it is a solid made of atoms that can vibrate. Indeed, the solid flywheel might be made of a piece of metal which is at 200 K or 300 K or whatever temperature you want to have heated it up to. But in an experiment with a metal flywheel at 300 K which is being spun and unspun, the energy of the spinning is not "thermalizing" with the internal vibrational energy of the flywheel. It is not thermalizing, which means these are thermodynamically uncoupled systems, which means the effective temperature of the macroscopic rotation of the flywheel is in the 1e22 kind of range.

This IS how thermodynamics works. We don't usually talk about thermo of macroscopic objects with very few degrees of freedom. That doesn't mean we can't, or even that we shouldn't.

Actually there is. 0 K is the state where no further energy can be extracted from the system. So a 0 K system can't do work on any system, whether the other system is at 0 K also, or not.

What is the exact law that says this? I'd really like to see it. The thermodynamics you're talking about seems drastically different from the thermodynamics I learned in school.

See for example http://physics.about.com/od/glossary/g/absolutezero.htm "Absolute zero is the lowest possible temperature, at which point the atoms of a substance transmit no thermal energy - they are completely at rest."

Forget a motor, just imagine an object at 0 K moving linearly through outer space.

OK. As with the flywheel, a 1 kg object moving at 1 m/s through space has 0.5 J of kinetic energy. Even if we attribute 6 degrees of freedom to this object, that kinetic energy corresponds to about 1e22 K.

EY gives plenty of references in his linked sequences on this.

I looked through this thread and there are no links to any sequences. I searched the Wiki for "Jaynes" and there were very few references, only to mind projection fallacy. So if in fact there is any link anywhere to an argument that a pot of water with microscopically known positions and velocities is somehow at 0 K, please just point me to it.

Replies from: passive_fist
comment by passive_fist · 2015-02-10T02:24:28.370Z · LW(p) · GW(p)

Let me see if I can pick apart your misconceptions.

About the flywheel example, no, rotation does not lead to temperature, because the rotational energy of the flywheel is not thermal energy. You seem to be mixing up thermal with non-thermal energy. In thermodynamics we assign several different kinds of energy to a system:

  1. Total energy: Internal energy + Potential energy + Kinetic energy
  2. Potential energy: Energy due to external force fields (gravity, electromagnetism, etc.)
  3. Kinetic energy: Energy due to motion of the system as a whole (linear motion, rotational motion, etc.)
  4. Internal energy/thermal energy: The energy that is responsible for the temperature of a system.

But here's the kicker: the division between these concepts is not a fundamental law of nature, but depends on your model. So yes, you could build a model where rotation is included in thermal energy. But then rotation would be part of the entropy as well, so at nonzero temperature you could not model it as rotating at a fixed speed! You'd have to model the rotation as a random variable. Clearly this contradicts rotation at a fixed speed. That is, unless you also set the temperature to 0 K, in which case the entropy would be zero and you could set the rotation to a fixed speed.

Now about the relationship between internal energy and degrees of freedom. You're misunderstanding what a degree of freedom is. The equipartition theorem says that the average energy of a particle with n degrees of freedom is nkT/2, but even if you included rotational energy as thermal energy, a large spinning object has much more than one degree of freedom. It has degrees of freedom associated with its many vibrational modes - so many that the associated 'temperature' is actually very low, not high as you describe. Indeed, if the rotation were to 'thermalize' (say, through friction), it would not warm up the object much at all. If it were true that the temperature due to rotation were 1e22 K, then letting it thermalize would violate conservation of energy by tens of orders of magnitude (the object would turn into quark-gluon plasma and explode violently, vaporizing half of the planet Earth).
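
To put a number on "would not warm up the object much at all" - a rough sketch; the 1 kg copper flywheel is my assumption, but any metal gives the same picture:

```python
k_B = 1.381e-23  # Boltzmann constant, J/K
N_A = 6.022e23   # Avogadro's number

E = 1.0              # J of rotational energy being thermalized
mass = 1.0           # kg; assumed copper flywheel
molar_mass = 0.0635  # kg/mol for copper

n_atoms = mass / molar_mass * N_A  # ~9.5e24 atoms
# In a solid, each atom has ~6 quadratic degrees of freedom
# (3 kinetic + 3 potential), each holding k_B*T/2 on average.
dof = 6 * n_atoms

print(2 * E / (dof * k_B))  # ~2.5e-3 K of warming

# Cross-check against the measured specific heat of copper, ~385 J/(kg*K):
print(E / (mass * 385))     # ~2.6e-3 K, same ballpark
```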

And finally, you cannot calculate absolute energy for an object moving linearly through space. The kinetic energy depends on the rest frame.

Replies from: mwengler
comment by mwengler · 2015-02-10T15:26:21.295Z · LW(p) · GW(p)

Let me see if I can pick apart your misconceptions.

Ok, I have a PhD in Applied Physics. I have learned thermo and statistical mechanics a few times, including in two graduate-level courses. I have recently been analyzing internal and external combustion engines as part of my job, and have relearned some parts of thermo for that. It may be that despite my background, I have not done a good job of explaining what is going on with thermo. But what I am explaining here is, at worst, the way a working physicist would see thermo, informed by a science that explains a shitload of reality, and in a way which is no more subjective than the "spooky action at a distance" of electromagnetic and gravitational fields. I realize appealing to my credentials is hardly an argument. However, I am pretty sure that I am right, and I am pretty sure that the claims I have been making are all within spitting distance of discussions and examples of thermo and stat mech calculations and considerations that we really talked about when I was learning this stuff.

My confidence in my position is not undermined by anything you have said so far. I have asked you for a link to something with some kind of detail that explicates the 0 K, 0 entropy boiling water position, or some version of the broken concepts you are speaking generally about. You have referred only to things already linked in the thread, or in the sequence on this topic, and I have found no links in the thread that were relevant. I have asked you again to link me to something and you haven't.

But despite your not giving me anything to work with from your side, I believe I understand what you are claiming. For the entropy side I would characterize it this way. Standard entropy makes a list of all states at the appropriate energy of an isolated system and says there is an equal probability of the system being in any of these. And so the entropy at this energy of this isolated system is log(N(E)), where N(E) is the number of states that have energy E.

I think what you are saying is that if you have detailed knowledge of which state the system is in now, then with the details you have you can predict the exact trajectory of the system through state space, and so the number of states the system can be in is 1 because you KNOW which one it must be in. And so its entropy is 0.

A version of my response would be: so you know which state the system is in at any instant of time, and so you feel like the entropy is log(1) at any instant in time. But the system still evolves through time through all the enumerated states. And its entropy is log(N(E)), the count of states it evolves through, and this is unchanged by the fact that you know at each instant which state it is in. So I know the details of every collision because I follow the motions in detail, but every collision still results in the system changing states, as every collision changes the direction and speed of two molecules in the system, and over some short time - call it a thermalization time - the system explores nearly all N(E) states. So despite our superior knowledge, which gives us the time-sequence of how the system changes from state to state and when, it still explores N(E) states, and its properties of melting ice or pushing pistons are still predictable purely from knowledge of N(E), and are not helped or hurt by a detailed knowledge of the time evolution of the system, the details of how it goes about exploring all N(E) states.

I have just reread this article on Maxwell's Demons. I note that at no point do they deviate from the classic definitions of temperature and entropy. And indeed, the message seems to be that once the demon is part of the system, the system grows classical entropy exactly as predicted, the demons themselves are engines producing the entropy increases needed to balance all equations.

Now about the relationship between internal energy and degrees of freedom. You're misunderstanding what a degree of freedom is.

I said rotation or movement of a rigid body. By definition a rigid body doesn't have modes of vibration in it. Of course you may point out that all real bodies are not truly rigid, as they are made out of molecules. But if the macroscopic motion is only weakly coupled to the vibrational modes of the material it is made of, then this is essentially saying the macroscopic and vibrational systems are insulated from each other, and so maintain their own internal temperatures, which can be different from each other - just as two gases separated by a heat-insulating wall can be at different temperatures, a feature often used in thermodynamic calculations.

And finally, you cannot calculate absolute energy for an object moving linearly through space. The kinetic energy depends on the rest frame.

You actually asked me to "Forget a motor, just imagine an object at 0 K moving linearly through outer space." And so I used the example you asked me to use.

Replies from: passive_fist
comment by passive_fist · 2015-02-10T20:35:36.559Z · LW(p) · GW(p)

Credentials aren't very relevant here, but if we're going to talk about them, I have a PhD in engineering and a BS in math (minor in physics).

and in a way which is no more subjective than the "spooky action at a distance" of electromagnetic and gravitational fields.

Again, as I've pointed out at least once before, entropy is not subjective. Being dependent on model and information does not mean it is subjective.

And so the entropy at this energy of this isolated system is log(N(E)) where N(E) is the number of states that have energy E.

Right off the bat, this is wrong. In a continuous system the state space could be continuous (uncountably infinite) and so N(E) makes no sense. "Logarithm of the number of states of the system" is just a loose way of describing what entropy is, not a precise way.

and so the number of states the system can be in is 1 because you KNOW which one it must be in. And so its entropy is 0.

The number of states a system can be in is always 1! A system (a classical system, at least) can never be in more than one state at a time. The 'number of states', insofar as it is loosely used, means the size of the state space according to our model and our information about the system.

And its entropy is log(N(E)), the count of states it evolves through, and this is unchanged by the fact that you know at each instant which state it is in.

There are several things wrong with this. First of all, it assumes the ergodic hypothesis (time average = space average) and the ergodic hypothesis is not required for thermodynamics to work (although it does make a lot of physical systems easier to analyze). But it also has another problem in that it makes entropy dependent on time scale. That is, choosing a fine time scale would decrease entropy. This is not how entropy works. And at any rate, it's not what entropy measures anyway.

I said rotation or movement of a rigid body. By definition a rigid body doesn't have modes of vibration in it.

But I'm not assuming a rigid body. You are. There is no reason to assume a rigid body. I offered an example of a cold flywheel turning a hot flywheel, as a system where energy moves from a cold object to a hot object. You decided for some reason that the flywheels must be rigid bodies. They aren't, at least not in my example.

Replies from: mwengler
comment by mwengler · 2015-02-11T14:20:39.247Z · LW(p) · GW(p)

Right off the bat, this is wrong. In a continuous system the state space could be continuous (uncountably infinite) and so N(E) makes no sense. "Logarithm of the number of states of the system" is just a loose way of describing what entropy is, not a precise way.

A finite system at finite energy has a finite number of states in quantum mechanics. So if we restrict ourselves to any kind of situation which could ever be realized by human investigators in our universe, conclusions reached using discrete states are valid.

There are several things wrong with this. First of all, it assumes the ergodic hypothesis

No, I am considering all possible states N(E) of the system at energy E. Many of these states will be highly spatially anisotropic, and I am still including them in the count.

But it also has another problem in that it makes entropy dependent on time scale. That is, choosing a fine time scale would decrease entropy. This is not how entropy works. And at any rate, it's not what entropy measures anyway.

Since you won't show me in any detail the calculation that leads to water having 0 temperature or 0 entropy if you have special knowledge of it, I can only work from my guesses about what you are talking about. And my guess is that you achieve low entropy - 0 entropy - because with sufficient special knowledge you reduce the number of possible states to 1 at any instant: the state that the system is actually in at that instant. But if you count the number of states the system has been in as time goes by, every time two things collide and change velocity you bounce to another state, and so even with perfect knowledge of the time evolution, over a long enough time you still cover all possible N(E) states. But over an insufficiently long time you cover a smaller number of states. In fact, the behavior of states looked at on time-scales too short to get "thermalization" - that is, too short to allow the system to change through a significant fraction of the available states - might possibly be describable with an entropy that depended on time, but the last thing I want to do is define new things and call them entropy when they do not have the properties of the classic entropy I have been advocating for through this entire thread.

You decided for some reason that the flywheels must be rigid bodies. They aren't, at least not in my example.

Given the length of this thread, I think it would be better if you read all the sentences in each paragraph rather than responding to one out of context.

Seriously, can't you give me an example of your 0 K, 0 entropy boiling water and tell me what you hope to learn from this example that we don't know already? We have probably gotten most of what we can get from an open-ended discussion of the philosophy of thermodynamics. A real example from you would certainly restrict the field of discussion, possibly to something even worth doing. Who knows, I might look at what you have and agree with your conclusions.

comment by Manfred · 2015-02-02T07:43:28.230Z · LW(p) · GW(p)

Nope, sorry.

Also, I still don't buy the claim about the temperature. You said in the linked comment that putting a known-microstate cup of tea in contact with an unknown-microstate cup of tea wouldn't really be thermal equilibrium because it would be "not using all the information at your disposal. And if you don't use the information it's as if you didn't have it."

If I know the exact state of a cup of tea, and am able to predict how that state will evolve in the future, the cup of tea has zero entropy.

Then suppose I take a glass of water that is Boltzmann-distributed. It has some spread over possible microstates - the bigger the spread, the higher the entropy (and also the temperature, for Boltzmann-distributed things).

Then you put the tea and the water in thermal contact. Now, for every possible microstate of the glass of water, the combined system evolves to a single final microstate (only one, because you know the exact state of the tea). The combined system is no longer Boltzmann in either subsystem, and has the same entropy as the original glass of water, just moved into different microstates.

Note that it didn't matter what the water's temperature was - all that mattered was that the tea's distribution had zero entropy. The fact that there has been no increase in entropy is the proof that all the information has been used. If the water had the same average energy as the tea, so that no macroscopic amount of energy was exchanged, then these things would be in thermal equilibrium by your standards.
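
Here's that argument in miniature (a toy model of my own; eight water states stand in for ~10^25):

```python
import random
from collections import Counter
from math import log2

def entropy(dist):
    """Shannon entropy (bits) of {state: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

rng = random.Random(1)

TEA = range(4)    # tea microstates; we know it is exactly in state 0
WATER = range(8)  # water microstates, spread out over some distribution

# Deterministic joint dynamics: a fixed permutation of (tea, water) pairs.
pairs = [(t, w) for t in TEA for w in WATER]
image = pairs[:]
rng.shuffle(image)
step = dict(zip(pairs, image))

# Joint distribution: tea known exactly, water uncertain.
weights = {w: w + 1 for w in WATER}  # arbitrary non-uniform spread
z = sum(weights.values())
joint = {(0, w): weight / z for w, weight in weights.items()}

evolved = Counter()
for state, p in joint.items():
    evolved[step[state]] += p

print(entropy(joint), entropy(dict(evolved)))
# Equal: each initial pair maps to exactly one final pair, so the
# probability mass is relabeled, not spread -- no entropy increase.
```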

Replies from: spxtr, passive_fist
comment by spxtr · 2015-02-02T10:25:33.273Z · LW(p) · GW(p)

Then you put the tea and the water in thermal contact. Now, for every possible microstate of the glass of water, the combined system evolves to a single final microstate (only one, because you know the exact state of the tea).

After you put the glass of water in contact with the cup of tea, you will quickly become uncertain about the state of the tea. In order to still know the microstate, you need to be fed more information.

Replies from: Manfred
comment by Manfred · 2015-02-02T12:01:19.196Z · LW(p) · GW(p)

If you have a Boltzmann distribution, you still know all the microstates - you just have a probability distribution over them. Time evolution in contact with a zero-entropy object moves probability from one microstate to another in a predictable way, with neither compression nor spreading of the probability distribution.

Sure, this requires obscene amounts of processing power to keep track of, but not particularly more than it took to play Maxwell's demon with a known cup of tea.

comment by passive_fist · 2015-02-02T08:00:33.402Z · LW(p) · GW(p)

That's wrong on both counts.

Firstly, even if you actually had a block of ice at 0 K and put it in thermal contact with a warm glass of water, the total system entropy would increase over time. It is completely false that the number of initial and final microstates is the same. Entropy depends on volume as well as temperature. (To see why this is the case, consider that you're dealing with a continuous phase space, not a discrete one.)

Additionally, your example doesn't apply to what I'm talking about, because nowhere are you using the information about the cup of tea. Again, as I said, if you don't use the information it's as if you didn't have it.

I am fully aware that saying it in this way is clumsy and hard to understand (and not 100% convincing, even though it really is true). That's why I'm looking for a more abstract, theoretical way of saying it.

Replies from: Manfred
comment by Manfred · 2015-02-02T12:08:51.918Z · LW(p) · GW(p)

I'm not really sure why you say volume is changing here.

I don't understand how you want information to be used, if not to calculate a final distribution over microstates, or what you think "losing information" is if not an increase in entropy. If we're having some sort of disconnect I'd be happy to talk more, but if you're trolling me I would like to not be trolled.

Replies from: passive_fist
comment by passive_fist · 2015-02-02T19:01:17.781Z · LW(p) · GW(p)

I'm not really sure why you say volume is changing here.

Think about putting a packet of gas next to a vacuum and allowing it to expand. In this case it's even easier to see that the requirements of your thought experiment hold - you know the exact state of the vacuum, because it has no microstates. Yet the total system entropy will still increase as the molecules of gas expand to fill the vacuum. Even if you have perfect information about the gas at the beginning (zero entropy), at the end of the experiment you will not. You will have some uncertainty. This is because the phase space itself has expanded.
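
The textbook number for this, for what it's worth (the standard Joule-expansion result for an ideal gas, nothing specific to the information framing):

```python
from math import log

k_B = 1.381e-23  # Boltzmann constant, J/K
N_A = 6.022e23   # Avogadro's number

N = N_A            # one mole of ideal gas
V1, V2 = 1.0, 2.0  # the gas doubles its volume expanding into the vacuum

delta_S = N * k_B * log(V2 / V1)  # free (Joule) expansion entropy increase
print(delta_S)  # ~5.76 J/K: entropy grows even though no heat flows in
```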

If we're having some sort of disconnect I'd be happy to talk more,

I think we are. I suggest becoming familiar with R. Landauer's and C. H. Bennett's work. I'd be happy to discuss this further if we are on the same page.

Replies from: Manfred
comment by Manfred · 2015-02-02T19:15:46.239Z · LW(p) · GW(p)

Think about putting a packet of gas next to a vacuum and allowing it to expand. In this case it's even easier to see that the requirements of your thought experiment hold

Oh, I see, you're thinking of particle exchange, like if one dumped the water into the tea. This case is not what I intended - by thermal contact I just mean exchange of energy.

With identical particles, the case with particle exchange gets complicated. There might even be some interesting physics there.

Replies from: passive_fist
comment by passive_fist · 2015-02-02T20:14:30.518Z · LW(p) · GW(p)

The thermodynamics of energy exchange and mass exchange are actually similar. You still get the increase in entropy, even if you are just exchanging energy.

Replies from: Manfred
comment by Manfred · 2015-02-04T08:03:24.818Z · LW(p) · GW(p)

On the one hand, this is a good point that exposes a weakness in my argument - if states are continuous rather than discrete, one can increase or decrease entropy even with deterministic time-evolution by spreading out or squeezing probability mass.

But I don't know how far outside the microcanonical ensemble this analogy you're making holds. Exchanging energy definitely works like exchanging particles when all you know is the total energy, but there's no entropy increase when both are in a single microstate, or when both have the same Boltzmann distribution (hm, or is there?).

I'll think about it too.

comment by Epictetus · 2015-02-10T21:45:33.123Z · LW(p) · GW(p)

The lesson is that statistical methods are superfluous if you know everything with certainty. It's worth noting that classical mechanics is completely symmetric with respect to time (does not have a distinguished "arrow of time"), whereas thermodynamics has a definite arrow of time. You run into problems if you assume that everything behaves classically and try to apply thermodynamic notions.

Landau and Lifshitz's Statistical Physics has some discussion of issues with entropy.

Replies from: passive_fist
comment by passive_fist · 2015-02-10T22:03:53.072Z · LW(p) · GW(p)

I understand what you're saying and I agree. Though it's worth mentioning that the 'arrow of time' in thermodynamics actually doesn't exist for closed, reversible systems.

comment by [deleted] · 2015-02-02T13:21:49.420Z · LW(p) · GW(p)

I'm pretty sure Manfred is right. If you drop a block of ice of unknown configuration into a cup of tea of known configuration, then your uncertainty about the system will grow over time. Of course entropy != temperature. You could say that the tea has zero entropy, but not zero temperature.

But what's the point of this thought exercise?

Replies from: passive_fist
comment by passive_fist · 2015-02-02T18:47:31.511Z · LW(p) · GW(p)

The block of ice is not of unknown configuration. The block of ice in my example is at 0 K, which means it has zero entropy (all molecules rigidly locked in a regular periodic lattice) and thus its configuration is completely known.

comment by Zian · 2015-02-09T06:35:15.212Z · LW(p) · GW(p)

The LessWrong logo seems to be broken at http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_Mind.

(more generally, there's no clear place to post about technical issues)

comment by Salemicus · 2015-02-06T17:43:46.091Z · LW(p) · GW(p)

Link - effects of partisanship on perceptions of bias. The bottom line is unsurprising given the institutional factors at work.

comment by ChristianKl · 2015-02-05T16:35:55.485Z · LW(p) · GW(p)

My last vaccination was when I was 8 in Germany. There were none in my teenage years. I'm now 28. I'm male. To what extent is it worthwhile for me to go to a doctor now for vaccination?

Replies from: Nornagest, polymathwannabe, fubarobfusco, Lumifer
comment by Nornagest · 2015-02-05T19:21:42.145Z · LW(p) · GW(p)

That depends on what vaccines you got as a child. In the States, the HPV and MCV4 vaccines are normally given after age eight, along with tetanus booster shots every decade or so, but I have no idea how Europeans do it.

It's something to ask a real doctor, but I do think it'd be worth asking -- assuming a similar schedule, there's a good chance you missed a couple of shots, and you're certainly due for a tetanus booster.

Replies from: alienist
comment by alienist · 2015-02-07T06:34:38.992Z · LW(p) · GW(p)

My understanding is that the US schedule is much more aggressive than the European one.

comment by polymathwannabe · 2015-02-05T19:00:02.688Z · LW(p) · GW(p)

HPV vaccination is important, especially as men are the carrier-transmitters.

comment by fubarobfusco · 2015-02-07T17:40:46.056Z · LW(p) · GW(p)

"Worthwhile" implies cost-benefit analysis. What's the cost to you? In the U.S., if you have health insurance, vaccinations are typically covered. So the cost is pretty much an hour or so of your time and some minor discomfort.

Replies from: ChristianKl
comment by ChristianKl · 2015-02-07T21:43:17.120Z · LW(p) · GW(p)

I live in Germany, so I do have health insurance.

Replies from: fubarobfusco
comment by fubarobfusco · 2015-02-08T00:13:58.637Z · LW(p) · GW(p)

Ah, I misunderstood — you wrote "when I was 8 in Germany" so I took that to mean that you weren't in Germany any more, so I fell back to the prior probability. Anyway, go see a doctor. :)

Replies from: ChristianKl
comment by ChristianKl · 2015-02-08T21:14:01.988Z · LW(p) · GW(p)

I wrote that to point out that I do have the kind of vaccinations that German people usually have at age 8.

comment by Lumifer · 2015-02-05T16:49:31.369Z · LW(p) · GW(p)

This question makes no sense for vaccination "in general" -- each vaccination against a specific disease is its own separate decision, driven, I guess, by how likely you think you are to find yourself exposed to that specific pathogen.

comment by maxikov · 2015-02-05T06:36:26.210Z · LW(p) · GW(p)

Should we be concerned about exposure to RF radiation? I had always assumed not, since it doesn't affect humans beyond heating, but then I found this:

http://www.emfhealthy.com/wp-content/uploads/2014/12/2012SummaryforthePublic.pdf

http://www.sciencedirect.com/science/article/pii/S0160412014001354

The only mechanism they suggest for non-thermal effects is:

changes to protein conformations and binding properties, and an increase in the production of reactive oxygen species (ROS) that may lead to DNA damage (Challis, 2005 and La Vignera et al., 2012)

One of the articles they cite is behind a paywall (http://www.ncbi.nlm.nih.gov/pubmed/15931683), and the other (http://www.ncbi.nlm.nih.gov/pubmed/21799142) doesn't actually seem to control for thermal effects (it has a non-exposed control, but doesn't have a control exposed to the same amount of energy in the visible or infrared band). The fact that heat interferes with male fertility is no surprise (http://en.wikipedia.org/wiki/Heat-based_contraception), but it's not clear to me whether there's any difference between being exposed to RF and turning on the heater (maybe there is, if the organism deals with internal and external heat differently, or maybe this effect is negligible).

Nonetheless, if there is a significant non-thermal effect, that alone warrants a lot of research.

Replies from: Manfred, Lumifer
comment by Manfred · 2015-02-05T18:27:46.460Z · LW(p) · GW(p)

You shouldn't be worried. Because of the low low energy of radio waves, all chemical transitions they could cause in your body are already happening due to random thermal motion.
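
The numbers behind that claim (standard constants; 1 GHz is just a representative cell-phone-band frequency, and 310 K is body temperature):

```python
h = 6.626e-34    # Planck constant, J*s
k_B = 1.381e-23  # Boltzmann constant, J/K

f_rf = 1e9      # Hz; representative cell-phone-band frequency (assumption)
T_body = 310.0  # K; body temperature

e_photon = h * f_rf       # ~6.6e-25 J per RF photon
e_thermal = k_B * T_body  # ~4.3e-21 J of typical thermal jostling

print(e_photon / e_thermal)  # ~1.5e-4
# A single RF photon carries ~10,000x less energy than ordinary thermal
# motion, and ~10^-6 of the few eV needed to break a chemical bond, so it
# cannot drive chemistry that thermal motion isn't already driving.
```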

If the amplitude is high enough, though, radio waves can still move ions around. So it's possible that standing next to an AM antenna would have some psychoactive effects, similar to transcranial magnetic or DC stimulation (though the existence of a similar effect for RF, one that shows up before the heat input becomes dangerous, is far from certain). But these would be chemical changes and have nothing to do with cancer.

Also, you're totally right about radio waves warming things up.

comment by Lumifer · 2015-02-05T16:42:24.602Z · LW(p) · GW(p)

The question is too general. If you find yourself in front of a microwave antenna dish, yes, you should be very much concerned about RF radiation X-D and there's not much doubt about that.

The cell-phones-cause-brain-cancer scare was successfully debunked, wasn't it?

Replies from: maxikov
comment by maxikov · 2015-02-05T19:12:15.024Z · LW(p) · GW(p)

If the effect of RF doesn't go beyond the thermal, then you probably shouldn't be concerned about sitting next to an antenna dish any more than about sitting next to a light bulb of equal power. At the same time, even if the effect is purely thermal, it may be different from the light bulb, since RF penetrates deeper into tissues, and the organism may or may not react differently to heat that comes from inside rather than from outside. Or it may not matter - I don't know.

And apparently, there is a noticeable body of research - in which I can poke some holes, but which at least adheres to the basic standards of peer-reviewed journals - that suggests the existence of non-thermal effects and links them to various medical conditions. However, my background in medicine and biology is not enough to thoroughly evaluate this research, beyond noticing that there are some apparent problems with it; it doesn't appear to be obviously false either.

Replies from: kpreid
comment by kpreid · 2015-02-07T17:47:37.543Z · LW(p) · GW(p)

next to an antenna dish any more than about sitting next to a light bulb of equal power.

Nitpick: A dish antenna is directional, a typical light bulb is not. For a fair comparison, specify a spotlight bulb.

comment by Adam Zerner (adamzerner) · 2015-02-03T21:15:06.759Z · LW(p) · GW(p)

If cars were just invented yesterday, knowing what you know about humans, would you think that it'd be sane to let people drive the way they currently do (speeds, traffic, conditions...)? I wouldn't.

Replies from: DanielLC, ZankerH, is4junk
comment by DanielLC · 2015-02-05T07:41:14.604Z · LW(p) · GW(p)

I would allow people to drive cars unrestricted. If I found that people were using cars significantly more dangerously than they should, I'd require insurance. If there were public outcry regardless of insurance (perhaps due to people considering life to be a sacred value, and there being lives lost), I'd put a price floor on insurance.

comment by ZankerH · 2015-02-04T11:58:38.657Z · LW(p) · GW(p)

Probably not the way it's done in the USA (from what I gather, drivers' licences are basically handed out like candy), but the way it's handled in most European countries - requiring comprehensive education, practical exercise, and independent examination on traffic laws, behaving in traffic, and operating a car. The one thing we can learn from the US, though, is the absolute stigma against drunk driving, which is just not present to that extent here. If cars were invented today, that's the one thing that'd probably change mechanically - a simple suite of sensors and a switch that shuts the engine down and engages the parking brake if the driver is drunk, fatigued or otherwise impaired.

Replies from: Lumifer, Douglas_Knight, Richard_Kennaway, alienist
comment by Lumifer · 2015-02-04T16:47:24.978Z · LW(p) · GW(p)

requiring comprehensive education, practical exercise, and independent examination on traffic laws, behaving in traffic, and operating a car.

I don't have appropriate statistics at hand, but from personal experience, making driver's licenses really expensive and inconvenient to get does not result in better drivers.

comment by Douglas_Knight · 2015-02-06T00:55:41.640Z · LW(p) · GW(p)

And yet, Americans have fewer accidents per mile than Europeans. This was true even 30 years ago, before the push against drunk driving.

Added: Actually, according to this (p 22), most of Europe has, over the course of the 21st century, overtaken America. Much of that is catching up to the American approach to drunk driving, but there are other things going on, since (as the chart says) America was ahead in 1970, before it became concerned with drunk driving. Anyhow, I doubt that rigorous license standards are new.

Replies from: gjm, ChristianKl
comment by gjm · 2015-02-06T09:41:49.470Z · LW(p) · GW(p)

(I haven't verified that that statistic is correct; I'm taking it on trust.)

The US is much less densely populated than Europe. Are more of those miles that Americans drive on nice straight wide near-deserted roads?

Europe and the US are both big varied places. I bet those accident rates are highly variable. What do you see if you break them down by population density, urban versus rural, rich versus poor, etc.?

comment by ChristianKl · 2015-02-06T16:08:54.830Z · LW(p) · GW(p)

Europeans are more likely to live in cities. City traffic produces more accidents per mile.

Replies from: emr
comment by emr · 2015-02-08T05:26:53.471Z · LW(p) · GW(p)

We should probably concern ourselves with fatality rates (serious disability rates probably track them). Because of differences in average speed, I expect the typical rural accident to be much more severe.

comment by Richard_Kennaway · 2015-02-04T12:22:14.228Z · LW(p) · GW(p)

If cars were invented today, that's the one thing that'd probably change mechanically - a simple suite of sensors and a switch that shuts the engine down and engages the parking brake if the driver is drunk, fatigued or otherwise impaired.

That's been tried, but there's been no uptake. You could say: OK, have the government require it, and that will solve the problem. We have seat belt laws and breathalysers, so why not mandatory automated breath testing before letting a car start? Well, here's something that various governments once tried, but it didn't last.

You don't have to be any sort of libertarian to understand that making people do what they ought isn't a magic wand. In democracies, the people who are making the people do what the people ought are, in the end, the people themselves. In other parts of the world, you don't get to say what the government should make people do.

comment by alienist · 2015-02-06T02:33:39.420Z · LW(p) · GW(p)

Probably not the way it's done in the USA (from what I gather, drivers' licences are basically handed out like candy), but the way it's handled in most European countries - requiring comprehensive education, practical exercise, and independent examination on traffic laws, behaving in traffic, and operating a car.

In the USA you also need to pass a test that includes both an exam on traffic laws and a road test. As for handing them out "like candy": true, you generally don't hear of people who couldn't pass the test, but do Europeans regularly have problems passing the exam?

Replies from: Emily, MathiasZaman, Lumifer
comment by Emily · 2015-02-06T10:07:56.974Z · LW(p) · GW(p)

I'm in the UK. I know a handful of people who've taken 8 tries or more to pass the practical test. They're not the norm, but I'd say passing it on your first go is regarded as mildly surprising! I'd guess two attempts is possibly the mode? It's an expensive undertaking, too, so most people aren't just throwing themselves at the test well before they're ready in the hope of getting lucky.

Replies from: Emily
comment by Emily · 2015-02-06T10:10:20.580Z · LW(p) · GW(p)

(On the other hand, the theory test (a prerequisite for attempting the practical) is widely regarded as a bit of a joke. I don't know whether this is because I have a social circle that is good at passing written exams, though. Maybe it's more challenging for the less academically inclined?)

comment by MathiasZaman · 2015-02-06T09:55:53.093Z · LW(p) · GW(p)

do Europeans regularly have problems passing the exam?

The particulars of the exam will vary from country to country, but Belgium supposedly has one of the more lax ones, and even here you routinely hear of people failing their driving exam. I actually looked it up because of your question, and according to Wikipedia:

  • About 47% of the written (theoretical) exams are successes. It's hard to say how many people fail, since you can try several times (and fail all of them).
  • Around 56% of the road tests are successful. Again, people can take multiple tests per year if they fail (although this is limited somewhat in that you need to spend time and money after failing every second attempt).
comment by Lumifer · 2015-02-06T15:51:34.182Z · LW(p) · GW(p)

but do Europeans regularly have problems passing the exam?

In many European countries getting a driver's license is very expensive -- we're talking hundreds to thousands of euros.

comment by is4junk · 2015-02-04T23:39:45.865Z · LW(p) · GW(p)

It would depend on how bad travel was without cars yesterday. Historically, the alternative was horses, which must have been really bad. I think if they had known back then about the speeds, traffic, and conditions, they still would have done it. Parts of China and India have proved it quite recently (within the last 50 years).

Now if we had most people in high density housing, good transport (both public and private), and online ordering/delivery then maybe cars would be very restricted.

comment by mkf · 2015-02-03T17:12:45.369Z · LW(p) · GW(p)

What's LessWrong's collective mind's opinion on the efficient markets hypothesis? From my Facebook feed I vaguely recall Eliezer being a supporter of it, and it also appeared in some of the Sequences. On the other hand, there is a post published here called A guide to rational investing, which states that "the EMH is now the noble lie of the economics profession".

I have a well-read layman's understanding of both the hypothesis and the various arguments for and against it, and would like to know what this community's opinion is.

Replies from: fubarobfusco, Ander, None, Salemicus, emr, alienist, Lumifer
comment by fubarobfusco · 2015-02-04T02:53:41.196Z · LW(p) · GW(p)

False: There are no $20 bills lying on the ground because someone would have picked them up already.

True: If there are a lot of people scanning the ground with high-powered money detectors, you are not going to find enough $20 bills with your naked eye to make a living on.

Replies from: None
comment by [deleted] · 2015-02-04T03:24:46.459Z · LW(p) · GW(p)

What is it, even?

comment by Ander · 2015-02-04T01:13:03.950Z · LW(p) · GW(p)

I don't believe in the strong form of the efficient market hypothesis. (I agree with some weaker versions of it).

If all humans made all investing decisions from a perfectly rational state, then the efficient market hypothesis would probably hold true, but in reality people sometimes become either overly confident or overly fearful, creating opportunities to exploit them by being more rational.

That said, in order to beat the market you must be better than the average participant (which is a high bar), and you must be enough better to overcome trading fees. This is similar to playing poker: you must be significantly better than the average of the other players at the table in order to beat both them and the rake.
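
As a toy sketch of how high that bar is (every number below is made up purely for illustration):

# How much edge you need just to break even on trading costs.
fee_per_trade = 0.002   # assume a 0.2% round-trip cost per trade
trades_per_year = 50    # assume a moderately active trader
annual_drag = fee_per_trade * trades_per_year  # 0.10, i.e. 10% of capital per year

market_return = 0.07    # assume the index returns 7% per year
breakeven_return = market_return + annual_drag

# Roughly 0.17: this trader must earn 17% per year before costs just to
# match the index after costs -- merely being "above average" isn't enough.
print(breakeven_return)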

For the average person, the advice to simply buy index funds with a portion of every paycheck is the advice that will bring them the most utility, and one could be considered to be doing them a service by convincing them that the efficient market hypothesis was true, even if it isn't.

comment by [deleted] · 2015-02-04T05:36:54.165Z · LW(p) · GW(p)

My family has been investing for as far back as anyone can remember, and consistently beating the market for as far back as anyone can remember. Hell, my mother compares herself to the indices -- which makes perfect sense: if you're not beating the S&P 500, you may as well just buy whatever's in it. She's not buying whatever's in it.

We have methods that have been handed down for as far back as anyone can remember, supplemented by the books that back up our methods, some of which are about exploiting systemic holes in the way large funds work -- relative legibility, inability of large funds to invest in small companies, etc.

So no.

(Disclaimer: I haven't invested at all because I don't have the money to. Once I have a regular and sufficiently large income, I plan to stick it in an index fund until I've joined an investment club, spent several years studying the methods, etc., and only start picking stocks after that.)

comment by Salemicus · 2015-02-03T18:35:47.786Z · LW(p) · GW(p)

As Lumifer says, the truth value of the EMH depends on the exact formulation, and there are several variations even within the typical 'strong/semi-strong/weak' divisions.

But let me put it this way - I don't take people who argue against the weak-form EMH seriously, unless they own a yacht.

comment by emr · 2015-02-04T21:51:15.584Z · LW(p) · GW(p)

While we're here: How do real-world incentive structures interact with the EMH?

In the same way that "No one was ever fired for buying IBM", is it true that "No one was ever fired for selling when everyone else was"? And would that mean someone without these external social incentives would have an edge on the market? For example, what about a rule like "put money into an index fund whenever the market has gone down for X consecutive days and everyone is sufficiently gloomy"?
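
A minimal sketch of the mechanical half of that rule, assuming daily closing prices (the "everyone is sufficiently gloomy" part is a sentiment judgment this doesn't capture):

def should_buy(prices, x=5):
    # Hypothetical rule: buy the index fund after x consecutive down days.
    # 'prices' is a list of daily closing prices, most recent last.
    if len(prices) < x + 1:
        return False
    recent = prices[-(x + 1):]
    return all(later < earlier for earlier, later in zip(recent, recent[1:]))

print(should_buy([10, 9, 8, 7, 6, 5]))  # True: five straight down days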

comment by alienist · 2015-02-07T06:58:10.980Z · LW(p) · GW(p)

The so-called "weak efficient market hypothesis" is more-or-less correct. The "strong efficient market hypothesis" falls apart once you attempt to taboo "efficient".

Another way to phrase this is that while market "inefficiencies" exist in some strict sense, finding them is a hard problem. (The general case of this problem is NP-hard.)

comment by Lumifer · 2015-02-03T18:03:42.319Z · LW(p) · GW(p)

It's debated occasionally. I don't think there is a consensus on LW.

It might be useful for you to distinguish various forms of EMH (e.g. strong, semi-strong, and weak). Many people hold different opinions about different forms.

comment by G0W51 · 2015-02-08T03:02:06.056Z · LW(p) · GW(p)

What are some papers arguing that one shouldn't dedicate almost all efforts to decrease existential risk? I ask this because all the papers I've read have made extremely good arguments on why decreasing x-risk is important, but I've found none saying that it's not so important, and I want to be informed before spending so much time and effort decreasing x-risk.

Replies from: ChristianKl
comment by ChristianKl · 2015-02-08T21:10:26.842Z · LW(p) · GW(p)

https://intelligence.org/wp-content/uploads/2014/01/01-16-2014-conversation-on-existential-risk.pdf is a discussion between Eliezer, Holden, and Luke where Holden argues for thinking about Global Catastrophic Risks instead of x-risks.

comment by Gram_Stone · 2015-02-07T09:15:10.526Z · LW(p) · GW(p)

For a couple of days, I've been trying to explain to pinyaka why minds-in-general, and specifically, maximizers, are not necessarily reward maximizers. It's really forced me to flesh out my current understanding of AGI. I wrote the most detailed natural language explanation of why minds-in-general and maximizers are not necessarily reward maximizers that I could muster in my most recent reply, and just in case it still didn't click for pinyaka, I thought I'd prepare a pseudocode example since I had a sense that I could do it. Then I thought that instead of just leaving it on my hard drive or at the bottom of a comment thread, it might be a good idea to share it here to get feedback on how well I'm understanding everything. I'm not a programmer, or a computer scientist, or a mathematician or anything; I pretty much just read a book about Python a few years ago, read Superintelligence, and poked around LessWrong for a little bit, so I have a feeling that I didn't quite get this right and I'd love to refine my model. The code's pretty much Python.

EDIT: I couldn't get the codeblocks and indenting to work, so I put it on Pastebin: http://pastebin.com/UfP92Q9w

comment by polymathwannabe · 2015-02-06T20:23:56.649Z · LW(p) · GW(p)

This columnist argues that more personal freedom is worth a few more sick people dying. In other words, preventing death from disease is not a terminal goal for him; it's sacrificeable for his actual terminal goal of less government intrusion. Setting aside the mindkill potential over Obamacare, I find his choice of terminal goals worrying.

Replies from: Lumifer, is4junk
comment by Lumifer · 2015-02-06T21:17:22.411Z · LW(p) · GW(p)

preventing death from disease is not a terminal goal for him; it's sacrificeable

You're using the wrong framework, one which assumes that in every choice there must be only one terminal goal, and that if you sacrifice anything, that sacrifice is not terminal.

A more useful framework would recognize that there is a network of terminal (and other) goals and that most decisions involve trade-offs. It's very common to give up a measure of satisfaction of some terminal goals in order to achieve satisfaction of other terminal goals.

In this specific case, trading off death from disease against government intrusion sounds like a normal balance to me -- your choice is a function of your values and of how much death prevention you get or give up in exchange for how much government intrusion. In specific situations I can see myself leaning either this way or that way.

I find your worry over the trade-off between terminal goals worrying :-P

comment by is4junk · 2015-02-06T20:56:55.274Z · LW(p) · GW(p)

Are you worried about his ethics or is he making a mistake in logic?

The columnist says "This opinion is not immoral. Such choices are inevitable. They are made all the time." Is that the part you disagree with?

Replies from: polymathwannabe
comment by polymathwannabe · 2015-02-06T22:24:56.546Z · LW(p) · GW(p)

It's his ethics I object to. If we accept his ethics, his argument makes perfect logical sense. But I cannot accept an ethical system where life-and-death is trade-off-able for anything that is not life-and-death.

Replies from: gjm, ChristianKl
comment by gjm · 2015-02-06T23:04:45.349Z · LW(p) · GW(p)

If you drive, cross the road, eat desserts, etc., then you are (for yourself) trading off your own prospects of life and death against other things.

comment by ChristianKl · 2015-02-08T21:29:38.712Z · LW(p) · GW(p)

So basically you'd call for terrorists to be tortured when that could prevent people from dying?

Replies from: None, polymathwannabe
comment by [deleted] · 2015-02-08T21:46:33.963Z · LW(p) · GW(p)

Jack Bauer utilitarianism has all sorts of proponents.

comment by polymathwannabe · 2015-02-08T21:56:05.639Z · LW(p) · GW(p)

How do you conclude that from my wording?

Replies from: ChristianKl
comment by ChristianKl · 2015-02-08T22:04:38.175Z · LW(p) · GW(p)

I got the impression that you consider life-and-death to be the ultimate terminal value.

In the West we forbid governments from torturing but not from killing. Our laws consider the value of not torturing to be higher than that of not killing.

When we accept deaths in order to prevent torture, we are trading values off against each other.

Replies from: polymathwannabe
comment by polymathwannabe · 2015-02-09T13:01:49.870Z · LW(p) · GW(p)

I oppose torture too, even if the death penalty is worse. In the West, the death penalty is actually disappearing. The only exception that still gives the West a bad name is the U.S., which by now should stand for "Usual Suspect."

Replies from: ChristianKl
comment by ChristianKl · 2015-02-09T13:14:07.395Z · LW(p) · GW(p)

Then why wouldn't you accept that it's okay to trade off issues of life and death against other values, such as having no torture?

Replies from: polymathwannabe, polymathwannabe
comment by polymathwannabe · 2015-02-17T18:27:22.384Z · LW(p) · GW(p)

I'm sorry for the time I've taken to respond to this one. You have asked a very difficult question. Please don't think I've been evading it; it's a question nobody should afford to evade.

Prevention of torture is almost as important to me as prevention of death. I would not torture a suspect to obtain information that might save lives; I support a legal system that grants detainees and convicts the same full legal rights as everyone else. I include neither torture nor the death penalty in my definition of a civilized society.

So I wouldn't trade pain for a life. Having been suicidal in the past, I now know not to trade life for pain, either.

Replies from: ChristianKl, gjm
comment by ChristianKl · 2015-02-17T22:49:54.481Z · LW(p) · GW(p)

I include neither torture nor the death penalty in my definition of a civilized society.

Let's say you are a cop with a gun. There's a criminal who threatens to cut off someone's finger and then hand himself in.

Should the cop be allowed to kill the criminal by shooting him? Our laws say yes. It's a defensive act. Killing the criminal to prevent the finger from being cut off is right.

With torture it's a different matter. If a criminal has hidden a hostage somewhere, we don't allow the cop to torture the criminal into giving up the location of the hostage.

We value the good of not torturing more highly than the good of not killing. If you look at the US Constitution you will find a list of values. Those are all important. You won't find "don't kill" in that list, just "nor be deprived of life, liberty, or property, without due process of law".

comment by gjm · 2015-02-17T20:11:14.134Z · LW(p) · GW(p)

So, if I understand you correctly:

  • Death is so bad that you would never accept one extra death, whatever the compensating gains.
  • Torture is so bad that you would never accept one extra instance of torture, whatever the compensating gains.

What do you do when you're in the unfortunate position of having to choose between deaths and tortures? E.g., some crazed criminal has set up an infernal machine that will either torture M people or kill N people, it's boobytrapped so that if you try to break it or otherwise stop it doing one of those it will torture M+N people and then kill them, but you do have the option to flip the switch from "torture" to "death" or vice versa.

Your comment above suggests you wouldn't accept any extra torture even to save multiple lives; since you say that preventing torture is (only) almost as important to you as preventing death, I guess you also wouldn't accept any extra deaths even to prevent multiple tortures. But that leaves you in a situation where, e.g., you wouldn't switch from 1000 tortures to 1 death, nor from 1000 deaths to 1 torture. That's pretty counterintuitive, to say the least.

Here's one striking consequence: suppose you have a room with two such machines (operating on completely disjoint sets of people). One is currently set to "1000 tortures" (other option: 1 death) and the other to "1000 deaths" (other option: 1 torture). It seems like you have to either (1) endorse leaving them set that way even though switching both switches takes you from 1000D+1000T to 1D+1T, or (2) endorse at least one of the individual switchings even though it trades off torture against death, or (3) say that it's wrong to switch either switch alone but right to switch both, even though the two switches affect completely different people. All of those seem to me like very painful bullets to bite.

Oh, but in real life you would never have to make such an artificial choice between torture and death! Really? If permitting versus prohibiting euthanasia comes to a vote, what will you do? For that matter, as I remarked in a reply to you elsewhere, permitting (e.g.) driving means accepting some extra deaths for the sake of mere convenience. (But an awful lot of convenience.) Will you cross the road to buy something for me from the shop, if I pay you $10000? You will? Then you're accepting an extra risk of death for the sake of mere money. Should dentists be allowed to X-ray people's teeth? I bet doing so incurs a (tiny) extra risk of death from cancers of the mouth. Is it OK to trade off better teeth against death?

[EDITED to add the following two remarks:]

In practice we very rarely face explicit tradeoffs against either torture or death, and having a policy of never making such a tradeoff in favour of torture or death is probably a very good idea: most of the time it will lead you the way you find it better to go. But once you're dealing in politics -- which, you'll recall, is where we started -- you're inevitably looking at things that affect the lives of thousands or millions of people in subtle ways, and this is exactly the sort of situation in which heuristics like "never let anyone die" are liable to let you down and lead you in directions that end up doing more harm overall.

As it happens, in the present instance I find the article you linked to as odious as I expect you do. I do not think the alleged benefits the author is weighing against extra deaths are anywhere near good enough. But my objection isn't, and couldn't be, simply that the author is prepared sometimes to weigh other things against death. A serious attempt to set political policy on the basis of minimizing deaths at all costs would, I think, rapidly lead to disaster[1].

[1] Caveat: It's possible that minimizing deaths on long enough timescales ends up giving good short-term policies. (Maybe almost any objective that isn't entirely insane works reasonably well in the short term if you apply it in the long term.) But in that case, you can't just assume that minimizing deaths in the short term is a good policy; it could be e.g. that trading deaths against freedom in the short term ends up better for everyone in the long term.

comment by polymathwannabe · 2015-02-09T13:46:28.274Z · LW(p) · GW(p)

Then you accept why wouldn't you say

I'm sorry; that syntax is not clear to me.

Replies from: ChristianKl
comment by ChristianKl · 2015-02-09T13:56:35.880Z · LW(p) · GW(p)

Sorry, the sentence was messed up. I corrected it.

comment by Lumifer · 2015-02-06T16:54:06.162Z · LW(p) · GW(p)

Almost half of all DNA present on the NYC subway’s surfaces matches no known organism. (source)

Heh :-D

comment by [deleted] · 2015-02-06T15:34:43.266Z · LW(p) · GW(p)

Anyone know how to start a mutual-aid society?

Replies from: Transfuturist, None
comment by Transfuturist · 2015-02-07T06:59:56.941Z · LW(p) · GW(p)

Well, first you have to know how to start a society in the first place.

Replies from: None
comment by [deleted] · 2015-02-07T10:23:09.997Z · LW(p) · GW(p)

It's a specific kind of nonprofit organization.

Replies from: Transfuturist
comment by Transfuturist · 2015-02-07T19:36:01.566Z · LW(p) · GW(p)

...Interesting.

comment by [deleted] · 2015-02-08T20:37:24.267Z · LW(p) · GW(p)

Similar to the fraternal organizations, like the Freemasons or Oddfellows?

Replies from: None
comment by [deleted] · 2015-02-13T14:56:04.307Z · LW(p) · GW(p)

Not so much.

Replies from: ChristianKl
comment by ChristianKl · 2015-02-15T02:41:48.843Z · LW(p) · GW(p)

The Wikipedia article does include them in its description:

Examples of benefit societies include trade unions, friendly societies, credit unions, self-help groups, landsmanshaftn, immigrant hometown societies, fraternal organizations such as the Freemasons and the Oddfellows, coworking communities, and many others.

comment by emr · 2015-02-04T21:09:45.872Z · LW(p) · GW(p)

How can you learn to calibrate long-term predictions when it takes so long to get feedback?

comment by is4junk · 2015-02-03T03:56:29.115Z · LW(p) · GW(p)

In college, I had a professor ask us to pick any subject, make up any "facts", and try to make a compelling argument. He then had us evaluate other people's essays. Let's just say I wasn't impressed with some of my fellow classmates' arguments.

Sometimes you see this in the courtroom as a failure to state a claim.

Would it be interesting to have an open thread where we try this out?

[pollid:814]

Replies from: fubarobfusco
comment by fubarobfusco · 2015-02-03T04:17:35.913Z · LW(p) · GW(p)

How does this differ from the rationalization game?

Replies from: is4junk
comment by is4junk · 2015-02-03T04:36:40.004Z · LW(p) · GW(p)

I am not sure. A quick search on LessWrong only led me to Meet Up: Pittsburgh: Rationalization Game

What I am proposing would be more of an exercise in argument structure. Either the 'facts' are irrelevant to the given argument or there are more 'facts' needed to support the conclusion.

Replies from: fubarobfusco
comment by fubarobfusco · 2015-02-03T07:36:28.832Z · LW(p) · GW(p)

Huh ... I had thought that it had been posted here. Oops!

Anyway, the rationalization game is an exercise for learning to notice what it feels like to rationalize your existing beliefs rather than looking for evidence. It pretty much amounts to taking an arbitrary proposition as your "bottom line" and trying to come up with "support" for it. The goal is to be aware of what sorts of arguments you use when you rationalize, so you can try to stop doing it.

comment by [deleted] · 2015-02-08T14:13:20.074Z · LW(p) · GW(p)

Can anyone please fill me in on what's the big damn deal with PUA on LessWrong? It seems to be geared toward screwing women who are only as deep as their genitals get. Who the hell cares? Pretty sure every guy here would love a girl to have fun conversations with.

Most of the stuff I've seen against it is about scaring people, or making LW a semi-cult, or some other non-solution to a problem some guys REALLY want to solve here without going to /r/socialskills and following an endless recursion of useless, pathetic nonsense, as per my experience. Maybe not make a post titled "How to fuck hot bitches", but rather "Overcoming anxiety using rationality".

Seems like most for- and against-pickup arguments are extremely idiotic and are suboptimal in both theory and implementation.

Replies from: Viliam_Bur, ChristianKl
comment by Viliam_Bur · 2015-02-09T08:26:02.032Z · LW(p) · GW(p)

Similarly to political topics, the label "PUA" refers to a diverse set of beliefs and techniques. In such situations, humans have an instinct to choose one of the following options: "accept all beliefs" or "reject all beliefs" (internally represented as "the label is good" or "the label is bad"). Therefore, whenever we start debating the topic, it quickly becomes a debate about whether PUAs are good or bad.

To prove that PUAs are good: select one belief that makes sense, preferably one that many people ignore. Then say "this is the real PUA".

To prove that PUAs are bad: select one belief that seems harmful and is offensive to many people. Then say "this is the real PUA".

We have already played a few rounds of this game, and it wasn't productive.

Replies from: ChristianKl, None
comment by ChristianKl · 2015-02-09T13:58:09.273Z · LW(p) · GW(p)

I think there are two separate issues:

(1) The label PUA and who uses it for what purpose.
(2) Individual beliefs held by the crowd that calls itself PUA.

I have a handful of Facebook friends, whom I have met face to face, who make money in the male dating advice market. I got to know them in different personal development contexts. Not one of them likes to be called a PUA, and there are reasons why that's the case.

It often happens that guys who have little success with women read PUA material and then buy into the marketing promise. They want to believe and then argue for PUA. At that point it can make sense to point out that they have a false idea of what PUA happens to be.

Look at Mystery; he's an archetypal PUA, one who self-labels that way. The guy can approach. He can even do it successfully. As far as I know he never lost approach anxiety. That's in itself no big deal, but the guy is depressed. His fate is not a state that the usual person who buys into the PUA marketing myth wants to achieve. Quite a few guys also don't have the consciousness required to even get those kinds of results.

That's the label. When it comes to individual techniques and beliefs, you don't need to use the PUA label to discuss them. It's useful to discuss social skills and we do so from time to time.

Replies from: bogus
comment by bogus · 2015-02-10T03:03:32.616Z · LW(p) · GW(p)

I agree that the "PUA" label has some problems, but I'd still consider it very much worthwhile. Talking about "dating advice" just doesn't pinpoint much about what cluster of beliefs, techniques and attitudes you're referring to. Yes, some folks may have false expectations about PUA, but a broader and more confusing label is likely to score worse on this metric, not better.

Also, Mystery is definitely not a typical case among PUAs. The book The Game describes his depression in detail, in a way that makes this quite clear. Overall, PUA 'gurus', people who are driven to expend a lot of effort on the training and techniques, are likely to be weirder than most. But it's not clear that this should put off average folks.

Replies from: ChristianKl
comment by ChristianKl · 2015-02-10T13:54:26.979Z · LW(p) · GW(p)

Talking about "dating advice" just doesn't pinpoint much about what cluster of beliefs, techniques and attitudes you're referring to.

Tucker Max had feminists demonstrating against him for advocating misogyny. When he gives evolutionary-psychology-based male dating advice by teaming up with evolutionary psychology professor Geoffrey Miller, the impulse to cluster him as a PUA comes easily. At the same time, he rejects the label.

He thinks that PUAs wrongly objectify women. He thinks that it's important to understand the female perspective as women articulate it themselves.

There are good reasons why misogyny is associated with the label. It's worthwhile to distance oneself from that: for ethical reasons, for reasons of building a genuine connection, and for general emotional wellbeing.

Overall, PUA 'gurus', people who are driven to expend a lot of effort on the training and techniques, are likely to be weirder than most.

I don't think you get anywhere with that framework if you aren't driven to expend a lot of effort. Quite a few people spend time going to PUA lairs and reading PUA material but don't get any results, because they don't really put in the effort.

The book The Game describes his depression in detail, in a way that makes this quite clear.

I don't know the mental state of everybody at Project Hollywood, but Tyler was also depressed. Both of them were still depressed five years later.

Replies from: bogus
comment by bogus · 2015-02-10T16:20:15.155Z · LW(p) · GW(p)

There are good reasons why misogyny is associated with the label. It's worthwhile to distance oneself from that: for ethical reasons, for reasons of building a genuine connection, and for general emotional wellbeing.

As Viliam_Bur said above, you can prove anything simply by selecting biased samples. Some PUAs definitely have unhelpful attitudes towards women, and PUA jargon clearly shows a legacy of bad attitudes from past "gurus". But AIUI, many PUA gurus nowadays understand that these are not just ethically problematic, but also have very real drawbacks for their more specific goals. At the same time, we have a new "red pill" label for folks who are even more misogynistic than PUAs used to be.

I don't think you get anywhere with that framework if you aren't driven to expend [ . . . ] effort

Yes, but a PUA "guru" is still someone who is putting in a lot more effort than most. Many people only engage with PUA as far as they strictly need to. Once they've gotten a few dates and started an LTR, they just drop out of the scene. I'd argue that these should count as successes.

Replies from: ChristianKl
comment by ChristianKl · 2015-02-10T20:22:02.076Z · LW(p) · GW(p)

But AIUI, many PUA gurus nowadays understand that these are not just ethically problematic, but also have very real drawbacks for their more specific goals.

The point is that even some people who have speaking slots at PUA events don't like to be called PUAs, precisely to dissociate themselves from those values.

Many people only engage with PUA as far as they strictly need to. Once they've gotten a few dates and started an LTR, they just drop out of the scene. I'd argue that these should count as successes.

If you read David Burns's "The Feeling Good Handbook", he makes the point that showing vulnerability is a condition for getting someone to love you. A lot of the behavior that beginning PUAs adopt goes in the other direction and might make it less likely that the person gets into a long-term relationship.

If you listen to Tucker Max et al.'s podcast, you will find the advice to have a clean flat in order to signal conscientiousness when a woman comes over. For most people here, that's likely good advice. On the other hand, most self-labeled PUAs will tell you that things like that don't matter. They have gotten women over when their flat was in an awful state and things still worked.

Tucker Max et al. did run a study on Mechanical Turk to see what kind of shoes women prefer men to wear on dates. The result was that leather shoes are good but the price doesn't really matter. You could go with a PUA who tells you to peacock or that looks don't matter, but if you are a nerd, then likely just wearing leather shoes is a good bet.

Telling people to clean their flat and wear leather shoes doesn't help with selling bootcamps. Telling people to go cold approach in bars and clubs does. It produces a lot of painful anxiety and makes guys think that it's important to spend large sums of money to learn to deal with it.

Some PUAs definitely have unhelpful attitudes towards women, and PUA jargon clearly shows a legacy of bad attitudes from past "gurus".

If someone who gives seminars to teach social skills uses vocabulary that has effects he doesn't want, that says something about that person's abilities. In my experience, the people I know who are skilled with language don't do that.

Replies from: Richard_Kennaway, bogus
comment by Richard_Kennaway · 2015-02-11T14:22:35.980Z · LW(p) · GW(p)

If you read David Burns's "The Feeling Good Handbook", he makes the point that showing vulnerability is a condition for getting someone to love you.

What does "vulnerable" mean in this context? People use the word a lot, but nothing listed against it in the dictionary strikes me as a positive thing: susceptible of receiving wounds or physical injury, open to attack or injury of a non-physical nature, in need of special care because of age, disability, risk of abuse or neglect. The general Google hits on the word are even more unattractive.

Replies from: Good_Burning_Plastic, Epictetus, ChristianKl
comment by Good_Burning_Plastic · 2015-02-11T21:44:39.754Z · LW(p) · GW(p)

I think he means it in Mark Manson's sense.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-02-12T13:25:43.827Z · LW(p) · GW(p)

Ah. I'll pass.

comment by Epictetus · 2015-02-11T14:47:02.606Z · LW(p) · GW(p)

What does "vulnerable" mean in this context?

Exactly what the dictionary says: open to attack or injury of a non-physical nature. It doesn't sound very good when put in those terms, granted, but the main idea is that showing vulnerability is a way of signalling trust. You give someone the power to harm you, but you trust them not to abuse it. One form is sharing secrets or personal details (if you read HPMOR, this point comes up).

comment by ChristianKl · 2015-02-11T16:53:18.714Z · LW(p) · GW(p)

open to attack or injury of a non-physical nature

You do need to be open to injury of a non-physical nature in order to empathize with another person in a way where you feel their pain. The act of caring about another person opens you up to feeling bad when they get hurt.

But the openness to negative emotions also means an openness to positive emotions. You feel good when the other person feels good. If you are vulnerable to a girl and she smiles at you in deep happiness, that feels good. The ability to do that gives the girl a sense of agency. She's not just an object but an agent.

A lot of that is also unconscious. Emotional flow is part of most healthy relationships, and a lot of people have barriers against it.

comment by bogus · 2015-02-11T00:24:32.362Z · LW(p) · GW(p)

If you read David Burns's "The Feeling Good Handbook", he makes the point that showing vulnerability is a condition for getting someone to love you. A lot of the behavior that beginning PUAs adopt goes in the other direction and might make it less likely that the person gets into a long-term relationship.

It's a balancing act. Most people are a lot more likely to show too much vulnerability as opposed to too little, so the advice to appear less vulnerable would seem to be justified. Similarly, a lot of the things PUAs say "don't matter" actually do matter, but only as a last resort. It's silly to put a lot of extra effort into things like making your flat extra squeaky clean, when you can pick the low hanging fruit of improvements in your social image and attitude.

And if PUA always involved spending "large sums of money" on bootcamps or proprietary material, you'd be quite right - it wouldn't be nearly as interesting as it is. But much basic advice is freely available online, although it may require some time and effort to find the best communities. (And that's one reason why I think one should be aware of the label, despite its problems: it's an easy way to find interesting material.)

Replies from: ChristianKl
comment by ChristianKl · 2015-02-11T12:28:07.626Z · LW(p) · GW(p)

Most people are a lot more likely to show too much vulnerability as opposed to too little, so the advice to appear less vulnerable would seem to be justified

I don't think that's true. Openly and directly speaking about one's desires, for example, isn't an easy skill. Many guys are tense because they are afraid to fail or to be rejected, and they put up a lot of barriers towards genuine intimacy.

It's also worth noting that you speak about "appearing vulnerable" while I speak about vulnerability.

If a woman touches you, do you tense up or do you relax? If you tense up because you are afraid of intimacy, it's going to make connection harder. It's even worse if you engage in physical contact because you read on the internet that you should and then tense up because you are afraid of physical contact.

There are many sources for improving social image and attitude. It happens frequently that people who start with PUA begin to behave in a way that burns existing social connections.

It's silly to put a lot of extra effort into things like making your flat extra squeaky clean, when you can pick the low hanging fruit of improvements in your social image and attitude.

I didn't say "extra squeaky clean"; I just said clean. Don't strawman.

A lot of women openly state that they judge men by their shoes. Wearing leather shoes instead of sneakers isn't a high-hanging fruit.

But much basic advice is freely available online

Much of that basic advice is given in a way designed to maximize bootcamp attendance.

Replies from: bogus, Good_Burning_Plastic
comment by bogus · 2015-02-11T21:05:35.576Z · LW(p) · GW(p)

Openly and directly speaking about one's desires, for example, isn't an easy skill.

That's a higher-level skill though. What makes this possible in the first place is having a secure and well-defined "frame", which is an intended result of pursuing what you call "lower vulnerability". Perhaps the term "vulnerability" is simply too ambiguous.

If a woman touches you, do you tense up or do you relax? If you tense up because you are afraid of intimacy, it's going to make connection harder. It's even worse if you engage in physical contact because you read on the internet that you should and then tense up because you are afraid of physical contact.

You're right about this pitfall of physical contact; somewhat ironically, this is one thing that can be easily spotted and addressed by an actual PUA coach, while it's really hard to self-correct on one's own. You say that PUAs seek to "maximize bootcamp attendance" and this makes their free advice less than trustworthy, but that just doesn't reflect my experience. There's quite a bit of annoying commercialism, but overall development of the community largely occurs through free-ranging discussion.

There are many sources for improving social image and attitude.

Sure, but how many of these sources are as clear and (loosely) empirically based? (One of the tenets of PUA is A/B field testing of every innovation: this is the actual underlying reason for their focus on the unforgiving bar- and club-environment. It's not about making it harder for newcomers and selling more bootcamps - that's just a convenient side effect.)

comment by Good_Burning_Plastic · 2015-02-11T21:49:48.690Z · LW(p) · GW(p)

I didn't say "extra squeaky clean"; I just said clean. Don't strawman.

But the former is what people whose flat is already clean are likely to hear when you say the latter -- which is why one should reverse all the advice that one hears.

Replies from: ChristianKl
comment by ChristianKl · 2015-02-12T00:45:54.953Z · LW(p) · GW(p)

The main point is about sources of advice and not about advising the reader to adopt a specific behavior. Tucker Max does go into more detail on his podcast.

Even at the advice level, the advice is to signal conscientiousness. Not cleaning your dishes for three days and having them pile up in the kitchen signals low conscientiousness.

The difference between clean and "extra squeaky clean" doesn't signal additional conscientiousness but being neurotic.

The great thing about seeing that you signal conscientiousness towards women is that developing conscientiousness is useful in general in life. Impressing women happens to be quite a good motivator.

Following advice without understanding the reasons behind the advice is seldom optimal. It leads to cargo-culting. Especially for online advice it's foolish.

In person I can ask a lot of questions to understand what someone's issue happens to be, and then give targeted advice. Giving advice is usually not the main goal when I write something on LW. It's intellectual exchange.

Also dialectics.

Replies from: bogus
comment by bogus · 2015-02-12T01:22:34.605Z · LW(p) · GW(p)

The great thing about seeing that you signal conscientiousness towards women is that developing conscientiousness is useful in general in life. Impressing women happens to be quite a good motivator.

The kind of folks who are going to follow through with this sort of advice in the first place are likely to be more conscientious than average, not less. Given that, signaling conscientiousness is not necessarily good advice - such folks may be better off developing other skills, which are also valuable in other contexts. Saying that you should "impress women" strikes me as the kind of truism that's common in bad dating advice. There are many ways of being impressive, and knowing which are best for you in any given context is a useful skill to have.

Replies from: ChristianKl
comment by ChristianKl · 2015-02-12T11:05:04.839Z · LW(p) · GW(p)

The kind of folks who are going to follow through with this sort of advice in the first place are likely to be more conscientious than average, not less.

Being better than average doesn't mean that it's useless to improve on it.

such folks may be better off developing other skills, which are also valuable in other contexts

Developing conscientiousness usually doesn't stand in the way of developing other skills.

Saying that you should "impress women" strikes me as the kind of truism that's common in bad dating advice.

I didn't. Most heterosexual guys already spend energy on "impressing women"; my recommendation is about channeling that energy productively.

Add two women to a group of ten males and the behavior of that group changes. They suddenly optimize more for the image they are projecting.

comment by [deleted] · 2015-02-09T11:45:45.773Z · LW(p) · GW(p)

We have already played a few rounds of this game, and it wasn't productive.

That seems like a likely way to describe it, considering the people you described are trying to say that their opinion is right rather than trying to reach a conclusion.

comment by ChristianKl · 2015-02-08T16:55:13.174Z · LW(p) · GW(p)

Maybe not make a post titled "How to fuck hot bitches", but rather "Overcoming anxiety using rationality".

Plenty of people here engage with the topic more deeply, and it's not the headline that matters.

As far as those two go, if the goal is to overcome anxiety, then the highly anxiety-inducing activity of cold-approaching women in bars likely isn't the most straightforward way to do so. You find that sentiment even from people like Tucker Max.

If your target is anxiety, you could go and take a massage workshop. There are also plenty of other options.

Replies from: None
comment by [deleted] · 2015-02-08T18:04:22.931Z · LW(p) · GW(p)

It was a way to include the middle, where we solve anxiety problems LW-style and don't completely alienate certain populations, despite what I think is some sort of participation anxiety on their part.

And I'd very much like it if the people who "engage" with it more "deeply" commented here. I have proper opinions regarding the matter and would like to have a discussion about it.

Lastly, in regards to cold approach, a friend of mine told me I need to visit the Red Light District because I'm cool with the guys but a bit dry with the girls. That's even more extreme than the cold approach, but at the same time I think they both flow in the same river. I do wonder if there is some truth in those kind of things.

Replies from: ChristianKl
comment by ChristianKl · 2015-02-08T21:00:23.712Z · LW(p) · GW(p)

I have proper opinions regarding the matter and would like to have a discussion about it. [...] I do wonder if there is some truth in those kind of things.

The second sentence looks like you don't have a formed opinion about the issue.

That's even more extreme than the cold approach, but at the same time I think they both flow in the same river.

I didn't criticize cold approaching for being extreme. That's a strawman, and the fact that you make it suggests that you don't have a "proper opinion" on the issue. You didn't engage with the issue of cold approaching producing anxiety.

comment by Gram_Stone · 2015-02-07T09:10:42.510Z · LW(p) · GW(p)

For a couple of days, I've been trying to explain to pinyaka why not all maximizers are reward maximizers. It's really forced me to flesh out my current understanding of AGI. I wrote the most detailed natural language explanation of why not all maximizers are reward maximizers that I could muster in my most recent reply, and just in case it still didn't click for him, I thought I'd prepare a pseudocode example since I had a sense that I could do it. Then I thought that instead of just leaving it on my hard drive or at the bottom of a comment thread, it might be a good idea to share it here to get feedback on how well I'm understanding everything. I'm not a programmer, or a computer scientist, or a mathematician or anything; I read a book about Python a few years ago and I read Superintelligence, so I have a feeling that I didn't quite get this right and I'd love to refine my model. The code's pretty much Python.

def generateTerminalValue():
    # Stub: whatever the agent terminally values goes here.
    TerminalValue = ...
    return TerminalValue

def generatePossibleWorldStates():
    # Stub: enumerate (or sample) the world states the agent could bring about.
    PossibleWorldStates = ...
    return PossibleWorldStates

def calculateExpectedUtility(TerminalValue, PossibleWorldStates):
    # Stub: map each possible world state to how well it satisfies the
    # terminal value, e.g. a dict of {world_state: utility}.
    ExpectedUtility = ...
    return ExpectedUtility

def selectOptimalWorldState(TerminalValue):
    PossibleWorldStates = generatePossibleWorldStates()
    ExpectedUtility = calculateExpectedUtility(TerminalValue, PossibleWorldStates)
    # Pick the world state with the highest expected utility.
    OptimalWorldState = max(PossibleWorldStates, key=lambda s: ExpectedUtility[s])
    return OptimalWorldState

def causeWorldState(OptimalWorldState):
    # Stub: act on the world to bring about the chosen state.
    ...

def main():
    TerminalValue = generateTerminalValue()
    OptimalWorldState = selectOptimalWorldState(TerminalValue)
    causeWorldState(OptimalWorldState)

main()

For paperclip maximizers:

def generateTerminalValue():
    TerminalValue = "MaximizePaperclips"  # placeholder standing in for a paperclip-counting goal
    return TerminalValue

For reward maximizers:

def generateTerminalValue():
    TerminalValue = "MaximizeRewardSignal"  # placeholder standing in for a reward-signal goal
    return TerminalValue

comment by [deleted] · 2015-02-06T15:49:59.929Z · LW(p) · GW(p)

Oh well, no need for a grand message or anything... I'm a plaintext fan anyway. I'm leaving LessWrong, for better or for worse, for what it's worth.

It seems like my presence isn't welcome here. But worry not, I will come back to haunt you in half a year or so. I'll post an update. It's going to be fun. Lots of downvotes, etc.

Also, one last thing: screw olivia and nshepperd who banned me from IRC. setnick sends his regards.

Seeya.