Open thread, 18-24 March 2014

post by David_Gerard · 2014-03-18T12:26:26.145Z · LW · GW · Legacy · 176 comments


If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

176 comments

Comments sorted by top scores.

comment by Metus · 2014-03-19T19:01:08.555Z · LW(p) · GW(p)

I want to thank this community for existing, all the people founding it, the people contributing to it and all the stuff linked here. I may not like all the topics or agree with all the opinions posted here. Nor may I find use for most of the stuff I read around here. But at least I don't feel so alone anymore.

That is all, thank you.

comment by Ben Pace (Benito) · 2014-03-18T18:15:20.200Z · LW(p) · GW(p)

When I hit discussion, it keeps automatically redirecting me to the 'top posts' even when I click back onto 'new'. Is anyone else getting this?

Replies from: Alejandro1, Oscar_Cunningham, jkrause, RowanE, None, None, Slackson
comment by Alejandro1 · 2014-03-18T23:40:36.661Z · LW(p) · GW(p)

Happens in Safari, not in Firefox.

Replies from: mstevens, Benito
comment by mstevens · 2014-03-19T15:35:55.363Z · LW(p) · GW(p)

I'm seeing the same problem in Chrome.

comment by Ben Pace (Benito) · 2014-03-19T05:59:45.041Z · LW(p) · GW(p)

Unfortunately, I use an iPad :(

comment by Oscar_Cunningham · 2014-03-18T19:48:16.148Z · LW(p) · GW(p)

Previous discussion in the last open thread: http://lesswrong.com/lw/jv4/open_thread_1117_march_2014/apgw

comment by jkrause · 2014-03-19T03:39:40.378Z · LW(p) · GW(p)

Yes, this happens to me in Windows, but not Ubuntu (both Chrome).

comment by RowanE · 2014-03-20T10:34:44.765Z · LW(p) · GW(p)

Am experiencing this with chrome on my phone, but did not notice earlier on my PC, even though that also uses chrome.

comment by [deleted] · 2014-03-18T19:33:52.778Z · LW(p) · GW(p)

Yes, very annoying.

comment by [deleted] · 2014-03-18T18:57:12.605Z · LW(p) · GW(p)

I used Chrome and it happened; when I use Firefox it doesn't.

comment by Slackson · 2014-03-18T18:39:31.329Z · LW(p) · GW(p)

Yep.

comment by Metus · 2014-03-19T19:04:40.611Z · LW(p) · GW(p)

How well do medical doctors fare in terms of health outcomes compared to people of similar socioeconomic status and family history? Is there a difference between research doctors and practising doctors? What about nurses? Is there a notable difference there too?

This question is posted within the context of "how big is the effect of medical knowledge on personal health?" and the assumption that medical doctors should represent the upper end of the spectrum. Other medical professionals should represent data points in between. Taken together, this should give some rough measure, in some kind of unit, of the personal benefit of medical knowledge.

Replies from: Jayson_Virissimo, TylerJay
comment by Jayson_Virissimo · 2014-03-19T21:42:18.291Z · LW(p) · GW(p)

This study seems to go quite a ways towards answering your question:

Among both U.S. white and black men, physicians were, on average, older when they died (73.0 years for white and 68.7 for black) than were lawyers (72.3 and 62.0), all examined professionals (70.9 and 65.3), and all men (70.3 and 63.6). The top ten causes of death for white male physicians were essentially the same as those of the general population, although they were more likely to die from cerebrovascular disease, accidents, and suicide, and less likely to die from chronic obstructive pulmonary disease, pneumonia/influenza, or liver disease than were other professional white men...

These findings should help to erase the myth of the unhealthy doctor. At least for men, mortality outcomes suggest that physicians make healthy personal choices.

-- Frank, Erica, Holly Biola, and Carol A. Burnett. "Mortality rates and causes among US physicians." American journal of preventive medicine 19.3 (2000): 155-159.

You may also find this worth checking into:

The doctors had a lower mortality rate than the general population for all causes of death except suicide. The mortality rate ratios for other graduates and human service occupations were 0.7-0.8 compared with the general population. However, doctors have a higher mortality than other graduates. The lowest estimates of mortality for doctors were for endocrine, nutritional and metabolic diseases, diseases in the urogenital tract or genitalia, digestive diseases and sudden death, for which the numbers were nearly half of those for the general population. The differences in mortality between doctors and the general population increased during the periods.

-- Aasland, Olaf G., et al. "Mortality among Norwegian doctors 1960-2000." BMC public health 11.1 (2011): 173.

EDIT: I added a second study and cleaned up the citations.

Replies from: Metus, pianoforte611
comment by Metus · 2014-03-19T22:04:58.729Z · LW(p) · GW(p)

This is spot on! And a great starting point for further research. Thank you.

comment by pianoforte611 · 2014-03-24T19:30:31.761Z · LW(p) · GW(p)

Gah! I had been deceived, thanks for clearing that up.

comment by TylerJay · 2014-03-20T15:36:27.581Z · LW(p) · GW(p)

Somewhat related: I remember reading an article claiming that doctors are more likely to opt out of life-prolonging treatment. It wasn't really well-cited, but it seemed like an interesting claim: that end-of-life hospital care is so bad that they would choose not to have it.

Link

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2014-03-21T13:09:23.931Z · LW(p) · GW(p)

Sorry for nitpicking, but don't you mean 'doctors are more likely to opt out'?

Replies from: TylerJay
comment by TylerJay · 2014-03-26T03:40:14.773Z · LW(p) · GW(p)

Yup, that's what I meant. Fixed. Thanks.

comment by mstevens · 2014-03-18T16:57:30.384Z · LW(p) · GW(p)

Is there a name for the situation where the same piece of evidence is seen as obviously supporting their side by both sides of an argument?

eg: New statistics are published showing ethic group X is committing crimes at 10 times the rate of ethic group Y.

To one side, this is obvious evidence that ethic group X are criminals.

To another side, this is obvious evidence the justice system is biased.

Both sides are totally opposed, yet see the same fact as proving they are right.

Replies from: Emile, David_Gerard, Kaj_Sotala, Dahlen, Plasmon
comment by Emile · 2014-03-18T22:07:12.784Z · LW(p) · GW(p)

Both sides are totally opposed, yet see the same fact as proving they are right.

If redheads are 10 times more likely to be in jail for violent crimes, it is evidence for both "redheads are violent" and "judges hate redheads" - and both might be true!

And "redheads are violent" and "judges hate redheads" are not totally opposed, they only look that way in a context where they are taken as arguments in support of broader ideologies who, them, are totally opposed (or rather, compete with each other so oppose each other).

More generally, many facts can be interpreted different ways, and if one interpretation is more favorable to one ideological side, that side will use that interpretation as an argument. Seen like that, facts looking like they support "opposite sides" seems almost inevitable.

comment by Kaj_Sotala · 2014-03-19T16:04:48.239Z · LW(p) · GW(p)

Confirmation bias.

Confirmation bias (also called confirmatory bias or myside bias) is the tendency of people to favor information that confirms their beliefs or hypotheses.[Note 1][1] People display this bias when they gather or remember information selectively, or when they interpret it in a biased way. The effect is stronger for emotionally charged issues and for deeply entrenched beliefs. People also tend to interpret ambiguous evidence as supporting their existing position. Biased search, interpretation and memory have been invoked to explain attitude polarization (when a disagreement becomes more extreme even though the different parties are exposed to the same evidence), belief perseverance (when beliefs persist after the evidence for them is shown to be false), the irrational primacy effect (a greater reliance on information encountered early in a series) and illusory correlation (when people falsely perceive an association between two events or situations).

Also more specifically attitude polarization:

Attitude polarization, also known as belief polarization, is a phenomenon in which a disagreement becomes more extreme as the different parties consider evidence on the issue. It is one of the effects of confirmation bias: the tendency of people to search for and interpret evidence selectively, to reinforce their current beliefs or attitudes.[1] When people encounter ambiguous evidence, this bias can potentially result in each of them interpreting it as in support of their existing attitudes, widening rather than narrowing the disagreement between them.[2]

comment by Dahlen · 2014-03-18T21:34:54.875Z · LW(p) · GW(p)

Apologies for the nitpick, but didn't you mean ethnic group?

Replies from: Randy_M, mstevens
comment by Randy_M · 2014-03-18T22:57:35.813Z · LW(p) · GW(p)

Everyone knows utilitarians are more likely to break rules.

(This is mostly a joke based on the misspelling. I know a sophisticated utilitarianism would consider the effect of widespread lawbreaking and not necessarily break laws so much as to be overrepresented in prison)

Replies from: Alejandro1
comment by Alejandro1 · 2014-03-18T23:42:55.404Z · LW(p) · GW(p)

I don't know if you intended your disclaimer to be funny, but I found it funnier than the original joke.

comment by mstevens · 2014-03-19T12:24:46.198Z · LW(p) · GW(p)

I did actually mean ethnic group, but now that I see my typo I'm actually quite liking it this way, as it's less likely to trigger real-world connotations.

comment by Plasmon · 2014-03-18T17:33:28.588Z · LW(p) · GW(p)

You know what they say: one man’s Modus Ponens is another man’s Modus Tollens

Replies from: Gurkenglas
comment by Gurkenglas · 2014-03-18T19:38:49.376Z · LW(p) · GW(p)

That's only once you reformulate grandfather's scenario as "If the justice system is unbiased, racism is justified." It surprises me that father would cut grandfather's class along its joints... can mstevens think up examples of his class not covered by father, or non-examples covered by father?

comment by fubarobfusco · 2014-03-19T22:47:27.229Z · LW(p) · GW(p)

I have sometimes seen arguments that fit this pattern, including on Less Wrong —

Your disagreement with me on a point of meta-level theory or ideology implies that you intend harm to me personally, or can't be trusted not to harm me if the whim strikes you to do so.

It seems to me that something is deficient or abusive about many arguments of this form in the general case, but I'm not sure that it's always wrong. What are some examples of legitimate arguments of this form?

(A point of clarification: The "meta-level theory or ideology" part is important. That should match propositions such as "consequentialism is true and deontology is false" or "natural-rights theory doesn't usefully explain why we shouldn't hurt others". It should not match propositions such as "other people don't really suffer when I punch them in the head" or "injury to wiggins has no moral significance".)

Replies from: Viliam_Bur, ChristianKl
comment by Viliam_Bur · 2014-03-20T17:05:21.319Z · LW(p) · GW(p)

One mistake is overestimating the probability that the other person will act on their ideology.

People compartmentalize. For example, in theory, religious people should kill me for being an unbeliever, but in real life I don't expect this from my neighbors. They will find an excuse not to act according to the logical consequences of their faith; and most likely they will not even realize they did this.

(And it's probably safest if I stop trying to teach them to decompartmentalize. Ethics first, effectiveness can wait. I don't really need semi-rational Bible maximizers in my universe.)

comment by ChristianKl · 2014-03-20T10:06:39.698Z · LW(p) · GW(p)

I don't think the problem with such arguments is so much that they are wrong on a factual basis, but that they prevent the discussion of some important ideas.

A feminist can argue that ze can measure how biased people are with an implicit bias test, and that the argument you are making is going to make the average reader more biased. Ze might be completely right, but that doesn't mean that your argument is wrong on a factual level.

Once you move to the political consideration that certain things are not allowed to be said because they support harmful memes, you are in danger of getting mind-killed and being left with a world view that doesn't allow you to make good predictions about reality.

comment by VAuroch · 2014-03-19T03:19:05.130Z · LW(p) · GW(p)

The missing airplane story seems like an opportunity for prediction on par with the Amanda Knox trial.

Replies from: solipsist, solipsist
comment by solipsist · 2014-03-19T20:52:33.386Z · LW(p) · GW(p)

My heart sinks whenever I see these sorts of discussions on LessWrong.

comment by solipsist · 2014-03-19T20:55:57.822Z · LW(p) · GW(p)

My heart sinks whenever I see these sorts of discussions on LessWrong. Could we analyze something we expect to be verifiable?

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2014-03-19T21:24:29.125Z · LW(p) · GW(p)

My heart sinks whenever I see these sorts of discussions on LessWrong. Could we analyze something we expect to be verifiable?

Making verifiable predictions about a missing airplane doesn't seem all that difficult to me; what am I missing? For instance, is there something wrong with this one?

comment by ArisKatsaris · 2014-03-18T22:08:26.225Z · LW(p) · GW(p)

What happened to slatestarcodex and does anyone know if it's just temporary or something to be concerned about?

Replies from: Yvain, ThisSpaceAvailable
comment by Scott Alexander (Yvain) · 2014-03-19T00:19:42.003Z · LW(p) · GW(p)

My hosting company got annoyed because something was taking up too many resources. I did what the nice person on the telephone suggested (installed some WordPress plugins, uninstalled others) and it's back online now. If the problem recurs I might have to restrict commenting for a while until I can figure out a more permanent solution, but for now everything's fine.

comment by ThisSpaceAvailable · 2014-03-18T23:10:30.372Z · LW(p) · GW(p)

I checked the site, and got a 403. Is that what you're talking about? When did you first notice it? The latest Google cache is from Mar 18, 2014 17:37:21 GMT.

http://webcache.googleusercontent.com/search?q=cache:http://slatestarcodex.com/#

Slate Star Codex

Peripherally associated with the Gwernosphere

What Universal Human Experiences Are You Missing Without Realizing It?

Posted on March 17, 2014 by Scott Alexander

Remember Galton’s experiments on visual imagination? Some people just don’t have it. And they never figured it out. They assumed no one had it, and when people talked about being able to picture objects in their minds, they were speaking metaphorically.

And the people who did have good visual imaginations didn’t catch them. The people without imaginations mastered this “metaphorical way of talking” so well that they passed for normal. No one figured it out until Galton sat everyone down together and said “Hey, can we be really really clear about exactly how literal we’re being here?” and everyone realized they were describing different experiences.

etc.

comment by mstevens · 2014-03-19T15:39:25.077Z · LW(p) · GW(p)

Any tips on bailing out of an argument if you want to very nearly concede the whole thing without quite saying your opponent is right?

eg if you realise the whole conversation was a terrible mistake and you're totally unequipped to have the conversation, but still think you're right.

Should you just admit they're right for simplicity even if you're not quite convinced?

Replies from: Metus, Ben_LandauTaylor, someonewrongonthenet, ChristianKl
comment by Metus · 2014-03-19T18:59:01.601Z · LW(p) · GW(p)

I state the truth: "I tend to get too attached to my opinion in live debates and want to think about your arguments in peace."

The people that get offended by this tend not to be the kind of people I want to associate with anyway.

comment by Ben_LandauTaylor · 2014-03-20T06:40:03.464Z · LW(p) · GW(p)

"Good point. I'll think about that when I have the chance."

comment by someonewrongonthenet · 2014-03-23T23:38:55.965Z · LW(p) · GW(p)

"I'm not really convinced by your argument, but I need to learn more about this issue before I can speak coherently on it"

comment by ChristianKl · 2014-03-19T15:50:35.334Z · LW(p) · GW(p)

"I think our conversation raised a lot of interesting points, and I think all the interesting stuff has been said. How about we switch topics?"

Replies from: mstevens
comment by mstevens · 2014-03-19T15:54:19.184Z · LW(p) · GW(p)

That works as a neutral "let's move on". I sort of want a feeling of conceding more (but not totally) though.

Replies from: Lumifer
comment by Lumifer · 2014-03-19T16:10:43.124Z · LW(p) · GW(p)

How about "I understand the points you're making, let me think more about them"..?

comment by MathiasZaman · 2014-03-19T00:08:07.779Z · LW(p) · GW(p)

The concept of heroic responsibility seems to be off-putting for some people, mostly because it looks like it puts the blame for every single bad thing at the feet of an individual. Generally, I've answered this objection by telling them that they don't need to look that broadly and that they can apply the concept at a smaller, everyday scale. So instead of worrying about solving depression forever, you can worry about making sure a friend gets the psychological help they need and not telling yourself things like: "It's their parent's/partner's/doctor's responsibility that they get proper help."

Is this a correct way to explain the concept or am I strongly misrepresenting it?

Replies from: Viliam_Bur, kalium, Squark, ChristianKl
comment by Viliam_Bur · 2014-03-19T08:37:21.164Z · LW(p) · GW(p)

Maybe it's not a problem with explaining the concept per se, it's just that its consequences are unpleasant. It feels like you are telling people that heroic responsibility is one of the possible choices, one that they didn't make, but could have made, and perhaps even should have made. -- There probably are good reasons why most people don't take heroic responsibility, but these are difficult to explain. So it's easier to pretend that the whole concept does not make sense to you.

Also, it's not my responsibility to understand the concept of heroic responsibility. :D

EDIT: It may be related to the status-regulation emotion that some people apparently feel very strongly, and that others don't even know exists. The problem with "heroic responsibility" might simply be the emotional reaction of: "Who do you think you are that you even consider taking more responsibility than other people around you?! That is a task worthy of a king; and you obviously aren't one. And you try to explain it to me, but I am also not a king; I don't even pretend to be, so... this whole stuff doesn't make any sense. You must be insane."

Replies from: MathiasZaman
comment by MathiasZaman · 2014-03-20T10:07:04.376Z · LW(p) · GW(p)

It seems most people don't feel good about being considered personally responsible for all the bad things in the world. Especially people who already suffer from anxiety of some kind.

But it's a worthwhile thing to know about, even in everyday life. I work at a homeless shelter at the moment, and I've occasionally gone out of my way to help people because I knew about heroic responsibility. Even if I'm not tackling homelessness as a general problem, it has still helped me become a better me.

comment by kalium · 2014-03-25T04:51:00.990Z · LW(p) · GW(p)

This seems completely incompatible with the actual concept, but certainly more palatable.

The problem with the concept as a whole is that it imposes an impossible requirement -> I will be maximally guilty whatever I do -> why even bother doing anything. Humans (with rare exceptions) just aren't built such that heroic responsibility works for them. If I'm only responsible for close relatives and friends plus some limited charity, I can actually fulfill my responsibilities so there's a reason to try, and so unheroic responsibility is a better model to live by unless you want to impress LWers.

comment by Squark · 2014-03-23T19:18:45.655Z · LW(p) · GW(p)

The way I would put it: Doing the right thing is hard. It doesn't mean one should give up without trying. Also, something can be done better even if it's not done perfectly.

comment by ChristianKl · 2014-03-19T15:34:36.608Z · LW(p) · GW(p)

In what contexts do you try to convince other people of heroic responsibility? Why do you want to frame it that way?

I think the concept comes on LW from HPMOR. Specifically from:

"You could call it heroic responsibility, maybe," Harry Potter said. "Not like the usual sort. It means that whatever happens, no matter what, it's always your fault. Even if you tell Professor McGonagall, she's not responsible for what happens, you are. Following the school rules isn't an excuse, someone else being in charge isn't an excuse, even trying your best isn't an excuse. There just aren't any excuses, you've got to get the job done no matter what."

Most people reject that kind of responsibility. It's no accident that the person who wrote those lines is on a quest to save the world.

It is also in some sense quite telling that the character who speaks those lines is a child who hasn't learned the rules about what is and what isn't his business. Taking responsibility for the life of another is invasive, and if you look at the story, Hermione isn't that happy that Harry tries to be the prime hero.

Half a year ago I sat in a café and had a conversation about personal development. As "collateral damage", the words I spoke brought up a deep personal issue in a stranger next to me, and the person sort of angrily started an interaction with me. I went into a direct 10-minute NLP intervention.

Hopefully it helped, but I didn't give the person my contact details afterwards to prolong the interaction; I just told him to seek help elsewhere. That particular interaction burned me out, and that was okay because the other person started it.

If, however, I'm sitting on public transportation and the person next to me is crying after ending a telephone conversation, I don't take it as my responsibility to fix the issue.

There are days where I might do a bit on a nonverbal level, but I wouldn't impose myself into the situation by speaking words. I could try to play the role of a hero, but I often choose against it, and that's fine.

When it comes to psychological help for friends, I think it's good to offer it. It's good to explain to someone which choices are available to them. On the other hand, everybody has a right to feel bad, and if a friend wants to feel bad and/or not be helped by me, then it's not my business to break through and make him feel well.

That also means that if people around you don't want to take responsibility, don't force it on them. It's often much better to lead by example. Tell others stories about how you feel great because you made a difficult decision to practice heroic responsibility. Bonus points for picking stories that are not out of reach for your audience ;)

Replies from: MathiasZaman
comment by MathiasZaman · 2014-03-20T10:09:18.932Z · LW(p) · GW(p)

In what contexts do you try to convince other people of heroic responsibility? Why do you want to frame it that way?

I'm not exactly trying to convince them, just trying to explain the concept. It's something that occasionally comes up when you mention Less Wrong somewhere on the internet. "Less Wrong, aren't those the guys that think you are personally responsible for children dying in Africa?"

That also means that if people around you don't want to take responsibility, don't force it on them. It's often much better to lead by example. Tell others stories about how you feel great because you made a difficult decision to practice heroic responsibility. Bonus points for picking stories that are not out of reach for your audience ;)

This looks like good advice. Thank you.

Replies from: ChristianKl
comment by ChristianKl · 2014-03-20T10:25:55.586Z · LW(p) · GW(p)

"Less Wrong, aren't those the guys that think you are personally responsible for children dying in Africa?"

In those cases, it's useful to explain the advantages of that mindset. Knowing you saved a child in Africa from dying of malaria feels really great. It makes you feel powerful and gives you a sense of agency.

Happiness research shows that giving to other people often makes you more happy than buying possessions for yourself.

Replies from: fubarobfusco, kalium
comment by fubarobfusco · 2014-03-21T16:29:35.305Z · LW(p) · GW(p)

Happiness research shows that giving to other people often makes you more happy than buying possessions for yourself.

To what extent has this been shown when you will never meet or hear directly from the recipients of your gift?

comment by kalium · 2014-03-25T04:43:56.514Z · LW(p) · GW(p)

Um. The one time I donated to a charity (as a child), I immediately felt terrible guilt. My family was poor at the time, and I realized my parents might have needed those $300 of saved-up allowance. When I save money, I reduce the risk that I will be a burden to those close to me, and that's really fucking valuable.

comment by aubrey · 2014-03-20T12:51:12.139Z · LW(p) · GW(p)

Laser eye surgery (LASIK) has been suggested by several people on LessWrong as a costly procedure with a high likelihood of improving your life. I do not think this is a good trade-off across a lifetime, because of presbyopia.

Almost all humans experience presbyopia. This is age-related deterioration in the ability of the eye to adjust focus. Historically, the biggest effect for most people has been a reduced ability to read, but now it also affects the ability to use computers.

If you have myopia (short sight), you cannot see distant objects without distance glasses. However, myopia means that you will retain the ability to focus at close distance for longer as presbyopia develops.

So LASIK surgery is not only trading money for better distance vision: if it works, you get better distance vision but also you get worse close vision as presbyopia develops.

How long will you live in each condition?

According to last year's survey, the mean age is 27.4 (stdev 8.5). Presbyopia usually develops from age 40. Let us disregard the possibility of uploading, because the mean date given for the singularity is 2150. The remaining life expectancy of a 27-year-old US male is about 50 years. This is likely an underestimate: it is a period life expectancy, not a cohort life expectancy, and we do expect future improvements in longevity. It is also a population average, and LessWrongers are smarter and better educated than average, which is associated with longer life. We will nonetheless use it for illustration.

So, very roughly, the trade for an average LessWronger considering LASIK is reduced need for glasses for 13 years against increased need for glasses for 37 years, or longer. Also, I guess that most LessWrongers with myopia spend much more time reading and using computers than doing tasks that require distance vision. I also guess they value those activities more.
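
In code form, the rough arithmetic (an illustrative sketch only; the inputs are the survey and life-table figures above, and the text rounds the results):

```python
# Illustrative back-of-envelope arithmetic for the trade-off above.
mean_age = 27.4           # mean LW age from last year's survey
presbyopia_onset = 40.0   # typical age at which presbyopia develops
remaining_life = 50.0     # further life expectancy of a ~27-year-old US male

years_better_distance_vision = presbyopia_onset - mean_age                 # ~13 years
years_worse_close_vision = (mean_age + remaining_life) - presbyopia_onset  # ~37 years
print(years_better_distance_vision, years_worse_close_vision)              # 12.6 37.4
```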

Therefore this does not seem a good trade to me even if it were free of cost and risk.

Have I made a mistake? Does anyone have more data? I know old people with presbyopia and I know young people with LASIK, but I do not know anyone who has both. I guess some people on LessWrong have had LASIK, but very few or none have presbyopia, so I expect no people here with both. I guess that in universities there are faculty members who have both, so I will ask on my next visits. Sometimes it even seems that all faculty members have myopia!

Replies from: Lumifer, wadavis
comment by Lumifer · 2014-03-20T14:40:53.636Z · LW(p) · GW(p)

The need for glasses is a binary variable -- either you need them or you don't.

Someone with presbyopia always needs glasses because he can't focus both near and far. It doesn't matter whether that person started with good eyesight, or with myopia, or had myopia corrected by Lasik -- he will need glasses.

You seem to think that presbyopia "corrects" myopia; that is not so. In geometric terms, myopia "translates" your zone of focus, shifting it closer to you so that it doesn't include infinity any more. But presbyopia narrows your zone of focus, contracts it. You don't get far vision back by overlaying myopia with presbyopia.

Replies from: aubrey
comment by aubrey · 2014-04-27T16:26:32.429Z · LW(p) · GW(p)

Sorry, I was not clear. I do not think that presbyopia corrects myopia. It even makes things worse at distance. But at close range, myopia can offset the effect of presbyopia.

As presbyopia narrows your zone of focus, you can not focus as close as previously. If you have myopia, you can focus at much closer distances than people with normal vision. Before presbyopia, this is not much use. When presbyopia develops, you have more close vision to spare, and can still read a book or a computer screen when people with normal vision would need reading glasses.

I will give a simplified example. A person has mild myopia and needs a correction of -2.50 dioptres to focus at infinity. They have successful LASIK surgery which gives them normal vision. They develop severe presbyopia, and require a correction of +2.50 dioptres to be able to focus at a comfortable reading distance. They now need reading glasses. If they had not had LASIK surgery, they would need distance glasses but would not need reading glasses: they can gain the correction of +2.50 dioptres by taking off their distance glasses.

This is why I say, "[with successful LASIK] you get better distance vision but also you get worse close vision as presbyopia develops." What is more important may vary from person to person.

I have spoken to an optician about this, and she mostly confirmed it. This is only an N=1 argument from authority though! She agreed, of course, that people with myopia and presbyopia can get good close vision by taking off their distance glasses. They do not need reading glasses, unless they wear contact lenses or have had LASIK. However, I must say that she did not think this would be a reason for most young people not to have LASIK. She said she would certainly not advise LASIK for people with presbyopia or who would likely have it soon. She also said that for people who wear contact lenses for myopia and then get presbyopia, she suggests under-correction in one eye, so that they have good distance vision in one eye and good close vision in the other.

Need for glasses is not a binary variable. It has more states than that. It must at least include 'need distance glasses' and 'need reading glasses'.

comment by wadavis · 2014-03-24T16:37:58.808Z · LW(p) · GW(p)

Please keep us updated on your findings and decisions.

I'm also looking at a cost-benefit analysis of LASIK and watching the reports on early adopters. A high chance of an improved quality of life versus a small chance of significantly reduced quality of life. So this is what risk aversion feels like from the inside.

Replies from: aubrey
comment by aubrey · 2014-04-27T16:31:08.702Z · LW(p) · GW(p)

I have spoken to an optician, as described in my reply to the other message in this thread.

I have also asked at a computer science department in a European university. It was funny! Everybody in the department except some grad students had myopia. Many of the older faculty also had myopia. But nobody had LASIK. Sorry, I did not count properly; my visit was for another reason. However, I can say the most common mode of vision correction was glasses worn for distance vision and taken off for reading or computer work. There were also some people with varifocal glasses, and some people with contact lenses for distance vision and reading glasses for reading or computers.

comment by Gunnar_Zarncke · 2014-03-20T10:24:56.877Z · LW(p) · GW(p)

[LINK] Sleep loss can cause brain damage (permanently lost neurons, at least in mice). Even though the study is only about mice, it nonetheless provides references to more general results:

http://www.uphs.upenn.edu/news/News_Releases/2014/03/veasey/

comment by Alejandro1 · 2014-03-24T00:05:43.159Z · LW(p) · GW(p)

Scott Aaronson reviews Max Tegmark's book on the Mathematical Universe hypothesis. Tegmark responds in the comments, with an interesting and still ongoing back-and-forth.

comment by CoffeeStain · 2014-03-22T23:54:14.858Z · LW(p) · GW(p)

I have a friend with Crohn's Disease, who often struggles with the motivation to even figure out how to improve his diet in order to prevent relapse. I suggested he should find a consistent way to not have to worry about diet, such as prepared meals, a snack plan, meal replacements (Soylent is out soon!), or dietary supplement.

As usual, I'm pinging the rationalists to see if there happens to be a medically inclined recommendation lurking about. Soylent seems promising, and doesn't seem the sort of thing that he and his doctor would have even discussed. My impression of his doctor consultations is that they run something along the lines of "You should track your diet according to these guidelines, and try to see what causes relapse" rather than "Here's a cure-all solution not entirely endorsed by the FDA that will solve all of your motivational and health problems in one fell swoop." For my friend, drilling into sweeping diet changes and tracking seems like an insurmountable challenge, especially with the depression caused by simply having the disease.

I'd like to be able to purchase something for him that would let him go about his life without having to worry about it so much. Any ideas on whether Soylent could be the solution, in particular as to its potential for Crohn's?

Replies from: spqr0a1, tut
comment by spqr0a1 · 2014-03-24T07:36:38.098Z · LW(p) · GW(p)

Consider helminthic therapy. Hookworm infection down-regulates bowel inflammation and my parasitology professor thinks it is a very promising approach. NPR has a reasonably good popularization. Depending on the species chosen, one treatment can control symptoms for up to 5 years at a time. It is commercially available despite lack of regulatory approval. Not quite a magic bullet, but an active area of research with good preliminary results.

comment by tut · 2014-03-23T12:33:21.800Z · LW(p) · GW(p)

There is no known "cure-all solution not entirely endorsed by the FDA that will solve all of your motivational and health problems in one fell swoop." A lot of people with Crohn's seem to get some benefit from changing their diet. But the conclusions they draw always seem to contradict each other, and in general the improvements are temporary. What it looks like to me (and to at least one person with her own experience of the problem) is that radically changing your diet every few years is what you need to do.

comment by Ritalin · 2014-03-22T23:33:02.776Z · LW(p) · GW(p)

Has anyone ever tried writing rationalist fiction of The Sandman? It's a world that explicitly runs on storytelling patterns, but surely something can be done that illustrates the merits of rational thought even in such a setting. Rationalists should win and adapt to the circumstances, even if those circumstances are a dreamscape.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2014-03-23T07:38:25.427Z · LW(p) · GW(p)

Neil Gaiman himself, sort of. In his take on the Marvel Universe, the genius scientist character has figured out the world runs on storytelling logic instead of mechanical science and that he won't ever be able to permanently change his friend turned into a superhuman rock monster back into a human since "guy permanently turned into superhuman rock monster" makes a better story element than "guy who was a superhuman rock monster but is all better now".

For the current fic writers, I don't see it working. Rationalist fiction writers range from middling to terrible in skill compared to traditional fiction writers at the top of their game like Gaiman, and trying to do this would basically be trying to one-up Gaiman in his own game at his home field. Not to mention that his world-building is a lot more self-aware already than the perennial nerd culture favorite soft targets like Star Wars or Harry Potter, like the Marvel 1602 example shows, so you wouldn't have the nice obvious stuff to work with.

Replies from: Ritalin
comment by Ritalin · 2014-03-23T12:54:46.746Z · LW(p) · GW(p)

@First bit; that's Discworld-grade brilliant, but does the knowledge spread, as it does in Discworld where everyone is Genre Savvy, or is Reed Richards still Useless? Of course, the problem with narrativium is that attempting to take advantage of it is likely to bite you back, but is it better than not knowing?

My very first attempt at writing, waaay back in 2007, was a story about a guy who was thrust into a Narrativium-based world, which was kind of like an immense, live Let's Play for the entertainment of a bunch of True Fae children (about three centuries old). Grant Morrison's Action Comics come to mind. He'd be thrust into different genres and different roles, and he'd have to start thinking very meta, very fast, if he wanted to survive each story. He also wanted to get the damned show cancelled without giving a bad performance that would give the Showrunner a reason to fire him (literally), leading to many Springtime For Hitler moments, much to his frustration (sort of like Hideo Kojima and Metal Gear, or Hideaki Anno and Evangelion...), but he also starts to take pride in his work (think Walter White and his blue meth)... At some point, he'd become aware that the way things made "storytelling sense" rather than "logical sense" also extended to the world outside the game. And then we'd have an Animal Man meets Grant Morrison type of meeting.

Most of the references I'm making here are retroactive; I had the idea well before I came in contact with them, but they're handy for condensing. Another work I'd compare this to would be GAINAX's Abenobashi Mahou Shoutengai. In fact, now that I think of it, it might be better to start the story as an Ontological Mystery, instead of having his slavery revealed to him right away as I initially thought.

Anyway, my point here is to say that, almost as soon as I wrote the first chapter, I went on hiatus, because I was keenly aware that I was biting off much, much more than I could chew. Even now, I don't think I have remotely what it takes to plan out and pull off such a project. Designing locomotives is such easy work in comparison; all you need is patience and method.

his world-building is a lot more self-aware

It's extremely Post-Modern and illogical in every sense. It is awesome.

comment by ChristianKl · 2014-03-19T15:47:59.149Z · LW(p) · GW(p)

What have been the most useful LW meetup activities you have participated in so far?

Replies from: MathiasZaman, polymathwannabe
comment by MathiasZaman · 2014-03-19T23:28:08.505Z · LW(p) · GW(p)

Joining the Habit RPG party.

While the local meetup every month has been amazing every time I went, it didn't have that much impact on my everyday life. Joining Habit RPG definitely has.

comment by polymathwannabe · 2014-03-19T16:11:18.135Z · LW(p) · GW(p)

Last Friday I joined the tinychat study hall for the first time, and I recommend it very much.

comment by DataPacRat · 2014-03-18T12:52:48.166Z · LW(p) · GW(p)

Priming can nudge one's thoughts in certain directions; fashion can nudge others'.

It's easy enough to try priming abstract, rational, far thinking with cool blue colours and Mozart, and by surrounding oneself with books... but is there any data on scents that nudge people's modes of thinking in similar directions? Failing that, is there anecdata?

Replies from: ChristianKl
comment by ChristianKl · 2014-03-18T13:40:50.406Z · LW(p) · GW(p)

It's easy enough to try priming abstract, rational, far thinking with cool blue colours and Mozart, and by surrounding oneself with books...

I wouldn't use the word rational in that place. Taking action instead of spending a lot of energy in mental analysis is often rational.

but is there any data on scents that nudge peoples' modes of thinking in similar directions? Failing that, is there anecdata?

I would guess that the smell of a library with a lot of books might be able to have an effect for people who spent a lot of time in libraries.

Replies from: DataPacRat
comment by DataPacRat · 2014-03-18T20:52:00.224Z · LW(p) · GW(p)

Taking action instead of spending a lot of energy in mental analysis is often rational.

In some cases, yes; but people tend to be willing to take action all on their own quite often anyway. I'm hoping to find something that nudges in the direction of far-mode thinking for those times when it /is/ appropriate to stop and think.

Replies from: ChristianKl
comment by ChristianKl · 2014-03-18T21:48:52.517Z · LW(p) · GW(p)

In some cases, yes; but people tend to be willing to take action all on their own quite often anyway.

A lot of people on LW have akrasia problems. Having things in their environment that bring them more into their heads wouldn't be beneficial.

I would profit more from an environment that primes me to take action than from one that makes me think more about what I'm doing.

Be aware of the tradeoff you are making.

comment by Tenoke · 2014-03-25T10:02:59.957Z · LW(p) · GW(p)

Can you please add an open_thread tag?

comment by Metus · 2014-03-19T23:37:22.223Z · LW(p) · GW(p)

Crapshot: Say I have some kind of data per country and I want to use Python or other FOSS tools to plot it on a good-looking map at the country level. Is there a good tutorial for this? I ask because I can do virtually anything else with Python, like data manipulation and analysis or plots, so it'd be nice to do this with Python too.

Replies from: bramflakes, Manfred, dougclow, sixes_and_sevens
comment by bramflakes · 2014-03-21T00:03:37.190Z · LW(p) · GW(p)

I know it's not FOSS or Python, but Google docs has exactly this feature built in to its spreadsheet application.

comment by Manfred · 2014-03-23T04:39:41.384Z · LW(p) · GW(p)

Looks like this might be what you want.

comment by dougclow · 2014-03-20T13:54:50.936Z · LW(p) · GW(p)

R is free & open source, and widely used for stats, data manipulation, analysis and plots. You can get geographical boundary data from GADM in RData format, and use R packages such as sp to produce charts easily.

Or at least, as easily as you can do anything in R. I hesitate to suggest it to people who already do data work in Python (it's less ... clean) but in this sort of domain it can do many things easily that are much harder or less commonly done in Python. My impression is the really whizzy, clever stats/graphics stuff is still all about R. (See e.g. this geographic example.) There are many tutorials, some of them very good in parts, but it's famously slippery to get to grips with.

More on spatial data in R. You can also get a long way with the maps and mapdata packages.

Replies from: Metus
comment by Metus · 2014-03-20T15:47:57.226Z · LW(p) · GW(p)

I know about R. In fact I switched from R to Python because R is less ... clean. It looks like I will have to use R for plotting, though the rest of the stack will be in Python.

Those maps look gorgeous!

comment by sixes_and_sevens · 2014-03-20T10:39:59.781Z · LW(p) · GW(p)

You might want to look at basemap for matplotlib.

Disclaimer: I haven't used this (though I might start), but skimming over the synopses it looks like it will do what you want it to.
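
For what it's worth, a rough, untested sketch of how colouring countries by a metric might look with basemap. The shapefile path and the 'NAME' attribute key below are assumptions about a downloaded Natural Earth "admin 0 - countries" shapefile; those details depend on the data you actually use.

```python
# Untested sketch: choropleth-style colouring of countries with basemap.
# Assumes a Natural Earth admin-0 countries shapefile has been downloaded,
# and that its attribute table has a 'NAME' field (both are assumptions).
import matplotlib.pyplot as plt
from matplotlib import cm, colors
from matplotlib.patches import Polygon
from mpl_toolkits.basemap import Basemap

values = {'Germany': 0.8, 'France': 0.3}  # your per-country metric goes here

fig, ax = plt.subplots(figsize=(12, 6))
m = Basemap(projection='robin', lon_0=0, ax=ax)
m.drawmapboundary()

# readshapefile exposes the polygons as m.units and their attributes as m.units_info
m.readshapefile('ne_110m_admin_0_countries', 'units', drawbounds=True, linewidth=0.2)

norm = colors.Normalize(vmin=min(values.values()), vmax=max(values.values()))
for info, shape in zip(m.units_info, m.units):
    value = values.get(info['NAME'])  # skip countries we have no data for
    if value is None:
        continue
    ax.add_patch(Polygon(shape, facecolor=cm.Blues(norm(value)), edgecolor='0.3'))

plt.show()
```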

Replies from: Metus
comment by Metus · 2014-03-20T15:46:18.105Z · LW(p) · GW(p)

Thanks for the reply. Basemap looks like what I want but it is not. It is surprisingly easy to plot arbitrary data on a world map but I didn't manage to e.g. colour the countries of the world by some metric. If you look into basemap, please keep me posted.

P.S.: There is some tutorial that shows how to colour Italy's regions separately, but I did not manage to colour in the whole world.

comment by DataPacRat · 2014-03-18T15:53:02.304Z · LW(p) · GW(p)

What does Everett Immortality look like in the long term?

The general idea of EI is that there is always some small chance you will survive in any given situation, so there will be some multiverse timelines whose present is the same as your present, but in which you keep on living indefinitely. However, some forms of survival are a lot more likely than others; eg, it's a lot more likely that my cryonically-preserved brain will be scanned and turned into an AI than that a copy of my brain will spontaneously appear out of nothingness. Thus, it makes sense to plan around the most likely sorts of scenarios, and not to bother doing much planning for the least likely ones.

But thinking /very/ long term, to the heat death of the universe... every form of negentropy is going to end up exhausted, with no more energy gradients that life and intelligence could use to survive; meaning that however extended a life might be, there will be some point at which all of a person's futures eventually fade away...

... or maybe not. Thermodynamic miracles - events violating ordinary statistics - will, in the long term, happen every so often... so might it be possible for some form of life in that era to rely on them as the last available source of negentropy? Which forms of TMs occur most often, and could most reliably be 'fed' from? How often do they occur, compared to the potential stability of patterns of matter-energy at this time-scale?

Replies from: Risto_Saarelma, None, James_Miller, shminux
comment by Risto_Saarelma · 2014-03-20T07:17:50.463Z · LW(p) · GW(p)

You're assuming some sort of pattern theory of identity when you consider uploads a potential form of survival. If you go all-out pattern theory of identity and assume we're in a big world, is there a reason why the subjectively subsequent moments of awareness need to actually take place at increasing time points on the universe's timeline? A state of matter that corresponds to your pattern's subjective t + 1 might have occurred at the universe's t - 10000 at some distant light cone. If your mind stays at any finite size, it'll eventually just end up going over the same states again, so you could just get an unbounded subjective experience timeline inside a fixed timeslice of a spatially infinite, temporally finite universe.

comment by [deleted] · 2014-03-18T17:27:43.680Z · LW(p) · GW(p)

.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-03-19T08:30:32.458Z · LW(p) · GW(p)

If the ‘afterlife’ is infinite, then it will have infinitely more integral measure than the normal life.

Infinite as in "if you succeed in making it into situation X, you are guaranteed to live forever", or merely potentially infinite, as in "for every situation X where you are alive, you will survive it in some Everett branch" (in other words, you never run out of quantum immortality)? In the latter version, the integral of the 'afterlife' may still be smaller than the integral of 'normal life'.

Replies from: None
comment by [deleted] · 2014-03-19T15:46:13.496Z · LW(p) · GW(p)

Good point.

During a person's 'normal' life the number of Everett branches containing that person approaches infinity. The way mortality currently works is that there's a certain probability that you will die during each year; let's say it's 0.01 when you're 20. That fraction of Everett branches gets "eliminated" each year. This probability of dying increases each year, until it approaches 1 when you're close to the age of 120. Let's ignore life-extending technologies. In the Copenhagen interpretation, the probability that you're alive after the age of 120 is effectively zero. In MWI there are a few branches that survive beyond this, some of these for very long, potentially forever. So I agree with you that the integral of branches during a person's normal life is probably greater than that of the smaller number of branches that survive almost forever. This is true even if the number of branches or the length of them is infinite; didn't Cantor prove that there are different-sized infinities?

Is this what you were after? I'm a bit confused. Tell me if I made any mistakes.

comment by James_Miller · 2014-03-18T17:21:57.632Z · LW(p) · GW(p)

As Max Tegmark mentioned on this Rationally Speaking podcast, quantum immortality might only work if the universe is infinite.

Replies from: DataPacRat
comment by DataPacRat · 2014-03-18T17:34:29.900Z · LW(p) · GW(p)

I don't have bandwidth for a podcast just now; so 'infinite' in what direction? If the number of MWI timelines can be divided infinitely, then that seems like it would suffice, even if the universe is finite in many other ways.

Replies from: James_Miller
comment by James_Miller · 2014-03-18T20:14:52.859Z · LW(p) · GW(p)

As I recall, he doesn't believe the universe is infinite in any direction.

Replies from: DataPacRat
comment by DataPacRat · 2014-03-18T20:50:07.138Z · LW(p) · GW(p)

Did he give any reasoning for that belief? Eg, does assuming non-infinitesimal worldlines improve the predictions of the interference of double-slit style experiments?

Replies from: Luke_A_Somers, James_Miller
comment by Luke_A_Somers · 2014-03-19T17:54:14.764Z · LW(p) · GW(p)

Certainly not the latter.

If there were any perceptible grain to them, we'd be about a picosecond from the abrupt end of the universe-as-we-know-it.

comment by James_Miller · 2014-03-18T20:53:20.773Z · LW(p) · GW(p)

Again from what I recall: scientists have not found any evidence of infinities, math incompleteness problems go away without infinities, and computer physics models work even though computers have finite memories.

comment by shminux · 2014-03-18T17:33:18.326Z · LW(p) · GW(p)

Quantum immortality is a poor atheist's immortal soul.

Replies from: Luke_A_Somers, None
comment by Luke_A_Somers · 2014-03-19T17:51:04.271Z · LW(p) · GW(p)

That's the opposite of comforting.

Replies from: shminux
comment by shminux · 2014-03-19T18:27:24.801Z · LW(p) · GW(p)

How so? Don't people find it comforting believing that there are universes where they survive against impossible odds?

Replies from: JGWeissman
comment by JGWeissman · 2014-03-19T18:37:17.116Z · LW(p) · GW(p)

Mere survival doesn't sound all that great. Surviving in a way that is comforting is a very small target in the general space of survival.

Replies from: shminux
comment by shminux · 2014-03-19T18:39:03.564Z · LW(p) · GW(p)

Beats dying if you believe that some day you will be saved BY THE POWER OF SCIENCE!

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2014-03-20T07:30:10.772Z · LW(p) · GW(p)

So let's say you're a soldier in battle in 2000 BCE. Someone just slashed your stomach open with a sword, you're in horrible pain, your internal organs are spilling out, but you're still conscious and aware of what's happening. How are quantum immortality and the power of science going to work out for you now?

EDIT: I thought quantum immortality was thought of as a thing that applies to everyone everywhere. Are we discussing some sort of more constrained version here that doesn't apply to "your chest just got smashed by an engine block but you're still conscious for a little while" but does apply to cryonics, uploading, etc. information-theoretic undeath shenanigans?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-03-20T16:59:02.560Z · LW(p) · GW(p)

The answer is the most likely miracle, but I am not sure what exactly that would be. All the necessary miracles are so improbable that I don't trust my ability to evaluate their relative probabilities.

It could be something like: By random movement of atoms, your organs jump back inside and your wounds heal (and your body overcomes the infection). All witnesses stop fighting and start worshiping you as a god. You don't understand the situation, but successfully use your new situation to stop the war or escape from the war. You collect smart people around you, supported by your followers' donations, and together you invent science relatively slowly. It still takes a hundred years or more, during which you miraculously survive with sufficient brain function. At the end your team develops a recursively self-improving AI (not necessarily a Friendly one, only one that wants to keep you alive).

Despite all the miracles, this seems like the least miraculous path from "cut with a sword" to "immortality". (Assuming that the damage really happened, because otherwise the most likely path starts with "you wake up from the nightmare".)

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2014-03-20T19:15:12.271Z · LW(p) · GW(p)

This is curiously detailed for something where basically the only requirement is that you stay aware of every moment, constant horrible pain and debilitating injuries aren't any sort of problem unless they keep you from staying conscious, and there's basically no lookahead beyond whatever the duration between consecutive states of subjective consciousness is, definitely something less than a second.

Sure, someone in the multiverse is going to get the happy shiny human-friendly thermodynamic miracle starting up for them, but it seems like there'd be countless quite a bit less improbable quivering masses of horrible injuries and pain who Just. Can't. Die.

I mean, think of the lookahead. Sure, the miracle scenario has you having a lot bigger measure of existence after the miracle has taken place, but there doesn't seem to be a point going directly forward from the lethal injury state where it's more likely to go down the path of the miracle starting to happen than to just stay improbably aware in your current rapidly decaying state. You'd probably end up with some incredibly measure-sparse weird Boltzmann-brain-like states in the end, but isn't it possible that at every step along the way there are a lot more pseudo-Boltzmann-brain futures than there are body-repairing thermodynamic miracle futures?

comment by [deleted] · 2014-03-19T16:45:34.680Z · LW(p) · GW(p)

.

Replies from: shminux
comment by shminux · 2014-03-19T16:56:48.601Z · LW(p) · GW(p)

What claim?

Replies from: None
comment by [deleted] · 2014-03-19T16:58:21.221Z · LW(p) · GW(p)

.

Replies from: shminux
comment by shminux · 2014-03-19T17:05:42.382Z · LW(p) · GW(p)

I find it counterproductive to assign probability or truth value to untestables.

Replies from: None
comment by [deleted] · 2014-03-19T17:16:48.813Z · LW(p) · GW(p)

.

Replies from: shminux
comment by shminux · 2014-03-19T18:22:34.182Z · LW(p) · GW(p)

If your decisions depend on untestables, you need a better decision theory.

Replies from: None
comment by [deleted] · 2014-03-19T18:43:59.916Z · LW(p) · GW(p)

.

Replies from: shminux, Lumifer
comment by shminux · 2014-03-19T18:54:19.043Z · LW(p) · GW(p)

Quantum immortality is based on MWI, which is designed explicitly to match the standard "shut up and calculate" approach to QM, which means that it cannot have any measurable effects outside the standard framework, where "Everett branches" are known as "possible outcomes". If you expect different consequences for your personal experience in the two pictures, you probably do not understand MWI.

Replies from: None
comment by [deleted] · 2014-03-19T18:59:49.352Z · LW(p) · GW(p)

.

Replies from: shminux, shminux
comment by shminux · 2014-03-19T20:09:36.935Z · LW(p) · GW(p)

Blanking your comments before retracting them? To hide changing your mind after learning stuff?

Replies from: None
comment by [deleted] · 2014-03-19T20:17:39.313Z · LW(p) · GW(p)

No, now that I got a clear picture of this issue I will delete this account among other things. Sorry for bothering you.

comment by shminux · 2014-03-19T20:03:27.118Z · LW(p) · GW(p)

I don't think removing the content from your comments is a good way to react to changing your mind, if that is your reason.

comment by Lumifer · 2014-03-19T18:49:49.734Z · LW(p) · GW(p)

it has some consequences on my personal experience of the world that I will probably see in some time, given that it's actually true.

What might these consequences be?

Replies from: None
comment by [deleted] · 2014-03-19T18:55:22.097Z · LW(p) · GW(p)

.

Replies from: Lumifer
comment by Lumifer · 2014-03-19T19:12:33.278Z · LW(p) · GW(p)

I don't think that's how MWI works.

Replies from: None
comment by [deleted] · 2014-03-19T19:16:02.497Z · LW(p) · GW(p)

.

Replies from: Lumifer
comment by Lumifer · 2014-03-19T19:27:39.596Z · LW(p) · GW(p)

So I can kill myself without worrying about some nasty existential horror shit, if needs be? Because that's really all I wanted to know and LW seems like the only place that would take a query like this seriously

Does not follow. MWI is orthogonal to "some nasty existential horror shit"; it doesn't provide evidence either for or against your worries.

Replies from: None
comment by [deleted] · 2014-03-19T19:32:24.959Z · LW(p) · GW(p)

.

Replies from: Lumifer
comment by Lumifer · 2014-03-19T19:37:34.911Z · LW(p) · GW(p)

I have no idea what you worry about, but according to our current understanding, in this life there is no detectable difference between a Copenhagen world and an Everett world. As to the afterlife, all bets are off -- contemporary physics can't help you there.

Replies from: None
comment by [deleted] · 2014-03-19T19:42:04.764Z · LW(p) · GW(p)

.

Replies from: Lumifer
comment by Lumifer · 2014-03-19T19:46:00.099Z · LW(p) · GW(p)

Trying to understand quantum physics on the basis of web comics doesn't strike me as useful. The lesson you should draw from that comic is that standing near a nuclear bomb when it explodes is a bad idea.

What do you mean by the afterlife?

Whatever happens to you after you die.

comment by [deleted] · 2014-03-19T07:45:58.733Z · LW(p) · GW(p)

A Puddler's Tale

They neither know of night or day,
They night and day pour out their thunder.
As every Ingot rolls away,
A dozen more are split 'asunder.
There is a sign above the gate: Eleven days since a man lay dying,
Now every shift brings fear and hate, and shaken men in terror crying.
*
The molten rivers boil away a fiery brew Hell never equalled,
To their profits the bosses pray,
And Mammon sings in his grim cathedral:
His attendants join the choir,
and Heaven help us if we're shirking!
Stoke the furnace's altar fire and just be thankful that we're working!
*
To this, men, charge the hoppers high, 'lest you endure the foreman's choler!
To this, men, drain the tankards dry,
And let us toast the almighty Dollar;
It keeps us chained here before the fire,
Where heat and noise send the weak a quaking.
That the Siren's infernal cry the open heart sets the ground to shaping.
*
To this, men, raise the ladies high and make them shriek with love and laughter!
To this, men, kiss your woman's eyes,
and raise a song unto the rafters.
Wash the steel mill from your hair,
Beat the table 'till it's breaking.
Don't let terror enter there and in the hearth set the glasses breaking!

I expect the future of emulated human minds to be interestingly similar.

comment by ThisSpaceAvailable · 2014-03-18T23:20:20.453Z · LW(p) · GW(p)

Would it be possible for a comment to have anchors that are Karma-scored separately, so that someone making several points in the same comment can see which ones are gaining or losing Karma?

Replies from: ChristianKl
comment by ChristianKl · 2014-03-19T15:35:05.978Z · LW(p) · GW(p)

Just make multiple comments.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2014-03-19T17:55:07.344Z · LW(p) · GW(p)

If the points are chained to make a coherent argument, that's going to risk having the argument split up, whether you nest them as replies or put them sequentially.

comment by lukeprog · 2014-03-24T19:55:19.770Z · LW(p) · GW(p)

Amusing final sentence from Clarke & Primo (2005):

While much ink has been spilled arguing for this approach to the study of political science, little attention has been paid to justifying and rationalizing the method. On the rare occasions that justification has been attempted, the results have been maddeningly vague. Why test predictions from a deductive, and thus truth-preserving, system? What can be learned from such a test? If a prediction is not confirmed, are assumptions already known to be false to blame? What precisely is the connection between a model and a theory? These questions are never addressed in a satisfactory way.

Some will no doubt argue that such justifications are fruitless and that we should just "get on" with the business of doing science. Philosophical discussions should affect political scientists no more than political scientists affect policy makers...

comment by Arkanj3l · 2014-03-24T19:21:54.573Z · LW(p) · GW(p)

I'm having trouble knowing how well I understand a concept while I'm still learning it. I tend to be good at making up consistent verbalizations of why something is the way it is, or how something works. However, these verbalizations aren't always accurate.

The first strategy against this is simply to do more problem sets with better feedback. I'm wondering if we can come up with a supplementary strategy where I can check if I really understand a concept.

Replies from: ChristianKl
comment by ChristianKl · 2014-03-24T22:00:12.213Z · LW(p) · GW(p)

if I really understand a concept

What does that phrase mean?

comment by JQuinton · 2014-03-23T21:57:51.978Z · LW(p) · GW(p)

I'm contemplating going to grad school for psychology.

I'd really like to focus on the psychology of religion, but there are other areas of psychology that I find interesting too (e.g. evolutionary psychology).

I don't have a background in psychology; I took one intro course in undergrad to fulfill a requirement for my bachelor's in IT. I do read a lot of pop-sci about psychology.

Anyone have any advice for me going forward?

Replies from: Lumifer
comment by Lumifer · 2014-03-23T22:10:30.085Z · LW(p) · GW(p)

Think about how you will earn your living. Who will pay you money, for what, and how much? In particular, consider this under the assumption that you will NOT be able to get a tenured position in academia.

Replies from: JQuinton
comment by JQuinton · 2014-03-24T13:20:57.743Z · LW(p) · GW(p)

Yes... I've been reading up on all of the horror stories of adjunct professors...

comment by eggman · 2014-03-22T06:54:25.425Z · LW(p) · GW(p)

What's the process for selecting which 'rationality blogs' are featured in the sidebar? Is the list selected by the administrators of the site?

I'm surprised that the blogs of some users with lots of promoted posts here aren't featured as rationality blogs.

Replies from: tut
comment by tut · 2014-03-22T11:57:01.013Z · LW(p) · GW(p)

They asked everyone which blogs they wanted on the side panel when they redesigned the site. I don't think the list has been changed since they put it up.

Replies from: eggman
comment by eggman · 2014-03-23T02:02:17.487Z · LW(p) · GW(p)

Thanks for the information. In that case, I hope there is another opportunity in the future to ask which blogs should be featured on the side panel. I don't know what anyone else is looking for, but as far as I'm concerned, I check these other rationality blogs as often as I check things posted directly to Less Wrong; I find Slate Star Codex and Overcoming Bias particularly interesting. If other people gain similar value from these blogs, perhaps more could be added in the future. I understand that if each of us freely suggested whichever blogs we individually considered 'rational', there would be lots of noise and redundancy, and the forum would be swamped with poor suggestions. So I may start a poll in the future asking which blogs the community as a whole would like to see added.

comment by PECOS-9 · 2014-03-19T18:48:01.902Z · LW(p) · GW(p)

Long shot again:

Any LW NYCers have a room available for <$1,000 per month that I (a friendly self-employed 23-year-old male) might be able to move into within a week or two? Or leads on a 1br/studio for <$1400? I could also go a bit above those prices if necessary.

PM me if so and I'll send more details about myself. I'm also staying with some friends in NYC right now so we could meet up anytime.

Replies from: erratio
comment by erratio · 2014-03-19T20:39:20.575Z · LW(p) · GW(p)

Have you considered posting to the NYC LW mailing list? I don't think most of them are here regularly these days.

Replies from: PECOS-9
comment by PECOS-9 · 2014-03-21T21:34:23.160Z · LW(p) · GW(p)

Thanks, I was going to take your advice, but I got lucky and found a nice place yesterday.

comment by polymathwannabe · 2014-03-19T23:16:53.304Z · LW(p) · GW(p)

What is your irrational reading guilty pleasure? Whenever I need a cheap laugh, I browse Conservapedia. Where do you go to indulge the occasional craving for high-octane idiocy?

Replies from: MathiasZaman, knb, Locaha, drethelin, Lumifer, Richard_Kennaway
comment by MathiasZaman · 2014-03-19T23:26:22.722Z · LW(p) · GW(p)

I frequent Reddit; that is bad enough.

comment by knb · 2014-03-23T00:34:42.439Z · LW(p) · GW(p)

RationalWiki.

comment by Locaha · 2014-03-20T07:23:46.374Z · LW(p) · GW(p)

Russian LiveJournal. With the whole Crimea business going on, the shitstorm there is in full force right now...

comment by drethelin · 2014-03-20T22:22:44.033Z · LW(p) · GW(p)

4chan

comment by Lumifer · 2014-03-20T00:41:16.769Z · LW(p) · GW(p)

Whenever I need a cheap laugh, I browse Conservapedia.

Any feelings of your mind being killed while doing so?

Replies from: polymathwannabe
comment by polymathwannabe · 2014-03-20T03:23:23.148Z · LW(p) · GW(p)

No, it would take more than that. Google "Golden Age of Gaia" if you'd like some serious brain death.

Edited: http://www.whale.to/ and http://www.naturalnews.com/ are their own brand of terrible.

comment by Richard_Kennaway · 2014-03-20T19:45:24.216Z · LW(p) · GW(p)

This is like saying, what sort of shit do you look for when you want to smell something really horrible?

comment by ChristianKl · 2014-03-18T13:30:00.596Z · LW(p) · GW(p)

Facebook announced Graph Search with great fanfare, but if I want to know something simple, like getting a list of my recently added friends, I can't just type it into the search bar; I have to search on Google, only to learn that I have to go through the Recent Activity tab.

Similarly, I have told Facebook through its menus that I speak English and German. It still shows me my friends' posts in French and Romanian that I can't read, and it doesn't offer to translate them. A simple idea like showing me the English posts my French friends write while hiding their French ones just doesn't seem to be implemented.
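
To make the missing feature concrete, here is a minimal sketch of the kind of per-post language filter being described. It is purely illustrative: the post structure, field names, and language codes are assumptions for the example, not anything Facebook's actual API exposes.

```python
# Illustrative sketch of a per-post language filter (assumed data shapes;
# not Facebook's real API). Keep posts written in a language the reader
# understands and drop the rest; a real implementation would offer machine
# translation for the dropped posts instead of silently hiding them.

POSTS = [
    {"author": "Alice",  "lang": "en", "text": "Hello!"},
    {"author": "Benoit", "lang": "fr", "text": "Bonjour tout le monde"},
    {"author": "Benoit", "lang": "en", "text": "Trying out English today"},
]

USER_LANGUAGES = {"en", "de"}  # languages the reader has said they speak


def filter_feed(posts, user_languages):
    """Return only the posts whose detected language the reader can read."""
    return [post for post in posts if post["lang"] in user_languages]


if __name__ == "__main__":
    for post in filter_feed(POSTS, USER_LANGUAGES):
        print(f'{post["author"]}: {post["text"]}')
```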

When using the Facebook app, I can't easily view all the events happening today.

What do all those Facebook engineers do, when they don't seem to go for even the low-hanging fruit?

Replies from: None, moridinamael
comment by [deleted] · 2014-03-18T18:19:16.720Z · LW(p) · GW(p)

You have to remember that you are not the customer for Facebook... you are the product.

Giving you more control over your timeline and the posts you see is good for you, but it undermines Facebook's ability to charge for access to you through "promoted posts".

On the other hand, something like Graph Search gives Facebook an opportunity to compete with Google and LinkedIn.

comment by moridinamael · 2014-03-18T22:17:23.876Z · LW(p) · GW(p)

Just now Facebook helpfully placed directly in my newsfeed a post by somebody who is not on my friends list, who happens to trigger the hell out of me, and whom I actively avoid reading about on Facebook as much as possible. Thanks, Facebook, great algorithms you've got there.

comment by RolfAndreassen · 2014-03-19T05:51:44.285Z · LW(p) · GW(p)

Based on discussion at the South Bay Area meetup tonight.

The five pillars of Islam are

  • Shahadah, confession of faith: Declaring that there is no god except God, and Muhammad is His prophet.
  • Salat, ritual prayer, five times a day.
  • Sawm, fasting during Ramadan - ideally, eating nothing between dawn and sunset.
  • Zakat, giving alms.
  • Hajj, making a pilgrimage to Mecca.

By analogy, I propose five pillars of LessWrongIsm:

  • Confession of faith: "There is no God except the one we're going to build, and the sage Yudkowsky is Its prophet."
  • Polyphasic sleep, five times a day. An acceptable alternative is polyamorous sex, five times a day.
  • Diet. Any unusual diet will do - paleo, four spoons of sugar in the morning, strict vegan, whatever; but it must be strictly adhered to between sunrise and sunset.
  • Efficient altruism.
  • Moving to the Bay Area, or making pilgrimage to a CFAR workshop.

Replies from: shminux, Luke_A_Somers, ChristianKl, polymathwannabe
comment by shminux · 2014-03-19T18:34:09.189Z · LW(p) · GW(p)

The topic "Is LW a cult?" has been discussed so much here and elsewhere that it is probably worth creating a LWiki page about it. Including the discussion of the term cult and when applying it constitutes a non-central fallacy.

comment by Luke_A_Somers · 2014-03-19T17:57:28.123Z · LW(p) · GW(p)

Polyphasic sleep, five times a day. An acceptable alternative is polyamorous sex, five times a day.

Can you mix and match? I don't think I could keep up with either of those by themselves.

comment by ChristianKl · 2014-03-19T14:47:27.557Z · LW(p) · GW(p)

Polyphasic sleep, five times a day. An acceptable alternative is polyamorous sex, five times a day.

As far as I know there's Uberman with six sleeps a day and Everyman with four; I don't know of a five-sleep schedule that people have successfully adapted to.

Diet. Any unusual diet will do - paleo, four spoons of sugar in the morning, strict vegan, whatever; but it must be strictly adhered to between sunrise and sunset.

I don't think we care about the timeframe of sunrise to sunset. It has to be the whole day, but there might be one cheat day per week.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2014-03-20T03:53:34.493Z · LW(p) · GW(p)

I would like to gently suggest that you may have missed the way my tongue was poking into my cheek.

comment by polymathwannabe · 2014-03-19T13:06:59.885Z · LW(p) · GW(p)

"There is no God except the one we're going to build, and the sage Yudkowsky is Its prophet."

And this is why people mistake us for a cult.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-03-19T14:00:16.701Z · LW(p) · GW(p)

And this is why people mistake us for a cult.

I believe RolfAndreassen is being humorous.

Replies from: polymathwannabe, FiftyTwo
comment by polymathwannabe · 2014-03-19T14:03:07.072Z · LW(p) · GW(p)

I get that he is. But Poe's Law works both ways: there's no self-parody that some clueless outsider won't mistake for real lunacy.

Replies from: Lumifer, amacfie
comment by Lumifer · 2014-03-19T15:55:58.114Z · LW(p) · GW(p)

there's no self-parody that some clueless outsider won't mistake for real lunacy.

That's a good thing -- I would much prefer that somebody that clueless just shake his head and continue on his merry way.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-03-19T16:13:58.035Z · LW(p) · GW(p)

True, we don't want to attract that particular person. But the misinformation he/she is going to spread may discourage many of the people we would want to attract.

comment by amacfie · 2014-03-19T15:30:49.662Z · LW(p) · GW(p)

I'd say it's worth it to have some humor and somewhat self-deprecating fun here.

Replies from: Lumifer
comment by Lumifer · 2014-03-19T15:56:43.309Z · LW(p) · GW(p)

It's not only worth it; it is sorely needed. Taking yourself too seriously is a debilitating disease that can be fatal.

Replies from: fubarobfusco
comment by fubarobfusco · 2014-03-19T17:17:19.829Z · LW(p) · GW(p)

One of the signs of a cult is "grimness" — "disapproval concerning jokes about the group, its doctrines or its leader(s)."

Replies from: polymathwannabe
comment by polymathwannabe · 2014-03-19T18:25:07.283Z · LW(p) · GW(p)

How come that list doesn't mention hero worship?

Replies from: fubarobfusco
comment by fubarobfusco · 2014-03-19T20:11:00.811Z · LW(p) · GW(p)

I don't know, and unfortunately the author is dead so we can't ask him.

That said, "hero worship" could mean a number of different things, not all of which might be symptomatic of a dangerous cult. Could you expand on what you mean by it?

Replies from: polymathwannabe
comment by polymathwannabe · 2014-03-19T20:45:21.565Z · LW(p) · GW(p)

Eliezer Yudkowsky is one of the most accomplished, knowledgeable, and stimulating writers I've ever encountered, and if he ever were to visit my house, I'd buy a freezer large enough to accommodate his head, just in case he choked on my boiled chickpeas. That being said, I think elevating him to Chuck Norris status is decidedly harmful to the propagation of our cause. He himself has advocated that we not worship Einstein, because doing so obscures the fact that Einstein was just as human as we are and discourages others from striving to reach his level. Likewise, EY is no superhero, no demigod, no mythic savior, and it won't do to treat him like one. This is why, as much as I admire the guy's awesomeness, I'm against the existence of the "EY Facts" thread. I can't explain rationality to others and keep a straight face while thinking that the author I'm citing is the Way, the Truth, and the Life, the last hope and salvation of humanity. Leave it to the history books to sing his praises; for the time being, doing so ourselves is the opposite of helpful.

Replies from: asr, fubarobfusco, Lumifer, XiXiDu
comment by asr · 2014-03-19T20:52:27.408Z · LW(p) · GW(p)

I think the "EY facts" goes the other way. That's not hero worship, that's making a joke of hero worship.

"Chuck Norris status" is the opposite of hero-worship. Is there anybody who seriously believes that Chuck Norris is actually possessed of superhuman powers? Heck, is there anybody who even seriously believes he's a uniquely talented actor?

comment by fubarobfusco · 2014-03-19T22:52:18.568Z · LW(p) · GW(p)

I'm having difficulty parsing which parts of this comment are intended to be "within quotes" as an example of hero worship ....

Replies from: polymathwannabe
comment by polymathwannabe · 2014-03-19T22:57:43.776Z · LW(p) · GW(p)

The great-writer, chickpeas-and-freezer part is truly my opinion.

Replies from: XiXiDu
comment by XiXiDu · 2014-03-20T09:11:29.051Z · LW(p) · GW(p)

The great-writer, chickpeas-and-freezer part is truly my opinion.

Telling the world that EY is a great writer etc. is fine. Telling the world that you believe him to be great enough that you'd buy a freezer large enough to accommodate his head, in case he died in your house, is much worse than self-mockery such as the EY facts page.

No offense, but I suggest that you stop trying to improve the reputation of LW/MIRI. If MIRI wants to improve their reputation and public relations, they should hire a professional outsider who is neurotypical (I am neither, so maybe I am wrong about the impression your opinion gives).

Replies from: polymathwannabe
comment by polymathwannabe · 2014-03-20T13:53:51.142Z · LW(p) · GW(p)

Upon rereading my post after a full night's sleep, I can see the problems with how I expressed it. I agree that it may have come off as too fanboyish, and that we're drawing the line between fanboyism and idolatry in different places. Continued argument will only dig me deeper.

comment by Lumifer · 2014-03-19T21:00:31.329Z · LW(p) · GW(p)

I think elevating him to Chuck Norris status is decidedly harmful to the propagation of our cause.

Oh, dear. Elevating EY to Chuck Norris status is hilarious and, I would argue, shows "our cause" in a good light.

Maybe elevating EY to divinely-inspired-prophet (PBUH) status would be harmful, but I haven't seen anyone do that.

I can't explain rationality to others and keep a straight face

I don't see any need to keep a straight face. I don't know if I am typical, but I don't respond well to things explained to me with a terribly serious expression (well, as long as they don't involve things like staunching bleeding from open wounds and such).

comment by XiXiDu · 2014-03-20T09:00:49.900Z · LW(p) · GW(p)

Eliezer Yudkowsky is one of the most accomplished, knowledgeable, and stimulating writers I've ever encountered, and if he ever were to visit my house, I'd buy a freezer large enough to accommodate his head, just in case he choked on my boiled chickpeas. That being said, I think elevating him to Chuck Norris status is decidedly harmful to the propagation of our cause.

Just one data point here: the EY facts post was funny and not at all cultish, whereas your first sentence (and, to a lesser extent, the whole comment) made me cringe.

comment by FiftyTwo · 2014-03-22T01:30:47.855Z · LW(p) · GW(p)

They're attempting it, but it isn't sufficiently amusing for the tradeoff to be worth it.