Open Thread, November 15-22, 2013

post by drethelin · 2013-11-16T01:36:54.422Z · LW · GW · Legacy · 261 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

261 comments

Comments sorted by top scores.

comment by fubarobfusco · 2013-11-19T18:26:05.017Z · LW(p) · GW(p)

Scope insensitivity — When you don't care enough to use mouthwash.

Availability heuristic — People in open relationships are hotter.

Endowment effect — People with large attributes tend to think size matters more.

Hyperbolic discounting — Black Friday, for instance.

Observer-expectancy effect — Being able to see that someone is pregnant.

Fundamental attribution error — Blaming everything on the Religious Right.

Halo effect — Blaming everything on the Covenant.

Primacy effect — It's easy to remember apes and monkeys.

Replies from: bramflakes, Alejandro1
comment by bramflakes · 2013-11-20T13:47:27.161Z · LW(p) · GW(p)

Representativeness heuristic - Most liars are politicians.

Replies from: fubarobfusco
comment by fubarobfusco · 2013-11-21T20:14:13.949Z · LW(p) · GW(p)

Availability cascade — When your whole social circle becomes polyamorous.

Curse of knowledge — What you get for reading the Necronomicon.

Denomination effect — Catholics spend more money than Methodists.

Restraint bias — Favoritism shown towards bondage practitioners.

Illusion of transparency — Ignoring the gunk on your windshield.

System justification — When your computer lines up your text for you.

Peak-end rule — The king who stands on the mountaintop will fall.

Reminiscence bump — A feature of phrenology.

Rosy retrospection — Remember how much fun you had with Rose?

Bounded rationality — The logic of kangaroos.

comment by Alejandro1 · 2013-11-20T15:11:21.258Z · LW(p) · GW(p)

Status quo bias -- Fear of losing one's social standing.

Recency illusion -- Belief that a word is new when it was really coined by Jane Austen.

Choice-supportive bias -- Pro-abortion bias.

Risk compensation -- Making a daring attack at Kamchatka and then fortifying.

comment by David_Gerard · 2013-11-16T23:32:04.926Z · LW(p) · GW(p)

You Must Try, And Then You Must Ask. Note the definition of "try" here is 15 minutes, not the locally-canonical 5. (I think 15 is about right for the context - programming and sysadmin stuff.)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-11-17T14:13:03.433Z · LW(p) · GW(p)

The actual procedure is you must try, you must record your thinking, you must ask.

comment by JoshuaZ · 2013-11-17T23:27:48.786Z · LW(p) · GW(p)

I was recently discussing cryonics with my girlfriend, who is highly uncomfortable with the notion. We identified what may (tentatively) be part of the underlying objection by people, especially loved ones, to cryonics. Essentially, it comes down to a lack of closure. When someone dies, you can usually mourn and move on. But if there's a genuine chance of resurrection, then the ability to move on largely goes away.

If this is the case, then one might ask why the same thing doesn't happen with religions that believe in an afterlife. That could be because they believe that everyone will be resurrected. But it may also be that, at some level, people often don't fully believe there will be an afterlife, or if they do, their version of it is highly abstract. If that's the case, cryonics may be being hurt by its own plausibility.

Replies from: Nate_Gabriel, hyporational, Lumifer, army1987, passive_fist
comment by Nate_Gabriel · 2013-11-17T23:38:36.781Z · LW(p) · GW(p)

That's...that's terrible. That it would feel worse to have a chance of resurrection than to have closure. It sounds depressingly plausible that that's people's true rejection, but I hope it's not.

Religion doesn't have the same problem, and in my experience it's because of the certainty. People believe themselves to be absolutely certain in their belief in the afterlife. So there's no closure problem, because they simply know that they'll see the person again. If you could convince people that cryonics would definitely result in them being resurrected together with their loved ones, then I'd expect this particular problem to go away.

Replies from: Brillyant, hyporational, Viliam_Bur
comment by Brillyant · 2013-11-18T15:38:06.561Z · LW(p) · GW(p)

That's...that's terrible. That it would feel worse to have a chance of resurrection than to have closure. It sounds depressingly plausible that that's people's true rejection, but I hope it's not.

In my experience, people holding on to very, very small probabilities can be unhealthy. Misplaced hope can be harmful.

Religion doesn't have the same problem, and in my experience it's because of the certainty. People believe themselves to be absolutely certain in their belief in the afterlife. So there's no closure problem, because they simply know that they'll see the person again.

I don't think it is quite this cut and dried. Religious people will assert they are certain, but I think there is a significant level of doubt there. People do use heaven as a way to cope with the loss of a loved one -- it is perfectly understandable, but I think it ultimately often prevents them from grieving and achieving healthy and proper closure.

Replies from: RolfAndreassen, Nate_Gabriel
comment by RolfAndreassen · 2013-11-22T02:27:30.474Z · LW(p) · GW(p)

Religious people will assert they are certain, but I think there is a significant level of doubt there.

The phrase "sure and certain hope of the resurrection" is rather telling. :)

comment by Nate_Gabriel · 2013-11-18T20:11:00.354Z · LW(p) · GW(p)

Ideally, how people feel about things would be based in real-world consequences, and a chance of someone being not dead is usually strictly better than the alternative. But I can see how for a small enough chance of resurrection, it could possibly be outweighed by other people holding on to it. I still hope that isn't what's going on in this case, though. That would require people to be feeling "I'd rather have this person permanently dead, because at least then I know where I stand."

Replies from: hyporational
comment by hyporational · 2013-11-19T18:22:11.110Z · LW(p) · GW(p)

That would require people to be feeling "I'd rather have this person permanently dead, because at least then I know where I stand."

That's a pretty insulting way to put it. Consider an alternative: I'd rather spend my only short life living it to the fullest than worrying about people, including me, who will very likely permanently die no matter what I do to help.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-11-20T09:03:16.765Z · LW(p) · GW(p)

In that case, the best solution would be to let the loved person be frozen and then pretty much ignore them (i.e. spend only as much thought on them as we usually spend on dead people).

Replies from: hyporational, lmm
comment by hyporational · 2013-11-21T01:41:35.375Z · LW(p) · GW(p)

The problem I have imagining someone preventing their loved one from doing cryonics in order to have closure is that they themselves will die with certainty anyway. Do they also wish they would die before their loved ones do, in the case of no cryonics?

It's easier for me to imagine why they wouldn't want to do cryonics themselves because of wanting closure.

comment by lmm · 2013-11-20T10:40:52.709Z · LW(p) · GW(p)

Sure. But self-modifying to feel differently is hard.

comment by hyporational · 2013-11-19T18:13:46.402Z · LW(p) · GW(p)

That's...that's terrible. That it would feel worse to have a chance of resurrection than to have closure.

That would depend on how high the chance of resurrection is. Closure is relatively certain.

comment by Viliam_Bur · 2013-11-18T08:26:08.276Z · LW(p) · GW(p)

Religious people also believe that after they are resurrected (assuming it will be in heaven), all their problems will be magically fixed. So there is nothing to worry about (besides getting to heaven).

comment by hyporational · 2013-11-18T06:25:33.301Z · LW(p) · GW(p)

That could be because they believe that everyone will be resurrected.

Assuming your closure hypothesis is true, I think this is it. With cryonics, not only do you have to worry about the low chance of getting resurrected, but you also have to worry about the state of the cryonics facilities that store your loved ones while you're still alive.

Replies from: ygert
comment by ygert · 2013-11-18T12:25:14.570Z · LW(p) · GW(p)

That could be because they believe that everyone will be resurrected.

Assuming your closure hypothesis is true, I think this is it. With cryonics, not only do you have to worry about the low chance of getting resurrected, but you also have to worry about the state of the cryonics facilities that store your loved ones while you're still alive.

I doubt this is the case. Were this the case, wouldn't the solution be obvious? Why are they objecting to cryonics, rather than also signing up themselves, if they agree that cryonics is plausible and fear their loved one being resurrected without them?

Replies from: hyporational
comment by hyporational · 2013-11-18T12:48:10.480Z · LW(p) · GW(p)

I don't understand your objection. I think you misread the previous two comments.

comment by Lumifer · 2013-11-18T02:25:14.085Z · LW(p) · GW(p)

why the same thing doesn't happen with religions that believe in an afterlife

Because with religion you return to life in an afterlife, which is entirely different from "normal" life. And with cryonics you return to normal life, just after some time has passed.

Replies from: army1987
comment by A1987dM (army1987) · 2013-11-18T13:45:39.092Z · LW(p) · GW(p)

For some value of “normal”... Watch Good Bye, Lenin! -- and that was only a few months. Or see this post, and the post it's a followup to, and the posts that one is in turn a followup to.

Replies from: Lumifer
comment by Lumifer · 2013-11-18T15:27:12.300Z · LW(p) · GW(p)

We are not talking about the territory, we are talking about the map.

The point is the difference in perception (which you may argue is wrong, misguided, etc. -- it still exists) of the religious afterlife and of the regular human future.

comment by A1987dM (army1987) · 2013-11-18T13:40:23.477Z · LW(p) · GW(p)

If this is the case, then one might ask why the same thing doesn't happen with religions that believe in an afterlife.

The religion I'm familiar with (dunno about others) explicitly says that the death of either spouse terminates the marriage.

That could be because they believe that everyone will be resurrected.

Yeah, but some people will go to heaven and other people will go to hell, so I don't think that's the answer.

Replies from: hyporational, JoshuaZ
comment by hyporational · 2013-11-18T16:25:04.414Z · LW(p) · GW(p)

Do I correctly suspect they have polyamory in heaven?

Yeah, but some people will go to heaven and other people will go to hell, so I don't think that's the answer.

I bet only a minority of people who believe in heaven actually believe in hell, or at least believe that you have to be really evil to go there.

Replies from: Lumifer, Adele_L, army1987
comment by Lumifer · 2013-11-18T16:39:39.788Z · LW(p) · GW(p)

I bet only a minority of people who believe in heaven actually believe in hell

Kinda: 62% of Americans believe in heaven and 53% believe in hell (source). I bet there is more data in Pew reports.

Replies from: hyporational
comment by hyporational · 2013-11-18T16:41:39.955Z · LW(p) · GW(p)

Those silly Americans :)

It seems I live in heaven already.

ETA: I blitz-googled second-hand info about the World Values Survey 2000: 50% of Finns believe in heaven, but only 25% believe in hell. 74% believe in god.

In a 2012 survey done by our church only 1/8 believe in "the Christian promise of eternal afterlife".

Replies from: Lumifer
comment by Lumifer · 2013-11-18T17:00:32.738Z · LW(p) · GW(p)

The funny thing is that Jesus is very very specific that more people will end up in hell than in heaven :-D

Matthew 7:13-14: "Enter ye in at the strait gate: for wide is the gate, and broad is the way, that leadeth to destruction, and many there be which go in thereat: Because strait is the gate, and narrow is the way, which leadeth unto life, and few there be that find it."

Replies from: CAE_Jones
comment by CAE_Jones · 2013-11-18T17:29:55.172Z · LW(p) · GW(p)

I wouldn't call that "very very specific", since the words are destruction (I think it can also be "lost") and life, rather than Heaven/the Kingdom or Gehenna/Hades/everlasting punishment. It does, however, make it abundantly clear that the overwhelming majority is doomed in some fashion.

Replies from: Lumifer
comment by Lumifer · 2013-11-18T17:55:43.635Z · LW(p) · GW(p)

I wouldn't call that "very very specific", since the words are destruction

True, here Jesus is speaking about the alternative to the everlasting life. But I don't know -- is there a branch of Christian theology which holds that it's heaven or nothing -- as in, if God doesn't let you into heaven you don't go to hell but just cease to exist?

P.S. As far as I remember there are mainstream Christian interpretations of hell as nothing more than absence of God's love/grace.

Replies from: hyporational, polymathwannabe
comment by hyporational · 2013-11-18T18:23:17.030Z · LW(p) · GW(p)

Some Jehovah's Witnesses whom I tried to deconvert at my door seemed to believe that. It was eternal life in paradise on earth or nothing.

Replies from: Lumifer
comment by Lumifer · 2013-11-18T18:35:10.083Z · LW(p) · GW(p)

Ah, yes, it seems Jehovah's Witnesses do have a "doctrine of annihilation" and for them it is heaven or nothing.

Replies from: CAE_Jones
comment by CAE_Jones · 2013-11-19T05:12:23.280Z · LW(p) · GW(p)

Seventh-day Adventists appear to be annihilationist as well. Then there are Universalists, who insist that Aeonian in the first Century CE could not possibly mean "eternal", so that everyone eventually gets out of Hell.

comment by polymathwannabe · 2013-11-22T19:05:48.286Z · LW(p) · GW(p)

I like the Eastern Orthodox version: Heaven for everyone---like it or not.

comment by Adele_L · 2013-11-19T06:39:07.705Z · LW(p) · GW(p)

Do I correctly suspect they have polyamory in heaven?

There is (still) at least polygamy in Mormon heaven.

comment by A1987dM (army1987) · 2013-11-19T00:21:06.430Z · LW(p) · GW(p)

Do I correctly suspect they have polyamory in heaven?

ISTR that someone asked Jesus what happens if a widow gets married again, after everyone dies and is resurrected -- which husband does she get back with? I can't remember the answer, though.

Replies from: gwern
comment by gwern · 2013-11-19T00:38:41.961Z · LW(p) · GW(p)

He ducked the question, I think, by simply saying that non-marriage was superior and/or that maybe in heaven no one is married. Luke 20:27-38:

Some of the Sadducees, who say there is no resurrection, came to Jesus with a question. "Teacher," they said, "Moses wrote for us that if a man's brother dies and leaves a wife but no children, the man must marry the widow and have children for his brother. Now there were seven brothers. The first one married a woman and died childless. The second and then the third married her, and in the same way the seven died, leaving no children. Finally, the woman died too. Now then, at the resurrection whose wife will she be, since the seven were married to her?"

Jesus replied, "The people of this age marry and are given in marriage. But those who are considered worthy of taking part in that age and in the resurrection from the dead will neither marry nor be given in marriage, and they can no longer die; for they are like the angels. They are God's children, since they are children of the resurrection."

Replies from: jmmcd
comment by jmmcd · 2013-11-22T01:37:12.494Z · LW(p) · GW(p)

Golly, that sounds to me as if the people of this age don't go to heaven!

comment by JoshuaZ · 2013-11-18T15:02:01.289Z · LW(p) · GW(p)

The religion I'm familiar with (dunno about others) explicitly says that the death of either spouse terminates the marriage.

That's a good point: that's the case in both Judaism and most forms of Christianity.

Yeah, but some people will go to heaven and other people will go to hell, so I don't think that's the answer.

That's very religion-specific (for example, Judaism in most forms doesn't believe this), and people of most religions aren't generally going to think their loved ones are going to hell. Counterexamples might exist in some religious traditions that have very narrow conditions, such as some forms of evangelical Protestantism, but even then, most people will still be married to believers.

comment by passive_fist · 2013-11-18T09:37:50.534Z · LW(p) · GW(p)

What is meant by closure? Is it just about being concerned as to what happens to them while they're frozen? Or does it mean that people want their loved ones to die at some point?

Replies from: DaFranker
comment by DaFranker · 2013-11-18T13:40:06.738Z · LW(p) · GW(p)

The average human requires some form of mental "Ending" to their inner narrative's "Story Of My Relationship With This Person", AFAICT. Without some mental marker which can be officially labeled in their mind as the "End" of things, the person and their relationship with that person will linger in their subconscious, and they will worry, stress, and otherwise continue agonizing over the subject, as when waiting to see whether a disappeared or kidnapped person will come back or eventually turn up dead.

From my viewpoint, the need for "closure" is an extremely selfish desire for some external sign that they are allowed to stop worrying about it. However, for most people, it is a "natural" part of their life and the need for closure is socially accepted and often socially expected. Outside of LW, I would expect that qualifying a need for closure as "selfish" would earn me substantial scorn and negative judgment.

Replies from: Lumifer
comment by Lumifer · 2013-11-18T15:47:25.832Z · LW(p) · GW(p)

What's wrong with selfish desires?

Replies from: DaFranker, Desrtopa, hyporational, hyporational
comment by DaFranker · 2013-11-18T19:51:01.289Z · LW(p) · GW(p)

After several attempts at providing a direct answer to this, I find that I am currently unable to.

The term "wrong" here confuses me more than anything. What's the point of the question?

My comment is about how the need for closure is suboptimal for both the individual and the society, and how reality doesn't necessarily fit with a human's inner narrative expectations on this subject.

If you're asking about why generic society would scorn and negatively perceive the notion of it being selfish, it's because it's socially accepted and socially expected in most cultures that individuals must not be selfish. The details of that probably belong in a different discussion.

Replies from: Lumifer
comment by Lumifer · 2013-11-18T20:12:32.818Z · LW(p) · GW(p)

The term "wrong" here confuses me more than anything. What's the point of the question? My comment is about how the need for closure is suboptimal for both the individual and the society

Let me rephrase the question in your terms, then. Why is the need for closure suboptimal? What are you optimizing for?

Consider hunger -- the desire to eat. Is it "extremely selfish" and "suboptimal for both the individual and the society"?

Consider the need for solitude. Consider the desire to look pretty. Consider the yearning to be loved. Are they all "extremely selfish" and "suboptimal for both the individual and the society"?

Replies from: DaFranker
comment by DaFranker · 2013-11-18T20:48:22.527Z · LW(p) · GW(p)

Consider hunger -- the desire to eat. Is it "extremely selfish" and "suboptimal for both the individual and the society"?

As a desire that causes us to fulfill a necessary condition for survival, as per physics, no. "Survival is beneficial", while perhaps not always optimal, is currently the best general rule that I can think of.

The other examples, modulo some signalling and escalation subtleties regarding the "look pretty" case that would require a separate and lengthy discussion, are similar cases in that the desires lead individuals to take actions that are, ceteris paribus, overall beneficial given the current human condition.

Now change a variable: Food is no longer necessary for humans to live. All humans function perfectly well, as if they were eating optimally, without food (maybe they now take energy from waste heat or something, in an entropy-optimal kind of way). In this hypothetical, I would consider the desire to eat very selfish and suboptimal - it consumes resources of all kinds, including time that the individual could be spending on other things!

My assertion is that, on average, the desire for closure is more similar to the hypothetical second case than to the first case.

Corollaries / secondary assertions: The desire is purely emotional, individuals without it usually actually function better than their counterparts in situations where it is relevant (or at least would in the hypothetical case where there is no social expectation of such), and an individual that does not value conformity to an inner narrative that generates the need for closure is, ceteris paribus, happier and obtains higher expected utility than their counterparts.

Replies from: Lumifer
comment by Lumifer · 2013-11-18T21:12:46.677Z · LW(p) · GW(p)

Now change a variable: Food is no longer necessary for humans to live ... In this hypothetical, I would consider the desire to eat very selfish and suboptimal - it consumes resources of all kinds, including time that the individual could be spending on other things!

You haven't answered an important question: what are you optimizing for?

In your hypothetical, eating (for pure hedonics) does consume resources, including time, but you have neglected to show that this is not a good use of these resources. Yes, they can be spent on other things, but why are these other things more valuable than the hedonics of eating?

What is the yardstick that you apply to outcomes to determine whether they are suboptimal or not?

The desire is purely emotional, individuals without it usually actually function better than their counterparts in situations where it is relevant

8-0 That's an unexpected approach. Are you pointing out the "purely emotional" part in a derogatory sense? Is having emotional desires, err... suboptimal?

What do you mean by individuals without such emotional desires functioning "better"? Are emotions a crippling disability?

Replies from: DaFranker
comment by DaFranker · 2013-11-19T13:34:26.699Z · LW(p) · GW(p)

I am comparing across utility systems, so my best yardsticks are intuition and a vague idea of strength of hedons + psychological utilon estimates as my best approximation of per-person-utility.

I do realize this makes little formal sense considering that the problem of comparing different utility functions with different units is completely unresolved, but it's not like we can't throw balls just because we don't understand physics.

So what I'm really optimizing for is a weighted or normalized "evaluation", on the theoretical assumption that this is possible across all relevant variants of humans, of any given human's utility function. Naturally, the optimization target is the highest possible value.

It's with that in mind that if I consider the case of two MWI-like branches of the same person, one where this person spontaneously develops a need for closure and one where it doesn't happen, and try to visualize in as much detail as possible both the actions and stream of consciousness of both side-by-side, I can only imagine the person without a need for closure to be "better off" in a selfish manner, and if these individuals' utility functions care for what they do for or cost to society, this compounds into an even greater difference in favor of the branch without need for closure.

This exercise can be extended (and yesterday I mentally extended it) to the four-branch example of hunger and need for food, for all binary conjunctions. It seems to me that clearly the hungerless, food-need-less person ought to be better off and obtain higher values on their utility function, ceteris paribus.

Replies from: Lumifer
comment by Lumifer · 2013-11-19T18:41:46.354Z · LW(p) · GW(p)

so my best yardsticks are intuition and a vague idea of ... estimates

Um. Intuition is often used as a fancy word for "I ain't got no arguments but I got an opinion". Effectively you are talking about your n=1 personal likes and dislikes. This is fine, but I don't know why you want to generalize on that basis.

It seems to me that clearly the hungerless, food-need-less person ought to be better off and obtain higher values on their utility function, ceteris paribus.

Let's extend that line of imagination a bit further. It seems to me that this leads to a claim that the fewer needs and desires you have, the more "optimal" you will be in the sense of obtaining "higher values on [the] utility function". In the end someone with no needs or desires at all will score the highest utility.

That doesn't look reasonable to me.

comment by Desrtopa · 2013-11-19T16:11:24.270Z · LW(p) · GW(p)

I would say that for practical purposes, we could distinguish "selfish" desires from simple "desires," as being ones which place an inappropriate degree of burden on other people. After all, in general usage, we tend to use selfish to mean "privileging oneself over others to an inappropriate degree," not "concerned with oneself at all."

In that context, "I need you to definitely stay dead forever so I can stop worrying about it," seems like a good example of a selfish desire, and rather more like something one would apply to a comic book archenemy than a loved one.

Replies from: hyporational, Lumifer
comment by hyporational · 2013-11-19T18:00:15.714Z · LW(p) · GW(p)

"I need you to definitely stay dead forever so I can stop worrying about it,"

What? Why isn't it more like: "It's very probable that you stay dead forever, so I better stop worrying about it and move on, because if I don't, it'll likely screw up my very probably finite, only life."

Replies from: Desrtopa
comment by Desrtopa · 2013-11-19T18:35:59.394Z · LW(p) · GW(p)

If the person takes the burden on themselves to stop worrying about their loved ones who pursue cryonics, that would be a better description. I was trying to encapsulate the scenario under discussion of people who resist letting their loved ones pursue cryonics because it interferes with their sense of closure.

Replies from: hyporational
comment by hyporational · 2013-11-19T19:04:42.742Z · LW(p) · GW(p)

That indeed would be so incredibly selfish that I was blind to the possibility until you explicitly pointed it out now.

comment by Lumifer · 2013-11-19T18:50:40.949Z · LW(p) · GW(p)

in general usage, we tend to use selfish to mean "privileging oneself over others to an inappropriate degree,"

This definition turns on the word "inappropriate" which is a weasel word and can mean everything (and nothing) under the sun. How can one be so selfish as to order a Starbucks latte when there are hungry children in Mozambique?

"I need you to definitely stay dead forever so I can stop worrying about it,"

Doesn't look nice, but then most things dialed to 11 don't look nice.

Let's look at analogous realistic examples. Let's say there is a couple, one spouse gets into a car accident and becomes a vegetable. He's alive and can be kept alive (on respirators, etc.) for a long time, but his mind is either no longer there or walled off. What do you think is the properly ethical, appropriately non-selfish behavior for the other spouse?

Replies from: Desrtopa
comment by Desrtopa · 2013-11-19T18:59:33.217Z · LW(p) · GW(p)

Doesn't look nice, but then most things dialed to 11 don't look nice.

Let's look at analogous realistic examples.

The example I gave is not just a realistic but a real example, if, as posited upthread, people are resisting having their loved ones pursue cryonics because it denies them a sense of closure.

What does or does not qualify as an inappropriate level of self-privilege is of course subject to debate, but when framed in those terms I think such a position would be widely agreed to be beyond it.

Replies from: Lumifer
comment by Lumifer · 2013-11-19T19:16:44.760Z · LW(p) · GW(p)

a real example, if, as posited upthread, people are resisting having their loved ones pursue cryonics because it denies them a sense of closure.

Well, one person. And not "resist", but "highly uncomfortable with". And "may (tentatively) be part of the underlying objection". You are adding lots of certainty which is entirely absent from the OP.

I am still interested in your normative position, though. So let's get back to cryonics. Alice and Bob are a monogamous pair. Bob dies, is cryopreserved. Alice is monogamous by nature and young; she feels it's possible that Bob could be successfully thawed during her lifetime.

What, in your opinion, is the ethical thing for Alice to do? Is it OK for her to remarry?

Replies from: Viliam_Bur, Desrtopa
comment by Viliam_Bur · 2013-11-20T09:10:26.931Z · LW(p) · GW(p)

What, in your opinion, is the ethical thing for Alice to do?

Use some clever rationalization and remarry. More rationally, she should be aware that the probability of Bob being resurrected during her lifetime is pretty low.

comment by Desrtopa · 2013-11-19T19:33:02.013Z · LW(p) · GW(p)

I don't think that's enough information for me to return a single specific piece of advice. What does Alice think Bob would think of her getting married in his absence were he to be brought back at a later date? How likely does she think it is that he'd be brought back in her lifetime? Does she think that she'd still want to be in a relationship with him if she waited and he was brought back after, say, forty years? Etc.

There are certainly trends in relationship behavior which can constitute actionable information, but I think the solution to any specific relationship problem is likely to be idiosyncratic.

Replies from: Lumifer
comment by Lumifer · 2013-11-19T19:42:42.316Z · LW(p) · GW(p)

What does Alice think Bob would think of her getting married in his absence were he to be brought back at a later date?

Bob also was monogamous. Alice is pretty sure Bob wouldn't like it.

How likely does she think it is that he'd be brought back in her lifetime?

Alice is uncertain. She thinks it's possible, she is not sure how likely it is.

Does she think that she'd still want to be in a relationship with him if she waited and he was brought back after, say, forty years?

She has no idea what she'll want in 40 years.

the solution to any specific relationship problem is likely to be idiosyncratic.

So, are there any general guidelines, you think? Remember, the claim we are talking about is that the desire for closure is extremely selfish and "suboptimal".

Replies from: Desrtopa
comment by Desrtopa · 2013-11-19T19:52:54.764Z · LW(p) · GW(p)

So, are there any general guidelines, you think?

Well, I suspect that anyone preserved with current technology is probably not coming back, while this may not be the case for people preserved in the future given different technological resources, so I'd suggest treating death as probably permanent in cases of present cryonics. So as a guideline, I'd suggest that both those slated to be cryonically preserved and their survivors treat the procedure as offering a low probability that the subject is put into suspended animation rather than permanent death.

How to deal with that situation is up to individual values, but I think that for most people, refusing to seek another partner would result in an expected decrease in future happiness.

comment by hyporational · 2013-11-18T15:57:25.070Z · LW(p) · GW(p)

Not this again... The guy's definition of selfishness is probably narrower than yours, so start the spat from there :)

I'd search for the thread where I actually defended the "selfishness is wrong" position, but I can't find it, because the search function doesn't seem to work anymore.

Replies from: Lumifer
comment by Lumifer · 2013-11-18T16:24:40.448Z · LW(p) · GW(p)

Given "the need for "closure" is an extremely selfish desire" it doesn't look like his definition is improbably narrow :-/

Also note that we are talking about desires, not actions.

Replies from: hyporational
comment by hyporational · 2013-11-18T16:30:18.541Z · LW(p) · GW(p)

Fair enough. I suppose desires often lead to actions, so sometimes it's good to do some pre-emptive soul searching.

comment by hyporational · 2013-11-18T16:34:16.928Z · LW(p) · GW(p)

I suspect selfish desire here refers to the fact that if people just move on, they're less likely to care about longevity research, cryonics and A.I. Do you find that acceptable?

Replies from: Lumifer
comment by Lumifer · 2013-11-18T16:53:23.699Z · LW(p) · GW(p)

Do you find that acceptable?

More, I find it desirable. I suspect I would find it hard to be sympathetic to a morality which would consider some increased caring for longevity research and cryonics more valuable than not screwing up the rest of your life.

Replies from: hyporational
comment by hyporational · 2013-11-18T16:57:52.192Z · LW(p) · GW(p)

I agree. Besides, I can both move on, and care about those things. Added A.I. to the bunch.

comment by NancyLebovitz · 2013-11-20T11:48:19.854Z · LW(p) · GW(p)

Randomized tests of charity

Read the comments and weep-- it's almost as though there are a lot of people who resent specificity.

I wouldn't have been surprised if there were people who said "but randomizing means that half the people aren't getting the obviously valuable help!", but nobody said that.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-11-21T09:13:42.072Z · LW(p) · GW(p)

A person A uses "common sense" to suggest a solution X, which seems obvious to them.

Research shows that in reality, Y is much better than X, and sometimes X is actually not even helpful.

A person B says: See? This is why you should always collect and evaluate data!

A person C says: No, the real lesson is that we should use common sense which obviously recommends Y! Why didn't anybody use their common sense?

Replies from: DaFranker
comment by DaFranker · 2013-11-21T14:07:29.048Z · LW(p) · GW(p)

A, B and C must all come from LessWrong or some other rationality-promoting subset of the human population.

Meanwhile, in the real world we live in:

Persons α, β and γ respectively say, after C: "C, you're crazy! That's not common sense, that's madness! If we do that, then Obama gets even more control and will come in our houses and take our guns away from us!",

"No, what this study shows is that scientists cannot be trusted, one should turn their mind and spirit to the way of {faith}, open your mind and see {faith_symbol}, it is the only way to truly know the Truth.",

"According to the Dimensionless Liquid Cristallized Aether Wave Lorem Ipsum Something Theory, as it says here {link to blog of commenter, and sole source for the theory}, the resonance of {unknown term} with {misused esoteric term} causes an error in the measurement of the effect of Y, which means that as {math terms, probably misused} the real factor is actually Z, and this is why we should use this theory for everything from now on."

Replies from: Lumifer
comment by Lumifer · 2013-11-21T15:56:38.754Z · LW(p) · GW(p)

Meanwhile, in the real world we live in

The real world we live in is large and diverse. No one forces you to listen to alpha, beta, or gamma. Or even to delta who says after C: "Fuck, man, I need a beer. Dude, have you seen the game last night?"

Replies from: DaFranker
comment by DaFranker · 2013-11-21T18:21:58.486Z · LW(p) · GW(p)

My point is that alpha, beta, gamma and your very good delta are much more representative of the average-person cluster in personspace, or even of the educated, literate, internet-going population.

Replies from: Lumifer
comment by Lumifer · 2013-11-21T18:36:17.807Z · LW(p) · GW(p)

Delta, yes. Alpha, beta, and gamma -- not so sure. They look like stereotypes of the Enemy from the Other Tribe to me.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-11-22T09:54:27.646Z · LW(p) · GW(p)

Then, for the sake of balance, here is one from another tribe: "Both solutions X and Y are mere rationalizations to justify the rich white man's patriarchal rule. The real solution is to ignore both X and Y, and focus on destroying the system."

(Last seen: yesterday, in an article about why free online education is actually an evil tool of oppression. A strawmanish summary: It privileges rich white kids with computers and fast internet.)

Yet another tribe: "We don't need X or Y. Just remove the government and use Bitcoins. Then all the problems will fix themselves!"

(Last seen: two days ago, in a debate about education. I guess I really live in an insane world.)

comment by Pfft · 2013-11-20T15:48:41.930Z · LW(p) · GW(p)

Did they add auto-playing video ads to the LessWrong frontpage, or do I have malware?

Replies from: wmoore, jaime2000, Lumifer, Jayson_Virissimo
comment by wmoore · 2013-11-26T00:25:25.016Z · LW(p) · GW(p)

SiteMeter in the sidebar was the cause of this. It has been removed.

Replies from: fubarobfusco
comment by fubarobfusco · 2013-11-26T18:53:19.434Z · LW(p) · GW(p)

Thank you!

Replies from: satt
comment by satt · 2013-12-11T21:20:21.349Z · LW(p) · GW(p)

Who downvotes a thank you to one of the site maintainers? Sheesh.

Replies from: arundelo
comment by arundelo · 2013-12-12T01:39:37.031Z · LW(p) · GW(p)

A downvote stalker. Many comments in fubarobfusco's recent posting history have multiple upvotes, some have one or none, but almost all of them have at least one downvote, and usually no more than one. (Hover over the karma score; unfortunately it's a percentage rather than a raw vote count.)

Replies from: MugaSofer
comment by MugaSofer · 2014-01-02T20:52:07.090Z · LW(p) · GW(p)

Huh. I wonder how common that is?

comment by jaime2000 · 2013-11-20T15:52:44.362Z · LW(p) · GW(p)

Yeah, same here. A little video in the bottom-right corner starts playing randomly. It's not related to any LW topic I can think of.

Replies from: gattsuru, Pfft
comment by gattsuru · 2013-11-20T18:17:34.345Z · LW(p) · GW(p)

I'm not seeing anything like that. (Windows 7 running Chrome)

Replies from: jaime2000
comment by jaime2000 · 2013-11-20T22:18:29.315Z · LW(p) · GW(p)

Likewise. This is what I see.

Replies from: Pfft
comment by Pfft · 2013-11-21T01:15:19.709Z · LW(p) · GW(p)

I'm using Firefox on MacOS. I see the same thing, but not repeatably (it's appeared a couple of times so far).

comment by Pfft · 2013-11-20T20:06:41.287Z · LW(p) · GW(p)

Phew, I guess I'm safe then. Thanks.

Replies from: fezziwig
comment by fezziwig · 2013-11-22T15:56:41.526Z · LW(p) · GW(p)

Or you both have malware. LW has no other ads at all, so how likely is it that their first ad is an auto-playing video?

EDIT: And I personally see no video ads ever, FWIW.

Replies from: drethelin
comment by drethelin · 2013-11-22T16:54:29.826Z · LW(p) · GW(p)

I have this too, and it started recently. It doesn't happen anywhere apart from LW. If there's malware, it's something to do with LW.

comment by Lumifer · 2013-11-22T17:13:49.116Z · LW(p) · GW(p)

My usual browsers are pretty much locked down all the time, but I dug out IE and looked at some LW pages. Didn't see any ads, video or not.

I would suspect malware, but it's also possible that other browser tabs (e.g. Facebook) are playing games...

P.S. Oh, a plug-in I have says there are three "requests" on this page from SpecificMedia (http://specificmedia.com/) and it does seem to specialize in video advertising.

Replies from: Pfft, gwern
comment by Pfft · 2013-11-22T21:18:02.270Z · LW(p) · GW(p)

I'm leaning towards it being part of the website. The video doesn't seem to play very often, but every time the page loads, there is a script tag from "vindicosuite.com" at the bottom, which contains links to specificmedia.com. Both of these domains are related to video advertisement.

To see the tag, I load the page in Chrome, right-click -> Inspect Element, and scroll to the bottom of the HTML view.

I see this under both Firefox on my laptop and Chrome on my office computer; it seems unlikely that they would both have the same malware at the same time.

Replies from: Lumifer
comment by Lumifer · 2013-11-22T21:52:06.729Z · LW(p) · GW(p)

there is a script tag from "vindicosuite.com" at the bottom, which contains links to specificmedia.com.

Um. That doesn't look nice.

I am also suspicious because the vindicosuite code is very clearly just stuck onto the end of the page. It's not integrated with the page in any way.

YO, ADMINS!! Are you quite sure lesswrong.com hasn't been pwned?

comment by gwern · 2013-11-23T00:13:34.969Z · LW(p) · GW(p)

I just loaded http://lesswrong.com/ in both my Iceweasel and Chromium (Debian testing), and C-u, C-f 'suite', 'specific' - I see nothing? Nothing in curl either.

Replies from: Pfft, Lumifer
comment by Pfft · 2013-11-23T06:50:58.696Z · LW(p) · GW(p)

You will not see it with View Source (C-u). Try right-clicking on the page and selecting Inspect Element (in either Firefox or Chrome). (The tag is added by some Javascript, so it's only present in the DOM, not in the original source).
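For reference, a minimal console sketch (my own rough check, using the domain names already mentioned in this thread) that lists any such tags from the live DOM:

    // Sketch: list script tags currently in the live DOM whose src points at
    // the ad-related domains discussed above. Paste into the browser console
    // on a lesswrong.com page after it has finished loading.
    var scripts = document.getElementsByTagName('script');
    for (var i = 0; i < scripts.length; i++) {
      var src = scripts[i].src || '';
      if (/sitemeter|vindicosuite|specificmedia/i.test(src)) {
        console.log('suspicious script:', src);
      }
    }

If nothing is printed, the injected tag either wasn't added on that page load or is being blocked.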

Replies from: Pfft
comment by Pfft · 2013-11-23T07:07:33.279Z · LW(p) · GW(p)

Ok, it seems the culprit is this fragment:

 <!-- Site Meter -->
<script type="text/javascript" src="http://s18.sitemeter.com/js/counter.js?site=s18lesswrong"> 
</script>
<noscript>
<a href="http://s18.sitemeter.com/stats.asp?site=s18lesswrong" target="_top">
<img src="http://s18.sitemeter.com/meter.asp?site=s18lesswrong" alt="Site Meter" border="0"/></a>
</noscript>
<!-- Copyright (c)2009 Site Meter -->

Commenting it out will get rid of the vindicosuite stuff. So I guess this is sitemeter.com gone rogue.

Some Googling found at least one other person who noticed the same problem.

Replies from: fubarobfusco
comment by fubarobfusco · 2013-11-23T07:44:02.564Z · LW(p) · GW(p)

There are a few other blogs complaining about this.

AdBlock can handle it with this pattern:

||s18.sitemeter.com$domain=lesswrong.com

comment by Lumifer · 2013-11-23T00:43:27.597Z · LW(p) · GW(p)

Interesting. I just looked on an entirely different computer and there's nothing.

Since the connection is http (and not https) I wonder whether there's some code injection going on...

But on the computer I used during the day there was a call to x.vindicosuite.com with a bunch of parameters. It was at the very end of the page, right before the closing tag.

comment by Jayson_Virissimo · 2013-11-21T01:59:48.097Z · LW(p) · GW(p)

I'm not seeing anything like that on Windows 8/Firefox, Ubuntu/Firefox, or iOS/Safari. What OS and browser are you using?

Replies from: Pfft
comment by Pfft · 2013-11-21T03:26:45.337Z · LW(p) · GW(p)

MacOS/Firefox. I have only seen it a few times, not every time I load the page. But I never saw this style of ad on any other website.

comment by aubrey · 2013-11-19T10:55:40.111Z · LW(p) · GW(p)

I've been worried that I'm not very good at negotiating about money. Recently, I had evidence to update in that direction. As part of a course, we paired up and did a negotiation roleplay exercise. I was one of two massive outliers out of thirty who agreed an outcome much, much worse than the rest of the group.

The exercise was structured so that there was quite a lot of space between the two negotiators' bottom lines. I was clear of my bottom line. I got everything I had to get. But almost all of the money that was available for negotiation went to the other person. This seems very familiar from other, real-life contexts I've been in.

I don't like the idea that I'm losing money that I could have just by negotiating better. What can I do to get better?

I've read lots and lots of books and articles, and been on lots of courses. I could write a very convincing guide to negotiation skills. I think that I would find doing more of this very interesting, but it wouldn't make me any better at it in practice.

I've explored more training courses, but haven't found any that offer more than a handful of role plays, and none that promise the sort of feedback that would make deliberate practice work. Do such courses exist?

My other idea was to practice with friends using games that involve a lot of negotiation. But I'm good at those games already. The skill doesn't seem to transfer over to real-world contexts. Are there games that are very close to real world negotiations?

What have I not thought of?

Replies from: Desrtopa, ChristianKl, Viliam_Bur
comment by Desrtopa · 2013-11-19T15:57:46.544Z · LW(p) · GW(p)

At NYU a month or so back, I saw a flier for paid participation in an experiment which assesses your negotiation skills and, vitally, offers feedback. I don't know if you're in the correct area, but if you are, it might be worth looking into whether it's still running (I don't know for a fact that it is, but I do know that other advertised studies have run for significantly longer periods than that.)

Replies from: aubrey
comment by aubrey · 2013-11-19T21:57:41.826Z · LW(p) · GW(p)

Thank you, that's very useful to know.

As it happens I'm in Europe right now, but there are plenty of universities with psych departments nearby so I will check out their calls for participants.

comment by ChristianKl · 2013-11-21T00:14:23.202Z · LW(p) · GW(p)

There are plenty of real world situations where you can negotiate prices. If you buy groceries at a chain such as Walmart you can't negotiate prices.

If, however, you buy them at a farmers' market, you can.

Even if you buy a hot dog from a street vendor, you can negotiate the price.

Are there games that are very close to real world negotiations?

A friend of mine learned negotiating by trading World of Warcraft items. I think he bought WoW gold for 50€ and used that as the capital basis for trading the items.

When you want some form of interaction with your friends, you could bet with them frequently. Betting means that you can negotiate the odds.

Replies from: aubrey
comment by aubrey · 2013-11-22T06:19:28.058Z · LW(p) · GW(p)

There are plenty of real world situations where you can negotiate prices.

Thank you, I hadn't thought properly about that. I was avoiding all those situations. Doing actual negotiations for small stakes is an excellent plan. It's not quite perfect. It doesn't give very good feedback, since I don't get any information about what they would have been willing to settle for if I were a better negotiator.

When you want some form of interaction with your friends, you could bet with them frequently. Betting means that you can negotiate the odds.

I'm interested in this, but my friends really don't like to do it. They also don't want to play games with me for money.

I think part of the reason they don't want to is that they think I will win. But I think I would win less if we played for money!

comment by Viliam_Bur · 2013-11-19T15:52:31.578Z · LW(p) · GW(p)

In real life, do you feel high-status or low-status? High-status people feel like they deserve more, so it would probably be natural for them to extract as much value as possible, while "the bare minimum is good enough for me" would be a natural attitude of a low-status person.

Replies from: Moss_Piglet, Lumifer, aubrey
comment by Moss_Piglet · 2013-11-19T22:46:38.900Z · LW(p) · GW(p)

High-status people feel like they deserve more, so it would probably be natural for them to extract as much value as possible, while "the bare minimum is good enough for me" would be a natural attitude of a low-status person.

I question that analysis.

High status can certainly create a sense of entitlement in some people, like how rich people generally leave awful tips, but at the same time you can see a sort of noblesse oblige in others which leads to huge charitable donations and voluntarily forgoing chances to improve their position (like sending their kids to public schools). Low status people generally have a lot less to lose and thus tend to be a lot more pragmatic in my experience.

Replies from: aubrey
comment by aubrey · 2013-11-22T06:44:44.766Z · LW(p) · GW(p)

Thank you, this has prompted some very useful thoughts.

you can see a sort of noblesse oblige in others which leads to huge charitable donations and voluntarily forgoing chances to improve their position

This feels close to my situation. I come from a high-privilege family with a long tradition of self-sacrifice and doing good works.

I thought of a theory to explain my situation. Part of my problem is not really wanting to win in negotiations. I do want to win, just not as much as I want to avoid feeling socially awkward and grasping. When the negotiation stakes are small for me, I would rather spend more money and feel better. When the stakes are negligible for all, like in games, I play to win. When the stakes are very high for me, it would be rational to negotiate hard, but I don't do that as much as I want to, out of habit learned on smaller stakes and because I do hyperbolic discounting and weigh feeling awkward now higher than losing lots of money long term.

The stakes in the role play negotiation were negligible, but I still did badly. However, it felt real enough while I was doing it. I was feeling very awkward.

Exposure and deliberate practice seem like good ways of reducing the awkwardness. Another line is to work on the hyperbolic discounting. I could think of the decision in Timeless terms: "I'm making the awkwardness vs profit decision for all similar negotiations." Or I could think what I would advise a needy friend to do.

comment by Lumifer · 2013-11-22T15:55:45.035Z · LW(p) · GW(p)

High-status people feel like they deserve more, so it would probably be natural for them to extract as much value as possible, while "the bare minimum is good enough for me" would be a natural attitude of a low-status person.

That may be a factor, but that's one factor out of many.

Let me offer a similar argument but one that points in a different direction:

High-status people feel secure in their position so there's less pressure on them to "win" and they would be more willing to give up some stuff for warm fuzzies.

Or another one:

Rich people don't care about monetary rewards as much so in negotiations they would be willing to exchange financial benefits (the common subject of negotiations) for other things not usually measured as outcomes (time, status or, again, warm fuzzies).

I am not particularly attached to these two assertions; the point is the ease of making them and others like them.

comment by aubrey · 2013-11-19T22:07:23.999Z · LW(p) · GW(p)

Interesting question.

I definitely feel high-status. I'm in a high income percentile, and have a substantially higher net worth than most people with my income history, because I've made good decisions about how to use my income. Obviously, I know people who are higher in status and wealth than me, but I mostly compare myself to people who are not. (Partly a deliberate choice to try to boost happiness and life satisfaction.)

There might be a status connection. To exaggerate, it feels more like "someone of my status shouldn't stoop to grubbing around with low-status arguments about money" rather than "someone of my status doesn't deserve money". That might be a contributory factor, but it feels like the effect size is small.

In the negotiations I feel regret about, I definitely felt like the higher-status participant, but the less experienced at negotiation. Or rather, the less effective one.

Replies from: Viliam_Bur, Lumifer
comment by Viliam_Bur · 2013-11-20T09:15:27.959Z · LW(p) · GW(p)

An idea: Next time, before you start playing, precommit to converting the game money to real money by some nice coefficient, sending it to an effective charity, and writing about it (including the specific sum) on LessWrong. Then imagine that you are the only person who will ever send money to that charity, and that this is your only opportunity for sending.

Then start playing...

Replies from: aubrey
comment by aubrey · 2013-11-20T10:14:19.434Z · LW(p) · GW(p)

That sounds worth trying. Thanks. Intuitively it feels as though it would make me care about the game outcome like I care about real-money negotiations. It also feels as though it would make me suck at the game. But that's progress, because I have a known route to practicing at games.

At first I wanted to say "That's going to cost me money!". But I was fine with the idea of paying for training courses to get better. Financially it comes to the same thing, but loss aversion is at work here.

Thinking that through made me realise I'm muddled about my utility function here. If I play the game to win, I end up spending more money. I already give as much to charity as I feel comfortable with, so it would mean some discomfort.

comment by Lumifer · 2013-11-22T15:57:58.635Z · LW(p) · GW(p)

it feels more like "someone of my status shouldn't stoop to grubbing around with low-status arguments about money"

Yep. I might want to add that such an attitude is much more common in Europe than in the US.

comment by blacktrance · 2013-11-18T01:44:33.015Z · LW(p) · GW(p)

I've been thinking about scope insensitivity, and wonder whether it can be mistaken for decreasing marginal value. Suppose a slice of pizza costs $1, and you're willing to buy it for that much. That doesn't mean that you'd be willing to buy a million slices of pizza for a million dollars - in fact, if you're only hungry enough for one slice and don't have a refrigerator handy, you may not want to pay more than $1 for any amount of pizza greater than or equal to one slice. The same can apply to donating to charity to save lives. You may value saving one life for $200, but maybe you're not willing to pay $400 to save two lives.

Replies from: AlexSchell, army1987, hyporational, DanielLC, hyporational
comment by AlexSchell · 2013-11-18T13:25:53.938Z · LW(p) · GW(p)

You may value saving one life for $200, but maybe you're not willing to pay $400 to save two lives

This is what scope insensitivity is. The original paper calls it "purchase of moral satisfaction" -- the revealed preference in these experiments is for an internal state of moral satisfaction as opposed to the actual lives you're saving. Like hunger, the internal state is quickly satiated and so exhibits diminishing returns, but actual lives do not exhibit diminishing returns (in the relevant range, for humans, on reflection).

Replies from: Desrtopa
comment by Desrtopa · 2013-11-19T16:16:49.682Z · LW(p) · GW(p)

That would be an effective demonstration of scope insensitivity in an ideal scenario where money has a flat conversion to utility in that range for the individual in question. If $200 is in the subject's budget, but $400 is not, this may be entirely rational behavior. A donation which puts you into debt will have a much more dramatic effect on your own utility than one which leaves you solvent.

Replies from: AlexSchell, lmm
comment by AlexSchell · 2013-11-21T05:02:28.436Z · LW(p) · GW(p)

You're right, the $200 vs $400 example isn't ironclad -- even a rational altruist will still have a limited budget. The reason I ascribed scope insensitivity to the example was that it's worded in terms of valuing 'saving lives' as opposed to valuing lives, which, as I explain, is a hallmark of scope insensitivity.

comment by lmm · 2013-11-20T10:39:18.187Z · LW(p) · GW(p)

Still doesn't explain people paying less to save 20000 birds than to save 2000.

Replies from: Viliam_Bur, Desrtopa
comment by Viliam_Bur · 2013-11-21T09:01:35.443Z · LW(p) · GW(p)

Saving 10 times more birds would make me 10 times more happy. Therefore, I would make a bigger celebration afterwards. But the celebration also comes from my limited budget, so there would be less money left to contribute to saving the birds.

(End of rationalization exercise.)

comment by Desrtopa · 2013-11-20T16:41:01.359Z · LW(p) · GW(p)

I never claimed that no effective demonstrations of scope insensitivity exist.

comment by A1987dM (army1987) · 2013-11-19T00:29:05.317Z · LW(p) · GW(p)

Eliezer's answer (I can't find it at the moment) is that there are many more lives around, and it seems unlikely that U(875,431 people die) - U(875,430 people die) is that different from U(875,430 people die) - U(875,429 people die).

(OTOH, once you've spent $200 to save a life, you have $200 less, and so the marginal utility of another $200 is larger.)

comment by hyporational · 2013-11-18T08:34:16.138Z · LW(p) · GW(p)

What would the decreasing marginal value be in the charity example? If we're just talking fuzzies, isn't that scope insensitivity?

Replies from: ThrustVectoring
comment by ThrustVectoring · 2013-11-20T19:46:12.718Z · LW(p) · GW(p)

The decreasing marginal value is the amount of fuzzies your donation lets you feel.

If you get 10 utilons from doing something that saves one person's life, perhaps you get only 12 from doing something that saves two people's lives.

Decreasing marginal satisfaction from doing more to help a problem is the cause of scope insensitivity - people feel bad knowing that they aren't helping, and feel good knowing that they are.
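
As a toy numeric sketch of that distinction (the log-shaped "fuzzies" curve and the reuse of the thread's $200-per-life figure are illustrative assumptions, nothing more): warm glow saturates quickly, while the value of the lives themselves keeps scaling linearly.

```python
import math

def fuzzies(lives_saved):
    # Toy assumption: the warm-glow feeling saturates roughly logarithmically.
    return 10 * math.log(1 + lives_saved)

def altruistic_value(lives_saved):
    # Lives themselves stay linearly valuable in the relevant range.
    return 200 * lives_saved  # $200/life from the thread, used as a stand-in unit

for n in (1, 2, 10, 100):
    print(n, round(fuzzies(n), 1), altruistic_value(n))
# fuzzies: 6.9, 11.0, 24.0, 46.2 -- value: 200, 400, 2000, 20000.
# Donating "for the fuzzies" is therefore nearly insensitive to scope.
```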

comment by DanielLC · 2013-11-23T19:36:42.887Z · LW(p) · GW(p)

That does raise the question of why you only value lives you saved. If all lives are important, marginal value won't change much over one life saved.

Replies from: blacktrance
comment by blacktrance · 2013-11-23T20:03:25.463Z · LW(p) · GW(p)

For the same reason you only value the pizza you eat, not just pizza eaten in general - because it's utility derived from consumption of a good/service.

comment by hyporational · 2013-11-18T06:41:43.048Z · LW(p) · GW(p)

It seems you're confused about what scope insensitivity means.

comment by gwern · 2013-11-18T00:04:33.897Z · LW(p) · GW(p)

Someone on Reddit attempted to apply cognitive biases, based on Eliezer Yudkowsky's "Cognitive Biases Potentially Affecting Judgment of Global Risks", to Bitcoin: http://www.reddit.com/r/Bitcoin/comments/1qua30/cognitive_biases_that_make_people_skeptical/

I don't think I agree with all of it (is 'black swan' really a cognitive bias? and I don't think bystander apathy applies to investments), but still interesting.

comment by Anatoly_Vorobey · 2013-11-16T22:53:38.170Z · LW(p) · GW(p)

A long while ago there was a discussion about helping people write posts, and I seem to remember it ended with a mailing list to which anyone who wanted feedback or help with polishing their draft could send it. At least that's how I remember it, but I can't find that discussion now. Whatever happened to that idea, is it still current?

Replies from: Manfred
comment by Manfred · 2013-11-16T23:38:41.300Z · LW(p) · GW(p)

If you also want someone to review your post, and the content is at least mildly interesting, I would be happy to do a thorough job.

comment by A1987dM (army1987) · 2013-11-21T13:47:42.897Z · LW(p) · GW(p)

Maybe there should be a Latest Welcome Thread sidebar in Main, like there already is Latest Rationality Quote in Main and Latest Open Thread and Latest Rationality Diary in Discussion.

Replies from: Douglas_Knight, DaFranker
comment by Douglas_Knight · 2013-11-21T19:24:29.611Z · LW(p) · GW(p)

The Welcome Thread is for newcomers, so it would make more sense on the front page, not the sidebar.

Replies from: army1987
comment by A1987dM (army1987) · 2013-12-15T09:33:06.399Z · LW(p) · GW(p)

We could have them both. My idea is for it to be easy for regulars to find the latest newcomer's self-introduction.

comment by DaFranker · 2013-11-21T13:52:54.180Z · LW(p) · GW(p)

I also personally wish there were a "Latest Media Thread" link in Discussion, but I understand others might consider it visual pollution or clutter (even though I don't agree that it should be an issue).

comment by jetm · 2013-11-21T04:41:22.280Z · LW(p) · GW(p)

So I've been reading Worm ( parahumans.wordpress.com ), and there's this tiny thing that's been growing ever-more annoying, and I can't hold off asking about it any longer.

I keep seeing passages like this: "Realizing the position he had me in, feeling the pressure of his thighs against my hips, his weight resting partially on my lower body, I must’ve blown a synapse. My thought process ground to a halt. It didn’t help that the first place my mind went was interpreting his ‘start’ as being this position leading to something else."

Do people actually think like this? Seems like it would be really inconvenient.

Replies from: arundelo, shminux, drethelin, CAE_Jones
comment by arundelo · 2013-11-21T04:56:05.849Z · LW(p) · GW(p)

I pretty rarely wrestle with people (or otherwise have such close physical contact), so if someone I have a crush on is literally sitting on top of me pinning me on my back, I have trouble staying calm. (I'm thinking of a particular time this happened to me. I felt about like Taylor does in your Worm quote.)

Replies from: Nornagest, jetm
comment by Nornagest · 2013-11-22T20:51:39.768Z · LW(p) · GW(p)

I practice jujitsu, an art which involves a lot of close physical contact. At times I've worked with people I find attractive, but the mindset's different enough that I don't have those sorts of intrusive thoughts during practice. It takes time to develop that mindset, though -- beginning students are often uncomfortable.

Haven't read Worm, though, so I'm not sure how applicable that would be.

Replies from: CAE_Jones
comment by CAE_Jones · 2013-11-22T21:31:28.063Z · LW(p) · GW(p)

Ure pehfu qbrfa'g ernyyl erpvcebpngr ng guvf cbvag, naq unf orra qbvat yvtug znegvny negf sbe zbfg bs uvf yvsr. Ur qbrfa'g tvir nal vaqvpngvba gung ur'f srryvat gur njxjneqarff gur jnl fur vf, ohg jr pna'g xabj sbe fher, fvapr jr arire npprff uvf gubhtugf ba gur fhowrpg

(Taylor is new to this sort of training during the scene in question.)

comment by jetm · 2013-11-21T05:11:34.873Z · LW(p) · GW(p)

Interesting. Does this.... urgency ever turn out to be useful? I'm assuming that at the worst it's not distracting enough to justify taking the time to prevent it.

(In case I was not clear, I was talking about a more general thingy than being sat upon. Pretty much all of 6.3 for example.)

Replies from: arundelo
comment by arundelo · 2013-11-22T05:58:07.000Z · LW(p) · GW(p)

I'd say it ranges from enjoyable through distracting-but-pleasant to just distracting, but it's never bad enough to noticeably affect my ability to get stuff done. Maybe it sometimes was in my teens and twenties, but I haven't been able to remember a clear example. Arguably one example would be my wrestling anecdote. That feeling was sudden enough (my friend "attacked" me without warning) and intense enough to make me not able to fight back very well. Long story, but this meant that the playfight didn't last as long as I would have liked it to.

comment by Shmi (shminux) · 2013-11-21T05:19:08.393Z · LW(p) · GW(p)

I've just finished Worm! At the pace of 10^5 words/week it left me little time for other activities. I hope someone makes a movie out of this web novel, it's so visual.

Anyway, the passage you quoted seems like a perfectly good description of one's physical contact with her crush. Especially if it's the first crush of a not-quite 16 year-old girl.

Replies from: Alicorn
comment by Alicorn · 2013-11-21T05:47:05.619Z · LW(p) · GW(p)

I hope someone makes a movie out of this web novel

God, I hope not, can you imagine trying to cram that thing into a movie? It needs the Game of Thrones treatment.

Replies from: shminux
comment by Shmi (shminux) · 2013-11-21T07:41:03.330Z · LW(p) · GW(p)

It needs the Game of Thrones treatment.

It does, but not even HBO has the budget for the necessary special effects in every episode. Maybe a LoTR-size trilogy would do. The first installment could end with the hospital scene.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-11-21T14:52:25.988Z · LW(p) · GW(p)

You could just give Moore's Law some more time, and special effects will get cheaper.

Replies from: Lumifer, shminux
comment by Lumifer · 2013-11-21T15:34:50.201Z · LW(p) · GW(p)

The cost of special effects isn't in the rendering hardware, it is in the expensive human labor to create them.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-11-21T15:52:40.613Z · LW(p) · GW(p)

Better rendering hardware means more efficient tools for the human.

Replies from: Lumifer
comment by Lumifer · 2013-11-21T16:03:41.145Z · LW(p) · GW(p)

No, it does not. The way it works is that a variety of humans painstakingly create the complete digital description of the special effect you need (which might involve 3D modeling, motion capture, creating textures, etc.) and once they're done this digital description is handed off to the rendering farm which spends some time -- from minutes to months -- rendering high-resolution movie frames from the digital description.

Better rendering hardware might reduce the time from months to days, but that's not where the costs are -- the costs are in creating the world, not in rendering it to frames.

Sure, better hardware in general will also give more efficient tools to modelers and such, but I don't think the productivity gains here are going to be large.

comment by Shmi (shminux) · 2013-11-21T15:32:06.191Z · LW(p) · GW(p)

Hmm, I wonder if any studio bothered to calculate, plot and extrapolate the cost of visual effects over time and plan/budget accordingly.

comment by drethelin · 2013-11-21T05:20:14.702Z · LW(p) · GW(p)

I've had moments like that. It's much more of a thing I think when you're extremely lonely and inexperienced. But I still get emotional thrills when sitting in very close contact to someone I find attractive regardless of the fact that I have plenty of physical fun these days.

comment by CAE_Jones · 2013-11-22T21:28:18.472Z · LW(p) · GW(p)

Yes. Even without being attracted to anyone, physical contact (especially the close kind in grappling) tends to have me consciously trying not to panic over those sorts of implications. It might be a cultural thing? Possibly mixed with personality type/experience?

(Yes, it is annoying.)

comment by tgb · 2013-11-17T13:57:42.042Z · LW(p) · GW(p)

What Earth-like world would maximize the amount of good a single person could achieve?

This is a question I am struggling to come up with moderately plausible solutions to for a piece of fiction I am slowly poking away at. I'd love some suggestions. Here's a more complete description:

Suppose all the worlds in a huge space of possibilities exist and most of those that could support human life do. You have the ability to pick out one of these worlds by describing it and then to visit it. You happen to be a utilitarian and so with this great power you want to select the world in which you can do the most good. So what does a world which has been going through a huge amount of suffering, but could be corrected by one person, look like? You can describe the geological, environmental, and, to some extent, physical constraints of the system but not the social ones except in so far as they are determined by the above. (So you can't just say "Here's an Earth where a Hitler rules forever over a population of 100 billion in constant fear and torture." You have to come up with a world that would cause Hitlers to exist.) The world does not have to resemble Earth except so as to be habitable. Flora and fauna can be specified to an extent but should be evolutionarily plausible given such an environment, but humans are guaranteed existence on this planet.

It might be easier to start with the question: what does the world that maximizes human suffering look like? Make it too inhospitable and everyone dies or the population falls to a point at which it becomes sustainable. How do we overcome the inherent general tendency for populations to not totally over-burden their environment? Remember that we're picking a single world out of the space of essentially all possible worlds so we ought to be able to find a really genuinely awful one. And it only has to be reasonably plausible from the point of view of the reader; this isn't even hard science fiction.

I'm refraining from posting my one plausible idea for the moment since I don't want to taint the discussion but will post it for feedback if I'm not getting too much in the way of discussion. And of course, if you're posting something here then I assume you don't mind me using it (with credit) if this work ever gets finished.

Replies from: lmm, Izeinwinter, Richard_Kennaway
comment by lmm · 2013-11-17T23:01:51.446Z · LW(p) · GW(p)

My best guess is to delay the discovery/acceptance of science. Imagine a world where Descartes never made his clever arguments (the hand etc.) that made it possible to pursue natural philosophy in a way that was compatible with Christianity. Or one in which western Christianity took up a ban on idol-like representations (as Islam has - a friend tells me it very nearly got there, and changing one influential essay would be enough), so art was not pursued the same way, projective geometry didn't arise, and the axiomatic revolution never happened, or happened much later.

Or, being more subtle, what about a world with no islands - that is, no places where heretics could go into exile but continue working on their heretical things. The Netherlands was able to be such a place because of the defensibility of swamplands and its position as a pawn in wider political machinations, right? I'm trying to think of the kind of world that would give rise to a strong, unified government - which then wouldn't even need to be particularly evil, just populist and follow the typical medieval outlook. What about a world where weapons of mass destruction were easily available, where any reasonably competent person could create a city-destroying weapon from naturally available materials?

People learn best through pain. What about a world that has embraced this - where torture is an everyday fact of life, what you do to kids as part of bringing them up, what you do to subordinates who need to acquire a new skill?

Lead poisoning makes people stupid and aggressive - not enough to make society collapse, but enough to make everyone's lives a little bit worse. What about a world where, rather than gold and copper and the like, the aesthetically attractive metals are the kind that cause heavy metal poisoning?

Replies from: tgb
comment by tgb · 2013-11-18T13:05:49.533Z · LW(p) · GW(p)

This is a good try, but I am looking for something much, much worse. Like 30 billion people living in complete destitution with massive famines sweeping the world every seven years, killing off 40% of the population each time and leaving the rest in pain and agony. Or only one in twenty children survives past the age of seven, the rest dying terribly. A world so bad it makes you uncomfortable to even think about it. I'm also thinking about this more on geological time scales - we can probably come up with a world where such suffering has been going on at least long enough for evolutionary pressures to be significantly compensating for some of the dreadful environmental effects.

I do like the idea of lead poisoning. The only difficulty with that from the story perspective would be rectifying the problem, but that's probably fixable with some creativity. Easily accessible WMDs weren't the direction I was thinking of, but they might work, and they're definitely your most horrifying suggestion from my point of view.

comment by Izeinwinter · 2013-11-18T19:26:32.804Z · LW(p) · GW(p)

If you can pick the physical, but not the social, makeup, then you don't have sufficient precision to reliably pick out a world that is actually significantly worse off than your home timeline - if it supports humans, either it has social and technological structures conducive to a high-utility equilibrium social order, or it can be dystopian. Note that baseline Earth is largely dystopian in the sense I mean here! Most of the population is living lives much poorer than our understanding would permit, and you will not be able to reliably predict the social structures at all from the information you have. Which also means picking a world certain to have problems you can solve is impossible, and you will in fact be certain to land in a world with social and political structures you don't understand at all. Which will probably cause you difficulties.

Best bet for landing somewhere where you can do some actual good, with a skillset that will allow you to do so? Study the engineering principles of heavy electrical engineering, hydro-power, nuclear engineering, metallurgy, and the techniques of the modern agricultural revolution. Pick a world with no easily available fossil fuels. With sufficient knowledge it should be possible to map out a path to an industrial economy that goes "Muscles and animals -> Hydro -> Nukes", and you are likely to have some info which is useful to the locals regardless of what level they start at.

Replies from: tgb
comment by tgb · 2013-11-19T13:38:53.991Z · LW(p) · GW(p)

Well, this is fiction so there's a fair handful of deus ex machina here; if it needs to happen for the story, a particularly bad world needs only to be plausible to come about. Moreover, there's a poorly-defined mechanism which makes the chosen world be more like what they expect it to be than would be really justified by its description. Sorry I'm doing a poor job of describing the exact setup, since it's a little convoluted. It's also not a 100% known mechanism for the character involved either.

comment by Richard_Kennaway · 2013-11-18T13:35:49.078Z · LW(p) · GW(p)

What Earth-like world would maximize the amount of good a single person could achieve?

I don't actually believe the following answer, but as we're talking about fiction, and some people might actually think it would work, why not.

A world in which a catastrophe has reduced the population to a group of a few breeding pairs, or just one -- plus yourself, who have managed to survive the catastrophe and preserve your access to the technology of the vanquished civilisation. You use your overwhelming power to take charge of the remaining tribe and direct their development, confident that you can do better than the blind chances of history. You crack some sort of good-enough immortality for yourself, so that you can perpetuate your absolute rule indefinitely, and make humans live the way they clearly ought to. Since you are, for practical purposes, omnipotent, omniscient, and omnibenevolent, failure would be a contradiction in terms.

Replies from: tgb
comment by tgb · 2013-11-18T15:21:38.619Z · LW(p) · GW(p)

This one sounds like a lot of work to do - I don't think I mentioned that you have the power to do this as often as you like (and have time for!) and this would tie you to just a single world forever instead of doing as much good as possible across many worlds. I'd also like to not assume any access to omnipotence, immortality, or other such things: you can specify a world to travel to by describing it and then travelling to it. But once you're there you're just like anyone else, save for the knowledge that this world matched your description and any knowledge you brought from Earth. So you might know the location of a particularly rich vein of minerals, for example, or an underground reservoir of water that, when released in some manner, could alter global climate significantly by disturbing an El Nino-like process. Or releasing bacteria into the ocean which spread and alter atmospheric composition over the next ten millennia (you're a long-term thinker and don't mind it if the benefits you bestow don't take effect until long after you're dead). Or altering the flow of a river to bring previously-unknown fertilizers to the largest pockets of human civilization.

I've been considering mostly geo-engineering-style solutions, since that seems to be your main comparative advantage given your knowledge of the world, and one of the few big things that could really alter the long-term course of the world. And it fits with the idea of picking an implausible world out of the vast space of possibilities - you get to, in effect, choose that the planet was 'designed' to have an easily triggerable chain reaction. Other options probably exist, like introducing the right technology in the right place, but they haven't been as obvious to me.

comment by ChrisHallquist · 2013-11-16T01:42:16.632Z · LW(p) · GW(p)

Is there any way (possibly by asking a mod?) to revert a post to a previous version, after it's been saved?

With my post summarizing Feldman's Epistemology textbook, I had a window open with an incomplete draft for some reason and clicked "save" before I realized what I was doing. Now the finished version of the post is gone. It wasn't a post I was terribly satisfied with, so if it's gone it's probably not a big loss, but if there's an easy way to fix it, it would be nice to know.

Replies from: Vladimir_Nesov, 9eB1
comment by Vladimir_Nesov · 2013-11-16T11:45:40.975Z · LW(p) · GW(p)

Here it is as raw HTML, with markup (copied from feedly), you can paste it using the 'HTML' button in the article editor: http://pastebin.com/HuQiP3AV

(Also, use the original URL, don't create a new article: http://lesswrong.com/lw/j20 )

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-11-16T16:19:10.079Z · LW(p) · GW(p)

Thanks. Even better. Reposted now!

comment by 9eB1 · 2013-11-16T07:01:44.629Z · LW(p) · GW(p)

All the posts appear in the RSS feed immediately when they are posted, so I have a copy of it as it was posted. I can't copy the formatting, but the text is here: http://pastebin.com/eHpFVeZY

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-11-16T07:12:21.077Z · LW(p) · GW(p)

Phew. Thanks. Will repost tomorrow.

comment by Viliam_Bur · 2013-11-22T10:16:03.220Z · LW(p) · GW(p)

I created a subreddit "Rozum" for rationalists who would like to have LW-style rationality discussions in Slovak (or Czech) language. Anyone interested, just send me a private message with your username.

Note: I am unfamiliar with the Reddit system; I just assume there are enough similarities with the LW system, especially the upvotes and downvotes. And I am completely unfamiliar with the moderator's options, so there is a chance I set something wrong, in which case I will try to fix it later.

At this moment, the subreddit settings are "anyone can read, only approved members can contribute". (I hope it also means that only the approved members can vote.) It is supposed to be a temporary solution, and the reason is to protect the fresh new community from the possibility of being overrun and overvoted by a small group of trolls or insane people.

In the evening I will write some introduction there, I just wanted to announce it now here, to see how many people are interested.

Replies from: Adele_L
comment by Adele_L · 2013-11-22T21:02:26.002Z · LW(p) · GW(p)

At this moment, the subreddit settings are "anyone can read, only approved members can contribute". (I hope it also means that only the approved members can vote.)

I'm pretty sure everyone can vote, and that there's not really a way to stop this. But usually for new subreddits, there aren't too many trolls, and the few that are there can be easily handled with moderation.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-11-23T12:18:35.180Z · LW(p) · GW(p)

Thanks for the advice! I have now changed the settings to "public" (but I am prepared to change it back anytime if there is too much abuse). I guess ease of joining is more important, especially during the first weeks, until good debate gets going.

comment by Brillyant · 2013-11-19T15:06:54.675Z · LW(p) · GW(p)

Anki help needed...

My girlfriend is in nursing school. She had been doing well on her tests until recently, when she marginally failed a rather important exam. Anki came to mind as a tool that may help her through the upcoming tests this semester, as it seems many people here at LW speak very highly of it. I'm looking for some general 101 help and suggestions in regard to Anki...

Why does it work? Best practices for optimizing test scores with Anki? Drawbacks or things to avoid? Success stories? Are there people with learning-styles where Anki would not be effective?

As I try to come up with questions, I realize I'm pretty incompetent at even knowing what to ask... Any knowledge you think would be useful would be appreciated. Thanks!

Replies from: ChristianKl, Viliam_Bur, Adele_L, Emile, jkaufman
comment by ChristianKl · 2013-11-21T04:03:28.206Z · LW(p) · GW(p)

Why does it work? Best practices for optimizing test scores with Anki? Drawbacks or things to avoid? Success stories? Are there people with learning-styles where Anki would not be effective?

People don't have something like inherent learning styles. They have strategies for learning. Using Anki is a learning style.

One frequent error when making Anki cards is to think that the card should contain the solution to an exam question. That leads to cards that are too complicated.

http://www.supermemo.com/articles/20rules.htm is a good introduction to how SRS works.

Replies from: arundelo
comment by arundelo · 2013-11-21T04:35:29.308Z · LW(p) · GW(p)

People don't have something like inherent learning styles. They have strategies for learning.

Could you expand on this? (Or point me to something already written.)

Replies from: Viliam_Bur, ChristianKl
comment by Viliam_Bur · 2013-11-21T09:26:19.739Z · LW(p) · GW(p)

Perhaps wikipedia would be a good starting point.

Some psychologists and neuroscientists have questioned the scientific basis for and the theories on which they are based. According to Susan Greenfield the practice is "nonsense" from a neuroscientific point of view: "Humans have evolved to build a picture of the world through our senses working in unison, exploiting the immense interconnectivity that exists in the brain." Many educational psychologists believe that there is little evidence for the efficacy of most learning style models, and furthermore, that the models often rest on dubious theoretical grounds. According to Stahl, there has been an "utter failure to find that assessing children's learning styles and matching to instructional methods has any effect on their learning."

A non-peer-reviewed literature review by authors from the University of Newcastle upon Tyne identified 71 different theories of learning style. (...) Coffield's team found that none of the most popular learning style theories had been adequately validated through independent research, leading to the conclusion that the idea of a learning cycle, the consistency of visual, auditory and kinesthetic preferences and the value of matching teaching and learning styles were all "highly questionable."

an adequate evaluation of the learning styles hypothesis—the idea that optimal learning demands that students receive instruction tailored to their learning styles—requires a particular kind of study. Specifically, students should be grouped into the learning style categories that are being evaluated (e.g., visual learners vs. verbal learners), and then students in each group must be randomly assigned to one of the learning methods (e.g., visual learning or verbal learning), so that some students will be "matched" and others will be "mismatched". At the end of the experiment, all students must sit for the same test. If the learning style hypothesis is correct, then, for example, visual learners should learn better with the visual method, whereas auditory learners should learn better with auditory method. (...) studies utilizing this essential research design were virtually absent from the learning styles literature. In fact, the panel was able to find only a few studies with this research design, and all but one of these studies were negative findings—that is, they found that the same learning method was superior for all kinds of students (...) As a consequence, the panel concluded, "at present, there is no adequate evidence base to justify incorporating learning styles assessments into general educational practice. Thus, limited education resources would better be devoted to adopting other educational practices that have strong evidence base, of which there are an increasing number."

So it seems to me that it's actually a lot of different theories, and none of them has an experimental proof. The evidence seems to actually point the other way.

My interpretation is, if you start using pictures in your class and you get better results, that's not because you have finally provided something useful to the "visual learners", but because you have provided something useful for everyone.

comment by ChristianKl · 2013-11-21T05:00:49.142Z · LW(p) · GW(p)

Could you ask a more specific question? There are a lot of things I could say on the topic.

Replies from: arundelo
comment by arundelo · 2013-11-22T05:39:15.414Z · LW(p) · GW(p)

Are you talking about basically the same stuff in Viliam_Bur's comment?

Using Anki is a learning style.

Did you mean to say "strategy" instead of "style" here?

Thanks.

comment by Viliam_Bur · 2013-11-20T09:22:57.869Z · LW(p) · GW(p)

Don't overcomplicate it. Anki allows you to create many "fields" for the card, but you often need two (question, answer) or four (question, footnote for question, answer, footnote for answer). Start now.

Don't worry about creating too many cards, or getting your answers wrong. You learn even by getting wrong answers. The only failure is to stop using Anki. -- Unless the questions are badly designed, in which case don't hesitate to redesign them. For example if there are two questions you often confuse with each other, try replacing them with a question "what is the difference between X1 and X2?".

Create some schedule for using Anki. For example: "the first thing after I start my computer".

comment by Adele_L · 2013-11-19T15:10:37.936Z · LW(p) · GW(p)

Gwern's article is a good place to start.

comment by Emile · 2013-11-20T09:56:16.040Z · LW(p) · GW(p)

I've been using Anki for the past few months, and recommend it for learning things. It's also a good way of making use of a daily commute.

The Lesswrong Wiki has pointers to some of the places it's been discussed here.

As a quick summary:

  • Make your cards super easy (the answers should be a single word as much as possible, and you should be able to answer it very quickly)
  • Use cloze deletion (a short sketch in Anki's own cloze notation follows this list); eg if you want to learn "As part of the reward pathway, dopamine is manufactured in nerve cell bodies located within the ventral tegmental area and is released in the nucleus accumbens and the prefrontal cortex.", you might make cards with:

"As part of the ???? pathway, dopamine is manufactured in nerve cell bodies located within the ventral tegmental area" reward

"As part of the reward pathway, dopamine is manufactured in ???? located within the ventral tegmental area" nerve cell bodies

"As part of the reward pathway, dopamine is manufactured in nerve cell bodies located within the ?????" ventral tegmental area

  • Create Anki cards yourself, don't use pre-made decks
  • Enter stuff in Anki once you have a basic understanding (i.e. after reading about it or having a lecture etc. - not directly entering facts without processing them)
  • Make cards that ask for pretty much the same information several times (definition of a concept, what's the concept of this definition, such-and-such is an example of which concept, etc.)
  • Review in little sessions dispersed through the day
  • Delete stuff if it feels useless / too difficult. If it's too hard, best to delete it and make several easier cards.
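
For the cloze bullet above, a minimal sketch of the same dopamine example in Anki's own notation: the standard Cloze note type turns each {{cN::...}} marker into its own card, much like the three hand-written cards shown earlier. The helper function here is purely illustrative.

```python
# Turn one sentence plus a list of key terms into a single Anki-style cloze note.
def make_cloze(sentence, terms):
    for i, term in enumerate(terms, start=1):
        sentence = sentence.replace(term, "{{c%d::%s}}" % (i, term))
    return sentence

text = ("As part of the reward pathway, dopamine is manufactured in nerve cell "
        "bodies located within the ventral tegmental area.")
print(make_cloze(text, ["reward", "nerve cell bodies", "ventral tegmental area"]))
# -> As part of the {{c1::reward}} pathway, dopamine is manufactured in
#    {{c2::nerve cell bodies}} located within the {{c3::ventral tegmental area}}.
```
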
comment by jefftk (jkaufman) · 2013-11-20T20:37:51.012Z · LW(p) · GW(p)

The problem I've seen with trying to sell people on Anki in the outside world is that many places will give you printable flash cards. An easy way to get these into Anki would go a long way.

Replies from: ChristianKl
comment by ChristianKl · 2013-11-21T17:22:30.660Z · LW(p) · GW(p)

The problem I've seen with trying to sell people on Anki in the outside world is that many places will give you printable flash cards. An easy way to get these into Anki would go a long way.

Converting them depends entirely on how the cards are formatted. If you get them in some digital file you could convert them into a table and import them into Anki.
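
As a rough sketch of that conversion, assuming the printable cards already exist digitally as a CSV with one front/back row per card (the file names here are placeholders): Anki's text importer accepts tab-separated files and maps the columns to note fields.

```python
import csv

# Read "front,back" rows and write a tab-separated file that Anki can import
# via File > Import, one note per line.
with open("printable_cards.csv", newline="", encoding="utf-8") as src, \
     open("anki_import.txt", "w", encoding="utf-8") as dst:
    for row in csv.reader(src):
        front, back = row[0], row[1]
        dst.write(front + "\t" + back + "\n")
```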

However the more I use Anki the less I think of Anki as flashcards and the more I think of it as a new medium.

With a stack of printable cards you go through the cards till you think you know them all. You usually try to review similar cards together. With Anki you don't review similar cards together but spread them out over time.

People often want to brute force information into their brain with flashcards while Anki is primarily about making sure that you can remember what you learned.

comment by A1987dM (army1987) · 2013-11-20T21:54:10.519Z · LW(p) · GW(p)

Slate Star Codex is back in the Recent on Rationality Blogs sidebar! Yay!

comment by Bayeslisk · 2013-11-17T00:25:17.408Z · LW(p) · GW(p)

An interesting idea I had: epistemology sparring. Figure out a way to make model battlefield rationality work, say in some kind of combat-like game - think boffer LARP, paintball, or even lacrosse or dodgeball - just make it immediate and physical. Make success in the game tied directly to the ability to determine the probability of truth of some statement quickly, and more quickly and accurately than your opponents, and make the games short - no more than half an hour each. Do these frequently to allow for averaging out the failures that will definitely result during play, both at the beginning, due to inexperience, and due to bad luck or inconsistent performance. Any suggestions?

Replies from: MathiasZaman, yli, EGarrett, Lumifer, Manfred
comment by MathiasZaman · 2013-11-20T12:25:47.058Z · LW(p) · GW(p)

I occasionally go airsofting, which is quite similar to paintball. The organizers mainly borrow from First Person Shooters for their rules, but if you adapt the rules to something similar to Quirrell's games in Methods of Rationality and throw in ways to easily conceal information, you might be able to make it work.

First of all, have more than two teams. Two teams don't allow for interesting scenarios such as teaming up, betraying, bluffing... Adding an extra one does.

Don't have "respawns". Failure should be meaningful and you can have "wipe out the enemy" as an objective. It also lowers the time needed to complete an objective.

Give incomplete or inaccurate information to your players about the goals or the layout of your arena. This forces them to quickly process new information they encounter on the battlefield.

There are other things you can do, but I'll have to give it some more thought.

Replies from: Emile, Bayeslisk
comment by Emile · 2013-11-20T17:58:39.993Z · LW(p) · GW(p)

You can still have interesting stuff with more than two teams!

Players compete for points, and get 50 points if their team wins, but get points for other things: negative points for being killed, positive points for sub-objectives that don't particularly help their team (picking up items hidden throughout the playing field, standing on tall things...).

For extra fun, give each player a random secret objective (including a "you actually play for the other team, and get scored accordingly" card).

Or for a totally different feel, have an "accomplish objectives" game (find hidden items, or bring back water to your jar with a spoon, the one with the most wins), but also give them paintball/airsoft guns, with rules on death/respawning etc.
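
A minimal sketch of how such per-player scoring might be tallied: the 50-point team bonus comes from the description above, while the per-death penalty, per-objective bonus, and secret-objective bonus are arbitrary placeholders.

```python
# Toy tally for the multi-objective scoring scheme sketched above.
def player_score(team_won, deaths, side_objectives, secret_objective_done):
    score = 50 if team_won else 0
    score -= 10 * deaths            # assumed penalty per death
    score += 5 * side_objectives    # assumed bonus per hidden item, tall thing, etc.
    if secret_objective_done:
        score += 20                 # assumed bonus for a completed secret objective
    return score

print(player_score(team_won=True, deaths=2, side_objectives=3,
                   secret_objective_done=False))  # -> 45
```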

comment by Bayeslisk · 2013-11-20T17:29:27.733Z · LW(p) · GW(p)

I used to play airsoft a few years ago. The main problem with that would be the relatively large expense to start playing, and the need for a very large area. I agree that there should be more than two teams, as I've said elsewhere, and the lack of respawns would make sense.

Possibly, however, limited-range weaponry would be more fun/useful/easy to deal with - nerf/boffer swords (which I admit I have no experience with), thrown weaponry like tennis balls or dodgeballs, and so on.

comment by yli · 2013-11-18T23:56:57.452Z · LW(p) · GW(p)

In a way every game is a rationality game, because in almost every game you have to discover things, predict things, etc. In another way almost no game is one, because domain-specific strategies and skills win out over general ones.

One idea is based on the claim that general rationality skills matter more when it's a fresh new game that nobody has played yet, since then you have to use your general thinking skills to reason about things in the game and to invent game-specific strategies. So what if there were "mystery game" competitions where the organizers invented a new set of games for every event and only revealed them some set time before the games started? I don't know of any that exist, but it would be interesting to see what kinds of skills would lead to consistent winning in these competitions.

There are various other ways you could think of to make it so that the game varies constantly and there's no way to accumulate game-specific skills, only general ones like quick thinking, teamwork etc. Playing in a different physical place every match like in HPMoR's battles is one.

Replies from: Bayeslisk
comment by Bayeslisk · 2013-11-19T02:58:55.010Z · LW(p) · GW(p)

Possibly. You're giving me an idea - have a simple game with a few interacting rules drawn randomly from a larger set of interacting rules, and see who figures out how to take advantage of how the rules interact. Remind me to get back to you on this.

Replies from: gwern, lmm
comment by gwern · 2013-11-19T04:48:47.046Z · LW(p) · GW(p)

have a simple game with a few interacting rules drawn randomly from a larger set of interacting rules, and see who figures out how to take advantage of how the rules interact.

Sounds like a matrix IQ test...

BTW, have people in this thread played the Flash game "This Is The Only Level" http://armorgames.com/play/4309/ ?

Replies from: Bayeslisk, JoshuaZ
comment by Bayeslisk · 2013-11-19T07:14:33.779Z · LW(p) · GW(p)

What is a matrix IQ test?

I have played that. My idea is slightly similar, I think.

Replies from: tut
comment by tut · 2013-11-19T13:20:33.694Z · LW(p) · GW(p)

Something like the Raven matrices.

comment by JoshuaZ · 2013-11-19T05:06:36.802Z · LW(p) · GW(p)

Ok. I just wasted 15 minutes on that game and got up to stage 13. That looks potentially addicting. How many stages are there?

Replies from: witzvo
comment by witzvo · 2013-11-21T00:59:36.022Z · LW(p) · GW(p)

About 30. Fun. Just finished. (16:21, 83 deaths) Edit: uhoh there's a 31. hmmm.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-11-21T01:29:40.351Z · LW(p) · GW(p)

O. I can't get past 31. Is that a real level? If so, what does one do?

comment by lmm · 2013-11-19T09:32:12.548Z · LW(p) · GW(p)

Mafia as played online works like that - you have broad expectations, but no idea which roles exist in this specific game.

Replies from: Bayeslisk
comment by Bayeslisk · 2013-11-19T17:38:07.572Z · LW(p) · GW(p)

This is also very close to what I'm trying to get at, but I want for it to be both more physical and conducted in real time.

comment by EGarrett · 2013-11-17T07:49:19.933Z · LW(p) · GW(p)

Hi, this is P. from New Jersey, right? (don't want to give away your personal info if you don't wish). I spoke to someone at the MeetUp yesterday who brought up this exact issue.

-Ernie

Replies from: Bayeslisk
comment by Bayeslisk · 2013-11-17T18:18:31.104Z · LW(p) · GW(p)

Yes, it is!

Replies from: EGarrett
comment by EGarrett · 2013-11-17T22:48:47.183Z · LW(p) · GW(p)

Pretty cool. Given what I've seen though (not from you), I don't know if I'll be posting much here. It was great to meet you guys.

Replies from: tgb
comment by tgb · 2013-11-18T13:09:55.754Z · LW(p) · GW(p)

Wow - someone downvotes the new guy expressing apprehension at becoming part of this community? That's a new low in terms of openness.

We're not all like that, EGarrett!

Replies from: EGarrett
comment by EGarrett · 2013-11-18T15:19:25.627Z · LW(p) · GW(p)

Hi tgb,

I think it's a reference to the post at the bottom of the page, that appeared to receive a few downvotes and ad hominems, and the site blocked my attempt to fix the post or reply to clarify my intent. Not the best foot to start off on or feel welcomed. But your reply is much appreciated.

-EG

comment by Lumifer · 2013-11-20T18:12:48.674Z · LW(p) · GW(p)

Things become much easier if you drop the "physical" constraint.

A basic example would be the ability to gain power-ups by demonstrating rationality in some Quake/Unreal FPS environment.

Replies from: Kawoomba, Bayeslisk
comment by Kawoomba · 2013-11-20T18:27:12.692Z · LW(p) · GW(p)

Whenever you become better at executing successful strategies in-game, you're improving your instrumental rationality concerning your goal of "beating the game". Already. As is. Probabilities don't need to be explicitly stated, and typically aren't.

Replies from: Lumifer
comment by Lumifer · 2013-11-20T18:36:56.999Z · LW(p) · GW(p)

Yes, of course. But the OP basically wanted to gamify teaching rationality, in particular by providing immediate feedback to decisions in a game setting. What I am saying is that modifying an FPS game so that specific rationality challenges (which reflect what you want to teach) result in gaining or losing power-ups is much easier than setting up a many-people physical game.
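
One minimal sketch of such a modification, assuming (purely for illustration) that the game asks players for probability estimates about in-game facts between rounds, scores them with a proper scoring rule, and converts good calibration into power-ups; the thresholds and power-up names are placeholders, not anything proposed above.

```python
# Between rounds, score each player's stated probabilities with the Brier rule
# and hand out a power-up for good calibration.
def brier_score(forecasts):
    # forecasts: list of (stated_probability, outcome_was_true) pairs
    return sum((p - (1.0 if outcome else 0.0)) ** 2
               for p, outcome in forecasts) / len(forecasts)

def powerup(forecasts):
    score = brier_score(forecasts)   # 0.0 is perfect; 0.25 is "always say 50%"
    if score < 0.10:
        return "quad damage"
    if score < 0.20:
        return "armor boost"
    return "no bonus"

print(powerup([(0.9, True), (0.6, False), (0.7, True)]))  # -> armor boost
```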

comment by Bayeslisk · 2013-11-21T07:49:23.867Z · LW(p) · GW(p)

It would become easier, I agree. It would also lose a lot of what makes real, physical things special. Why do people still play actual board games made of cardboard, or bother to meet face to face? A lot of the rationality involved would probably be figuring out through facial expression and vocal intonation whether or not you can trust someone, and this is nearly impossible in a nonphysical context.

comment by Manfred · 2013-11-17T07:46:47.818Z · LW(p) · GW(p)

Poker-boxing.

Replies from: Bayeslisk
comment by Bayeslisk · 2013-11-17T18:19:12.259Z · LW(p) · GW(p)

I would prefer that it be among relatively many people to allow for alliance-building, betrayal, and exercise of the social aspects of combat epistemology. Also, people who come in with knowledge of chess or boxing would have an unfair advantage.

comment by gwern · 2013-11-20T23:56:02.724Z · LW(p) · GW(p)

Are there any expats in Japan who might be willing to do me a favor and find out the answer to a technology-related question I have? It's something I lack the Japanese skills and on-the-ground experience to answer. It should be pretty easy and take under half an hour. Ping me at gwern@gwern.net if you're willing, and thanks in advance.

Replies from: ChristianKl
comment by ChristianKl · 2013-11-21T13:57:00.421Z · LW(p) · GW(p)

Are there any expats in Japan who might be willing to do me a favor and find out the answer to a technology-related question I have?

Is there a reason why you address expats in Japan instead of people living in Japan?

Replies from: gwern
comment by gwern · 2013-11-21T16:18:00.245Z · LW(p) · GW(p)

Yes. I want expats specifically, not Japanese.

comment by Ritalin · 2013-11-18T12:57:21.365Z · LW(p) · GW(p)

They're making a Noah film. It's got Russell Crowe, Emma Watson and Anthony Hopkins. From the trailer, I anticipate that this film will be an immense source of unfortunate implications and horrifying subtext. "You must trust that He will speak in a language you can understand"?! You know, after reading some resources that pattern-match God's behaviour with that of an abusive partner, I just can't unsee it...

Also, amusingly enough, it features a spherical Earth. And I have to wonder how they'll fit "one couple of every species of the Earth" in that ship, huge though it is, without involving Gallifreyan technology. And about all the water on Earth not being sufficient to actually flood everything; will they have God miraculously, spontaneously and temporarily generate water for that specific purpose, and then later remove it?

You know, I'd love to be chill about this stuff, to say it's "just a good story", that I should invoke the MST3K mantra and just relax, but I can't, because the story doesn't seem to be all that good in the first place. Jor-El's story has a similar motif and looks better than this.

Replies from: Nate_Gabriel, Desrtopa, tgb, Nornagest, WalterL, RomeoStevens, ChristianKl, Gvaerg, gattsuru
comment by Nate_Gabriel · 2013-11-21T14:27:19.192Z · LW(p) · GW(p)

Standard young-Earther responses, taken from when I was a young-Earth creationist.

Round Earth: Yes. You sort of have to stretch to interpret the Bible as saying the Earth is round or flat, so it's not exactly a contradiction. Things like "the four corners of the Earth" are obvious metaphor.

Animals on the boat: The "kinds" of animals (Hebrew "baramin") don't correspond exactly to what we call species. There are fewer animals in the ark than 2*(number of modern species); this is considered to be a sufficient answer even though it probably isn't. I don't know exactly what level of generality the baramin are supposed to be; I guess it depends on how much evolution the particular creationist is willing to accept. They'll typically use the example of dogs and wolves being the same "kind," but if that's the level of similarity we're talking about then there'll still be an awful lot of kinds.

Amount of water: The Earth used to be a lot smoother. Shallower oceans, lower mountains, etc. So it could be covered with a more reasonable amount of water. We know this because in the genealogies some guy named his son after the fact that "in his day the Earth was divided." (The word for divided, Peleg, means earthquake or cataclysm or something. This verse also doubles as tectonic plates being moved around.)

I don't agree with these, but thought that to avoid strawmanning I should post the responses that I would have used. Not that they're much better than the straw version, but this is the kind of thing that would have been said by at least one YEC.

comment by Desrtopa · 2013-11-19T16:28:32.332Z · LW(p) · GW(p)

Also, amusingly enough, it features a spherical Earth. And I have to wonder how they'll fit "one couple of every species of the Earth" in that ship, huge though it is, without involving Gallifreyan technology.

Biblically speaking, it's seven breeding couples of every "clean" species, one of every unclean.

What's much more nonsensical than fitting all those animals onto the ark in the first place though, is the idea that it would actually save them. You've got predators reduced to equal numbers with their prey species; memory check, what do they live on?

Replies from: Ritalin
comment by Ritalin · 2013-11-20T15:51:39.058Z · LW(p) · GW(p)

Didn't Think This Through, huh?

Given how God is Almighty, one wonders why he didn't have all the bad people just drop dead where they stood, Kira-style. He did something similar with Egypt's firstborn, yes?

Replies from: Bayeslisk
comment by Bayeslisk · 2013-11-22T07:54:07.356Z · LW(p) · GW(p)

Bizarre... it seems just like if a nomadic Bronze Age tribe had picked up scraps of tales from, say, Babylon and Egypt, embellished other collective memories, and created some out of whole cloth for political purposes!

comment by tgb · 2013-11-18T13:19:25.894Z · LW(p) · GW(p)

This actually looks better than I expected. I anticipate it to have good visuals and to possibly be worth watching. And there are some redeeming aspects to the storyline; isn't it fundamentally about taking heroic responsibility and doing what is necessary in the face of an existential threat? Now there's also all sorts of other luggage it's pulling around (wait, isn't the existential threat being caused by the one who's asking him to alleviate it?).

Darren Aronofsky has done some good and rather intense films in the past (Pi, Black Swan, The Wrestler, Requiem for a Dream). He'll likely make it at least powerful, if not meaningful.

comment by Nornagest · 2013-11-22T20:24:19.731Z · LW(p) · GW(p)

pattern-match God's behaviour with that of an abusive partner

There's an interpretation of the Bible, or at least the Old Testament, that depicts its god and people as coevolving a workable set of ethics as successive attempts at top-down imposition fail.

This isn't going to fly with the omni(potent|scient|benevolent) God that's standard in modern Christianity, of course, but from my admittedly atheistic standpoint it tallies a lot better with the story as depicted.

Replies from: Ritalin
comment by Ritalin · 2013-11-22T21:55:09.590Z · LW(p) · GW(p)

I would like to know more about this interpretation.

comment by WalterL · 2013-11-22T20:07:30.338Z · LW(p) · GW(p)

You know I've always wondered how the world would react to Modern-Day-Noah if someone demonstrated a miracle or two. Ze has marching orders from on High, power to protect zir autonomy/safety but not compel obedience. Justification is "God Says So" and task is onerous and of no obvious benefit. (Build a big temple/ark/pyramid/whatever). Seems like that could be a pretty cool movie.

Replies from: Lumifer
comment by Lumifer · 2013-11-22T20:51:25.023Z · LW(p) · GW(p)

Seems like that could be a pretty cool movie.

You don't say.

comment by RomeoStevens · 2013-11-18T20:57:49.083Z · LW(p) · GW(p)

It is surprising that more cash-ins on the Christian demographic aren't made, given the excellent performance of previous Bible movies.

comment by ChristianKl · 2013-11-21T14:16:59.811Z · LW(p) · GW(p)

in that ship, huge though it is,

Its size (https://en.wikipedia.org/wiki/Noah's_Ark) is actually well defined:

it will be 300 cubits long (137.16 m, 450 ft), 50 wide (22.86 m, 75 ft), and 30 high (13.716 m, 45 ft);
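
A quick back-of-envelope check of those figures, assuming the 18-inch (45.72 cm) cubit implied by the quoted conversions:

```python
cubit_m = 0.4572                       # 18 inches, as implied by the quoted conversions
length = 300 * cubit_m                 # about 137.16 m
width = 50 * cubit_m                   # about 22.86 m
height = 30 * cubit_m                  # about 13.72 m
print(round(length * width * height))  # -> 43006, i.e. roughly 43,000 m^3 of enclosed volume
```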

Replies from: Ritalin
comment by Ritalin · 2013-11-21T20:46:32.042Z · LW(p) · GW(p)

I didn't feel like it was worth mentioning because of the pointlessness of it, but still, those are some absurdly large measurements for a ship, to be built in the Bronze Age, by one family.

comment by Gvaerg · 2013-11-23T18:05:31.842Z · LW(p) · GW(p)

There is also a TV adaptation from 1999, where the chronology is a bit mixed up because it presents the destruction of Sodom and Gomorrah as some sort of "prelude" to the Flood, whereas in the Bible the Sodom story is several hundred years after Noah. The reason I'm bringing this up is that in that film the destruction of Sodom is presented with fireballs/meteorites, which also feature in this linked trailer, so I'm led to think this film will also blend the two stories together in some way (there is no fire-related destruction in the Bible anywhere near the Flood story).

Also, I'm wondering if they will be incorporating popular/deuterocanonical traditions a la The Passion of the Christ - for example, Methuselah dying seven days before the Flood.

Replies from: Ritalin
comment by Ritalin · 2013-11-23T20:25:13.523Z · LW(p) · GW(p)

Hopkins probably plays Methuselah. And I had always thought the Sodom story was antediluvian... How many times must the LORD cleanse the world He so incompetently made? Clearly He has very sucky people-modeling skills.

Replies from: Gvaerg
comment by Gvaerg · 2013-11-23T21:29:13.751Z · LW(p) · GW(p)

Well, God only claimed he would never destroy people with water again... everything else was fair game.

Replies from: Ritalin
comment by Ritalin · 2013-11-23T22:32:28.286Z · LW(p) · GW(p)

God claimed he would never destroy people with water again

Well that makes him quite the bloody liar, then, doesn't it? What with all them Tsunamis and Typhoons and Hurricanes and plain big old Floods that've taken place since then, to this very day.

comment by gattsuru · 2013-11-18T22:46:16.636Z · LW(p) · GW(p)

Also, amusingly enough, it features a spherical Earth. And I have to wonder how they'll fit "one couple of every species of the Earth" in that ship, huge though it is, without involving Gallifreyan technology.

I'd strongly caution against fighting a false version of your opponent. Even among biblical literalists, very nearly none believe in a -spherical- (EDIT: flat, thank you for catching the typo) Earth (often citing parts of the bible that call the world a sphere!), and that's been the case for over a millennium. And while the movie probably will have impossible space CGI shenanigans, even the Creationist idiots tend to think of things in terms of "kind" rather than "species" (and often don't understand the latter's definition), and try to create some artificial dividing line between macroevolution and animal husbandry.

And about all the water on Earth not being sufficient to actually flood everything; will they have God miraculously, spontaneously and temporarily generate water for that specific purpose, and then later remove it?

The original Jewish version would probably go that way, since it was closer to a cataclysm/Ragnarok event in that belief structure. Modern Christian translations generally just turn it into rain. Given that the actions of a literal magic sky being are part of the premise...

That's not to say Biblical Literalism or Creationism is particularly coherent, nor that it's likely to be a good movie, of course.

Replies from: NancyLebovitz, Ritalin, polymathwannabe
comment by NancyLebovitz · 2013-11-21T14:48:42.539Z · LW(p) · GW(p)

Even among biblical literalists, very nearly none believe in a spherical Earth (often citing parts of the bible that call the world a sphere!), and that's been the case for over a millennium.

Is this a typo for "very nearly none believe in a flat Earth"?

Replies from: gattsuru
comment by gattsuru · 2013-11-21T15:57:41.892Z · LW(p) · GW(p)

Gah, yes. Thank you for catching that.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-11-21T16:44:08.318Z · LW(p) · GW(p)

I'm a little surprised no one caught it sooner.

Replies from: arundelo, lmm
comment by arundelo · 2013-11-22T09:32:34.731Z · LW(p) · GW(p)

This failure mode of saying the reverse of what was meant is called misnegation. Often it's accompanied by readers or listeners taking the intended meaning without noticing the mistake.

comment by lmm · 2013-11-22T08:19:15.900Z · LW(p) · GW(p)

I saw it sooner, but posting a correction seemed nitpicky.

comment by Ritalin · 2013-11-19T01:33:00.674Z · LW(p) · GW(p)

Still, if they could pull it off in a way that makes internal sense, that'd be kind of an awesome feat.

comment by polymathwannabe · 2013-11-22T19:32:41.137Z · LW(p) · GW(p)

The Old Testament does not describe Earth as a 3D sphere but as a flat circle.

comment by MrMind · 2013-11-18T11:06:35.441Z · LW(p) · GW(p)

There have been some attempts at categorizing infinite-dimensional quantum physics.
While finite-dimensional quantum physics is pretty much categorized (symmetric monoidal dagger categories, see the works of Coecke and Abramsky), the infinite-dimensional version is still in its infancy.
If a fully quantum category is found, and shown to have more than one model, then I think that the idea of reality as an infinite-dimensional configuration space inhabited by an atemporal wave-function should be abandoned.

Replies from: pragmatist, Douglas_Knight
comment by pragmatist · 2013-11-18T14:37:08.440Z · LW(p) · GW(p)

If a fully quantum category is found, and shown to have more than one model, then I think that the idea of reality as an infinite-dimensional configuration space inhabited by an atemporal wave-function should be abandoned.

Why?

Replies from: MrMind
comment by MrMind · 2013-11-18T15:42:36.107Z · LW(p) · GW(p)

Because then any model of the full quantum category would describe the same reality: instead of the infinite-dimensional Hilbert space you could just choose, say, a much looser ontology with sets and relations between them (as in Spekkens' toy model of quantum physics).
This arbitrariness would undermine the raison d'être of the present model.

comment by Douglas_Knight · 2013-11-18T17:12:03.602Z · LW(p) · GW(p)

There are already examples of quantum theories with multiple interpretations, most famously AdS/CFT. Do these affect your view of realism?

comment by LoserOfPasswords · 2013-11-17T16:41:48.496Z · LW(p) · GW(p)

I have a question about modafinil which I'm well aware should be addressed to a doctor, but I did ask a doctor, and didn't get a meaningful answer. Given that I have a minor (I think?) heart condition (a year ago, for about a week, I had sinus bradycardia at 50bpm, sinus tachycardia at 169bpm, occasional wrenching feelings in my chest accompanying 3-beat salvos of premature ventricular contractions and premature atrial contractions, no atrioventricular block, no sustained ventricular tachycardia), what are the chances that taking modafinil (given that it is a stimulant) will damage/kill me? I'm a 22-year-old man, and to the best of my knowledge have no other relevant health conditions to take into account.

I'm asking LessWrong because I want a probability estimate I can base expected utility calculations off of, not an "I can't prescribe that because I'm afraid of malpractice lawsuits", and because I suspect this question is easier than it sounds and the doctor I talked to was just being cautious because (as he said) he wasn't familiar with modafinil.

Replies from: dougclow, hyporational, Douglas_Knight, gattsuru
comment by dougclow · 2013-11-20T21:27:19.874Z · LW(p) · GW(p)

I think this is spectacularly hard to get a robust estimate of, but my wild, uninformed guess is that your chances of dying from modafinil interacting with your heart condition are less than 25%, and probably less than 5%. (I try not to pull probabilities higher or lower than 5%/95% out of the air - I need a model for that.) That's for the simple case where you don't get addicted and take ever-higher doses, or start taking other stimulants too, or start smoking, etc.

The only hard information I can get a handle on is that the US manufacturer lists existing cardiovascular conditions as a potential contraindication. I suspect this is on general principles (stimulants are known to make them worse, modafinil is a stimulant, sort-of) rather than on hard data about problems caused.

Reporting systems for drug side effects are haphazard and leaky at best, and it's very hard to do decent analysis. Unusual combinations that aren't very deadly just aren't going to show up in the research. The fact that we haven't heard that it's deadly does, though, put something of a ceiling on just how bad it could be (my 25% above).

Most medics would reckon that taking stimulants you don't have to take, when you have a known cardiovascular condition, is unwise. (Although some of them do it themselves early in their careers.) Quantifying 'unwise' is tricky. There's the general issue of data I just mentioned. Then there's trying to think it through. On the plus side, modafinil is less likely to cause problems for CV patients in the way that other, more general CNS stimulants are known to; but on the minus side, we don't properly understand how it does work.

Doctors are by nature very cautious: "first, do no harm" and all that. You might come to a different cost/benefit decision.

FWIW, I wouldn't take it in your shoes. But I don't take it myself, despite having no contraindications. I'm extremely risk-averse, particularly about my own life, and place more emphasis on quantity of life than quality compared to most people (on hedonic adaptation grounds).

comment by hyporational · 2013-11-18T06:49:51.171Z · LW(p) · GW(p)

a year ago, for about a week, I had sinus bradycardia at 50 bpm, sinus tachycardia at 169 bpm, and occasional wrenching feelings in my chest accompanying three-beat salvos of premature ventricular contractions and premature atrial contractions

Everyone has these; the question is why, how many, and for how long. What "condition" were you diagnosed with?

Replies from: NancyLebovitz, LoserOfPasswords
comment by NancyLebovitz · 2013-11-21T02:26:48.254Z · LW(p) · GW(p)

I don't have that kind of thing often enough to notice. Do you mean everyone has odd heartbeats, but some can only be noticed with instrumentation?

Replies from: hyporational
comment by hyporational · 2013-11-21T02:33:01.854Z · LW(p) · GW(p)

Everyone has odd heartbeats. Only a small minority of them can be noticed without instrumentation. Having an occasional wrenching feeling isn't a sign of pathology, and most people have them. I have them. You can also have sensations in your chest that have nothing to do with arrhythmias.

comment by LoserOfPasswords · 2013-11-18T16:30:14.224Z · LW(p) · GW(p)

I wasn't diagnosed with any cause of these. I complained of wrenching feelings in my chest (if I remember correctly, at worst about once a day) over about a week, roughly a year ago; they hooked up a Holter monitor, and this is what they said I had. The doctor who did this said they were basically harmless; it's just the second doctor (whom I tried to get modafinil from) who thought they contraindicated it.

Replies from: hyporational
comment by hyporational · 2013-11-18T16:38:52.513Z · LW(p) · GW(p)

And how long was the Holter monitoring? How many episodes of arrhythmias?

Replies from: LoserOfPasswords
comment by LoserOfPasswords · 2013-11-18T19:12:51.397Z · LW(p) · GW(p)

Three days; unknown number of arrhythmias. (Roughly one that I noticed per day over the whole week; I don't remember how many happened with the monitor on.)

Replies from: hyporational
comment by hyporational · 2013-11-21T01:56:29.189Z · LW(p) · GW(p)

Well, as I said, everyone has them. The number of premature ventricular contractions, and whether they happen in groups, would be the most interesting figure. I can imagine that if you had many, they would have told you so.

How many you noticed might have nothing to do with how many you actually had, and you noticing something might not have anything to do with actually having arrhythmia.

comment by Douglas_Knight · 2013-11-20T23:06:33.795Z · LW(p) · GW(p)

Yes, modafinil elevates heart rate and blood pressure. But probably not as much as caffeine.

comment by gattsuru · 2013-11-20T22:13:05.454Z · LW(p) · GW(p)

The FDA's analysis says (PDF warning):

Modafinil has not been evaluated in patients with a recent history of myocardial infarction or unstable angina, and such patients should be treated with caution.

In clinical studies of PROVIGIL, signs and symptoms including chest pain, palpitations, dyspnea, and transient ischemic T-wave changes on ECG were observed in three subjects in association with mitral valve prolapse or left ventricular hypertrophy. It is recommended that PROVIGIL tablets not be used in patients with a history of left ventricular hypertrophy or in patients with mitral valve prolapse who have experienced the mitral valve prolapse syndrome when previously receiving CNS stimulants. Such signs may include but are not limited to ischemic ECG changes, chest pain, or arrhythmia. If new onset of any of these symptoms occurs, consider cardiac evaluation.

Blood pressure monitoring in short-term (<3 months) controlled trials showed no clinically significant changes in mean systolic and diastolic blood pressure in patients receiving PROVIGIL as compared to placebo. However, a retrospective analysis of the use of antihypertensive medication in these studies showed that a greater proportion of patients on PROVIGIL required new or increased use of antihypertensive medications (2.4%) compared to patients on placebo (0.7%). The differential use was slightly larger when only studies in OSAHS were included, with 3.4% of patients on PROVIGIL and 1.1% of patients on placebo requiring such alterations in the use of antihypertensive medication. Increased monitoring of blood pressure may be appropriate in patients on PROVIGIL.

I'm not sure we have large enough numbers to give a meaningful analysis. I could give a 25% (+/- 22%) confidence that you'd experience elevated blood pressure, but you'd experience far greater harm from everyday stress or lack of exercise. There's a nice big list of serious adverse effects, but two or four cases out of seven hundred people doesn't seem very informative.

Dietary and psychiatric concerns may also be relevant.

comment by fubarobfusco · 2013-11-21T17:22:56.523Z · LW(p) · GW(p)

Gray, Ward, and Norton, "Paying It Forward: Generalized Reciprocity and the Limits of Generosity", is a recent study using chained dictator games: each participant is left some amount by the previous (fictitious) player, and chooses how much to leave to the next (also fictitious) player.
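This isn't the authors' procedure or code, but here's a toy sketch of the chained setup, with a made-up pass-forward rule chosen to mimic the asymmetry the abstract (quoted downthread) reports; the stake size and the `pass_forward` policy are purely illustrative:

    ENDOWMENT = 6  # arbitrary illustrative stake

    def pass_forward(received):
        """Made-up policy mimicking the reported asymmetry: greed and equality
        are reciprocated, but generosity regresses back to an equal split."""
        return min(received, ENDOWMENT // 2)

    def chain(first_amount, length=10):
        """Run a chain of one-shot 'pay it forward' decisions."""
        amounts = [first_amount]
        for _ in range(length):
            amounts.append(pass_forward(amounts[-1]))
        return amounts

    for start in (0, 3, 6):  # greedy, equal, and generous first movers
        print(start, "->", chain(start))

Under that rule, a greedy start (0) and an equal start (3) propagate down the whole chain, while a generous start (6) collapses back to the equal split after one step, which is roughly the pattern the paper describes.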

Replies from: witzvo
comment by witzvo · 2013-11-21T18:59:20.355Z · LW(p) · GW(p)

Looks like the makings of a good main post, to me. (I haven't read it all yet.)

Five experiments demonstrate that people pay forward behavior in the sorts of fleeting, anonymous situations that increasingly typify people’s day-to-day interactions. These data reveal that—in contrast to the focus of media, laypeople, and prior research—true generosity is paid forward less than both greed and equality. Equality leads to equality and greed leads to greed, but true generosity results only in a return to equality—an asymmetry driven by the greater power of negative affect.

comment by hyporational · 2013-11-18T16:20:00.980Z · LW(p) · GW(p)

Does the search function actually work for people here? It would be nice to find old comments every now and then. For me it just hangs in the loading phase, and very rarely returns anything.

Have tried both Chrome and Firefox.

Replies from: Tenoke, gwern, Oscar_Cunningham, Nectanebo, NancyLebovitz
comment by Tenoke · 2013-11-20T11:07:12.763Z · LW(p) · GW(p)

I had the same problem for ages. I've just made a bookmarklet for a Google search starting with 'site:lesswrong.com', triggered when I enter 'lw' in the URL bar.
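For anyone who'd rather have a shell-callable script than a browser keyword, the same trick is a few lines in any language. Here's a minimal Python sketch (the function name and default query are made up; it just hands the search off to your default browser):

    import sys
    import urllib.parse
    import webbrowser

    def search_lw(query):
        """Open a Google search restricted to lesswrong.com in the default browser."""
        q = urllib.parse.urlencode({"q": "site:lesswrong.com " + query})
        webbrowser.open("https://www.google.com/search?" + q)

    search_lw(" ".join(sys.argv[1:]) or "open thread")

Bind that to a hotkey (or use gwern's window-manager approach below) and the broken on-site search stops mattering much.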

Replies from: hyporational
comment by hyporational · 2013-11-20T12:50:15.661Z · LW(p) · GW(p)

OK, googling for most old comments seems to work fine now. It seems not all of them are indexed, though, and there's a significant delay in indexing new comments.

comment by gwern · 2013-11-19T01:01:07.185Z · LW(p) · GW(p)

I long ago simply set up window manager keybindings for Google and other sites, so I just hit mod-g and type 'keyword site:lesswrong.com'. If you don't have a hotkey set up for Google... I recommend you set one up. If you do nearly as many searches as I do, you'll find it pays for itself in weeks.

Replies from: hyporational
comment by hyporational · 2013-11-19T01:11:05.236Z · LW(p) · GW(p)

This used to work better before, I think. It seems Google doesn't index most comments these days. I suppose that's what causes this whole problem then, huh?

comment by Oscar_Cunningham · 2013-11-18T20:08:24.468Z · LW(p) · GW(p)

For me it works well.

comment by Nectanebo · 2013-11-18T19:08:23.035Z · LW(p) · GW(p)

I have the same problem, although I don't think it's ever returned anything for me.

comment by NancyLebovitz · 2013-11-18T16:36:28.126Z · LW(p) · GW(p)

It works for me, though it's not very efficient in returning what I want: if I ask for [quotes], the results won't be in chronological order, but if I ask for [quotes $month], they will be. At least it returns results quickly.

I'm using Chrome.

comment by JQuinton · 2013-11-19T22:56:37.670Z · LW(p) · GW(p)

I was supposed to go to Brazil this past week, but I didn't get my visa in time. A few weeks earlier, since I was in a rush to get the visa in time, I contacted one of those companies that try to expedite the visa process. In conversations with them, they told me flat out that I might not get my visa in time, but that once I put in the order I would not be able to get my money back.

It seemed like a bet worth taking, even at low odds, for the sake of the trip to Brazil. But, obviously, I did not get my visa in time, and thus I lost both my plane ticket fee and the visa-expediting fee (the plane ticket was also non-refundable). When I mentioned this to some friends, they said I should have gotten my money back from the visa company, even though I had agreed up front that I wouldn't get a refund if they weren't able to get me the visa in time.

So my question(s): Should I not have agreed to the no-refund terms up front? Is that a standard thing these companies do, or was I just dealing with a less-than-reputable company in my haste? I'm also a bit embarrassed about not getting the visa in time; numerous people have told me that I should be more aggressive in trying to get my money back, but on the face of it that doesn't seem likely to work. Is my "doesn't seem likely" feeling accurate, or a rationalization?

Replies from: RolfAndreassen, ChristianKl
comment by RolfAndreassen · 2013-11-22T02:18:32.333Z · LW(p) · GW(p)

ostensibly

This word does not appear to be the one you want; it does not make sense in this context.

Replies from: JQuinton
comment by JQuinton · 2013-11-22T22:41:35.473Z · LW(p) · GW(p)

Thanks. No one has ever corrected me about my improper usage before.

comment by ChristianKl · 2013-11-20T21:22:32.309Z · LW(p) · GW(p)

How did you choose the visa-expediting company? Did you have recommendations? If a company has a very good reputation, it's reasonable for it to put the risk of failure on the client. If it doesn't have a good reputation, however, there's no real reason why it shouldn't take on that risk itself.

There's also the question of why you don't name the company. These days, people search the internet to form opinions about companies.

Replies from: JQuinton
comment by JQuinton · 2013-11-21T13:54:37.569Z · LW(p) · GW(p)

The name of the company was Visa Passport Pro.

comment by hyporational · 2013-11-18T16:02:52.828Z · LW(p) · GW(p)

Does searching comments actually work for some people here? I can't find anything these days, and it's a serious limitation.

comment by EGarrett · 2013-11-17T08:09:59.961Z · LW(p) · GW(p)

You live in an unending hell, and the only way to escape is by calling your nearest quantum physics lab.

This is a totally half-baked notion based on a layman's reading of quantum mechanics, but it's fun to bring up anyway. We talked about it a bit at yesterday's meetup.

Anyway, according to this theory, you might be stuck in a literally infinite hell, and you have to call your nearest quantum physics lab ASAP in order to escape. The reason is this: Given the 99.9999...% chance that you are not alive at any given point in the universe's (or multiverse's) timeline, it's extremely unlikely that you would be alive right now, let alone that you would ever have existed in the first place. One way to explain this is to say that the infinite number of universes being generated (through multiverse bubbles or what-have-you) INEVITABLY and CONTINUALLY recreates you, with perhaps quintillions of years between each creation. Thus it's trivial to explain why you happen to be alive and conscious right now: from your perspective, you always are. When you die, you are non-existent for a quintillion kajillion years, and the moment your particular universe recurs, bang, you pop up again in your crib, which from your perspective is instantaneous. You have no memory of any of this, of course, but it means you live an infinite number of times.

Now here's the kicker. IF, by some unfortunate chance, something happens to you and you are killed or end up spending the rest of your life in pain, IT WILL HAPPEN TO YOU AN INFINITE NUMBER OF TIMES, in every life. BUT, if the macro world is deterministic and quantum physics isn't, you can escape this by calling your nearest quantum physics lab, getting a non-deterministic bit of information from them ("would you mind checking whether a certain particle decayed in the last minute?"), and using their answer to dictate an action that changes the course of your day and your life. Even something as simple as deciding which route to take to work.

This way, you break the deterministic pattern of your life, including a potential infinitely recurring disaster, and can live multiple versions of your life.
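If anyone wants to operationalize the "ask the lab for a bit" step, here's a minimal Python sketch. `get_quantum_bit()` is just a placeholder for whatever source of quantum randomness you trust (the phone call to the lab, a hardware QRNG, etc.); the pseudo-random stand-in used here is emphatically NOT quantum, so on this theory it wouldn't actually buy you anything:

    import random

    def get_quantum_bit():
        """Placeholder for a genuinely quantum source of randomness
        (the phone call to the lab, a hardware QRNG, ...).
        random.getrandbits is a classical stand-in, NOT quantum."""
        return random.getrandbits(1)

    def pick_route():
        """Let the (allegedly) non-deterministic bit dictate a small decision."""
        routes = ["the usual route to work", "the scenic detour"]
        return routes[get_quantum_bit()]

    print("Today:", pick_route())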

That is all, and I hope this nonsense is at least a bit fun or interesting. Nice to meet all of you.

-Ernie

Replies from: Emile, savageorange
comment by Emile · 2013-11-17T14:59:28.661Z · LW(p) · GW(p)

Given the 99.9999...% chance that you are not alive at any given point in the universe's (or multiverse's) timeline, it's extremely unlikely that you would be alive right now, let alone that you would ever have existed in the first place.

There is an arrangement of pieces of clementine peel on my desk right now; it's exceedingly unlikely that this particular arrangement would occur right now in exactly this place. Therefore there must be an infinite number of universes in which that arrangement exists, or something.

comment by savageorange · 2013-11-17T10:10:59.522Z · LW(p) · GW(p)

I don't understand why you believe quantum mechanics is non-deterministic. Do you, perhaps, believe probability functions are non-deterministic?

Replies from: EGarrett
comment by EGarrett · 2013-11-17T12:06:11.888Z · LW(p) · GW(p)

Hi savageorange. I'm far from an expert in quantum mechanics, but I was led to believe it by a couple of sources I came across recently: particularly the YouTube video from MinutePhysics called "Can we predict everything", which says that non-determinism has been proven, and another video, I think from Lawrence Krauss, which mentioned that Einstein's "God does not play dice" was proven wrong by one of Einstein's contemporaries.

By the way, this is the first post I've made on the site. I tried not to present it as a topic for formal discussion, and mentioned that it was just something fun to ponder from a very layman perspective. The negative votes are a bit dismaying.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-11-17T21:31:49.450Z · LW(p) · GW(p)

The negative votes are a bit dismaying.

I understand your feelings.

My advice is to avoid discussing quantum physics for a while, especially if your knowledge of the topic comes from short slogans in YouTube videos. That kind of stuff really gets downvoted here.

(I don't know what your motivation for that was, so I will just guess wildly. It is not necessary to discuss "smart topics" in this forum; that's the kind of signalling that would probably seem cool in Mensa. Here, you could get better results by saying the obvious. Discuss the difficult topics when you feel confident that you understand them; otherwise, it may be better to ask questions.)

Is there any other topic you'd consider interesting to discuss on LW? (In a new top-level comment.) For example, what is the thing you care about that you hope LW might somehow be helpful for? Or, more generally, what are the things you care about?