January 1-14, 2012 Open Thread

post by daenerys · 2012-01-01T05:40:16.848Z · LW · GW · Legacy · 129 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

If continuing the discussion becomes impractical, that means you win at open threads; a celebratory top-level post on the topic is traditional.


Poster's Note: omg, it felt so weird typing "2012" up there.


comment by daenerys · 2012-01-01T05:40:48.348Z · LW(p) · GW(p)

Experiment: The open threads are always used a lot when they are first posted, but they are quickly forgotten, because they get hidden as the month progresses.

This month I will post two open threads: one for January 1-14, and another for January 15-31. I predict, at P=.8, that there will be significantly more open thread posts in the second half of the month using this 2-post method.

To test my hypothesis, I will average the number of open thread comments in the second half of the month across the past few open thread posts, and compare that average to the number of open thread comments on the January 15-31st post.

This test WILL be biased, as it is blind neither on my side nor on the commenters' side (i.e., people who post will read this and see the test I am doing). This will be somewhat ameliorated by my NOT posting this experiment note in the second thread, and in two weeks' time it will not be foremost in people's minds.
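A rough sketch of the comparison I have in mind (all the numbers below are placeholders, not real counts):

```python
# Hypothetical illustration: compare the average second-half comment count from
# past single monthly open threads against the count on the January 15-31 post.
past_second_half_counts = [14, 9, 11, 17]  # placeholder counts from previous months
jan_15_31_count = 23                       # placeholder count for the second January thread

baseline = sum(past_second_half_counts) / len(past_second_half_counts)
change = (jan_15_31_count - baseline) / baseline

print(f"baseline: {baseline:.1f} comments, observed: {jan_15_31_count}, change: {change:+.0%}")
```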

Replies from: billswift, None, Xachariah, Jayson_Virissimo
comment by billswift · 2012-01-01T10:08:09.978Z · LW(p) · GW(p)

Repeating my suggestion from the June 2010 Open Thread:

Some sites have gone to an every Friday open thread; maybe we should do it weekly instead of monthly, too.

I think it would also be good for people who have interesting links but no real comments on them. It is annoying when someone comments on link-only posts complaining that the person should not have posted a link without their own commentary.

comment by [deleted] · 2012-01-01T09:01:41.995Z · LW(p) · GW(p)

What's "significantly more"?

Replies from: daenerys
comment by daenerys · 2012-01-01T16:59:21.393Z · LW(p) · GW(p)

What's "significantly more"?

I predict:

P(twice as many subthreads started) = .4

P(50% more) = .7

P(5% more) = .9

comment by Xachariah · 2012-01-02T21:59:09.944Z · LW(p) · GW(p)

I predict, at P=.8, that there will be significantly more open thread posts in the second half of the month using this 2-post method.

Why would you test just the second half of the month, versus total posts in a month?

It seems that you would actually want to measure total participation in a month. I see no reason not to measure it directly, rather than indirectly.

Replies from: daenerys
comment by daenerys · 2012-01-02T23:29:56.203Z · LW(p) · GW(p)

Good idea! Thanks. It requires more counting, but I think it's the better way. Experiment updated!

comment by Grognor · 2012-01-04T03:09:51.305Z · LW(p) · GW(p)

I'm not sure this is even worth a comment.

But the "ETA" acronym. Can we stop using this? To normal people, "ETA" means "estimated time of arrival", not "edited to add". Just using "Edit:" instead of "ETA:" will work fine.

Replies from: Sniffnoy, Prismattic, Vaniver
comment by Sniffnoy · 2012-01-04T04:07:01.544Z · LW(p) · GW(p)

I like "Addendum:", but I don't think I've seen anyone else use this.

comment by Prismattic · 2012-01-04T03:41:20.923Z · LW(p) · GW(p)

On the subject of confusing acronyms, I get thrown every time someone uses NT to abbreviate neurotypical; my first take is always that they mean a Myers-Briggs NT type, until I realize that there is an alternative which makes the post actually make sense.

comment by Vaniver · 2012-01-04T23:24:58.875Z · LW(p) · GW(p)

Also, P.S. is a fine replacement.

comment by Oscar_Cunningham · 2012-01-01T15:18:07.137Z · LW(p) · GW(p)

Why doesn't Microsoft put an ad-blocker in Internet Explorer to reduce Google's revenue?

EDIT: And that of Facebook.

Replies from: Multiheaded, Jack, FAWS, Rain
comment by Multiheaded · 2012-01-01T19:49:48.406Z · LW(p) · GW(p)

They might not want escalation?

comment by Jack · 2012-01-02T22:25:25.537Z · LW(p) · GW(p)

It would likely leave them open to an anti-trust suit and might even violate their existing settlement agreement with the DOJ.

comment by FAWS · 2012-01-02T19:37:48.072Z · LW(p) · GW(p)

They are more interested in supplanting Google with Bing than in destroying that business model? And they might have been afraid of accusations of abusing their dominant position in the browser market, and may not have reevaluated in light of their shrinking market share.

comment by Rain · 2012-01-03T22:44:11.913Z · LW(p) · GW(p)

Because Microsoft's primary customers are not end users, but integration companies such as Dell and HP, and these companies maximize profits by loading PCs up with lots of advertisements.

comment by RomeoStevens · 2012-01-01T10:58:09.934Z · LW(p) · GW(p)

When buying items I often do quite a bit of research; I suspect many people on LW are also the type to do this. I think this is a potential goldmine of information, as I would trust the research/reviews of people from LW at least an order of magnitude more than random reviews on the internet.

Something as simple as "what do LWers buy on amazon" would be highly interesting to me. Is there a feasible way of collating this kind of info?

Replies from: magfrump
comment by magfrump · 2012-01-01T19:28:11.152Z · LW(p) · GW(p)

Isn't there a lesswrong referral set up for everything bought via a link from the site? I try to always use this. If it keeps any info about what was purchased that would definitely be interesting.

comment by David_Gerard · 2012-01-01T13:41:52.574Z · LW(p) · GW(p)

I note that the 2012 Welcome thread accumulated ~100 comments in very short order - it looks like there was pent-up demand. It may be worth doing these more often than every year or two as well.

In particular, the about page needs to link to the current welcome thread, not the 2010 one!

Replies from: daenerys
comment by daenerys · 2012-01-01T17:18:22.429Z · LW(p) · GW(p)

It may be worth doing these more often than every year or two as well.

This sounds like the sort of thing that just needs someone to be pro-active about. If you think it's time for a new Intro thread, post one!

(Ditto on open threads, media threads, social interaction threads, etc)

Replies from: David_Gerard
comment by David_Gerard · 2012-01-01T17:31:29.563Z · LW(p) · GW(p)

The catch being that someone has to link it from "about" :-) But yes.

comment by MileyCyrus · 2012-01-01T07:40:15.293Z · LW(p) · GW(p)

Is it ethical to eat a cow with human level intelligence that wanted to be eaten? To avoid convenient worlds, assume the cow not only wants to be eaten, but also likes and approves of being eaten. [Edited to clarify that this is an intelligent cow we're talking about.]

Replies from: Solvent, daenerys, wedrifid, ArisKatsaris, Multiheaded, satt, FAWS, scientism, Emile, shminux, Manfred
comment by Solvent · 2012-01-01T08:16:24.372Z · LW(p) · GW(p)

Well. The default answer is yes, because we like fulfilling preferences.

If the cow was genetically engineered to want to be eaten by a farmer who wanted to sell meat to vegetarians, then we may want to not buy the meat, just to discourage the farmer from doing that in the first place.

That's what I think, anyway.

comment by daenerys · 2012-01-01T09:44:10.839Z · LW(p) · GW(p)

We can already "grow" meat in a lab. I can't imagine that we would have the technology to genetically engineer an intelligent cow that wants to be eaten, but NOT be able to grow whatever meat we want of an extremely high quality.

Even more than that, I would bet the grown meat is more economical in terms of resource usage, i.e., with grown meat we are only growing the parts we want, whereas with a cow all the less-edible meat pieces are probably being wasted.

Replies from: EStokes
comment by EStokes · 2012-01-01T13:12:05.744Z · LW(p) · GW(p)

Isn't that kind of missing the point of the question?

Replies from: daenerys
comment by daenerys · 2012-01-01T17:36:36.779Z · LW(p) · GW(p)

Isn't that kind of missing the point of the question?

There was a recent discussion, Rationality of sometimes missing the point of the stated question, and of certain type of defensive reasoning, that contemplated the idea that sometimes it is useful to miss the point of the question. When I read the Eat Smart Cow question, it seemed like the type of question that requires said "missing of point". Quote below:

In light of how people process answers to such detailed questions, and how the answers are incorporated into the thought patterns - which might end up used in the real world - is it not in fact most rational not to address that kind of question exactly as specified, but to point out [other options in the hypothetical]?

I am, in fact, rather happy that I read something on LW and applied the thought to a different question a couple days later. It makes me feel like a growing aspiring rationalist who Learns Things.

Replies from: TimS, ahartell
comment by TimS · 2012-01-01T18:02:30.350Z · LW(p) · GW(p)

As I said in that thread, fighting the hypo is not polite behavior.

On the first day of a physics class, you can ask the professor to justify learning physics given the problem of Cartesian skepticism. The professor might have an interesting answer if she decided to engage you. But what would actually happen is that the professor will ask you to leave, because the conversation will not be a physics conversation, and the social norm is that physics classes are for physics conversations.

In short, the practical conversation you are trying to start is not the same as the theoretical one that was started.

comment by ahartell · 2012-01-02T19:30:35.585Z · LW(p) · GW(p)

The problem is that it doesn't answer the point of the question. In the least convenient possible world, where we can make the cows but can't just grow their meat, how would you answer the question?

comment by wedrifid · 2012-01-01T08:17:44.388Z · LW(p) · GW(p)

Is it ethical to eat a cow that wanted to be eaten?

I eat cows that emphatically don't want to be eaten all the time. I don't have an ethical problem with it.

Replies from: MileyCyrus
comment by MileyCyrus · 2012-01-01T08:20:21.491Z · LW(p) · GW(p)

Sorry, I was talking about the cow that wanted to be eaten in the Hitchhiker's Guide to the Galaxy. That cow was as intelligent as a human adult.

Replies from: wedrifid
comment by wedrifid · 2012-01-01T11:32:33.802Z · LW(p) · GW(p)

Sorry, I was talking about the cow that wanted to be eaten in the Hitchhiker's Guide to the Galaxy. That cow was as intelligent as a human adult.

At a first guess I'd say it is ethical to eat that cow but potentially not ethical to create him in the first place.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-01-01T21:12:21.518Z · LW(p) · GW(p)

The Ameglian Major Cow — which not only wants to be eaten, but is capable of saying so, clearly and distinctly — seems to be in the same family of ethically problematic artificial intelligences as the house-elf — which wants to serve you, suffers if it cannot do so, and has no defenses against mistreatment.

In both cases, if the creature already exists, you may as well exploit it, since doing so fulfills the creature's own intentions. But it seems to have been created for just the purpose of turning a vice into a virtue: of creating an artificial setup in which doing something that would normally be wrong (killing and eating other sapient beings; keeping slaves and benefiting from their labor) is rendered not-wrong by exceptional circumstances.

And this, in turn, seems calculated to degrade our moral intuitions. I suspect I would not want to meet a person who had grown up around house-elves and Ameglian Major Cows, and therefore expected that all intelligences were similarly eager for exploitation.

comment by ArisKatsaris · 2012-01-01T18:18:39.169Z · LW(p) · GW(p)

I assume that it's not the "eating" that's the ethical problem, but the killing?

So, can't we boil the question down to its essentials: "is it ethical to kill creatures that want to be killed (and like and approve of it)"?

Replies from: magfrump
comment by magfrump · 2012-01-01T19:25:54.603Z · LW(p) · GW(p)

Oops.

comment by Multiheaded · 2012-01-04T23:05:51.700Z · LW(p) · GW(p)

(here's another one of my crap confessions)

This reminds me of how, when I was about 18 and craved sexual excitement (more than I do now), I correctly guessed that shock, shame and guilt would make great aphrodisiacs for a mind like mine (although they wear off quickly). So I started looking for extreme and transgressive porn on the net. What I found to my liking was guro hentai and written BDSM stories that were extreme to the point of lethality ("snuff"). However, I realized that descriptions of blatant non-consent turned on some defense mechanisms that gave me a measure of disgust and anger, not arousal, so I turned to "consensual" extreme material.

I was, back then, amazed to discover that there was a large number of stories featuring entirely willing torture, sexualized killings, and, yes, cannibalism. So, yes, I've read a fair amount of descriptions of humans desiring, eroticizing and actively seeking out being cooked and eaten. Some were quite competently written.

Soon enough, however, the novelty and the psychological effects wore off, and now I prefer much more mild BDSM porn; I feel neither shocked by nor drawn to the stuff I described.

Not really sure if all that is acceptable to disclose on LW, but I suspect that most of you folks won't be outraged or creeped out.

comment by satt · 2012-01-02T16:54:45.243Z · LW(p) · GW(p)

I don't have much beef with that guy who killed and ate a few kilograms of a human volunteer he found on the Internet; eating a cow with human smarts that wants to be eaten doesn't seem qualitatively worse.

Replies from: r_claypool
comment by r_claypool · 2012-01-04T00:45:53.766Z · LW(p) · GW(p)

I would not want that guy in my neighborhood. I want to live around people who will not eat me, even if I go crazy.

comment by FAWS · 2012-01-02T19:39:21.796Z · LW(p) · GW(p)

Yes.

comment by scientism · 2012-01-02T14:59:20.614Z · LW(p) · GW(p)

Probably not. I take it as uncontroversial that some people are insane or mentally unstable and their wants/desires should not be fulfilled. The way to probe this possibility is to ask for a justification of the want/desire. So I'd ask the cow to give reasons for wanting to be eaten. It's hard for me to see how those reasons could be convincing. Certainly "because I'm a cow" wouldn't convince me. I can imagine that an intelligent cow might long for death and seek assisted suicide, since being an intelligent cow would be rather like being a severely disabled human being, but the part where he or she wants to be eaten is alarming.

Replies from: fubarobfusco, MileyCyrus, TheOtherDave, taelor
comment by fubarobfusco · 2012-01-02T18:11:01.003Z · LW(p) · GW(p)

So I'd ask the cow to give reasons for wanting to be eaten. It's hard for me to see how those reasons could be convincing. Certainly "because I'm a cow" wouldn't convince me.

"Why do you want to be eaten?"

"It seems nice. Why do you want to have sex with attractive members of your species?"

"Because it gives me pleasure."

"But you're not having that pleasure right now. You're just anticipating it. Your anticipation is something happening in your brain now, irrespective of whether a particular sex act would actually turn out to be pleasurable. Similarly, my desire to be eaten is happening in my brain now, fully aware of the fact that I won't be around to notice it if it happens. Not that different."

"So? I know where my desire to have sex comes from — it comes from my evolutionary past; members of my ancestors' generations who didn't want sex are much less likely to have had kids. They died alone, or became monks or something."

"And I know where my desire to be eaten comes from — it comes from my genetically-engineered past; members of my ancestors' generations who didn't want to be eaten were discarded. Their bodies ended up being destroyed without being eaten! (And, of course, without having their DNA cloned and propagated.)"

"But that's artificial! You were manipulated!"

"No, I wasn't. My ancestral environment and my ancestors' genes were manipulated; just as yours were manipulated by sexual selection. I've personally never met a genetic engineer in my life! My experience of it is just that ever since puberty, I've really wanted someone to eat me. It's just the sort of organism I am. Human gets horny; cow gets tasty."

"So what you're saying is, you want to be eaten because ..."

"... because I'm a cow, pretty much. So ... steak?"

Replies from: Prismattic
comment by Prismattic · 2012-01-02T20:50:32.241Z · LW(p) · GW(p)

This whole thread puts me in mind of this for some reason.

comment by MileyCyrus · 2012-01-02T16:42:48.739Z · LW(p) · GW(p)

Do you think consensual BDSM is unethical?

Replies from: scientism
comment by scientism · 2012-01-03T00:35:36.399Z · LW(p) · GW(p)

I think that in some extreme forms the ability to consent is called into question. That's what I'm claiming with the cow: The cow's desire is extreme enough to call into question its sanity, which would render it unable to consent, which would make the act unethical. I would say the same about any form of BDSM that results in death.

Replies from: MileyCyrus, TheOtherDave, shminux
comment by MileyCyrus · 2012-01-03T03:59:15.401Z · LW(p) · GW(p)

The cow's desire is extreme enough to call into question its sanity, which would render it unable to consent, which would make the act unethical. I would say the same about any form of BDSM that results in death.

The cow goes to a psychiatrist. The psychiatrist notes that she shows none of the typical signs of insanity: delusional beliefs, poor self-control, emotional distress. The cow simply values being eaten.

If that wouldn't convince you that the cow was sane, what would?

comment by TheOtherDave · 2012-01-03T02:40:51.358Z · LW(p) · GW(p)

Let me make sure I understand this: the fact that the cow consents to death is sufficient evidence to justify the conclusion that the cow is unable to meaningfully consent to death?

Replies from: scientism
comment by scientism · 2012-01-03T16:04:09.720Z · LW(p) · GW(p)

No, absolutely not. The fact that the cow consents to being eaten is potentially evidence that the cow is unable to meaningfully consent to death. Again, the cow might have good reasons to want to die - it might even have good reasons to not care about whether you eat it or not after it's dead - but what I'm disputing is whether it can have good reasons to want to be eaten. These are all extremely different things. Likewise, there may be good reasons for a person to want to die, but sexual gratification is not a good reason and it's highly likely to signify mental derangement.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-03T16:23:34.523Z · LW(p) · GW(p)

So, I've asked this elsewhere, but... why is "well, geez, it's more useful than just having me rot in the ground" not a good enough reason to prefer (and not just be indifferent to) being eaten after I'm dead?

Conversely, what makes wanting to be buried underground after I die not evidence that I'm unable to consent? (Many people in the real world seem to have this desire.)

(I don't mean to collide with the cryonics conversation here; we can assume my brain has been cryopreserved in all of these cases if we like. Or not. It has nothing to do with my question.)

Replies from: scientism
comment by scientism · 2012-01-03T16:56:22.830Z · LW(p) · GW(p)

There's a difference between wanting to be eaten and wanting to die in addition to either being indifferent to being eaten afterwards or preferring it. The difference is that in the former case dying is a consequence of the desire to be eaten whereas in the latter case presumably the cow would have a reason to want to die in addition to its preference to want to be eaten afterwards.

The cow that wants to be eaten does not necessarily want to die at all. Death is a consequence of fulfilling its desire to be eaten and to want to be eaten implies that it finds dying an acceptable consequence of being eaten but no more. The cow could say "I don't want to die, I love living, but I want to be eaten and I'm willing to accept the consequences." It could simply value being eaten over living without necessarily wanting to die.

Likewise, I can say, "I don't want to die, but if I do, I'd like to be buried afterwards," and this is obviously a very different thing than saying "I want to be buried, and if I have to die in order to be buried I'm willing to accept that consequence."

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-03T17:56:18.649Z · LW(p) · GW(p)

Ah, OK. When you said "it might even have good reasons to not care about whether you eat it or not after it's dead - but what I'm disputing is whether it can have good reasons to want to be eaten" I thought you were contrasting indifference with active desire.

Sure, I agree that there's a relevant difference between wanting X after I die and wanting X now, especially when X will kill me.

So, OK, revising... is the fact that the cow desires being eaten enough to accept death as a consequence of satisfying that desire sufficient evidence to justify the conclusion that the cow is unable to meaningfully consent to death?

comment by shminux · 2012-01-03T00:55:44.603Z · LW(p) · GW(p)

So, any suicide attempt must be prevented?

comment by TheOtherDave · 2012-01-02T16:04:34.619Z · LW(p) · GW(p)

That last part seems backward to me. If I'm going to die anyway, why shouldn't I want to be eaten? My corpse has nutritional value; I generally prefer that valuable things be used rather than discarded.

Understand, I don't want to be eaten when I die, but it seems clear to me that I'm the irrational one here, not the cow. It's just that my irrationality on the matter is conventional.

Replies from: Prismattic
comment by Prismattic · 2012-01-02T17:51:01.491Z · LW(p) · GW(p)

The UK supposedly has a rule where you can take home and eat roadkill that you find, but not roadkill that you yourself were responsible for killing. The theoretical incentive problem is fairly obvious, even though enforceability is not.

The blanket prohibition on eating people whether or not they want to be eaten after they die may make sense in terms of not incentivizing other people to terminate them ahead of schedule.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-02T19:41:13.645Z · LW(p) · GW(p)

Sure, I agree that it may be in our collective best interests to prevent individuals from eating one another, whether they want to be eaten or not. It may even be in our best interests to force individuals to assert that they don't want to be eaten, and if so, it's probably best for them to do so sincerely rather than lie, since sincere belief is a much more reliable source of such assertions.

I just deny that their desire to be eaten, should they have it, is irrational.

comment by taelor · 2012-01-02T21:53:07.675Z · LW(p) · GW(p)

Probably not. I take it as uncontroversial that some people are insane or mentally unstable and their wants/desires should not be fulfilled. The way to probe this possibility is to ask for a justification of the want/desire.

So, you'd ask the cow to derive ought from is?

comment by Emile · 2012-01-01T10:31:02.162Z · LW(p) · GW(p)

I would definitely be OK with it if the cow was also uploaded before death (though that's far from the least convenient world).

comment by shminux · 2012-01-01T09:56:40.261Z · LW(p) · GW(p)

Yes, provided it was ethical to breed it.

Replies from: TimS
comment by TimS · 2012-01-01T17:55:33.852Z · LW(p) · GW(p)

I'm not sure there's a causal relationship. If you met a cow in the wild that wanted to be eaten, I think it would be ethical to eat it.

But I strongly believe that creating such a cow would be unethical.

Replies from: daenerys
comment by daenerys · 2012-01-01T18:01:53.878Z · LW(p) · GW(p)

I agree. I think there is no difference between creating an intelligent cow that wants to be eaten vs. creating a human that tastes good, and wants to be eaten.

comment by Manfred · 2012-01-01T08:44:59.012Z · LW(p) · GW(p)

Depends on what you mean by "ethical," as many things do.

For me personally, it doesn't seem that bad. However, I don't think it's the optimal outcome - maybe we could encourage the cow to take up some other interests?

comment by SilasBarta · 2012-01-04T16:43:35.520Z · LW(p) · GW(p)

In the spirit of dissolving questions (like Yvain did very well for disease), I wanted to give an off-the-cuff breakdown of a similarly contentious issue: use of the term "design", as in "cats are designed to be good hunters [of small animals]" or "knives are designed to cut".

Generally, people find both of those intuitive, which leads to a lot of unnecessary dispute between reductionists and anti-reductionists, with the latter claiming that the former implicitly bring teleology into biology.

So, here's what I think is going on when people make a "design" evaluation, or otherwise find the term intuitively applicable: there are a few criteria that make something seem "designed", and if enough of them are satisfied, it "feels" designed. Further, I think there are three criteria, and people call something designed if it meets two of them. Here's how it works.

"X is designed to Y" if at least two of these are met:

1) Goodness: X is good at Y.
2) Narrowness: The things that make X better at Y make it worse at other things normally similar to Y.
3) Human intent: A human crafted X with the intent that it be used for Y. (Alternately: replace human with "an intelligent being".)

(You can think of it as a "hub-and-spoke" neural net model where each of the criteria's being activated make the "design" judgment stronger.)

"A knife is designed to cut" meets all three, and we have no problem calling it designed for that. Likewise for "A computer is designed to do computations quickly".

Now for some harder cases that create fake disagreements:

"A cat is designed to hunt small animals." Cats are good at hunting mice, etc, so it meets 1. They weren't human-designed to catch small animals so they fail 3. Finally, a lot of the things that make them good at catching mice make them unsuited for other purposes. For example, their (less-)social mentality and desire to keep themselves clean makes them harder for mice to detect when hunting, but it prevents them from using hunting tactics that dogs use on larger animals (e.g. "split the pack up and have one of them chase the prey downwind to the rest of the pack").

Thus, the cat example meets 2, and this "narrow optimality" gives us the "feel" of it being designed, and has historically led humans to equate this with 3.

How about this one: "a stone is designed to hurt people". Stones are good at hurting people, so they pass 1. However, they are not narrowly good, nor were they specifically crafted with the intent of hurting people. Thus, we generally don't get the feeling of stones being designed to hurt people.

Try this test out for yourself on things that "feel" designed or non-designed.
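Here's a toy version of the test, if anyone wants something concrete to poke at (the example judgments in it are just my own illustrations, not claims):

```python
# Sketch of the 2-of-3 "design" rule described above.
def feels_designed(goodness, narrowness, human_intent):
    """'X is designed to Y' if at least two of the three criteria are met."""
    return sum([goodness, narrowness, human_intent]) >= 2

examples = {
    "knife, for cutting": (True, True, True),               # all three criteria
    "cat, for hunting small animals": (True, True, False),  # goodness + narrowness
    "stone, for hurting people": (True, False, False),      # goodness only
}

for thing, criteria in examples.items():
    verdict = "feels designed" if feels_designed(*criteria) else "does not feel designed"
    print(f"{thing}: {verdict}")
```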

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-04T16:49:28.526Z · LW(p) · GW(p)

In the spirit of dissolving questions (like Yvain did very well for )

That's the most literally dissolved question I've ever seen.

Replies from: SilasBarta
comment by SilasBarta · 2012-01-04T18:11:06.789Z · LW(p) · GW(p)

Sorry, fixed. I was trying to remove the google crap from the URL when I got the search results but overdid it to the point that even the anchor text disappeared.

comment by D_Malik · 2012-01-02T19:47:00.851Z · LW(p) · GW(p)

I'm interested in seeing statistics about what LWers do in their daily lives. In that vein, I'm thinking of a survey about what lifehacks the average LWer uses. This could help people decide which lifehacks to try out.

For instance, most readers of this comment probably know what SRS, nootropics, n-back, self-tracking, polyphasic sleep and so on are, but I'd guess less than 10% of you currently practice each. To reduce the barriers to entry to people actually trying to improve their lives with these tools, we need to get some quick clues about what's worthwhile.

The survey could also measure the reasons why people don't employ certain lifehacks. Is it because they're lazy or because they doubt the lifehack's efficacy or because they've never heard of the hack?

comment by OpenThreadGuy · 2012-01-16T00:36:12.482Z · LW(p) · GW(p)

I plan to make a post for the second half of this month because daenerys hasn't made one yet. Can I have a couple of karma points so I can make it happen?

Replies from: None
comment by [deleted] · 2012-02-16T00:11:42.561Z · LW(p) · GW(p)

You literally are open thread guy; I must admit I didn't expect that.

comment by [deleted] · 2012-01-02T12:34:50.218Z · LW(p) · GW(p)

Are this video (Thomas Sowell on Intellectuals and Society) and the ideas, concepts and arguments in it worth discussing on LW in a separate discussion thread? By which I mean: am I underestimating the mind-killing triggered by some of the political criticisms made by Thomas Sowell in it? Obviously he's politically biased in his own direction, but the fundamental idea that public intellectuals are basically rent-seekers seems alarmingly plausible.

Also, since Thomas Sowell is obviously an intellectual, his criticism should reduce our trust in his criticism. ;)

Replies from: Vaniver
comment by Vaniver · 2012-01-02T13:10:25.757Z · LW(p) · GW(p)

The Sowell I've read has been fascinating and useful, but I agree with much of his politics, and his insights seem mostly confined to politics (and economics). I'm not sure it's worth discussing videos as much as books: they introduce a handful of ideas, but without the extensive justifications and clarifications that exist in the book. That suggests to me that bias will play a larger role in interpretation than it would when considering the full argument.

Replies from: None
comment by [deleted] · 2012-01-02T13:23:21.703Z · LW(p) · GW(p)

Maybe a thread about the book he mentions in the video. I'll need to actually read it then. :)

comment by juliawise · 2012-01-01T16:27:33.153Z · LW(p) · GW(p)

Timelapse video of the Milky Way. Stunningly beautiful. I knew I lived on a planet spinning through space, but I'd never felt it viscerally.

comment by Vladimir_Nesov · 2012-06-09T00:02:15.135Z · LW(p) · GW(p)

One more item for the FAI Critical Failure Table (humor/theory of lawful magic):

37. Any possibility automatically becomes real, whenever someone justifiably expects that possibility to obtain.

Discussion: Just expecting something isn't enough, so crazy people don't make crazy things happen. The anticipation has to be a reflection of real reasons for forming the anticipation (a justified belief). Bad things can be expected to happen as well as good things. What actually happens doesn't need to be understood in detail by anyone, the expectation only has to be close enough to the real effect, so the details of expectation-caused phenomena can lawfully exist independently of the content of people's expectations about them. Since a (justified) expectation is sufficient for something to happen, all sorts of miracles can happen. Since to happen, a miracle has to be expected to happen, it's necessary for someone to know about the miracle and to expect it to happen. Learning about a miracle from an untrustworthy (or mistakenly trusted) source doesn't make it happen, it's necessary for the knowledge of possibility (and sufficiently clear description) of a miracle to be communicated reliably (within the tolerance of what counts for an effect to have been correctly anticipated). The path of a powerful wizard is to study the world and its history, in order to make correct inferences about what's possible, thereby making it possible.

comment by David_Gerard · 2012-01-10T11:38:08.164Z · LW(p) · GW(p)

Cracked delivers: 6 Small Math Errors That Caused Huge Disasters.

The first paragraph hook is perfect:

If there are any children reading this, there's really only one thing we want to tell you about adulthood: If you make one tiny mistake, people will die.

Replies from: None
comment by [deleted] · 2012-01-10T13:10:00.605Z · LW(p) · GW(p)

Good article, but they missed a great opportunity to talk about hindsight bias. Those mistakes are only "laughably simple" after the fact.

comment by Prismattic · 2012-01-08T22:37:46.218Z · LW(p) · GW(p)

This correction from the NYT seems almost designed as LW-linkbait for miscellaneous humor.

comment by thescoundrel · 2012-01-06T04:33:51.619Z · LW(p) · GW(p)

My grandmother died tonight. If you are on the fence, or procrastinating, please sign up for cryonics. It costs less than signing up for high-speed internet for a year, and opens up the possibility of not ceasing to exist. On a side note, I learned again tonight that atheism is no sure sign of rationality: http://www.reddit.com/r/atheism/comments/o4x5u/think_you_are_rational_my_grandmother_died/

Then again, I was not at my most elegant.

Rejoice, all you who read these words, for tonight we live, and so have the continued hope of escaping death.

comment by Multiheaded · 2012-01-07T22:10:20.193Z · LW(p) · GW(p)

Life would've been a hell of a lot more fun if it worked on hentai logic. Of course, the grass being greener on the other side, people would've probably consumed anti-porn in such a universe. Like, about celibate nuns that actually stay celibate and nothing sexual ever happens to them. This'd be as over-the-top ridiculous as tentacle sex is to us.

Replies from: MixedNuts, orthonormal
comment by MixedNuts · 2012-01-07T22:30:40.322Z · LW(p) · GW(p)

(My taste is heavily slanted in favor of yaoi, but I am told the trend is only slightly weaker in straight porn.) An interesting feature of hentai logic is that some people (women and ukes, or in some works everyone) do not know if they want sex (and usually hold the false belief they do not), whereas their partners have perfect knowledge of their wishes. The connection can be universal, conditional on the active partner loving or lusting after the passive one, or the other way around.

It would be pleasant, for people who enjoy surprises, to really have such a thing as "surprise sex you didn't know you wanted". People who don't can top, and people who don't want that all the time can switch; two tops together produces the current situation and two bottoms together produces the mirrored version of the trope where both partners initiate sex they were initially reluctant to have because they magically sense their partner's desire (though it requires ignorance about one's preference, not false belief that I WOULD NEVER). Having your desires known to your partners is obviously necessary as a replacement[*] if any sex is going to happen; while some may enjoy the intimacy of having the connection depend on mutual feelings, it may be more convenient to have it be universal so that the knowledge is available at all times.

Also, no work would ever get done.

[*] Edit: Actually, one could also have a central authority that looks into the hearts and pants of everyone, and decrees "You two. Get it on.".

Replies from: Multiheaded
comment by Multiheaded · 2012-01-07T22:43:45.881Z · LW(p) · GW(p)

(Reasonable enough, you'd better forward this to Eliezer so that he can think through that possibility deeper and harder for his planned story)

comment by orthonormal · 2012-01-08T00:10:42.447Z · LW(p) · GW(p)

I kept looking for the parent of this comment to figure out what the hell the context was.

comment by David_Gerard · 2012-01-07T18:37:56.816Z · LW(p) · GW(p)

You'll be delighted, I'm sure, to know that the unimpeachably robust scientific researchers of the Discovery Institute have been writing about transhumanism.

Replies from: endoself, David_Gerard
comment by endoself · 2012-01-07T22:59:21.452Z · LW(p) · GW(p)

He invokes standard rhetoric to decry transhumanists' desire to increase their intelligence, but not their compassion. I think this raises an interesting question: are there any drugs that are at least somewhat likely to increase compassion and who is experimenting with them?

Replies from: Nornagest, TheOtherDave
comment by Nornagest · 2012-01-12T19:26:25.188Z · LW(p) · GW(p)

I'm not a biochemist by any means, but I was looking into this a few months back. I was able to find some people experimenting with intranasal oxytocin, who reported increases in subjective trust and empathy; the biological half-life of that delivery route seems very short, though, on the order of minutes.

Then there's the well-known empathogen family, of course.

comment by TheOtherDave · 2012-01-08T00:26:50.261Z · LW(p) · GW(p)

I vaguely recall some results that indicated greater likelihood of experiencing an emotional reaction in response to seeing someone else modeling that emotion (e.g., reporting being sad when someone else in the same room was crying) after taking some drug or another... serotonin, maybe?... but it was many years ago and I may well be misremembering.

Which isn't quite the same thing as compassion, of course, but it wouldn't surprise me to discover they were correlated.

Replies from: endoself
comment by endoself · 2012-01-08T19:18:46.776Z · LW(p) · GW(p)

People at the longecity forum seem to be taking serotonin precursors, including 5-HTP and tryptophan. I'm not sure exactly what benefits they expect to receive, but improved mood is at least one of them.

comment by David_Gerard · 2012-01-08T09:09:17.364Z · LW(p) · GW(p)

And, of course, a conspiracy theorist analysis of the sinister transhumanist cult. I bet you didn't know ems have already been achieved. (courtesy ciphergoth)

comment by Tripitaka · 2012-01-03T17:53:27.999Z · LW(p) · GW(p)

In accordance with this comment and the high number of upvotes it received, here is the place to list further threads worthy of preservation, advertising, or rejuvenation; also to list short-term/long-term solutions to this problem.

List of threads:

Possible super-easy solutions to better ensure visibility of these threads:

  • make a wiki-page
  • reposting in every other "monthly open thread"
  • include a link in "welcome to lesswrong"-thread

Any and all suggestions are welcome, especially long-term solutions; and I hereby precommit to make a wiki page with the suggestions a week from now and post it in at least the next two open threads.

comment by [deleted] · 2012-01-12T18:46:26.271Z · LW(p) · GW(p)

Real Names Don't Make for Better Commenters, but Pseudonyms Do

Replies from: Nornagest
comment by Nornagest · 2012-01-12T18:53:36.347Z · LW(p) · GW(p)

Interesting that pseudonymous commentators get both more positive and more negative feedback than real-named commentators; granted, the difference on at least the latter is pretty small, and from the Slate article I have no idea if it's statistically significant. (Suspect not, actually; if only four percent of the data set came from people posting under real names, the data set would have to be very large for a 3% spread between that and the pseudonymous baseline to be significant.)

Assuming it is, the first possibility that comes to mind is that pseudonyms lend themselves to constructed identity in a way that real names don't, and that constructed identities tend to be amplified in some sense relative to natural ones, leading to stronger reactions. That's pretty speculative, though.
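For a rough sense of scale, here's a back-of-the-envelope check (the feedback rates below are made up; only the roughly 3% gap and the 4% real-name share come from the article):

```python
# Hypothetical two-proportion z-test: how large would the sample need to be for a
# 3-percentage-point gap to be significant when only 4% of comments use real names?
import math

def z_two_proportions(p_real, p_pseudo, n_total, real_share=0.04):
    n_real = n_total * real_share
    n_pseudo = n_total * (1 - real_share)
    p_pool = (p_real * n_real + p_pseudo * n_pseudo) / (n_real + n_pseudo)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_real + 1 / n_pseudo))
    return (p_pseudo - p_real) / se

for n in (10_000, 100_000, 1_000_000):
    z = z_two_proportions(p_real=0.58, p_pseudo=0.61, n_total=n)
    verdict = "significant" if abs(z) > 1.96 else "not significant"
    print(f"n = {n:>9,}: z = {z:.2f} ({verdict} at p < .05)")
```

With these made-up rates, the gap wouldn't clear the bar at tens of thousands of comments but would at hundreds of thousands, so it really depends on how large the Disqus sample was.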

comment by ahartell · 2012-01-02T19:37:33.041Z · LW(p) · GW(p)

I think it would be cool to have some sort of asterisk next to the usernames of people who have declared "Crocker's Rules". Would that be hard to implement?

Replies from: hamnox
comment by hamnox · 2012-01-02T21:44:39.946Z · LW(p) · GW(p)

I think it was generally assumed that people posting on this site are looking for information rather than social niceties.

Replies from: ahartell
comment by ahartell · 2012-01-02T21:51:26.263Z · LW(p) · GW(p)

I'm not totally sure what you're saying here, but if it's that "it's safe to assume everyone here is operating by Crocker's Rules", then I don't think you're right.

comment by Curiouskid · 2012-01-01T22:50:20.448Z · LW(p) · GW(p)

Funny comic = a straw vulcan of a rationalist relationship.

http://www.smbc-comics.com/index.php?db=comics&id=2474#comic

Replies from: fubarobfusco
comment by fubarobfusco · 2012-01-02T03:51:55.374Z · LW(p) · GW(p)

And a good example of the true rejection problem.

comment by MileyCyrus · 2012-01-01T07:34:14.248Z · LW(p) · GW(p)

What are the warmest gloves known to humanity? I have poor circulation in my hands, and none of the store-bought gloves ever keep my hands warm outside.

Replies from: shminux, jswan, Douglas_Knight, Vladimir_Golovin, billswift, Manfred
comment by shminux · 2012-01-01T09:55:55.094Z · LW(p) · GW(p)

you can get a battery-operated hand-warmer insert

Replies from: David_Gerard
comment by David_Gerard · 2012-01-01T13:45:59.656Z · LW(p) · GW(p)

I have many friends with bad circulation who swear by the various available USB warming gloves. Not so useful outside, but the obvious Google search turns up many leads.

comment by jswan · 2012-01-06T01:57:24.227Z · LW(p) · GW(p)

I have poor circulation (a touch of Raynaud's syndrome) as well, and I've tried a great many products in the context of cycling, ice climbing, and just generally being outside in the cold. The short answer is that there are no gloves that will reliably keep your hands warm and allow you to retain dexterity if you're not getting your heart rate up to promote circulation. Mittens work better, by far. In no particular order, here are some more long-winded tips:

1) Use mittens whenever possible. Ones that allow skin-to-skin contact between your fingers work best.

2) Keep gloves in your pockets and switch from the mittens to the gloves when you need dexterity.

3) Cut off a pair of small wool socks to make wrist warmers. This helps but isn't a panacea.

4) Use chemical handwarmers when necessary.

5) If you have to use gloves, some relatively cheap options that work well include, in order of warmth: a) freezer gloves, b) lined elk skin gloves (available at large hardware stores), c) Gore Windstopper gloves, available in outdoor shops.

6) Try to keep your heart rate up when outside, with your hands below your heart. This helps a lot.

7) Never wear wet gloves. If you're going to get wet, alternate two or more pairs of gloves and keep the extras inside your jacket where they will stay warm and dry out a bit.

8) Consider vapor barrier gloves or mittens from RBH Designs if you want to spend some real money. I have not personally tried their handwear, but their vapor barrier socks are impressively warm and perform as advertised.

comment by Douglas_Knight · 2012-01-04T06:00:49.783Z · LW(p) · GW(p)

Better insulating your torso will increase circulation to your extremities. It's probably easier.

comment by Vladimir_Golovin · 2012-01-01T12:58:41.325Z · LW(p) · GW(p)

Next time we have a real winter here, I'm getting me a pair of thermal gloves from an outdoor sports shop. Haven't tried them yet, but I'm very satisfied with my pair of thermal underwear I bought last winter.

Replies from: Multiheaded
comment by Multiheaded · 2012-01-01T18:25:20.114Z · LW(p) · GW(p)

I got a kickass coat at a sports shop last month, literally half as thick and heavy as my other one yet better insulated cause it's made out of technobabble. Didn't know that winter clothing has such a gradient of quality.

Replies from: Alicorn, Xachariah
comment by Alicorn · 2012-01-02T05:34:17.469Z · LW(p) · GW(p)

it's made out of technobabble

Stealing this phrase.

comment by Xachariah · 2012-01-02T22:17:26.775Z · LW(p) · GW(p)

Would you be willing to check and see what the material/name/brand is?

I would be interested in purchasing such a coat, or one which is similar.

Replies from: Multiheaded
comment by Multiheaded · 2012-01-04T22:43:20.986Z · LW(p) · GW(p)

It's Outventure.

comment by billswift · 2012-01-01T10:18:02.907Z · LW(p) · GW(p)

I have had to work outside in cold weather, often with thin gloves. Try getting a coat with better-insulated sleeves, or even adding insulation (think leg warmers) to your existing coat. Also, knit wristlets will help an amazing amount, too - not only is there often a variable gap between cuff and glove, but that is where our circulation is closest to the surface. Also, mittens are always warmer than gloves of similar, or even quite a bit heavier, weight. If you don't need to use your fingers, try them; and despite the other comment, they are usually warmer without an inner glove.

comment by Manfred · 2012-01-01T08:55:15.949Z · LW(p) · GW(p)

Maybe try light gloves (like knit) inside a big pair of water-resistant mittens.

comment by [deleted] · 2012-01-12T16:52:38.760Z · LW(p) · GW(p)

I was wondering, why aren't articles that are part of a sequence tagged as such?

For example: fun theory sequence

I often get linked to one of the older articles and then have to do a search first to figure out which sequence it is a part of and then to find the other parts of the sequence.

comment by [deleted] · 2012-01-12T16:24:03.491Z · LW(p) · GW(p)

I wasn't sure this was worth its own discussion thread, so I dumped it here instead.

Manna

A neat science fiction story set in the period of transition to an automated society, and later in a post-scarcity world. There are several problems with it, however; the greatest is that so far the author seems to have assumed that "want" and "envy" are primarily tied to material needs. This is simply not true.

I would love to live in a society with material equality at a sufficiently high standard; I'd however hate to live in a society with enforced social equality, simply because that would override my preferences and my freedom to interact or not interact with whomever I wish.

Also, since things like the willpower to work out (to stay in top athletic condition, even!) or not having the resources to fulfil even basic plans are made irrelevant, things like genetic inequality, how comfortable you are messing with your own hardware to upgrade your capabilities, or how much time you dedicate to self-improvement would be more important than ever.

I predict social inequality would be pretty high in this society, and mostly involuntary. Even for a decision about something like how much time you spend on self-improvement, which you could presumably change later, there wouldn't be a good way to catch up with anyone (think opportunity cost and compound interest), unless technological progress hit diminishing returns and slowed down. Social inequality would, I would guess, be more limited than pure financial inequality because of things like Dunbar's number. There would still be tragedy (that may be a feature rather than a bug of utopia). I guess people would be comfortable with gods above and beasts below them that don't really figure in the "my social status compared to others" part of the brain, but even within the narrow band where you do care, inequality would grow rapidly. Eventually you might find yourself alone in your specific spot.

To get back to my previous point about probable (to me) unacceptable limitations on freedom: it may seem silly that a society with material equality would legislate intrusive and micromanaging rules that would force social equality to prevent this, but the hunter-gatherer instincts in us are strong. We demand equality. We enjoy bringing about "equality". We look good demanding equality. Once material needs are met, this powerful urge will still be there and bring about signalling races. And ever new ways to avoid the edicts produced by such races (because also strong in us is our desire to be personally unequal or superior to someone, to distinguish and discriminate in our personal lives). This would play out in interesting and potentially dystopian ways.

I'm pretty sure the vast majority of people in the Australia Project would end up wireheading. Why bother going to the Moon when you can have a perfect virtual reality replica of it; why bother with the status of building a real fusion reactor when you can just play a gamified, simplified version and simulate the same social reward; why bother with a real relationship, etc. Dedicating resources to something like a real-life space elevator simply wouldn't cross their minds. People, I think, systematically overestimate how much something being "real" matters to them. Better and better also means better and better virtual super-stimuli. Among the tiny faction of remaining "peas" (those choosing to spend most of their time in physical existence), there would be very few that would choose to have children, but they would dominate the future. Also, I see no reason why the US couldn't buy technology from the Australia Project to use for its own welfare-dependent citizens. Instead of the cheap mega-shelters, just hook them up on virtual reality, with no choice in the matter. Which would make a tiny fraction of them deeply unhappy.

I maintain that the human brain's default response to unlimited control of its own sensory input and reasonable security of continued existence is solipsism. And the default of a society of human brains with such technology is first social fragmentation, then value fragmentation, and eventually a return to living under the yoke of an essentially Darwinian process. Speaking of which, the society of the US as described in the story would probably outpace Australia, since it would have machines do its research and development.

It would take some time for the value this creates to run out, though. Much like Robin Hanson finds glorious a future with a dream time of utopia followed by trillions of slaves, I still find a few subjective millennia of a golden age followed by non-human and inhuman minds to be worth it.

It is not like we have to choose between infinity and something finite; the universe seems to have an expiration date as it is. A few thousand or million years doesn't seem like something fleas on an insignificant speck should sneer at.

comment by [deleted] · 2012-01-01T16:21:35.556Z · LW(p) · GW(p)

Who do I talk to to change my Less Wrong username?

Replies from: Multiheaded, saturn
comment by Multiheaded · 2012-01-01T19:54:56.664Z · LW(p) · GW(p)

Are you hesitant to test the default hypothesis of "Eliezer Yudkowsky" because of the vast disutility of interrupting him with such insignificant earthly matters?

Replies from: None
comment by [deleted] · 2012-01-01T20:45:14.294Z · LW(p) · GW(p)

Yes. And if SI is letting Eliezer be webmaster as well as AI Researcher, then something is totally wrong.

Replies from: Multiheaded
comment by Multiheaded · 2012-01-01T20:50:04.690Z · LW(p) · GW(p)

It's certainly letting him be a fanfiction writer. And even a fanfiction reader too.

Replies from: None, moridinamael
comment by [deleted] · 2012-01-01T21:09:45.223Z · LW(p) · GW(p)

Yes, that's true. Perhaps I'd be more disturbed by this if I was more sane and enjoyed the fan fic less.

comment by moridinamael · 2012-01-02T17:56:14.323Z · LW(p) · GW(p)

That fanfiction has probably done more to raise the sanity waterline than lesswrong.com. I'm basing this assertion on the fact that five of my friends have read HP:MOR and all seemed to learn from it, but none have the patience or free time to invest in LW.

(I just realized that this kind of thought is why we have open threads. It's an observation I've been kicking around for a while but it never seems appropriate to bring up on this site.)

comment by saturn · 2012-02-26T01:38:12.786Z · LW(p) · GW(p)

Probably Matt, although he might tell you to just create a new account.

comment by [deleted] · 2012-01-01T14:45:53.828Z · LW(p) · GW(p)

What is everybody's Twitter handle? I want to follow you.

Mine is @michaelcurzi.

Replies from: David_Gerard, taelor, arundelo, Grognor, Jack
comment by David_Gerard · 2012-01-10T11:26:49.752Z · LW(p) · GW(p)

@davidgerard

By the way - if you use the web interface, you'll notice it's been sucking real bad lately. I've been finding the mobile site far more usable.

comment by arundelo · 2012-01-04T04:30:51.354Z · LW(p) · GW(p)

@arundelo. (I'm not very active.)

comment by Grognor · 2012-01-04T03:13:41.741Z · LW(p) · GW(p)

@Rongorg. I see you also follow Steven Kaas, who is the best tweeter.

comment by Jack · 2012-01-02T22:32:58.641Z · LW(p) · GW(p)

@dearerstill

More people should reply to this. I tweet for my audience so if Less Wrong people follow me I will tweet a) more and b) about things you will probably find interesting.

comment by [deleted] · 2012-02-09T16:41:31.594Z · LW(p) · GW(p)

Our brains are paranoid. The feeling illustrated by this comic is, I must unfortunately admit, pretty familiar.

comment by TimS · 2012-01-06T01:49:32.769Z · LW(p) · GW(p)

It seems uncontroversial that a substantial amount of behavior that society labels as altruistic (i.e. self-sacrificing) can be justified by decision-theoretic concepts like reputation and such. For example, the "altruistic" behavior of bonobos is strong evidence to me that decision theory can justify more altruism than I personally know how to derive from it. (Obviously, this assumes that bonobo behavior is as de Waal describes).

Still, I have an intuition that human morality cannot be completely justified on the basis of decision theory. Yes, superrationality and such, but that's not mathematically rigorous AFAIK and thus is susceptible to being used as a just-so story.

Does anyone else have this intuition? Can the sense that morality is more than game theory be justified by evidence or formal logic?

Replies from: Vladimir_Nesov, TheOtherDave
comment by Vladimir_Nesov · 2012-01-06T02:13:14.532Z · LW(p) · GW(p)

Morality is a goal, like making paperclips. That doesn't follow from game-theoretic considerations.

Replies from: TimS
comment by TimS · 2012-01-06T03:45:09.418Z · LW(p) · GW(p)

Fair enough. But I still have the intuition that a common property of moral theories is a commitment to instrumental values that require decisions different from those recommended by game theory.

One response is to assert that game theory is about maximizing utility, so any apparent contradiction between game theory and your values arises solely out of your confusion about the correct calculation of your utility function (i.e. the value should adjust the utility pay-out so that game theory recommends the decision that is consistent with your values). I find this answer unsatisfying, but I'm not sure if the dissatisfaction is rational.

comment by TheOtherDave · 2012-01-06T02:00:22.839Z · LW(p) · GW(p)

Yes, lots of other people have the intuition that human morality requires more than decision theory to justify it. For example, it's a common belief among several sorts of theists that one cannot have morality without some form of divine intervention.

Replies from: TimS
comment by TimS · 2012-01-06T03:39:09.550Z · LW(p) · GW(p)

I wasn't clear. My question wasn't about the justifications so much as the implications of morality.

In other words, is it a common property of moral theories that they call for different decisions than called for by decision theory?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-06T04:06:32.543Z · LW(p) · GW(p)

I suspect we're still talking past each other. Perhaps it will help to be concrete.

Can you give me an illustrative example of a situation where decision theory calls for a decision, where your intuition is that moral theories should/might/can call for a different decision?

Replies from: TimS
comment by TimS · 2012-01-06T05:23:10.851Z · LW(p) · GW(p)

I'm thinking of a variant of Parfit's Hitchhiker. Suppose the driver lets you in the car. When you get to the city, decision theory says not to pay.

To avoid that result, you can posit reputation-based justifications (protecting your own reputation, creating an incentive to rescue, etc). Or you can invoke third-party coercion (i.e. lawsuit for breach of contract). But I think it's very plausible to assert that these mechanisms wouldn't be relevant (it's a big, anonymous city, rescuing hitchhikers from peril is sufficiently uncommon, how do you start a lawsuit against someone who just walks away and disappears from your life).

Yet I think most moral theories currently in practice say to pay despite being able to get away with not paying.
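A toy version of the payoff reasoning, with made-up numbers (this is just my illustration of why the naive causal analysis says "don't pay" once you're already in the city):

```python
# Hypothetical payoffs for Parfit's Hitchhiker after the rescue has happened.
VALUE_OF_RESCUE = 1_000_000  # being saved from the desert
COST_OF_PAYING = 100         # the fare

def utility(rescued, paid):
    return (VALUE_OF_RESCUE if rescued else 0) - (COST_OF_PAYING if paid else 0)

# Standing in the city, the rescue is already a sunk benefit:
print("pay:      ", utility(rescued=True, paid=True))   # 999900
print("don't pay:", utility(rescued=True, paid=False))  # 1000000 -> "don't pay" wins

# The catch: a driver who predicts this reasoning never stops in the first place;
# the variant above stipulates away the usual reputation/contract fixes, which is the point.
```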

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-06T15:03:12.774Z · LW(p) · GW(p)

OK, I think I understand what you're saying a little better... thanks for clarifying.

It seems to me that decision theory simply tells me that if I estimate that paying the driver improves the state of the world (including the driver) by some amount that I value more than I value the loss to me, then I should pay the driver, and if not I shouldn't. And in principle it gives me some tools for estimating the effect on the world of paying or not-paying the driver, which in practice often boil down to "answer hazy, try again later".

Whereas most moral theories tell me whether I should pay the driver or not, and the most popularly articulated real-world moral theories tell me to pay the driver without bothering to estimate the effect of that action on the world in the first place. Which makes sense, if I can't reliably estimate that effect anyway.

So I guess I'd say that detailed human morality in principle can be justified by decision theory and a small number of value choices (e.g., how does value-to-me compare to value-to-the-world-other-than-me), but in practice humans can't do that, so instead we justify it by decision theory and a large number of value choices (e.g., how does fulfilling-my-commitments compare to blowing-off-my-commitments), and there's a big middle ground of cases where we probably could do that but we're not necessarily in the habit of doing so, so we end up making more value choices than we strictly speaking need to. (And our formal moral structures are therefore larger than they strictly speaking need to be, even given human limitations.)

And of course, the more distinct value choices I make, the greater the chance of finding some situation in which my values conflict.

comment by Multiheaded · 2012-01-01T18:02:54.452Z · LW(p) · GW(p)

Pleased to finally meet you, agent.

I'm A BOMB.