Open thread, August 19-25, 2013

post by David_Gerard · 2013-08-19T06:58:15.174Z · LW · GW · Legacy · 326 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

326 comments

Comments sorted by top scores.

comment by Omid · 2013-08-19T16:28:21.937Z · LW(p) · GW(p)

Commercials sound funnier if you mentally replace "up to" with "no more than."

Replies from: bbleeker
comment by Sabiola (bbleeker) · 2013-08-20T14:56:22.433Z · LW(p) · GW(p)

Also easier to translate. In fact, we often translate "up to" with "maximaal", the equivalent of "up to a maximum of" in Dutch. But of course that only translates the practical sense, and leaves out the implication of "up to a maximum of xx (and that is a LOT)". We could translate it with "wel" ("wel xx" ~ "even as much as xx"), but in most contexts, that sounds really... American, over the top, exaggerated. And also it doesn't sound exact enough, when it clearly is intended to be a hard limit.

comment by pan · 2013-08-19T22:12:51.663Z · LW(p) · GW(p)

Why doesn't CFAR just tape record one of the workshops and throw it on youtube? Or at least put the notes online and update them each time they change for the next workshop? It seems like these two things would take very little effort, and while not perfect, would be a good middle ground for those unable to attend a workshop.

I can definitely appreciate the idea that person-to-person learning can't be matched with these, but it seems to me that if the goal is to help the world through rationality, and not to make money by forcing people to attend workshops, then something like tape recording would make sense. (Not an attack on CFAR, just a question from someone not overly familiar with it.)

Replies from: sixes_and_sevens, ChristianKl, Benito, somervta
comment by sixes_and_sevens · 2013-08-21T11:39:43.494Z · LW(p) · GW(p)

I'm a keen swing dancer. Over the past year or so, a pair of internationally reputable swing dance teachers have been running something called "Swing 90X" (riffing off P90X). The idea is that you establish a local practice group, film your progress, submit your recordings to them, and they give you exercises and feedback over the course of 90 days. By the end of it, you're a significantly more badass dancer.

It would obviously be better if everything happened in person (and a lot does happen in person; there's a massive international swing dance scene), but time, money and travel constraints make this prohibitively difficult for a lot of people, and the whole Swing 90X thing is a response to this, which is significantly better than the next best thing.

It's worth considering whether a similar sort of model could work for CFAR training.

comment by ChristianKl · 2013-08-19T22:28:05.215Z · LW(p) · GW(p)

One of the core ideas of CFAR is to develop tools to teach rationality. For that purpose it's useful to avoid making the course material completely open at this point in time. CFAR wants to publish scientific papers that validate their ideas about teaching rationality.

Doing things in person helps with running experiments and those experiments might be less clear when some people already viewed the lectures online.

Replies from: pan, Frood
comment by pan · 2013-08-19T23:31:23.050Z · LW(p) · GW(p)

I guess I don't see why the two are mutually exclusive. I doubt everyone would stop attending workshops if the material were freely available, and I don't understand why something can't be published if it's open-sourced first?

comment by Frood · 2013-08-20T06:07:16.772Z · LW(p) · GW(p)

I'm guessing that the goal here is to gather information on how to teach rationality to the 'average' person? As in, the person off of the street who's never asked themselves "what do I think I know and how do I think I know it?". But as far as I can tell, LWers make up a large portion of the workshop attendees. Many of us will have already spent enough time reading articles/sequences about related topics that it's as if we've "already viewed the lectures online".

Also, it's not as if the entire internet is going to flock to the content the second that it gets posted. There will still be an endless pool of people to use in the experiments. And wouldn't the experiments be more informative if the data points weren't all paying participants with rationality as a high priority? Shouldn't the experiments involve trying to teach a random class of high-schoolers or something?

What am I missing?

Replies from: ChristianKl
comment by ChristianKl · 2013-08-26T17:26:30.315Z · LW(p) · GW(p)

And wouldn't the experiments be more informative if the data points weren't all paying participants with rationality as a high priority?

As far as I understand, that isn't the case. They do give out scholarships, so not everyone pays. I also think that they do testing of the techniques outside of the workshops.

Shouldn't the experiments involve trying to teach a random class of high-schoolers or something?

Doing research costs money, and CFAR seems to want to fund itself through workshop fees. If they focused on high school classes, they would need a different source of funding.

comment by Ben Pace (Benito) · 2013-08-21T13:59:35.837Z · LW(p) · GW(p)

Is a CFAR workshop like a lecture? I thought it would be closer to a group discussion, and perhaps subgroups within. This would make a recording highly unfocused and difficult to follow.

Replies from: somervta
comment by somervta · 2013-08-22T09:25:36.554Z · LW(p) · GW(p)

Any one unit in the workshop is probably something in between a lecture, a practice session and a discussion between the instructor and the attendees. Each unit is different in this respect. For most of the units, a recording of a session would probably not be very useful on its own.

comment by somervta · 2013-08-21T01:51:24.072Z · LW(p) · GW(p)

(April 2013 Workshop Attendee)

(The argument is that) A lot of the CFAR workshop material is very context dependent, and would lose significant value if distilled into text or video. Personally speaking, a lot of what I got out of the workshop was only achievable in the intensive environment - the casual discussion about the material, the reasons behind why you might want to do something, etc - a lot of it can't be conveyed in a one hour video. Now, maybe CFAR could go ahead and try to get at least some of the content value into videos, etc, but that has two concerns. One is the reputational problem with 'publishing' lesser-quality material, and the other is sorta-almost akin to the 'valley of bad rationality'. If you teach someone, say, the mechanics of aversion therapy, but not when to use it, or they learn a superficial version of the principle, that can be worse than never having learned it at all, and it seems plausible that this is true of some of the CFAR material also.

Replies from: pan
comment by pan · 2013-08-21T15:33:22.306Z · LW(p) · GW(p)

I agree that there are concerns, and you would lose a lot of the depth, but my real concern is with how this makes me perceive CFAR. When I am told that there are things I can't see or hear until I pay money, it makes me feel like it's all some sort of money-making scheme, and I question whether the goal is actually to teach as many people as much as possible, or just to maximize revenue. Again, let me clarify that I'm not trying to attack CFAR; I believe that they probably are an honest and good thing, but I'm trying to convey how I initially feel when I'm told that I can't get certain material until I pay money.

It's akin to my personal heuristic of never taking advice from anyone who stands to gain from my decision. Being told by people at CFAR that I can't see this material until I pay the money is the opposite of how I want to decide to attend a workshop. I instead want to see the tapes or read the raw material and decide on my own that I would benefit from being there in person.

Replies from: metastable, palladias, tgb, somervta
comment by metastable · 2013-08-21T19:16:22.074Z · LW(p) · GW(p)

Yeah, I feel these objections, and I don't think your heuristic is bad. I would say, though (and I hold no brief for CFAR, never having donated or attended a workshop), that there is another heuristic possibly worth considering: generally, more valuable products are not free. There are many exceptions to this, and it is possible for sellers to counterhack this common heuristic by using higher prices to falsely signal higher quality to consumers. But the heuristic is not worthless; it just has to be applied carefully.

comment by palladias · 2013-08-25T18:23:24.412Z · LW(p) · GW(p)

We do offer some free classes in the Bay Area. As we beta-test tweaks or work on developing new material, we invite people in to give us feedback on classes in development. We don't charge for these test sessions, and, if you're local, you can sign up here. Obviously, this is unfortunately geographically limited. We do have a sample workshop schedule up, so you can get a sense of what we teach.

If the written material online isn't enough, you can try to chat with one of us if we're in town (I dropped in on a NYC group at the beginning of August). Or you can drop in an application, and you'll automatically be chatting with one of us and can ask as many questions as you like in a one-on-one interview. Applying doesn't create any obligation to buy; the skype interview is meant to help both parties learn more about each other.

comment by tgb · 2013-08-21T18:28:37.815Z · LW(p) · GW(p)

While you have good points, I would like to say that making money is not unaligned with the goal of teaching as many people as possible. It seems like a good strategy is to develop high-quality material by starting off teaching only those able to pay. This lets some subsidize the development of more open course material. If they haven't gotten to the point where they have released the subsidized material, then I'd give them some more time and judge them again in some years. It's a young organization trying to create material from scratch in many areas.

comment by somervta · 2013-08-22T09:21:09.043Z · LW(p) · GW(p)

I feel your concerns, but tbh I think the main disconnect is the research/development vs teaching dichotomy, not (primarily) the considerations I mentioned. The volunteers at the workshop (who were previous attendees) were really quite emphatic about how much they had improved, including content and coherency as well as organization.

(Relevant)

comment by drethelin · 2013-08-22T19:00:12.299Z · LW(p) · GW(p)

I think one of my very favorite things about commenting on Lesswrong is that usually when you make a short statement or ask a question people will just respond to what you said rather than taking it as a sign to attack what they think that question implies is your tribe.

comment by Omid · 2013-08-20T16:02:11.575Z · LW(p) · GW(p)

This article, written by Dreeves's wife, has displaced Yvain's polyamory essay as the most interesting relationships article I've read this year. The basic idea is that instead of trying to split chores or common goods equally, you use auctions. For example, if the bathroom needs to be cleaned, each partner says how much they'd be willing to clean it for. The person with the higher bid pays what the other person bid, and that person does the cleaning.
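
A minimal sketch of that basic mechanism, assuming exactly two participants (the names, amounts, and Python function are made up for illustration, not taken from the article):

    def chore_auction(bids):
        # bids maps each person to the payment they would require to do the chore.
        # The lower bidder does the chore; the other person pays them their stated price.
        doer, price = min(bids.items(), key=lambda kv: kv[1])
        payer = next(name for name in bids if name != doer)
        return doer, payer, price

    # Hypothetical example: Alice would clean the bathroom for $12, Bob for $20.
    # Alice cleans, and Bob pays her $12.
    print(chore_auction({"Alice": 12.0, "Bob": 20.0}))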

It's easy to see why commenters accused them of being libertarian. But I think egalitarians should examine this system too. Most couples agree that chores and common goods should be split equally. But what does "equally" mean? It's hard to quantify exactly how much each person contributes to a relationship. This allows the more powerful person to exaggerate their contributions and pressure the weaker person into doing more than their fair share. But auctions safeguard against this abuse by requiring participants to quantify how much they value each task.

For example, feminists argue that women do more domestic chores than men, and that these chores go unnoticed by men. Men do a little bit, but because men don't see all the work women do, they end up thinking that they're doing their share when they aren't. Auctions safeguard against this abuse. Instead of the wife just cleaning the bathroom, she and her husband each bid how much they'd be willing to clean the bathroom for. The lower bid is considered the fair market price of cleaning the bathroom. Then she and her husband engage in a joint-purchase auction to decide if the bathroom will be cleaned at all. Either the bathroom gets cleaned and the cleaner gets fairly compensated, or the bathroom doesn't get cleaned because the total utility of cleaning the bathroom is less than the disutility of cleaning the bathroom.

And that's it. No arguing about who cleaned it last. No debating whether it really needs to be cleaned. No room for misogynist cultural machines to pressure the wife into doing more than her fair share. Just a market transaction that is efficient and fair.

Replies from: kalium, Manfred, passive_fist, knb, Luke_A_Somers, Multiheaded, NancyLebovitz, maia, shminux
comment by kalium · 2013-08-21T05:46:58.696Z · LW(p) · GW(p)

This sounds interesting for cases where both parties are economically secure.

However, I can't see it working in my case, since my housemates each earn somewhere around ten times what I do. Under this system, my bids would always be lowest and I would do all the chores without exception. While I would feel unable to turn down this chance to earn money, my status would drop from that of an equal to that of a servant. I would find this unacceptable.

Replies from: Viliam_Bur, Fronken
comment by Viliam_Bur · 2013-08-31T10:58:41.466Z · LW(p) · GW(p)

my housemates each earn somewhere around ten times what I do. Under this system, my bids would always be lowest and I would do all the chores without exception.

I believe you are wrong. (Or I am; in which case please explain to me how.) Here is what I would do if I lived with a bunch of millionaires, assuming my money is limited:

The first time, I would ask a realistic price X. And I would do the chores. I would put the money I gained aside into a "money I don't really own, because I will use it in the future to get my status back" budget.

The second time, I would ask 1.5 × X. The third time, 2 × X. The fourth time, 3 × X. If asked, I would explain the change by saying: "I guess I was totally miscalibrated about how I value my time. Well, I'm learning. Sorry, this bidding system is so new and confusing to me." But I would act like I am not really required to explain anything.

Let's assume I always do the chores. Then my income grows exponentially, which is a nice thing per se, but most importantly, it cannot continue forever. At some moment, my bid would be so insanely high that even Bill Gates would volunteer to do the chores instead. -- Which is completely okay for me, because I would pay him the $1,000,000,000 per hour from my "get the status back" budget, which by then already contains the money.

That's it. Keep your money from chores in a separate budget and use it only to pay others for doing the chores. Increase or decrease your bids depending on the state of that budget. If the price becomes relatively stable, there is no way you would do more chores than the other people around you.

The only imbalance I can imagine is if you have a housemate A who always bids more than a housemate B, in which case you will end up between them, always doing more chores than A but fewer than B. Assuming there are 10 A's and 1 B, and the B is considered very low status, this might result in a rather low status for you, too. -- The system merely guarantees you won't get the lowest status, even if you are the least wealthy person in the house; but you can still get the second-lowest place.

comment by Fronken · 2013-08-24T17:50:37.432Z · LW(p) · GW(p)

Could one not change the bidding to use "chore points" or somesuch? I mean, the system described is designed for spouses, but there's no reason it couldn't be adapted for you and your housemates.

comment by Manfred · 2013-08-20T17:16:54.578Z · LW(p) · GW(p)

Wasn't it Ariely's Predictably Irrational that went over market norms vs. tribe norms? If you just had ordinary people start doing this, I would guess it would crash and burn for the obvious market-norm reasons (the urge to game the system, basically). And some ew-squick power disparity stuff if this is ever enforced by a third party or even social pressure.

Replies from: maia
comment by maia · 2013-08-20T18:16:35.741Z · LW(p) · GW(p)

Empirically speaking, this system has worked in our house (of 7 people, for about 6 months so far). What kind of gaming the system were you thinking of?

We do use social pressure: there is social pressure to do your contracted chores, and keep your chore point balance positive. This hasn't really created power disparities per se.

Replies from: someonewrongonthenet, Manfred
comment by someonewrongonthenet · 2013-08-20T20:36:39.029Z · LW(p) · GW(p)

What kind of gaming the system were you thinking of?

If the idea is to say exactly how much you are willing to pay, there would be an incentive to:

1) Broadcast that you find all labor extra unpleasant and all goods extra valuable, to encourage people to bid high

2) Bid artificially lower values when you know someone enjoys a labor / doesn't mind parting with a good and will bid accordingly.

In short, optimal play would involve deception, and it happens to be a deception of the sort that might not be difficult to commit subconsciously. You might deceive yourself into thinking you find a chore unpleasant - I have read experimental evidence to support the notion that intrinsically rewarding tasks lose some of their appeal when paired with extrinsic rewards.

No comment on whether the traditional way is any better or worse - I think these two testimonials are sufficient evidence that this is worth trying for people who have a willing human tribe handy, despite the theoretical issues. After all,

we trust each other not to be cheats and jerks. That’s true love, baby

Edit: There is another, more pleasant problem: If you and I are engaged in trade, and I actually care about your utility function, that's going to affect the price. The whole point of this system is to communicate utility evenly after subtracting for the fact that you care about each other (otherwise why bother with a system?)

Concrete example: We are trying to transfer ownership of a computer monitor, and I'm willing to give it to you for free because I care about you. But if I were to take that into account, then we are essentially back to the traditional method. I'd have to attempt to conjure up the value at which I'd sell the monitor to someone I was neutral towards.

Of course, you could just use this as an argument stopper - whenever there is real disagreement, you use money to effect an easy compromise. But then there is monetary pressure to be argumentative and difficult, and social pressure not to be - it would be socially awkward and monetarily advantageous if you were constantly the one who had a problem with unmet needs.

Replies from: maia
comment by maia · 2013-08-21T02:51:59.833Z · LW(p) · GW(p)

1) Broadcast that you find all labor extra unpleasant and all goods extra valuable, to encourage people to bid high

But if other people bid high, then you have to pay more. And they will know if you bid lower, because the auctions are public. How does this help you?

2) Bid artificially lower values when you know someone enjoys a labor / doesn't mind parting with a good and will bid accordingly.

I don't understand how this helps you either; if you bid lower and therefore win the auction, then you have to do the chore for less than you value it at. That's no fun.

The way our system works, it actually gives the lowest bidder, not their actual bid, but the second lowest bid minus 1; that way you don't have to do bidding wars, and can more or less just bid what you value it at. It does create the issue that you mention - bid sniping: if you know what the lowest bidder will bid, you can bid just above it so they get as little as possible - but this is at the risk of having to actually do the chore for that little, because bids are binding.
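
A minimal sketch of that award rule (this is not Choron's actual code; the names, point values, and tie-breaking are assumptions):

    def second_price_chore_auction(bids):
        # bids maps each housemate to the chore points they ask for doing the chore.
        ranked = sorted(bids.items(), key=lambda kv: kv[1])
        doer = ranked[0][0]              # lowest bidder does the chore...
        payout = ranked[1][1] - 1        # ...but is paid the second-lowest bid minus 1
        return doer, payout

    # Hypothetical chore-point bids:
    print(second_price_chore_auction({"A": 10, "B": 14, "C": 25}))  # ('A', 13)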

I'd very much like to understand the issues you bring up, because if they are real problems, we might be able to take some stabs at solving them.

whenever there is real disagreement, you use money to effect an easy compromise.

This has become somewhat of a norm in our house. We can pass around chore points in exchange for rides to places and so forth; it's useful, because you can ask for favors without using up your social capital. (Just your chore points capital, which is easier to gain more of and more transparent.)

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2013-08-21T13:19:24.151Z · LW(p) · GW(p)

if you bid lower and therefore win the auction, then you have to do the chore for less than you value it at. That's no fun.

You only do this when you plan to be the buyer. The idea is to win the auction and become the buyer while putting up as little money as possible. If you know that the other guy will do it for $5, you bid $6, even if you actually value it at $10. As you said, I'm talking about bid sniping.

But if other people bid high, then you have to pay more.

Ah, I should have written "broadcast that you find all labor extra unpleasant and all goods extra valuable when you are the seller (giving up a good or doing a labour) so that people pay you more to do it."

If you're willing to do a chore for $10, but you broadcast that you find it more than -$10 of unpleasantness, the other party will be influenced to bid higher - say, $40. Then, you can bid $30, and get paid more. It's just price inflation - in a traditional transaction, a seller wants the buyer to pay as much as they are willing to pay. To do this, the seller must artificially inflate the buyer's perception of how much the item is worth to the seller. The same holds true here.

When you intend to be the buyer you do the opposite - broadcast that you're willing to do the labor for cheap to lower prices, then bid snipe. As in a traditional transaction, the buyer wants the seller to believe that the item is not of much worth to the buyer. The buyer also has to try to guess the minimum amount that the seller will part with the item.

it actually gives the lowest bidder, not their actual bid, but the second lowest bid minus 1

So what I wrote above was assuming the price was a midpoint between the buyer's and seller's bid, which gives them both equal power to set the price. This rule slightly alters things, by putting all the price setting power in the buyer's hands.

Under this rule, after all the deceptive price inflation is said and done you should still bid an honest $10 if you are only playing once - though since this is an iterated case, you probably want to bid higher just to keep up appearances if you are trying to be deceptive.

One of the nice things about this rule is that there is no incentive to be deceptive unless other people are bid sniping. The weakness of this rule is that it creates a stronger incentive to bid snipe.

Price inflation (seller's strategy) and bid sniping (buyer's strategy) are the two basic forms of deception in this game. Your rule empowers the buyer to set the price, thereby making price inflation harder at the cost of making bid sniping easier. I don't think there is a way around this - it seems to be a general property of trading. Finding a way around it would probably solve some larger scale economic problems.

Replies from: rocurley
comment by rocurley · 2013-08-21T19:36:18.221Z · LW(p) · GW(p)

(I'm one of the other users/devs of Choron)

There are two ways I know of that the market can try to defeat bid sniping, and one way a bidder can.

Our system does not display the lowest bid, only the second lowest bid. For a one-shot auction where you had poor information about the others' preferences, this would solve bid sniping. However, in our case, chores come up multiple times, and I'm pretty sure that it's public knowledge how much I bid on shopping, for example.

If you're in a situation where the lowest bid is hidden, but your bidding is predictable, you can sometimes bid higher than you normally would. This punishes people who bid less than they're willing to actually do the chore for, but imposes costs on you and the market as a whole as well, in the form of higher prices for the chore.

A third option, which we do not implement (credit to Richard for this idea), is to randomly award the auction to one of the two (or n) lowest bidders, with probability inversely related to their bid. In particular, if you pick between the lowest 2 bidders, both have claimed to be willing to do the job for the 2nd bidder's price (so the price isn't higher and no one can claim they were forced to do something for less than they wanted). This punishes bid-snipers by taking them at their word that they're willing to do the chore for the reduced price, at the cost of determinism, which allows better planning.
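
A minimal sketch of that third option (again not Choron's code; the exact inverse-bid weighting is an assumption, since "inversely related" isn't pinned down above):

    import random

    def randomized_chore_auction(bids):
        # Consider only the two lowest bidders.
        ranked = sorted(bids.items(), key=lambda kv: kv[1])
        (name1, bid1), (name2, bid2) = ranked[0], ranked[1]
        # Pick one of the two, with probability inversely related to their bid.
        doer = random.choices([name1, name2], weights=[1.0 / bid1, 1.0 / bid2])[0]
        # Both have implicitly agreed to do the chore for the second-lowest bid.
        return doer, bid2

    # Hypothetical chore-point bids:
    print(randomized_chore_auction({"A": 10, "B": 14, "C": 25}))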

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2013-08-23T01:22:00.410Z · LW(p) · GW(p)

at the cost of determinism

And market efficiency.

Plus, I think it doesn't work when there are only two players? If I honestly bid $30, and you bid $40 and randomly get awarded the auction, then I have to pay you $40. And that leaves me at -$10 disutility, since the task was only -$30 to me.

Replies from: rocurley
comment by rocurley · 2013-08-23T03:44:30.858Z · LW(p) · GW(p)

To be sure I'm following you: If the 2nd bidder gets it (for the same price as the first bidder), the market efficiency is lost because the 2nd person is indifferent between winning and not, while the first would have liked to win it? If so, I think that's right.

If there are two players... I agree the first bidder is worse off than they would be if they had won. This seems like a special case of the above though: why is it more broken with 2 players?

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2013-08-25T17:05:03.172Z · LW(p) · GW(p)

To be sure I'm following you...

Yes, that's one of the inefficiencies. The other inefficiency is that whenever the 2nd player wins, the service gets more expensive.

If there are two players... I agree the first bidder is worse off than they would be if they had won. This seems like a special case of the above though: why is it more broken with 2 players?

Because of the fact that the service gets more expensive. When there are multiple players, this might not seem like such a big deal - sure, you might pay more than the cheapest possible price, but you are still ultimately all benefiting (even if you aren't maximally benefiting). Small market inefficiencies are tolerable.

It's not so bad with 3 players who bid 20, 30, 40, since even if the 30-bidder wins, the other two players only have to pay 15 each. It's still inefficient, but it's not worse than no trade.

However, when your economy consists of two people, market inefficiency is felt more keenly. Consider the example I gave earlier once more:

I bid 30. You bid 40. So I can sell you my service for $30-$40, and we both benefit. But wait! The coin flip makes you win the auction. So now I have to pay you $40.

My stated preference is that I would not be willing to pay more than $30 for this service. But I am forced to do so. The market inefficiency has not merely resulted in a sub-optimal outcome - it's actually worse than if I had not traded at all!

Edit: What's worse is that you can name any price. So suppose it's just us two, I bid $10 and you bid $100, and it goes to the second bidder...

Replies from: rocurley
comment by rocurley · 2013-08-27T01:28:00.002Z · LW(p) · GW(p)

I don't think that the service gets more expensive under a second price auction (which Choron uses). If you bid $10 and I bid $100, normally it would go to you for $100. In the randomized case, it might go to me for $100.

I think I agree with you about the possibility of harm in the 2 person case.

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2013-08-27T15:59:13.310Z · LW(p) · GW(p)

I don't think that the service gets more expensive under a second price auction (which Choron uses). If you bid $10 and I bid $100, normally it would go to you for $100. In the randomized case, it might go to me for $100.

Oh yes, that's right. I think I initially misunderstood the rules of the second price - I thought it would be $10 to me or $100 to you, randomly chosen.

comment by Manfred · 2013-08-20T20:54:50.992Z · LW(p) · GW(p)

What kind of gaming the system were you thinking of?

Yeah, bidding = deception. But in addition to someonewrong's answer, I was thinking you could just end up doing a shitty job at things (e.g. cleaning the bathroom). Which is to say, if this were an actual labor market, and not a method of communicating between people who like each other and have outside-the-market reasons to cooperate, the market doesn't have much competition.

Replies from: maia, juliawise
comment by maia · 2013-08-21T02:42:28.487Z · LW(p) · GW(p)

Yeah, that's unfortunately not something we can really handle other than decreeing "Doing this chore entails doing X and it doesn't count if you don't do X." Enforcing the system isn't solved by the system itself.

a method of communicating between people who like each other and have outside-the-market reasons to cooperate

Good way to describe it.

comment by juliawise · 2013-10-13T15:00:26.144Z · LW(p) · GW(p)

Except she specifies that if they're bidding above market wages for a task (cleaning the bathroom would work fine), they'll just pay someone else to do it. Of course, chores like getting up to deal with a sick child are not so outsourceable.

comment by passive_fist · 2013-08-22T08:06:30.665Z · LW(p) · GW(p)

Most couples agree that chores and common goods should be split equally.

I'm skeptical that most couples agree with this.

Anyway, all of these types of 'chore division' systems that I've seen so far totally disregard human psychology. Remember that the goal isn't to have a fair chore system. The goal is to have a system that preserves a happy and stable relationship. If the resulting system winds up not being 'fair', that's ok.

Replies from: army1987
comment by A1987dM (army1987) · 2013-08-22T21:20:14.543Z · LW(p) · GW(p)

I'm skeptical that most couples agree with this.

Most couples worldwide, or most couples in W.E.I.R.D. societies?

Replies from: passive_fist
comment by passive_fist · 2013-08-23T02:12:41.998Z · LW(p) · GW(p)

Both.

comment by knb · 2013-08-20T22:39:06.201Z · LW(p) · GW(p)

Wow someone else thought of doing this too!

My roommate and I started doing this a year ago. It went pretty well for the first few months. Then our neighbor heard about how much we were paying each other for chores and started outbidding us.

Replies from: Vaniver
comment by Vaniver · 2013-08-22T23:54:58.887Z · LW(p) · GW(p)

Then our neighbor heard about how much we were paying each other for chores and started outbidding us.

This is one of the features of this policy, actually - you can use this as a natural measure of what tasks you should outsource. If a maid would cost $20 to clean the apartment, and you and your roommates all want at least $50 to do it, then the efficient thing to do is to hire a maid.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-08-31T10:44:37.280Z · LW(p) · GW(p)

The problem could be that they actually are willing to do it for $10, but it's a low-status thing to admit.

If we both lived in the same apartment, and we both pretended that our time is so precious that we are only willing to clean the apartment for $1000... and I do it 50% of the time, and you do it 50% of the time, then in the end neither of us gets poor despite the unrealistic prices, because each of us gets all the money back.

Now when a third person comes along who cares about money more than about status (which is easier for them, because they don't live in the same apartment as us), our pretending is exposed and we become either more honest or poor.

comment by Luke_A_Somers · 2013-08-20T18:04:41.267Z · LW(p) · GW(p)

I can see this working better than a dysfunctional household, but if you're both in the habit of just doing things, this is going to make everything worse.

Replies from: dreeves
comment by dreeves · 2013-09-23T01:37:26.029Z · LW(p) · GW(p)

Very fair point! Just like with Beeminder, if you're lucky enough to simply not suffer from akrasia then all the craziness with commitment devices is entirely superfluous. I liken it to literal myopia. If you don't have the problem then more power to you. If you do then apply the requisite technology to fix it (glasses, commitment devices, decision auctions).

But actually I think decision auctions are different. There's no such thing as not having the problem they solve. Preferences will conflict sometimes. Just that normal people have perfectly adequate approximations (turn taking, feeling each other out, informal mental point systems, barter) to what we've formalized and nerded up with our decision auctions.

comment by Multiheaded · 2013-08-22T17:33:39.360Z · LW(p) · GW(p)

And that's it. No arguing about who cleaned it last. No debating whether it really needs to cleaned. No room for misogynist cultural machines to pressure the wife into doing more than her fair share. Just a market transaction that is efficient and fair.

P.S.: those last two sentences ("No room for misogynist cultural machines to pressure the wife into doing more than her fair share. Just a market transaction that is efficient and fair.") also remind me of "If those women were really oppressed, someone would have tended to have freed them by then."

Replies from: Omid
comment by Omid · 2013-08-23T00:59:22.034Z · LW(p) · GW(p)

The polyamory and BDSM subcultures prove that nerds can create new social rules that improve sex. Of course, you can't just theorize about what the best social rules would be and then declare that you've "solved the problem." But when you see people living happier lives as a result of changing their social rules, there's nothing wrong with inviting other people to take a look.

I don't understand your postscript. I didn't say there is no inequality in chore division because if there were, a chore market would have removed it. I said a chore market would have more equality than the standard each-person-does-what-they-think-is-fair system. Your response seems like a fully generalized counterargument: anyone who proposes a way to reduce inequality can be accused of denying that the inequality exists.

Replies from: Nornagest, fubarobfusco
comment by Nornagest · 2013-08-26T00:37:46.504Z · LW(p) · GW(p)

The polyamory and BDSM subcultures prove that nerds can create new social rules that improve sex

The modern BDSM culture's origins are somewhat obscure, but I don't think I'd be comfortable saying it was created by nerds despite its present demographics. The leather scene is only one of its cultural poles, but that's generally thought to have grown out of the post-WWII gay biker scene: not the nerdiest of subcultures, to say the least.

I don't know as much about the origins of poly, but I suspect the same would likely be true there.

comment by fubarobfusco · 2013-08-25T23:48:03.857Z · LW(p) · GW(p)

The polyamory and BDSM subcultures prove that nerds can create new social rules that improve sex.

Hmm, I don't know that I would consider those rules overall to be clearly superior for everyone, although they do reasonably well for me. Rather, I value the existence of different subcultures with different norms, so that people can choose those that suit their predilections and needs.

(More politically: A "liberal" society composed of overlapping subcultures with different norms, in a context of individual rights and social support, seems to be almost certain to meet more people's needs than a "totalizing" society with a single set of norms.)

There are certain of those social rules that seem to be pretty clear improvements to me, though — chiefly the increased care on the subject of consent. That's an improvement in a vanilla-monogamous-heteronormative subculture as well as a kink-poly-genderqueer one.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-08-31T11:34:48.501Z · LW(p) · GW(p)

(More politically: A "liberal" society composed of overlapping subcultures with different norms, in a context of individual rights and social support, seems to be almost certain to meet more people's needs than a "totalizing" society with a single set of norms.)

This works best if none of the "subcultures with different norms" creates huge negative externalities for the rest of the society. Otherwise, some people get angry. -- And then we need to go meta and create some global rules that either prevent the former from creating the externalities, or the latter from expressing their anger.

I guess in the case of the BDSM subculture this works without problems. And I guess the test of the polyamorous community will be how well they treat their children (hopefully better than polygamous Mormons treat their sons), or perhaps how they will handle the poly equivalents of divorce, especially the economic aspects of it (if there is significant shared property).

comment by NancyLebovitz · 2013-08-24T10:06:04.357Z · LW(p) · GW(p)

One datapoint: I know of one household (two adults, one child) which worked out chores by having people list which chores they liked, which they tolerated, and which they hated. It turned out that there was enough intrinsic motivation to make taking care of the house work.

comment by maia · 2013-08-20T18:14:14.661Z · LW(p) · GW(p)

Roger and I wrote a web app for exactly this purpose - dividing chores via auction. This has worked well for chore management for a house of 7 roommates, for about 6 months so far.

The feminism angle didn't even occur to us! It's just been really useful for dividing chores optimally.

comment by Shmi (shminux) · 2013-08-20T18:26:36.261Z · LW(p) · GW(p)

I can see it working when all parties are trustworthy and committed to fairness, which is a high threshold to begin with. Also, everyone has to buy into the idea of other people being autonomous agents, with no shoulds attached. Still, this might run into trouble when one party badly wants something flatly unacceptable to the other, and so is unable to afford it and ends up feeling resentful.

One (unrelated) interesting quote:

my womb is worth about the cost of one graduate-level course at Columbia, assuming I’m interested in bearing your kid to begin with.

comment by David_Gerard · 2013-08-19T06:59:14.080Z · LW(p) · GW(p)

Weekly open threads - how do you think it's working?

Replies from: Emile, RolfAndreassen
comment by Emile · 2013-08-19T07:22:52.887Z · LW(p) · GW(p)

I think it's much better than monthly open threads - back then, I would sometimes think "Hmm, I'd like to ask this in an open thread, but the last one is too old, nobody's looking at it any more".

Replies from: Manfred
comment by Manfred · 2013-08-19T12:49:16.086Z · LW(p) · GW(p)

You haven't ever posted a top-level comment in a weekly open thread.

Replies from: Kaj_Sotala, Tenoke
comment by Kaj_Sotala · 2013-08-19T18:18:08.769Z · LW(p) · GW(p)

I have, and I agree with Emile's assessment.

comment by Tenoke · 2013-08-19T13:10:51.404Z · LW(p) · GW(p)

What has that to do with it?

Replies from: Manfred, Kawoomba
comment by Manfred · 2013-08-19T13:57:56.251Z · LW(p) · GW(p)

Suppose we were wondering about changing the flavor of our pizza. Someone says "Yeah, I'm really glad you've got these new flavors on your menu, I used to think the old recipe was boring and didn't order it much."

And then it turns out that this person hasn't ever actually tried any of your new flavors of pizza.

Sort of sets an upper bound on how much the introduction of new flavors has impacted this person's behavior.

Replies from: Tenoke, bogdanb, Emile
comment by Tenoke · 2013-08-19T14:16:31.389Z · LW(p) · GW(p)

You can judge a lot more about a thread than about a pizza by just looking at it.

Also, if you seriously think that Open Threads can only be evaluated by people with top-level comments in them you probably misunderstand both how most people use the Open Threads and what is required to judge them.

Replies from: Manfred
comment by Manfred · 2013-08-19T16:42:00.661Z · LW(p) · GW(p)

I think you can judge quite a lot about pizza without eating it. That merely wasn't what I was talking about. Don't bait and switch conversations please.

Replies from: Tenoke
comment by Tenoke · 2013-08-19T16:56:36.130Z · LW(p) · GW(p)

Don't bait and switch conversations please.

irony.

comment by bogdanb · 2013-08-19T19:34:42.322Z · LW(p) · GW(p)

Note that he didn’t say “I didn’t post much”; he just said that there existed times when he thought about posting but didn’t because of the age of the thread. That is useful evidence; you can’t just ignore it if it so happens that there are no instances of posting at all.

(In pizza terms, Emile said “I used to think the old recipe was bad and I never ordered it.” It’s not that surprising in that case that there are no instances of ordering.)

comment by Emile · 2013-08-19T16:26:35.619Z · LW(p) · GW(p)

Sure!

Though here it is more of a case of "once in a blue moon I go to the pizza place ... and I'm bored and tired of life ... and want to try something crazy for a change ... but then I see the same old stuff on the menu, and I think man, this world sucks ... but now they have the Sushi-Harissa-Livarot pizza, and I know next time I'm going to feel better!"

I agree it's a bit weird that I say that p(post|weekly thread) > p(post|monthly thread) when so far there are no instances of post|weekly thread.

comment by Kawoomba · 2013-08-19T13:37:42.911Z · LW(p) · GW(p)

Well, it's evidence for "Hmm, I'd like to ask this in an open thread, but the last one is too old, nobody's looking at it any more."

Replies from: Tenoke
comment by Tenoke · 2013-08-19T13:49:26.475Z · LW(p) · GW(p)

Haha but no, Manfred says that he hasn't ever posted a top-level comment in a weekly open thread.

comment by RolfAndreassen · 2013-08-19T16:09:41.883Z · LW(p) · GW(p)

I prefer it to the old format; once a month is too clumpy for an open thread. It was fine when this was a two-man blog, but not for a discussion forum.

comment by Anders_H · 2013-08-19T21:13:31.694Z · LW(p) · GW(p)

Last week, I gave a presentation at the Boston meetup, about using causal graphs to understand bias in the medical literature. Some of you requested the slides, so I have uploaded them at http://scholar.harvard.edu/files/huitfeldt/files/using_causal_graphs_to_understand_bias_in_the_medical_literature.pptx

Note that this is intended as a "Causality for non-majors" type presentation. If you need a higher level of precision, and are able to follow the maths, you would be much better off reading Pearl's book.

(Edited to change file location)

Replies from: Adele_L
comment by Adele_L · 2013-08-19T22:35:49.752Z · LW(p) · GW(p)

Thanks for making these available.

Even if you can follow the math, these sorts of things can be useful for orienting someone new to the field, or laying a conceptually simple map of the subject that can be elaborated on later. Sometimes, it's easier to use a map to get a feel for where things are than it is to explore directly.

comment by mstevens · 2013-08-20T10:39:18.187Z · LW(p) · GW(p)

I want to know more (i.e. anything) about game theory. What should I read?

Replies from: sixes_and_sevens, Manfred, None
comment by sixes_and_sevens · 2013-08-20T11:30:13.921Z · LW(p) · GW(p)

If you have the time, I heartily recommend Ben Polak's Introduction to Game Theory lectures. They are highly watchable and give a very solid introduction to the topic.

In terms of books, The Strategy of Conflict is the classic popular work, and it's good, but it's very much a product of its time. I imagine there are more accessible books out there. Yvain recommends The Art of Strategy, which I haven't read.

Replies from: mstevens
comment by mstevens · 2013-08-20T13:09:01.068Z · LW(p) · GW(p)

I hate trying to learn things from videos, but the books look interesting.

Replies from: sixes_and_sevens, sixes_and_sevens
comment by sixes_and_sevens · 2013-08-20T16:22:41.177Z · LW(p) · GW(p)

(If you want a specific link, here is Yvain's introduction to game theory sequence. There are some problems and inaccuracies with it which are generally discussed in comments, but as a quick overview aimed at a LW audience it should serve pretty well.)

comment by sixes_and_sevens · 2013-08-20T15:00:21.435Z · LW(p) · GW(p)

What are your motives for learning about it? If it's to gain a bare-bones understanding sufficient for following discussion in Less Wrong, existing Less Wrong articles would probably equip you well enough.

Replies from: mstevens
comment by mstevens · 2013-08-21T07:51:51.470Z · LW(p) · GW(p)

My possibly crazy theory is that game theory would be a good way to understand feminism.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2013-08-21T09:36:17.362Z · LW(p) · GW(p)

OK, I'm interested. Can you explain a little more?

Replies from: mstevens
comment by mstevens · 2013-08-21T10:20:00.281Z · LW(p) · GW(p)

It's a little bit intuition and might turn out to be daft, but

a) I've read just enough about game theory in the past to know what the prisoner's dilemma is

b) I was reading an argument/discussion on another blog about the "men chatting up women who may or may not be interested" scenario, and various discussions on IRC with MixedNuts have given me the feeling that male/female interactions (which are obviously an area of central interest to feminism) are a similar class of thing, and that game theory might help me understand said feminism and/or opposition to it.

Replies from: sixes_and_sevens, JQuinton
comment by sixes_and_sevens · 2013-08-21T11:00:57.997Z · LW(p) · GW(p)

A word of warning: you will probably draw all sorts of wacky conclusions about human interaction when first dabbling with game theory. There is huge potential for hatching beliefs that you may later regret expressing, especially on politically-charged subjects.

comment by JQuinton · 2013-08-23T17:14:48.032Z · LW(p) · GW(p)

I also had the same intuition about male/female dynamics and the prisoner's dilemma. It also seems like a lot of men's behavior towards women is a result of a scarcity mentality. Surely there are some economic models that explain how people behave -- especially their bad behavior -- when they feel some product is scarce, and if these models were applied to male/female dynamics it might predict some behavior.

But since feminism is such a mind-killing topic, I wouldn't feel too comfortable expressing alternative explanations (especially among non-rationalists) since people tend to feel that if you disagree with the explanation then you disagree with the normative goals.

Replies from: satt
comment by satt · 2013-08-24T15:36:00.444Z · LW(p) · GW(p)

It also seems like a lot of men's behavior towards women is a result of a scarcity mentality. Surely there are some economic models that explain how people behave -- especially their bad behavior -- when they feel some product is scarce, and if these models were applied to male/female dynamics it might predict some behavior.

One model which I've seen come up repeatedly in the humanities is the "marriage market". Unsurprisingly, economists seem to use this idea most often in the literature, but peeking through the Google Scholar hits I see demographers, sociologists, and historians too. (At least one political philosopher uses the idea too.)

I don't know how predictive these models are. I haven't done a systematic review or anything remotely close to one, but when I've seen the marriage market metaphor used it's usually to explain an observation after the fact. Here is a specific example I spotted in Randall Collins's book "Violence: A Micro-sociological Theory". On pages 149 & 150 Collins offers this gloss on an escalating case of domestic violence:

It appears that the husband's occupational status is rising relative to his wife's; in this social class, their socializing is likely to be with the man's professional associates (Kanter 1977), and thus it is when she is in the presence of his professional peers that he belittles her, and it is in regard to what he perceives as her faulty self-presentation in these situations that he begins to engage in tirades at home. He is becoming relatively stronger socially, and she is coming to accept that relationship. Then he escalates his power advantage, as the momentum of verbal tirades flows into physical violence.

A sociological interpretation of the overall pattern is that within the first two years of their marriage, the man has discovered that he is in an improving position on the interactional market relative to his wife; since he apparently does not want to leave his wife, or seek additional partners, he uses his implicit market power to demand greater subservience from his wife in their own personal and sexual relationships. Blau's (1964) principle applies here: the person with a weaker exchange position can compensate by subservience. [...] In effect, they are trying out how their bargaining resources will be turned into ongoing roles: he is learning techniques of building his emotional momentum as dominator, she is learning to be a victim.

(Digression: Collins calls this a sociological interpretation, but I usually associate this kind of bargaining power-based explanation with microeconomics or game theory, not sociology. Perhaps I should expand my idea of what constitutes sociology. After all, Collins is a sociologist, and he has partly melded the bargaining power-based explanation with his own micro-sociological theory of violence.)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-08-31T11:59:05.547Z · LW(p) · GW(p)

Collins calls this a sociological interpretation, but I usually associate this kind of bargaining power-based explanation with microeconomics or game theory, not sociology. Perhaps I should expand my idea of what constitutes sociology.

All sciences describe various aspects of reality, but there is one reality, and all these aspects are connected. Asking whether some explanation belongs to science X or science Y is useful when we want to find the best tools to deal with it; but the more important question is whether the explanation is true or false, and how well it predicts reality.

Some applied topics may be considered by various sciences to be in their (extended) territory. For example, I have seen game theory considered a part of a) mathematics, b) economics, and c) psychology. I guess the mechanism itself is mathematical, and it has important economic and psychological consequences, so it is useful for all of them to know about it.

It may be the case that one outcome is influenced by many factors, and the different factors are best explained by different sciences. For example, some aspects of relationships in marriage can be explained by biology, psychology, economics, sociology, perhaps even theology when the people are religious. Then it is good to check across all sciences to see whether we didn't miss some important factor. But the goal would be to create the best model, not to pick a favourite explanation. (The best model would include all relevant factors, weighted by their strength.)

Trying to focus on one science only... I guess that is trying to influence the outcome; motivated thinking. For example, if someone decides to ignore the biology and only focus on sociology, that already makes it obvious what kind of answer they want to get. And if someone decides to ignore the sociology and only focus on biology, that also makes it obvious. But the real question should be how specifically both the biological and the sociological aspects influence the result.

Replies from: satt
comment by satt · 2013-09-02T01:55:40.314Z · LW(p) · GW(p)

Asking whether some explanation belongs to science X or science Y is useful when we want to find the best tools to deal with it; but the more important question is whether the explanation is true or false; how well it predicts reality.

Indeed. Still, I want my mental models/stereotypes of different sciences to roughly match what scientists in those different fields are actually doing.

comment by Manfred · 2013-08-20T17:02:37.588Z · LW(p) · GW(p)

I actually found The Selfish Gene a pretty good book for developing game theory intuitions. I'd put it as #2 on my list after "the first 2/3 of The Strategy of Conflict".

comment by [deleted] · 2013-08-20T13:19:30.203Z · LW(p) · GW(p)

If you're looking for something shorter than a full text, I can recommend this entry at the Stanford Encyclopedia of Philosophy.

comment by Dorikka · 2013-08-19T19:32:07.065Z · LW(p) · GW(p)

Open comment thread:

If it's worth saying, but not worth its own top-level comment in the open thread, it goes here.

(Copied since it was well received last time.)

Replies from: shminux, Armok_GoB
comment by Shmi (shminux) · 2013-08-19T20:22:55.213Z · LW(p) · GW(p)

What's the name of the bias/fallacy/phenomenon where you learn something (new information, approach, calculation, way of thinking, ...) but after awhile revert to the old ideas/habits/views etc.?

Replies from: RobbBB, moreati
comment by Rob Bensinger (RobbBB) · 2013-08-20T06:12:53.188Z · LW(p) · GW(p)

Relapse? Backsliding? Recidivism? Unstickiness? Retrogression? Downdating?

Replies from: shminux
comment by Shmi (shminux) · 2013-08-20T17:40:52.066Z · LW(p) · GW(p)

Hmm, some of these are good terms, but the issue is so common that I assumed there would be a standard term for it, at least in education circles.

comment by moreati · 2013-08-19T21:48:41.211Z · LW(p) · GW(p)

I can't think of an academic name; the common phrases in Britain are 'stuck in your ways', 'bloody-minded', 'better the devil you know'.

Replies from: army1987
comment by A1987dM (army1987) · 2013-08-21T18:09:19.347Z · LW(p) · GW(p)

Depending on what timescales shminux is thinking of as “awhile” (hours or months?), RobbBB's suggestions may be better.

comment by Armok_GoB · 2013-08-27T19:49:08.187Z · LW(p) · GW(p)

Open subcomment subthread:

If it's not worth saying anywhere, it goes here.

Replies from: Dorikka
comment by Dorikka · 2013-08-27T23:54:07.489Z · LW(p) · GW(p)

I thought it wasn't necessary to paste the note (not mine) that accompanied the original comment. :P

Regarding the obvious recursion, please note that jokes are generally only funny the first time. :)

Replies from: Armok_GoB
comment by Armok_GoB · 2013-08-28T19:01:50.905Z · LW(p) · GW(p)

Hey now, half the joke was sort of original, about the implication of sufficient meta-levels in this direction. :p

comment by knb · 2013-08-20T01:15:13.017Z · LW(p) · GW(p)

I don't know how technically viable hyperloop is, but it seems especially well suited for the United States.

Investing in a hyperloop system doesn't make as much sense in Europe or Japan for a number of reasons:

  1. European/Japanese cities are closer together, so Hyperloop's long acceleration times are a larger relative penalty in terms of speed. The existing HSR systems reach their lower top speeds more quickly.

  2. Most European countries and Japan already have decent HSR systems and are set to decline in population. Big new infrastructure projects tend not to make as much sense when populations are declining and the infrastructure cost : population ratio is increasing by default.

  3. Existing HSR systems create a natural political enemy for Hyperloop proposals. For most countries, having HSR and Hyperloop doesn't make sense.

In contrast, the US seems far better suited:

  1. The US is set for a massive population increase, requiring large new investments in transportation infrastructure in any case.

  2. The US has lots of large but far-flung cities, so long acceleration times are not as much of a relative penalty.

  3. The US has little existing HSR to act as a competitor. The political class has expressed interest in increasing passenger rail infrastructure.

  4. Hyperloop is proposed to carry automobiles. Low walkability of US towns is the big killer of intercity passenger rail in the US. Taking HSR might be faster than driving, but in addition to other benefits, driving saves money on having to rent a car when you reach the destination city.

Another possible early adopter is China (because they still need more transport infrastructure, land acquisition is a trivial problem for the Communist party, and they have a larger area, mitigating the slow acceleration problem.) I see China as less likely than the US because they do have a fairly large HSR system and it is expanding quickly. Also, China is set for population decline within a few decades, although they have some decades of slow growth left.

Russia is another possible candidate. Admittedly they have the declining population problem, but they still need more transport infrastructure and they have several big, far-flung cities. The current Russian transportation system is quite unsafe, so they could be expected to be willing to invest in big new projects. The slow acceleration problem would again be mitigated by Russia's large size.

Replies from: luminosity, None, CAE_Jones, DanielLC, metastable
comment by luminosity · 2013-08-20T11:24:09.526Z · LW(p) · GW(p)

Don't forget Australia. We have a few, large cities separated by long distances. In particular, Melbourne to Sydney is one of the highest traffic air routes in the world, roughly the same distance as the proposed Hyperloop, and there has been on and off talk of high speed rail links. Additionally, Sydney airport has a curfew, and is more or less operating at capacity. Offloading Melbourne-bound passengers to a cheaper, faster option would free up more flights for other destinations.

comment by [deleted] · 2013-08-20T04:29:09.885Z · LW(p) · GW(p)

In theory there is no difference between theory and practice. In practice, there is.

I continue to fail to see how this idea is anything more than a cool idea that would take huge amounts of testing and engineering hurdles to get going if it indeed would prove viable. Nothing is as simple as its untested dream ever is.

Not hating on it, but seriously, hold your horses...

Replies from: knb
comment by knb · 2013-08-21T09:48:28.736Z · LW(p) · GW(p)

I feel like I covered this in the first sentence with, "I don't know how technically viable hyperloop is." My point is just to argue that the US would be especially well-suited for hyperloop if it turns out to be viable. My goal was mainly to try to argue against the apparent popular wisdom that hyperloop would never be built in the US for the same reason HSR (mostly) wasn't.

comment by CAE_Jones · 2013-08-20T02:30:06.261Z · LW(p) · GW(p)

I was only vaguely following the Hyperloop thread on Less Wrong, but this analysis convinced me to Google it to learn more. I was immediately bombarded with a page full of search results that were pessimistic at best; mocking, patronizing while pretending at the fallacy of gray, and politically indignant (the LA Times) were all among the results on the first page.[1] I was actually kinda hopeful about the concept, since America desperately needs better transit infrastructure, and knb's analysis of it being best suited for America makes plenty of sense so far as I can tell.

[1] I didn't actually open any of the results, just read the titles and descriptions. The tone might have been exaggerated or even completely mutated by that filter, but that seems unlikely for the titles and excerpts I read.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-08-20T19:11:32.874Z · LW(p) · GW(p)

I suggest that this is very weak evidence against the viability, either political, economic, or technical, of the Hyperloop. Any project that is obviously viable and useful has been done already; consequently, both useful and non-useful projects get the same amount of resistance of the form "Here's a problem I spent at least ten seconds thinking up, now you must take three days to counter it or I will pout. In public. Thus spoiling all your chances of ever getting your pet project accepted, hah!"

comment by DanielLC · 2013-08-20T05:57:51.665Z · LW(p) · GW(p)

I've been told that railways primarily get money from freight, and nobody cares that much about freight getting there immediately. As such, high speed railways are not a good idea.

I know you can't leave this to free enterprise per se. If someone doesn't want to sell their house, you can't exactly steer a railroad around it. However, if eminent domain is used, then if it's worth building, the market will build it. Let the government offer eminent domain use for railroads, and let them be built if they're truly needed.

Replies from: kalium, knb
comment by kalium · 2013-08-20T17:19:41.786Z · LW(p) · GW(p)

Much of Amtrak uses tracks owned by freight companies, and this is responsible for a good chunk of Amtrak's poor performance. However, high-speed rail on non-freight-owned tracks works pretty well in the rest of the world; it just needs its own right-of-way (in some cases running freight at night when the high-speed trains aren't running, but still having priority over freight traffic).

Replies from: DanielLC
comment by DanielLC · 2013-08-20T23:04:15.673Z · LW(p) · GW(p)

Are high speed trains profitable enough for people to build them without government money? I'm not sure how to look that up.

Replies from: knb, kalium, fubarobfusco
comment by knb · 2013-08-21T09:40:12.964Z · LW(p) · GW(p)

Many of the private passenger rail companies were losing money before they were nationalized, but that was under heavy regulation and price controls. The freight rail companies were losing money before they were deregulated as well. These days they are quite profitable.

A lot of the old right-of-way has been lost so they would certainly need government help to overcome the tragedy-of-the-anticommons problem.

Replies from: DanielLC
comment by DanielLC · 2013-08-21T22:55:28.246Z · LW(p) · GW(p)

A lot of the old right-of-way has been lost so they would certainly need government help to overcome the tragedy-of-the-anticommons problem.

You mean the problem that someone isn't going to be willing to sell their property? Eminent domain is certainly necessary. I'm just wondering if it's sufficient.

comment by kalium · 2013-08-20T23:34:34.027Z · LW(p) · GW(p)

That's not at all the same question as "Are high-speed trains a good idea?"

  • Any decent HSR would generate quite a lot of value not captured by fares. It would be more informative to compare the economic development of regions that have built high-speed rail against that of similar regions which haven't or which did so later.

  • France's TGV is profitable. Do you think that because it might not have been built without government funding it was a bad idea to build?

Replies from: DanielLC
comment by DanielLC · 2013-08-21T03:36:40.482Z · LW(p) · GW(p)

Any decent HSR would generate quite a lot of value not captured by fares.

If the HSR charges based on marginal cost, and marginal and average cost are significantly different, then this could be a problem. I intuitively assumed they'd be fairly close. Thinking about it more, I've heard that airports charge vastly more for people who are flying for business than for pleasure, which suggests there is a significant difference. Of course, it also suggests that they might be able to capture it through price discrimination, since the airports seem to manage.

How much government help is necessary for a train to be built?

It would be more informative to compare the economic development of regions that have built high-speed rail against that of similar regions which haven't or which did so later.

The economics of a train is not comparable to the economics of a city. If you can actually notice the difference in economic development caused by the train, then the train is so insanely valuable that it would be blindingly obvious from looking at how often they're built by the private sector.

France's TGV is profitable. Do you think that because it might not have been built without government funding it was a bad idea to build?

Making a profit is not a sufficient condition for it to be worthwhile to build. It has to make enough profit to make up for the capital cost. It might well do that, and it is possible to check, but it's a lot easier to ask if one has been built without government funding.

If it is worthwhile to build trains in general, and the government doesn't always fund them, then someone will build one without the government funding them.

Replies from: kalium, kalium
comment by kalium · 2013-08-21T04:50:57.598Z · LW(p) · GW(p)

If you can actually notice the difference in economic development caused by the train, then the train is so insanely valuable that it would be blindingly obvious from looking at how often they're built by the private sector.

I don't understand the reasoning by which you conclude that if an effect is measurable it must be so overwhelmingly huge that you wouldn't have to measure it.

On a much smaller scale, property values rise substantially in the neighborhood of light rail stations, but this value is not easily captured by whoever builds the rails. Despite the measurability of this created value, we do not find that "[light rail] is so insanely valuable that it would be blindingly obvious from looking at how often they're built by the private sector."

Replies from: DanielLC
comment by DanielLC · 2013-08-21T05:06:03.346Z · LW(p) · GW(p)

If the effect is measurable on an accurate but imprecise scale (such as the effect of a train on the economy), then it will be overwhelming on an inaccurate but precise scale (such as ticket sales).

You are suggesting we measure the utility of a single business by its effect on the entire economy. Unless my guesses of the relative sizes are way off, the cost of a train is tiny compared to the normal variation of the economy. In order for the effect to be noticeable, the train would have to pay for itself many, many times over. Ticket sales, and by extension the free market, might not be entirely accurate in judging the value of a train. But it's not so inaccurate that an effect of that magnitude will go unnoticed.

Am I missing something? Are trains really valuable enough that they'd be noticed on the scale of cities?

Replies from: kalium
comment by kalium · 2013-08-21T05:34:05.118Z · LW(p) · GW(p)

Are you claiming that a scenario in which

  • Fares cover 90% of (construction + operating costs)

  • Faster, more convenient transportation creates non-captured value worth 20% of (construction + operating costs)

is impossible? You seem to be looking at this from a very all-or-nothing point of view.

Replies from: DanielLC
comment by DanielLC · 2013-08-21T23:02:00.038Z · LW(p) · GW(p)

Faster, more convenient transportation is what fares are charging for. Non-captured value is more complicated than that.

If the non-captured value is 20% of the captured value, it's highly unlikely that trains will frequently be worth building but rarely capture enough value. That would require that the true value stay within a very narrow range.

If it's not a monopoly good, and marginal costs are close to average costs, then captured value will only go down as people build more trains, so that value not being captured doesn't prevent trains from being built. If it is a monopoly good (I think it is, but I would appreciate it if some who actually knows tells me), and marginal costs are much lower than average costs, then a significant portion of the value will not be captured. Much more than 20%. It's not entirely unreasonable that the true value is such that trains are rarely built when they should often be built.

That's part of why I asked:

How much government help is necessary for a train to be built?

If the government is subsidizing it by, say, 20%, then the trains are likely worthwhile. If the government practically has to pay for the infrastructure to get people to operate trains, not so much.

Also, that comment isn't really applicable to the comment you posted it in response to; it would fit better as a response to my last comment. The comment you responded to was just saying that unless the value of trains is orders of magnitude more than the cost, you'd never notice by looking at the economy.

comment by kalium · 2013-08-21T04:43:06.751Z · LW(p) · GW(p)

If the HSR charges based on marginal cost, and marginal and average cost are significantly different, then this could be a problem. I intuitively assumed they'd be fairly close. Thinking about it more, I've heard that airports charge vastly more for people who are flying for business than for pleasure, which suggests there is a significant difference.

Marginal and average cost are obviously different, but your example of business fliers is not relevant. Business fliers aren't paying for their flights, but do often get to choose which airline they take. If there is one population that pays for their own flights and another population that does not even consider cost, it would be silly not to discriminate whatever the relation between marginal and average cost.

Replies from: DanielLC
comment by DanielLC · 2013-08-21T05:13:11.803Z · LW(p) · GW(p)

The businesses are perfectly capable of choosing not to pay for their employees' flights. The fact that they do, and that they don't consider the costs, shows that their willingness to pay is much higher than the marginal cost. If it weren't for price discrimination, consumer surplus would be high, and a large amount of value produced by the airlines would go towards the consumers.

Are high-speed trains natural monopolies? That is, are the capital costs (e.g. rail lines) much higher than the marginal costs (e.g. train cars)? I think they are, and if they are considering the consumer surplus is important, but if they're not, then it doesn't matter.

Replies from: kalium
comment by kalium · 2013-08-21T05:20:06.671Z · LW(p) · GW(p)

The fact that they do, and that they don't consider the costs, shows that their willingness to pay is much higher than the marginal cost.

What marginal cost are you referring to here? If it's the cost to the airline of one butt-in-seat, we know it's less than one fare because the airline is willing to sell that ticket. And this has nothing to do with average cost. I think you've lost the thread a bit.

Replies from: DanielLC
comment by DanielLC · 2013-08-21T22:59:55.115Z · LW(p) · GW(p)

What I mean is that, if everyone paid what people who travel for pleasure pay, then people travelling for business would pay much less than they're willing to, so the amount of value the airports capture would be a lot less than the value they produce. If they charged everyone the same, either it would get so expensive that people would only travel for business, even though it's worthwhile for people to travel for pleasure, or it would be cheap enough that people travelling for business would fly for a fraction of what they're willing to pay. Either way, airports that are worth building would go unbuilt, since the airport wouldn't actually be able to make enough money to build them.
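A toy version of that argument, with made-up numbers purely for illustration (not data about any real airline or airport): with a single fare the service can't cover its capital cost even though the total value created exceeds it, while price discrimination recovers enough of the surplus to make it worth building.

```python
# Toy illustration of the price-discrimination argument above.
# All numbers are invented for illustration.

capital_cost = 1000.0   # fixed cost of building the service
marginal_cost = 1.0     # cost per additional passenger

business = {"n": 100, "willingness_to_pay": 10.0}
pleasure = {"n": 400, "willingness_to_pay": 2.0}

def profit_single_price(price):
    """Profit if everyone is charged the same fare."""
    buyers = sum(seg["n"] for seg in (business, pleasure)
                 if seg["willingness_to_pay"] >= price)
    return buyers * (price - marginal_cost) - capital_cost

def profit_discriminating():
    """Profit if each segment is charged its full willingness to pay."""
    return sum(seg["n"] * (seg["willingness_to_pay"] - marginal_cost)
               for seg in (business, pleasure)) - capital_cost

print(profit_single_price(10.0))   # only business flies: 100*9 - 1000 = -100
print(profit_single_price(2.0))    # everyone flies:      500*1 - 1000 = -500
print(profit_discriminating())     # 100*9 + 400*1 - 1000 = +300
```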

comment by fubarobfusco · 2013-08-20T23:49:04.408Z · LW(p) · GW(p)

Are high speed trains profitable enough for people to build them without government money?

Are highways?

Replies from: DanielLC
comment by DanielLC · 2013-08-21T03:23:34.665Z · LW(p) · GW(p)

Some roads do collect tolls. Again, I don't know how to look it up, but I don't think they have government help. They're in the minority, but they show that having roads is socially optimal. Similarly, if there are high-speed trains that operate without government help, we know that it's good to have high-speed trains, and while it may be that government encouragement is resulting in too many of them being built, we should still build some.

comment by knb · 2013-08-21T09:36:05.711Z · LW(p) · GW(p)

I'm not sure what your point is here. Passenger rail and freight rail are usually decoupled. Amtrak operates on freight rail in most places because the government orders the rail companies to give preference to passenger rail (at substantial cost to the private freight railways).

Hyperloop would help out a lot, since it takes the burden off of freight rail. I suppose hyperloop could be privately operated (that would be my preference, so long as there was commonsense regulation against monopolistic pricing).

Replies from: DanielLC
comment by DanielLC · 2013-08-21T23:04:44.520Z · LW(p) · GW(p)

so long as there was commonsense regulation against monopolistic pricing

If competitors can simply build more hyperloops, monopolistic pricing won't be a problem. If you only need one hyperloop, then monopolistic pricing is insufficient: the operator will still make less money than the value it produces. Getting rid of monopolistic pricing runs the risk of keeping anyone from building the hyperloops at all.

comment by metastable · 2013-08-20T05:21:15.343Z · LW(p) · GW(p)

I'd like to hear more about possibilities in China, if you've got more. Everything I've read lately suggests that they've extensively overbuilt their infrastructure, much of it with bad debt, in the rush to create urban jobs. And it seems like they're teetering on the edge of a land-development bubble, and that urbanization has already started slowing. But they do get rights-of-way trivially, as you say, and they're geographically a lot more like the US than Europe.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-20T06:18:18.947Z · LW(p) · GW(p)

(The Money Illusion would like to dispute this view of China. Not sure how much to trust Sumner on this but he strikes me as generally smart.)

Replies from: gattsuru
comment by gattsuru · 2013-08-20T17:01:31.652Z · LW(p) · GW(p)

Mr. Sumner has some pretty clear systemic assumptions toward government spending on infrastructure. This article seems to agree with both aspects, without conflicting with either, however.

The Chinese government /is/ opening up new opportunities for non-Chinese companies to provide infrastructure, in order to further cover land development. But they're doing so at least in part because urbanization is slowing and these investments are perceived locally as higher-risk to already risk-heavy banks, and foreign investors are likely to be more adventurous or to lack information.

comment by Tenoke · 2013-08-30T13:33:04.046Z · LW(p) · GW(p)

I lost an AI box experiment today on IRC, playing as the AI against PatrickRobotham. If anyone else wants to play against me, then PM me here or contact me on #lesswrong.

Replies from: Kawoomba, shminux
comment by Kawoomba · 2013-08-30T17:12:01.611Z · LW(p) · GW(p)

Do we still keep up with those secrecy shenanigans even when no MIRI employees were involved, or can you share some details?

Replies from: Tenoke
comment by Tenoke · 2013-08-30T17:34:23.524Z · LW(p) · GW(p)

I don't share details because subsequent games will be less fun and because if I am using dick moves I don't want people to know how much of a dick I am.

comment by Shmi (shminux) · 2013-08-30T18:14:44.231Z · LW(p) · GW(p)

Failing to convince your jailer to let you out is the highly likely outcome, so it is not very interesting. I would love to hear about any simulated AI winning against an informed opponent.

Replies from: Tenoke
comment by Tenoke · 2013-08-30T18:31:50.077Z · LW(p) · GW(p)

I posted this to advertise that I am looking for people to play with me.

comment by David_Gerard · 2013-08-20T17:31:36.503Z · LW(p) · GW(p)

When you're trying to raise the sanity waterline, dredging the swamps can be a hazardous occupation. Indian rationalist skeptic Narendra Dabholkar was assassinated this morning.

Replies from: shminux, knb
comment by Shmi (shminux) · 2013-08-20T17:45:47.312Z · LW(p) · GW(p)

Political activism, especially in the third world, is inherently dangerous, whether or not it is rationality-related.

comment by knb · 2013-08-20T22:11:02.079Z · LW(p) · GW(p)

He was trying to pass a law to suppress religious freedoms of small sects. That doesn't raise the sanity waterline, it just increases tensions and hatred between groups.

Replies from: David_Gerard
comment by David_Gerard · 2013-08-21T11:44:06.646Z · LW(p) · GW(p)

That's a ludicrously forgiving reading of what the bill (which looks like going through) is about. Steelmanning is an exercise in clarifying one's own thoughts, not in justifying fraud and witch-hunting.

Replies from: fubarobfusco, knb
comment by fubarobfusco · 2013-08-22T03:58:48.053Z · LW(p) · GW(p)

I haven't been able to find the text of the bill — only summaries such as this one. Do you have a link?

comment by knb · 2013-08-21T19:56:27.857Z · LW(p) · GW(p)

Did you even read my comment?

Replies from: David_Gerard
comment by David_Gerard · 2013-08-21T22:22:14.768Z · LW(p) · GW(p)

Yes, I did. Your characterisation of the new law is factually ridiculous.

Replies from: knb
comment by knb · 2013-08-21T22:49:56.501Z · LW(p) · GW(p)

That isn't all the law does, as you would know if you actually read it.

comment by David_Gerard · 2013-08-30T22:49:40.044Z · LW(p) · GW(p)

So, are $POORETHNICGROUP so poor, badly off and socially failed because they are about 15 IQ points stupider than $RICHETHNICGROUP? No, it may be the other way around: poverty directly loses you around 15 IQ points on average.

Or so says Anandi Mani et al. "Poverty Impedes Cognitive Function" Science 341, 976 (2013); DOI: 10.1126/science.1238041. A PDF while it lasts (from the nice person with the candy on /r/scholar) and the newspaper article I first spotted it in. The authors have written quite a lot of papers on this subject.

Replies from: Transfuturist, Vaniver
comment by Transfuturist · 2013-08-31T01:52:41.047Z · LW(p) · GW(p)

The biggest problem I have with racists claiming racial realism is this.

Replies from: Protagoras, David_Gerard
comment by Protagoras · 2013-08-31T03:13:09.708Z · LW(p) · GW(p)

The racists claim that this is irrelevant because of research that corrects for socioeconomic status and still finds IQ differences. Of course, researchers have found plenty of evidence of important environmental influences on IQ not measured by SES. It seems especially bad for the racial realist hypothesis that people who, for example, identify as "black" in America have the same IQ disadvantage compared to whites whether their ancestry is 4% European or 40% European; how much African vs. European ancestry someone has seems to matter only indirectly to the IQ effects, which seem to directly follow whichever artificial simplified category someone is identified as belonging to.

Replies from: Viliam_Bur, Vaniver, David_Gerard
comment by Viliam_Bur · 2013-08-31T12:30:27.594Z · LW(p) · GW(p)

Not completely serious, just wondering about possible implications, for sake of munchkinism:

Would it be possible to invent some new color, for example "purple", so that identifying with that color would increase someone's IQ?

I guess it would first require the rest of the society accepting the superiority (at least in intelligence) of the purple people, and their purpleness being easy to identify and difficult for others to fake. (Possible to achieve with some genetic manipulation.)

Also, could this mechanism possibly explain the higher intelligence of Jews? I mean, if we stopped suspecting them of making international conspiracies and secretly ruling the world (which obviously requires a lot of intelligence), would their IQs consequently drop to the average level?

Also... what about Asians? Is it the popularity of anime that increases their IQ, or what?

Replies from: Protagoras, bogus
comment by Protagoras · 2013-08-31T15:35:09.713Z · LW(p) · GW(p)

Unfortunately, while we know there are lots of environmental factors that affect IQ, we mostly don't know the details well enough to be sure of very much, or to have much idea how to manipulate it. However, as I understand it, some research has suggested that there are interesting cultural similarities between Jews in most of the world and Chinese who don't live in China, and that the IQ advantage of Chinese is primarily among Chinese who don't live in China, so something in common between how the Chinese and Jewish cultures deal with being minority outsiders may explain part of why both show unusually high IQs when they are minority outsiders (and could explain a lot of East Asians generally; considering how enormous the cultural influence of China has been in the region, it would not be terribly surprising if many other East Asian groups had acquired whatever the relevant factor is).

This paper by Ogbu and Simons discusses some of the theories about groups that do poorly (the "involuntary" or "caste-like" minorities). Unfortunately I couldn't find a citation for any discussion of differences between voluntary minorities which would explain why some voluntary minorities outperform rather than merely equalling the majority, apart from Ned Block's passing reference to a culture of "self-respect" in his review of The Bell Curve.

comment by bogus · 2013-08-31T13:59:42.537Z · LW(p) · GW(p)

Would it be possible to invent some new color, for example "purple", so that identifying with that color would increase someone's IQ?

It's been done - many people do in fact self-identify as 'Indigo children', 'Indigos' or even 'Brights'. The label tends to come with a broadly humanistic and strongly irreligious worldview, but many of them are in fact highly committed to some form of spirituality and mysticism: indeed, they credit these perhaps unusual convictions for their increased intelligence and, more broadly, their highly developed intuition.

Replies from: David_Gerard
comment by David_Gerard · 2013-09-01T20:46:35.671Z · LW(p) · GW(p)

Ah, "Brights" is Dawkins and Dennett's terrible word for atheists; "Indigos" is completely insane and incoherent new-age nonsense about allegedly superpowered children. How did you conflate the two?

comment by Vaniver · 2013-09-01T17:40:39.461Z · LW(p) · GW(p)

It seems especially bad for the racial realist hypothesis that people who, for example, identify as "black" in America have the same IQ disadvantage compared to whites whether their ancestry is 4% European or 40% European

I've seen mixed reports on this. Human Varieties, for example, has a series of posts on colorism which finds a relationship between skin color and intelligence in the population of African Americans, as predicted by both the hereditarian and "colorist" (i.e. discrimination) theories, but does not find a relationship between skin color and intelligence within families (as predicted by the hereditarian but not the colorist theory), and I know there were studies using blood type which didn't support the hereditarian theory but appear to have been too weakly designed to detect an effect even if hereditarianism were true. Are you aware of any studies that actually look at genetic ancestry and compare it to IQ? (Self-reported ancestry would still be informative, but not as accurate.)

comment by David_Gerard · 2013-08-31T12:45:19.764Z · LW(p) · GW(p)

It's because Europeans are 4% Neanderthal and partake of the Neanderthals' larger brains, and Africans aren't.

Replies from: Vaniver
comment by Vaniver · 2013-09-01T17:44:18.877Z · LW(p) · GW(p)

There is large enough variance in Neanderthal ancestry among Europeans that we might actually be able to see differences within the European population (and then extrapolate those to guess how much of the European-African gap that explains). I seem to recall seeing some preliminary reports on this, but I can't find them right now so I'm not confident they were evidence-driven instead of theory-driven.

comment by David_Gerard · 2013-08-31T07:56:49.498Z · LW(p) · GW(p)

The really interesting thing is that you see results from all over the world showing this. Catholics in Northern Ireland in the 1970s measuring 15 points lower than Protestants. Burakumin in Japan measuring 15 points lower than non-Burakumin. SAME GENE POOL. This strongly suggests you get at least 15 points really easily just from social factors, and these studies may (because a study isn't solid science yet, not even a string of studies from the same group) point to one reason.

Replies from: Viliam_Bur, Eugine_Nier
comment by Viliam_Bur · 2013-08-31T12:31:55.090Z · LW(p) · GW(p)

Could be interesting to know how much of that is the status directly, and how much is better nutrition and medical care.

comment by Eugine_Nier · 2014-02-26T06:11:36.120Z · LW(p) · GW(p)

Burakumin in Japan measuring 15 points lower than non-Burakumin. SAME GENE POOL.

That's not obvious. Remember, there were strong taboos against interbreeding with Burakumin in Japan.

Replies from: David_Gerard
comment by David_Gerard · 2014-02-26T14:26:27.111Z · LW(p) · GW(p)

They separated only a few hundred years ago.

comment by Vaniver · 2013-09-01T17:32:36.034Z · LW(p) · GW(p)

So, I totally buy the "cognitive load decreases intellectual performance, both in life and on IQ tests" claim. This is very well replicated, and has immediate personal implications (don't try to remember everything, write it all down; try to minimize sources of stress in your life; try to think about as few projects at a time as possible).

I don't think it's valid to say "instead of A->B, it's B->A," or see this as a complete explanation, because the ~13 point drop is only present in times of financial stress. Take standardized school tests, and suppose that half of the minority students are under immediate financial stress (their parents just got a hefty car repair bill) and the other half aren't (the 'easy' condition in the test), whereas none of the majority students are under immediate financial stress. Then we should expect the minority students to be, on average, 6.5 points lower, but what we see is the gap of 15 points.

It's also plausible that the differentiator between people is their reaction to stress--I know a lot of high-powered managers and engineers under significant stress at work, who lose much less than a standard deviation of their ability to make good decisions and focus on other things and so on. Some people even seem to perform better under stress, but it's hard to separate out the difference between motivation and fluid intelligence there.

Replies from: David_Gerard
comment by David_Gerard · 2013-09-01T20:41:40.746Z · LW(p) · GW(p)

Being poor means living a life of stress, financial and social. John Scalzi attempts to explain it. John Cheese has excellent ha-ha-only-serious stuff on Cracked on the subject too.

I wasn't meaning to put forward a study as settled science, of course; but I think it's interesting, and that they have a pile of other studies showing similar stuff. Now it's replication time.

Replies from: Vaniver
comment by Vaniver · 2013-09-01T21:04:04.782Z · LW(p) · GW(p)

Being poor means living a life of stress, financial and social.

Then why, during the experiment, did the poor participants and the rich participants have comparable scores when presented with a hypothetical easy financial challenge (a repair of $150)?

The claim the paper makes is that there are temporary challenges which lower cognitive functionality, that are easier to induce in the poor than the rich. If you expect that those challenges are more likely to occur to the poor than the rich (which seems reasonable to me), then this should explain some part of the effect- but isn't on all the time, or the experiment wouldn't have come out the way it did.

I wasn't meaning to put forward a study as settled science, of course; but I think it's interesting, and that they have a pile of other studies showing similar stuff. Now it's replication time.

While I have my doubts about the replicability of any social science article that made it into Science, the interpretation concerns here are assuming the effect the paper saw is entirely real and at the strength they reported.

comment by brazil84 · 2013-08-21T14:51:15.977Z · LW(p) · GW(p)

Sorry if this has been asked before, but can someone explain to me if there is any selfish reason to join Alcor while one is in good health? If I die suddenly, it will be too late to have joined, but even if I had joined it seems unlikely that they would get to me in time.

The only reason I can think of is to support Alcor.

Replies from: Randy_M, Turgurth, None
comment by Randy_M · 2013-08-23T15:25:47.788Z · LW(p) · GW(p)

It's like what the TV preacher told Bart Simpson: "Yes, a deathbed conversion is a pretty sweet angle, but if you join now, you're also covered in case of accidental death and dismemberment!"

(may not be an exact quote)

comment by Turgurth · 2013-08-22T01:08:16.906Z · LW(p) · GW(p)

I don't think it's been asked before on Less Wrong, and it's an interesting question.

It depends on how much you value not dying. If you value it very strongly, the risk of sudden, terminal, but not immediately fatal injuries or illnesses, as mentioned by paper-machine, might be unacceptable to you, and would point toward joining Alcor sooner rather than later.

The marginal increase your support would add to the probability of Alcor surviving as an institution might also matter to you selfishly, since this would increase the probability that there will exist a stronger Alcor when you are older and will likely need it more than you do now.

Additionally, while it's true that it's unlikely that Alcor would reach you in time if you were to die suddenly, compare this risk to the chance of your survival if alternately you don't join Alcor soon enough, and, after your hypothetical fatal car crash, you end up rotting in the ground.

And hey, if you really want selfish reasons: signing up for cryonics is high-status in certain subcultures, including this one.

There are also altruistic reasons to join Alcor, but that's a separate issue.

Replies from: brazil84
comment by brazil84 · 2013-08-22T22:13:24.181Z · LW(p) · GW(p)

Thank you for your response; I suppose one would need to estimate the probability of dying in such a way that having previously joined Alcor would make a difference.

Perusing Ben Best's web site and using some common sense, it seems that the most likely causes of death for a reasonably healthy middle aged man are cancer, stroke, heart attack, accident, suicide, and homicide. We need to estimate the probability of sudden serious loss of faculties followed by death.

It seems that for cancer, that probability is extremely small. For stroke, heart attack, and accidents, one could look it up, but guesstimating a number from general observations I would put it at roughly 10 to 15 percent. Suicide and homicide are special cases -- I imagine that in those cases I would be autopsied, so there would be much less chance of cryopreservation even if I had already joined Alcor.

Of course even if you pre-joined Alcor, there is still a decent chance that for whatever reason they would not be able to preserve you after, for example, a fatal accident which killed you a few days later.

So all told, my rough estimate is that the improvement in my chances of being cryopreserved upon death if I joined Alcor now as opposed to taking a wait and see approach is 5% at best.
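Written out as quick arithmetic (a sketch using only the rough guesses above; the 50% chance that membership actually converts such a death into a preservation is another guess of mine, not data):

```python
# Back-of-the-envelope estimate using the rough guesses above (not data).

p_sudden_incapacitating_death = 0.12  # stroke/heart attack/accident that leaves
                                      # no chance to sign up afterwards (~10-15%)
p_preserved_if_member = 0.5           # guess: even a member's preservation may
                                      # fail after such a death
p_preserved_if_wait_and_see = 0.0     # too late to join once it has happened

benefit = p_sudden_incapacitating_death * (
    p_preserved_if_member - p_preserved_if_wait_and_see)
print(f"{benefit:.0%}")  # ~6%, i.e. roughly the "5% at best" figure
```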

Does that sound about right?

Replies from: Turgurth
comment by Turgurth · 2013-08-23T01:53:29.179Z · LW(p) · GW(p)

That does sound about right, but with two potential caveats: one is that individual circumstances might also matter in these calculations. For example, my risk of dying in a car accident is much lowered by not driving and only rarely riding in cars. However, my risk of dying of heart disease is raised by a strong family history.

There may also be financial considerations. Cancer almost certainly, and often heart disease and stroke, take time to kill. If you were paying for cryonics out-of-pocket, this wouldn't matter, but if you were paying with life insurance the cost of the policy would go up, perhaps dramatically, if you were to wait until the onset of serious illness to make your arrangements, as life insurance companies are not fond of pre-existing conditions. It might be worth noting that age alone also increases the cost of life insurance.

That being said, it's also fair to say that even a successful cryopreservation has a (roughly) 10-20% chance of preserving your life, taking most factors into account.

So again, the key here is determining how strongly you value your continued existence. If you could come up with a roughly estimated monetary value of your life, taking the probability of radical life extension into account, that may clarify matters considerably. There are values at which that (roughly) 5% chance is too little, or close to the line, or plenty sufficient, or way more than sufficient; it's quite a spectrum.

Replies from: brazil84
comment by brazil84 · 2013-08-23T13:28:35.228Z · LW(p) · GW(p)

one is that individual circumstances might also matter in these calculations. For example, my risk of dying in a car accident is much lowered by not driving and only rarely riding in cars

Yes I totally agree. Similarly your chances of being murdered are probably a lot lower than the average if you live in an affluent neighborhood and have a spouse who has never assaulted you.

Suicide is an interesting issue -- I would like to think that my chances of committing suicide are far lower than average but painful experience has taught me that it's very easy to be overconfident in predicting one's own actions.

There may also be financial considerations. Cancer almost certainly, and often heart disease and stroke, take time to kill. If you were paying for cryonics out-of-pocket, this wouldn't matter, but if you were paying with life insurance the cost of the policy would go up, perhaps dramatically, if you were to wait until the onset of serious illness to make your arrangements, as life insurance companies are not fond of pre-existing conditions

Yes, but there is an easy way around this: Just buy life insurance while you are still reasonably healthy.

Actually this is what got me thinking about the issue: I was recently buying life insurance to protect my family. When I got the policy, I noticed that it had an "accelerated death benefit rider," i.e. if you are certifiably terminally ill, you can get a $100k advance on the policy proceeds. When you think about it, that's not the only way to raise substantial money in such a situation. For example, if you were terminally ill, your spouse probably wouldn't mind if you borrowed $200k against the house for cryopreservation if she knew that when you finally kicked the bucket she would get a check for a million from the insurance company.

So the upshot is that from a selfish perspective, there is a lot to be said for taking a "wait and see" approach.

(There's another issue I thought of: Like most life insurance policies, the ones I bought are good only for 20 years. There is a pretty good chance that I will live for those 20 years but in the meantime develop a serious health condition which makes it almost impossible to buy more insurance. What then?)

So again, the key here is determining how strongly you value your continued existence.

I agree with this to an extent.

Replies from: gwern, Turgurth
comment by gwern · 2013-08-23T19:56:42.408Z · LW(p) · GW(p)

(There's another issue I thought of: Like most life insurance policies, the ones I bought are good only for 20 years. There is a pretty good chance that I will live for those 20 years but in the meantime develop a serious health condition which makes it almost impossible to buy more insurance. What then?)

That's a feature, not a bug, of term life insurance. That's the tradeoff you're making to get coverage now at a cheap rate. But of course, the option value exists on both sides - so if you want to lock in relatively lower rates, well, that's why whole life insurance exists.

Replies from: brazil84
comment by brazil84 · 2013-08-23T22:05:06.008Z · LW(p) · GW(p)

That's a feature, not a bug, of term life insurance. That's the tradeoff you're making to get coverage now at a cheap rate. But of course, the option value exists on both sides - so if you want to lock in relatively lower rates, well, that's why whole life insurance exists.

Yes, good point. I actually looked into getting whole life insurance but the policies contained so many bells, whistles, and other confusions that I put it all on hold until I had bought some term insurance. Maybe I will look into that again.

Of course if I were disciplined, it would probably make sense to just "buy term and invest the difference" for the next 30 years.

comment by Turgurth · 2013-08-23T18:56:29.777Z · LW(p) · GW(p)

Hmmm. You do have some interesting ideas regarding cryonics funding that do sound promising, but to be safe I would talk to Alcor, specifically Diane Cremeens, about them directly to ensure ahead of time that they'll work for them.

Replies from: brazil84
comment by brazil84 · 2013-08-23T19:26:09.521Z · LW(p) · GW(p)

Probably that's a good idea. But on the other hand, what are the chances that they would turn down a certified check for $200k from someone who has a few months to live?

I suppose one could argue that setting things up years in advance so that Alcor controls the money makes it difficult for family members to obstruct your attempt to get frozen.

Replies from: Ben_LandauTaylor
comment by Ben_LandauTaylor · 2013-08-28T18:35:19.562Z · LW(p) · GW(p)

what are the chances that they would turn down a certified check for $200k from someone who has a few months to live?

In addition to the money, Alcor requires a lot of legal paperwork, including a notarized will. You can probably do that if you have "a few months," but it's one more thing to worry about, especially if you're dying of something that leaves you mentally impaired and makes legal consent complicated. I don't know how strict about this Alcor would be; I second the grandparent's advice to ask Diane.

comment by [deleted] · 2013-08-21T15:43:47.774Z · LW(p) · GW(p)

There is some background base rate of sudden, terminal, but not immediately fatal, injury or illness.

For example, I currently do not value life insurance highly, and therefore I value cryonics insurance even less.

Otherwise, there's only some marginal increase in the probability of Alcor surviving as an institution. Seeing as there's precedent for healthy cryonics orgs to adopt the patients of unhealthy cryonics orgs, this marginal increase should be viewed as a yet more marginal increase in the survival of cryonics locations in your locality.

(Assuming transportation costs are prohibitive enough to be treated as a rounding error.)

comment by diegocaleiro · 2013-08-20T04:56:23.778Z · LW(p) · GW(p)

There is a Google Doc circulating for people who are moving to the Bay Area soonish.

Any tips for people moving in, from those who already live there?

If you have available rooms or houses, let Nick Ryder know.

Replies from: Nisan
comment by Nisan · 2013-08-22T19:09:51.843Z · LW(p) · GW(p)

Some advice for people who want to rent from landowners.

comment by Kaj_Sotala · 2013-08-20T06:49:12.401Z · LW(p) · GW(p)

Artificial intelligence and Solomonoff induction: what to read?

Olle Häggström, Professor of Mathematical Statistics at Chalmers University of Technology, reads some of Marcus Hutter's work, comes away unimpressed, and asks for recommendations.

One concept that is sometimes claimed to be of central importance in contemporary AGI research is the so-called AIXI formalism. [...] In the presentation, Hutter advises us to consult his book Universal Artificial Intelligence. Before embarking on that, however, I decided to try one of the two papers that he also directs us to in the presentation, namely his A philosophical treatise of universal induction, coauthored with Samuel Rathmanner and published in the journal Entropy in 2011. After reading the paper, I have moved the reading of Hutter's book far down my list of priorities, because generalizing from the paper leads me to suspect that the book is not so good.

I find the paper bad. There is nothing wrong with the ambition - to sketch various approaches to induction from Epicurus and onwards, and to try to argue how it all culminates in the concept of Solomonoff induction. There is much to agree with in the paper, such as the untenability of relying on uniform priors and the limited interest of the so-called No Free Lunch Theorems (points I've actually made myself in a different setting). The authors' emphasis on the difficulty of defending induction without resorting to circularity (see the well-known anti-induction joke for a drastic illustration) is laudable. And it's a nice perspective to view Solomonoff's prior as a kind of compromise between Epicurus and Ockham, but does this particular point need to be made in quite so many words? Judging from the style of the paper, the word "philosophical" in the title seems to mean something like "characterized by lack of rigor and general verbosity".4 Here are some examples of my more specific complaints [...]

I still consider it plausible to think that Kolmogorov complexity and Solomonoff induction are relavant to AGI7 (as well as to statistical inference and the theory of science), but the experience of reading Uncertainty & Induction in AGI and A philosophical treatise of universal induction strongly suggests that Hutter's writings are not the place for me to go in order to learn more about this. But where, then? Can the readers of this blog offer any advice?

Replies from: Wei_Dai, linkhyrule5
comment by Wei Dai (Wei_Dai) · 2013-08-27T09:48:04.254Z · LW(p) · GW(p)

My current thinking is that Kolmogorov complexity / Solomonoff induction is probably only a small piece of the AGI puzzle. It seems obvious to me that the ideas are relevant to AGI, but hard to tell in what way exactly. I think Hutter correctly recognized the relevance of the ideas, but tends to exaggerate their importance, and as Olle Häggström recognized, can't really back up his claims as to how central these ideas are.

If Olle wanted to become an FAI researcher then I'd suggest getting an overview of the AIT field from Li and Vitanyi's textbook, but if he is more interested in what I called "Singularity Strategies" (which from Google translations of his other blog entries, it sounds like he is) and wants an understanding of just how Solomonoff Induction is relevant to AGI, in order to better understand AI risk and generally figure out how to best influence the Singularity in a positive direction, I'm afraid nobody has the answers at the moment.

(I wonder if we could convince Olle to join LW? I'd comment on some of Olle's posts but I'm really wary of personal blogs, which tend to disappear and take all of my comments with them.)

Replies from: gwern
comment by gwern · 2013-08-27T15:08:15.941Z · LW(p) · GW(p)

I'd comment on some of Olle's posts but I'm really wary of personal blogs, which tend to disappear and take all of my comments with them.

Nothing stops you from setting up some program to archive URLs you visit, which will deal with most comments. I also tend to excerpt my best comments into Evernote as well, to make them easier to refind.

comment by linkhyrule5 · 2013-08-20T08:44:27.759Z · LW(p) · GW(p)

Random question - is AGI7 a typo, or a term?

Replies from: Manfred
comment by Manfred · 2013-08-20T09:26:50.001Z · LW(p) · GW(p)

Open link, control+f "relavant to AGI". Get directed to "relavant to AGI7".

Footnote 7 is "7) I am not a computer scientist, so the following should perhaps be taken with a grain of salt. While I do think that computability and concepts derived from it such as Kolmogorov complexity may be relevant to AGI, I have the feeling that the somewhat more down-to-earth issue of computability in polynomial time is even more likely to be of crucial importance."

comment by Omid · 2013-08-22T17:46:01.372Z · LW(p) · GW(p)

Has anyone done a good analysis on the expected value of purchasing health insurance? I will need to purchase health insurance when I turn 26. How comprehensive should the insurance I purchase be?

At first I thought I should purchase a high-deductible plan that only protects against catastrophes. I have low living expenses and considerable savings, so this wouldn't be risky. The logic here is that insurance costs the expected value of the goods provided plus overhead, so the cost of insurance will always be more than its expected value. If I purchase less insurance, I waste less money on overhead.

On the other hand, there's a tax break for purchasing health insurance, and soon there will be subsidies as well. Also, insurance companies can reduce the cost of health care by negotiating lower prices for you. So the insurance company will pay less than the person who pays out of pocket. All these together might outweigh money wasted on overhead.

On the third hand, I'm a young healthy male. Under the ACA, my insurance premiums will be inflated so that old, sick, and female persons can have lower premiums. The money that's being transferred to these groups won't be spent on me, so it reduces the expected value of my insurance.
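For concreteness, here is the shape of the calculation I have in mind, as a rough sketch: every number below is a placeholder to be replaced with real quotes, and it ignores coinsurance, out-of-pocket maxima, the tax break, and subsidies entirely.

```python
# Sketch only: all numbers are placeholders, not quotes from real plans.

def expected_annual_cost(premium, deductible, discount,
                         routine_billed, p_catastrophe):
    """Premium plus rough expected out-of-pocket spending for one year.
    Assumes a catastrophe always blows past the deductible, and ignores
    coinsurance, out-of-pocket maxima, subsidies and tax treatment."""
    routine_oop = min(routine_billed * discount, deductible)
    return premium + (1 - p_catastrophe) * routine_oop + p_catastrophe * deductible

# hypothetical high-deductible vs. comprehensive plan
print(expected_annual_cost(premium=1200, deductible=5000, discount=0.5,
                           routine_billed=1000, p_catastrophe=0.02))  # 1790.0
print(expected_annual_cost(premium=3000, deductible=500, discount=0.5,
                           routine_billed=1000, p_catastrophe=0.02))  # 3500.0
```

The tax break, the subsidies, and the community-rating premium loading would then enter as adjustments to the premium terms, which is exactly the part I don't know how to pin down.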

Has anyone added all these effects up? Would you recommend I purchase skimpy insurance or comprehensive?

Replies from: Randy_M
comment by Randy_M · 2013-08-23T15:32:16.462Z · LW(p) · GW(p)

"Also, insurance companies can reduce the cost of health care by negotiating lower prices for you. "

This is the case even with a high-deductible plan. The insurance will have a different rate when you use an in-network doctor or hospital service. If you haven't met the deductible and you go in, they'll send you a bill--but that bill will still be much cheaper than if you had gone in without insurance and paid the full list price (often less than half).

But make sure that the high-deductible plan actually has a cheaper monthly payment by an amount that matters. With new regulations of what must be covered, the differences between plans may not end up being very big.

comment by sixes_and_sevens · 2013-08-20T11:46:51.844Z · LW(p) · GW(p)

If you had to group Less Wrong content into eight categories by subject matter, what would those categories be?

Replies from: Emile
comment by Emile · 2013-08-20T13:06:47.802Z · LW(p) · GW(p)
  • Self-improvement, optimal living, life hacks
  • Philosophy
  • Futurism (cryonics, the singularity)
  • Friendly AI and SIAI, I mean, MIRI
  • Maths, Decision Theory, Game theory
  • Meetups
  • General-interest discussion (biased towards the interests of atheist nerds)
  • Meta
Replies from: somervta, Dorikka, palladias
comment by somervta · 2013-08-21T02:05:40.306Z · LW(p) · GW(p)

I would remove meetups, as that isn't really LW content as such.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-08-24T17:20:20.524Z · LW(p) · GW(p)

It would be good to have it in a separate category, though, so you could disappear it from the front page.

comment by Dorikka · 2013-08-20T19:32:46.639Z · LW(p) · GW(p)

For unspecified levels of meta. :P

comment by palladias · 2013-08-25T18:36:32.214Z · LW(p) · GW(p)

I'd subdivide Lifehacks into:

  • debiasing lifehacks - practical ways to subvert/avoid cognitive biases (CoZE exercises, Monday-Tuesday game, etc)
  • non-epistemological lifehacks - domain specific clever ideas (frameworks for chore negotiation, investment strategies, etc)
Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-08-31T12:11:53.055Z · LW(p) · GW(p)

  • epistemic lifehacks
  • general instrumental lifehacks (e.g. how to overcome procrastination)
  • specific instrumental lifehacks (domain-specific)

comment by Paul Crowley (ciphergoth) · 2013-09-02T08:58:52.168Z · LW(p) · GW(p)

I don't understand the graph in Stephen Hsu on Cognitive Genomics - help?

Replies from: gwern
comment by gwern · 2013-09-02T18:12:10.096Z · LW(p) · GW(p)

So to first quote Hsu's description:

This graph displays the number of GWAS hits versus sample size for height, BMI, etc. Once the minimal sample size to discover the alleles of largest impact (large MAF, large effect size) is exceeded, one generally expects a steady accumulation of new hits at lower MAF / effect size. I expect the same sort of progress for g. (MAF = Minor Allele Frequency. Variants that are common in the population are easier to detect than rare variants.)

We can’t predict the sample size required to obtain most of the additive variance for g (this depends on the details of the distribution of alleles), but I would guess that about a million genotypes together with associated g scores will suffice. When, exactly, we will reach this sample size is unclear, but I think most of the difficulty is in obtaining the phenotype data. Within a few years, over a million people will have been genotyped, but probably we will only have g scores for a small fraction of the individuals.

I'll try to explain it in different terms. What you are looking at is a graph of 'results vs effort'. How much work do you have to do to get out some useful results? The importance of this is that it's showing you a visual version of statistical power analysis (introduction).

Ordinary power analysis is about examining the inherent zero-sum trade-offs of power vs sample size vs effect size vs statistical-significance, where you try to optimize each thing for one's particular purpose; so for example, you can choose to have a small (=cheap) sample size and a small Type I (false positives) error rate in detecting a small effect size - as long as you don't mind a huge Type II error rate (low power, false negative, failure to detect real effects).

If you look at my nootropics or sleep experiments, you'll see I do power analysis all the time as a way of understanding how big my experiments need to be before they are not worthlessly uninformative; if your sample size is too small, you simply won't observe anything, even if there really is an effect (eg. you might conclude, 'with such a small n as 23, at the predicted effect size and the usual alpha of 0.05, our power will be very low, like 10%, so the experiment would be a waste of time').
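For instance, a minimal version of that sort of calculation, using a normal approximation to a two-sample comparison (the effect size and sample sizes here are purely illustrative, not the numbers from any of my actual experiments):

```python
# Approximate power of a two-sided two-sample test for a standardized
# effect size (Cohen's d), via the normal approximation.
from math import sqrt
from scipy.stats import norm

def approx_power(effect_size, n_per_group, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = effect_size * sqrt(n_per_group / 2)
    return 1 - norm.cdf(z_crit - noncentrality)

print(approx_power(0.3, 23))    # small effect, n=23 per group: ~17% power
print(approx_power(0.3, 350))   # same effect, n=350 per group: ~98% power
```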

Even though we know intelligence is very influenced by genes, you can't find 'the genes for intelligence' by looking at just 10 people - but how many do you need to look at?

In the case of the graph, the statistical-significance is hardwired & the effect sizes are all known to be small, and we ignore power, so that leaves two variables: sample size and number of null-rejection/findings. The graph shows us simply that as we get a larger sample, we can successfully find more associations (because we have more power to get a subtle genetic effect to pass our significance cutoffs). Simple enough. It's not news to anyone that the more data you collect, the more results you get.

What's useful here is that the slope of the points is encoding the joint relationship of power & significance & effect size for genetic findings, so we can simply vary sample size and spit out estimated number of findings. The intercept remains uncertain, though. What Hsu finds so important about this graph is that it lets us predict for intelligence how many hits we will get at any sample size once we have a datapoint which then nails down a unique line. What's the datapoint? Well, he mentions the very interesting recent findings of ~3 associations - which happened at n=126k. So to plot this IQ datapoint and guessing at roughly where it would go (please pardon my Paint usage):

OK, but how does that let Hsu predict anything? Well, the slope ought to be the same for future IQ findings, since the procedures are basically the same. So all we have to do is guess at the line, and anchor it on this new finding:

So if you want to know what we'll find at 200000 samples, you extend the line and it looks like we'll have ~10 SNPs at that point. Or, if you wanted to know when we'll have found 100 SNPs for intelligence, you simply continue extending the line until it reaches 100 on the y-axis, which apparently Hsu thinks will happen somewhere around 1000000 on the x-axis (which extends off the screen because no one has collected that big a sample yet for anything else, much less intelligence).
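Mechanically, that extrapolation is just fitting a power law through the anchor point. A sketch of it (the anchor of 3 hits at n=126,000 comes from the text above; the slope is an assumed value chosen so the line roughly lands on the ~1,000,000 guess, not something read off the actual graph):

```python
# Mechanics of the log-log extrapolation described above.

anchor_n, anchor_hits = 126_000, 3
slope = 1.7  # assumed log-log slope, for illustration only

def predicted_hits(n):
    return anchor_hits * (n / anchor_n) ** slope

def sample_size_for(hits):
    return anchor_n * (hits / anchor_hits) ** (1 / slope)

print(round(predicted_hits(200_000)))   # ~7 hits at n = 200k (eyeballing the line gives ~10)
print(round(sample_size_for(100), -3))  # ~991,000 samples for 100 hits
```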

I hope that helps; if you don't understand power, it might help to look at my own little analyses where the problem is usually much simpler.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-09-03T12:19:50.346Z · LW(p) · GW(p)

Many thanks for this!

So in broad strokes: the smaller a correlation is, the more samples you're going to need to detect it, so the more samples you take, the more correlations you can detect. For five different human variables, this graph shows number of samples against number of correlations detected with them on a log/log scale; from that we infer that a similar slope is likely for intelligence, and so we can use it to take a guess at how many samples we'll need to find some number of SNPs for intelligence. Am I handwaving in the right direction?

Replies from: gwern
comment by gwern · 2013-09-03T15:26:54.510Z · LW(p) · GW(p)

so the more samples you take, the more correlations you can detect.

Yes, although I'd phrase this more as 'the more samples you take, the bigger your "budget", which you can then spend on better estimates of a single variable or if you prefer, acceptable-quality estimates of several variables'.

Which one you want depends on what you're doing. Sometimes you want one variable, other times you want more than one variable. In my self-experiments, I tend to spend my entire budget on getting good power on detecting changes in a single variable (but I could have spent my data budget in several ways: on smaller alphas or smaller effect sizes or detecting changes to multiple variables). Genomics studies like these, however, aren't interested so much in singling out any particular gene and studying it in close detail as in finding 'any relevant gene at all, and as many as possible'.
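
A toy illustration of spending the same budget two ways (my numbers, reusing the power.t.test example that appears later in this thread): splitting alpha Bonferroni-style across 10 variables buys control over many comparisons at the price of per-variable power.

R> power.t.test(n=40, delta=0.5, sig.level=0.05)       # whole alpha on one variable: power ~ 0.60
R> power.t.test(n=40, delta=0.5, sig.level=0.05/10)    # alpha split across 10 variables: power ~ 0.28 each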

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-09-03T16:01:06.398Z · LW(p) · GW(p)

And there's a "budget" because if you "double-spend", you end up with the XKCD green acne jelly beans?

Replies from: gwern
comment by gwern · 2013-09-03T16:38:47.964Z · LW(p) · GW(p)

Eh, I'm not sure the idea of 'double-spending' really applies here. In the multiple comparisons case, you're spending all your budget on detecting the observed effect size and getting high-power/reducing-Type-II-errors (if there's an effect lurking there, you'll find it!), but you then can't buy as much Type I error reduction as you want.

This could be fine in some applications. For example, when I'm A/B testing visual changes to gwern.net, I don't care if I commit a Type I error, because if I replace one doohickey with another doohickey and they work equally well (the null hypothesis), all I've lost is a little time. I'm worried about coming up with an improvement, testing the improvement, and mistakenly believing it isn't an improvement when actually it is.

The problem with multiple comparisons comes when people don't realize they've used up their budget and they believe they really have controlled alpha errors at 5% or whatever. When they think they've had their cake & ate it too.

I guess a better financial analogy would be more like "you spend all your money on the new laptop you need for work, but not having checked your bank account balance, promise to take your friends out for dinner tomorrow"?

Replies from: Lumifer
comment by Lumifer · 2013-09-03T17:27:53.897Z · LW(p) · GW(p)

I am a bit confused -- is the framework for this thread observation (where the number of samples is pretty much the only thing you can affect pre-analysis) or experiment design (where you can greatly affect which data you collect)?

I ask because I'm intrigued by the idea of trading off Type I errors against Type II errors, but I'm not sure it's possible in the observation context without introducing bias.

Replies from: gwern
comment by gwern · 2013-09-03T18:57:26.890Z · LW(p) · GW(p)

I'm not sure about this observation vs experiment design dichotomy you're thinking of. I think of power analysis as something which can be done both before an experiment to design it and understand what the data could tell one, and post hoc, to understand why you did or did not get a result and to estimate things for designing the next experiment.

Replies from: Lumifer
comment by Lumifer · 2013-09-03T19:20:53.794Z · LW(p) · GW(p)

Well, I think of statistical power as the ability to distinguish signal from noise. If you expect signal of a particular strength you need to find ways to reduce the noise floor to below that strength (typically through increasing sample size).

However, my standard way of thinking about this is: we have data, we build a model, we evaluate how good the model output is. Building a model, say, via some sort of maximum likelihood, gives you "the" fitted model with specific chances to commit a Type I or a Type II error. But can you trade off chances of Type I errors against chances of Type II errors other than through crudely adding bias to the model output?

Replies from: gwern
comment by gwern · 2013-09-03T19:28:38.136Z · LW(p) · GW(p)

But can you trade off chances of Type I errors against chances of Type II errors other than through crudely adding bias to the model output?

Model-building seems like a separate topic. Power analysis is for particular approaches, where I certainly can trade off Type I against Type II. Here's a simple example for a two-group t-test, where I accept a higher Type I error rate and immediately see my Type II go down (power go up):

R> power.t.test(n=40, delta=0.5, sig.level=0.05)

     Two-sample t test power calculation 

              n = 40
          delta = 0.5
             sd = 1
      sig.level = 0.05
          power = 0.5981
    alternative = two.sided

NOTE: n is number in *each* group

R> power.t.test(n=40, delta=0.5, sig.level=0.10)

     Two-sample t test power calculation 

              n = 40
          delta = 0.5
             sd = 1
      sig.level = 0.1
          power = 0.7163
    alternative = two.sided

NOTE: n is number in *each* group

In exchange for accepting 10% Type I rather than 5%, I see my Type II fall from 1-0.60=40% to 1-0.72=28%. Tada, I have traded off errors and as far as I know, the t-test remains exactly as unbiased as it ever was.

Replies from: Lumifer
comment by Lumifer · 2013-09-03T20:10:21.275Z · LW(p) · GW(p)

I am not explaining myself well. Let me try again.

To even talk about Type I / II errors you need two things -- a hypothesis or a prediction (generally, output of a model, possibly implicit) and reality (unobserved at prediction time). Let's keep things very simple and deal with binary variables: say we have an object foo and we want to know whether it belongs to class bar (or does not belong to it). We have a model, maybe simple and even trivial, which, when fed the object foo, outputs the probability of it belonging to class bar. Let's say this probability is 92%.

Now, at this point we are still in the probability land. Saying that "foo belongs to class bar with a probability of 92%" does not subject us to Type I / II errors. It's only when we commit to the binary outcome and say "foo belongs to class bar, full stop" that they appear.

The point is that in probability land you can't trade off Type I error against Type II -- you just have the probability (or a full distribution in the more general case). It's the commitment to a certain outcome on the basis of an arbitrarily picked threshold that gives rise to them. And if so, it is that threshold (e.g. traditionally 5%) that determines the trade-off between errors. Changing the threshold changes the trade-off, but this doesn't affect the model and its output; it's all post-prediction interpretation.
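
A minimal toy sketch of that point (my own example, with arbitrary score distributions): the model's probabilistic output is left untouched, and the two error rates only materialize, and trade off against each other, once a cutoff is chosen.

R> # suppose the model's scores for non-members of bar are ~ N(0,1) and for members ~ N(1,1)
R> cutoffs <- c(0.2, 0.5, 0.8)
R> typeI  <- 1 - pnorm(cutoffs, mean=0, sd=1)   # false positives: non-members scoring above the cutoff
R> typeII <- pnorm(cutoffs, mean=1, sd=1)       # false negatives: members scoring below the cutoff
R> round(data.frame(cutoff=cutoffs, typeI, typeII), 3)
R> # raising the cutoff lowers Type I and raises Type II; the fitted model itself never changes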

Replies from: gwern
comment by gwern · 2013-09-03T21:39:24.497Z · LW(p) · GW(p)

So you're trying to talk about overall probability distributions in a Bayesian framework? I haven't ever done power analysis with that approach, so I don't know what would be analogous to Type I and II errors and whether one can trade them off; in fact, the only paper I can recall discussing how one does it is Kruschke's paper (starting on pg11) - maybe he will be helpful?

Replies from: Lumifer
comment by Lumifer · 2013-09-04T01:10:28.634Z · LW(p) · GW(p)

Not necessarily in the Bayesian framework, though it's kinda natural there. You can think in terms of complete distributions within the frequentist framework perfectly well, too.

The issue that we started with was of statistical power, right? While it's technically defined in terms of the usual significance (=rejecting the null hypothesis), you can think about it in broader terms. Essentially it's the capability to detect a signal (of certain effect size) in the presence of noise (in certain amounts) with a given level of confidence.

Thanks for the paper; I've seen it before but didn't have a handy link to it.

Replies from: gwern
comment by gwern · 2013-09-04T17:13:44.449Z · LW(p) · GW(p)

You can think in terms of complete distributions within the frequentist framework perfectly well, too.

Does anyone do that, though?

Essentially it's the capability to detect a signal (of certain effect size) in the presence of noise (in certain amounts) with a given level of confidence.

Well, if you want to think of it like that, you could probably formulate all of this in information-theoretic terms and speak of needing a certain number of bits; then the sample size & effect size interact to say how many bits each n contains. So a binary variable contains a lot less than a continuous variable, a shift in a rare observation like 90/10 is going to be harder to detect than a shift in a 50/50 split, etc. That's not stuff I know a lot about.
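
One hedged way to put numbers on the binary-variable point (my own example, reading 'shift' as the same relative change in the proportion) is base R's power.prop.test; the rare-event version needs roughly ten times the data:

R> power.prop.test(p1=0.50, p2=0.55, power=0.8)   # ~1,600 per group to detect a 10% relative shift at 50/50
R> power.prop.test(p1=0.10, p2=0.11, power=0.8)   # ~15,000 per group for the same relative shift at 90/10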

Replies from: Lumifer
comment by Lumifer · 2013-09-04T17:44:30.334Z · LW(p) · GW(p)

Does anyone do that, though?

Well, sure. The frequentist approach, aka mainstream statistics, deals with distributions all the time and the arguments about particular tests or predictions being optimal, or unbiased, or asymptotically true, etc. are all explicitly conditional on characteristics of underlying distributions.

Well, if you want to think of it like that, you could probably formulate all of this in information-theoretic terms and speak of needing a certain number of bits;

Yes, something like that. Take a look at Fisher information, e.g. "The Fisher information is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ upon which the probability of X depends."

comment by [deleted] · 2013-08-23T23:45:51.075Z · LW(p) · GW(p)

This essay on internet forum behavior by the people behind Discourse is the greatest thing I've seen in the genre in the past two or three years. It rivals even some of the epic examples of wikipedian rule-lawyering that I've witnessed.

Their aggregation of common internet forum rules could have been done by anyone, but it was ultimately they that did it. My confidence in Discourse's success has improved.

Replies from: David_Gerard
comment by David_Gerard · 2013-08-24T16:14:08.083Z · LW(p) · GW(p)

"Don't be a dick" is now "Wheaton's law"? Pfeh!

comment by mwengler · 2013-08-21T18:50:29.763Z · LW(p) · GW(p)

We wonder about the moral impact of dust specks in the eyes of 3^^^3 people.

What about dust specks in the eyes of 3^^^3 poodles? Or more to the point, what is the moral cost of killing one person vs one poodle? How many poodles' lives would we trade for the life of one person?

Or even within humans, is it human-years we would count in coming up with moral equivalencies? Do we discount humans that are less smart, on the theory that we almost certainly discount poodles against humans because they are not as smart as us? Do we discount evil humans compared to helpful humans? Discount unproductive humans against productive ones? And what about sims? If it is human-years we count rather than human lives, what of a sim which might be expected to run for more than a trillion subjective years in simulation: does it carry billions of times more moral weight than a single meat human who has precommitted to eschew cryonics or uploading?

And of course I am using poodle as an algebraic symbol to represent any one of many intelligences. Do we discount poodles against humans because they are not as smart, or is there some other measure of how to relate the moral value of a poodle to the moral value of a person? Does a sim (simulated human running in software) count equal to a meat human? Does an earthworm have epsilon<<1 times the worth of a human, or is it identically 0 times the worth of a human?

What about really big smart AI? Would an AI as smart as an entire planet be worth (morally) preserving at the expense of losing one-fifth the human population?

Replies from: wedrifid, David_Gerard
comment by wedrifid · 2013-08-22T02:26:19.368Z · LW(p) · GW(p)

What about dust specks in the eyes of 3^^^3 poodles? Or more to the point, what is the moral cost of killing one person vs one poodle? How many poodles' lives would we trade for the life of one person?

I observe that the answer to the last question is not constrained to be positive.

Replies from: Randy_M
comment by Randy_M · 2013-08-23T15:49:15.394Z · LW(p) · GW(p)

"Letting those people die was worth it, because they took their cursed yapping poodle with them!"

(quote marks to indicate not my actual views)

comment by David_Gerard · 2013-08-21T19:06:01.920Z · LW(p) · GW(p)

Do the nervous systems of 3^^^3 nematodes beat the nervous systems of a mere 7x10^9 humans? If not, why not?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-21T19:55:00.489Z · LW(p) · GW(p)

I believe that I care nothing for nematodes, and that as the nervous systems at hand became incrementally more complicated, I would eventually reach a sharp boundary wherein my degree of caring went from 0 to tiny. Or rather, I currently suspect that an idealized version of my morality would output such.

Replies from: ahbwramc, MugaSofer, David_Gerard, Armok_GoB
comment by ahbwramc · 2013-08-22T23:28:20.175Z · LW(p) · GW(p)

I'm kind of curious as to why you wouldn't expect a continuous, gradual shift in caring. Wouldn't mind design space (which I would imagine your caring to be a function of) be continuous?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-23T00:58:16.274Z · LW(p) · GW(p)

Something going from 0 to 10^-20 is behaving pretty close to continuously in one sense. It is clear that there are some configurations of matter I don't care about at all (like a paperclip), while I do care about other configurations (like twelve-year-old human children), so it is elementary that at some point my utility function must go from 0 to nonzero. The derivative, the second derivative, or even the function itself could easily be discontinuous at this point.

Replies from: Bakkot, Armok_GoB, MugaSofer
comment by Bakkot · 2013-08-24T18:48:08.643Z · LW(p) · GW(p)

The derivative, the second derivative, or even the function itself could easily be discontinuous at this point.

But needn't be! See for example f(x) = exp(-1/x) (x > 0), 0 (x ≤ 0).

Wikipedia has an analysis.

(Of course, the space of objects isn't exactly isomorphic to the real line, but it's still a neat example.)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-24T19:11:47.704Z · LW(p) · GW(p)

Agreed, but it is not obvious to me that my utility function needs to be differentiable at that point.

comment by Armok_GoB · 2013-08-27T20:09:04.335Z · LW(p) · GW(p)

I dispute that; the paperclip is almost certainly either more or less likely to become a Boltzmann brain than an equivalent volume of vacuum.

comment by MugaSofer · 2013-08-23T15:57:07.732Z · LW(p) · GW(p)

It is clear that there are some configurations of matter I don't care about at all (like a paperclip), while I do care about other configurations (like twelve-year-old human children), so it is elementary that at some point my utility function must go from 0 to nonzero.

And ... it isn't clear that there are some configurations you care for ... a bit? Sparrows being tortured and so on? You don't care more about dogs than insects and more for chimpanzees than dogs?

(I mean, most cultures have a Great Chain Of Being or whatever, so surely I haven't gone dreadfully awry in my introspection ...)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-23T18:46:54.600Z · LW(p) · GW(p)

This is not incompatible with what I just said. It goes from 0 to tiny somewhere, not from 0 to 12-year-old.

Replies from: shminux
comment by Shmi (shminux) · 2013-08-23T18:59:24.174Z · LW(p) · GW(p)

Can you bracket this boundary reasonably sharply? Say, mosquito: no, butterfly: yes?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-23T20:34:30.089Z · LW(p) · GW(p)

No, but I strongly suspect that all Earthly life without frontal cortex would be regarded by my idealized morals as a more complicated paperclip. There may be exceptions and I have heard rumors that octopi pass the mirror test, and I will not be eating any octopus meat until that is resolved, because even in a world where I eat meat because optimizing my diet is more important and my civilization lets me get away with it, I do not eat anything that recognizes itself in a mirror. So a spider is a definite no, a chimpanzee is an extremely probable yes, a day-old human infant is an extremely probable no but there are non-sentience-related causes for me to care in this case, and pigs I am genuinely unsure of.

Replies from: Eliezer_Yudkowsky, fubarobfusco, Emile
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-24T00:54:16.197Z · LW(p) · GW(p)

To be clear, I am unsure if pigs are objects of value, which incorporates both empirical uncertainty about their degree of reflectivity, philosophical uncertainty about the precise relation of reflectivity to degrees of consciousness, and ethical uncertainty about how much my idealized morals would care about various degrees of consciousness to the extent I can imagine that coherently. I can imagine that there's a sharp line of sentience which humans are over and pigs are under, and imagine that my idealized caring would drop to immediately zero for anything under the line, but my subjective probability for both of these being simultaneously true is under 50% though they are not independent.

However it is plausible to me that I would care exactly zero about a pig getting a dust speck in the eye... or not.

comment by fubarobfusco · 2013-08-25T23:37:53.018Z · LW(p) · GW(p)

Does it matter to you that octopuses are quite commonly cannibalistic?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-26T01:05:31.911Z · LW(p) · GW(p)

No. Babyeater lives are still important.

Replies from: MugaSofer, shminux
comment by MugaSofer · 2013-08-26T17:18:02.151Z · LW(p) · GW(p)

Funny, I parsed that as "should we then maybe be capturing them all to stop them eating each other?"

Didn't even occur to me that was an argument about extrapolated octopus values.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-26T19:38:42.502Z · LW(p) · GW(p)

It wasn't, your first parse would be a correct moral implication. The Babyeaters must be stopped from eating themselves.

Replies from: MugaSofer
comment by MugaSofer · 2013-08-26T21:36:18.460Z · LW(p) · GW(p)

... whoops.

I meant I parsed fubarobfusco's comment differently to you, ("they want to be cannibals, therefore it's ... OK to eat them? Somehow?"), because I just assumed that obviously you should save the poor octopi (i.e. it would "bother" you in the sense of moral anguish, not "betcha didn't think of this!")

comment by Shmi (shminux) · 2013-08-26T03:11:48.249Z · LW(p) · GW(p)

I was unable to empathize with this view when reading 3WC. To me the Prime Directive approach makes more sense. I was willing to accept that the Superhappies have an anti-suffering moral imperative, since they are aliens with their alien morals, but that all the humans on the IPW or even its bridge officers would be unanimous in their resolute desire to end suffering of the Babyeater children strained my suspension of disbelief more than no one accidentally or intentionally making an accurate measurement of the star drive constant.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-08-31T12:10:07.934Z · LW(p) · GW(p)

To me the Prime Directive approach makes more sense.

As an example outside of sci-fi, if you see an abusive husband and a brainwashed battered wife, the Prime Directive tells you to ignore the whole situation, because they both think it's more or less okay that way. Would you accept this consequence?

Would it make a moral difference if the husband and wife were members of a different culture; if they were humans living on a different planet; or if they belonged to a different sapient species?

Replies from: shminux
comment by Shmi (shminux) · 2013-08-31T18:52:26.707Z · LW(p) · GW(p)

The idea behind the PD is that for foreign enough cultures

  • you can't predict the consequences of your intervention with a reasonable certainty,

  • you can't trust your moral instincts to guide you to do the "right" thing

  • the space of all favorable outcomes is likely much smaller than that of all possible outcomes, like in the literal genie case

  • so you end up acting like a UFAI more likely than not.

Hence non-intervention has a higher expected utility than an intervention based on your personal deontology or virtue ethics. This is not true for sufficiently well analyzed cases, like abuse in your own society. The farther you stray from the known territory, the more chances that your intervention will be a net negative. Human history is rife with examples of this.

So, unless you can do a full consequentialist analysis of applying your morals to an alien culture, keep the hell out.

comment by Emile · 2013-08-25T12:11:03.483Z · LW(p) · GW(p)

I do not eat anything that recognizes itself in a mirror.

Assuming pigs were objects of value, would that make it morally wrong to eat them? Unlike octopi, most pigs exist because humans plan on eating them, so if a lot of humans stopped eating pigs, there would be fewer pigs, and the life of the average pig might not be much better.

(this is not a rhetorical question)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-25T19:16:08.169Z · LW(p) · GW(p)

Yes. If pigs were objects of value, it would be morally wrong to eat them, and indeed the moral thing to do would be to not create them.

Replies from: Vladimir_Nesov, drethelin
comment by Vladimir_Nesov · 2013-08-25T20:48:14.557Z · LW(p) · GW(p)

This needs a distinction between the value of creating pigs, existence of living pigs, and killing of pigs. If existing pigs are objects of value, but the negative value of killing them (of the event itself, not of the change in value between a living pig and a dead one) doesn't outweigh the value of their preceding existence, then creating and killing as many pigs as possible has positive value (relative to noise; with opportunity cost the value is probably negative, there are better things to do with the same resources; by the same token, post-FAI the value of "classical" human lives is also negative, as it'll be possible to make significant improvements).

comment by drethelin · 2013-08-25T21:05:59.069Z · LW(p) · GW(p)

I don't think it's morally wrong to eat people, if they happen to be in irrecoverable states

comment by MugaSofer · 2013-08-23T15:40:07.936Z · LW(p) · GW(p)

... really?

Um, that strikes me as very unlikely. Could you elaborate on your reasoning?

comment by David_Gerard · 2013-08-21T22:22:52.781Z · LW(p) · GW(p)

But zero is not a probability.

Edit: Adele_L is right, I was confusing utilities and probabilities.

Replies from: Adele_L, MugaSofer
comment by Adele_L · 2013-08-22T00:04:33.562Z · LW(p) · GW(p)

Zero is a utility, and utilities can even be negative (i.e. if Eliezer hated nematodes).

comment by MugaSofer · 2013-08-23T15:40:50.595Z · LW(p) · GW(p)

... are you pointing out that there is a nonzero probability that Eliezer's CEV actually cares about nematodes?

Replies from: David_Gerard
comment by David_Gerard · 2013-08-24T16:15:40.074Z · LW(p) · GW(p)

No, Adele_L is right, I was confusing utilities and probabilities.

comment by Armok_GoB · 2013-08-27T20:04:25.707Z · LW(p) · GW(p)

Keyword here is believe. What probability do you assign?

And if you say epsilon or something like that, is the epsilon bigger or smaller than 1/(3^^^3/10^100)?

comment by Salemicus · 2013-08-20T21:29:43.986Z · LW(p) · GW(p)

I've got an (IMHO) interesting discussion article written up, but I am unable to post it; I get a "webpage cannot be found" error when I try. I'm using IE 9. Is this a known issue, or have I done something wrong?

Replies from: gwern
comment by gwern · 2013-08-20T21:51:48.683Z · LW(p) · GW(p)

Have you tried searching the LW bugtracker or using a different browser?

Replies from: Salemicus
comment by Salemicus · 2013-08-20T22:24:54.716Z · LW(p) · GW(p)

Thank you for this suggestion. I have discovered that this works in Chrome.

comment by [deleted] · 2013-08-20T16:00:09.724Z · LW(p) · GW(p)

Here's a question that's been distracting me for the last few hours, and I want to get it out of my head so I can think about something else.

You're walking down an alley after making a bank withdrawal of a small sum of money. Just about when you realize this may have been a mistake, two Muggers appear from either side of the alley, blocking trivial escapes.

Mugger A: "Hi there. Give me all of that money or I will inflict 3^^^3 disutility on your utility function."

Mugger B: "Hi there. Give me all of that money or I will inflict maximum disutility on your utility function."

You: "You're working together?"

Mugger A: "No, you're just really unlucky."

Mugger B: "Yeah, I don't know this guy."

You: "But I can't give both of you all of this money!"

Mugger A: "Tell you what. You're having a horrible day, so if you give me half your money, I'll give you a 50% chance of avoiding my 3^^^3 disutility. And if you give me a quarter of your money, I'll give you a 25% chance of avoiding my 3^^^3 disutility. Maybe the other Mugger will let you have the same kind of break. Sound good to you, other Mugger?"

Mugger B: "Works for me. Start paying."

You: Do what, exactly?

I can see at least 4 vaguely plausible answers:

Pay Mugger A: 3^^^3 disutility is likely going to be more than whatever you think your maximum is, and you want to be as likely as possible to avoid that. You'll just have to try to resist/escape from Mugger B (unless he's just faking).

Pay Mugger B: Maximum disutility is by definition greater than or equal to any other disutility, so at least as bad as 3^^^3, and has probably happened to at least a few people with utility functions (although probably NOT to a 3^^^3 extent), so it's a serious threat and you want to be as likely as possible to avoid it. You'll just have to try to resist/escape from Mugger A (unless he's just faking).

Pay both Muggers a split of the money: For example: If you pay half to each, and they're both telling the truth, you have a 25% chance of not getting either disutility and not having to resist/escape at all (unless one or both is faking, which may improve your odds.)

Don't Pay: This option seems generally less appealing than in a normal Pascal's mugging, since there are no clear escape routes and you're outnumbered, so there is at least some real threat unless they're both faking.

The problem is, I can't seem to justify any of my vaguely plausible answers to this conundrum well enough to stop thinking about it. Which makes me wonder if the question is ill-formed in some way.

Thoughts?

Replies from: Emile, sixes_and_sevens, None, Armok_GoB
comment by Emile · 2013-08-20T17:21:31.336Z · LW(p) · GW(p)

I may be fighting the hypothetical here, but ...

If utility is unbounded, maximum disutility is undefined; and if it's bounded, then 3^^^3 disutility is by definition no greater than the maximum, so you should pay all to mugger B.

Pay both Muggers a split of the money: For example: If you pay half to each, and they're both telling the truth, you have a 25% chance of not getting either disutility and not having to resist/escape at all (unless one or both is faking, which may improve your odds.)

I think trading a 10% chance of utility A for a 10% chance of utility B, with B < A, is irrational per the definition of utility (as far as I understand: you can have diminishing marginal utility on money, but not diminishing marginal utility on utility itself. I'm less sure about risk aversion, though.)

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-08-20T18:14:05.048Z · LW(p) · GW(p)

That's not fighting the hypothetical. Fighting the hypothetical is first paying one, then telling the other you'll go back to the bank to pay him too. Or pulling out your kung fu skills, which is really fighting the hypothetical.

comment by sixes_and_sevens · 2013-08-20T16:34:49.469Z · LW(p) · GW(p)

If you have some concept of "3^^^3 disutility" as a tractable measure of units of disutility, it seems unlikely you don't also have a reasonable idea of the upper and lower bounds of your utility function. If the values are known this becomes trivial to solve.

I am becoming increasingly convinced that VNM-utility is a poor tool for ad-hoc decision-theoretics, not because of dubious assumptions or inapplicability, but because finding corner-cases where it appears to break down is somehow ridiculously appealing.

comment by [deleted] · 2013-08-20T16:25:10.747Z · LW(p) · GW(p)

If they're both telling the truth: since B gives maximum disutility, being mugged by both is no worse than being mugged by B. If you think your maximum disutility is X*3^^^3, I think if you run the numbers you should give a fraction X/2 to B, and the rest to A. (or all to B if X>2)

If they might be lying, you should probably ignore them. Or pay B, whose threat is more credible if you don't think your utility function goes as far as 3^^^3 (although, what scale? Maybe a dust speck is 3^^^^3)

comment by Armok_GoB · 2013-08-27T20:33:08.072Z · LW(p) · GW(p)

Give it all to mugger B obviously. I almost certainly am experiencing -3^^^3 utilons according to almost any measure every millisecond anyway, given I live in a Big World.

comment by Shmi (shminux) · 2013-08-26T18:26:39.082Z · LW(p) · GW(p)

I wonder if it makes sense to have something like a registry of the LW regulars who are experts in certain areas. For example, this forum has a number of trained mathematicians, philosophers, computer scientists...

Something like a table containing [nick, general area, training/credentials, area of interest, additional info (e.g. personal site)], maybe?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-08-31T12:36:36.668Z · LW(p) · GW(p)

On a wiki page. Allowing anyone to opt out.

The first step would be to gather data... probably in an article made for this purpose... or in a fresh open thread.

comment by Document · 2013-08-25T08:32:34.619Z · LW(p) · GW(p)

This is unrelated to rationality, but I'm posting it here in case someone decides it serves their goals to help me be more effective in mine.

I recently bought a computer, used it for a while, then decided I didn't want it. What's the simplest way to securely wipe the hard drive before returning it? Is it necessary to create an external boot volume (via USB or optical disc)?

Replies from: tut
comment by tut · 2013-08-25T13:01:01.527Z · LW(p) · GW(p)

Probably use dban.

Replies from: Document, Document
comment by Document · 2013-08-27T08:16:13.624Z · LW(p) · GW(p)

How should I answer this dialog? The help link at the bottom was unhelpful.

Replies from: tut
comment by tut · 2013-08-27T09:34:08.256Z · LW(p) · GW(p)

I used the second option, but it would surprise me if it didn't work either way.

Replies from: Document
comment by Document · 2013-09-01T17:28:54.448Z · LW(p) · GW(p)

Seems to have worked; thanks.

comment by Document · 2013-08-25T17:10:21.363Z · LW(p) · GW(p)

Thanks; I'll try it. (I should have mentioned that it was a Windows 8 PC, but your link mentions working under Windows, so thanks again.)

Replies from: tut
comment by tut · 2013-08-26T13:07:46.077Z · LW(p) · GW(p)

It doesn't work under any operating system; it has its own very simple OS on the CD.

Replies from: Document
comment by Document · 2013-08-26T19:29:09.901Z · LW(p) · GW(p)

Good point; not sure what I was thinking. I could have said something about the CPU and BIOS(?), but for now I'll just see if it works.

(Edit: seems to have worked; thanks.)

comment by ahbwramc · 2013-08-24T03:14:35.752Z · LW(p) · GW(p)

I don't suppose there are any regularly scheduled LW meetups in San Diego, are there? I'll be there this week from Saturday to Wednesday for a conference.

comment by closeness · 2013-08-23T13:50:13.507Z · LW(p) · GW(p)

How can I apply rationality to business?

Replies from: wedrifid
comment by wedrifid · 2013-08-23T15:47:16.490Z · LW(p) · GW(p)

How can I apply rationality to business?

  • Avoid sunk costs.
  • If stuff doesn't work figure out why and (in most cases) do different stuff.
  • When predicting how long a project will take consider how long similar tasks tend to take and use that as a (rather strong) guide.
comment by linkhyrule5 · 2013-08-21T21:15:53.101Z · LW(p) · GW(p)

Has anyone done a study on redundant information in languages?

I'm just mildly curious, because a back-of-the-envelope calculation suggests that English is about 4.7x redundant - which on a side note explains how we can esiayl regnovze eevn hrriofclly msispled wrods.

(Actually, that would be an interesting experiment - remove or replace fraction x of the letters in a paragraph and see at what average x participants can no longer make a "corrected" copy.)
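
If anyone wants to actually run it, here is a rough R sketch (mine, not a tested protocol) for generating the stimuli, blanking out a random fraction x of the letters while leaving spaces and punctuation alone:

degrade <- function(text, x) {
    chars <- strsplit(text, "")[[1]]
    idx   <- grep("[A-Za-z]", chars)              # positions of the letters
    hit   <- sample(idx, round(length(idx) * x))  # pick a random fraction x of them
    chars[hit] <- "_"
    paste(chars, collapse = "")
}
degrade("the quick brown fox jumps over the lazy dog", 0.3)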

I'd predict that Chinese is much less redundant in its spoken form, and that I have no idea how to measure redundancy in its written form. (By stroke? By radical?)

Replies from: gwern, wedrifid
comment by gwern · 2013-08-21T22:05:32.478Z · LW(p) · GW(p)

Yes, it's been studied quite a bit by linguists. You can find some pointers in http://www.gwern.net/Notes#efficient-natural-language which may be helpful.

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-08-21T22:51:54.012Z · LW(p) · GW(p)

Thanks.

... huh. Now I'm thinking about actually doing that experiment...

Replies from: gwern, palladias
comment by gwern · 2013-08-22T21:47:32.581Z · LW(p) · GW(p)

I ran into another thing in that vein:

To measure the artistic merit of texts, Kolmogorov also employed a letter-guessing method to evaluate the entropy of natural language. In information theory, entropy is a measure of uncertainty or unpredictability, corresponding to the information content of a message: the more unpredictable the message, the more information it carries. Kolmogorov turned entropy into a measure of artistic originality. His group conducted a series of experiments, showing volunteers a fragment of Russian prose or poetry and asking them to guess the next letter, then the next, and so on. Kolmogorov privately remarked that, from the viewpoint of information theory, Soviet newspapers were less informative than poetry, since political discourse employed a large number of stock phrases and was highly predictable in its content. The verses of great poets, on the other hand, were much more difficult to predict, despite the strict limitations imposed on them by the poetic form. According to Kolmogorov, this was a mark of their originality. True art was unlikely, a quality probability theory could help to measure.

--The Man Who Invented Modern Probability - Issue 4: The Unlikely - Nautilus

Replies from: JQuinton, linkhyrule5
comment by JQuinton · 2013-08-23T20:41:16.597Z · LW(p) · GW(p)

The verses of great poets, on the other hand, were much more difficult to predict, despite the strict limitations imposed on them by the poetic form. According to Kolmogorov, this was a mark of their originality. True art was unlikely, a quality probability theory could help to measure.

This also happens to me with music. I enjoy "unpredictable" music more than predictable music. Knowing music theory I know which notes are supposed to be played -- if a song is in a certain key -- and if a note or chord isn't predicted then it feels a bit more enjoyable. I wonder if the same technique could be applied to different genres of music with the same result, i.e. radio-friendly pop music vs non-mainstream music.

comment by linkhyrule5 · 2013-08-22T23:21:46.844Z · LW(p) · GW(p)

I wonder what that metric has to say about Finnigan's Wake...

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-08-23T07:47:53.179Z · LW(p) · GW(p)

By other metrics, Joyce became less compressible throughout his life. Going closer to the original metric, you demonstrate that the title is hard to compress (especially the lack of apostrophe).

comment by palladias · 2013-08-25T18:37:46.916Z · LW(p) · GW(p)

If you do, please post about it!

comment by wedrifid · 2013-08-22T02:33:55.891Z · LW(p) · GW(p)

(Actually, that would be an interesting experiment - remove or replace fraction x of the letters in a paragraph and see at what average x participants can no longer make a "corrected" copy.)

Studies of this form have been done at least on the edge case where all the material removed is from the end (ie. tests of the ability of subjects to predict the next letter in an English text). I'd be interested to see your more general test but am not sure if it has been done. (Except, perhaps, as a game show).

comment by Adele_L · 2013-08-19T23:39:54.168Z · LW(p) · GW(p)

Consider the following scenario. Suppose that it can be shown that the laws of physics imply that if we do a certain action (costing 5 utils to perform), then in 1/googol of our descendant universes, 3^^^3 utils can be generated. Intuitively, it seems that we should do this action! (at least to me) But this scenario also seems isomorphic to a Pascal's mugging situation. What is different?

If I attempt to describe the thought process that leads to these differences, it seems to be something like this. What is the measure of the causal descendants where 3^^^3 utils are generated? In typical Pascal's mugging, I expect there to be absolutely zero causal descendants where 3^^^3 utils are generated, but in this example, I expect there to be "1/googol" such causal descendants, even though the subjective probability of these two scenarios is roughly the same. I then do my expected utility maximization with (# of utils)(best guess of my measure) instead of (# of utils)(subjective probability), which seems to match my intuitions better, at least.

But this also just seems like I am passing the buck to the subjective probability of a certain model of the universe, and that this will suffer from the mugging problem as well.

So does thinking about it this way add anything, or is it just more confusing?

Replies from: Armok_GoB
comment by Armok_GoB · 2013-08-27T20:22:33.528Z · LW(p) · GW(p)

You can't pay for things in Utils; you can only pay for them in Opportunities.

This is where Pascal's mugging goes wrong as well; the only reason not to give Pascal's mugger the money is the possibility of an even greater opportunity coming along later: a mugger that's more credible, and/or offers an even greater potential payoff. (And once any mugger offers INFINITE utility, there's only credibility left to increase.)

Replies from: Adele_L
comment by Adele_L · 2013-08-27T23:04:17.038Z · LW(p) · GW(p)

That doesn't work, because the expected value of things that you should do, e.g. donating to an effective charity, is far lower than the expected value of a Pascal's mugging.

Replies from: Armok_GoB
comment by Armok_GoB · 2013-08-28T19:03:21.459Z · LW(p) · GW(p)

I expect an FAI to have at least 10% probability of acquiring infinite computational power. This means donations to MIRI have infinite expected utility.

comment by JoshuaZ · 2013-08-19T17:07:35.439Z · LW(p) · GW(p)

A new study shows that manipulative behavior could be linked to the development of some forms of altruism. The study itself is unfortunately behind a paywall.

Replies from: somervta, Richard_Kennaway, diegocaleiro
comment by somervta · 2013-08-21T02:02:17.086Z · LW(p) · GW(p)

I have access - PM me if you're interested in it.

comment by Richard_Kennaway · 2013-08-19T21:18:25.647Z · LW(p) · GW(p)

It's about eusocial animals. Human relevance?

Replies from: JoshuaZ
comment by JoshuaZ · 2013-08-19T21:20:09.111Z · LW(p) · GW(p)

Unclear. One could conceive of similar action occurring in highly social species that aren't eusocial but have limited numbers of breeding pairs, but that's not frequently done by primates.

comment by diegocaleiro · 2013-08-20T04:45:48.663Z · LW(p) · GW(p)

Didn't Sci-Hub work to find an unpaid version? It often does... http://sci-hub.org/

Replies from: gwern
comment by gwern · 2013-08-20T15:54:42.264Z · LW(p) · GW(p)

Sci-hub does not work for US users AFAIK.

comment by [deleted] · 2013-08-21T18:08:03.824Z · LW(p) · GW(p)

This paper about AI from Hector J. Levesque seems to be interesting: http://www.cs.toronto.edu/~hector/Papers/ijcai-13-paper.pdf

It extensively discusses something called 'Winograd schema questions'. If you want examples, there is a list here: http://www.cs.nyu.edu/faculty/davise/papers/WS.html

The paper's abstract does a fairly good job of summing it up, although it doesn't explicitly mention Winograd schema questions:

The science of AI is concerned with the study of intelligent forms of behaviour in computational terms. But what does it tell us when a good semblance of a behaviour can be achieved using cheap tricks that seem to have little to do with what we intuitively imagine intelligence to be? Are these intuitions wrong, and is intelligence really just a bag of tricks? Or are the philosophers right, and is a behavioural understanding of intelligence simply too weak? I think both of these are wrong. I suggest in the context of question-answering that what matters when it comes to the science of AI is not a good semblance of intelligent behaviour at all, but the behaviour itself, what it depends on, and how it can be achieved. I go on to discuss two major hurdles that I believe will need to be cleared.

If you have time, this seems worth a read. I started reading other Hector J. Levesque papers because of it.

Edit: Upon searching, I also found some critiques of Levesque's work as well, so looking up opposition to some of these points may also be a good idea.

comment by gwern · 2013-08-25T16:39:09.239Z · LW(p) · GW(p)

I have made it up to episode 5 of Umineko, and I've found one incident in particular unusually easy to resolve (easy enough that though the answer hasn't been suggested by anyone in-game, I am sure that I know how it was/could be done); I'm wondering how much it is due to specialized knowledge and whether it really looks harder to other people. (Because of the curse of knowledge, it's now difficult for me to see whether the puzzle really is as trivial as it looks to me.) So, a little poll, even though LWers are not the best people to ask.


In episode 5, an unknown caller phones Natsuhi in her locked personal room. He says he's predicted her favorite season of the year, and asks her what it really is. She replies 'winter', and he says that is what he predicted. She is skeptical and he tells her to look underneath a clock in her room. She does and finds a slip of paper with the word 'winter' on it: he had been there earlier and left it as proof of his prediction. Natsuhi is shocked and mystified.

How sure are you that you know how he did it?

[pollid:549]

How would you rate your familiarity with cryptography?

[pollid:550]

(Please rot13 any replies.)

Replies from: Richard_Kennaway, ygert, palladias, Adele_L, Alicorn, MugaSofer, NancyLebovitz, beoShaffer, Risto_Saarelma, gjm, Kindly, garethrees, gwern, David_Gerard, tut
comment by Richard_Kennaway · 2013-08-26T08:44:29.721Z · LW(p) · GW(p)

V pna guvax bs guerr jnlf bs qbvat guvf gevpx.

  1. Ur uvq sbhe fyvcf bs cncre, bar sbe rnpu frnfba. Cerfhznoyl ur jvyy erzbir gur bgure guerr ng gur svefg bccbeghavgl.

  2. Ur unf qbar fbzr erfrnepu gb qvfpbire fbzr snpg nobhg ure gb hfr va uvf qrzbafgengvba.

  3. Fur unf hfrq ure snibevgr frnfba nf gur nafjre gb n frphevgl dhrfgvba ba n jro fvgr gung ur unf nqzva-yriry npprff gb.

Gurer znl or bgure jnlf. Jvgu fb znal, V pnaabg or irel fher gung nal fvatyr bar gung V pubbfr vf evtug.

comment by ygert · 2013-08-25T22:12:20.458Z · LW(p) · GW(p)

Guvf "chmmyr" frrzf rnfl gb na rkgerzr, gb zr ng yrnfg. Gur gevivny fbyhgvba jbhyq or gb uvqr nyy gur cbffvoyr nafjref va qvssrerag cynprf, naq bayl gryy ure gb ybbx va gur cynpr jurer ur uvq gur nafjre ur trgf gbyq vf pbeerpg. (Va guvf pnfr, haqre gur pybpx.)

comment by palladias · 2013-08-25T18:40:24.524Z · LW(p) · GW(p)

Cerqvpgvba: Ur chg sbhe fyvcf bs cncre va gur ebbz (r.t. pybpx, grqql orne, fubr, cntr # bs grkgobbx), naq pubfr juvpu bowrpg gb qverpg ure gb onfrq ba ure erfcbafr. Ur'f unir gb erzbir gur bgure guerr fbbavfu, ohg ur boivbhfyl unq npprff bapr, naq vs gurl'er nyy va fhssvpvragyl bofpher cynprf, vg jbhyq or cerggl rnfl

comment by Adele_L · 2013-08-25T20:23:15.338Z · LW(p) · GW(p)

My thought was the same as palladias'. I'm not seeing an obvious way involving cryptography though, but I am somewhat familiar with it (I understand RSA and its proof).

Replies from: gwern
comment by gwern · 2013-08-25T20:29:11.253Z · LW(p) · GW(p)

Zl crefbany guvaxvat jnf "Bar bs gur rnfvrfg jnlf gb purng n pelcgbtencuvp unfu cerpbzzvgzrag vf gb znxr zhygvcyr fhpu unfurf naq fryrpgviryl erirny n fcrpvsvp bar nf nccebcevngr; gur punenpgre unf irevsvnoyl cerpbzzvggrq gb n cnegvphyne cerqvpgvba bs 'jvagre', ohg unf ur irevsvnoyl cerpbzvggrq gb bayl bar cerqvpgvba?"

(Nqzvggrqyl V unir orra guvaxvat nobhg unfu cerpbzzvgzragf zber guna hfhny orpnhfr V unir n ybat-grez cebwrpg jubfr pbapyhfvba vaibyirf unfu cerpbzzvgzragf naq V qba'g jnag gb zvfhfr gurz be yrnir crbcyr ebbz sbe bowrpgvba.)

Replies from: palladias, saturn
comment by palladias · 2013-08-25T23:37:24.495Z · LW(p) · GW(p)

V qvqa'g guvax ng nyy nobhg unfurf (naq V qba'g unir zhpu rkcrevrapr jvgu gurz rkprcg n ovg bs gurbel). V whfg ena 'jung jbhyq V qb jvgu npprff gb gur ebbz nurnq bs gvzr naq jung qb V xabj?' naq bhg cbccrq sbhe furrgf bs cncre.

comment by saturn · 2013-08-27T16:52:24.722Z · LW(p) · GW(p)

Bs pbhefr, erirnyvat n unfu nsgre gur snpg cebirf abguvat, rira vs vg'f irevsvnoyl gvzrfgnzcrq. Nabgure cbffvoyr gevpx vf gb fraq n qvssrerag cerqvpgvba gb qvssrerag tebhcf bs crbcyr fb gung ng yrnfg bar tebhc jvyy frr lbhe cerqvpgvba pbzr gehr. V qba'g xabj bs na rnfl jnl nebhaq gung vs gur tebhcf qba'g pbzzhavpngr.

Replies from: David_Gerard
comment by David_Gerard · 2013-08-30T23:01:45.235Z · LW(p) · GW(p)

Guvf vf irel yvxr gur sbbgonyy cvpxf fpnz.

comment by Alicorn · 2013-08-25T16:49:47.720Z · LW(p) · GW(p)

V'z abg fher V jbhyq unir pnyyrq guvf n sbez bs pelcgbtencul jrer V hacevzrq, ohg jvgu bayl sbhe cbffvoyr nafjref ur whfg unf gb cvpx sbhe uvqvat cynprf naq gryy ure gb ybbx va gur evtug bar, evtug?

comment by MugaSofer · 2013-08-26T19:03:06.421Z · LW(p) · GW(p)

Gurer jrer abgrf sbe rnpu bs gur sbhe frnfbaf uvqqra va qvssrerag cynprf nebhaq gur ebbz. Gur pnyyre fvzcyl ersreerq ure gb gur uvqvat-cynpr bs gur abgr gung zngpurq ure nafjre.

Zl svefg gubhtug ba ernqvat gur ceboyrz - juvpu fgvyy frrzf yvxr zl org thrff, ba ersyrpgvba, gubhtu.

Qvqa'g ibgr ba gur "ubj fher ner lbh", orpnhfr V'z ab ybatre fher ubj fher V nz - V'z hasnzvyvne jvgu gur fubj, naq gur ersrerapr gb pelcgbtencul fhttrfgf fbzr bgure fbyhgvba (V'z snzvyvne jvgu ehqvzragnel zntvp gevpxf, juvpu vf cebonoyl jurer ZL fbyhgvba pbzrf sebz.) Ohg V pregnvayl qba'g unir "ab vqrn" ubj vg jnf qbar.

comment by NancyLebovitz · 2013-08-26T08:02:58.696Z · LW(p) · GW(p)

Posted before I read other replies:

V fhfcrpg gurer ner sbhe fyvcf bs cncre va qvssrerag cnegf bs ure ebbz. Naq vs ur pbhyq farnx gurz va, gura gurer'f n ernfbanoyr punapr ur pna farnx gur guerr fyvcf ersreevat gb aba-jvagre frnfbaf bhg orsber fur svaqf gurz.

comment by beoShaffer · 2013-08-26T06:25:34.208Z · LW(p) · GW(p)

Yvxr frireny bs gur bgure pbzzragref V dhvpxyl fnj ubj guvf pbhyq or qbar jvgu onfvp fgntr zntvp, ohg qrfcvgr orvat snveyl snzvyvne jvgu pelcgb V qvqa'g vzzrqvngryl znxr gur pbaarpgvba gb pelcgb hagvy V fnj lbhe pbzzrag ba unfu cer-pbzzvgzragf. Univat n fvatyr pnabavpny yvfg bs lbhe cer-pbzzvgzragf. choyvfurq va nqinapr jbhyq frrz gb fbyir cngpu guvf fcrpvsvp irarenovyvgl.

comment by Risto_Saarelma · 2013-08-26T05:18:05.200Z · LW(p) · GW(p)

V cnggrea zngpurq zl vqrn bs gur fbyhgvba gb gur onfvp fgntr zntvp gevpx bs univat znal uvqqra bcgvbaf naq znxvat gur znex guvax lbh bayl unq gur bar lbh fubjrq gurz, abg pelcgbtencul.

comment by gjm · 2013-08-31T20:49:56.120Z · LW(p) · GW(p)

I'm rather alarmed at how many people appear to have said they're very sure they know how he did it, on (I assume, but I think it's pretty clear) the basis of having thought of one very credible way he could have done it.

I'm going to be optimistic and suppose that all those people thought something like "Although gwern asked how sure we are that we know how it was done, context suggests that the puzzle is really 'find a way to do it' rather than 'identify the specific way used in this case', so I'll say 'very' even though for all I know there could be other ways."

(For what it's worth, I pedantically chose the "middle" option for that question, but I found the same obvious solution as everyone else.)

Replies from: gwern
comment by gwern · 2013-09-01T14:55:45.298Z · LW(p) · GW(p)

I'm going to be optimistic and suppose that all those people thought something like "Although gwern asked how sure we are that we know how it was done, context suggests that the puzzle is really 'find a way to do it' rather than 'identify the specific way used in this case', so I'll say 'very' even though for all I know there could be other ways'.

In the case of Umineko, there's not really any difference between 'find a way' and 'find the way', since it adheres to a relativistic Schrodinger's-cat-inspired epistemology where all that matters is successfully explaining the observed evidence. So I don't expect the infelicitous wording to make a difference.

Replies from: gjm
comment by gjm · 2013-09-01T16:24:00.356Z · LW(p) · GW(p)

In the case of Umineko, there's not really any difference [...]

Ah, OK. I wasn't aware of that bit of context. Thanks.

Replies from: gwern
comment by gwern · 2013-09-23T14:38:56.417Z · LW(p) · GW(p)

As it turns out, there's a second possible way using a detail I didn't bother to mention (because I assumed it was a red herring and not as satisfactory a solution anyway):

Angfhuv npghnyyl fnlf fur'f arire rire gbyq nalbar ure snibevgr frnfba rkprcg sbe gur srznyr freinag Funaaba lrnef ntb, naq guvaxf nobhg jurgure Funaaba pbhyq or pbafcvevat jvgu gur lbhat znyr pnyyre. Rkprcg Funaaba vf n ebyr cynlrq ol gur traqre-pbashfrq pebffqerffvat phycevg Lnfh (nybat jvgu gur ebyrf bs Xnaba & Orngevpr), fb gur thrff pbhyq unir orra onfrq ba abguvat ohg ure zrzbel bs orvat gbyq gung.

Crefbanyyl, rira vs V jnf va fhpu n cbfvgvba, V jbhyq fgvyy cersre hfvat gur pneq gevpx: jul pbhyqa'g Angfhuv unir punatrq ure zvaq bire gur lrnef? Be abg orra frevbhf va gur svefg cynpr? Be Funaaba unir zvferzrzorerq? rgp

comment by Kindly · 2013-08-25T22:30:15.660Z · LW(p) · GW(p)

Mentally subtract my vote from "No idea" onto "Very" since apparently I can read poll answers better than poll questions.

comment by garethrees · 2014-07-05T22:07:54.857Z · LW(p) · GW(p)

Creuncf gur fyvc bs cncre ybbxrq fbzrguvat yvxr guvf. (Qrfvtavat na nzovtenz jbhyq or nanybtbhf gb svaqvat zhygvcyr zrffntrf jvgu gur fnzr unfu.)

Replies from: gwern
comment by gwern · 2014-07-05T22:13:54.105Z · LW(p) · GW(p)

Creuncf gur fyvc bs cncre ybbxrq fbzrguvat yvxr guvf.

Gung'q arire jbex sbe n frpbaq ba n uhzna. V qba'g guvax V'ir frra nal nzovtenzf juvpu ner fb fzbbgu gung lbh pbhyq frr rvgure bar onfrq ba n cevzr jvgubhg abgvat gung gur jevgvat vf irel bqq. V pna'g rira ernq nal bs gung nzovtenz rkprcg sbe 'fcevat', fgenvavat uneq.

Replies from: garethrees
comment by garethrees · 2014-07-05T22:45:42.911Z · LW(p) · GW(p)

Gung cnegvphyne nzovtenz, fher. (Vg'f nyfb qvsvphyg gb svaq zhygvcyr zrffntrf jvgu gur fnzr unfu.) Ohg Qreera Oebja hfrq guvf nzovtenz va uvf 2007 frevrf "Gevpx be Gerng" jvgu ng yrnfg gur nccrnenapr bs fhpprff (gubhtu nf nyjnlf jvgu Oebja, vg'f cbffvoyr ur jnf sbbyvat hf engure guna gur cnegvpvcnag).

comment by gwern · 2013-09-23T01:54:19.538Z · LW(p) · GW(p)

Thanks for all the poll submissions. I decided since I just finished Umineko, this is a good time to analyze the 49 responses.

The gist is that the direction seems to be as predicted and the effect size reasonable (odds-ratio of 1.77), but not big enough to yield any impressive level of statistical-significance (p=0.24):

R> poll <- read.csv("http://dl.dropboxusercontent.com/u/182368464/umineko-poll.csv")
R> library(ordinal)
R> summary(clm(as.ordered(Certainty) ~ Crypto, data=poll))
formula: as.ordered(Certainty) ~ Crypto
data:    poll

 link  threshold nobs logLik AIC   niter max.grad cond.H
 logit flexible  48   -30.58 67.16 5(0)  5.28e-09 2.9e+01

Coefficients:
       Estimate Std. Error z value Pr(>|z|)
Crypto    0.571      0.491    1.16     0.24

Threshold coefficients:
    Estimate Std. Error z value
0|1    1.988      0.708    2.81
1|2    3.075      0.822    3.74
(1 observation deleted due to missingness)
R> exp(0.571)
[1] 1.77

Or if you prefer, a linear regression:

R> summary(lm(Certainty ~ Crypto, data=poll))

Call:
lm(formula = Certainty ~ Crypto, data = poll)

Residuals:
   Min     1Q Median     3Q    Max
-0.409 -0.287 -0.287 -0.164  1.836

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)    0.164      0.151    1.09     0.28
Crypto         0.122      0.117    1.05     0.30
comment by David_Gerard · 2013-08-30T22:53:37.323Z · LW(p) · GW(p)

Zhygvcyr ovgf bs cncre, boivbhfyl.

comment by tut · 2013-08-27T09:20:49.604Z · LW(p) · GW(p)

DUH

Downvoted because you made a poll in the open thread, thus making the RSS feed impossible to subscribe to, and producing a whole thread full of encrypted nonsense.

Replies from: gwern
comment by gwern · 2013-08-27T15:10:29.925Z · LW(p) · GW(p)

DUH

The answer to the question I am asking (whether perceived difficulty interacts with cryptography knowledge) is not 'duh', and is difficult-to-impossible to answer without a poll. If you think the answer is duh, you are not understanding the point of the poll and you are underrating the possible inferential distance & curses of knowledge at play in trying to guess the answer.

comment by [deleted] · 2013-08-25T10:41:23.466Z · LW(p) · GW(p)

I have never consciously noticed a dust speck going into my eye; at least, I don't remember it. This means it didn't make a big enough impression on my mind to leave a lasting memory. When I first read the post about dust specks and torture, I had to think hard about wtf the speck going into your eye even means.

Does this mean that I should attribute zero negative utility to dust speck going into my eye?

Replies from: gwern, Locaha
comment by gwern · 2013-08-25T14:42:56.525Z · LW(p) · GW(p)

Does this mean that I should attribute zero negative utility to dust speck going into my eye?

You could consider the analogous problem of waking up during surgery & then forgetting it afterwards.

comment by Locaha · 2013-08-25T10:58:32.972Z · LW(p) · GW(p)

The dust speck is just a symbol for the smallest negative utility unit. Just imagine something else.

Replies from: None
comment by [deleted] · 2013-08-25T12:34:20.063Z · LW(p) · GW(p)

Oh, I was already aware of that (and this is not just hindsight bias, I remember reading about this today and someone suggested replacing the speck with the smallest actual negative utility unit). This isn't really about the original question anyway. I was just thinking if something that doesn't even register on a conscious level could have negative utility.

Replies from: Locaha
comment by Locaha · 2013-08-26T06:58:14.442Z · LW(p) · GW(p)

I was just thinking if something that doesn't even register on a conscious level could have negative utility.

I guess anything with a negative cumulative effect.

Imagine the dust specks piling in your eye until they start to interfere with your vision.

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-08-29T10:38:14.690Z · LW(p) · GW(p)

Well, yes, but it's one dust speck per person...

And it's entirely possible that the utility of dust specks isn't additive. In fact, it's trivially so: one dust speck is fine, but a few trillion will do gruesome things to your head.

Replies from: Locaha
comment by Locaha · 2013-08-29T11:24:47.604Z · LW(p) · GW(p)

I'm now thinking of developing a Dust Speck Machine Gun. Or Shotgun, possibly.

Well, yes, but it's one dust speck per person...

Well, I don't see how anything that never registers on any level can have any utility.

But... I dunno. Something that lowers your IQ by 1 point may be something you will never discover, and yet it will cause you negative utility...

comment by NancyLebovitz · 2013-08-24T10:12:58.639Z · LW(p) · GW(p)

What if this were a video game? A way of becoming more strategic.

comment by metastable · 2013-08-21T00:18:21.064Z · LW(p) · GW(p)

Do consequentialists generally hold as axiomatic that there must be a morally preferable choice (or conceivably multiple equally preferable choices) in a given situation? If so, could somebody point me to a deeper discussion of this axiom (it probably has a name, which I don't know.)

Replies from: somervta
comment by somervta · 2013-08-21T01:34:11.601Z · LW(p) · GW(p)

Not explicitly as an axiom AFAIK, but if you're valuing states-of-the-world, any choice you make will lead to some state, which means that unless your valuation is circular, the answer is yes.

Basically, as long as your valuation is VNM-rational, definitely yes. Utilitarians are a special case of this, and I think most consequentialists would adhere to that also.

Replies from: asr, metastable
comment by asr · 2013-08-21T05:08:50.177Z · LW(p) · GW(p)

What happens if my valuation is noncircular, but is incomplete? What if I only have a partial order over states of the world? Suppose I say "I prefer state X to Z, and don't express a preference between X and Y, or between Y and Z." I am not saying that X and Y are equivalent; I am merely refusing to judge.

My impression is that real human preference routinely looks like this; there are lots of cases people refuse to evaluate or don't evaluate consistently.

It seems like even with partial preferences, one can be consequentialist -- if you don't have clear preferences between outcomes, you have a choice that isn't morally relevant. Or is there a self-contradiction lurking?

Replies from: pengvado, somervta
comment by pengvado · 2013-08-21T17:37:45.561Z · LW(p) · GW(p)

Suppose I say "I prefer state X to Z, and don't express a preference between X and Y, or between Y and Z." I am not saying that X and Y are equivalent; I am merely refusing to judge.

If the result of that partial preference is that you start with Z and then decline the sequence of trades Z->Y->X, then you got dutch booked.

Otoh, maybe you want to accept the sequence Z->Y->X if you expect both trades to be offered, but decline each in isolation? But then your decision procedure is dynamically inconsistent: Standing at Z and expecting both trade offers, you have to precommit to using a different algorithm to evaluate the Y->X trade than you will want to use once you have Y.
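
A minimal R sketch of that trap, with hypothetical values (X and Z get numbers; Y is deliberately left without one, standing in for "no preference expressed"): an agent who evaluates each trade in isolation under this partial preference never moves, even though it strictly prefers X to where it ends up.

# Hypothetical partial preference: X is preferred to Z, but Y is incomparable to both.
value <- c(X = 2, Z = 1)               # Y deliberately has no assigned value
prefers <- function(a, b) {
  va <- value[a]; vb <- value[b]
  !is.na(va) && !is.na(vb) && va > vb  # only strict, comparable preferences count
}

holding <- "Z"
offers <- list(c(from = "Z", to = "Y"),  # first trade offered: Z -> Y
               c(from = "Y", to = "X"))  # second trade offered: Y -> X
for (o in offers) {
  # accept a trade only if it starts from what we hold and is strictly preferred
  if (holding == o[["from"]] && prefers(o[["to"]], holding)) holding <- o[["to"]]
}
holding                  # still "Z": each trade was declined in isolation
prefers("X", holding)    # TRUE: the agent strictly prefers X to its final holding

Accepting both trades as a package would require precommitting to evaluate the Y->X offer differently from how the agent would evaluate it once it actually holds Y, which is the dynamic inconsistency described above.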

Replies from: asr
comment by asr · 2013-08-21T19:46:18.036Z · LW(p) · GW(p)

I think I see the point about dynamic inconsistency. It might be that "I got to state Y from Z" will alter my decisionmaking about Y versus X.

I suppose it means that my decision of what to do in state Y no longer depends purely on consequences, but also on history, at which point they revoke my consequentialist party membership.

But why is that so terrible? It's a little weird, but I'm not sure it's actually inconsistent or violates any of my moral beliefs. I have all sorts of moral beliefs about ownership and rights that are history-dependent so it's not like history-dependence is a new strange thing.

comment by somervta · 2013-08-21T14:56:20.689Z · LW(p) · GW(p)

You could have undefined value, but it's not particularly intuitive, and I don't think anyone actually advocates it as a component of a consequentialist theory.

Whether, in real life, people actually do it is a different story. I mean, it's quite likely that humans violate the VNM model of rationality, but that could just be because we're not rational.

comment by metastable · 2013-08-21T03:17:32.364Z · LW(p) · GW(p)

Thanks! Do consequentialists kind of port the first axiom (completeness) from the VNM utility theorem, changing it from decision theory to meta-ethics?

And for others, to put my original question another way: before we start comparing utilons or utility functions, insofar as consequentialists begin with moral intuitions and reason their way to the existence of utility, is one of their starting intuitions that all moral questions have correct answers? Or am I just making this up? And has anybody written about this?

To put that in one popular context: in the Trolley Switch and Fat Man problem, it seems like most people start with the assumption that there exists a right answer (or preferable, or best, whatever your terminology), and that it could never be the case that an agent will do the wrong/immoral/unethical thing no matter what he or she chooses. Am I right that this assumption exists?

Replies from: asr, somervta
comment by asr · 2013-08-21T05:02:03.805Z · LW(p) · GW(p)

it could never be the case that an agent will do the wrong/immoral/unethical thing no matter what he or she chooses.

Most people do have this belief. I think it's a safe one, though. It follows from a substantive belief most people have, which is that agents are only morally responsible for things that are under their control.

In the context of a trolley problem, it's stipulated that the person is being confronted with a choice -- in the context of the problem, they have to choose. And so it would be blaming them for something beyond their control to say "no matter what you do, you are blameworthy."

One way to fight the hypothetical of the trolley problem is to say "people are rarely confronted with this sort of moral dilemma involuntarily, and it's evil to put yourself in a position of choosing between evils." I suppose for consistency, if you say this, you should avoid jury service, voting, or political office.

comment by somervta · 2013-08-21T04:52:19.046Z · LW(p) · GW(p)

Thanks! Do consequentialists kind of port the first axiom (completeness) from the VNM utility theorem, changing it from decision theory to meta-ethics?

Not explicitly (except in the case of some utilitarians), but I don't think many would deny it. The boundaries between meta-ethics and normative ethics are vaguer than you'd think, but consequentialism is already sort of metaethical. The VNM theorem isn't explicitly discussed that often (many ethicists won't have heard of it), but the axioms are fairly intuitive anyway. However, although I don't know enough about weird forms of consequentialism to know if anyone's made a point of denying completeness, I wouldn't be that surprised if that position exists.

To put that in one popular context: in the Trolley Switch and Fat Man problem, it seems like most people start with the assumption that there exists a right answer (or preferable, or best, whatever your terminology), and that it could never be the case that an agent will do the wrong/immoral/unethical thing no matter what he or she chooses. Am I right that this assumption exists?

Yes, I think it certainly exists. I'm not sure if it's universal or not, but I haven't read a great deal on the subject yet, so I'm not sure if I would know.

comment by Skeptityke · 2013-08-31T17:20:15.746Z · LW(p) · GW(p)

Um... In the HPMOR notes section, this little thing got mentioned.

"I am auctioning off A Day Of My Time, to do with as the buyer pleases – this could include delivering a talk at your company, advising on your fiction novel in progress, applying advanced rationality skillz to a problem which is tying your brain in knots, or confiding the secret answer to the hard problem of conscious experience (it’s not as exciting as it sounds). I retain the right to refuse bids which would violate my ethics or aesthetics. Disposition of funds as above."

That sounds like really exciting news to me, TBH. Someone seriously needs to bid. There are less than 7 hours left and nobody has taken him up on the offer.

Replies from: ArisKatsaris, CAE_Jones, Mitchell_Porter
comment by ArisKatsaris · 2013-09-01T13:07:24.796Z · LW(p) · GW(p)

That sounds like really exciting news to me

Well, keep in mind that Eliezer himself claims that "it's not as exciting as it sounds".

And of course you always need to have in mind that what Eliezer considers to be "the secret answer to the hard problem of conscious experience" may not be as satisfying an answer to you as it is to him.

After all, some people think that the non-secret answer to the hard problem of conscious experience is something like "consciousness is what an algorithm feels like from the inside", and this is quite unsatisfying to me (and I think it was unsatisfying to Eliezer too).

(And also, I think the bidding started at something like $4000.)

comment by CAE_Jones · 2013-09-01T02:58:51.362Z · LW(p) · GW(p)

I got excited for the fraction of a second it took me to remember that everyone who could possibly want to bid could probably afford to spend more money than I have to my name on this without it cutting into their living expenses. Unless my plan was "Bid $900, hope no one outbids me, ask Eliezer to get me a job as quickly as possible", which isn't really that exciting a plan, however useful.

comment by Mitchell_Porter · 2013-09-01T02:17:06.947Z · LW(p) · GW(p)

I might have bid on that, but the auction is already over.

comment by sakranut · 2013-08-29T05:35:37.031Z · LW(p) · GW(p)

I enjoyed this non-technical piece about the life of Kolmogorov - responsible for a commonly used measure of complexity, as well as several now-conventional conceptions of probability. I wanted to share: http://nautil.us/issue/4/the-unlikely/the-man-who-invented-modern-probability

comment by Flipnash · 2013-08-19T21:26:53.781Z · LW(p) · GW(p)

What is a reliable way of identifying arbitrary solved or unsolved problems?

Replies from: Discredited, Alsadius
comment by Discredited · 2013-08-21T14:52:57.847Z · LW(p) · GW(p)

The existence of an industry indicates a common problem that humans can make some progress toward solving. http://en.wikipedia.org/wiki/Standard_Industrial_Classification

A manual or a textbook for a field that is more applied than descriptive is full of procedural knowledge for solving the problems of that domain. You can find very good books explaining how to draw portraits, but for some reason people don't openly say portrait drawing is solved. Maybe in applied fields we just work to solve bigger and harder problems, like figuring out how to forecast the weather ever more accurately, and once the problems are mostly and reliably solved the fields just quietly disappear. Like we don't have lamplighters anymore, because light bulbs mostly and reliably solve the problem that lamplighters were specialized to deal with. Or it's unusual for a university education to build up to theology these days, when theology used to be the main reason for universities existing.

comment by Alsadius · 2013-08-20T00:02:00.647Z · LW(p) · GW(p)

Arbitrary, as in ones you pick yourself? Well, pick a problem, then Google it.

Do you mean random?

Replies from: Flipnash
comment by Flipnash · 2013-08-20T00:17:04.815Z · LW(p) · GW(p)

I do mean random. The only way I've come up with that can reliably identify a problem would be to pick a random household item, then think of what problem it is supposed to solve, thereby identifying a problem. But that doesn't work for unsolved problems...

Replies from: Pentashagon, Manfred
comment by Pentashagon · 2013-08-20T00:44:03.970Z · LW(p) · GW(p)

I think you have to start by imagining better possible states of the world, and then see if anyone has thought of a practical way to get from the current state to the better possible state; if not, it's an unsolved problem.

In household terms, start by imagining the household in a "random" better state (cleaner, more efficient, more interesting, more comfortable, etc.) and once you have a clear idea of something better, search for ways to achieve the better state. In concrete terms, always having clean dishes and delicious prepared food would be much better than dirty dishes and no food. Dishwashers help with the former, but are manual and annoying. Microwaves and frozen food help with the latter, but I like fresh food. Paying a cook is expensive. Learning to cook and then cooking costs time. What is cheap, practical, and yields good results? Unsolved problem, unless you want to eat Soylent.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-08-20T19:13:28.367Z · LW(p) · GW(p)

What is cheap, practical, and yields good results?

Skilled slaves? Perhaps 'ethical' should be added to your list of constraints. :)

Replies from: Lumifer
comment by Lumifer · 2013-08-20T19:35:37.324Z · LW(p) · GW(p)

(cheap, practical, and yields good results) = (skilled slaves) ??

We must live in radically different environments X-D

comment by Manfred · 2013-08-20T09:46:03.046Z · LW(p) · GW(p)

You could pick words from the dictionary at random until they either describe a problem or are nonsensical - if nonsense, try again. Warning: may take a few million tries to work.

comment by blacktrance · 2013-08-23T04:38:44.744Z · LW(p) · GW(p)

I find the idea of commitment devices strongly aversive. If I change my mind about doing something in the future, I want to be able to do whatever I choose to do, and don't want my past self to create negative repercussions for me if I change my mind.