Posts

The Need for Human Friendliness 2013-03-07T04:31:37.224Z
Exploring the Idea Space Efficiently 2012-04-08T04:28:46.855Z

Comments

Comment by Elithrion on Utilitarianism twice fails · 2014-11-20T19:23:05.191Z · LW · GW

I was supposed to check on this a long time ago but forgot/went inactive on LW. The post actually ended up at -26, seemingly slightly lower than it was, which is evidence against your regression-to-0 theory.

Comment by Elithrion on Boring Advice Repository · 2013-12-05T05:13:07.631Z · LW · GW

I agree with tut that increasing speed might help. Sometimes if I listen at default speed, I find my attention drifting off mid-sentence just because it's going so slowly. (Conversely, at higher speed, when my attention does drift off briefly, I sometimes miss a full sentence or two and have to rewind slightly.)

If that doesn't work, I don't really have many other ideas. Maybe you could try other repetitive mechanical actions to see if they coexist well with audiobooks. For example, maybe cooking, drawing, or exercising might work (if you do any of those). In general, I find it easy to not miss anything in an audiobook so long as I'm simultaneously doing something that does not also involve words.

Comment by Elithrion on Open Thread, June 16-30, 2013 · 2013-06-19T20:04:33.535Z · LW · GW

[I made a request for job finding suggestions. I didn't really want to leave details lying around indefinitely, to be honest, so, after a week, I edited it to this.]

Comment by Elithrion on Maximizing Your Donations via a Job · 2013-06-19T19:41:03.006Z · LW · GW

Incidentally, if your discount rate is really this high (you mention 22% annual at one point), you should be borrowing as much as you can from banks (including potentially running up credit cards if you have to - many of those seem to be 20% annual) and just using your income to pay down your debt.

I'd say just use your cost of borrowing (probably 7% or so?) for the purposes of discounting your salary and things, and then decide whether you should borrow to donate or not based on whether that rate is less than the expected rate of return for charities. (This is assuming that you can get access to adequate funds at this rate - I'm not entirely sure, but it seems plausible.)
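To make the comparison concrete, here's a minimal sketch (the salary figure and the horizon are placeholder assumptions, not numbers from the post):

```python
# Present value of an annual payment stream at a given discount rate.
def present_value(annual_payment, years, rate):
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

salary = 50_000  # hypothetical donatable income per year
print(round(present_value(salary, 10, 0.22)))  # ~196000 at the post's 22% rate
print(round(present_value(salary, 10, 0.07)))  # ~351000 at a ~7% borrowing rate
```

The gap between the two totals is exactly why the choice of discount rate matters so much for the borrow-to-donate decision.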

Comment by Elithrion on Open Thread, June 2-15, 2013 · 2013-06-06T22:55:42.451Z · LW · GW

I am really disappointed in you, gwern. Why would you use an English auction when you can use an incentive-compatible one (a second price auction, for example)? You're making it needlessly harder for bidders to come up with valuations!

(But I guess maybe if you're just trying to drive up the price, this may be a good choice. Sneaky.)
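For anyone unfamiliar, a minimal sketch of a sealed-bid second-price auction (the names and bids are made up):

```python
# Second-price (Vickrey) auction: highest bidder wins but pays the
# second-highest bid. Bidding your true value is a weakly dominant
# strategy, which is what makes valuations easy to come up with.
def second_price_auction(bids):
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]  # second-highest bid sets the price
    return winner, price

print(second_price_auction({"alice": 30, "bob": 25, "carol": 10}))
# ('alice', 25)
```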

Comment by Elithrion on Drowning In An Information Ocean · 2013-04-07T17:54:54.922Z · LW · GW

Hm, that's true, I have heard that. Although in that particular case, it's actually unknown whether the shape is constructible or not, and I was trying to prove (in)constructibility rather than construct.

Comment by Elithrion on A Rational Altruist Punch in The Stomach · 2013-04-01T17:25:23.253Z · LW · GW

This is more like a conservative investment in various things by the managing funds for 200 years, followed by a reckless investment in the cities of Philadelphia and Boston at the end of that period. It probably didn't do particularly more for the people 200 years later than it did for people in the interim.

Also, the most recent comment by cournot is interesting on the topic:

> You may also be using the wrong deflators. If you use standard CPI or other price indices, it does seem to be a lot of money. But if you think about it in terms of relative wealth you get a different figure [and standard price adjustments aren't great for looking far back in the past]. I think a pound was about 5 dollars. So if we assume that 1000 pounds = 5000 nominal dollars and we use the Econ History's price deflators http://www.measuringworth.com/uscompare/ we find that this comes to over $2M if we use the unskilled wage and about $5M if we use nominal GDP. As a relative share of GDP, this figure would have been an enormous $380M or so. The latter is not an irrelevant calculation.
>
> Given how wealthy someone had to be (relative to the poor in the 18th century) to fork over a thousand pounds in Franklin's time, he might have done more good with it then than you could do with 2 to 5 million bucks today.
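Treating the quoted figures as given, the arithmetic amounts to a set of multipliers on the nominal sum; a quick sketch of how far apart the deflators land (numbers taken from the quote, not independently checked):

```python
# £1000 at ~$5/pound, scaled by the multiplier implied by each deflator.
nominal_usd = 1000 * 5
deflators = {
    "unskilled wage": 400,           # -> ~$2M
    "nominal GDP per capita": 1000,  # -> ~$5M
    "share of GDP": 76_000,          # -> ~$380M
}
for name, mult in deflators.items():
    print(f"{name}: ~${nominal_usd * mult:,}")
```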

Comment by Elithrion on Buridan's ass and the psychological origins of objective probability · 2013-04-01T03:54:01.658Z · LW · GW

> Interestingly, that trick does get the ass to walk to at least one bale in finite time, but it's still possible to get it to do silly things, like walk right up to one bale of hay, then ignore it and eat the other.

Okay, sure, but that seems like the problem is "solved" (i.e. the donkey ends up eating hay instead of starving).

Comment by Elithrion on Buridan's ass and the psychological origins of objective probability · 2013-04-01T03:20:31.892Z · LW · GW

Does that really work for all (continuous? differentiable?) functions? For example, if his preference for the bigger/closer one is linear with size/closeness, but his preference for the left one increases quadratically with time, I'm not sure there's a stable solution where he doesn't move. I feel like if there's a strong time factor, either a) the ass will start walking right away and get to the size-preferred hay, or b) he'll start walking once enough time has passed and get to the time-preferred hay. I could write down an equation for precision if I figure out what it's supposed to be in terms of, exactly...
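A rough sketch of the scenario I mean, with made-up coefficients: the right bale starts out ahead by a fixed size margin, and the left bale's quadratically growing pull overtakes it at a definite time, so "stand still forever" is never stable.

```python
import math

size_margin = 2.0  # right bale's head start (linear in size/closeness)
time_coeff = 0.5   # growth rate of the left bale's time preference

# Left overtakes right when time_coeff * t**2 > size_margin,
# i.e. at t* = sqrt(size_margin / time_coeff).
t_star = math.sqrt(size_margin / time_coeff)
print(f"left bale wins from t = {t_star} onward")  # t = 2.0
```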

Comment by Elithrion on A Rational Altruist Punch in The Stomach · 2013-04-01T01:44:54.028Z · LW · GW

I'm not sure what an investment in a particular far-future time would look like. Money does not, in fact, breed and multiply when left in a vault for long enough. It increases by being invested in things that give payoffs or otherwise rise in value. Even if you have a giant stockpile of cash and put it in a bank savings account, the bank will then take it and lend it out to people who will make use of it for whatever projects they're up to. If you do that, all you're doing is letting the bank (and the borrowers) choose the uses of your money for the first while, and then when you eventually take it out you take the choice back and make it yourself. The one way I can think of to actually invest in the distant future is to find or create some project that will have a massive payoff in the distant future but low payoffs before that, and I don't think anyone knows of a project that pays off further than 100 years in the future.

Maybe you could try to create a fund that explicitly looks for far-future payoff opportunities and invests in them, but I don't think one exists right now, and the idea is non-trivial.

I dunno, maybe there's something else I'm missing, though.

Comment by Elithrion on [SEQ RERUN] You're Calling *Who* A Cult Leader? · 2013-04-01T00:42:48.977Z · LW · GW

Alternatively, if it's done by someone whom you already know decently well, and who you know isn't really a crazy obsessive pedant, it can instead signal a preference for international or British English over American English.

Comment by Elithrion on [SEQ RERUN] You're Calling *Who* A Cult Leader? · 2013-04-01T00:36:45.519Z · LW · GW

That sounds like good policy, although there may be significant variation in what sounds awful to different people (specifically, "whom" is generally more popular outside the US). "Who" is probably the safer choice when in doubt, admittedly.

Comment by Elithrion on [SEQ RERUN] You're Calling *Who* A Cult Leader? · 2013-03-31T22:35:32.610Z · LW · GW

Nope, in fact that one should also be "Whom are you calling a cult leader?" "Who" is the subject form, i.e. it's supposed to be used when the "who" person is the one doing the action. In this case, though, the subject is "you", who is doing the action (calling someone something), and the object is the someone being called something ("whom").

Comment by Elithrion on Buridan's ass and the psychological origins of objective probability · 2013-03-31T20:53:48.718Z · LW · GW

Okay, thanks for the explanation. It does seem that you're right*, and I especially like the needle example.

*Well, assuming you're allowed to move the hay around to keep the donkey confused (to prevent algorithms where he tilts more and more left or whatever from working). Not sure that was part of the original problem, but it's a good steelman.

Comment by Elithrion on Existential risks open thread · 2013-03-31T17:32:10.171Z · LW · GW

> This article from the Christian Science Monitor suggests that if the Chinese government decided to stop helping North Korea, that might cause the country to "implode", which feels like a good thing from an x-risk reduction standpoint.

I think the civil war that would result combined with extreme proximity between Chinese and US troops (the latter supporting South Korea and trying to contain nuclear weapons) is probably an abysmal thing from an x-risk reduction standpoint.

Comment by Elithrion on [SEQ RERUN] You're Calling *Who* A Cult Leader? · 2013-03-31T17:20:14.859Z · LW · GW

Is using "whom" uncool or something? Maybe I'm just elitist (in a bad way) for liking it.

Comment by Elithrion on Buridan's ass and the psychological origins of objective probability · 2013-03-31T04:41:38.626Z · LW · GW

Thanks (and I actually read the other new comments on the post before responding this time!). I still have two objections.

The first one (which is probably just a failure of my imagination and is in some way incorrect) is that I still don't see how some simple algorithms would fail. For example, the ass stares at the bales for 15 seconds, then it moves towards whichever one it estimates is larger (ignoring variance in estimates). If it turns out that they are exactly equal, it instead picks one at random. For simplicity, let's say it takes the first letter of the word under consideration (h), plugs the corresponding number (8) as a seed into a pseudorandom integer generator, and then picks option 1 if the result is even, option 2 if it's odd. It does seem like this might induce a discontinuity in decisions, but I don't see where it would fail (so I'd like someone to tell me =)).
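Spelled out as code, the algorithm I have in mind looks something like this (the 15-second stare and the letter-to-seed trick as described above; whether this actually escapes the continuity argument is exactly what I'm asking):

```python
import random

def choose_bale(est_left, est_right):
    # After staring for 15 seconds, go to whichever bale is estimated larger.
    if est_left > est_right:
        return "left"
    if est_right > est_left:
        return "right"
    # Exact tie: "hay" starts with h, the 8th letter, so seed a PRNG with 8
    # and pick option 1 on even, option 2 on odd.
    rng = random.Random(8)
    return "left" if rng.randint(0, 1_000_000) % 2 == 0 else "right"

print(choose_bale(10.0, 10.0))  # deterministic, but discontinuous at the tie
```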

The second objection is that our world is, in fact, not continuous (with the Planck length and whatnot). My very mediocre grasp of QM suggests to me that if you try to use continuity to break the ass's algorithm (and it's a sufficiently good algorithm), you'll just find the point where its decisions are dominated by quantum uncertainty and get it to make true random choices. Or something along those lines.

Comment by Elithrion on Buridan's ass and the psychological origins of objective probability · 2013-03-31T04:05:14.568Z · LW · GW

Sorry, I'm not sure I understand what you mean. What particle should we move to change the fact that the ass will eventually get hungry and choose to walk forward towards one of the piles at semi-random? It seems to me like you can move a particle to guarantee some arbitrarily small change, but you can't necessarily move one to guarantee the change you want (unless the particle in question happens to be in the brain of the ass).

Comment by Elithrion on Drowning In An Information Ocean · 2013-03-31T03:54:54.181Z · LW · GW

> don't get fixed in proving the constructibility of enormously large polygons

Is this common? 'Cause um, at one point I did try to prove (or disprove) the constructibility of a hendecagon (11 sides) with neusis, but I didn't realise this was a popular pursuit. This isn't really related to the post, but I was very surprised constructibility got a mention.

(I ran into equations lacking an easy solution - they were sufficiently long/hard that Maple refused to chug through them - and decided it wasn't worth the effort to keep trying.)

Comment by Elithrion on Buridan's ass and the psychological origins of objective probability · 2013-03-30T19:51:56.759Z · LW · GW

The problem with the Problem is that it simultaneously assumes a high cost of thinking (gradual starvation) and an agent that completely ignores the cost of thinking. An agent who does not ignore this cost would solve the Problem as Vaniver says.

Comment by Elithrion on [SEQ RERUN] The Pascal's Wager Fallacy Fallacy · 2013-03-30T01:34:14.533Z · LW · GW

That's fair. I guess adopting exponential discounting is also good enough to rule out Christianity. It doesn't settle the case of someone trying to live infinitely long, though - that would depend on how much believing in Christianity would hinder them in achieving it. (Same for other religions that don't promise sufficiently amazing bliss.)

Comment by Elithrion on [SEQ RERUN] The Pascal's Wager Fallacy Fallacy · 2013-03-30T01:02:49.795Z · LW · GW

Sure, but it doesn't matter how much probability mass atheism gets, because the religions are the only ones offering infinities*, and we're probably interested in best expected payoff, not highest probability. If religions have 1/10^50 residual probability mass and atheism has all the rest, you'd still probably have to choose one of them if at least one is offering immense payoffs and you haven't solved Pascal's Mugging.

*I guess one could argue that a Solomonoff prior assigns a zero probability to truly infinite things, but I'm not sure that's an argument I'd want to rely on (also I know Buddhism offers some merely vast numbers, although I'm not sure they're vast enough, and some other religions do too, I'd imagine).
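As a toy illustration of expected payoff swamping probability (payoff numbers entirely made up, and a genuinely infinite payoff only makes the asymmetry worse):

```python
p_religion = 1e-50    # residual probability mass for a payoff-offering religion
heaven_payoff = 1e80  # stand-in for "immense"
p_atheism = 1 - p_religion
mundane_payoff = 1e6

print(p_religion * heaven_payoff)   # 1e30
print(p_atheism * mundane_payoff)   # ~1e6 -- dwarfed despite near-certainty
```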

Comment by Elithrion on [SEQ RERUN] The Pascal's Wager Fallacy Fallacy · 2013-03-29T21:50:14.585Z · LW · GW

No, it's not (at least if we take the generous view and consider the Wager as an argument for belief in some type of deity, rather than the Christian one for which it was intended), because after considering all the hypotheses, you will still have to choose one (or more, I guess) of them, and it almost certainly won't be atheism. I also feel like you completely missed the point of my previous comment, but I'm not sure why, and am consequently at a loss as to how to clarify.

Comment by Elithrion on Is The Blood Thicker Near The Tropics? Trade-Offs Of Living In The Cold · 2013-03-29T20:36:13.279Z · LW · GW

I suppose I should have said "reasonably inhabited land".

Comment by Elithrion on [deleted post] 2013-03-29T17:27:53.580Z

I don't think it's a good idea to discuss this, not only because it may give people ideas, but also because there is only one possible side to the argument that can really be mentioned.

Comment by Elithrion on [SEQ RERUN] The Pascal's Wager Fallacy Fallacy · 2013-03-29T17:02:07.551Z · LW · GW

I am not. The problem with Pascal's Wager is sort of that it fails to consider other hypotheses, but not in the conventional sense that most arguments use. Invoking an atheist god, as is often done, really does not counterbalance the Christian god, because the existence of Christianity gives a few bits of evidence in favour of it being true, just like being mugged gives a few bits of evidence in favour of the mugger telling the truth. So, using conventional gods and heavens and hells like that won't result in them cancelling out, and you will end up having to believe in one of these gods. On the other hand, the actual problem is that you can keep invoking new gods with fancier and more amazing heavens and hells, so that what you really end up believing in is super-ultra-expanded-time-bliss-heaven, and then you do whatever you think is required to go there. Which is isomorphic to Pascal's (self-)Mugging.

(I should try practising explaining things in fewer words...)

Comment by Elithrion on Is The Blood Thicker Near The Tropics? Trade-Offs Of Living In The Cold · 2013-03-29T00:47:11.624Z · LW · GW

No, we would not. The Southern hemisphere is just generally warmer (at least on land).

Comment by Elithrion on Solved Problems Repository · 2013-03-28T20:12:12.510Z · LW · GW

I think it depends on the reading. If you read it in a sort of snooty dismissive voice, yes, certainly. But if you read it in a genuinely perplexed kind of voice, it mostly sounds confused.

Comment by Elithrion on Solved Problems Repository · 2013-03-28T19:57:13.239Z · LW · GW

I was looking up the Marines' fitness requirements at some point randomly, and for females the pull-up requirement is apparently replaced with a flexed-arm hang (wiki, about.com), so you could maybe try doing that.

Comment by Elithrion on Upgrading moral theories to include complex values · 2013-03-28T19:47:33.767Z · LW · GW

By the way, 3^^3 = 3^(3^3) = 3^27 is "only" 7625597484987, which is less than a quadrillion. If you want a really big number, you should add a third arrow (or use a higher number than three).
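A minimal sketch of Knuth's up-arrow notation, mostly to show how fast the third arrow kicks in (don't actually call hyper(3, 3, 3): 3^^^3 is a power tower of 7625597484987 threes):

```python
def hyper(a, n, b):
    # a ^(n arrows) b: one arrow is exponentiation,
    # each extra arrow iterates the previous operation.
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = hyper(a, n - 1, result)
    return result

print(hyper(3, 1, 3))  # 3^3 = 27
print(hyper(3, 2, 3))  # 3^^3 = 3^27 = 7625597484987
```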

Comment by Elithrion on [SEQ RERUN] The Pascal's Wager Fallacy Fallacy · 2013-03-28T19:27:39.012Z · LW · GW

I feel like this post is dated by the fact that it predates the Pascal's Mugging discussions, to the point of being fairly wrong. The problem with Pascal's Wager actually is that the payoffs are really high, so they overwhelm an unbounded utility function (and they don't precisely cancel out, since we do have a little evidence). On the other hand, I suppose the core point that you shouldn't dismiss things out of hand if they have a low (but not tiny) probability and a large payoff is sound.

Comment by Elithrion on Is The Blood Thicker Near The Tropics? Trade-Offs Of Living In The Cold · 2013-03-28T19:03:19.689Z · LW · GW

I'm really sceptical that this is as big a factor as some of the others, though I can see how it might be significant. I've also lived in cold places most of my life, so I'm not in a very good position to judge. I feel like the biggest factor will ultimately turn out to be "that's how history played out", though. Looking back, it's not clear that the hypothetical dominance of the North was really noticeable until maybe the 17th century (I'm not entirely confident on this, so correct me if I'm wrong), so I'd be more inclined to attribute it to cultural, geological, and geopolitical factors. As an example, Greece and then Rome led the world in many things for a while, and they're pretty warm. So did China, and many of its core areas are fairly warm as well.

Comment by Elithrion on Personal Evidence - Superstitions as Rational Beliefs · 2013-03-23T22:49:17.575Z · LW · GW

What if the house merely floated the thing over there by reaction (pushing back on the floors/walls), and its floor rotted slightly (accumulating entropy, losing chemical energy) in proportion to the necessary force? In that case, he's only discovered ghostly energy transfer at small distances, which may be completely impractical (only one or two Nobels).

Comment by Elithrion on LessWrong help desk - free paper downloads and more · 2013-03-23T16:21:04.391Z · LW · GW

This appears to be all that exists for 3 (page 2): http://jech.bmj.com/content/suppl/2003/09/23/57.9.DC1/Abstracts.pdf

It was so small that after finding it I kept looking for a good 15 minutes, but I'm pretty sure the abstract is all there is and the full article was never published (the first author doesn't list it on his personal page, and all the references seem to be to the abstract).

Comment by Elithrion on Reflection in Probabilistic Logic · 2013-03-23T03:35:12.210Z · LW · GW

This idea reminds me of some things in mechanism design, where it turns out you can actually prove a lot more for weak dominance than for strong dominance, even though one might naïvely suppose that the two should be equivalent when dealing with a continuum of possible actions/responses. (I'm moderately out of my depth with this post, but the comparison might be interesting for some.)

Comment by Elithrion on Personal Evidence - Superstitions as Rational Beliefs · 2013-03-23T02:18:29.512Z · LW · GW

I think there are non-anthropic problems with even rational!humans communicating evidence.

One is that it's difficult to communicate that you're not lying, and it is also difficult to communicate that you're competent at assessing evidence. A rational agent may have priors saying that OrphanWilde is an average LW member, including the associated wide distribution in propensity to lie and competence at judging evidence. On the other hand, rational!OrphanWilde would (hopefully) have a high confidence assessment of himself (herself?) along both dimensions. However, this assessment is difficult to communicate, since there are strong incentives to lie about these assessments (and also a lot of potential for someone to turn out to not be entirely rational and just get these assessments wrong). So, the rational agent may read this post and update to believing it's much more likely that OrphanWilde either lies to people for fun (just look at all those improbable details!) or is incompetent at assessing evidence and falls prey to apophenia a lot.

This might not be an issue were it not for the second problem, which is that communication is costly. If communication were free, OrphanWilde could just tell us every single little detail about his life (including in this house and in other houses), and we could then ignore the problem of him potentially being a poor judge of evidence. Alternatively, he could probably perform some very large volume evidence-assessment test to prove that he is, in fact, competent. However, since communication is costly, this seems to be impractical in reality. (The lying issue is slightly different, but could perhaps be overcome with some sort of strong precommitment or an assumption constraining possible motivations combined with a lot of evidence.)

This doesn't invalidate Aumann agreement as such, but certainly seems to limit its practical applications even for rational agents.

Comment by Elithrion on Personal Evidence - Superstitions as Rational Beliefs · 2013-03-23T00:41:29.489Z · LW · GW

I meant "someone close to him" in a relationship, not a spatial, sense (so, "other family member or friend he knows about"). Which I guess is still kind of just a different connotation, but I think one worth noticing separately from the "crazy lurker who's been around for a while" hypothesis.

Comment by Elithrion on Personal Evidence - Superstitions as Rational Beliefs · 2013-03-22T19:14:00.470Z · LW · GW

Either that, or maybe OrphanWilde, his sister, or someone else close to him really enjoys messing with everyone and making it seem that the house is haunted.

Comment by Elithrion on LessWrong help desk - free paper downloads and more · 2013-03-22T00:43:43.084Z · LW · GW

Here you go.

Comment by Elithrion on [LINK] Transcendence (2014) -- A movie about "technological singularity" · 2013-03-22T00:38:15.738Z · LW · GW

IMDb lists Wally Pfister as director. The shooting date is also from its Q&A section.

Comment by Elithrion on [LINK] Transcendence (2014) -- A movie about "technological singularity" · 2013-03-22T00:22:29.444Z · LW · GW

Shooting is apparently scheduled to start in April, so you probably don't have long to wait.

Comment by Elithrion on Another community about existential risk - Arctic news · 2013-03-21T18:15:41.769Z · LW · GW

Technically, LW isn't about x-risk. It's about "refining the art of human rationality", as you can see up there in the header.

I am also not sure that a blogspot blog that gets 0-6 comments per post is really worth calling "a community" or taking particular notice of. The other ones you mention seem to more closely resemble communities, but have even less to do with x-risk.

Comment by Elithrion on [LINK] Transcendence (2014) -- A movie about "technological singularity" · 2013-03-21T18:02:31.158Z · LW · GW

Apparently an early script summary leaked. Spoilers (rot13):

Nppbeqvat gb gur fhzznel, n tebhc bs nagv-grpuabybtl greebevfgf nffnffvangr Jvyy, Riryla hcybnqf uvf oenva vagb n cebgbglcr fhcrepbzchgre. Nygubhtu fur ng svefg svaqf gur rkcrevzrag frrzf gb unir tbar jebat, orsber gbb ybat Riryla svaqf Jvyy erfcbaqvat va pbzchgre sbez.

Fur tbrf ba gb pbaarpg Jvyy gb gur Vagrearg fb ur pna uryc znxr shegure fpvragvsvp oernxguebhtuf. Jvyy nfxf Riryla gb pbaarpg n zvpebcubar naq n pnzren hc gb gur pbzchgre fb ur pna frr naq fcrnx gb ure nf jryy.

Jvyy perngrf n onpxhc bs uvzfrys gb rirel pbzchgre va gur jbeyq, naq sheguref uvf jbex guebhtu npprffvat bayvar vaqrkrf. (Xbfbir gbyq GurJenc guvf cybg cbvag vf ab ybatre va gur fpevcg.) Jura gur nagv-grpuabybtl betnavmngvba svaqf bhg, gurl gel gb fgrny gur fhcrepbzchgre naq qrfgebl vg, ohg Jvyy ab ybatre arrqf gur pbzchgre gb fheivir.

Comment by Elithrion on Critiques of the heuristics and biases tradition · 2013-03-20T00:56:30.281Z · LW · GW

> You put your MP3 player on random. You have a playlist of 20 songs. What are the odds that the next song played is the same song which was just played?

I think the option is more typically called "shuffle", which, unlike "random", accurately represents what it does.

Comment by Elithrion on Caring about possible people in far Worlds · 2013-03-19T00:50:51.018Z · LW · GW

> I care about possible people. My child, if I ever have one, is one of them, and it seems monstrous not to care about one's children.

I think you may have found one of the quickest ways to tempt me into downvoting a post without reading further (it wasn't quite successful - I read all the way through before downvoting). Poor reasoning and stereotypical appeal to emotion are probably not the ideal opener.

Beyond that, you never made clear what the purpose of the following arguments is and gave them really confusing titles.

  • I'm not sure in what way argument 1 shows the multiverse to be "Sadistic", or what position I am supposed to have held for it to be relevant to me. I guess if I cared about all hypothetical people, you may have shown that there is some subset of them I can't affect?
  • I'm going to assume by "obtains" you mean "occurs". With that in mind, I still have trouble understanding how this is relevant to anything. I guess you take "X-Risk" as an example of "generally accepted bad thing", and that any bad thing would work? As far as I can tell, this line of reasoning doesn't actually lead to paralysis, since if you can't affect the non-actual worlds, you can obviously make all your decisions while disregarding them. On the other hand, if you think there is some way you can "break out" and affect all the other worlds, you may be motivated to attempt it at nearly any cost, but I don't see this as problematic (assuming you have "solved" Pascal's Mugging). Also, while I haven't read Bostrom's paper, I'm pretty sure infinity-related paralyses hardly ever occur if you just use surreal numbers (for example).
  • I basically don't understand the third argument. You mention worlds 1, 2, 3 without explaining what the relationship is between them. I'm also not sure what you mean by "the morally relevant thing" being "qualitative" (and possibly not sure what you mean by "quantifiable"). I am also not sure in what way the conclusion is different from argument 1 (there are things we can't affect that ideally we'd want to affect).

Short version: it's confusing and unclear/not relevant to my beliefs (which is fine, if it's relevant to the beliefs of someone on here at least)/really confusing/terrible opener.

Comment by Elithrion on Open thread, March 17-31, 2013 · 2013-03-18T22:06:20.952Z · LW · GW

> I don't think your second point really is one, seeing as a CEO can not be installed without being affiliated with the power holders.

Why not? Some CEOs (especially for smaller companies, I think) are found via specialised recruiting companies, which I'd say is pretty unaffiliated. And in any case, it's not clear to me how you think the affiliation would be increasing pay. Do you imagine potential CEO candidates hold an auction in which they offer kickbacks to major shareholders/powerholders from their pay or something? Because I haven't heard of that ever happening, and I'm having trouble imagining what more plausible scenario you have in mind. (Obviously there are cases where major shareholders also serve as CEOs/whatever, but if you're claiming that every person in such a position with high pay is a major power holder shares/board-wise, I'd like to see evidence for it, since I find that extremely unlikely.)

> Can you back up your first point?

If you mean about new executives receiving pay comparable to old ones, I dunno, it's hard. I think I'd have to search company-by-company and even then it would be hard to determine what's happening. For example, I looked up Barclays, which switched Bob Diamond for Antony Jenkins last year. Diamond had a base salary of £1.3mil. Jenkins has a base salary of £1.1mil. However, Diamond got a lot of non-salary money (much of which he gave up due to scandal), and it's not clear how Jenkins' compensation compares to that. Also, it's not clear how much the reduction (if there is any) is the result of public outrage (or ongoing economic difficulties).

If you mean about high salaries probably being appropriate, I can back that up on a theoretical level. If you assume a CEO has a high level of influence over a huge company, then it's straightforward that there is going to be intense competition for the best individuals. Even someone who can improve profits by 0.1% would be worth extra millions of dollars to a multi-billion dollar company.
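The arithmetic behind that last sentence, with made-up company numbers:

```python
# A hypothetical multi-billion dollar company: a 0.1% profit improvement
# is worth millions of dollars every year of a CEO's tenure.
revenue = 50e9                 # $50B in revenue
profit = revenue * 0.10        # 10% margin -> $5B profit
improvement = profit * 0.001   # a 0.1% improvement
print(f"${improvement:,.0f} per year")  # $5,000,000 per year
```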

Related things I found while looking around: "highly concentrated ownership in listed companies in New Zealand is a significant contributor to [a] poor pay-for-performance relation", "institutional ownership concentration is positively related to the pay-for-performance sensitivity of executive compensation and negatively related to the level of compensation, even after controlling for firm size, industry, investment opportunities, and performance", "results indicate that power is more concentrated than ownership". (Not that I read much more than abstracts.)

Comment by Elithrion on Open thread, March 17-31, 2013 · 2013-03-18T18:27:53.622Z · LW · GW

I suspect there's too much of a difference in how much LW members know about basketball to get particularly wide participation. For example, I had to look up "March Madness" to figure out what this is about.

Also, there's a significant chance that either people would just copy the odds from Pinnacle, or maybe even arbitrage against it (valuing karma or whatever at 1-2 cents). Or, well, I'd certainly be tempted to =]

Comment by Elithrion on Open thread, March 17-31, 2013 · 2013-03-18T18:21:30.342Z · LW · GW

I'm pretty sure that low salaries are a dysfunction of democracies rather than high salaries being a dysfunction of companies. In particular, it's not the case with every company that a couple of people hold enormous shares. And aside from that, even when there is clear evidence that "the majority" gets directly involved in CEO compensation, it doesn't seem that the salaries go down all that much.

Or looking at it differently, if the high salaries were the consequence of an undue concentration of power, we would expect that when one CEO leaves, and a different one who was not previously affiliated with the power holders is installed, the salary of the new one would be much much lower. However, I think this is rarely the case.

Comment by Elithrion on Open thread, March 17-31, 2013 · 2013-03-18T18:07:49.496Z · LW · GW

I'm also curious, and would like to add a poll: [pollid:420]

Comment by Elithrion on Open thread, March 17-31, 2013 · 2013-03-18T17:56:14.384Z · LW · GW

Regarding the note, in statistics you could call that a population parameter. While the parameters typically used are things like "mean" or "standard deviation", the definition is broad enough that "the centre of mass of a collection of atoms" plausibly fits the category.