Study: In giving charity, let not your right hand... 2014-08-22T22:23:34.215Z
Democracy and rationality 2013-10-30T12:07:24.727Z
Does quantum mechanics make simulations negligible? 2011-08-13T01:53:17.622Z
Overcoming bias in others 2011-08-12T15:38:32.282Z


Comment by homunq on LessWrong 2.0 · 2015-12-11T14:47:26.542Z · LW · GW

One way of dividing up the options is: fix the current platform, or find new platform(s). The natural decay process seems to be tilting towards the latter, but there are downsides: the diaspora loses cohesion, and while the new platforms obviously offer some things the current one doesn't, they are worse than the current one in various ways (it's really hard to be an occasional lurker on FB or tumblr, especially if you are more interested in the discussion than the "OP").

If the consensus is to fix the current platform, I suggest trying the simple fixes first. As far as I can tell, that means: break the discussion/main dichotomy, do something about "deletionist" downvoting, and make it clearer how to contribute to the codebase, with a clearer owner. I think that these things should be tried and given a chance to work before more radical stuff is attempted.

If the consensus is to find something new, I suggest that it should be something which has a corporation behind it. Something smallish but on the up-and-up, and willing to give enough "tagging" capability for the community to curate itself and maintain itself reasonably separate from the main body of users of the site. It should be something smaller than FB but something willing to take the requests of the community seriously. Reddit, Quora, StackExchange, Medium... this kind of thing, though I can see problems with each of those specific suggestions.

Comment by homunq on Taking Effective Altruism Seriously · 2015-07-14T00:52:05.873Z · LW · GW

I disagree. I think the issue is whether "pro-liberty" is the best descriptive term in this context. Does it point to the key difference between things it describes and things it doesn't? Does it avoid unnecessary and controversial leaps of abstraction? Are there no other terms which all discussants would recognize as valid, if not ideal? No, no, and no.

Comment by homunq on Taking Effective Altruism Seriously · 2015-07-12T23:08:06.629Z · LW · GW

Whether something is a defensible position, and whether it should be embedded in the very terms you use when more-neutral terms are available, are separate questions.

If you say "I'm pro-liberty", and somebody else says "no you're not, and I think we could have a better discussion if you used more specific terms", you don't get to say "why won't you accept me at face value".

Comment by homunq on Taking Effective Altruism Seriously · 2015-07-12T23:01:55.933Z · LW · GW

When you say "Nothing short of X can get you to Y", the strong implication is that it's a safe bet that X will at least not move you away from Y, and sometimes move you toward it. So OK, I'll rephrase:

The OP suggests that colonization is in fact a proven way to turn at least some poor countries into more productive ones.

Comment by homunq on Taking Effective Altruism Seriously · 2015-06-15T20:57:35.994Z · LW · GW

Note that my post just above was basically an off-the-cuff response to what I felt was a ludicrously wrong assumption buried in the OP. I'm not an expert on African history, and I could be wrong. I think that I gave the OP's idea about the level of refutation it deserved, but I should have qualified my statements more ("I'd guess..."), so I certainly didn't deserve 5 upvotes for this (5 points currently; I deserve 1-3 at most).

Comment by homunq on Taking Effective Altruism Seriously · 2015-06-06T15:50:17.224Z · LW · GW

I think that it's worth being more explicit in your critique here.

The OP suggests that colonization is in fact a proven way to turn poor countries into productive ones. But in fact, it does the opposite. Several parts of Africa were at or above average productivity before colonization¹, and well below after; and this pattern has happened at varied enough places and times to be considered a general rule. The examples of successful transitions from poor countries to rich ones—such as South Korea—do not involve colonization.

¹Note that I'm considering the triangular trade as a form of colonization; even if it didn't involve proconsuls, it involved an external actor explicitly fomenting a hierarchical and extractive social order.

Comment by homunq on Taking Effective Altruism Seriously · 2015-06-06T15:43:09.527Z · LW · GW

I think you can make this critique more pointed. That is: "pro-liberty" is flag-waving rhetoric which makes us all stupider.

I dislike the "politics is a mind-killer" idea if it means we can't talk about politically touchy subjects. But I entirely agree with it if it means that we should be careful to keep our language as concrete and precise as possible when we approach these subjects. I could write several paragraphs about all the ways that the term "pro-liberty" takes us in the wrong direction, but I expect that most of you can figure all that out for yourselves.

Comment by homunq on Announcing the Complice Less Wrong Study Hall · 2015-03-04T03:05:36.174Z · LW · GW

It appears that you need to be logged in from FB or twitter to be fully non-guest. That seems like a... strange... choice for an anti-akrasia tool.

(Tangentially related to above, not really a reply)

Comment by homunq on CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype · 2015-02-24T18:22:53.325Z · LW · GW

Fair enough. Thanks. Again, I agree with some of your points. I like blemish-picking as long as it doesn't require open-ended back-and-forth.

Comment by homunq on CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype · 2015-02-24T17:53:47.622Z · LW · GW

You're raising some valid questions, but I can't respond to all of them. Or rather, I could respond (granting some of your arguments, refining some, and disputing some), but I don't know if it's worth it. Do you have an underlying point to make, or are you just looking for quibbles? If it's the latter, I still thank you for responding (it's always gratifying to see people care about issues that I think are important, even if they disagree); but I think I'll disengage, because I expect that whatever response I give would have its own blemishes for you to find.

In other words: OK, so what?

Comment by homunq on CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype · 2015-02-24T17:10:32.685Z · LW · GW

Full direct democracy is a bad idea because it's incredibly inefficient (and thus also boring/annoying, and also subject to manipulation by people willing to exploit others' boredom/annoyance). This has little or nothing to do with whether people's preferences correlate with their utilities, which is the question I was focused on. In essence, this isn't a true Goldilocks situation ("you want just the right amount of heat") but rather a simple tradeoff ("you want good decisions, but don't want to spend all your time making them").

As to the other related concepts... I think this is getting a bit off-topic. The question is: is energy (money) spent on pursuing better voting systems more of a valid "saving throw" than the same energy spent on pursuing better individual rationality? That's connected to the question of the preference/utility correlation of current-day, imperfectly-rational voters. I'm not seeing the connection to rule of law &c.

Comment by homunq on CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype · 2015-02-24T16:48:34.215Z · LW · GW

(small note: the sentence you quote from me was unclear. "because" related to "presume", not "saying". But your response to what I accidentally said is still largely cogent in relation to what I meant to say, so the miscommunication isn't important. Still, I've corrected the original. Future readers: Lumifer quoted me correctly.)

Comment by homunq on CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype · 2015-02-24T16:44:51.362Z · LW · GW

The model is not easy to subject to full, end-to-end testing. It seems reasonable to test it one part at a time. I'm doing the best I can to do so:

  • I've run an experiment on Amazon Mechanical Turk involving hundreds of experimental subjects voting in dozens of simulated elections to probe my strategy model.

  • I'm working on getting survey data and developing statistical tools to refine my statistical model (mostly, posterior predictive checks; but it's not easy, given that this is a deeper hierarchical model than most).

  • In terms of the utilitarian assumptions of my model, I'm not sure how those are testable rather than just philosophical / assumed axioms. Not that I regard these assumptions as truly axiomatic, but that I think they're pretty necessary to get anywhere at all, and in practice unlikely to be violated severely enough to invalidate the work.

  • I haven't started work on testing / refining my media model (other than some head-scratching), but I can imagine how to do at least a few spot checks with posterior predictive checks too.

  • The assumptions that preference and utility correlate positively, even in an environment where candidates are strategic about exploiting voter irrationality, are certainly questionable. But insofar as these are violated, it would just make democracy a bad idea in general, not invalidate the fact that plurality is still a worse idea than other voting systems such as approval. Also, I think it would be basically impossible to test these assumptions without implausibly accurate and unbiased measurements of true utility. Finally, call me a hopeless optimist, but I do actually have faith that democracy is a good idea because "you can't fool all the people all the time".

tl;dr: I'm working on this.

Comment by homunq on CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype · 2015-02-24T12:09:27.833Z · LW · GW

I presume you're saying that utility-based simulations are not credible. I don't think you're actually trying to say that they're not numerical estimates. So let me explain what I'm talking about, then say what parts I'm claiming are "credible".

I'm talking about Monte Carlo simulations of voter satisfaction efficiency. You use some statistical model to generate thousands of electorates (that is, voters with numeric utilities for candidates); a media model to give the voters information about each other; and a strategy model to turn information, utilities, and choice of voting system into valid ballots for that voting system. Then, you see who wins each time, and calculate the average overall utility of the winners. Clearly, there are a lot of questionable assumptions in the statistical, media, and strategy models, but the interesting thing is that exploring various assumptions in all of those cases shows that the (plurality − dictatorship) ≈ (good system − plurality) equation is pretty robust, with various systems such as approval, Condorcet, majority judgment, score, or SODA in place of "good system".
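The setup described above can be sketched in a few dozen lines. This is a toy illustration with made-up modeling choices (uniform i.i.d. utilities, honest strategies, no media model), not the author's actual simulation code:

```python
import random

def simulate(n_voters=99, n_cands=5, n_trials=1000, seed=0):
    """Toy Monte Carlo comparison of voting systems by total social utility."""
    rng = random.Random(seed)
    totals = {"plurality": 0.0, "approval": 0.0, "dictator": 0.0, "best": 0.0}
    for _ in range(n_trials):
        # Each voter's utility for each candidate, i.i.d. uniform(0, 1).
        utils = [[rng.random() for _ in range(n_cands)] for _ in range(n_voters)]
        social = [sum(u[c] for u in utils) for c in range(n_cands)]
        # Plurality: everyone honestly votes for their single favorite.
        votes = [max(range(n_cands), key=u.__getitem__) for u in utils]
        plur = max(range(n_cands), key=votes.count)
        # Approval: approve every candidate above your own mean utility.
        approvals = [sum(1 for u in utils if u[c] > sum(u) / n_cands)
                     for c in range(n_cands)]
        appr = max(range(n_cands), key=approvals.__getitem__)
        # Random dictator: one ballot drawn by lot decides alone.
        d = rng.choice(utils)
        dict_win = max(range(n_cands), key=d.__getitem__)
        # The utility-maximizing winner, as an upper benchmark.
        best = max(range(n_cands), key=social.__getitem__)
        for name, w in [("plurality", plur), ("approval", appr),
                        ("dictator", dict_win), ("best", best)]:
            totals[name] += social[w]
    return totals
```

With assumptions like these, the interesting comparison is the relative sizes of the gaps (best − approval), (approval − plurality), and (plurality − dictator) as the models are varied.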

There are certainly various ways to criticize the above.

  • "Don't believe it": If you think that I've messed up my math or not done a good job with the sensitivity analysis, of course you'd question my conclusions. But if you want to play with my code to check it, it's here.

  • "Utilitarianism is a bad metric": It may not be perfect, but as far as I can tell it's the only rational way to put numbers on things.

  • "Democracy is a bad idea": In other words, if you think that the average voter's estimate of their utility for a candidate has 0 or negative correlation with their true utility of that candidate winning, then this simulation is garbage. I'd respond with the old saying about democracy being the worst system except all the others.

  • "The advantages of democracy over dictatorship aren't in terms of who's in charge": if you think that democracy's clear superiority to dictatorship in terms of human welfare comes from something other than choosing better leaders (such as, for instance, reducing the prevalence of civil wars), then improving the voting system might not have a payoff comparable to that of instituting a voting system to begin with. I'd respond that this critique is probably partially right, but on the other hand, better leadership could credibly have better responses to crises (financial, environmental, and/or existential-risk) which could indeed be on the same order as the democracy dividend.

All in all, taking a more outside view, I see how the combination of the above objections would reduce your estimate of the expected "voting system dividend". Still, when I "shut up and multiply" I get: $80 trillion world GDP × 2% plausible (conservative) effect size in a good year × .1 plausible portion of good years over time × .5 plausible portion of good years over space (some countries' economies might already be immune to the kind of harm this could prevent) × .5 chance you trust my simulations × .1 correlation of voter preference with utility × .5 probability leadership makes any difference = about $2 billion/year potential payoff in expected value, even without compounding. That seems to me like (a) quite a conservative choice of factors, (b) not a totally implausible end result, and (c) still big enough to care about. Of course, it's incredibly back-of-the-envelope, but I invite you to try doing the estimation yourself.
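Spelling out that product (all factor values are the ones stated above; nothing new is added):

```python
# Back-of-the-envelope expected value of voting reform, per the comment.
world_gdp = 80e12  # $80 trillion world GDP
factors = {
    "effect size in a good year": 0.02,
    "portion of good years over time": 0.1,
    "portion of good years over space": 0.5,
    "chance you trust the simulations": 0.5,
    "correlation of voter preference with utility": 0.1,
    "probability leadership makes any difference": 0.5,
}
payoff = world_gdp
for f in factors.values():
    payoff *= f
print(f"${payoff / 1e9:.0f} billion/year")  # $2 billion/year
```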

Comment by homunq on 2014 Survey Results · 2015-02-23T01:53:00.725Z · LW · GW

[ ] Wow, these people are smart.
[ ] Wow, these people are dumb.
[ ] Wow, these people are freaky.
[ ] That's a good way of putting it, I'll remember that.

(For me, it's all of the above. "Insight porn" is probably the biggest, but it doesn't dominate.)

Comment by homunq on CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype · 2015-02-23T00:27:04.868Z · LW · GW

Electology is an organization dedicated to improving collective decision making — that is, voting. We run on a shoestring; somewhere in the lowish five digits of dollars per year. We've helped get organizations such as the German Pirate Party and various US state Libertarian Parties to use approval voting, and gotten bills brought up in several states (no major victories so far, but we're just starting).

Is a better voting system worth it, even if most people still vote irrationally? I'd say emphatically yes. Plurality voting is just a disaster as a system, filled with pathological results, perverse incentives, and pernicious equilibria. Credible numerical estimates (utility-based simulations) suggest that better systems such as approval voting offer as much improvement again as the move from dictatorship to democracy did.

Comment by homunq on CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype · 2015-02-23T00:07:34.173Z · LW · GW

In terms of “saving throws” one can buy for a humanity that may be navigating tricky situations in an unknown future, improvements to thinking skill seem to be one of the strongest and most robust.

Improvements to collective decision making seem to be potentially an even bigger win. I mean, voting reform; the kind of thing advocated by Electology. Disclaimer: I'm a board member.

Why do I think that? Individual human decisionmaking has already been optimized by evolution. Sure, that optimization doesn't fit perfectly with a modern need for rationality, but it's pretty darn good. However, democratic decisionmaking is basically still using the first system that anybody ever thought of, and Monte Carlo utility simulations show that we can probably make it at least twice as good (using a random dictator as a baseline).

On the other hand, achieving voting reform requires a critical mass, while individual rationality only requires individuals. And Electology is not as far along in organizational growth as CFAR. But it seems to me that it's a complementary idea, and that it would be reasonable for an effective altruist to diversify their "saving throw" contributions. (We would also welcome rationalist board members or volunteers.)

Comment by homunq on CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype · 2015-02-22T23:53:47.191Z · LW · GW

One idea for measurement in a randomized trial:

In order to apply, you have to list 4 people who would definitely know how awesome you're being a year from now, and give their contact info. Then, choose 1 of those people 6 months later and 1 person a year later and ask them how awesome the person is being. When you ask, include a "rubric" of various stories of various awesomeness levels, in which the highest levels are not always just $$$ but sometimes are. Ask the people you're asking to please not contact the person specifically to check awesomeness, because that could introduce bias ("this person is checking, that makes me remember the workshop I did, and feel awesome").

The 4 people should probably include no couples. Your family, long-term friends...

The one way this breaks down is Facebook. I mean, if your interaction with each person is separate, and the workshop makes you seem more awesome to each of 4 people, it is working. But if it just makes you post more upbeat things on Facebook, that might not translate to actual awesomeness. But I think that's a really minor factor.

Sure, it's gonna be a noisy and imperfect measurement. You will have to look at standard deviations and calculate power (including burning all 4 contacts for some people to see the within-subject variance). Also, correct for demographic info on contacts, and various other tricks to increase power. But one way or another, you'll get a posterior distribution of the causal impact.
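The "calculate power" step might look something like this (a normal-approximation sketch for a two-group comparison; the effect size and group sizes below are hypothetical placeholders, not numbers from the comment):

```python
from math import sqrt, erf

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def power(n_per_group, d, z_crit=1.96):
    """Approximate power of a two-sample test of means.

    n_per_group: subjects per arm; d: standardized effect size
    (assumed, hypothetical); z_crit: two-sided 5% critical value.
    """
    se = sqrt(2 / n_per_group)      # SE of the difference in means (sd = 1)
    z = d / se                      # noncentrality
    return 1 - normal_cdf(z_crit - z) + normal_cdf(-z_crit - z)
```

For example, `power(100, 0.5)` is around 0.94, and doubling the sample size pushes it higher; the within-subject variance estimated by "burning all 4 contacts" for some participants would feed into the effect-size denominator.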

Comment by homunq on The Importance of Sidekicks · 2015-02-22T14:30:14.131Z · LW · GW

I think you've misunderstood the question. As I understand it, it's not "is the distribution of startup values a power law" but "do startups distribute their profits to employees according to a power law".

Comment by homunq on The Importance of Sidekicks · 2015-02-22T12:55:00.155Z · LW · GW

Wish I could both up- and down-vote this comment: +1 for an interesting, cogent observation; -1 for following that up with facile beakering. So instead I upvoted this comment and downvoted your reply below (which deserves the downvote in its own right).

(I just made up the word "beakering". It means doing TV science, with beakers and bafflegab, in real life. A lot of amateur evo-something and neuro-something involve beakering.)

Comment by homunq on Improving The Akrasia Hypothesis · 2015-02-15T20:22:28.328Z · LW · GW

Would be better if you didn't say whom you ended up agreeing with. Most people here have either a halo or horns on Eliezer, and discounting that is distracting.

Comment by homunq on Roles are Martial Arts for Agency · 2014-08-22T16:17:13.787Z · LW · GW

That's simpler to say, but not at all simpler to do.

Comment by homunq on Why the tails come apart · 2014-08-22T16:04:38.881Z · LW · GW


(I realize you're busy, this is just a friendly reminder.)

Also, I added one clause to my comment above: the bit about "imperfectly measured", which is of course usually the case in the real world.

Comment by homunq on Why the tails come apart · 2014-08-02T17:58:39.590Z · LW · GW

Great article overall. Regression to the mean is a key fact of statistics, and far too few people incorporate it into their intuition.

But there's a key misunderstanding in the second-to-last graph (the one with the drawn-in blue and red "outcome" and "factor"). The black line, indicating a correlation of 1, corresponds to nothing in reality. The true correlation is the line from the vertical tangent point at the right (marked) to the vertical tangent point at the left (unmarked). If causality indeed runs from "factor" (height) to "outcome" (skill), that's how much extra skill an extra helping of height will give you. Thus, the diagonal red line should follow this direction, not be parallel to the 45 degree black line. If you draw this line, you'll notice that each point on it has equal vertical distance to the top and bottom of the elliptical "envelope" (which is, of course, not a true envelope for all the probability mass, just an indication that probability density is higher for any point inside than any point outside).

Things are a little more complex if the correlation is due to a mutual cause, "reverse" causation (from "outcome" to "factor"), or if "factor" is imperfectly measured. In that case, the line connecting the vertical tangents may not correspond to anything in reality, though it's still what you should follow to get the "right" (minimum expected squared error) answer.

This may seem to be a nitpick, but to me, this kind of precision is key to getting your intuition right.

Comment by homunq on A critique of effective altruism · 2014-03-16T12:43:09.439Z · LW · GW

No argument here. It's hard to build a good social welfare function in theory (ie, even if you can assume away information limitations), and harder in practice (with people actively manipulating it). My point was that it is a mistake to think that Arrow showed it was impossible.

(Also: I appreciate the "thank you", but it would feel more sincere if it came with an upvote.)

Comment by homunq on A critique of effective altruism · 2013-12-21T19:29:23.346Z · LW · GW

I think you've done better than CarlShulman and V_V at expressing what I see as the most fundamental problem with EA: the fact that it is biased towards the easily- and short-term- measurable, while (it seems to me) the most effective interventions are often neither.

In other words: how do you avoid the pathologies of No Child Left Behind, where "reform" becomes synonymous with optimizing to a flawed (and ultimately, costly) metric?

This issue is touched by the original post, but not at all deeply.

Comment by homunq on A critique of effective altruism · 2013-12-21T19:23:09.761Z · LW · GW

Note: Arrow's Impossibility Theorem is not actually a serious philosophical hurdle for a utilitarian (though related issues such as the Gibbard-Satterthwaite theorem may be). That is to say: it is absolutely trivial to create a social utility function which meets all of Arrow's "impossible" criteria, if you simply allow cardinal instead of just ordinal utility. (Arrow's theorem is based on a restriction to ordinal cases.)
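A toy illustration of the cardinal point (hypothetical example code, not from the comment): rank candidates by total score. Because the social comparison of A versus B depends only on the scores voters give A and B, independence of irrelevant alternatives holds, along with unanimity and non-dictatorship.

```python
def social_ranking(ballots):
    """Aggregate cardinal (score) ballots into a social ranking.

    ballots: list of dicts mapping candidate -> numeric utility.
    Returns candidates ordered from most to least total score.
    """
    totals = {}
    for ballot in ballots:
        for cand, score in ballot.items():
            totals[cand] = totals.get(cand, 0) + score
    return sorted(totals, key=totals.get, reverse=True)
```

Dropping candidate C from every ballot cannot change whether A ranks above B, which is exactly the IIA property that ordinal rules cannot satisfy (alongside Arrow's other criteria).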

Comment by homunq on A critique of effective altruism · 2013-12-21T19:15:16.091Z · LW · GW

Upvoted because I think this is a real issue, though I'm far from sure whether I'd put it at "worst".

Comment by homunq on A critique of effective altruism · 2013-12-21T13:01:43.148Z · LW · GW

... And that is not a new idea either. "Allow me to play the devil's advocate for a moment" is a thing people say even when they are expressing support before and after that moment.

Comment by homunq on What Can We Learn About Human Psychology from Christian Apologetics? · 2013-11-03T11:08:30.744Z · LW · GW

Can anyone explain why the parent was downvoted? I don't get it. I hope there's a better reason than the formatting fail.

Comment by homunq on Democracy and rationality · 2013-10-31T19:58:39.249Z · LW · GW

This is a key question. The general answer is:

  1. For realistic cases, there is no such theorem, and so the task of choosing a good system is a lot about choosing one which doesn't reward strategy in realistic cases.

  2. Roughly speaking, my educated intuition is that strategic payoffs grow insofar as you know that the distinctions you care about are orthogonal to what the average/modal/median voter cares about. So insofar as you are average/modal/median, your strategic incentive should be low; which is a way of saying that a good voting system can have low strategy for most voters in most elections.

  2a. It may be possible to make this intuition rigorous, and prove that no system can make strategy non-viable for the orthogonal-preferenced voter. However, that would involve a lot of statistics and random variables.... I guess that's what I'm learning in my PhD, so eventually I may be up to taking on this proof.

  3. The exception, the realistic case where there are a number of voters who have an interest that's orthogonal to the average voter, is a case called the chicken dilemma, which I'll talk about a lot more in section 6. Chicken strategy is by far the trickiest realistic strategy to design away.

Comment by homunq on Democracy and rationality · 2013-10-31T19:52:54.962Z · LW · GW

Yup. That's what people say. I don't know what the general rule is, but it's definitely right for this case.

Comment by homunq on Democracy and rationality · 2013-10-31T02:23:08.953Z · LW · GW

I, too, hope that our disagreement will soon disappear. But as far as I can see, it's clearly not a semantic disagreement; one of us is just wrong. I'd say it's you.

So. Say there are 3 voters, and without loss of generality, voter 1 prefers A>B>C. Now, for every one of the 21 distinct combinations for the other two, you have to write down who wins, and I will find either an (a priori, determinative; not mirror) dictator or a non-IIA scenario.



ABC BAC: ?... you fill in these here

BCA CAB: .... this one's really the key, but please fill in the rest too.

Once you've copied these to your comment I will delete my copies.

Comment by homunq on Democracy and rationality · 2013-10-30T20:25:58.266Z · LW · GW

I'm sorry, you really are wrong here. You can't make up just one scenario and its result and say that you have a voting rule; a rule must give results for all possible scenarios. And once you do, you'll realize that the only ones which pass both unanimity and IIA are the ones with an a priori dictatorship. I'm not going to rewrite Arrow's whole paper here but that's really what he proved.

Comment by homunq on Democracy and rationality · 2013-10-30T20:22:15.474Z · LW · GW

Under Arrow's terms, this still counts as a dictator, as long as the other ballots have no effect. (Not "no net effect", but no effect at all.)

In other words: if I voted for myself, and everyone else voted for Kanye, and my ballot happened to get chosen, then I would win, despite being 1 vote against 100 million.

It may not be the traditional definition of dictatorship, but it sure ain't democracy.

Comment by homunq on Democracy and rationality · 2013-10-30T18:29:57.501Z · LW · GW

Again, you're simply not understanding the theorem. If a system fails non-dictatorship, that really does mean that there is an a priori dictator. That could be that one vote is chosen by lot after the ballots are in, or it could be that everybody (or just some special group or person) knows beforehand that Mary's vote will decide it. But it's not that Mary just happens to turn out to be the pivotal voter between a sea of red on one side and blue on the other.

I realize that this is counterintuitive. Do you think I have to be clearer about it in the post?

Comment by homunq on Democracy and rationality · 2013-10-30T18:25:13.889Z · LW · GW

Wait until I get to explaining SODA; a voting system where you can vote for one and still get better results.

As for comparing different societies: there are of course societies with different electoral systems, and I think some systems do tend to lead to better governance than in the US/UK, but the evidence is weak and VERY confounded. It's certainly impossible to clearly demonstrate a causal effect; and would be, even assuming such an effect existed and were sizeable. I will talk about this more as I finish this post.

Comment by homunq on Democracy and rationality · 2013-10-30T17:28:35.727Z · LW · GW

Thanks, I'll work on that.

Comment by homunq on What Can We Learn About Human Psychology from Christian Apologetics? · 2013-10-30T17:26:59.801Z · LW · GW

Your probability theory here is flawed. The question is not about P(A&B), the probability that both are true, but about P(A|B), the probability that A is true given that B is true. If A is "has cancer" and B is "cancer test is positive", then we calculate P(A|B) as P(B|A)P(A)/P(B); that is, if there's a 1/1000 chance of cancer and the test is right 99/100, then P(A|B) is .99×.001/(.001×.99 + .999×.01), which is about 1 in 10.
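Checking that arithmetic with the comment's own numbers (base rate 1/1000, test accuracy 99/100):

```python
# P(cancer | positive test) via Bayes' rule.
p_a = 0.001                 # P(A): prior probability of cancer
p_b_given_a = 0.99          # P(B|A): true-positive rate
p_b_given_not_a = 0.01      # P(B|not A): false-positive rate
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 3))  # 0.09 -- about 1 in 10, as stated
```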

Comment by homunq on Democracy and rationality · 2013-10-30T16:19:33.852Z · LW · GW

I'll certainly have more content that addresses these questions as the post develops. For now, I'll simply respond to your misunderstanding about Arrow. The problem is not that there will always be an a posteriori pivotal voter, but that (to satisfy the other criteria) there must be an a priori dictator. In other words, you would get the same election result by literally throwing away all ballots but one without ever looking at them. This is clearly not democracy.

Comment by homunq on Democracy and rationality · 2013-10-30T16:15:11.821Z · LW · GW

This is still in-progress, and I'm going to get to some of that later. Here's my defense of the current summary:

  • First, it's just a summary. If it could include all the subtleties of the article, I wouldn't need to write the article.
  • Second, even if the public voting systems (muni, state, and national) wherever you happen to live continue to be stupid ones, understanding voting systems better is useful knowledge. You should understand bad voting systems if they affect you, and good voting systems if you're in organizations that could use them.
  • Third, I don't agree that changing voting systems is a negligible priority. For instance: various cities nationwide, including most of the SF bay area, use IRV for city elections (though this isn't actually the best system, it is certainly a change from 15 years ago.) A number of states (at least 10 to my knowledge) have revamped their primary systems in this time. An approval voting initiative for primaries is currently in the signature stage in Oregon, and legislative study commissions of approval voting are underway in Rhode Island and Arizona, with Colorado considering one. States representing 136 electoral votes have signed the National Popular Vote interstate compact, which is about halfway to it taking effect. Obviously, these various facts affect a small minority of Americans, but that small minority is still millions of people. So I'd estimate that a nationwide change (accomplished at the state-by-state level and NOT through a constitutional amendment) is an outside chance but not a negligible one.

As to telling you how to find truth, how to win, and how to vote: obviously the goal here is not to tell you which way to vote, but to help deepen your understanding of the utility of voting mechanisms, both at the public scale and in private contexts.


On the other hand, I understand that you're telling me that this sounds grating to you, like overblown rhetoric. I'll see what I can do to improve that while keeping it succinct and intriguing. So thank you.

Comment by homunq on Trusting Expert Consensus · 2013-10-30T15:18:50.704Z · LW · GW

It's easy, but not helpful, to use "postmodern" as a shorthand for "bad ideas" of some kind. Something like Sturgeon's law ("90% of everything is crap") applies to postmodernism as to everything else, and I'd even agree that it's a kind of thinking that is more likely than average to come unmoored from reality, but that doesn't mean that it's barren of all insight. Especially today, at least 20 years after its heyday, and considering that even in its heyday it was a very rare academic department indeed where drinking the kool-aid was either mandatory or an excuse for stupidity (as opposed to wrongness, which it certainly did excuse; but again, Sturgeon), beating up on postmodernism seems like worrying about crack babies: slightly anachronistic and unhelpful.

Comment by homunq on Democracy and rationality · 2013-10-30T14:33:27.265Z · LW · GW

Thanks, I'll try to work him in. (I assume your last sentence is directed to yourself, though.)

Comment by homunq on Beautiful Probability · 2013-10-30T08:48:11.685Z · LW · GW

But in the Jaynes example we're talking about, there are clear observable differences: one researcher had announced that he would continue until he got a certain proportion of successes; the other, that he would stop at 100.

The key is that Jaynes gives a further piece of data: that somehow we know that "Neither would stoop to falsifying the data". In Bayesian terms, this information, if reliable, screens off our knowledge that their plans had differed. But in real life, you're never 100% certain that "neither would stoop to falsifying the data", especially when there's often more wiggle room than you'd realize about exactly which data get counted how. In that sense, a rigorous pre-announced plan may be useful evidence about whether there's funny business going on. The reviled "frequentist" assumptions, then, can be expressed in Bayesian terms as a prior distribution that assumes that researchers cheat whenever the rules aren't clear. That's clearly over-pessimistic in many cases (though over-optimistic in others; some researchers cheat even when the rules ARE clear); but, like other heuristics of "significance", it has some value in developing a "scientific consensus" that doesn't need to be updated minute-by-minute.

In general: sure, the world is Bayesian. But that doesn't mean that frequentist math isn't math. Good frequentist statistics is better than bad Bayesian statistics any day, and anyone who shuts their ears or perks them up just based on a simplistic label is doing themselves a disservice.
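The stopping-rule point can be made concrete (a toy sketch; the numbers are hypothetical, not from Jaynes): the fixed-sample-size researcher has a binomial likelihood, the stop-at-s-successes researcher a negative binomial one, and the two differ only by a constant factor. So a Bayesian who trusts both researchers' honesty gets identical posteriors from their identical data, even though frequentist significance calculations treat the two stopping rules differently.

```python
from math import comb

# Both researchers report the same data: s successes in n trials.
# One fixed n in advance (binomial); the other sampled until the s-th
# success (negative binomial). (Toy numbers, for illustration only.)
s, n = 7, 100

def binom_lik(theta):
    # P(s successes | n trials fixed in advance)
    return comb(n, s) * theta**s * (1 - theta)**(n - s)

def negbinom_lik(theta):
    # P(n trials needed | stop at the s-th success); last trial is a success
    return comb(n - 1, s - 1) * theta**s * (1 - theta)**(n - s)

# The likelihoods differ only by a constant (the binomial coefficients),
# so likelihood ratios between any two parameter values agree exactly,
# and Bayesian posteriors over theta are identical for both researchers.
for t1, t2 in [(0.05, 0.10), (0.07, 0.20)]:
    r1 = binom_lik(t1) / binom_lik(t2)
    r2 = negbinom_lik(t1) / negbinom_lik(t2)
    assert abs(r1 - r2) < 1e-9 * max(r1, r2)
```

The divergence only appears once you stop conditioning on honesty: the frequentist p-value sums over the *other* outcomes each stopping rule could have produced, which is exactly where the pre-announced plan re-enters as evidence.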

Comment by homunq on Bayesianism for Humans · 2013-10-30T08:19:27.239Z · LW · GW

Even for an ideal reasoner, successful retrospective predictions clearly do not play the same role as prospective predictions. The former must inevitably be part of locating the hypothesis; they thus play a weaker role in confirming it. Eliezer's story you link to is about how the "traditional science" dictum about not using retrospective predictions can be just reversed stupidity; but just reversing young Eliezer's stupidity in the story one more time doesn't yield intelligence.

Edit: this comment has been downvoted, and in considering why that may be, I think there are ambiguities in both "ideal reasoner" and "play the same role". Yes, the value of evidence does not change depending on when a hypothesis was first articulated, so some limitless entity that was capable of simultaneously evaluating all possible hypotheses would not care. However, a perfectly rational but finite reasoner could reasonably consider some amount of old evidence to have been "used up" in selecting the hypothesis from an implicit background of alternative hypotheses, without having to enumerate all of those alternatives; and thus habitually avoid recounting a certain amount of retrospective evidence. Any "successful prediction" would presumably be by a hypothesis that had already passed this threshold (otherwise it's just called a "lucky wild-ass guess"). I'm speaking in simple heuristic terms here, but this could be made more rigorous and numeric, up to and including a superhuman level I'd consider "ideal".

Comment by homunq on A Voting Puzzle, Some Political Science, and a Nerd Failure Mode · 2013-10-19T15:31:02.049Z · LW · GW

Actually, that's probably a different phenomenon. Stores of a similar type tend to cluster, because that's where the customers (and, to some extent, suppliers) cluster. If you were opening a new flower stall, then 1% of the 10K potential customers in the flower market is still a better deal than 100% of the 10 potential customers on some random street corner.

Comment by homunq on A Voting Puzzle, Some Political Science, and a Nerd Failure Mode · 2013-10-19T12:15:48.763Z · LW · GW

This post does a decent job at describing how plurality (and single-member districts) make political problems more intractable than they look. However, it doesn't describe some of the more pathological failure modes of these voting systems (hint: was Bush or Gore closer to the median? How about Clinton or Bush Sr.? What did those elections have in common?). Note that as with many strategic situations, pathology doesn't have to actually manifest as in the examples above in order to have a substantial effect.

Because it fails to mention these things, the post does a disappointingly poor job at discussing voting systems in general. It doesn't even touch on other voting system proposals and their possible effects on collective rationality; nor the theorems which put limits on those effects.

I tend to geek out on these matters. I've even accumulated some "expert" credentials: I'm on the board of directors of the Center for Election Science, and I'm currently conducting a behavioral study of motivated human strategy under 8 different voting systems. I have been mulling whether to write a post on this for Less Wrong. The success of this post makes me more likely to do so. Upvotes on this comment would too. Responses to this comment mentioning recent donations to CES of $60 or more would make it certain.

(Obviously, you could figure out my real name from the above info; that's fine, but please don't post it here.)

Comment by homunq on The genie knows, but doesn't care · 2013-09-07T14:54:35.720Z · LW · GW

Let's say we don't know how to create a friendly AGI but we do know how to create an honest one; that is, one which has no intent to deceive. So we have it sitting in front of us, and it's at the high end of human-level intelligence.

Us: How could we change you to make you friendlier?

AI: I don't really know what you mean by that, because you don't really know either.

Us: How much smarter would you need to be in order to answer that question in a way that would make us, right now, looking through a window at the outcome of implementing your answer, agree that it was a good idea?

AI: There's still a lot of ambiguity in that question (for instance, 'outcome' is vague), and I'm not smart enough to answer it exactly, but OK... I guess I'd need about 2 more petafroops.

Us: How do we give you 2 petafroops in a way that keeps you honest?

AI: I think it would work if you smurfed my whatsits.

Us: OK..... there. Now, first question above.

AI+: Well, you could turn me off, do the hard work of figuring out what you mean, and then rebuild me from scratch.

Us: What would you look like then?

AI+: Hard to say, because in 99.999% of my sims, one of you ends up getting lazy and turning me back on to try to cheat.

Us: Tell us about what happens in the 0.001%.

AI+: Blah blah blah blah...

Us: We're getting bored, and it sounds as if it works out OK. Imagine you skipped ahead a random amount, and told us one more thing; what are the chances we'd like the sound of it?

AI+: About 70%

Us: That's not good enough... how do we make it better?

AI+: Look, you've just had me simulate 100,000 copies of your entire planet to make that one guess, then simulate many copies of me talking to you about how it comes out to calculate that probability. I can't actually do that to an infinite degree. You're going to have to ask better questions if you want me to answer.

Us: OK. What are the chances we figure out the right questions before a supervillain uses you to take over the world?

AI+: 2%

Us: OK, let's go with the thing that we like 70% of.

AI+: OK.

(But it isn't friendly, because the 30% turned out to be the server farms for...)


The point of this dialogue is that it's certainly possible that an honest/tool AI (probably easier to build than a FAI) could help build an FAI, but there's still a lot of things that could go wrong, and there's no reason to believe there's any magic-bullet protection against those failures that's any easier than figuring out FAI.

Comment by homunq on The genie knows, but doesn't care · 2013-09-07T13:09:25.932Z · LW · GW

There are a number of possibilities still missing from the discussion in the post. For example:

  • There might not be any such thing as a friendly AI. Yes, we have every reason to believe that the space of possible minds is huge, and it's also very clear that some possibilities are less unfriendly than others. I'm also not making an argument that fun is a limited resource. I'm just saying that there may be no possible AI that takes over the world without eventually running off the rails of fun. In fact, the question itself seems superficially similar to the halting problem, where "running off the rails" is the analogue for "halting", suggesting that even if friendliness existed, it might not be rigorously provable. (Note: this analogy doesn't say what I think it says; see response below. But I still mean to say what I thought; a friendly world may be fundamentally less stable than a simple infinite loop, perhaps to the point of being unprovable.)

  • Alternatively, building a "Friendly-enough" AI may be easier than you think. Consider the game of go. Human grandmasters (professional 9-dan players) have speculated that "God" (that is, perfect play) would rate about 13 dan professionally; that is, that they could beat such a player more than half the time given a 3 or 4 stone handicap. Replace "go" with "taking over the world", "professional 9-dan player" with "all of humanity put together", and "3 or 4 stone handicap" with "relatively simple-to-implement Asimov-type safeguards", and it is possible that this describes the world. And it is also possible that a planetary computer would still "only be 12-dan"; that is, that additional computing power shows sharply diminishing intelligence returns at some point "short of perfection", to the point where a mega-computer would still be noticeably imperfect.

There may be good reasons not to spend much time thinking about the possibilities that FAI is impossible or "easy". I know that people around here have plenty of plausible arguments for why these possibilities are small; and even if they are appreciable, the contrary possibility (that FAI is possible but hard) is probably where the biggest payoffs lie, and so merits our focus. And the OP discussion does seem valid for that possible-hard case. But I still think it would be improved by stating these assumptions up-front, rather than hiding or forgetting about them.

Comment by homunq on Welcome to Less Wrong! (6th thread, July 2013) · 2013-08-07T18:18:16.325Z · LW · GW

I think we've mostly said what we have to say, and this is off-topic.

My numbers showed that at best voting is instrumentally a break-even proposition. I do it because I find it hedonically rational; for instance, I don't have to lie to my family about it. Part of what makes it a net plus for me hedonically is that I have a vision and a plan for a world where a better voting system (such as approval voting or SODA voting) is used and so I am not doomed to eternally pick the lesser of two evils. I can understand if Crystal makes a different decision for her own hedonic reasons.

I also suspect that metarational considerations such as timeless decision theory would argue in favor of it, because free riding on other people's voting effort is akin to betrayal in a massively-multiplayer prisoners' dilemma. I have not worked out the math on that, but my mathematical intuition tends to be pretty good.

Your description of your friends' advocacy suggests you are attached to the idea that politics is a waste of time, not just for you, but for others. I suspect that belief of yours is not making you or anyone else happier. I recognize that you could probably make the converse criticism of me, but I am happy to prefer a world where aspiring rationalists vote to one where they don't (even when their vote would probably be negatively correlated with mine, as I suspect yours would be).