Aumann voting; or, How to vote when you're ignorant

post by PhilGoetz · 2009-04-02T18:54:15.828Z · LW · GW · Legacy · 37 comments

As Robin Hanson is fond of pointing out, people would often get better answers by taking other people's answers more into account.  See Aumann's Agreement Theorem.

The application is obvious if you're computing an answer for your personal use.  But how do you apply it when voting?

Political debates are tug-of-wars.  Say a bill is being voted on to introduce a 7-day waiting period for handguns.  You might think that you should vote on the merits of a 7-day waiting period.  This isn't what we usually do.  Instead, we've chosen our side on the larger issue (gun control: for or against) ahead of time; and we vote whichever way is pulling in our direction.

To use the tug-of-war analogy:  There's a knot tied in the middle of the rope, and you have some line in the sand where you believe the knot should end up.  But you don't stop pulling when the knot reaches that point; you keep pulling, because the other team is still pulling.  So, if you're anti-gun-control, you vote against the 7-day waiting period, even if you think it would be a good idea; because passing it would move the knot back towards the other side of your line.

Tug-of-war voting makes intuitive sense if you believe that an irrational extremist is usually more politically effective than a reasonable person is.  (It sounds plausible to me.)  If you've watched a debate long enough to see that the "knot" does a bit of a random walk around some equilibrium that's on the other side of your line, it can make sense to vote this way.

How do you apply Aumann's theorem to tug-of-war voting?

I think the answer is that you try to identify which side has more idiots, and vote on the other side.

I was thinking of this because of the current online debate between Arthur Caplan and Craig Venter on DNA privacy.  I don't have a strong opinion on which way to vote, largely because it's nowhere clearly stated what you're voting for or against.

So I can't tell what the right answer is myself.  But I can identify idiots.  Applying Aumann's theorem, I take it on faith that the non-idiot population can eventually work out a good solution to the problem.  My job is to cancel out an idiot.

My impression is that there is a large class of irrational people who are generally "against" biotechnology because they're against evolution or science.  (This doesn't come out in the comments on economist.com, which are surprisingly good for this sort of online debate, and unfortunately don't supply enough idiots to be statistically significant.)  I have enough experience with this group and their opposite number to conclude that they are not counterbalanced by a sufficient number of uncritically pro-science people.

So I vote against the proposition, even though the vague statement "People's DNA sequences are their business, and nobody else's" sounds good to me.  I am picking sides not based on the specific issue at hand, but on what I perceive as the larger tug-of-war; and pulling for the side with fewer idiots.

Do you think this is a good heuristic?

You might break your answer into separate parts for "tug-of-war voting" (which means to choose sides on larger debates rather than on particular issues) and "cancel out an idiot" (which can be used without adopting tug-of-war voting).

EDIT: Really, please do say if your comment refers to "tug-of-war" voting or "cancelling out an idiot".  Perhaps I should have broken them into separate posts.

37 comments

Comments sorted by top scores.

comment by JulianMorrison · 2009-04-02T23:17:52.816Z · LW(p) · GW(p)

Much more common situation: the parties are A and B. A is slightly more idiotic. The right answer is C, which has no candidate and causes both A and B to recoil in horror.

Vote how?

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-03T14:08:19.645Z · LW(p) · GW(p)

You have 2 options:

  • Sit at home and whine about how stupid people are.
  • Vote B.
Replies from: ciphergoth, ialdabaoth
comment by Paul Crowley (ciphergoth) · 2009-04-03T14:21:42.971Z · LW(p) · GW(p)

You can hope to do something about "how stupid people are", but only in the long term.

In the immediate term, where the election is tomorrow: recognising that this is only a small contribution towards staving off disaster, and that acting to make other choices possible is far more important; disliking both intensely for their corruption and self-serving bias; holding your nose so hard it turns blue, vote B, vote B, please vote B.

(edited following parent edit)

Replies from: army1987
comment by A1987dM (army1987) · 2013-12-12T18:29:50.963Z · LW(p) · GW(p)

Holding your nose

I'd also spray some deodorant onto the ballot while I'm at it.

comment by ialdabaoth · 2013-12-12T18:40:52.040Z · LW(p) · GW(p)

Let's abstract this into a simple game:

Imagine that there are 100 agents each playing this game, and all are presented with the same choice at the next iteration:

  • A) Add 500 snargs to the polunk.
  • B) Add 400 snargs to the polunk.
  • C) Add 25 snargs to the polunk.

The polunk currently has 500 snargs in it. Once the polunk has 2,000 snargs in it, each and every agent playing the game will be cast into the outer darkness, where they will be forced to sort Precious Mao buttons for all eternity while ferocious rabid weasels gnaw at their extremities.

It is now time to choose. You know from polling that approximately 25% of the agents will tend to pick A, and approximately 20% of the agents will tend to pick B. The remainder have an equal chance of picking A, B, or C.

So which do you choose: A, B, or C?
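The game is concrete enough to simulate. Below is a minimal sketch under two assumptions the comment doesn't state: the plurality winner each round is the option actually applied, and strategic voters are drawn from the 55 undecided agents (the `strategic_b_voters` parameter is my addition, modeling the "vote B" answer from the parent thread):

```python
import random

ADD = {"A": 500, "B": 400, "C": 25}   # snargs each option adds
START, DOOM = 500, 2000

def rounds_until_doom(strategic_b_voters=0, seed=0):
    """25 agents are committed to A and 20 to B; of the remaining 55,
    `strategic_b_voters` deliberately vote B and the rest pick
    uniformly at random.  The plurality winner is applied (assumed)."""
    rng = random.Random(seed)
    snargs, rounds = START, 0
    while snargs < DOOM:
        tally = {"A": 25, "B": 20 + strategic_b_voters, "C": 0}
        for _ in range(55 - strategic_b_voters):
            tally[rng.choice("ABC")] += 1
        snargs += ADD[max(tally, key=tally.get)]
        rounds += 1
    return rounds

for k in (0, 10, 55):
    avg = sum(rounds_until_doom(k, s) for s in range(1000)) / 1000
    print(f"{k:2d} strategic B voters -> doom after ~{avg:.2f} rounds")
```

Even unanimous strategic voting for B buys only about one extra round; option C, which can never win under these polls, is the only one that would meaningfully delay the weasels.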

Replies from: Lumifer
comment by Lumifer · 2013-12-12T19:16:36.984Z · LW(p) · GW(p)

Let's abstract this into a simple game

The game is considerably more complicated and involves concepts such as legitimacy and perceived support for policies.

comment by dreeves · 2009-04-13T04:29:31.003Z · LW(p) · GW(p)

I really liked Robin Hanson's essay about this, "Policy Tug-O-War":

http://www.overcomingbias.com/2007/05/policy_tugowar.html

Moral: Pull policy ropes sideways!

comment by Paul Crowley (ciphergoth) · 2009-04-02T22:55:04.639Z · LW(p) · GW(p)

If you're going to do this, you must research idiocy independently and gather statistics on its specific forms. Do not allow your impressions of where the idiots are and how numerous they are to be formed by the media.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-03T14:09:04.412Z · LW(p) · GW(p)

Taking a random sample, even of 1 (as in Heinlein, above), is stochastic, but robust against media bias.

comment by robzahra · 2009-04-05T14:18:32.549Z · LW(p) · GW(p)

Phil: clever heuristic, canceling idiots... though note that it actually applies directly from a Bayesian expected-value calculation in certain scenarios:

  1. Assume you have no info about the voting issues except who the idiots are and how they vote. Now either your prior is that reversed stupidity is intelligence in this domain, or it's not. If it is, then you have clear Bayesian grounds to vote against the idiots. If it's not, then reversed stupidity either is definite stupidity or it has zero correlation. In the first case, reason itself does not work (e.g., a situation in which God confounds the wisdom of the wise, i.e., you're screwed precisely for being rational). If zero correlation, then the idiots are noise, and provided you can count the idiots, to be sure that multiple cancelers don't cancel the same idiot, you reduce noise, which is the best you can do.

The doubtful point in this assessment is how you identify "idiots" in a voting situation which ostensibly you know nothing else about. In your examples, the info you used to identify the idiots seemed to require some domain knowledge which itself should figure into how you vote. Assuming idiots are "cross-domain incompetent" may be true for worlds like ours, but that needs to be fleshed out a lot more for soundness, I think.
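The zero-correlation branch can be made concrete. If each idiot is a fair coin, the standard deviation of their net vote margin is sqrt(N), and each canceler who pairs off against a distinct idiot removes one coin from the sum. A minimal simulation, with the pairing mechanics my own construction rather than anything specified above:

```python
import random
import statistics

def margin_sd(n_idiots, n_cancelers, trials=10_000, seed=0):
    """Standard deviation of the idiots' net vote margin when
    `n_cancelers` voters each pair off against one distinct idiot
    and vote the opposite way (the zero-correlation case: every
    idiot is a fair coin)."""
    rng = random.Random(seed)
    margins = []
    for _ in range(trials):
        margin = 0
        for i in range(n_idiots):
            vote = 1 if rng.random() < 0.5 else -1
            margin += vote
            if i < n_cancelers:      # this idiot has a dedicated canceler
                margin -= vote       # so his vote nets out to zero
        margins.append(margin)
    return statistics.pstdev(margins)

print(margin_sd(1000, 0))    # ~31.6, i.e. sqrt(1000)
print(margin_sd(1000, 500))  # ~22.4, i.e. sqrt(500): less noise
```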

comment by abigailgem · 2009-04-03T14:55:35.671Z · LW(p) · GW(p)

I think you need evidence about what effect non-tug-of-war voting has.

Suppose I support the free ownership of weapons, but think a seven day waiting period is better than none.

If I vote for that waiting period, am I demoralising my fellow gun supporters, and invigorating the gun control types, who will therefore struggle harder for more restrictions? Or invigorating my side, which will make sure it does not get defeated next time? Too little evidence to make a prediction.

Or what if I say, well, seven days is OK, but if they win this, the gun-control types will then demand gun licensing, involving gun holders needing annual psychiatrists' reports. So I have to tug against seven days, in case something worse comes along.

I would vote for the policy I supported. This has little enough effect on whether that policy gets made into law; I would think the effect on future changes is even more negligible.

As a British citizen, I have never been eligible to vote in a referendum. It seems that American propositions are much more common.

Less Wrong SF quote: "The right to bear weapons is the right to be free"- The Weapon Shops of Isher.

comment by dfranke · 2009-04-03T00:21:47.822Z · LW(p) · GW(p)

"If you are part of a society that votes, then do so. There may be no candidates and no measures you want to vote for, but there are certain to be ones you want to vote against. In case of doubt, vote against. By this rule you will rarely go wrong. If this is too blind for your taste, consult some well-meaning fool (there is always one around) and ask his advice. Then vote the other way. This enables you to be a good citizen (if such is your wish) without spending the enormous amount of time on it that truly intelligent exercise of franchise requires."

--Heinlein

Replies from: John_Maxwell_IV, PhilGoetz
comment by John_Maxwell (John_Maxwell_IV) · 2009-04-03T00:43:47.773Z · LW(p) · GW(p)

Now I've heard it: definite proof that Heinlein is a nutcase. Here he openly advocates the idea that reversed stupidity is intelligence.

comment by PhilGoetz · 2009-04-03T14:07:05.961Z · LW(p) · GW(p)

Heinlein's approach is stochastic, but more robust.

comment by SoullessAutomaton · 2009-04-02T23:04:40.115Z · LW(p) · GW(p)

Voting to "cancel out an idiot" is possibly acceptable as a first-order approximation, but sorely lacking beyond that.

Even assuming single-issue voting on a question that is completely linear: if you believe the correct point is X% of the way from A to B, and Y% of the idiots vote towards A with the remaining (100-Y)% towards B, then a second-order approximation of rationality would mean voting randomly toward one side or the other, weighted by the difference between X and Y, such that if X and Y are equal you flip a coin.
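The comment leaves the exact mapping open, so the sketch below is one possible reading rather than a definitive rule: vote towards B with probability 1/2 + (Y - X)/2, which collapses to a fair coin when X = Y. The function name and the scaling are my own interpretation.

```python
import random

def second_order_vote(x, y, rng=random):
    """One possible reading of the rule above (interpretation mine).

    x: believed correct point, as a fraction of the way from A to B
    y: fraction of idiot votes pulling towards A

    The more the idiots over-pull towards A relative to where you
    want the knot (y > x), the more likely you vote towards B.
    """
    p_towards_b = min(1.0, max(0.0, 0.5 + (y - x) / 2))
    return "B" if rng.random() < p_towards_b else "A"

print(second_order_vote(0.5, 0.5))  # fair coin
print(second_order_vote(0.2, 0.9))  # votes B with probability 0.85
```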

Replies from: thomblake, PhilGoetz
comment by thomblake · 2009-04-03T18:43:13.052Z · LW(p) · GW(p)

Indeed - I've considered similar problems with Less Wrong comment voting. If I see a comment that's rated 20 and I think it's more like a 5, I'm tempted to vote it down. But I resist the urge, because I won't look at it again, and there might be 20 people later on who decide to vote it down on its merits, in which case I would want to cancel them out by voting up. So it seems best, when voting isn't one-off and closed, to vote one's conscience.

Replies from: MasterGrape, BradTaylor
comment by MasterGrape · 2009-04-04T08:22:17.490Z · LW(p) · GW(p)

Is the problem here our inclination to interpret the number of points or karma as a rating in and of itself? As I understand it, that is just a tally of the upvotes and downvotes.

A 20 isn't four times as correct as a 5. It isn't even necessarily perceived as correct by four times as many people since the total number of votes might be larger for the 5 than for the 20.

So if we see a comment rated 20 and think it's more like a 5, we need to correct our thinking. Because this rating is not a 20/20 or some other percentage. The difference between 5 and 20 isn't necessarily qualitative. Does that make sense?

Replies from: army1987
comment by A1987dM (army1987) · 2011-10-28T22:32:53.781Z · LW(p) · GW(p)

Indeed. One of the things I don't like that much about the karma system is that I'd consider 5 upvotes and 0 downvotes to be better than 24 upvotes and 20 downvotes.
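One standard way to formalize that preference, offered here as a suggestion rather than anything the karma system actually does, is to rank comments by the lower bound of the Wilson score interval on the upvote fraction; it does put 5 up / 0 down well above 24 up / 20 down:

```python
import math

def wilson_lower_bound(up, down, z=1.96):
    """Lower bound of the 95% Wilson score interval for the
    true upvote fraction, given observed up/down votes."""
    n = up + down
    if n == 0:
        return 0.0
    p = up / n
    center = p + z * z / (2 * n)
    spread = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - spread) / (1 + z * z / n)

print(wilson_lower_bound(5, 0))    # ~0.57
print(wilson_lower_bound(24, 20))  # ~0.40
```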

comment by BradTaylor · 2009-04-04T07:34:51.288Z · LW(p) · GW(p)

Surely, other things equal, your best estimate for future voting is current voting. It's more likely that another 20 will upvote than another 20 downvote. If you're only concerned with the outcome, your best strategy will be to downvote. Of course, you may feel really bad if you downvoted a comment below what you think it deserves, because you were responsible.

comment by PhilGoetz · 2009-04-03T02:55:02.210Z · LW(p) · GW(p)

That approach would be good if there were a large number of people using this strategy, or if you voted many times on the same issue.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-03T14:33:04.474Z · LW(p) · GW(p)

if you voted many times on the same issue.

In this case, moving to Chicago is an option.

comment by Zvi · 2009-04-02T21:17:08.661Z · LW(p) · GW(p)

This assumes that the debate and possible solution set lie along a straight line, in which case reversed stupidity is very close to intelligence. In situations where this is strictly the case, this method might not be bad; in markets, if you can manage to buy when the idiots sell and sell when the idiots buy (again, along a straight line of possible values), in my experience you end up doing well, provided you can figure out which end of the rope is which. JGWeissman, I wouldn't worry about overcancellation too much, because the number of idiots is large and the number of people willing to employ heuristics like this is small.

In most situations of this type the best solutions lie far from the rope and even the smart people have long since given up doing anything but pulling. If that is not possible, and there is no cost to pull on the rope, trying to cancel out the idiots is on average likely to be better than doing nothing, but I certainly wouldn't think this is a good primary methodology to make decisions.

comment by Emile · 2009-04-02T20:21:57.427Z · LW(p) · GW(p)

I don't think it's a good heuristic, and I don't think you do either. Reversed stupidity is not intelligence, and it's more efficient to tug "sideways".

For issues that are split around 95%-5%, I wouldn't be surprised if the proportion of idiots had very little correlation to the truth of the causes.

Replies from: William
comment by William · 2009-04-24T15:26:53.541Z · LW(p) · GW(p)

The assumption is that you're in a two-choice vote, where there is no way to pull the rope sideways.

comment by MasterGrape · 2009-04-04T08:57:45.167Z · LW(p) · GW(p)

Are you advocating voting in order to cancel out mindless voters? Or does the heuristic promote voting to cancel out the mindless in general?

I ask because I don't think you can generally distinguish between voting idiots and non-voting idiots in a secret ballot system.

Imagine a less-publicized election with low turnout. If the pro-biotechnology group votes more reliably, they might actually have more mindlessly pro-science voters at the polls, because a large number of anti-science voters stayed home.

If the heuristic dictates voting against idiots in general, then it falls to the aforementioned "reversed stupidity is not intelligence". If the heuristic dictates voting against voting idiots, then you need to have good assumptions about which idiots vote and which idiots stay home. And that's virtually unattainable knowledge.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-13T05:03:10.678Z · LW(p) · GW(p)

It dictates voting against idiots in general, and it doesn't reduce to "reversed stupidity is not intelligence" when there are 2 options on the ballot. You are correct that it could fail if the views of voting and non-voting idiots aren't positively correlated.

comment by mariz · 2009-04-03T16:58:57.072Z · LW(p) · GW(p)

I'm agnostic to the heuristic you propose, but I disagree with applying it to the metric that you use (being pro- or anti-science). Scientific progress might be slowed by respecting genetic privacy rights, but we could say the same of any privacy rights (or, indeed, many other things). Imagine how much faster sociology and psychology could advance if we knew what everybody does in the privacy of their homes. Surely there are considerations more important than the advancement of science.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-03T23:04:34.358Z · LW(p) · GW(p)

Don't know what you mean; being pro- or anti-science is not a metric.

Surely there are considerations more important. But some information is better than no information. It is better, in this case, to use less-important but less-biased information, than more-important, more-biased information.

comment by JGWeissman · 2009-04-02T21:00:20.756Z · LW(p) · GW(p)

My job is to cancel out an idiot.

How do you coordinate so that many others with the same strategy don't cancel out the same idiot? If you also consider everyone who uses this strategy as an idiot, it could work, but it seems difficult to achieve in practice. I think it would be more effective to actually make a judgment of what you would do if you were in charge, and then vote that way.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-02T21:30:57.016Z · LW(p) · GW(p)

Part of the purpose of this heuristic is that you can use it when you can't make such a judgement.

The strategy only comes into effect when there are many idiots on one side of a 2-sided issue. Until I become a famous political theorist, it is safe to assume there are more such idiots than people using this strategy.

Replies from: JGWeissman
comment by JGWeissman · 2009-04-02T21:57:58.729Z · LW(p) · GW(p)

So, in order to not be counterproductive, the strategy needs an environment in which it will be ineffective? Or are you suggesting that the difference in idiots on the two sides will be larger than the cancelers, but smaller than the cancelers combined with the experts? I think verifying this in a particular situation would be difficult.

On the other hand, if you actually have a position on the issue, you can use strategies that go beyond voting, like trying to persuade people. Even trying to persuade people not to vote because of their own ignorance could be more effective, if you really can't make a good judgment.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-04T05:24:02.719Z · LW(p) · GW(p)

Sorry - I can't figure out what you're asking in the 1st paragraph. I agree with your second paragraph.

Replies from: JGWeissman
comment by JGWeissman · 2009-04-04T06:11:34.686Z · LW(p) · GW(p)

Consider the following cases:

  1. The difference in the number of idiots on the two sides is greater than the number of cancelers plus the number of experts. The cancelers have not made a difference. The impact is neutral.

  2. The number of cancelers is sufficient to narrow the difference in the number of idiots to smaller than the number of experts (who have presumably achieved an expert consensus). The experts voting as a block can sway the election either way. The cancelers have enabled the experts to make a decision. The expected impact is positive (50% chance the experts change the decision, 50% the idiots were right anyways).

  3. The number of cancelers is greater than the difference in the number of idiots plus the number of experts. The cancelers have changed the results of the election, without empowering the experts. The expected impact is neutral. (50% chance the new decision is right, 50% it is wrong. It is worse if the strategy convinces you to cancel out the idiots you think are a little more likely to be right. If you are canceling the idiots you think are a little more likely to be wrong, you have other reasons to vote that way.)

Having a reliable positive impact depends on being in situation 2, which, given a small number of experts, seems unlikely unless you are careful to only apply the strategy in this case, which would be a lot of work. I expect other strategies to get better results for the effort.
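The three cases reduce to comparing the idiot surplus against the cancelers and the expert bloc. Here is a minimal classifier under the parent comment's assumptions (the tie-handling and exact inequalities are my choices, not spelled out above):

```python
def classify(idiot_gap, cancelers, experts):
    """Which of the three cases above applies?

    idiot_gap: surplus of idiot votes on one side
    cancelers: voters playing "cancel out an idiot" against that side
    experts:   informed voters assumed to vote as a bloc
    """
    net_gap = idiot_gap - cancelers
    if net_gap > experts:
        return "case 1: idiots prevail regardless (neutral impact)"
    if net_gap < -experts:
        return "case 3: cancelers flip the result themselves (expected neutral)"
    return "case 2: the expert bloc holds the balance (expected positive)"

print(classify(100, 20, 30))   # case 1
print(classify(100, 90, 30))   # case 2
print(classify(100, 150, 30))  # case 3
```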

comment by MichaelVassar · 2009-04-03T14:52:02.347Z · LW(p) · GW(p)

This is an excellent point!

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-03T15:30:29.042Z · LW(p) · GW(p)

I didn't think so, actually - it sounded to me like the fallacy outright of "reversed stupidity is not intelligence" - but taking your different opinion into account, I've promoted the post.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-03T15:38:15.213Z · LW(p) · GW(p)

In a binary proposition, reversing the largest stupidity seems likely to at least be marginally more intelligent than the alternative. Which isn't really saying much, overall.

Replies from: dlthomas
comment by dlthomas · 2011-10-28T22:58:15.045Z · LW(p) · GW(p)

Stupidity is uncorrelated with truth, not anticorrelated with truth. Reversed stupidity is still uncorrelated with truth.