A Rationalist Argument for Voting

post by Jameson Quinn (jameson-quinn) · 2018-06-07T17:05:42.668Z · LW · GW · 31 comments

The argument that voting is irrational is commonplace. From your point of view as a single voter, the chance that your one vote will sway the election is typically minuscule, and the direct effect of your vote is null if it doesn't sway the election. Thus, even if the expected value to you of your preferred candidate is hugely higher than that of the likely winner without your vote, when multiplied by the tiny chance your vote matters, the overall expected value isn't enough to justify even the small time and effort of voting. This problem at the heart of democracy has been noted by many, most prominently Condorcet, Hegel, and Downs.

There have been various counterarguments posed over the years:

Of all of the above, I find the last bullet most interesting. But I am not going to pursue that here. Here, I'm going to propose a different rationale for voting; one that, as far as I know, is novel.

Participating in democratic elections is a group skill that requires practice. And it's worth practicing this skill because there is an appreciable chance that a future election will have a significant impact on existential risk, and thus will have a utility differential so high as to make a lifetime of voting worth it.

Let's build a toy model with the following variables:

Note that t, f, s, and b refer to individuals' marginal chances, but independence is not assumed; outcomes can be correlated across voters. So the utility benefit per election per voter of the policy of "voting iff you notice that the election is important" is u·i·t·(s-b)·p, while the cost is i·t·c + (1-i)·f·c. The utility benefit per election of "always voting" is u·i·(s-b)·p, while its cost is c. If u/c can take values above 1e11 and i is above 1e-4 (values I consider plausible), then for reasonable choices of the other variables "always voting" can be a rational policy.
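As a sanity check, here is a minimal sketch of the two policies' expected values. The variable interpretations in the comments, and all the numbers, are my own illustrative assumptions, not the post's:

```python
# Toy model of the two voting policies. Interpretations and values below are
# assumptions chosen for illustration:
#   u     utility at stake in an "important" (x-risk-relevant) election
#   c     cost of voting in one election
#   i     chance that a given election is important
#   t     chance you correctly notice an important election (true positive)
#   f     chance you wrongly flag an unimportant election (false positive)
#   s, b  marginal chance of swinging the election with / without practice
#   p     chance the practiced skill actually pays off

def ev_vote_if_noticed(u, c, i, t, f, s, b, p):
    """Net EV of 'vote iff you notice the election is important'."""
    benefit = u * i * t * (s - b) * p
    cost = i * t * c + (1 - i) * f * c
    return benefit - cost

def ev_always_vote(u, c, i, s, b, p):
    """Net EV of 'always vote'."""
    return u * i * (s - b) * p - c

# With u/c = 1e12 and i = 1e-4, both policies come out well ahead:
print(ev_always_vote(1e12, 1.0, 1e-4, 1e-4, 1e-5, 0.5))
print(ev_vote_if_noticed(1e12, 1.0, 1e-4, 0.5, 0.05, 1e-4, 1e-5, 0.5))
```

Under these (debatable) parameter choices, both policies have large positive expected value; the interesting question is how far i and u/c can fall before that flips.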

This model is weak in several ways. For one, the chances of swinging an election with strategic votes are not linear in the number of votes involved; the relationship is probably more like a logistic CDF, and l could easily be large enough that the derivative isn't approximately constant. For another, adopting a policy of voting probably has side effects: it probably increases t, possibly decreases f, and may increase one's ability to sway the votes of other voters who do not count toward l. All of these structural weaknesses would tend to make the model underestimate the rationality of voting. (Of course, numerical issues could bias it in either direction; I'm sure some people will find values of i > 1e-4 absurdly high.)
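The logistic-CDF point can be made concrete with a quick sketch. The `scale` parameter and the margins are my own toy numbers, chosen only to show that the marginal vote matters far more near a tie than in a landslide:

```python
import math

# Model the probability your side wins as a logistic function of the expected
# vote margin, and look at the marginal effect of adding l correlated votes.
# All numbers here are illustrative assumptions.

def win_prob(votes_ahead, scale=10_000.0):
    """Logistic CDF: probability of winning given an expected vote margin."""
    return 1.0 / (1.0 + math.exp(-votes_ahead / scale))

def marginal_effect(votes_ahead, l=1, scale=10_000.0):
    """Change in win probability from adding l correlated votes."""
    return win_prob(votes_ahead + l, scale) - win_prob(votes_ahead, scale)

print(marginal_effect(0))        # close race: marginal vote matters most
print(marginal_effect(50_000))   # safe margin: far smaller marginal effect
```

The derivative of the logistic CDF peaks at a tie and decays quickly, which is why treating the marginal chance as constant understates the value of votes in close races.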

Yet even this simple model can estimate that voting has positive expected value. And it applies whether the existential threat at issue is a genocidal regime such as has occurred in the past, or a novel threat such as powerful, misaligned AI.

Is this a novel argument? Somewhat, but not entirely. The extreme utility differential for existential risk is probably to some degree altruistic. That is, it's reasonable to exert substantially more effort to avert a low-probability possibility that would destroy everything you care about than you would if it would only kill you personally; and this implies that you care about things beyond your own life. Yet this is not the everyday altruism of transient welfare improvements, and so it is harder to undermine with arguments from revealed preference.

31 comments

Comments sorted by top scores.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2018-06-07T19:05:08.188Z · LW(p) · GW(p)

Voting in elections is a wonderful example of logical decision theory in the wild. The chance that you are genuinely logically correlated to a random trade partner is probably small, in cases where you don't have mutual knowledge of LDT; leaving altruism and reputation as sustaining reasons for cooperation. With millions of voters, the chance that you are correlated to thousands of them is much better.

Or perhaps you'd prefer to believe the dictate of Causal Decision Theory that if an election is won by 3 votes, nobody's vote influenced it, and if an election is won by 1 vote, all of the millions of voters on the winning side are solely responsible. But that was a silly decision theory anyway. Right?

Replies from: Wei_Dai, steven0461
comment by Wei Dai (Wei_Dai) · 2018-06-11T19:30:35.174Z · LW(p) · GW(p)

Do you know if anyone has done/published a calculation on whether, given reasonable beliefs about (i.e., a large amount of uncertainty over) opportunity costs and logical correlations, voting is actually a good thing to do from an x-risk perspective?

Replies from: steven0461, steven0461
comment by steven0461 · 2018-06-12T18:31:40.202Z · LW(p) · GW(p)

If your decision is determined by an x-risk perspective, it seems to me you only correlate with others whose decision is determined by an x-risk perspective, and logical correlations become irrelevant because their votes decrease net x-risk if and only if yours does (on expectation, after conditioning on the right information). This doesn't seem to be the common wisdom, so maybe I'm missing something. At least a case for taking logical correlations into account here would have to be more subtle than the more straightforward case for acausal cooperation between egoists.

Replies from: Wei_Dai, ESRogs
comment by Wei Dai (Wei_Dai) · 2018-06-12T18:56:35.094Z · LW(p) · GW(p)

I think you're right, for x-risk logical correlation seems irrelevant. So I guess we instead want to know whether voting is good for reducing x-risk assuming that the opportunity cost comes entirely from other x-risk reducing activities, and if not, can a case for voting be made based on both x-risk and other (e.g., selfish) considerations where logical correlation is relevant.

ETA: Ironically, if everyone bought into the CDT argument for not voting based on self-interest, far fewer people would vote and it would be a lot easier for people like us to flip elections based on x-risk concerns.

Replies from: jameson-quinn
comment by Jameson Quinn (jameson-quinn) · 2018-06-18T19:01:18.508Z · LW(p) · GW(p)

The case would rely on curvature in the sigmoid that describes probability of winning the election as a function of participation. And you're right, that makes it decidedly a second- or third-order effect; to first order, correlation is irrelevant.

comment by ESRogs · 2018-07-11T01:21:57.234Z · LW(p) · GW(p)
and logical correlations become irrelevant because their votes decrease net x-risk if and only if yours does

I don't understand this part. What do you mean by "their votes decrease net x-risk if and only if yours does", and why does that mean logical correlations don't matter?

And how is this situation different from the general case of voting when some other voters are like-minded?

comment by steven0461 · 2018-06-12T19:01:27.578Z · LW(p) · GW(p)

Naive and extremely rough calculation that doesn't take logical correlations into account: If you're in the US and your uncertainty about vote counts is in the tens of millions and the expected vote difference between candidates is also in the tens of millions, then the expected number of elections swayed by the marginal vote might be 1 in 100 million (because almost-equal numbers of votes have lower probability density). If 0.1% of the quality of our future lightcone is at stake, voting wins an expected 10 picolightcones. If voting takes an hour, then it's worth it iff you're otherwise winning less than 10 picolightcones per hour. If a lifetime is 100,000 hours, that means less than a microlightcone per lifetime. The popular vote doesn't determine the outcome, of course, so the relevant number is much smaller in a non-swing state and larger in a swing state or if you're trading votes with someone in a swing state.
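The arithmetic above can be restated as a short sketch, using the same numbers as the comment and no new claims:

```python
# Back-of-envelope restatement of the comment's numbers.
p_sway = 1 / 100_000_000      # chance the marginal vote sways the election
stake = 0.001                 # fraction of lightcone quality at stake

ev_vote = p_sway * stake      # expected lightcones from one vote
print(ev_vote)                # 1e-11, i.e. 10 picolightcones

hours_per_lifetime = 100_000
lifetime_threshold = ev_vote * hours_per_lifetime
print(lifetime_threshold)     # ~1e-06: one microlightcone per lifetime
```

So the hour spent voting beats the alternative iff your other activities produce less than about a microlightcone of expected value over a 100,000-hour lifetime, before any swing-state adjustment.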

Replies from: Wei_Dai, jameson-quinn
comment by Wei Dai (Wei_Dai) · 2018-06-13T00:12:04.621Z · LW(p) · GW(p)

If voting takes an hour, then it’s worth it iff you’re otherwise winning less than 10 picolightcones per lifetime.

Do you mean "per hour" instead?

If a lifetime is 100,000 hours, that means less than a microlightcone per lifetime.

Have you thought about how to estimate microlightcone per lifetime from our other x-risk activities?

Intuitively it seems like everyone except maybe the most productive x-risk activists probably have less than 10 picolightcones per hour of marginal productivity. Does that seem right to you?

Replies from: steven0461
comment by steven0461 · 2018-06-13T16:36:46.000Z · LW(p) · GW(p)

Thanks, I did mean per hour and I'll edit it. I think my impression of people's lightcones per hour is higher than yours. As a stupid model, suppose lightcone quality has a term of 1% * ln(x) or 10% * ln(x) where x is the size/power of the x-risk movement. (Various hypotheses under which the x-risk movement has surprisingly low long-term impact, e.g. humanity is surrounded by aliens or there's some sort of moral convergence, also imply elections have no long-term impact, so maybe we should be estimating something like the quality of humanity's attempted inputs into optimizing the lightcone.) Then you only need to increase x by 0.01% or 0.001% to win a microlightcone per lifetime. I think there are hundreds or thousands of people who can achieve this level of impact. (Or rather, I think hundreds or thousands of lifetimes' worth of work with this level of impact will be done, and the number of people who could add some of these hours if they chose to is greater than that.) Of course, at this point it matters to estimate the parameters more accurately than to the nearest order of magnitude or two. (For example, Trump vs. Clinton was probably more closely contested than my numbers above, even in terms of expectations before the fact.) Also, of course, putting this much analysis into deciding whether to vote is more costly than voting, so the point is mostly to help us understand similar but different questions.
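A quick check of the log-model arithmetic, restating the comment's own numbers rather than asserting anything new:

```python
import math

# If lightcone quality includes a term a*ln(x), where x is the size/power of
# the x-risk movement, then growing x by a factor (1 + d) adds a*ln(1 + d),
# which is approximately a*d for small d.

def gain(a, d):
    """Lightcones gained from growing x by fraction d under the a*ln(x) model."""
    return a * math.log(1 + d)

print(gain(0.01, 1e-4))   # ~1e-6: a microlightcone from a 0.01% boost at a = 1%
print(gain(0.10, 1e-5))   # ~1e-6: same from a 0.001% boost at a = 10%
```

Both parameterizations reproduce the comment's claim that boosts of 0.01% or 0.001% suffice for a microlightcone per lifetime.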

comment by Jameson Quinn (jameson-quinn) · 2018-06-18T19:04:58.125Z · LW(p) · GW(p)

Density for almost-equal numbers of votes is not lower in most high-stakes elections. I'd say 1 in 5 million or so. That's just a bit more than one order of magnitude and doesn't substantially change the overall conclusions.

comment by steven0461 · 2018-06-12T19:20:38.500Z · LW(p) · GW(p)
With millions of voters, the chance that you are correlated to thousands of them is much better.

It seems to me there are also millions of potential acausal trade partners in non-voting contexts, e.g. in the context of whether to spend most of your effort egoistically or altruistically and toward which cause, whether to obey the law, etc. The only special feature of voting that I can see is it gives you a share in years' worth of policy at the cost of only a much smaller amount of your time, making it potentially unusually efficient for altruists.

comment by Jacob Falkovich (Jacobian) · 2018-06-08T19:45:50.721Z · LW(p) · GW(p)

I've given my own reasons against voting before. I specifically addressed the "altruistic" justification for voting, since nobody thinks they can make a case for selfish voting anymore. My two main arguments:

1. You shouldn't expect to know who the better candidate will be with any confidence, since the policies actually implemented are unpredictable, let alone their effects.

2. Voting contributes to your own mind-kill and to disliking your friends. You will think less clearly about a politician and their supporters once you cast a vote for/against them because of consistency bias, myside bias, confirmation bias etc.

With that said, I actually enjoyed this essay. The X-risk-EA argument presented here actually presents a case that's both novel and would make my two main objections irrelevant. However, there's some evidence that it's not very applicable to real life.

In summer 2016 I heard from several prominent EAs that they think EA orgs should recommend Hillary's campaign as a key cause, and that EAs should donate to it. I have also seen zero attempts at rigorous analysis showing that Trump is a bigger X-risk than Hillary. If we convince ourselves that elections are an EA cause, the false-positive rate for "important" elections will quickly approach 100%, and the chance that EAs decide the Republican candidate is actually safer will approach 0%. The only effect would be losing a lot of resources, friends, and mental energy to this nonsensical theater.

Replies from: jameson-quinn
comment by Jameson Quinn (jameson-quinn) · 2018-06-11T13:13:36.672Z · LW(p) · GW(p)

On (1): if you can't tell who the better candidate is, voting is working. You shouldn't use that example to reason about what would happen if you didn't vote. It's not a one-off game.

On (2): this is true, but it's also a fully general argument. Doing anything contributes to mind-kill, as you become attached to the idea that it was the right thing to do.

I'm tempted to erase the following argument because it's a bit of a cheap shot "gotcha", but it does also serve the legit purpose of an example, so here goes: For instance, not voting contributes to assuming that anybody who thinks Clinton is an EA cause is mind-killed. (Note: I think that high-profile political campaigns are awash in cash and don't use it effectively, so I would never recommend high-profile political donations as EA. And you may be right that there's no argument of sufficient rigor to show that Clinton was better than Trump in x-risk terms. But I strongly suspect that you feel more immediate contempt for somebody who says "donating to Clinton is EA" than for somebody who says "donating to the EFF is EA", in a way that is slightly mind-killing.)

Replies from: Jacobian
comment by Jacob Falkovich (Jacobian) · 2018-06-11T15:36:59.836Z · LW(p) · GW(p)

On 1, both candidates suck, and not because someone on the margin votes or doesn't, but because of a thousand upstream causes: the personality type required to succeed in politics, the voting system that ensures a two-party lock-in, the inability of citizens to comprehend the complexity of modern nation governments, etc.

On 2, let me make my general argument very particular:

1. Polls show that polarization on politics ("Would you let your child marry a Democrat?") is stronger than polarization on any other major alignment.

2. Unlike other things, political party affiliation is mostly a symbolic thing with few physical implications (compared to a job, a sexual orientation, or even being a rationalist). This makes one's interaction with political parties consist mostly of signaling virtue and loyalty by vilifying the other party.

3. Unlike other things, there's an entire industry (news media) that fans the flames of political party mind-kill 24/7.

Some people are willing to die on the Batman-v-Superman-was-better-than-Avengers hill, but not a lot. On the other hand, the Romney-was-better-than-Obama hill is covered in dead bodies ten layers deep. Myside bias and tribalism are bad everywhere, but party politics is the area where they're observably already causing immense harm.

I'm a huge Sixers fan, but I don't hate Celtics fans. We bond over our mutual love of basketball. That's not how party politics works.

comment by Wei Dai (Wei_Dai) · 2018-06-07T22:43:57.733Z · LW(p) · GW(p)

I haven't tried to fully understand the argument, but it seems like you're not considering that the cost of voting c has to be the opportunity cost, which can be astronomically high if the alternative to voting is to spend the time on reducing x-risk some other way. (If you have already considered this, please clarify and I'll look more into the rest of the post.)

Replies from: DanielFilan, jameson-quinn
comment by DanielFilan · 2018-06-07T23:36:50.360Z · LW(p) · GW(p)

[edit: oops, I just repeated a point made in the post that Wei_Dai presumably already understood]

comment by Jameson Quinn (jameson-quinn) · 2018-06-08T03:26:38.267Z · LW(p) · GW(p)

I don't think that normal humans can live on the bleeding edge of maximum effectiveness every waking moment. I don't presume to give advice to those who aren't normal humans.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2018-06-08T05:31:01.157Z · LW(p) · GW(p)

I think normal humans tend to have something like a budget for "doing something about x-risk", so if you're asking them to vote in the name of preventing x-risk, that time is coming out of their x-risk budget; therefore you still have to consider the cost of voting as time that would otherwise be spent on reducing x-risk some other way.

Replies from: jameson-quinn
comment by Jameson Quinn (jameson-quinn) · 2018-06-09T19:56:30.822Z · LW(p) · GW(p)

I am suggesting establishing a policy of voting ("being a voter") as an x-risk strategy. Once you have that policy, voting is just an everyday action, only indirectly related to x-risk. This distinction makes sense to me, but now that you mention it, I'm sure there are those for whom it's nonsense.

comment by steven0461 · 2018-06-12T19:29:54.429Z · LW(p) · GW(p)

The real cost of voting is mostly the cost of following politics. Maybe you could vote without following politics and still make decent voting decisions, but that's not a decision people often make in practice.

Replies from: TAG
comment by TAG · 2018-06-13T10:54:45.354Z · LW(p) · GW(p)

If you have a firm alignment with some interest group, class, or bloc, following politics is low-cost: you just vote with them or their leaders. Parties exist for a reason.

Replies from: steven0461
comment by steven0461 · 2018-06-13T16:48:57.208Z · LW(p) · GW(p)

By "following" I just meant "paying attention to", which is automatically not low cost. I think it's plausible that you could make decent decisions without paying any attention, but in practice people who think about rationalist arguments for/against voting do pay attention, and would pay less attention (perhaps 10-100 hours' worth per election?) if they didn't vote.

comment by Ben Pace (Benito) · 2018-06-07T18:28:41.224Z · LW(p) · GW(p)

For reference, see this LW post [LW · GW] arguing that the expected utility of voting actually does justify doing so.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2018-06-07T23:57:21.714Z · LW(p) · GW(p)

Something brought up originally, but never really dealt with: this argument applies a fortiori to lobbying.

Would the defenders of these kinds of consequentialist arguments for voting also recommend effective altruists become lobbyists? It seems there is some tension there if they don't, or else I am (or they are) confused about the implications of the premises in that argument.

Replies from: jameson-quinn
comment by Jameson Quinn (jameson-quinn) · 2018-06-08T03:30:55.417Z · LW(p) · GW(p)

Lobbying, or campaigning?

I think that there are various distinctions between lobbying, campaigning, and voting. Similar logic may or may not apply across these domains.

comment by TAG · 2018-06-13T11:09:28.713Z · LW(p) · GW(p)

The idea that voting is irrational needs to be set against the observation, common amongst critics of democracy, that blocs of people in a similar position seem to vote for nice things for themselves, whether tax cuts or welfare hikes. Co-ordinated voting is rational if co-ordination is cheap enough.

Replies from: TAG
comment by TAG · 2018-06-13T11:13:32.631Z · LW(p) · GW(p)

Finding partners to co-ordinate with isn't difficult if you belong to some large group such as "welfare claimants", "business owners", "parents", etc. Looking at it in terms of x-risk obscures some very obvious common-sense aspects.

comment by Dagon · 2018-06-08T22:34:14.563Z · LW(p) · GW(p)

You've forgotten to mention the standard reason for not-obviously-beneficial behaviors: signaling. You are a more valuable ally if you wear the "I Voted" button, you have more credibility when you try to influence people's voting behavior if you (claim to) vote, and you get to claim membership in the tribe of voters (which is way more fun than the tribe of rationalists who say out loud that voting is irrational).

And like many low-cost things, it's probably cheaper emotionally, as well as more convincing to some, if it's the truth when you claim to have voted.

comment by JamesFaville (elephantiskon) · 2018-06-08T01:30:04.672Z · LW(p) · GW(p)

I just listened to a great talk by Nick Bostrom [? · GW] I'd managed to miss before now which mentions some considerations in favor and opposed to voting. He does this to illustrate a general trend that in certain domains it's easy to come across knock-down arguments ("crucial considerations") that invalidate or at least strongly counter previous knock-down arguments. Hope I summarized that OK!

When I last went to the polls, I think my main motivation for doing so was functional decision theory.

comment by Donald Hobson (donald-hobson) · 2018-06-07T17:25:39.997Z · LW(p) · GW(p)

I would argue that a prisoner's dilemma situation applies. Assume that most people want party A to win, but don't care enough that the tiny chance of casting a deciding vote is worth it.

If all sensible people decide not to vote, the outcome is ruled by a few nutcases voting for party N. Suppose there are a million sensible people; each gets utility +1000 from A winning and -1 from voting, and the number of nutters voting N is uniformly distributed between 0 and a million. Staying home gains you 1 utility but costs everyone 0.001 expected utility. If everyone does it, they are all worse off.
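A minimal Monte Carlo sketch of this setup. The modeling choices (e.g. treating the fraction of sensible people who vote as deterministic, and averaging the voting cost over the population) are my own:

```python
import random

# One million sensible voters value an A win at +1000 and voting at -1;
# the number of N voters is uniform on [0, 1,000,000]. A wins if sensible
# votes exceed nutter votes.

def expected_utility(frac_voting, n_sensible=1_000_000, trials=10_000, seed=0):
    """Average per-person utility if a fraction frac_voting of sensibles vote."""
    rng = random.Random(seed)
    voters = int(n_sensible * frac_voting)
    total = 0.0
    for _ in range(trials):
        nutters = rng.randint(0, n_sensible)
        win = voters > nutters
        total += (1000 if win else 0) - frac_voting  # avg per-person vote cost
    return total / trials

print(expected_utility(1.0))   # everyone votes: ~999 per person
print(expected_utility(0.0))   # nobody votes: ~0 per person
```

The individually tempting move (stay home, save the -1) collectively destroys nearly all the value, which is the prisoner's-dilemma structure being claimed.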

Replies from: gjm
comment by gjm · 2018-06-07T18:48:55.695Z · LW(p) · GW(p)

I worry about this sort of argument because (at least so far as I can judge from introspection and seeing what others say) that +1000 utility is likely to come from sources that get double-counted if you just add up all the sensible people's +1000 values.

When I say "it would be much better for A to win rather than B" I don't generally mean that I personally will benefit hugely, I mean that I think everyone will benefit a bit, and when I'm choosing how to vote I attempt to do so altruistically and not count my own interests as much more important than everyone else's in the way that (alas) I generally do in the rest of my life. And that benefit to everyone doesn't really get 10x more important if the number of people who, like me, are clever and wise and good enough to have noticed A's advantages is 10x bigger. You can approximate my opinion by saying that I attach +1000 utility to A's victory, but that doesn't really mean that we want A to win because it will make me happy, and if there are a million people like me then we shouldn't say that when A wins it means +10^9 utilons rather than +10^3, because if there were only a thousand people like me then the net benefit of A's winning would still be about the same, not 1000x smaller.
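The double-counting worry can be made concrete with toy numbers of my own choosing:

```python
# Each of a million like-minded voters reports "+1000 utility if A wins",
# but each +1000 is already an altruistic, society-wide estimate of roughly
# the same benefit, so summing the reports counts that benefit a million times.

n_voters = 1_000_000
reported_value_each = 1000   # already aggregates a small benefit to everyone

naive_total = n_voters * reported_value_each   # double-counts the shared benefit
actual_total = reported_value_each             # the benefit exists once, not per voter

print(naive_total)    # 10^9 "utilons", overstated by a factor of 10^6
print(actual_total)   # 10^3
```

On this view the right aggregate is on the order of the single +1000 estimate, not a million copies of it, which is exactly why adding up sensible voters' utilities invites miscalculation.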

I'm not objecting here to your characterization of the situation as prisoners'-dilemma-like; I agree. (Maybe more like the tragedy of the commons, but it's the same kind of thing.) But casting it in terms of the utility gained by sensible voters seems like it might encourage miscalculation.