Rationality test: Vote for Trump
post by pwno · 2016-06-16T08:33:54.675Z · LW · GW · Legacy · 61 comments
If there's such a small chance of your vote making a difference in the election, you should be comfortable voting for Trump.
61 comments
Comments sorted by top scores.
comment by [deleted] · 2016-06-16T10:33:03.739Z · LW(p) · GW(p)
While I am generally for lowering the bar to posting, I would consider this post lacking both content and context even if it were a comment.
Downvoted.
comment by SquirrelInHell · 2016-06-17T01:15:03.385Z · LW(p) · GW(p)
If there's such a small chance of your vote making a difference in the election
Replies from: pwno
↑ comment by pwno · 2016-06-23T20:41:29.706Z · LW(p) · GW(p)
How's that related?
Replies from: SquirrelInHell↑ comment by SquirrelInHell · 2016-06-24T01:21:45.287Z · LW(p) · GW(p)
In short, your decision not to vote, reached after rational deliberation, means it is approximately correct for other voters who reason like you to reach the same conclusion. This works like a classic cooperation game: TDT prescribes committing to a small personal cost for a big community gain, much as it prescribes one-boxing in Newcomb's problem.
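A toy version of this correlated-decision argument, with every number an illustrative assumption (how many voters reason like you, the chance such a bloc is decisive, and the value of the better outcome are all made up for the sketch):

```python
# Toy model of the TDT-style argument: your decision to vote is (approximately)
# the decision of everyone whose reasoning is correlated with yours, so the
# relevant comparison is the whole bloc's cost versus the bloc's expected impact.
# Every number below is an illustrative assumption.
personal_cost          = 10.0      # assumed dollar-equivalent cost of voting (time, registration)
correlated_voters      = 10_000    # assumed number of people whose deliberation tracks yours
p_bloc_decisive        = 1e-3      # assumed chance a bloc of this size swings the outcome
value_of_better_winner = 1e9       # assumed dollar-equivalent value of the better outcome

bloc_cost     = personal_cost * correlated_voters          # $100,000
expected_gain = p_bloc_decisive * value_of_better_winner   # $1,000,000

# Treated as a single correlated decision, committing to vote pays under these
# numbers, even though any one voter's isolated chance of mattering is negligible.
print(expected_gain > bloc_cost)          # True
print(expected_gain / correlated_voters)  # $100 expected gain per member vs a $10 cost
```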
Replies from: entirelyuseless↑ comment by entirelyuseless · 2016-06-24T04:34:26.082Z · LW(p) · GW(p)
I don't run the same algorithm as others, and even if I did, it would be enough to choose one of us at random to be the one responsible for voting. Taking votes from everyone would be highly inefficient.
Replies from: SquirrelInHell↑ comment by SquirrelInHell · 2016-06-24T05:26:33.991Z · LW(p) · GW(p)
No. First, in the case of humans this works by approximation, not by exact copying. Second, you don't know which group of people thinks in a similar way to you (clearly not all voters).
Replies from: entirelyuseless↑ comment by entirelyuseless · 2016-06-24T13:10:53.294Z · LW(p) · GW(p)
You need an argument for your claim about approximation, especially considering the fact that I am a remote outlier. And I agree that not all voters think like me. That is exactly my point.
Replies from: SquirrelInHell↑ comment by SquirrelInHell · 2016-06-27T01:26:42.909Z · LW(p) · GW(p)
Just take some time to consider how TDT applies to decision in real life. You will get it, I'm sure.
Replies from: entirelyuseless↑ comment by entirelyuseless · 2016-06-27T13:57:31.881Z · LW(p) · GW(p)
The way it applies in real life is that all the people like me will choose not to vote, and to work together for a better, more efficient system, which will give us much more utility than if we had all chosen to vote.
Replies from: gjm↑ comment by gjm · 2016-06-27T15:41:22.554Z · LW(p) · GW(p)
It seems to me that not voting and working for a better system are basically independent activities.
Replies from: entirelyuseless↑ comment by entirelyuseless · 2016-06-27T15:44:13.337Z · LW(p) · GW(p)
Not entirely. It would take me at least 30 minutes to vote, and probably more, given the need to register. Together with the other people like me (and I am admitting there aren't very many, since I only include those who have a similar algorithm, not all those that happen to get the same outcome), that adds up to a good deal of time that could be spent on working for a better system, while there would still be no change in the outcome from voting, even if the group of us voted as a unit.
Replies from: gjmcomment by ChristianKl · 2016-06-16T14:53:40.819Z · LW(p) · GW(p)
Rationality is not about being a straw Vulcan.
comment by Liron · 2016-06-17T02:29:42.292Z · LW(p) · GW(p)
I guess I'll be the first one to offer a steel man interpretation of pwno's post:
Assuming you're anti-Trump... If voting for Trump could be done with no logistical inconveniences, and somehow legally paid you a reward of, say, 10 cents, and you didn't believe this offer was being made to anyone else, then a good rationality test is whether you would take that offer.
Replies from: Lumifer, ChristianKl, Diadem↑ comment by Lumifer · 2016-06-17T14:15:56.801Z · LW(p) · GW(p)
then a good rationality test is whether you would take that offer.
In which way is it a good rationality test if you have no idea about my utility function?
Replies from: gjm, Houshalter↑ comment by gjm · 2016-06-17T14:42:24.397Z · LW(p) · GW(p)
I think it's meant as a rationality test for those who say voting is pointless. If you consider voting pointless but value being 10c richer at least a little bit, then on Liron's premises you should maybe be willing to vote whichever way gets you the 10c.
(I am unconvinced, for reasons I've given in another comment on the OP. Also because people may reasonably value voting for "internal" reasons: it makes them feel like fuller participants in their society, or something.)
Replies from: Good_Burning_Plastic, entirelyuseless, Lumifer↑ comment by Good_Burning_Plastic · 2016-06-17T16:53:31.428Z · LW(p) · GW(p)
It is possible to consider the value of voting to be more than 10 cents but less than the cost of the logistical inconveniences, which exist in the real world but which Liron assumed away.
↑ comment by entirelyuseless · 2016-06-17T14:55:36.276Z · LW(p) · GW(p)
I consider voting pointless according to my "utility function," if I am measuring the benefit to society that results from the fact that I voted for a particular candidate, basically because I have to take into account 1) the probability that I am mistaken about the better candidate, and 2) the bounded character of my utility function, which means that a small probability of affecting the outcome really does mean small total utility.
Given those facts, if I choose from that utility function alone, from a baseline position I would not vote at all, and I would be willing to vote for any candidate, including Trump, for a relatively small sum of money.
However, I am not a utilitarian in the first place, and apart from that, even if I were, I would have to take into account the effects on my character, as in your argument.
↑ comment by Lumifer · 2016-06-17T14:50:29.419Z · LW(p) · GW(p)
If you consider voting pointless but value being 10c richer at least a little bit, then on Liron's premises you should maybe be willing to vote whichever way gets you the 10c.
Only in the simplified abstract model of the situation. In reality things like the ability to brag that you did (or did not) vote for that bastard (or that bitch) or, say, even minor shifts in self-perception are worth more than 10c.
The point is that if you don't know my value system you cannot say what would or would not be rational for me to do.
↑ comment by Houshalter · 2016-06-17T23:05:27.227Z · LW(p) · GW(p)
Unless you have a really weird utility function that values voting in and of itself, what matters is the outcome of your vote. If the predicted effect of your vote is really small (say, a 0.00000001% chance of being the tie-breaking vote, with nothing changing otherwise), then the expected utility of your vote should be near zero.
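A minimal sketch of the expected-value arithmetic being gestured at here; the tie probability, the dollar value of the better outcome, and the cost of voting are all illustrative assumptions, not figures from the thread:

```python
# Expected value of casting a vote, under illustrative assumptions.
p_decisive         = 1e-10   # assumed chance your vote breaks the tie (0.00000001%)
value_if_decisive  = 1e9     # assumed dollar-equivalent value of the better candidate winning
cost_of_voting     = 10.0    # assumed dollar-equivalent of the time spent voting

expected_gain = p_decisive * value_if_decisive
print(f"expected gain from voting: ${expected_gain:.2f}")                   # $0.10
print(f"net expected value: ${expected_gain - cost_of_voting:.2f}")         # $-9.90
```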
Replies from: Lumifer↑ comment by Lumifer · 2016-06-18T01:32:49.742Z · LW(p) · GW(p)
Unless you have a really weird utility function that values voting in and of itself, what matters is the outcome of your vote.
Not at all. My utility function might value my self-perception as a person who votes for X. It might value the ability to rant about how I did or did not vote for X and therefore all the bad policies are not my responsibility -- if only they had listened to me! It might value the warm glow of having done my civic duty of helping the forces of light triumph over the spawn of evil. Etc, etc. None of this is particularly weird.
Replies from: Houshalter↑ comment by Houshalter · 2016-06-18T03:14:33.121Z · LW(p) · GW(p)
I would argue all those values are irrational. Ticking a box that has no effect on the world, and that no one will ever know about, should not matter. And I don't think many people would claim that they value that, if they accepted that premise. I think people value voting because they don't accept that premise, and think there is some value in their vote.
Replies from: Lumifer, Yosarian2↑ comment by Lumifer · 2016-06-18T03:48:16.721Z · LW(p) · GW(p)
I would argue all those values are irrational.
Please do.
The expression "irrational values" sounds like a category mistake to me.
Replies from: Liron, Jayson_Virissimo, PhilGoetz, Houshalter↑ comment by Liron · 2016-06-22T12:21:02.589Z · LW(p) · GW(p)
You're right that "those values are irrational" is a category mistake, if we're being precise. But Houshalter has an important point...
Any time you violate the axioms of a coherent utility-maximization agent, e.g. by falling for the Allais paradox, you can always use meta factors to argue why your revealed preferences actually were coherent.
Like, "Yes the money pump just took some of my money, but you haven't considered that the pump made a pleasing whirring sound which I enjoyed, which definitely outweighed the value of the money it pumped from me."
While that may be a coherent response, we know that humans are born being somewhat farther-than-optimal from the ideal utility maximizer, and practicing the art of rationality adds value to their lives by getting them somewhat closer to the ideal than where they started.
A "rationality test" is a test that provides Bayesian evidence to distinguish people earlier vs. later on this path toward a more reflectively coherent utility function.
Having so grounded all the terms, I mostly agree with pwno and Houshalter.
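For readers who don't have the Allais paradox cached: the common preference pattern (the sure $1M over gamble 1B, but gamble 2B over 2A) cannot be reproduced by any expected-utility maximizer, because both comparisons reduce to the same inequality. A brute-force sketch, assuming the payoffs from the standard statement of the paradox:

```python
# The Allais preference pattern (1A over 1B, but 2B over 2A) is incompatible with
# maximizing any expected utility function, because the two choices differ only
# by a common 89% "slice". A brute-force check with random monotone utilities:
import random

def expected(u, lottery):
    """Expected utility of a lottery given as [(probability, outcome), ...]."""
    return sum(p * u[x] for p, x in lottery)

gamble_1a = [(1.00, "1M")]
gamble_1b = [(0.89, "1M"), (0.10, "5M"), (0.01, "0")]
gamble_2a = [(0.11, "1M"), (0.89, "0")]
gamble_2b = [(0.10, "5M"), (0.90, "0")]

found_allais_pattern = False
for _ in range(100_000):
    # Any monotone utility assignment with u("0") <= u("1M") <= u("5M").
    a, b = sorted(random.random() for _ in range(2))
    u = {"0": 0.0, "1M": a, "5M": b}
    if expected(u, gamble_1a) > expected(u, gamble_1b) and \
       expected(u, gamble_2b) > expected(u, gamble_2a):
        found_allais_pattern = True
        break

print(found_allais_pattern)  # False: no sampled utility assignment reproduces the pattern
```

Algebraically, both "1A over 1B" and "2A over 2B" come down to 0.11·u($1M) > 0.10·u($5M) + 0.01·u($0), so preferring 1A and 2B simultaneously contradicts any utility assignment.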
Replies from: Lumifer, Val↑ comment by Lumifer · 2016-06-22T14:36:50.317Z · LW(p) · GW(p)
you can always use meta factors to argue why your revealed preferences actually were coherent.
Three observations. First, those aren't meta factors, those are just normal positive terms in the utility function that one formulation ignores and another one includes. Second, "you can always use" does not necessarily imply that the argument is wrong. Third, we are not arguing about coherency -- why would the claim that, say, I value the perception of myself as someone who votes for X more than 10c be incoherent?
we know that humans are born being somewhat farther-than-optimal from the ideal utility maximizer, and practicing the art of rationality adds value to their lives by getting them somewhat closer to the ideal than where they started.
I disagree, both with the claim that getting closer to the ideal of a perfect utility maximizer necessarily adds value to people's lives, and with the interpretation of the art of rationality as the art of getting people to be more like that utility maximizer.
Besides, there is still the original point: even if you posit some entity as a perfect utility maximizer, what would its utility function include? Can you use the utility function to figure out which terms should go into the utility function? Colour me doubtful. In crude terms, how do you know what to maximize?
Replies from: Liron↑ comment by Liron · 2016-06-22T15:27:39.686Z · LW(p) · GW(p)
Well I guess I'll focus on what seems to be our most fundamental disagreement, my claim that getting value from studying rationality usually involves getting yourself to be closer to an ideal utility maximizer (not necessarily all the way there).
Reading the Allais Paradox post can make a reader notice their contradictory preferences, and reflect on it, and subsequently be a little less contradictory, to their benefit. That seems like a good representative example of what studying rationality looks like and how it adds value.
Replies from: Lumifer↑ comment by Lumifer · 2016-06-22T17:14:30.254Z · LW(p) · GW(p)
to their benefit
You assert this as if it were an axiom. It doesn't look like one to me. Show me the benefit.
And I still don't understand why I would want to become an ideal utility maximizer.
Replies from: Liron, JEB_4_PREZ_2016↑ comment by JEB_4_PREZ_2016 · 2016-06-25T15:50:56.146Z · LW(p) · GW(p)
And I still don't understand why I would want to become an ideal utility maximizer.
If you could flip a switch right now that makes you an ideal utility maximizer, you wouldn't do it?
Replies from: Lumifer, entirelyuseless↑ comment by entirelyuseless · 2016-06-25T21:04:16.410Z · LW(p) · GW(p)
I would never flip a switch like that.
↑ comment by Val · 2016-06-23T19:07:19.924Z · LW(p) · GW(p)
And why should we be utility maximization agents?
Assume the following situation. You are very rich. You meet a poor old lady in a dark alley carrying a purse with some money in it, which is a lot from her perspective. Maybe it's all her savings, maybe she just got lucky once and received it as a gift or as alms. If you mug her, nobody will ever find out and you get to keep that money. Would you do it? As a utility maximization agent, based on what you just wrote, you should.
Would you?
Replies from: gjm, Fyrius, Liron↑ comment by gjm · 2016-06-23T19:39:10.925Z · LW(p) · GW(p)
As a utility maximization agent, based on what you just wrote, you should.
Only if your utility function gives negligible weight to her welfare. Having a utility function is not at all the same thing as being wholly selfish.
(Also, your scenario is unrealistic; you couldn't really be sure of not getting caught. If you're very rich, the probability of getting caught doesn't have to be very large to make this an expected loss even from a purely selfish point of view.)
↑ comment by Fyrius · 2017-03-01T10:02:18.494Z · LW(p) · GW(p)
Surely you 'should' only do something like this if acquiring this amount of money has higher utility to you than not ruining this lady's day. Which, for most people, it doesn't.
Since you're saying 'you are very rich' and 'some money which is a lot from her perspective', you seem to be deliberately presenting gaining this money as very low utility, which you seem to assume should logically still outweigh what you seem to consider the zero utility of leaving the lady alone. But since I do actually give a duck about old ladies getting home safely (and, for that matter, about not feeling horribly guilty), mugging one has a pretty huge negative utility.
↑ comment by Jayson_Virissimo · 2016-06-30T01:46:58.217Z · LW(p) · GW(p)
The expression "irrational values" sounds like a category mistake to me.
I'd be comfortable describing someone with a preference set that violates the axiom of quasi-transitivity as having "irrational values," but certainly not for valuing a "self-perception as a person who" engages in some kind of activity, such as voting.
Replies from: Lumifer↑ comment by Lumifer · 2016-06-30T04:09:43.330Z · LW(p) · GW(p)
I'd be comfortable describing someone with a preference set that violates the axiom of quasi-transitivity as having "irrational values,"
At a particular moment in time, right? There is nothing which says preferences have to be stable as time passes.
↑ comment by PhilGoetz · 2016-06-30T00:23:26.104Z · LW(p) · GW(p)
What you're really doing by saying "My utility function might value my self-perception as a person who votes for X" is phrasing virtue ethics as utilitarianism. That's a move which confuses rather than clarifies. If you value your self-perception as a person who votes for X, you aren't a consequentialist; you believe in virtue ethics.
Can you say it? Yes; you can in theory be a virtue utilitarian. But no real-life virtue ethicists are utilitarians. Hence, confusion.
Replies from: Lumifer↑ comment by Houshalter · 2016-06-25T19:10:47.108Z · LW(p) · GW(p)
First of all, humans are 99.99% similar to each other, so I think we can reasonably have arguments about values. It's possible for people to be mistaken about what their values are. And people can come to agree on different values after hearing arguments and thought experiments. That's what debates about morality and ethics are, after all.
I don't think there is a human being that actually values ticking a box that says "Democrat", knowing that it will have no consequence whatsoever. I think there are many beliefs and feelings that lead people to vote. Like "if everyone like me did this, it would make a difference", or perhaps "it's a duty as a member of my tribe to do this", etc.
Some people cast spoiled ballots for similar reasons. Though they aren't changing the election, they believe the statistic itself matters. Like how voting for a third party shows that the third party has some support in the population, and encourages them to keep trying.
But all these arguments for voting are about some tangible effect on the world. And they could empirically be shown incorrect. E.g. maybe no one does read those statistics, or you live in a heavily gerrymandered district.
Now imagine you find someone that really believes their vote matters. And somehow you explained all this to them and came to agreement that it really doesn't. And then they went and voted anyway.
You could reasonably ask if they are being irrational. If they haven't really updated their beliefs. If their stated reason for doing a thing is shown wrong, and they don't change their behavior.
You could ask them why they voted, and I doubt they would say "because it gives me good feelies" or whatever. Because people never say they do things because of that. And so somewhere they must hold a belief that is false and inconsistent.
If they did admit that, at least to themselves, then fine. They are at least consistent. But then, I think, they would probably stop voting. When people honestly admit that the only reason they do a thing is that it feels good but has no effect on the world, it tends to stop feeling good. Realizing something is pointless tends to make it feel pointless.
Our feelings are not independent of our beliefs, after all. We feel good feelings because we believe we are doing a good thing.
Replies from: entirelyuseless↑ comment by entirelyuseless · 2016-06-25T21:29:46.167Z · LW(p) · GW(p)
I would not assume that people necessarily have any reason, at least the kind that can be formulated as a statement about the world, like "this gives me good feelings," before you ask them why they did it. Of course, once you ask, they will come up with something, but it may be something that in fact had nothing to do with the fact that they did it.
↑ comment by Yosarian2 · 2016-07-03T23:43:28.159Z · LW(p) · GW(p)
I would say that values that may not be utility-maximizing at the individual level, but which are at the cultural or national or even species level so long as most people hold them, are totally rational. It's like choosing to cooperate in the prisoner's dilemma but with billions of players; so long as most of us choose to cooperate, we are all better off. So in that situation it's rational to cooperate, to encourage others to cooperate, and to signal that you cooperate and reward others who do.
"The civic virtue of voting and taking your vote seriously" is a great example of a virtue like that. It doesn't directly matter to you if you do, but we all are much better off if most people do.
↑ comment by ChristianKl · 2016-06-17T13:19:32.838Z · LW(p) · GW(p)
What kind of hourly wage do you have that you think you should vote for 10 cents?
Replies from: gjm, Val↑ comment by Val · 2016-06-23T18:59:44.726Z · LW(p) · GW(p)
There are some people who think punishment and reward work linearly.
If I remember correctly (please correct me if I'm wrong), even Eliezer himself believes that if we assign a pain value in the single digits to very slightly pinching someone so they barely feel anything, and a pain value in the millions to torturing someone with the worst possible torture, then you should choose torturing a thousand people over slightly pinching half of the planet's inhabitants, if your goal is to minimize suffering. With such logic, you could assign rewards and punishments to anything and calculate some pretty strange things from that.
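Spelling out the linear-aggregation arithmetic being described here (the specific pain values and population figure are assumptions chosen to match the comment's wording):

```python
# Linear aggregation of suffering, with illustrative numbers matching the comment.
pain_per_pinch   = 5              # "single digits" per very slight pinch
pain_per_torture = 1_000_000      # "in the millions" per person tortured

pinch_victims   = 3_500_000_000   # roughly half the planet's inhabitants
torture_victims = 1_000

total_pinch_suffering   = pain_per_pinch * pinch_victims        # 1.75e10
total_torture_suffering = pain_per_torture * torture_victims    # 1.0e9

# Under straight linear aggregation, torturing a thousand people "minimizes suffering".
print(total_torture_suffering < total_pinch_suffering)  # True
```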
↑ comment by Diadem · 2016-06-17T08:26:54.630Z · LW(p) · GW(p)
The US federal budget is $3.7 trillion. The president probably can't meaningfully affect the spending of most of that, but his impact is still significant. If I had to ballpark it, I'd say a trillion dollars over 4 years seems likely. Plus his long-term effect on the nation through laws and regulations.
How many Americans vote? About a hundred million? So the average value of a vote is in the area of $10,000. That is a lot of money. Sure, it is much less if you live outside a swing state, but not by a factor of 100,000.
Even if your non-swingstate vote was meaningless there is still a tragedy of the commons. If every California Democrat stayed home because the Democrats will win California anyway... they won't. The rational solution to a tragedy of the commons is not to defect.
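Diadem's back-of-the-envelope estimate, made explicit; the share of spending that actually differs between candidates is an assumed figure, and the replies below dispute whether dividing total spending by the number of voters is the right calculation at all:

```python
# Back-of-the-envelope average value per vote, using the figures from the comment.
spending_at_stake = 1_000_000_000_000   # ~$1T of spending the president can plausibly steer over 4 years
voters            = 100_000_000         # roughly how many Americans vote

average_value_per_vote = spending_at_stake / voters
print(f"${average_value_per_vote:,.0f} per vote")   # $10,000 per vote

# The caveat raised downthread: what matters is the marginal difference between
# the candidates' spending, weighted by your values, not the total they control.
assumed_marginal_difference = 0.05      # assume only 5% of that spending actually differs
print(f"${average_value_per_vote * assumed_marginal_difference:,.0f} per vote")  # $500 per vote
```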
Replies from: DanArmak, gjm, Lumifer, username2↑ comment by DanArmak · 2016-06-17T12:58:14.500Z · LW(p) · GW(p)
The rational solution to a tragedy of the commons is not to defect.
Only if you think your solution also affects or chooses for the other players. It's not the rational unilateral solution if you believe the other players will defect anyway, as they usually do.
↑ comment by gjm · 2016-06-17T10:06:54.791Z · LW(p) · GW(p)
I agree with the general approach here but not with the details.
It may well be true that the president can affect how $1T is spent over four years. But that doesn't mean that the difference between one president and another is $1T over four years.
(I mean, it could. For instance, if that $1T is spent on munitions that get used destructively in a war whose other net consequences are negative. But most of the controversial things federal money gets spent on are of some value either way.)
↑ comment by Lumifer · 2016-06-17T14:22:52.933Z · LW(p) · GW(p)
So the average value of a vote is in the area of $10,000.
That's a meaningless phrase. You want to buy my vote for $10,000? No? Who is going to buy it, then?
Not to mention that when you're calculating "affecting spending" you need to look not at the total number but at the marginal difference between Hillary and Trump, and weight that difference by your values.
The rational solution to a tragedy of the commons is not to defect.
In which way is it a rational solution if you're the only one doing that?
↑ comment by username2 · 2016-06-17T13:35:01.689Z · LW(p) · GW(p)
How many Americans vote? About a hundred million? So the average value of a vote is in the area of $10,000. That is a lot of money. Sure, it is much less if you live outside a swing state, but not by a factor of 100,000.
But this is the total budget spending per voter, not the difference in your income between the hypothetical scenarios where each of the candidates wins.
comment by Lyyce · 2016-06-16T09:54:34.596Z · LW(p) · GW(p)
That's true if you live in a solidly blue or red state, but then why not vote for a third-party candidate more aligned with your convictions? Or not vote at all, saving time?
Replies from: gjm↑ comment by gjm · 2016-06-16T12:29:03.149Z · LW(p) · GW(p)
I think pwno is proposing that we do it precisely because it doesn't align with our convictions. (He might advise Trump supporters to vote for Clinton.)
I'm sure I remember reading, but can't now find, an anecdote from Eliezer back in the OB days: he was with a group of people at the Western Wall in Jerusalem, where there's this tradition of writing prayers on pieces of paper and sticking them in cracks in the wall, so as a test of the sincerity of his unbelief he wrote "I pray for my parents to die" and stuck that in the wall. Same principle.
(Personally I think it's a silly principle. Human brains aren't very good at detaching themselves from their actions, and I would only cast a vote if I were happy for my preferences to get shifted a little bit towards the candidate I was voting for.)
Replies from: pwno↑ comment by pwno · 2016-06-23T08:16:46.782Z · LW(p) · GW(p)
Funny you mention that anecdote, because I actually wrote it: http://lesswrong.com/lw/1l/the_mystery_of_the_haunted_rationalist/w9
Human brains aren't very good at detaching themselves from their actions
Isn't that what rationality is supposed to reduce?
Replies from: gjm, ChristianKl↑ comment by gjm · 2016-06-23T10:22:34.783Z · LW(p) · GW(p)
I actually wrote it
Oh, very good! I wonder why I thought it was Eliezer. I see that he endorsed the idea, anyway. But I think my objection to it still stands (and is closely related to the one I expressed two comments upthread here).
Isn't that what rationality is supposed to reduce?
Inter alia, yes. But the step from "rationality is supposed to reduce X" to "I will act as if X has been reduced to negligibility" is not a valid one.
Replies from: pwno↑ comment by pwno · 2016-06-23T20:31:30.234Z · LW(p) · GW(p)
Inter alia, yes. But the step from "rationality is supposed to reduce X" to "I will act as if X has been reduced to negligibility" is not a valid one.
Well, isn't that a good technique to reduce X? Obviously not in all cases, but I think it's a valid technique in the cases we're talking about.
Replies from: gjm↑ comment by gjm · 2016-06-23T21:13:09.417Z · LW(p) · GW(p)
Certainly, as you say, not in all cases. I don't see any very good reason to think it would be effective in this case. Apparently you do; what's that reason?
Replies from: pwno↑ comment by pwno · 2016-06-23T21:41:28.654Z · LW(p) · GW(p)
In the case of voting for Trump and writing the note at the Wailing Wall, I think there's little to no risk of it changing your prior beliefs or weakening your self-deception defense mechanisms. They both require you to be dishonest about something that clashes with so many other strong beliefs that it's highly unlikely to contaminate your belief system. The more dangerous lies are the ones that don't clash as much with your other beliefs.
↑ comment by ChristianKl · 2016-06-23T13:05:44.865Z · LW(p) · GW(p)
Isn't that what rationality is supposed to reduce?
No, rationality is about winning. Having certain values isn't irrational.
If you value your belief that there are no ghosts, then it's irrational to be scared of ghosts.
The relationship of most of us to democracy is different. We generally do value it and think the rituals of democracy are valuable for our society.
Replies from: pwno↑ comment by pwno · 2016-06-23T19:41:20.688Z · LW(p) · GW(p)
If you value your belief that there are no ghosts, then it's irrational to be scared of ghosts.
Are you talking about "real" ghosts? You shouldn't be afraid of real ghosts because they don't exist, not because you value your belief that there are no ghosts. Why should beliefs have any value for you beyond their accuracy?
comment by Dagon · 2016-06-16T20:02:14.991Z · LW(p) · GW(p)
Upvoted, but only for level, not direction. This doesn't deserve worse than -6 or so.
I actually agree with the statement (in most states). However, the conclusion is wrong and there's not enough text for me to know why the author comes to this conclusion.
Voting for a disliked candidate is dominated by not voting at all, or voting for a liked candidate that cannot win. For myself, Gary Johnson is that candidate.