Nash Score for Voting Techniques
post by abramdemski · 2020-11-26T19:29:31.187Z · LW · GW · 12 comments

Contents
- Correcting the No Confidence Problem
- Assume Confidence
- Assume Confidence + STAR
- Collective No Confidence
- Embracing No-Confidence
This is a follow-up to my thoughts on voting methods [LW · GW]. As I discussed there, I worry that a utilitarian score for voting techniques, called VSE, doesn't capture everything important. In particular, I worry that it doesn't sufficiently push toward compromise solutions. I also raised several other concerns, but that's the concern I'll be discussing here.
The basic idea here will be to use the Nash bargaining solution to define "equitable" outcomes. I mentioned in my previous post that there were two conflicting ways of pointing out what's wrong with VSE: (a) it doesn't push toward equitable outcomes, (b) it doesn't push against transfers of wealth. An equitability-first solution would push toward wealth redistribution to reduce inequality. An anti-transfer solution would instead seek to uphold property rights more, decrease taxes, etc. Interestingly, the current proposal ends up somewhere between the two.
As I've described before [LW · GW], the Nash bargaining solution first shifts utilities so that each person's zero point is their best alternative to negotiating an agreement (called the BATNA point). Then, all utilities are multiplied together. This has the advantage of being invariant to meaningless transformations of people's utility functions, unlike just summing up utilities. (It also has a number of other advantages which Nash outlined.)
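In symbols: writing u_i(x) for voter i's utility for outcome x and d_i for their BATNA utility, the Nash bargaining solution picks

```latex
x^* = \arg\max_x \prod_i \big( u_i(x) - d_i \big), \quad \text{subject to } u_i(x) \ge d_i \text{ for all } i.
```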
We can think of the BATNA point as the utility of "vote of no confidence" -- some electoral reformists propose that "no confidence" should be added to all ballots, representing the possibility of rejecting all candidates and putting together a new election.
Just as score voting would maximize VSE if voters were entirely honest, we can design a voting method which would maximize Nash Score if voters were perfectly honest. The ballot is exactly like a score-voting ballot, plus a row for "no confidence":
Fill in one bubble in each row.

|   | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Alice | O | O | O | O | O | O | O | O | O | O | O |
| Bob | O | O | O | O | O | O | O | O | O | O | O |
| Carol | O | O | O | O | O | O | O | O | O | O | O |
| No Confidence | O | O | O | O | O | O | O | O | O | O | O |
However, instead of adding up all the scores to find the winner, we proceed as follows:
- Subtract the "no confidence" score from the rest of the scores, on each ballot.
- Eliminate any candidates who, on any ballot, score below zero. If all candidates are eliminated in this way, "no confidence" wins.
- Score the remaining candidates by multiplying together all their scores on individual ballots.
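A minimal sketch of this tally (the ballot format and candidate names are illustrative):

```python
def nash_score_tally(ballots, candidates):
    """Tally Nash-score ballots.

    Each ballot is a dict mapping candidate names (plus "no confidence")
    to a 0-10 score.  Returns the winning candidate, or "no confidence".
    """
    # Step 1: subtract each ballot's "no confidence" score from its other scores.
    adjusted = [{c: b[c] - b["no confidence"] for c in candidates} for b in ballots]
    # Step 2: eliminate any candidate who falls below zero on any ballot.
    viable = [c for c in candidates if all(a[c] >= 0 for a in adjusted)]
    if not viable:
        return "no confidence"
    # Step 3: score survivors by the product of their BATNA-adjusted scores (the Nash product).
    def product_score(c):
        score = 1.0
        for a in adjusted:
            score *= a[c]
        return score
    return max(viable, key=product_score)

# Note how the third voter single-handedly eliminates Alice and Bob
# just by rating "no confidence" above them.
ballots = [
    {"Alice": 9, "Bob": 5, "Carol": 2, "no confidence": 1},
    {"Alice": 4, "Bob": 7, "Carol": 3, "no confidence": 2},
    {"Alice": 0, "Bob": 0, "Carol": 6, "no confidence": 5},
]
print(nash_score_tally(ballots, ["Alice", "Bob", "Carol"]))  # -> "Carol"
```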
This voting method is, of course, patently absurd. Any one person can throw the whole election by putting "no confidence" above everything else. This could only be appropriate for very small group elections, and even then, it's questionable.
This is because a real democracy doesn't actually seek the voluntary participation of every single member; even small groups aren't always OK giving every single member veto power over the group's existence. (I'm assuming that repeated "no confidence" votes would at some point lead to the dissolution of whatever governing body is holding the elections.) Bargaining theory, by contrast, is about contracts which have not yet been made, so there it is appropriate to assume that each person's voluntary compliance is required.
Note that this is not just a problem of dishonest voting -- it's true that in a large vote such as a national election, some random person would vote "no confidence" regardless of whether it was rational for them to do so; but, it's also true that in a large election there would be someone who would honestly prefer to nullify the election, and even the government.
I'll think about solutions to that problem in a minute, but let's talk about advantages of this system (under an assumption of honesty):
- Egalitarian: because we're maximizing the product, we much prefer a candidate who is a 2 for everyone over a candidate who is a 1 for half the people and a 3 for the other half (per pair of voters, 2×2 = 4 beats 1×3 = 3, even though the sums are equal).
- Doesn't sacrifice Pareto-optimality: we never prefer egalitarianism at the expense of costless improvements to someone's welfare. This could be a problem if we just modified utilitarianism with a bonus score for equality, or something.
- Moderately anti-transfer: people with a lot of resources are usually going to have higher BATNA points, which means that taking a bunch of their resources away isn't going to count as increasing equality. (This might be a point against the system, depending on your politics, and depending on how big of an effect this turns out to be.)
Correcting the No Confidence Problem
Alright, so, this is mostly unusable -- the score will too often be zero, which is what you get for dropping below anyone's BATNA (it should arguably go negative, but I don't know what the formula for that should be). Likewise, the Nash-score voting method will too often result in "no confidence".
What's to be done?
Assume Confidence
One correction could be to get rid of the "no confidence" option, and just rate everything on a scale starting at 1 (so no one can "zero out" the election):
Fill in one bubble in each row.

|   | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Alice | O | O | O | O | O | O | O | O | O | O |
| Bob | O | O | O | O | O | O | O | O | O | O |
| Carol | O | O | O | O | O | O | O | O | O | O |
If voters are honest, this is just like score voting except that we multiply everything rather than add. (In a large election, this will get you astronomical numbers, but we can deal with that by calculating everything in the log domain.) This still gets us a more egalitarian result.
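A sketch of that honest-voter tally, working in the log domain (ballot format assumed as above, minus the no-confidence row):

```python
import math

def assume_confidence_tally(ballots, candidates):
    """Multiplicative score tally on a 1-10 scale, computed in the log domain.

    Summing logs picks the same winner as multiplying raw scores, but avoids
    astronomically large products in big electorates.
    """
    log_scores = {c: sum(math.log(b[c]) for b in ballots) for c in candidates}
    return max(log_scores, key=log_scores.get)

# Two voters, two candidates: both candidates sum to 4, but the even
# spread (2, 2) beats the uneven one (1, 3) multiplicatively.
ballots = [{"Alice": 2, "Bob": 1}, {"Alice": 2, "Bob": 3}]
print(assume_confidence_tally(ballots, ["Alice", "Bob"]))  # -> "Alice"
```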
The corresponding score isn't very theoretically motivated, but at least it won't just report "zero" all the time.
Assume Confidence + STAR
Realistically, of course, voters will be strategic, meaning that a lot of people will rate most candidates 1 or 10. This devolves into approval voting, which isn't too bad, but we can do better.
To be more precise: you should estimate the expected utility of the election (based on who you think will win), and score any candidate better than that average as a 10, and any candidate lower than that average as a 1.
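In code, that naive strategy might look something like this (a sketch; the utilities and the expected-value estimate are whatever the voter happens to believe):

```python
def strategic_ballot(utilities, expected_election_utility):
    """Naive strategic exaggeration on a 1-10 ballot: max score for any candidate
    better than your expectation for the election, min score for the rest."""
    return {c: 10 if u > expected_election_utility else 1
            for c, u in utilities.items()}

# e.g. a voter who expects the election outcome to be worth about 5 to them:
print(strategic_ballot({"Alice": 8, "Bob": 5.5, "Carol": 2}, 5))
# -> {'Alice': 10, 'Bob': 10, 'Carol': 1}
```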
To encourage voters to report a more honest spread of votes, we can take the STAR voting idea: select the two finalists by taking the candidates with highest multiplicative score, and then select the winner by whoever is most often scored higher than the other person. This heavily incentivises differentiating your preferences by scoring different candidates differently, because otherwise your vote doesn't count in the final step. It can also be justified as a kind of instant runoff: rather than letting voters estimate the probable winners and extreme-ize the scores based on that, you're finding the top two probable winners for them, and strategically extreme-izing voter scores for them to select between them. This is an approximate application of the revelation principle. [LW · GW]
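A sketch of the STAR-corrected tally under these assumptions (reusing the log-domain scoring from above):

```python
import math

def star_nash_tally(ballots, candidates):
    """STAR-style correction: the multiplicative (log-sum) score picks two
    finalists, then each ballot counts for whichever finalist it scores higher."""
    log_scores = {c: sum(math.log(b[c]) for b in ballots) for c in candidates}
    a, b = sorted(candidates, key=log_scores.get, reverse=True)[:2]

    # Runoff: ballots scoring both finalists equally don't count here.
    a_prefs = sum(1 for ballot in ballots if ballot[a] > ballot[b])
    b_prefs = sum(1 for ballot in ballots if ballot[b] > ballot[a])
    # A tied runoff defaults to the higher-scored finalist.
    return a if a_prefs >= b_prefs else b
```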
Unfortunately, this is vulnerable to clone problems (which I discussed extensively in a previous post [LW · GW]). I'm not sure how to solve this problem while preserving the spirit of Nash Score voting.
Collective No Confidence
It doesn't really make sense to totally eliminate the "no confidence" option, though. If just one person would be better off with their BATNA, this shouldn't totally cancel the election. If a significant number of people prefer to cancel the election, however, then it should be cancelled.
Let's modify the Nash bargaining game. We can pick a number -- for example, if at least 50% of people are better off with their BATNA, then we get "no deal". If a majority of people want the bargain to go through, however, then they can force the bargain on the minority. This reflects the reality that if a majority are unhappy with an election, the election won't stand, but an unhappy minority can be forced to accept the result. (50% is probably too high, realistically; a sizable minority who refuse to accept the election result is still a big problem.)
Now, ideally, we would solve that game by finding a game-theoretic equilibrium, in order to derive a generalization of the Nash score. However, this is a quick post, so I'm not going to really do the work of solving the game. Instead, my rough solution is as follows:
As with the first scoring method I described, subtract BATNA values from other utilities to set a zero point for each voter. Then score each candidate by multiplying together their scores on all ballots where they score positively, ignoring zero or negative scores -- except that if a candidate gets a zero or negative score on 50% or more of ballots (or whatever the chosen cutoff is), that candidate scores zero.
This score kind of sucks, because it entirely ignores the preferences which fall below the BATNA point, so long as less than 50% of people think similarly. Possibly a proper game-theoretic solution would be more sensitive to these negative values.
In any case, the voting method corresponding to this score is pretty obvious:
- Use the Nash-inspired ballot from before.
- Score candidates as described.
- If any candidate scores above zero, then the candidate with the highest score wins. Otherwise, the election gets a "no confidence" result.
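A sketch of this scoring rule and tally, with the cutoff as a parameter (everything here is illustrative, not a worked-out proposal):

```python
import math

def collective_nash_tally(ballots, candidates, cutoff=0.5):
    """Collective-no-confidence tally.

    Each ballot maps candidates (plus "no confidence", standing in for the
    voter's BATNA) to a 0-10 score.
    """
    def score(candidate):
        positive_scores = []
        at_or_below_batna = 0
        for ballot in ballots:
            adjusted = ballot[candidate] - ballot["no confidence"]
            if adjusted > 0:
                positive_scores.append(adjusted)
            else:
                at_or_below_batna += 1
        if at_or_below_batna >= cutoff * len(ballots):
            return 0.0  # too many voters prefer their BATNA: candidate scores zero
        # Product of the positive adjusted scores, via the log domain.
        return math.exp(sum(math.log(s) for s in positive_scores))

    scores = {c: score(c) for c in candidates}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "no confidence"
```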
If voters are honest, then this obviously maximizes our score.
Of course, voters won't be 100% honest. We can apply the STAR-like correction again, using score to select two finalists and then doing an instant runoff. This encourages honesty for how voters score candidates. I'm not sure about the strategy for setting BATNA values, though. On the one hand, voters should set their BATNAs fairly high in order to try and eliminate candidates they don't like. On the other hand, if you can't successfully eliminate candidates, then a high BATNA means your preferences below the BATNA value don't count (until the final runoff step). So that incentivises you not to set your BATNA too high. But nothing about this suggests that people will use their true BATNA values.
Not sure how good this method is overall.
Embracing No-Confidence
Another route would be to embrace the high degree of no-confidence outcomes as a new type of voting, specifically for situations where everyone does need to be on board.
However, this seems like a fairly uncommon use-case.
A more common use-case would be a kickstarter-like situation, where some number of people have to be on board for anything to happen, but unlike regular kickstarter, the outcome is also in question. For example, the rationality community's location problem [LW · GW]. This is a difficult problem because it requires aggregating the preferences of a lot of people, and creating consensus about the outcome, but also requires a great deal of deliberation. It would be nice if we had a voting method with the following properties:
- We can elect representatives who speak for our interests. For example, it could be like liquid democracy where anyone can delegate their vote to anyone. Delegates can then put in the time and effort to form coherent opinions about which options best serve their constituency -- or, can delegate even further, creating a hierarchy of delegates. Delegation implies consent to abide by the result of the vote, so it's a consensus-building tool.
- Each person can indicate additional conditions for a move. For example, Alice could state a condition that >100 other rationalists sign on to a move, and further state that the list must include Bob and Carol.
- Delegates representing a sufficient number of people get involved with discussions, so that the difficult deliberations can (1) represent a sufficient number of people, while (2) involving a feasible number of people in discussion. Unlike classic liquid democracy, this creates a significant incentive to delegate your vote to another person (even if you are personally well-informed in the issues), because a small number of delegates can have a meaningful discussion and reach consensus, whereas a large number of people can't, no matter how well-informed they are.
I'm not sure what such a voting method should really look like, however.
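Even without a full design, one small piece -- checking whether a set of conditional sign-ons like Alice's is mutually satisfiable -- is concrete enough to sketch (the condition format here is invented for illustration):

```python
def resolve_signons(people, conditions):
    """Find the largest self-consistent set of sign-ons.

    `conditions` maps each person to a predicate over the current sign-on set.
    Start with everyone tentatively in, then repeatedly drop anyone whose
    condition fails, until nothing changes (a greatest-fixed-point computation,
    assuming conditions only depend positively on who else is in).
    """
    signed_on = set(people)
    changed = True
    while changed:
        changed = False
        for person in list(signed_on):
            if not conditions[person](signed_on):
                signed_on.discard(person)
                changed = True
    return signed_on

# Toy example in the spirit of Alice's condition (names illustrative):
conditions = {
    "Alice": lambda s: {"Bob", "Carol"} <= s and len(s) >= 3,
    "Bob":   lambda s: "Carol" in s,
    "Carol": lambda s: True,
}
print(resolve_signons(conditions.keys(), conditions))  # -> {'Alice', 'Bob', 'Carol'}
```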
12 comments
comment by Yoav Ravid · 2021-11-05T06:08:56.162Z · LW(p) · GW(p)
> A more common use-case would be a kickstarter-like situation, where some number of people have to be on board for anything to happen, but unlike regular kickstarter, the outcome is also in question.
I've also been interested in this use case for assurance contracts [? · GW]. I suspect an assurance contract website would have to have this functionality, for exactly these sorts of cases where the desired outcome isn't obvious.
This is also useful for something I've thought about, which is crowdfunded bounties, where people gather funds for an idea they're interested in, in the hope that it will motivate someone to do it.
Let's say for example I want an HPMOR video game made. In regular crowdfunding I would open a campaign to fund my own project, and get funds from people to help me make a video game. Problem is, I don't know how to make video games.
In bounty crowdfunding I'll open a campaign that explains why this is a good idea that should be funded, people who also want an HPMOR video game will support it, and then anyone can make the game and reap the funds as if it were their own crowdfunding campaign.
The problem is, how do you make sure someone worthy gets the funds? In regular crowdfunding you see the offer before you give it money. Here you need some way to verify worthiness.
So I think something like a no-confidence vote can be used. When someone makes a claim to the funds, a vote is held to choose which, if any, of the bidders gets the funds, and if no one gets it then the campaign stays open.
What I'm not sure could be done well -- though that's more a technical problem than a theoretical one -- is the commitment mechanism for pledging money. On Kickstarter this isn't as much of a problem, for two reasons: one, the time frame is shorter (campaigns usually run for 30 days); and two, someone has already put in some effort beforehand in the belief that people will pay them for it.
Here, you have lots of people saying they will pay for something, so you might be motivated to work on it, but how can you be sure they will really pay? If you solve that by sending money to a shared fund held by the platform, then there's a different problem: people's money might sit there doing nothing for years until someone comes along and takes on the bounty.
comment by abramdemski · 2021-11-06T16:08:25.524Z · LW(p) · GW(p)
Yeah, this seems really interesting but really tricky. Someone puts a lot of effort into an HPMOR game but it's just not quite up to snuff and gets no-confidence from the community of funders? Makes it seem like an unappealing prospect even for people who are fairly confident in their skills -- which in turn means the potential reward needs to be higher to compensate for the risk, making it a more expensive funding method. Let's say there's a 5% risk of no-confidence (as estimated subjectively by a potential programmer). That does not just mean a 5% markup, because for small firms / individuals, 5% chance of no compensation is a high risk (ie, they're not very risk tolerant -- they Kelly bet, or realistically, are even less risk-tolerant than that; so a 5% chance of no compensation is effectively a big downside risk).
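To put rough numbers on that: a back-of-the-envelope for a log-utility (Kelly-betting) developer, with invented wealth and bounty figures:

```python
import math

def certainty_equivalent(wealth, bounty, p_paid):
    """Value of a risky bounty to a log-utility (Kelly-betting) agent."""
    expected_log_wealth = (p_paid * math.log(wealth + bounty)
                           + (1 - p_paid) * math.log(wealth))
    return math.exp(expected_log_wealth) - wealth

# A small developer with $20k in reserves eyeing a $50k bounty that has a
# 5% chance of ending in "no confidence":
ce = certainty_equivalent(20_000, 50_000, 0.95)
print(f"${ce:,.0f}")  # ~$45.7k, vs the risk-neutral value of $47.5k
```

Relative to a guaranteed $50k, that's roughly an 8.5% haircut rather than 5%, and it gets worse the smaller the developer's reserves.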
So you also need something resembling "no-confidence insurance" built into the platform. This could take the form of some kind of mechanism for investing in potential programmers: people can vet the candidates and extend funds immediately, with some returns for them in the case that their candidate succeeds.
But, all this complexity also probably makes the funding model a harder sell, especially for early adopters.
comment by Yoav Ravid · 2021-11-06T16:39:34.166Z · LW(p) · GW(p)
The way I thought about it was that someone gives a proposal, and if they win then they get the funds and are now obligated to create the thing (like with Kickstarter), or perhaps it could be a half-now, half-later scheme. Anyway, the point is that the most important thing for a potential developer is how much effort they need to put in before they secure/get the funds. I guess that changes mainly depending on reputation: if someone is known for successfully taking on projects after getting funded, people will be more likely to accept their proposal with less initial investment, while others might have to put a lot of effort and resources into a proposal to stand out and prove themselves.
There's a balance here of who takes the risk - the funders who aren't sure beforehand if the developer is going to do a good job, or the developer who doesn't know beforehand if they'll be accepted by the funders. Maybe the balance could be different between different projects.
(I'm sure they figured it out already in dath ilan [? · GW], haha)
comment by abramdemski · 2021-11-08T19:04:18.916Z · LW(p) · GW(p)
I'm reminded of the common idea in the startup-funding community, that you're investing in the team, not the idea. The idea is an important foundation, but projects can and should pivot. So, to invest, you need to have confidence in where it could go -- which depends on the people who will take it there.
The whole "crowdfunding bounties" idea is a bit like "kickstarter, but decouple who proposes a project from who works on it", right? So, this might result in a big mess if it's not done very carefully. That's one of the main motives of my previous reply -- worry about the chesterton fence of investing-in-people-not-ideas.
Not to say it is an insurmountable obstacle; just to say that's one area where I'd poke at the idea a bunch.
comment by Yoav Ravid · 2021-11-09T15:34:35.655Z · LW(p) · GW(p)
> The whole "crowdfunding bounties" idea is a bit like "kickstarter, but decouple who proposes a project from who works on it", right?
Yes, that's a very good way to put it.
And I agree that investing-in-the-team is important and shouldn't disappear with this model.
So to reiterate the idea:
Someone would come up with an idea, which would be far more general/bare bones than the usual kickstarter project. Something more like "An HPMOR video game" than "An HPMOR video game, with these mechanics, and these graphics, and this story, etc...".
And people would commit money as a show of interest in an idea, sort of like an investor saying "I'm interested in startups working on VR, send me your pitch".
Then if enough money has been committed to attract teams/individuals to the project, they would have to pitch themselves and their vision to the crowd (with varying degrees of investment before the pitch, that part doesn't change from kickstarter/startups), and the crowd would have to decide together whether to accept that pitch based both on the team/individual and their vision.
If they do, the promised funds have to be transferred, though not necessarily in one go. It could be dependent on certain milestones, for example (I think this is also common in the startup world?).
-
But even with lots of talking and figuring it out theoretically, it would probably need a lot of actual testing to get right.
A simple first test of this could be to crowdfund a simple bounty with Kickstarter (perhaps for LessWrong posts), create some voting system to let the funders pick who wins the bounty, and see what we learn from it.
comment by abramdemski · 2021-11-10T15:28:47.553Z · LW(p) · GW(p)
Ah, ok, fair enough, so the initial "commitment" of money isn't really much of a commitment, it is basically a collective signal of interest in kickstarting projects of a specific sort. And then, specific teams basically run a regular kickstarter "under that banner", hoping to attract those specific investors. Plus, perhaps, some additional flexibility on when funds actually get transferred.
So it sounds to me like the simplest version is to make a website "parallel" to kickstarter, ie, where kickstarter investors can indicate how much $ they think they would invest in various projects (and where kickstarter projects can submit their kickstarters as supposedly meeting those requirements, and investors get notified, and have an easy interface to approve funds).
comment by Yoav Ravid · 2021-11-10T16:45:46.505Z · LW(p) · GW(p)
I did mean an actual commitment; that's why I said it's a technical problem in my original comment. I don't imagine it's a commitment you can't walk back on at all, but there should be terms on that (say, a week's or month's notice, perhaps changing depending on the project), so potential developers* can be confident the amount shown is the amount they'll get (that's really the important bit). Otherwise they have to guess whether people are still interested, and they have to get them to then actually pledge money, so it would end up not actually helping them that much -- and if it doesn't help the developer then it also doesn't help the investors.
So what you suggested can work as a precursor without the commitment mechanism, but I think the commitment mechanism is important enough that it wouldn't actually tell us much (so it wouldn't really be an MVP).
*What's the right word for the person/team that takes on a project? Executors, because they execute the project? Sounds a bit dark though, lol.
comment by supposedlyfun · 2020-11-27T00:01:12.351Z · LW(p) · GW(p)
> This voting method is, of course, patently absurd. Any one person can throw the whole election by putting "no confidence" above everything else. This could only be appropriate for very small group elections, and even then, it's questionable.
>
> This is because a real democracy doesn't actually seek the voluntary participation of every single member...
This is great explaining throughout, but especially the block quote. You communicated to me (as in, I had a click/eureka moment) the idea you were trying to teach using very close to the least number of words required.
comment by Gurkenglas · 2020-11-27T08:29:46.486Z · LW(p) · GW(p)
If 30% of people can block the election, someone's going to have to be in command of the troops. The least perverse option seems to be the last president. Trump could probably have gotten 30% to block it to stay in that chair. A minority blocking the election seems supposed to simulate (aka give a better alternative to) civil war, which is uncommon because it is costly. So perhaps blocking should be made costly to the populace. Say, tax everyone heavily for each blocked election and donate the money to foreign charities. This also incentivizes foreign trolls to cause blocked elections, which seems fair enough - if the enemy controls your election, it should crash, not put a puppet in office.
STAR is useless if people can assign real-valued scores. That makes me think that if it works, it's for reasons of discrete mathematics, so we should analyze the system from the perspective of discrete mathematics before trusting it.
Instead of multiplying values >= 1 and "ignoring" smaller values, you should make explicit that you feed the voter scores through a function (in this case \x -> max(0, log(x))) before adding them up. \x -> max(0, log(x)) does not seem like the optimal function for any seemly purpose.
comment by abramdemski · 2020-11-27T14:17:17.803Z · LW(p) · GW(p)
> STAR is useless if people can assign real-valued scores. That makes me think that if it works, it's for reasons of discrete mathematics, so we should analyze the system from the perspective of discrete mathematics before trusting it.
A fair point. If voters were allowed real-valued scores, they could make scores very close, and things still basically devolve into approval voting.
> \x -> max(0, log(x)) does not seem like the optimal function for any seemly purpose.
Also true. I just don't know how to continue log into the negative ;p
comment by jimv · 2020-11-26T21:34:32.250Z · LW(p) · GW(p)
Have you thought about treating 'no confidence' as a candidate? How would it play out if there were a variant of the approach detailed under 'Assume Confidence + STAR' where instead of assuming confidence you have an extra n.c. 'candidate' who gets scored the same as the others, and if it wins then the election is rerun?
comment by abramdemski · 2020-11-27T00:49:14.273Z · LW(p) · GW(p)
I think part of the point (for me) of the Nash bargaining analogy is that "no confidence" isn't like other candidates... but, yeah, that being said, treating it as a candidate would produce more reasonable results here. I agree that "assume confidence + STAR" with an extra no-confidence candidate would be pretty reasonable compared to what I've come up with so far.
Still holding out hope for a more theoretically justified solution if the game theory can be solved for the "collective no confidence" bargaining game, though.