post by [deleted]

This is a link post for

Comments sorted by top scores.

comment by cousin_it · 2012-07-18T23:15:51.447Z · LW(p) · GW(p)

Great work! Your problem statement seems to be on the right track. But it's not very clear why a specific polynomial would be a good answer, because the simulation model described in section 3 looks a little arbitrary... or maybe I have missed some obvious argument.

Replies from: badger
comment by badger · 2012-07-19T03:22:09.665Z · LW(p) · GW(p)

Thanks! The polynomial rules are approximations to my attempted calculation of the optimal rule. They are proper scoring rules on their own. They look close to optimal, and there is something to be said for simplicity. Deeper theory would be nice though...
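For readers who want a concrete handle on "proper scoring rule": the quadratic (Brier) rule is the textbook polynomial example, and it is easy to verify numerically that reporting your true belief maximizes its expected score. The sketch below is just that standard rule with an arbitrary belief plugged in, not the post's own polynomial rules:

```python
# Sketch: the quadratic (Brier) rule, the classic polynomial proper scoring rule.
# Score for reporting probability q of a binary event: S(q, outcome) = 1 - (outcome - q)^2.
# Expected score is maximized by reporting your true belief p.

def brier_score(q, outcome):
    """Quadratic scoring rule for a binary event (outcome is 0 or 1)."""
    return 1 - (outcome - q) ** 2

def expected_score(q, p):
    """Expected Brier score of reporting q when you believe the event has probability p."""
    return p * brier_score(q, 1) + (1 - p) * brier_score(q, 0)

p_true = 0.7  # arbitrary true belief, for illustration only
best_report = max((q / 100 for q in range(101)), key=lambda q: expected_score(q, p_true))
print(best_report)  # 0.7 -- honest reporting maximizes expected score
```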

What about the simulations stands out as particularly arbitrary? Presumably how agents form predictions about the opinions of others. Without stronger assumptions on agents' rationality, it's hard not to be a little ad hoc.

comment by Decius · 2012-07-19T04:32:05.007Z · LW(p) · GW(p)

It still fails by conspiracy: people who want to get the payoff and change the outcome simply band together and provide consistent lies. If they know what percentage of the total they are, they can be the most accurate predictors of their own answer (and, by extension, of all other answers). When the conspiracy reaches some critical size, the new Nash equilibrium for individual profit is to agree with the conspirators.

To create that, I simply claim publicly that I already have 25% of the voting bloc agreed to force answer y, even though everyone can tell the claim is false. Assume perfectly rational, identical agents who nonetheless believe there is a 10% chance the claim is true. Then their expected outcome from voting honestly is slightly lower than from voting dishonestly. If they are honest, there is a 10% chance that 25% of the voters disagree with them, so their expected y-share is 2.5%; since I don't actually control any percentage of the votes, their accuracy value is 1 and their prediction value is 0.975. If they instead behave rationally and collude, their accuracy value is 1 and their prediction value is 1, since they still voted the same as everyone else but can now accurately predict how everyone else voted. The numbers point in the same direction if I claim to control only one vote (my own) forcing answer y, with any level of certainty attached. If everyone votes for x and assigns 99.999% probability to others voting x (including me, except that I assign 100% probability to x), then they each score slightly less than 2 points. If everyone votes y and predicts that 100% of everybody will vote y, then everybody gets 2 points. Since everyone is perfectly rational and wants to maximize their own score, everyone goes along with the one person with ulterior motives.
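(A rough sketch of the arithmetic in that scenario. The additive "accuracy + prediction" payoff and the variable names below are my reading of the comment, not the actual scoring rule from the post:)

```python
# Assumption: an agent's payoff is accuracy + prediction, where accuracy is 1 when the
# agent votes with the realized majority and prediction is the probability the agent
# assigned to the realized outcome. This additive form is my own simplification.

p_claim_true = 0.10   # credence that the announced bloc for y really exists
bloc_size    = 0.25   # announced size of that bloc

# Honest case: vote x, predict a y-share of 0.10 * 0.25 = 0.025.
# In fact the bloc does not exist, so everyone votes x.
honest_accuracy   = 1.0
honest_prediction = 1.0 - p_claim_true * bloc_size      # 0.975
honest_score      = honest_accuracy + honest_prediction  # 1.975

# Colluding case: everyone switches to y and predicts 100% y.
collude_accuracy   = 1.0
collude_prediction = 1.0
collude_score      = collude_accuracy + collude_prediction  # 2.0

print(honest_score, collude_score)  # 1.975 2.0 -- colluding scores slightly higher
```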

Granted, I had to break the initial conditions to allow agents to communicate and to allow one person in the entire group to have an outside agenda. Given that a large number of agents in a huge group will have different agendas, it comes down to everyone going along with the most credible set of agendas.

Replies from: badger
comment by badger · 2012-07-19T13:39:44.202Z · LW(p) · GW(p)

I'm confused about what you are addressing.

My mechanism is designed to operate without payments (in contrast to the original BTS). Since no payments are being made and no score is necessarily being tracked, the incentive to participate is the same as in a typical poll: to influence the outcome. Recruiting people to vote with you helps your side, same as in a regular poll, but that won't make anyone else want to switch their vote. A public announcement about a committed voting bloc affects the predictions of both sides equally. A manipulator should persuade everyone else he is on their side, which helps his bloc and hurts everyone else.

With Prelec's BTS, you should falsely announce that the other side has lots of support, so that people predict it will be popular; when it falls short of those predictions, it ends up looking lackluster.
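For reference, a sketch of why that works under BTS scoring as I understand Prelec's formula: an answer's information score is the log of its actual frequency over the geometric mean of predicted frequencies, so an answer that has been hyped (over-predicted) scores worse. The toy numbers below are invented for illustration:

```python
import numpy as np

def bts_scores(answers, predictions, alpha=1.0):
    """Bayesian Truth Serum scores (my reading of Prelec 2004).

    answers: array of answer indices, one per respondent.
    predictions: (n, m) array of each respondent's predicted answer frequencies.
    """
    n, m = predictions.shape
    x_bar = np.bincount(answers, minlength=m) / n        # actual answer frequencies
    y_bar = np.exp(np.log(predictions).mean(axis=0))     # geometric mean of predictions
    info = np.log(x_bar[answers] / y_bar[answers])       # information score per respondent
    pred = alpha * (x_bar * np.log(predictions / x_bar)).sum(axis=1)  # prediction score
    return info + pred

# Toy poll: 60% answer 0, 40% answer 1.
answers = np.array([0] * 6 + [1] * 4)

honest = np.tile([0.6, 0.4], (10, 1))   # predictions roughly matching reality
hyped  = np.tile([0.3, 0.7], (10, 1))   # predictions after a false announcement hyping answer 1

print(bts_scores(answers, honest).round(3))  # everyone scores ~0
print(bts_scores(answers, hyped).round(3))   # answer 1 voters now score worse: over-predicted
```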

comment by Psychosmurf · 2012-07-29T22:44:13.577Z · LW(p) · GW(p)

As to the question of why it works, it seems to me that it's because it takes into account the rationality of each participant (via the accuracy of their prediction about how many people will agree with them) and then gives the more rational participants' answers greater weight.

If that's the case, then any rationality test could be used as a truth serum. If you want to know whether or not string theory is true, you're probably better off asking people who don't believe that the Earth is flat.
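If one wanted to test that intuition, here is a deliberately crude toy aggregator (my own construction, not anything from the post): weight each respondent's answer by how close their predicted level of agreement was to the realized one.

```python
import numpy as np

def weighted_verdict(answers, predicted_agreement):
    """answers: 0/1 votes; predicted_agreement: each voter's predicted share agreeing with them."""
    answers = np.asarray(answers)
    predicted = np.asarray(predicted_agreement)
    # Realized agreement for each voter: the share of the poll that voted the same way.
    actual_agreement = np.where(answers == 1, answers.mean(), 1 - answers.mean())
    # Better predictors of agreement get more weight in the aggregate.
    weights = 1.0 - np.abs(predicted - actual_agreement)
    return np.average(answers, weights=weights)

votes     = [1, 1, 1, 0, 0]
predicted = [0.6, 0.55, 0.7, 0.9, 0.2]   # invented numbers for illustration
print(weighted_verdict(votes, predicted))  # weighted share for answer 1
```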