[LINK] The P + epsilon Attack (Precommitment in cryptoeconomics)

post by DanielVarga · 2015-01-29T02:02:16.615Z · LW · GW · Legacy · 22 comments


Vitalik Buterin has a new post about an interesting theoretical attack against Bitcoin. The idea relies on the assumption that the attacker can credibly commit to something quite crazy. The crazy thing is this: he promises to pay out 25.01 BTC to all the people who help him in his attack to steal 25 BTC from everyone, but only if the attack fails. This leads to a weird payoff matrix in which the dominant strategy is to help him in the attack. So the attack succeeds, and no payout is ever made.

Of course, smart contracts make such crazy commitments perfectly possible, so this is a bit less theoretical than it sounds. But even as an abstract thought experiment about decision theories, it looks pretty interesting.

By the way, Vitalik Buterin is really on a roll. Just a week ago he had a thought-provoking blog post about how Decentralized Autonomous Organizations could possibly utilize a concept often discussed here: decision theory in a setup where agents can inspect each other's source code. It was shared on LW Discussion, but earned less exposure than I think it deserved.

EDIT 1: One smart commenter on the original post spotted that an isomorphic, extremely cool game was already proposed by billionaire Warren Buffett. Does this game already have a name in game theory, maybe?

 

EDIT 2: I wrote the game up in detail for some old-school game theorist friends:

The attacker orchestrates a game with 99 players. The attacker himself does not participate in the game.

Rules:

Each of the players can either defect or cooperate, in the usual game-theoretic setup: they announce their decisions simultaneously, without side channels. We call the decision made by the majority of the players the "aggregate outcome". If the aggregate outcome is defection, we say that the attack succeeds. A player's payoff consists of two components:

1. If her decision coincides with the aggregate outcome, the player gets 10 utilons.

and simultaneously:

2. If the attack succeeds, the attacker takes 1 utilon from each of the 99 players, regardless of their own decision.

                | Cooperate  | Defect
Attack fails    |        10  | 0
Attack succeeds |        -1  | 9

There are two equilibria, but the second payoff component breaks the symmetry, and everyone will cooperate.
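
For the game theorist friends who prefer running the rules to reading them, here is a minimal Python sketch of the payoff rule above (the names are mine, purely for illustration, not anyone's reference implementation). It confirms that both all-cooperate and all-defect are equilibria of the base game.

    # Minimal sketch of the base game: 99 players, the majority decision is the
    # "aggregate outcome", +10 for matching it, and -1 taken from everyone by
    # the attacker if the attack (majority defection) succeeds.

    N_PLAYERS = 99

    def payoff(defects: bool, total_defectors: int) -> int:
        """One player's payoff, given her own choice and the total number of
        defectors among all 99 players (her own choice included)."""
        attack_succeeds = total_defectors > N_PLAYERS // 2
        matches_majority = (defects == attack_succeeds)
        return (10 if matches_majority else 0) + (-1 if attack_succeeds else 0)

    # Everyone cooperates: a lone deviator gets 0 instead of 10 -> equilibrium.
    print(payoff(False, 0), payoff(True, 1))    # 10 0
    # Everyone defects: a lone deviator gets -1 instead of 9 -> also an equilibrium.
    print(payoff(True, 99), payoff(False, 98))  # 9 -1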

Now the attacker spices things up by making a credible commitment before the game. ("Credible" simply means that somehow they make sure that the promise cannot be broken. The classic way to achieve such things is an escrow, but so-called smart contracts are emerging as a method for making fully unbreakable commitments.)

The attacker's commitment is quite counterintuitive: he promises that he will pay 11 utilons to each of the defecting players, but only if the attack fails.

Now the payoff looks like this:

                | Cooperate  | Defect
Attack fails    |        10  | 11
Attack succeeds |        -1  | 9

Defection became a dominant strategy. The clever thing, of course, is that if everyone defects, then the attacker reaches his goal without paying out anything.
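
Continuing the illustrative sketch from above, the commitment is just one extra term, and in the 2x2 "aggregate" view defection now gives the higher payoff in both rows:

    # The same game with the attacker's commitment: defectors receive 11 extra
    # utilons, but only if the attack fails.

    def payoff_with_commitment(defects: bool, total_defectors: int) -> int:
        attack_succeeds = total_defectors > 49  # majority of 99 players
        base = (10 if defects == attack_succeeds else 0) + (-1 if attack_succeeds else 0)
        bribe = 11 if (defects and not attack_succeeds) else 0
        return base + bribe

    # Attack fails:    cooperate 10  vs  defect 11
    print(payoff_with_commitment(False, 0),  payoff_with_commitment(True, 1))   # 10 11
    # Attack succeeds: cooperate -1  vs  defect 9
    print(payoff_with_commitment(False, 98), payoff_with_commitment(True, 99))  # -1 9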

22 comments

Comments sorted by top scores.

comment by Woett · 2015-02-01T05:10:03.119Z · LW(p) · GW(p)

I'm confused about defection becoming a dominant strategy, because the existence of a dominant strategy suggests to me that there should exist a unique Nash equilibrium here, which is not the case. Everyone defecting is a Nash equilibrium, but 50 people cooperating and 49 defecting is a Nash equilibrium as well, and a better one at that. Something (quite likely my intuition regarding Nash equilibria in games with more than 2 players) is off here. Also, it is of course possible to calculate the optimal probability that we should defect, and I agree with FeepingCreature that this should be 0.5-e, where e depends on the size of the player base and goes to 0 when the player base becomes infinite. But I highly doubt that there's an elegant formula for it. It seems (in my head at least) that already for, say, n=5 you have to do quite a bit of calculation, let alone n=99.
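
(A rough numerical sketch of that calculation, under the assumption that every player independently defects with the same probability p and gets the payoffs from the post's second table; the function names are made up for illustration.)

    from math import comb

    def expected_payoff(p: float, n: int = 99) -> float:
        """Expected payoff of one player when all n players defect with probability p."""
        others = n - 1
        # P(exactly k of the other n-1 players defect)
        pk = [comb(others, k) * p**k * (1 - p)**(others - k) for k in range(others + 1)]
        majority = n // 2 + 1  # defectors needed for the attack to succeed
        p_succ_if_defect = sum(pk[majority - 1:])  # my defection supplies one of them
        p_succ_if_coop = sum(pk[majority:])
        ev_defect = 9 * p_succ_if_defect + 11 * (1 - p_succ_if_defect)
        ev_coop = -1 * p_succ_if_coop + 10 * (1 - p_succ_if_coop)
        return p * ev_defect + (1 - p) * ev_coop

    # Scan a grid: for n = 99 the maximizer sits noticeably below 0.5; rerunning
    # with larger n pushes it toward 0.5, consistent with the 0.5 - e intuition.
    grid = [i / 1000 for i in range(501)]
    best_p = max(grid, key=expected_payoff)
    print(best_p, expected_payoff(best_p))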

Replies from: DanielVarga
comment by DanielVarga · 2015-02-01T11:55:58.865Z · LW(p) · GW(p)

Nice. If we analyze the game using Vitalik's 2x2 payoff matrix, defection is a dominant strategy. But now I see that's not how game theorists would use this phrase. They would work with the full 99-dimensional matrix, and there defection is not a dominant strategy, because as you say, it's a bad strategy if we know that 49 other people are cooperating, and 49 other people are defecting.

There's a sleight of hand going on in Vitalik's analysis, and it is located at the phrase "regardless of one's epistemic beliefs [one is better off defecting]". If my epistemic belief is that 49 other people are cooperating, and 49 other people are defecting, then it's not true that defection is my best strategy. Of course, Vitalik's 2x2 matrix just does not allow me to have such refined epistemic beliefs: I have to get by with "attack succeeds" versus "attack fails".

Which kind of makes sense, because it's true that I probably won't find myself in a situation where I know for sure that 49 other people are cooperating, and 49 other people are defecting, so the correct game theoretic definition of dominant strategy is probably less relevant here than something like Vitalik's "aggregate" version. Still, there are assumptions here that are not clear from the original analysis.
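
(A quick check of that 49/49 scenario with the post's payoff rules; the little helper below is just an illustrative restatement, not new analysis.)

    # The other 98 players split 49/49; payoffs as in the post's second table.
    def my_payoff(i_defect: bool, other_defectors: int = 49) -> int:
        total = other_defectors + (1 if i_defect else 0)
        attack_succeeds = total >= 50
        base = (10 if i_defect == attack_succeeds else 0) + (-1 if attack_succeeds else 0)
        return base + (11 if i_defect and not attack_succeeds else 0)

    print(my_payoff(False), my_payoff(True))  # 10 9 -- cooperating is strictly better here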

Replies from: vbuterin
comment by vbuterin · 2015-02-02T22:38:23.254Z · LW(p) · GW(p)

So, I did not forget about that particular case. In my particular brand of cryptoeconomic analysis, I try to decompose cooperation incentives into three types:

  1. Incentives generated by the protocol
  2. Altruism
  3. Incentives arising from the desire to have the protocol succeed because one has a stake in it

I often group (2) and (3) into one category, "altruism-prime", but here we can separate them.

The important point is that category 1 incentives are always present as long as the protocol specifies them, category 2 incentives are always present, but the size of category 3 incentives is proportional to the "probability of being pivotal" of each node - essentially, the probability that the node actually is in a situation where its activity will determine the outcome of the game.

Note that I do not consider 49/50 Nash equilibria realistic; in real massively multiplayer games, the level of confusion, asynchronicity, trembling hands/irrational players, bounded rationality, etc, is such that I think it's impossible for such a finely targeted equilibrium to maintain itself (this is also the primary keystone of my case against standard and dominant assurance contracts). Hence why I prefer to think of the probability distribution on the number of players that will play a particular strategy and from there the probability of a single node being pivotal.

In the case of cryptoeconomic consensus protocols, I consider it desirable to achieve a hard bound of the form "the attacker must spend capital of at least C/k" where C is the amount of capital invested by all participants in the network and k is some constant. Since we cannot prove that the probability of being pivotal will be above any particular 1/k, I generally prefer to assume that it is simply zero (ie, the ideal environment of an infinite number of nodes of zero size). In this environment, my usage of "dominant strategy" is indeed fully correct. However, in cases where hostile parties are involved, I assume that the hostile parties are all colluding; this maximally hard double-standard is a sort of principle of charity that I believe we should hold to.
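
(To make the "probability of being pivotal" concrete, here is a small sketch under one assumed model: each of the other n-1 voters defects independently with probability p, and a node is pivotal exactly when the others split so that its single vote decides the majority. The model and the names are illustrative assumptions, not part of any protocol.)

    from math import comb

    def prob_pivotal(n: int, p: float) -> float:
        """P(the other n-1 voters split so that one extra vote decides the majority)."""
        others = n - 1
        k = others // 2  # with n odd: pivotal iff exactly (n-1)/2 of the others defect
        return comb(others, k) * p**k * (1 - p)**(others - k)

    for n in (99, 999, 9999):
        print(n, prob_pivotal(n, 0.5))
    # Even at p = 0.5 this shrinks roughly like 1/sqrt(n), and it collapses much
    # faster when p is away from 0.5 -- the limit modeled above as simply zero.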

comment by DanielVarga · 2015-01-30T16:00:34.904Z · LW(p) · GW(p)

I don't know too much about decision theory, but I was thinking about it a bit more, and for me, the end result so far was that "dominant strategy" is just a flawed concept.

If the agents behave superrationally, they do not care about the dominant strategy, and they are safe from this attack. And the "super" in superrational is pretty misleading, because it suggests some extra-human capabilities, but in this particular case it is so easy to see through the whole ruse, one has to be pretty dumb not to behave superrationally. (That is, not to consider the fact that other agents will have to go through the same analysis as ourselves.)

Superrationality works best when we actually know that the others have the same input-output function as ourselves, for example when we know that we are clones or software copies of each other. But real life is not like that, and now I believe that the clean mathematical formulation of such dilemmas (with payoff matrices and all that) is misleading, because it sweeps under the rug another, very fuzzy, hard-to-formalize input variable: the things that we know about the reasoning processes of the other agents. (In the particular case of the P+epsilon attack, we don't have to assume too much about the other agents. In general, we do.)

comment by FeepingCreature · 2015-01-29T11:10:02.946Z · LW(p) · GW(p)

Weird question: superrationally speaking, wouldn't the "correct" strategy be to switch to B with 0.49 probability? (Or with however much is needed to ensure that if everybody does this, A probably still wins)

[edit] Hm. If B wins, this strategy halves the expected payoff. So you'd have to account for the possibility of B winning accidentally. Seems to depend on the size of the player base - the larger it is, the closer you can drive your probability to 0.5? (at the limit, 0.5-e?) Not sure. I guess it depends on the size of the attacker's epsilon as well.

I'm sure there's some elegant formula here, but I have no idea what it is.

Replies from: vbuterin
comment by vbuterin · 2015-02-02T22:39:46.362Z · LW(p) · GW(p)

The superrational strategy is indeed to switch to B with some probability approaching 0.5 (or, if the system allows it, vote for A with 51% of one's capital and for B with 49% of it).

comment by [deleted] · 2015-11-06T12:58:16.314Z · LW(p) · GW(p)

Consider the following. If smart contracts were implementable, one of the easier rationality-raising measures and most profitable things to do would be to immediately hold a number of contractually enforceable dollar auctions until there remained no one naive to that exact class of irrationality.

Replies from: gjm, ChristianKl
comment by gjm · 2015-11-06T14:41:47.850Z · LW(p) · GW(p)

Similarly, you could burgle people's houses until there remained no one poorly protected against whatever sort of burglary you committed. You could go around insulting people until there remained no one so thin-skinned they couldn't cope OK with being insulted by strangers. You could put disease-causing viruses and bacteria in the water supply until there remained no one alive who wasn't resistant to the diseases.

I hope it's obvious what's unsatisfactory about those courses of action. Why is the one you propose any better?

Replies from: Jiro
comment by Jiro · 2015-11-06T15:25:35.892Z · LW(p) · GW(p)

If you put diseases in the water until there remains no one who isn't resistant, this will happen partly because people will be killed by the disease. If everyone developed immunity against the disease but nobody was seriously harmed, it would not be such a bad idea--this is why we have universal vaccination.

I highly doubt that doing a bunch of dollar auctions would lead to there being no remaining naive people because the naive people were all killed off rather than because they became non-naive.

Replies from: gjm
comment by gjm · 2015-11-06T17:51:53.107Z · LW(p) · GW(p)

They would become non-naive by being harmed (by losing a pile of money). Of course that's a lesser harm than being killed, and indeed "kill the non-resistant ones" is different from "harm the non-resistant ones until they become resistant", so I probably shouldn't have included the diseases-in-the-water example because it uses both effects. It's the latter that I had in mind as common to the examples I listed (as well as, of course, Clarity's original proposal).

Replies from: Jiro
comment by Jiro · 2015-11-06T18:55:05.377Z · LW(p) · GW(p)

Unless they're 12 years old, losing a couple of dollars is not really all that damaging.

Replies from: gjm
comment by gjm · 2015-11-06T20:45:16.827Z · LW(p) · GW(p)

I bet the actual gain in wisdom, relative to just telling them "don't do that", is in proportion to the damage.

comment by ChristianKl · 2015-11-06T15:33:03.479Z · LW(p) · GW(p)

If smart contracts were implementable

Ethereum does have smart contracts.

Replies from: None
comment by [deleted] · 2015-11-06T23:35:40.359Z · LW(p) · GW(p)

No they don't. They're developing a platform for them. They're not usable outside of alleged mathematical proof-of-concept-type stuff that has already been rubbished by academics. What they have is a great approach to marketing, a supportive community, venture capitalists, and a house of cards for now. It may turn into something, but they haven't done enough that I wouldn't bet on the next outsider that comes along with proper cryptological credentials.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-07T00:12:45.170Z · LW(p) · GW(p)

They launched on 30 July 2015.

They have enough that the prediction market Augur gets built on their technology. Both http://slock.it/ and http://airlock.me/ build smart locks with Ethereum.

Replies from: None
comment by [deleted] · 2015-11-07T12:20:44.261Z · LW(p) · GW(p)

Take a look at Slockit or Airlock's page. Can you buy or implement their system? No.

We are developing a framework...

They are working on developing something contingent on something that doesn't exist.

Ethereum launched as a currency already, but that doesn't mean it has implemented its claim to value - smart contracts. It's borderline a scam, dare I say. That, or there is something seriously wrong about the way they are conducting themselves. Altcoins really have been underwhelming as a whole since Bitcoin - and even Bitcoin is so poorly integrated into existing financial infrastructure that it's not an immediately tractable concept, and until it is... I'd rather not hedge my bank account into something that may very well end in catastrophic failure - some day there will probably be an "attack" which destroys a virtual coin economy and networks and delegitimises the movement, killing confidence in the concept.

Augur doesn't actually implement smart contracts in a prediction market, not yet at least. The best solution we have to that problem is "oracles", which are somewhat decentralised but not at all original and not at all trustless smart contracts.

edit: btw, Augur specifically is a fork of ethereum. It's a big way of saying fuck you and thanks for the code, basically. Consider for example the case of anoncoin planning to fork into zerocash - zerocash will entirely dominate anoncoin in theory, so all those people who hold anoncoin can basically kiss their holdings goodbye thanks to their developers defecting to the new technology. If Augur is successful, what can Augur holders do that Ethereum holders can't? Everything. Coins of any currency become redundant when their selling point is a gimmick for which there are low barriers of entry to improving upon their inherent value (imagine being able to easily alchemise a more useful, prettier version of gold, then sell it for less than gold initially - certainly doesn't make gold feel like a good investment - particularly as in the case of Ethereum where the use is merely aspirational...)

Replies from: ChristianKl, ChristianKl
comment by ChristianKl · 2015-11-07T16:10:48.353Z · LW(p) · GW(p)

and even Bitcoin is so poorly integrated into existing financial infrastructure that it's not an immediately tractable concept, and until it is... I'd rather not hedge my bank account into something that may very well end in catastrophic failure

Of course investing in new technology is always risky.

Altcoins really have been underwhelming as a whole since bitcoin

I haven't heard of any altcoin besides Ethereum that provides a significant benefit over Bitcoin, so it's natural that other altcoins don't lead anywhere.

edit: btw, Augur specifically is a fork of ethereum.

I haven't read anything about Augur wanting to fork ethereum. Where did you get that idea? The FAQ says that Augur wants to run on Ethereum (https://augur.zendesk.com/hc/en-us)

From Augur's perspective I don't think it makes much sense to fork Ethereum. There's more security against attacks without forking. Augur also profits from having Ether as the core currency that's paid inside the network rather than having a separate currency for that.

imagine being able to easily alchemise a more useful prettier version of gold, then sell it for less than gold initially

None of the direct Bitcoin forks are more useful than Bitcoin because of network effects. The same is true for Ethereum. Ethereum is better than Bitcoin because it allows applications like Augur to run on it. It's more difficult to legally attack Augur when it runs on Ethereum instead of its own network.

If you are a company renting out lockers, then it's useful to have the payment for the lockers be in Ether instead of your own currency XEther, because Ether is a currency for which other applications exist.

Sooner or later there will be a SilkRoad-type service on Ethereum. In contrast to a lot of Bitcoin markets, the market owner won't be able to run away with the money in that market.

Consider for example the case of anoncoin planning to fork into zerocash - zero cash will entirely dominate anoncoin in theory so all those people who hold anoncoin can basically kiss their holdings goodbye thanks to their developers defecting to the new technology.

I don't see a huge market for those currencies. If trades happen in Ethereum-based markets, it's quite easy to provide a trustworthy mixer where the trust is based on an Ethereum escrow that internally uses zerocash.

edit:

There's no good reason to edit instead of writing a new post to continue the conversation.

Replies from: None
comment by [deleted] · 2015-11-08T02:13:37.627Z · LW(p) · GW(p)

Hey, I was thinking earlier this morning that lots of my reasoning above has been emotionally motivated after losing ~$110 in transaction fees cashing in what would otherwise have been a profitable trade in bitcoins haha.

I'm gonna come back and reply to this if/once I'm thinking more objectively

comment by ChristianKl · 2015-11-07T12:28:23.024Z · LW(p) · GW(p)

Take a look at Slockit or Airlock's page. Can you buy or implement their system? No.

No, but Ethereum works well enough that they are in the process of building their system.

Augur doesn't actually implement smart contracts in a prediction market, not yet at least.

Smart contracts are a building block of what Augur is doing.

The application of dollar auctions is much simpler. You don't need any 'oracles' for it.

comment by ike · 2015-01-29T19:58:21.281Z · LW(p) · GW(p)

Isn't this an attack against something called SchellingCoin, not Bitcoin?

Replies from: Lumifer
comment by Lumifer · 2015-01-29T20:21:30.335Z · LW(p) · GW(p)

If you read till the end, there is a scenario for how this attack could be used against Bitcoin.
