Prediction Markets are for Outcomes Beyond Our Control

post by lsusr · 2022-02-09T09:54:40.891Z · LW · GW · 23 comments

Betting markets are the gold standard of expert predictions because bets are the ultimate test of what people truly believe.

The best betting markets are highly liquid. A liquid market is one where you can place a large bet without moving the price very much. Liquid prediction markets work when no individual person can influence the outcome. Betting markets are a great way to find out whether "it will rain tomorrow" or whether "the candidate will be elected president next year".

But what if a single person can influence the outcome? For example, what would happen if I created a betting market for "Lsusr will publish a blog post tomorrow"?

Suppose I am ambivalent about whether I will publish a blog post tomorrow. If the price of "Lsusr will publish a blog post tomorrow" drops below 1.00 then I will buy shares of "Lsusr will publish a blog post tomorrow" and pocket a risk-free profit by posting a blog post tomorrow. If the price of "Lsusr will publish a blog post tomorrow" rises above 0.00 then I will buy shares of "Lsusr will not publish a blog post tomorrow" and pocket a risk-free profit by not posting a blog post tomorrow.
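A rough sketch of this arbitrage, with purely illustrative prices and share counts (the helper function below is my own framing; shares are assumed to pay 1.00 if their outcome occurs):

    # Illustrative sketch: profit available to someone who controls the outcome.
    # Assumption: a share pays 1.00 if its outcome occurs, 0.00 otherwise.

    def controller_profit(yes_price: float, shares: float, will_publish: bool) -> float:
        """Profit from buying the side of the market I intend to make come true."""
        if will_publish:
            # Buy "will publish" at yes_price, then publish: each share pays 1.00.
            return shares * (1.00 - yes_price)
        else:
            # Buy "will not publish" at (1.00 - yes_price), then don't publish.
            return shares * (1.00 - (1.00 - yes_price))

    # If the "will publish" price drops to 0.90, buy yes shares and publish:
    print(round(controller_profit(yes_price=0.90, shares=100, will_publish=True), 2))   # 10.0
    # If it sits at 0.10, buy no shares and stay silent:
    print(round(controller_profit(yes_price=0.10, shares=100, will_publish=False), 2))  # 10.0

Either way, whoever controls the outcome collects the gap between the price and certainty.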

This equilibrium holds even if I am unaware that the prediction market exists. Suppose the price of "Lsusr will publish a blog post tomorrow" drops to 0.99. A trader could buy shares and then pay me a small fee to publish, pocketing the difference.

I am not truly ambivalent. Suppose I'm willing to influence the outcome in exchange for $500. What happens? If the market liquidity is less than $500 then we have a functional prediction market. If the market liquidity is more than $500 then we have a regular market.
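A minimal sketch of that threshold, assuming we read "liquidity" as roughly how much an outcome-controller could extract before moving the price to 0 or 1 (the function name and the $500 default are just the example above):

    # Minimal sketch: classify the market per the argument above.
    def market_regime(liquidity: float, price_of_influence: float = 500.0) -> str:
        if liquidity < price_of_influence:
            # Not worth anyone's while to change the outcome:
            # prices reflect genuine forecasts.
            return "functional prediction market"
        # Profitable to buy the outcome itself: prices reflect willingness
        # to pay, like an ordinary market for goods and services.
        return "regular market"

    print(market_regime(100))     # functional prediction market
    print(market_regime(10_000))  # regular market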

Prediction markets function best when liquidity is high, but they break completely if the liquidity exceeds the price of influencing the outcome. Prediction markets function only in situations where outcomes are expensive to influence.

23 comments

Comments sorted by top scores.

comment by hamnox · 2022-02-09T15:21:57.936Z · LW(p) · GW(p)

So... what I'm getting is that prediction markets will be just as annoying but necessary to police for insider trading as the stock market? Alas.

Replies from: Yoav Ravid, abramdemski
comment by Yoav Ravid · 2022-02-09T19:29:16.951Z · LW(p) · GW(p)

Not exactly. You do want people who have insider knowledge to contribute (say, Lsusr's friend, who knows him well and has a better guess at whether he will post). But you don't want people abusing their influence (rather than knowledge) to buy in and then tip things away from what the rest of the market thinks will happen, or trying to make sure something does happen just because they already bet on it.

comment by abramdemski · 2022-02-09T16:14:13.763Z · LW(p) · GW(p)

Yep. 

comment by SimonM · 2022-02-09T10:25:20.319Z · LW(p) · GW(p)

Prediction markets function best when liquidity is high, but they break completely if the liquidity exceeds the price of influencing the outcome. Prediction markets function only in situations where outcomes are expensive to influence.

 

There are a ton of fun examples of this failing:

comment by Dagon · 2022-02-09T15:49:36.936Z · LW(p) · GW(p)

I think one of the strengths of prediction markets is that they ARE markets.  Your participation in a market you control (by buying "will post" shares) makes it more accurate, just like any other good predictor.  And if someone knows you behave this way, they can use their own capital to keep the price artificially high, reducing your incentive to post.

People who want things to be "fair" hate this.  People who want to maximize information love it.

Replies from: tailcalled
comment by tailcalled · 2022-02-09T17:11:43.816Z · LW(p) · GW(p)

People who want to maximize information love it.

If by "maximize information", you mean "make the world more predictable", I think this is wrong and sometimes the exact opposite of right. When all of the outcomes are equally cheap to boost, prediction markets incentivize increasing the likelihood of the least likely outcome (because this is where you get the best odds), making the world less predictable.

Replies from: Dagon
comment by Dagon · 2022-02-09T17:25:31.516Z · LW(p) · GW(p)

"equally cheap to boost" is explicitly NOT the result of markets.  The true prediction is cheapest (in that it's profitable).  In cases where the market can influence the outcome, it's cheapest to encourage the outcome that actually wins (i.e. truth), and it's cheapest when there is less weight against it.

The key to markets is that "weight" of prediction/vote/influence is inversely proportional to risk/cost of failure.   It costs a lot to move the market significantly, so it better be worth it.

Replies from: tailcalled, Pattern
comment by tailcalled · 2022-02-09T17:38:14.850Z · LW(p) · GW(p)

Suppose a prediction market predicts 1% probability that I will make a post tomorrow. In that case, I can earn 100x returns by buying some bets and making the post anyway. So I buy a bunch of bets - but that of course changes the odds, and so changes my incentives. As it reaches 50% probability, I can only earn 1x returns by buying more bets, which is still something, but I might hold off, since I've already captured the vast majority of the possible value, and something might come up tomorrow which makes it impractical for me to write a post.

Suppose people pick up on the fact that the odds have risen, assume that I've participated in my own market, and conclude that I will be highly incentivized to write the post, so they bid it up to 99% probability. In that case, I can earn money by selling my bets, bringing the odds back down to 50% and leaving me in basically as good a position as if I had just written the post, while eliminating my incentive to actually write it.

In general, whenever there is a very low-probability event that some person in the market can influence, they can get extremely good odds by buying into the market, which gives them an incentive to cause that low-probability event to happen. However, the odds worsen as they bid it up, so they are generally only weakly incentivized to make it ~100% likely to happen. Instead, they are merely incentivized to cause chaos, to make low-probability events maybe happen.
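To make that incentive gradient concrete, here is an illustrative calculation (the prices are made up; "return" means profit per dollar staked, assuming a share pays 1.00 if the post happens and I can reliably make it happen):

    # Illustrative: marginal return to the poster from buying one more "yes"
    # share at a given price, assuming they then make the post happen.

    def return_if_i_cause_it(price: float) -> float:
        """Profit per dollar staked: pay `price`, collect 1.00 by posting."""
        return (1.0 - price) / price

    for price in (0.01, 0.10, 0.50, 0.99):
        print(f"price {price:.2f}: {return_if_i_cause_it(price):5.1f}x per dollar staked")
    # price 0.01:  99.0x   (huge incentive to cause the "unlikely" event)
    # price 0.10:   9.0x
    # price 0.50:   1.0x   (most of the value already captured)
    # price 0.99:   0.0x   (little incentive left; selling may now beat posting)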

Replies from: Dagon
comment by Dagon · 2022-02-09T19:50:27.745Z · LW(p) · GW(p)

The problem underlying this is lack of liquidity in the specific market.  When one or a few participants can cheaply have outsized impact, it's not a very functional market.

Replies from: tailcalled
comment by tailcalled · 2022-02-09T20:00:45.517Z · LW(p) · GW(p)

I don't see how the example changes when you add liquidity. Could you clarify, e.g. by tracing out a modified example?

Replies from: Dagon
comment by Dagon · 2022-02-09T21:30:25.105Z · LW(p) · GW(p)

I think so.  It's kind of a made-up example because so few people actually care whether you make a post, but say they did, and that you post somewhat rarely so 1% is a reasonable outside view.  You can earn great returns by buying "will post" and then posting.  And when you do, others will notice and buy it up, as you say.  You'll keep buying the "yes" side until you're risk-neutral that something will prevent you from posting (because you've invested enough that you're uncomfortable, or the price is at your true belief that you'll post).  

If it gets to 99.9%, you STILL can't make a profit by switching sides - you're too heavily invested on the "yes".  

So the market ends up correct - very high probability of a post.

Replies from: tailcalled
comment by tailcalled · 2022-02-10T08:35:54.231Z · LW(p) · GW(p)

If it gets to 99.9%, you STILL can't make a profit by switching sides - you're too heavily invested on the "yes".

Why not? If you sell your yes shares, that seems like a guaranteed profit to me? Are you assuming very large transaction costs or something?

Like let's make it dead-simple:

  1. The market sells contracts that pay $100 if you make the post, and pay nothing otherwise.
  2. Initially the market price is at $1. This means you can earn $99 by buying a contract and then making a post, which sounds like a cost-effective way of spending your time, so you buy a contract. This increases the price, maybe to $2, maybe to $50, depending on the liquidity. (It sounds like you prefer the high-liquidity limit?)
  3. The market somehow finds out about your plan, maybe by watching the prices, maybe because you announced your intention on social media, maybe for some other reason, so they conclude that you will make the post and therefore bid up the contract to $99.
  4. Now you could make the post and earn a total profit of $99. But you could also just sell the contract at the current market price, yielding a guaranteed profit of $98, which is less risky and requires less work.

It sounds like you are saying that this story fails at step 4, because one couldn't make a profit in this circumstance due to being "too heavily invested on the "yes"". But I don't see how you're too heavily invested in the yes; it's true that you've got a "yes" contract, but you bought it at $1 and the market price is now $99, so you can sell it and make a profit of $99-$1=$98.
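For concreteness, here are the two options at step 4 with the hypothetical prices above (a contract paying $100, bought at $1, now trading at $99):

    # Hypothetical numbers from the example: contract pays $100 if the post is
    # made, was bought at $1, and currently trades at $99.
    PAYOUT = 100.0
    buy_price = 1.0
    market_price = 99.0

    profit_if_post = PAYOUT - buy_price        # write the post:    100 - 1 = 99
    profit_if_sell = market_price - buy_price  # sell the contract:  99 - 1 = 98

    print(profit_if_post, profit_if_sell)  # 99.0 98.0
    # Selling earns only $1 less, with no work and no risk that something
    # prevents the post. Once the price is bid up, the incentive to actually
    # write the post largely disappears.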

Replies from: Dagon
comment by Dagon · 2022-02-10T15:41:46.835Z · LW(p) · GW(p)

Actually, that is a fair example, and I have changed my opinion.  Where there is actual control (not just asymmetrical information), prediction markets don't work.  This may generalize to prediction markets not working where prediction is impossible.  For instance, betting on the flip of a biased coin won't make good predictions unless some participants know the bias.

comment by Pattern · 2022-02-09T23:07:49.372Z · LW(p) · GW(p)

What? What is the "true prediction"?


In cases where the market can influence the outcome, it's cheapest to encourage the outcome that actually wins

This looks like a recursive definition, with no base case.

comment by iceman · 2022-02-09T21:41:04.869Z · LW(p) · GW(p)

Isn't this Moldbug's argument in the Moldbug/Hanson futarchy debate?

(Though I'd suggest that Moldbug would go further and argue that the overwhelming majority of situations where we'd like to have a prediction market are ones where it's in the best interest of people to influence the outcome.)

Replies from: Pattern
comment by Pattern · 2022-02-09T23:10:04.605Z · LW(p) · GW(p)

Doesn't that argument prove too much?

comment by aphyer · 2022-02-09T16:23:28.301Z · LW(p) · GW(p)

To some extent this is because our definition of 'function' is somewhat more complicated than 'predict well'.

If by 'function' you mean 'successfully predict an outcome', the example above is great! We started out unsure if you would publish a blog post, then you entered the market, and now we are certain of the result! Hooray!

But in practice we have a fuzzier definition of 'function' along the lines of 'predict the outcome as accurately as you can without actually affecting it', and prediction markets suffer an AI-alignment-esque issue in pursuing this goal.

Replies from: tailcalled
comment by tailcalled · 2022-02-09T17:16:30.149Z · LW(p) · GW(p)

But in practice we have a fuzzier definition of 'function' along the lines of 'predict the outcome as accurately as you can without actually affecting it', and prediction markets suffer an AI-alignment-esque issue in pursuing this goal.

Part of the issue is that "predict the outcome as accurately as you can without actually affecting it" is a causal concept, but prediction markets can't do anything about causality because they can't model counterfactuals such as what would have happened if someone was not allowed to make the bet. This makes it a fundamentally unsolvable problem if one is restricted to purely mechanical rules like those of prediction markets.

If by 'function' you mean 'successfully predict an outcome', the example above is great! We started out unsure if you would publish a blog post, then you entered the market, and now we are certain of the result! Hooray!

Note that this doesn't work in general; see my response to Dagon.

comment by Scott Garrabrant · 2022-02-10T07:44:30.317Z · LW(p) · GW(p)

I think this is a special case of the more general fact that probabilities are for outcomes beyond our control.

comment by Pattern · 2022-02-09T23:12:32.306Z · LW(p) · GW(p)

So suppose there is a desired outcome, like lsusr writing posts. How can that be incentivized?

Or to put it another way, how can accuracy (and greater yields) both be incentivized?

Replies from: lsusr
comment by lsusr · 2022-02-10T00:20:20.835Z · LW(p) · GW(p)

So suppose there is a desired outcome, like lsusr writing posts. How can that be incentivized?

You buy shares in the outcome you don't want. [LW · GW]

Replies from: Dagon
comment by Dagon · 2022-02-10T00:47:24.601Z · LW(p) · GW(p)

In most cases, there are more direct ways to pay for what you want.  In cases where the controller is obscured or partial, you can look at encouragement and hedging as related: you want to make bets that make the dis-preferred outcome less painful for you. 

comment by jbash · 2022-02-09T14:16:02.691Z · LW(p) · GW(p)

Betting markets are the gold standard of expert predictions

Citation needed.