Capturing Uncertainty in Prediction Markets
post by hawkebia (hawk-ebia) · 2022-02-24T17:23:30.682Z · LW · GW · 7 comments
I've been trying to make sense of what a 50% prediction means. Does it convey different meanings depending on the question asked, and is it really as uninformative as it's made out to be? As part of that discussion, I believe there is a possible solution to two big problems prediction markets currently face:
- How to extract useful signal from 50% predictions
- How to incentivize those with low certainty to participate
Predictions tend to consist of two steps:
- What do I believe is going to happen?
- How certain am I?
There are many cases where even if one has a sense of what is going to happen, there is little to no certainty that it will. The options here are skipping the question (not placing a bet), or putting in 50% if you're forced to choose.
I think it's generally understood that using 50% to represent "I don't know" is problematic. Picking 50% also holds an entirely different meaning for the question "Will this coin toss result in heads?" vs. "Will China invade Taiwan this year?".
Similarly, for an observer of the market, does a price of 50% really represent 1:1 odds? Or is it an indication that the market is extremely uncertain? There seems to be a useful signal missing here.
Finally, given that some of the most interesting questions in prediction markets are also about the most uncertain events, many people avoid participating or betting altogether. This is the opposite of what we want if these markets are to be useful.
A Possible Solution
There are two types of uncertainty, or "I don't know":
- I don't know enough about this, so I won't bet
- I've studied and researched this subject, and I still don't know, so I won't bet
Incentivizing the first type to place bets would add noisy signal. But capturing the second type of "I don't know" seems to be a pretty important signal about the studied uncertainty of the event.
There have been some ideas about how to encourage more participation, like providing interest-free loans as an incentive. But I think that suffers from a few problems. First, it doesn't distinguish between noisy and studied signals. Second, it incentivizes picking the option with more upside. And third, it encourages exactly the most horrible and awful kind of punditry: picking an opinion with certainty even though you have no idea.
So, is there a way to capture studied uncertainty as useful signal, AND incentivize the participation of those who are highly uncertain?
I think this could be accomplished with an Uncertainty Index coupled with every question, on which one can place bets. The index moves up and down based on what percentage of people/money interacting with the question place a bet on the price/outcome vs the Uncertainty Index itself.
If more people are placing bets on uncertainty than on making a prediction, I make money.
In some ways, it would represent a kind of volatility index, but not exactly. Speaking as someone only cursorily familiar with derivatives, this seems like it would be only partially tied to the existing price and the direction the price takes. And to whatever extent it is tied, it would present a way to hedge bets on the original question.
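To make the mechanism concrete, here is a minimal sketch of one way the index could be priced, assuming it simply tracks the share of money on a question that is bet on "uncertain" rather than on an outcome. All the names here (`QuestionPool`, `uncertainty_index_price`) are hypothetical; this is not how Manifold or any existing platform actually works.

```python
from dataclasses import dataclass

@dataclass
class QuestionPool:
    """Hypothetical pools of money bet on a single question."""
    yes: float        # money bet on the YES outcome
    no: float         # money bet on the NO outcome
    uncertain: float  # money bet on the Uncertainty Index itself

def uncertainty_index_price(pool: QuestionPool) -> float:
    """One possible pricing rule: the index is the fraction of all money on
    this question that is betting on uncertainty rather than on an outcome."""
    total = pool.yes + pool.no + pool.uncertain
    return pool.uncertain / total if total > 0 else 0.0

# Example: half the money interacting with the question is hedging into
# "uncertain", so the index sits at 0.5.
pool = QuestionPool(yes=300.0, no=200.0, uncertain=500.0)
print(uncertainty_index_price(pool))  # 0.5
```

Under this toy rule, a bet on the index pays off when the share of "uncertain" money rises after you enter, matching the "if more people are placing bets on uncertainty than on making a prediction, I make money" intuition above.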
There is at least some evidence that this would work, based on a few of the meta questions on Manifold. Here is an example related to Russia-Ukraine:
- Will Russia invade Ukraine before the end of February?
- Will there be an edge case where it is hard to determine if Russia has invaded the Ukraine before March?
To implement this, Manifold could make their options 'YES', 'NO', and 'UNCERTAIN', which would make it quite intuitive to place the different types of bets.
The ability to place bets on an Uncertainty Index, or something similar that captures the core concept behind it, has the potential to encourage a lot more participation while also capturing an important signal of predictions: doubt that a meaningful prediction is even possible.
7 comments
comment by SimonM · 2022-02-25T11:51:38.039Z · LW(p) · GW(p)
There are a bunch of different metrics which you could look at on a prediction market / prediction platform to gauge how "uncertain" the forecast is (a rough sketch of computing some of these follows the list):
- Volatility - if the forecast is moving around quite a bit, there are two reasons:
- Lots of new information arriving and people updating efficiently
- There is little conviction around "fair value" so traders can move the price with little capital
- Liquidity - if the market is 49.9 / 50.1 in millions of dollars, then you can be fairly confident that 50% is the "right" price. If the market is 40 / 60 with $1 on the bid and $0.50 on the offer, I probably wouldn't be confident the probability lies between 40 and 60, let alone that "50% is the right number". (The equivalent on prediction platforms might be number of forecasters, although CharlesD has done some research on this [EA · GW] which suggests there's little additional value being added by large numbers of forecasters)
- "Spread of forecasts" - on Metaculus (for example) you can see a distribution of people's forecasts. If everyone is tightly clustered around 50% that (usually) gives me more confidence that 50% is the right number than if they are widely spread out
↑ comment by hawkebia (hawk-ebia) · 2022-02-25T13:13:49.617Z · LW(p) · GW(p)
All these indicators are definitely useful for a market observer. And betting on these indicators would make for an interesting derivatives market - especially on higher volume questions. The issue I was referring to is that all these indicators are still only based on traders who felt certain enough to bet on the market.
Say 100 people who have researched East-Asian geopolitics saw the question "Will China invade Taiwan this year?". 20 did not feel confident enough to place a bet. Of the remaining 80 people, 20 bet small amounts because of their lack of certainty.
The market and most of the indicators you mentioned would be dominated by the 60 that placed large bets. A LOT of information about uncertainty would be lost. And this would have been fairly useful information about the event.
The goal would be to capture the uncertainty signal of the 40 that did not place bets, or placed small bets. One way to do that would be to make "uncertainty" itself a bettable property of the question. And one way to accomplish that would be to bet on what percentage of bets are on "uncertainty" vs. a prediction.
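As a concrete sketch of the quantity such a paired market would settle on, using the hypothetical numbers above (the function name and the amounts are illustrative only):

```python
def uncertainty_share(n_uncertain: int, n_predicting: int) -> float:
    """Fraction of participants who bet on "uncertain" rather than on an outcome."""
    total = n_uncertain + n_predicting
    return n_uncertain / total if total else 0.0

# 100 researchers see the question; 20 stay out, 20 bet small, 60 bet large.
# If only the 20 who stayed out instead bet on uncertainty:
print(uncertainty_share(n_uncertain=20, n_predicting=80))  # 0.2
# If the 20 low-conviction bettors join them:
print(uncertainty_share(n_uncertain=40, n_predicting=60))  # 0.4
```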
↑ comment by SimonM · 2022-02-25T16:51:30.112Z · LW(p) · GW(p)
And one way to accomplish that would be to bet on what percentage of bets are on "uncertainty" vs. a prediction.
How do you plan on incentivising people to bet on "uncertainty"? All the ways I can think of lead to people either gaming the index, or turning uncertainty into a KBC (Keynesian beauty contest).
↑ comment by SimonM · 2022-02-25T16:48:28.644Z · LW(p) · GW(p)
The market and most of the indicators you mentioned would be dominated by the 60 that placed large bets
I disagree with this. Volatility, liquidity, # predictors, spread of forecasts will all be affected by the fact that 20 people aren't willing to get involved. I'm not sure what information you think is being lost by people stepping away? (I guess the difference between "the market is wrong" and "the market is uninteresting"?)
↑ comment by hawkebia (hawk-ebia) · 2022-02-26T02:50:25.880Z · LW(p) · GW(p)
What is being lost is related to your intuition in the earlier comment:
if the market is 49.9 / 50.1 in millions of dollars, then you can be fairly confident that 50% is the "right" price.
Without knowing how many people of the "I've studied this subject, and still don't think a reasonable prediction is possible" variety didn't participate in the market, it's very hard to place any trust in it being the "right" price.
This is similar to the "pundit" problem, where you only hear from the most opinionated people. If 60 nutritionists are on TV and writing papers saying eating fats is bad, you may draw the "wrong" conclusion from that, because, unknown to you, 40 nutritionists believe "we just don't know yet". And these 40 are given no incentive to say so.
Take the Russia-Kiev question on Metaculus, which had a large number of participants. It hovered at 8% for a long time. If prediction markets are to be useful beyond just pure speculation, that market didn't tell me how many knowledgeable people thought an opinion was simply not possible.
The ontological skepticism signal is missing: people saying there is no right or wrong answer that "exists", that we just don't know. So be skeptical of what this market says.
As for KBC: most markets allow you to change or sell your bet before the event happens, especially for longer-term events. So my guess is that this is already happening. In fact, the uncertainty index would separate out much of the "What do other people think?" element into its own question.
For locked in markets like ACX where the suggestion is to leave your prediction blank if you don't know, imagine every question being paired with "What percentage of people will leave this prediction blank?"
comment by Carlos Javier Gil Bellosta (carlos-javier-gil-bellosta) · 2022-02-24T21:33:40.928Z · LW(p) · GW(p)
First, I want to dispute the statement that a 50% prediction is uninformative. It can be very informative depending on the value of the outcomes. E.g., if I am analyzing transactions looking for fraud, a transaction with a 50% predicted probability of being fraudulent is "very informative": most fraudulent transactions may have fraud probabilities much, much lower than that.
Second, it is true that beliefs on probabilities need not be "sharp". The Bayesian approach to the problem (which is in fact the very problem that Bayes originally discussed!) would require you to provide a distribution of your "expected" (I want to avoid the terms "prior" or "subjective" explicitly here) probabilities. Such distribution could be more or less concentrated. The beta distribution could be used to encode such uncertainty; actually, it is the canonical distribution to do so. The question would remain how to operationalize it in a prediction market, particularly from the UX point of view.
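To illustrate that last point, here is a small sketch (using scipy; the parameter values are arbitrary examples) of how a Beta distribution can encode two beliefs that both have a mean of 50% but very different amounts of studied uncertainty:

```python
from scipy import stats

# Both beliefs have mean 0.5, but very different confidence about that 50%.
confident = stats.beta(a=50, b=50)  # "I've studied this; it really is near a coin flip."
vague     = stats.beta(a=1, b=1)    # uniform: "the probability could be almost anything."

for name, dist in [("confident", confident), ("vague", vague)]:
    lo, hi = dist.interval(0.9)     # central 90% credible interval for the probability
    print(f"{name}: mean={dist.mean():.2f}, 90% interval=({lo:.2f}, {hi:.2f})")
```

How to let a single market price carry this second dimension is exactly the UX question raised above.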
↑ comment by hawkebia (hawk-ebia) · 2022-02-25T01:54:22.393Z · LW(p) · GW(p)
First, I want to dispute the statement that a 50% is uninformative. It can be very informative depending on value of the outcomes.
Yes, absolutely. 50% can be incredibly useful. Unfortunately, it also represents the "I don't know" calibration option in most prediction markets. A market at 50% for "Will we discover a civilization ending asteroid in the next 50 years?" would be cause for much concern.
Is the market really saying that discovering this asteroid is essentially a coin flip with 1:1 odds? More likely it just represents the entire market saying "I don't know". It's these types of 50% that are considered useless, but I think they do still convey information - especially if saying "I don't know" is an informed opinion.
The Bayesian approach to the problem (which is in fact the very problem that Bayes originally discussed!) would require you to provide a distribution of your "expected" (I want to avoid the terms "prior" or "subjective" explicitly here) probabilities
I think there might be an ontological misunderstanding here? I fully agree that one's expectations are often best represented by a non-normal distribution of outcomes. But this presumes that such a distribution "exists"? If it does, then one way to capture it would be to place multiple bets at different levels, like one does with options on a stock. Metaculus already captures this distribution for the market as a whole - but only for those who were confident and certain enough to place bets.
My suggestion is to also capture signal from those with studied uncertainty who don't feel comfortable placing bets on ANY distribution. It's not that their distribution is flat - it's that for them a meaningful distribution does not exist. Their belief is "doubt that a meaningful prediction is even possible".