Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems)

post by NunoSempere (Radamantis) · 2020-12-03T22:00:26.889Z · LW · GW · 28 comments

Contents

  Introduction
  Fake polls by PredictIt forecasters
  Stock markets
  US election
  Ebola forecast may have run into fixed-point problem
  ReplicationMarkets participants may have tried to cheat a Keynesian beauty contest
  Superforecasters learning to choose easier questions
  Surnames as a mechanism of control and taxation
  Conclusion

Thanks to Ozzie Gooen for reviewing this post.

Introduction

The Parable of the Predict-O-Matic [LW · GW] is a short story about a forecasting system that is ostensibly set up to maximize predictive accuracy, but which ends up interfering with the world in unintended ways. In the original story, some of these problems were:

“Its answers will shape events. If it says stocks will rise, they'll rise. If it says stocks will fall, then fall they will. Many people will vote based on its predictions.”

“You keep thinking of the line from Orwell's 1984 about the boot stamping on the human face forever, except it isn't because of politics, or spite, or some ugly feature of human nature, it's because a boot stamping on a face forever is a nice reliable outcome which minimizes prediction error.”

“Suppose you have a prediction market that's working well. It makes good forecasts, and has enough money in it that people want to participate if they know significant information. Anything you can do to shake things up, you've got a big incentive to do. Assassination is just one example. You could flood the streets with jelly beans. If you run a large company, you could make bad decisions and run it into the ground, while betting against it -- that's basically why we need rules against insider trading, even though we'd like the market to reflect insider information." 

“You understand what you are. It isn't quite right to say you are the Predict-O-Matic. You are a large cluster of connections which thinks strategically. You generate useful information, and therefore, the learning algorithm keeps you around. You create some inaccuracies when you manipulate the outputs for any purpose other than predictive accuracy, but this is more than compensated for by the value which you provide.”

Below, I give some real-life examples of these problems, though some are speculative.

Previous work:

Fake polls by PredictIt forecasters

Example of: Markets for entropy.

PredictIt traders created fake polls to fool and troll other forecasters and the media, per FiveThirtyEight’s Fake Polls Are A Real Problem. Quoting liberally from the article:

Delphi Analytica released a poll fielded from July 14 to July 18. Republican Kid Rock earned 30 percent to Sen. Debbie Stabenow’s 26 percent. A sitting U.S. senator was losing to a man who sang the lyric, “If I was president of the good ol’ USA, you know I’d turn our churches into strip clubs and watch the whole world pray.”

the poll was quickly spread around the political sections of the internet. [...] There was just one problem: Nobody knew if the poll was real. Delphi Analytica’s website came online July 6, mere weeks before the Kid Rock poll was supposedly conducted. The pollster had basically no fingerprint on the web.

...some PredictIt users started gathering in a chat room on Discord, a voice and text application often used by gamers, to talk politics and betting. McDonald shared screenshots from that chat room, where a person going by the screen name “Autismo Jones,” who claimed to have started Delphi Analytica, bragged about the publicity the Kid Rock poll was receiving. Jones, apparently reacting to an email I had sent to Delphi, wrote, “we dont [sic] need Harry Enten. we got governors tweeting out our polls. we are already famous.”

McDonald believes that “Jones” and whoever may have helped him or her did so for two reasons. The first: to gain notoriety and troll the press and political observers. (The message above seems to support that theory.) The second: to move the betting markets. That is, a person can put out a poll and get people to place bets in response to it — in this case, some people may have bet on a Kid Rock win — and the poll’s creators can short that position (bet that the value of the position will go down). In a statement, Lee said Delphi Analytica was not created to move the markets. Still, shares of the stock for Michigan’s 2018 Senate race saw their biggest action of the year by far the day after Delphi Analytica published its survey.

The price for one share — which is equivalent to a bet that Stabenow will be re-elected — fell from 78 cents to as low as 63 cents before finishing the day at 70 cents. (The value of a share on PredictIt is capped at $1.) McDonald argued that the market motivations were likely secondary to the trolling factor, but the mere fact that the markets can be so easily manipulated is worrisome.

In this case, Delphi Analytica’s claims may have made Kid Rock more seriously consider entering the Michigan Senate race. He retweeted the results, after all. And while the singer has not made any official moves toward running for Senate, such as filing a statement of candidacy, it wasn’t too long after Delphi Analytica published its poll that Kid Rock said he’d take a “hard look” at a Senate bid and that former New York Gov. George Pataki endorsed him.

(the story then continues).

The paper Fake Polls, Real Consequences: The Rise of Fake Polls and the Case for Criminal Liability contains many more examples in pages 140 to 150 (13 to 23 of the linked pdf):

a PredictIt user seeking to purchase a futures contract on the outcome of the Republican primary in Alabama’s 2017 special U.S. Senate election who comes across a poll predicting a result of that exact election, allegedly conducted by CSP Polling, might reasonably consider that poll in their purchasing decision – even if they do not know that CSP lacks a track record or any indicia of reliability.  And given the speed with which PredictIt users buy and sell contracts, a user seeing this information might reasonably conclude that if she is to use this information to her benefit, she needs to act quickly.

CSP Polling – which, according to University of Florida political science professor Michael McDonald and Jeff Blehar of the National Review, stands for “Cuck Shed Polling” – alleged that it conducted polls in the 2017 special congressional election in Montana, the special congressional election in Georgia, and the Virginia Democratic primary for Governor. Even after being identified in FiveThirtyEight as a fake pollster, CSP Polling continued to release polls, though the seriousness of the poll “releases” noticeably deteriorated in the year that followed.

Stock markets

Example of: Self-fulfilling prophecies, markets for entropy.

This example was mentioned in the original Predict-O-Matic story: "If it says stocks will rise, they'll rise." One sometimes sees this effect with companies Warren Buffett is rumored to be buying.

Additionally, hedge funds normally try to predict which companies will do better, but companies such as Third Point Management also exist:

New York magazine noted that Loeb's "preferred strategy" is to buy into troubled companies, replace inefficient management, and return the companies to profitability, which "is the key to his success." (source)

Further, rules against insider trading exist in order to avoid markets for entropy; otherwise the CEO of a company could profit by shorting its stock and running the company into the ground. More narratively satisfying: in Casino Royale, the villain buys put options on an experimental aerospace manufacturer, betting on the company's failure and then organizing a terrorist attack on its only experimental plane.

Outside the realm of fiction:

In July 2003, the U.S. Department of Defense publicized a Policy Analysis Market on their website, and speculated that additional topics for markets might include terrorist attacks. A critical backlash quickly denounced the program as a "terrorism futures market" and the Pentagon hastily canceled the program. (source, source)

US election

Example of: Fixed-point problems

Plausibly, in the 2016 election, overconfident win predictions for Hillary Clinton led to lower turnout, which led to her loss. Note that Trump got around 63M votes in 2016, and around 74M in 2020, whereas Democrats got 66M and 81M respectively.

This paper (available on sci-hub) makes a similar point (note in particular Figure 3, with two fixed points):

We see that the only way in which the pollster can arrive at a prediction that will coincide with the election result is by privately adjusting his poll results (which we assume for the moment to be an accurate estimate of I) for the effect that their publication will have upon the voters' behavior. But is even this possible? If he makes such an adjustment, will not the adjustment itself alter the effect of the prediction and again lead to its own falsification? Is there not involved here a vicious circle, where-by any attempt to anticipate the reactions of the voters alters those reactions and hence invalidates the prediction? 

It can be seen from the figure (and can be shown rigorously by another application of the fixed-point theorem) that there always exists at least one prediction, P1, with the following two properties: (a) the prediction, if published, will be confirmed, and (b) publication of the prediction will not change the outcome of the election (i.e., P1>50% only if I>50%). However, examination of the figure will show that there may also exist other values of P possessing the first property but not the second. If one of these latter predictions is published, it will be confirmed by the election result, but the candidate who would have won if no prediction had been published will be defeated.
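To make the fixed-point structure concrete, here is a minimal numerical sketch. The response curve below is an invented stand-in for the paper's Figure 3, not its actual data: it maps a published prediction of a candidate's vote share to the vote share that would then be realized, and a self-confirming prediction is any point where the curve crosses the diagonal.

```python
import numpy as np

def vote_share(p):
    """Toy response curve (an assumption, not taken from the paper): the
    published prediction p of a candidate's vote share creates a bandwagon
    effect that pulls the realized vote share towards the confident extreme."""
    return 0.5 + 0.4 * np.tanh(4 * (p - 0.5))

# A self-confirming prediction is a fixed point: vote_share(p) == p.
# Find approximate fixed points by looking for sign changes of the difference.
grid = np.linspace(0.0, 1.0, 100_001)
diff = vote_share(grid) - grid
crossings = grid[np.where(np.diff(np.sign(diff)) != 0)[0]]
print(np.unique(np.round(crossings, 2)))  # several crossings: several self-confirming forecasts
```

With a steep enough response curve there are multiple fixed points, which is exactly the situation the paper describes: more than one prediction would confirm itself, but only some of them leave the underlying outcome unchanged.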

This NYT article makes a similar point:

There’s an even more fundamental point to consider about election forecasts and how they differ from weather forecasting. If I read that there is a 20 percent chance of rain and do not take an umbrella, the odds of rain coming down don’t change. Electoral modeling, by contrast, actively affects the way people behave. 

In 2016, for example, a letter from the F.B.I. director James Comey telling Congress he had reopened an investigation into Mrs. Clinton’s emails shook up the dynamics of the race with just days left in the campaign. Mr. Comey later acknowledged that his assumption that Mrs. Clinton was going to win was a factor in his decision to send the letter. 

Similarly, did Facebook, battered by conservatives before the 2016 election, take a hands-off approach to the proliferation of misinformation on its platform, thinking that Mrs. Clinton’s odds were so favorable that such misinformation made little difference? Did the Obama administration hold off on making public all it knew about Russian meddling, thinking it was better to wait until after Mrs. Clinton’s assumed win, as has been reported?

Ebola forecast may have run into fixed-point problem

Example of: Fixed point problems.

A fatalistic Ebola forecast may have played a role in the epidemic being contained early, thereby undermining its own projection.

One forecast that gained particular attention during the epidemic was published in the summer of 2014, projecting that by early 2015 there might be 1.4 million cases. This number was based on unmitigated growth in the absence of further intervention and proved a gross overestimate, yet it was later highlighted as a “call to arms” that served to trigger the international response that helped avoid the worst-case scenario.

Source: Assessing the Performance of Real-Time Epidemic Forecasts: A Case Study of Ebola in the Western Area Region of Sierra Leone, 2014-15.
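For context on how such a projection is produced, here is a minimal sketch of an "unmitigated growth" extrapolation. The starting case count and doubling time below are illustrative assumptions, not the parameters of the actual 2014 model:

```python
# Toy "no further intervention" projection: pure exponential growth with a
# fixed doubling time. All numbers here are illustrative assumptions.
cases_now = 8_000          # assumed reported/estimated cases at forecast time
doubling_time_days = 21    # assumed doubling time
horizon_days = 150         # project roughly five months ahead

projected = cases_now * 2 ** (horizon_days / doubling_time_days)
print(f"{projected:,.0f} projected cases")  # on the order of a million

# Anything that lengthens the doubling time (including the response the
# forecast itself triggers) makes the projection self-defeating.
```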

ReplicationMarkets participants may have tried to cheat a Keynesian beauty contest

Example of: Markets for entropy.

ReplicationMarkets is an experiment to see whether the replication of papers can be predicted. It runs contests structured as a survey round, in which participants make predictions independently, followed by a market round, in which participants trade contracts.

Some of the papers are then chosen for replication, and the contracts resolve, giving some payouts to the participants. But this happens far in the future, and in the meantime, participants are also paid according to their predictions during the survey round. I suspect some participants coordinated to exploit this mechanism by jointly predicting something unlikely during the survey round:

Yes, the survey round is potentially a Keynesian beauty contest, though it takes some doing. You're not forecasting the market round. You're forecasting the best estimate we can make using peer prediction on the independent surveys. Harvard's peer prediction algorithm has done well in previous tests, and in theory takes a lot of coordination to defeat.

We got to test that a bit in Round 8 when we discovered a coordinated "attack" that accounted for ~1/3 of our surveys. Some forecasts would have changed, prizes would have been won, but neither so much as we feared. 

Source: Speculation, ReplicationMarkets newsletter, this comment [LW(p) · GW(p)].
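As a toy illustration of why a coordinated block of surveys matters (this is not Harvard's peer-prediction algorithm, just a deliberately crude stand-in): if survey-round payouts depend on an aggregate computed from the surveys themselves, a coalition controlling a large fraction of the surveys can drag that aggregate towards its own answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# 60 honest survey answers about a paper's replication probability...
honest = rng.normal(0.7, 0.05, size=60).clip(0, 1)
# ...plus a coalition controlling roughly a third of the surveys, all answering 0.1.
attacked = np.concatenate([honest, np.full(30, 0.1)])

def aggregate(answers):
    # Stand-in for the peer-prediction aggregate (here just a mean; the real
    # algorithm is designed to be harder to shift, but no aggregate is immune
    # to a large enough coordinated fraction).
    return answers.mean()

print(round(aggregate(honest), 2), round(aggregate(attacked), 2))
# Roughly 0.7 without the attack vs. roughly 0.5 with it: the coordinated
# surveys move the estimate that survey-round payouts would be based on.
```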

Superforecasters learning to choose easier questions

Example of: Other.

Tetlock explicitly mentions this in one of his Ten Commandments for Superforecasters: "Focus on questions where your hard work is likely to pay off," so Superforecasters learn not to forecast on the more intractable questions.

Surnames as a mechanism of control and taxation

Example of: Nudge towards legibility and predictability.

The introduction of surnames facilitated identification, taxation and statistical aggregation, and was often resisted by the local population. In this example, the prediction problem is usually “how much can the authorities tax or conscript?,” and the interference is forcing or incentivizing locals to adopt unambiguous name-surname combinations. 

One can see an example of this need in this scene from The Wire (the big guy is ironically called "Little Kevin", and the police can't identify him).

Source: The Production of Legal Identities Proper to States: The Case of the Permanent Family Surname (available on sci-hub):

The fixing of personal names, and, in particular, permanent patronyms, as legal identities seems, everywhere, to have been, broadly-speaking, a state project. As an early and imperfect legal identification, the permanent patronym was linked to such vital administrative functions as tithe and tax collection, property registers, conscription lists, and census rolls.

In many cultures, an individual's name will change from context to context and, within the same context, over time. It is not uncommon for a newborn to have had one or more name changes in utero in the event the mother's labor seemed to be going badly. Names often vary at each stage of life (infancy, childhood, adulthood, parenthood, old age) and, in some cases, after death. Added to these may be names used for joking, rituals, mourning, nicknames, school names, secret names, names for age-mates or same-sex friends, and names for in-laws.

...locally-kept census rolls have often under-reported the population (to evade taxes, corvée labor, or conscription) and understated both arable land acreage and crop yields.

The modern state — by which we mean a state whose ideology encompasses large-scale plans for the improvement of the population's welfare — requires at least two forms of legibility to be able to achieve its mission. First, it requires the capacity to locate citizens uniquely and unambiguously. Second, it needs standardized information that will allow it to create aggregate statistics about property, income, health, demography, productivity, etc.

Conclusion

Above are some real-life examples of prediction systems problematically interfering with the real world. More examples are welcome! In particular, I’d appreciate more examples of prediction systems making the world more predictable.

28 comments

Comments sorted by top scores.

comment by Ruby · 2020-12-06T03:43:02.000Z · LW(p) · GW(p)

Curated.

There's a certain challenge in articulating theories, but another challenge in showing that those theories are borne out in the real world. I really value this post for taking the contents of a highly upvoted post that only used a vivid illustration and showing that you actually see those things in the wild. It's confirming that the map actually matches the territory, and I'd love to see that happen for even more of the ideas developed on LessWrong. Kudos!

comment by Davidmanheim · 2020-12-07T14:24:38.023Z · LW(p) · GW(p)

Another possible example:

If we view markets as prediction systems, there is a great example of self-fulfilling prophecy in the form of the Black-Scholes option pricing model. Before its publication, option prices were very random and could be almost anywhere. Once a (supposedly) normative model for prices was available, people's willingness to trade converged to those prices fairly quickly.

(This simplifies slightly, because part of the B-S model was arbitrage, which allowed markets to reinforce these "correct" prices, but it's a useful example of when a prediction can stabilize the system.)
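For reference, here is a minimal implementation of the model in question (the standard Black-Scholes formula for a European call); the parameter values in the example are arbitrary:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call: spot S, strike K, time to
    expiry T in years, risk-free rate r, volatility sigma."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Arbitrary illustrative parameters; the point above is that once this formula
# was published, traded prices converged towards numbers like this one.
print(round(black_scholes_call(S=100, K=100, T=0.5, r=0.02, sigma=0.25), 2))
```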

Replies from: liam-donovan-1
comment by Liam Donovan (liam-donovan-1) · 2020-12-07T20:40:29.099Z · LW(p) · GW(p)

For anyone interested, the keyword to read about things like this in the economics literature is "performativity"

Replies from: Davidmanheim, Radamantis
comment by Davidmanheim · 2020-12-08T09:13:07.252Z · LW(p) · GW(p)

Thanks - this is super-helpful! And after looking briefly, a citation for the above example is here.

comment by NunoSempere (Radamantis) · 2020-12-12T18:43:18.398Z · LW(p) · GW(p)

Thanks to both; this is a great example; I might add it to the main text

comment by CronoDAS · 2020-12-06T09:05:39.942Z · LW(p) · GW(p)

There's a legend about a stock market prediction scam:

Pick 2^N potential targets. Send half of them a prediction that a stock will go up and the other half a prediction that it will go down. Eliminate the people who got a wrong prediction, and then do this again and again. Eventually you'll end up with one guy who's convinced you're never wrong, so charge him an arm and a leg for investment advice.
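In code, the scheme looks something like this (a toy sketch with arbitrary numbers):

```python
import random

random.seed(0)
targets = [f"mark_{i}" for i in range(2 ** 10)]  # 1024 potential marks

for week in range(10):
    random.shuffle(targets)
    half = len(targets) // 2
    told_up, told_down = targets[:half], targets[half:]
    stock_went_up = random.random() < 0.5  # the scammer neither knows nor cares
    # Keep only the people who just watched you be "right" yet again.
    targets = told_up if stock_went_up else told_down

print(targets)  # one person left, who has now seen ten straight correct calls
```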

Replies from: Radamantis, Davidmanheim
comment by NunoSempere (Radamantis) · 2020-12-12T18:44:47.170Z · LW(p) · GW(p)

You can see this effect with election predictions: there are plenty of smallish predictors which predicted the result of the current election closely, but it's easy to speculate that they're just a selection effect.

comment by Davidmanheim · 2020-12-08T09:19:15.660Z · LW(p) · GW(p)

Thankfully, this scam is far less viable now that people can google the writers of these predictions.

And there was always the simple defense of not trusting stock picks from people who aren't very wealthy themselves and aren't already managing people's money successfully in public view.

comment by adamShimi · 2020-12-04T10:23:28.629Z · LW(p) · GW(p)

Thanks for this selection of examples! Predict-o-Matic scenarios are some of the short term scenarios that worry me the most, and it's great to see someone tackling them.

What I would personally want to know is "Which minimal conditions are necessary for a Predict-O-Matic scenario to appear?". Splitting the issues as you did will definitely help in answering that question!

Replies from: Radamantis
comment by NunoSempere (Radamantis) · 2020-12-04T10:53:06.853Z · LW(p) · GW(p)

Which minimal conditions are necessary for a Predict-O-Matic scenario to appear?

One answer to that might be "either inner or outer alignment failures" in the forecasting system. See here [? · GW] for that division made explicit

comment by Davidmanheim · 2020-12-07T14:20:29.459Z · LW(p) · GW(p)

"Superforecasters learning to choose easier questions"


Just wanted to note that it's not easier questions, per se, it's ones where you have a marginal advantage due to information or skill asymmetry. And because it's a competition, at least sometimes, you have an incentive to predict on questions that are being ignored as well. There are definitely fewer people forecasting more intrinsically uncertain questions, but since participants get scored with the superforecaster median for questions they don't answer, that's a resource allocation question, rather than the system interfering with the real world. We see this happening broadly when prediction scoring systems don't match incentives, but I've discussed that elsewhere, and there was a recent LW post on the point as well [LW · GW].

Mostly, this type of interference is from real-world goals to predictions, rather than the reverse. We do see some interference in prediction markets aimed at changing real-world outcomes in the first half of the 20th century: "The newspapers periodically contained charges that the partisans were manipulating the reported betting odds to create a bandwagon effect." (Rhode and Strumpf, 2003)

Replies from: Radamantis, AllAmericanBreakfast
comment by NunoSempere (Radamantis) · 2020-12-12T18:45:55.388Z · LW(p) · GW(p)

Thanks. I keep missing this one, because Good Judgment Open, the platform used to select forecasters, rewards both Brier score and relative Brier score.

Replies from: Davidmanheim
comment by Davidmanheim · 2020-12-14T09:10:29.986Z · LW(p) · GW(p)

Yes - GJO isn't actually quite doing superforecasting as the book describes - for example, it's not team-based.

comment by DirectedEvolution (AllAmericanBreakfast) · 2020-12-08T19:24:59.706Z · LW(p) · GW(p)

I read that line differently, though I agree with your remarks. "Superforecasters learning to choose easier questions" was, to me, at least as much about the suite of questions posed to the forecasters as the questions each individual forecaster chooses to answer. If a forecasting firm wants to build a reputation, they could potentially learn how to ask questions that look harder to answer than they really are.

Replies from: Davidmanheim
comment by Davidmanheim · 2020-12-14T09:11:51.997Z · LW(p) · GW(p)

That's a good point. For some of the questions, that's a reasonable criticism, but as GJ Inc. becomes increasingly based on client-driven questions, it's a less viable strategy.

comment by Unnamed · 2020-12-04T00:44:17.925Z · LW(p) · GW(p)

Note that Trump got around 63M votes in 2016, and around 71M in 2020, whereas Democrats got 66M and 75M respectively.

The 2020 results are 81M-74M with some votes still left to count. 75M-71M might have been the margin a few weeks ago when there were still a bunch more not-yet-counted votes.

Replies from: Radamantis
comment by NunoSempere (Radamantis) · 2020-12-04T08:12:54.633Z · LW(p) · GW(p)

Thanks, changed

comment by Pablo (Pablo_Stafforini) · 2020-12-08T11:39:31.700Z · LW(p) · GW(p)

Related to the ReplicationMarkets example: on Metaculus, there is an entire category of self-resolving questions, where resolution is at least in part determined by how users predict the question will resolve. We have seen at least one instance of manipulation of such questions. And there is even a kind of meta-self-resolving question, asking users to predict what the sentiment of Metaculus users will be with regard to self-resolving questions.

Replies from: Radamantis
comment by NunoSempere (Radamantis) · 2020-12-08T16:47:34.762Z · LW(p) · GW(p)

Looks pretty fun!

comment by DirectedEvolution (AllAmericanBreakfast) · 2020-12-04T23:39:16.059Z · LW(p) · GW(p)

Predictive accuracy brings trust, and trust brings power. Making a series of correct and meaningful predictions can bring fame and fortune.

It's actually surprising that people don't do it more. Even if they're just guessing, it's a little bit like buying a lottery ticket. Maybe this is because our society has enforcement mechanisms against wild prognostication. You have to earn the right to make forecasts.

Perhaps we can view credentialism, in this light, as a guard against false positives.

Unfortunately, we don't take the further step of vetting the predictive track record of the people with credentials. We just kind of assume we know what they're talking about.

Replies from: Davidmanheim
comment by Davidmanheim · 2020-12-08T09:16:35.385Z · LW(p) · GW(p)

I think this is exactly what most pundits do, and it's well known that correct predictions are reputation makers.

The problem is that making more than one correct but still low-probability prediction is incredibly unlikely, since you multiply two small numbers. This functions as a very strong filter. And you don't need to carefully vet track records to see when someone loudly gets it wrong, so as we see, most pundits stop making clear and non-consensus predictions once they start making money as pundits.

comment by NunoSempere (Radamantis) · 2020-12-12T18:42:42.578Z · LW(p) · GW(p)

Another example, from @albrgr

"This is kind of crazy: https://nber.org/digest-202012/corporate-reporting-era-artificial-intelligence Companies have learned to use (or exclude) certain words to make their corporate filings be interpreted more positively by financial ML algorithms."

Then quoting from the article:

The researchers find that companies expecting higher levels of machine readership prepare their disclosures in ways that are more readable by this audience. "Machine readability" is measured in terms of how easily the information can be processed and parsed, with a one standard deviation increase in expected machine downloads corresponding to a 0.24 standard deviation increase in machine readability. For example, a table in a disclosure document might receive a low readability score because its formatting makes it difficult for a machine to recognize it as a table. A table in a disclosure document would receive a high readability score if it made effective use of tagging so that a machine could easily identify and analyze the content.

Companies also go beyond machine readability and manage the sentiment and tone of their disclosures to induce algorithmic readers to draw favorable conclusions about the content. For example, companies avoid words that are listed as negative in the directions given to algorithms. The researchers show this by contrasting the occurrence of positive and negative words from the Harvard Psychosocial Dictionary — which has long been used by human readers — with those from an alternative, finance-specific dictionary that was published in 2011 and is now used extensively to train machine readers. After 2011, companies expecting high machine readership significantly reduced their use of words labelled as negatives in the finance-specific dictionary, relative to words that might be close synonyms in the Harvard dictionary but were not included in the finance publication. A one standard deviation increase in the share of machine downloads for a company is associated with a 0.1 percentage point drop in negative-sentiment words based on the finance-specific dictionary, as a percentage of total word count.
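A minimal sketch of the kind of dictionary-based sentiment screen being gamed (the word list below is a tiny invented stand-in for the finance-specific dictionary the article describes, and real filings are vastly longer):

```python
# Tiny stand-in for a finance-specific negative-word dictionary.
FINANCE_NEGATIVE = {"impairment", "litigation", "restatement", "deficiency"}

def negative_share(text):
    """Fraction of words flagged as negative by the dictionary."""
    words = [w.strip(".,").lower() for w in text.split()]
    return sum(w in FINANCE_NEGATIVE for w in words) / len(words)

before = "the restatement and ongoing litigation created a deficiency in capital"
after = "the revision and ongoing legal proceedings created a shortfall in capital"

print(negative_share(before), negative_share(after))
# Swapping in near-synonyms the target dictionary does not flag lowers the
# measured negative-word share without changing the underlying facts.
```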

comment by lambdaphagy · 2021-03-14T00:53:56.734Z · LW(p) · GW(p)

In particular, I’d appreciate more examples of prediction systems making the world more predictable.

 

There is a possibly apocryphal anecdote about how, prior to the publication of the Black-Scholes model, option prices approximately reflected theory.  After the publication of the model, option prices precisely reflected theory because everyone was using the model to price options!

I have never been able to find a source for this story but it should be easy enough to verify through historical options data.

 

EDIT: I apparently failed to read as far as the first comment: https://www.lesswrong.com/posts/6bSjRezJDxR2omHKE/real-life-examples-of-prediction-systems-interfering-with?commentId=2kKZ87cQxmMviyJmc

comment by lukehmiles (lcmgcd) · 2020-12-11T20:55:26.739Z · LW(p) · GW(p)

I think crypto markets can't be regulated except by random moderators' filtering on bets and bettors' choices of where to put money. It seems someone could put a million dollars against a terrorist attack on a certain date and hope someone bets against it & executes to get the money. So a betting market allows hiring for certain tasks (not most tasks) with reliable verification & payout, and you get your money back if it doesn't happen. I have some faith in moderators' filters, though. I hope they would have the wisdom to forbid bets on terrorist attacks, assassinations, etc. Insider trading cannot be prevented (as far as I can tell) if betting is anonymous…

comment by MMM · 2023-05-22T05:55:41.206Z · LW(p) · GW(p)

One such example that comes to my mind, and that happens all the time: a grocery store sends the supplier a forecast of how much it wants to buy, and the supplier gets the goods and ships them over. You cannot sell more than you have =) So the forecast impacts reality: if you actually have twice as many customers wanting to buy that product, you will still sell only what you forecasted. The forecast ends up 100% accurate (because you sell everything you forecasted), but in fact it was a very, very bad forecast with 100% accuracy.
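A tiny sketch of this effect (all numbers invented): once you stock to the forecast, recorded sales are capped at the forecast, so the forecast looks perfectly accurate even when true demand is far higher.

```python
import numpy as np

rng = np.random.default_rng(1)

forecast = 100                            # units ordered from the supplier
true_demand = rng.poisson(200, size=30)   # actual daily demand (invented)
sales = np.minimum(true_demand, forecast) # you cannot sell more than you stocked

print((sales == forecast).mean())         # 1.0: the forecast is "confirmed" every day
print(true_demand.mean() - sales.mean())  # ...while roughly half the demand goes unmet
```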

comment by NunoSempere (Radamantis) · 2021-01-11T11:00:02.421Z · LW(p) · GW(p)

Two other examples:

  • Youtube's recommender system changes the habits of Youtube video producers (e.g., using keywords at the beginning of the titles, and at the beginning of the video now that Youtube can parse speech)
  • Andrew Yang apparently received death threats over a prediction market on the number of tweets. 
Replies from: Radamantis