The Bayesian Tyrant

post by abramdemski · 2020-08-20T00:08:55.738Z · LW · GW · 21 comments

Long ago and far away, there was a kingdom called Estimor in a broad green valley surrounded by tall grey mountains. It was an average kingdom in most respects, until the King read the works of Robin Hanson and Eliezer Yudkowsky, and decided to institute a Royalist Futarchy.

(This is a parable about the differences between Bayesian updates and logical induction. See also: Radical Probabilism [LW(p) · GW(p)].)

The setup was very simple. It followed the futarchic motto, "Vote Values, But Bet Beliefs" -- the only special consideration being that there was just one voting constituent (that being the King). A betting market would inform the King of everything He needed to know in order to best serve His interests and the interests of Estimor (which were, of course, one and the same).

The Seer's Hall -- a building previously devoted to religious prophecy -- was repurposed to serve the needs of the new betting market. (The old prophets were, of course, welcome to participate -- so long as they were willing to put money on the line.)

All went well at first. The new betting market allowed the King to set the revenue-maximizing tax rate, via the Laffer curve. An early success of the market was the forecasting of a grain shortage based on crop growth early in the season, which allowed ample time for grain to be purchased from neighboring lands.

Being an expert Bayesian Himself, the King would often wander the Seer's Hall, questioning and discussing with the traders at the market. Sometimes the King would be shocked by what he learned there. For example, many of the traders were calculating the Kelly betting criterion to determine how much to invest in a single bet. However, they then proceeded to invest only a set fraction of the Kelly amount (such as 80%). When questioned, traders replied that they were hedging against mistakes in their own calculations, or reducing volatility, or the like.
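The traders' fractional-Kelly practice can be sketched in a few lines (an illustrative sketch; the `kelly_fraction` helper and the example odds are invented here, not taken from the story):

```python
def kelly_fraction(p, b):
    """Kelly criterion for a bet paying b-to-1, at believed win probability p.

    Returns the fraction of bankroll to stake (0 if the bet has no edge).
    """
    return max(0.0, p - (1.0 - p) / b)

# A trader who believes a 2-to-1 bet wins 60% of the time:
full_kelly = kelly_fraction(0.6, 2.0)  # 0.6 - 0.4/2 = 0.4 of bankroll
fractional = 0.8 * full_kelly          # hedged stake: 80% of Kelly = 0.32
```

Betting a fixed fraction of Kelly, as the traders do, trades a little expected log-growth for much lower volatility, and protects against overestimating one's own edge.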

One day, the King noticed a man who would always run in and out of the Hall, making bets hastily. This man did particularly well at the betting tables -- he ended the day with a visibly heavy purse. However, when questioned by the King as to the source of his good luck, the man had no answers. This man will subsequently be referred to as the Fool.

The King ordered spies to follow the Fool on his daily business. That evening, the spies returned to report that the Fool was running back and forth between the Seer's Hall and Dragon's Foot, a local tavern. The Fool would consult betting odds at Dragon's Foot, and return to the Seer's Hall to bet using those odds.

Evidently, Dragon's Foot had become an unlicensed gambling den. But were they truly doing better than the Seer's Hall, so that this man could profit simply by using their information?

The King had the Fool brought in for questioning. As it turned out, the Fool was turning a profit by arbitrage between the two markets: whenever there was a difference in prices, the Fool would bet in favor at the location where prices were low, and bet against at the location where prices were high. In this way, he was making guaranteed money.
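The Fool's arbitrage can be checked with a little arithmetic (an illustrative sketch; the prices and the `arbitrage_profit` helper are invented for the example):

```python
def arbitrage_profit(price_low, price_high, stake=1.0):
    """Guaranteed profit from a price gap on the same event across two markets.

    Buy 'yes' shares where the event is priced low, and 'no' shares where it
    is priced high.  Either way the event resolves, exactly one share pays 1.
    """
    cost = price_low + (1.0 - price_high)  # yes at the cheap hall + no at the dear hall
    return stake * (1.0 - cost)            # payout of 1, minus total cost, risk-free

# Say the Seer's Hall prices a grain shortage at 0.30 and Dragon's Foot at 0.45:
profit = arbitrage_profit(0.30, 0.45)  # 0.15 per unit staked, whatever happens
```

The profit is exactly the price gap, and it vanishes as the two markets converge -- which is why arbitrageurs like the Fool end up synchronizing prices.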

The King was disgusted at this way of making money without bringing valuable information to the market. He ordered all other gambling in the Kingdom shut down, requiring it to all take place at the Seer's Hall.

Soon after that, the Fool showed his face again. Once again, he did well in the market. The King had his spies follow the Fool, but this time, the Fool went nowhere of significance.

Questioning the Fool a second time, he learned that this time the Fool was making use of calibration charts. The Fool would make meticulous records of the true historical frequency of events given their probabilistic judgement -- for example, he had recorded that when the market judges an event to be 90% probable, that event actually occurs about 85% of the time. The Fool had made these records about individual traders as well as the market as a whole, and would place bets accordingly.
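A minimal version of the Fool's calibration chart might look like this (a sketch; the binning scheme and the `calibration_chart` name are assumptions, not anything from the story):

```python
from collections import defaultdict

def calibration_chart(forecasts, outcomes, bins=10):
    """Empirical frequency of events, grouped by stated probability.

    forecasts: stated probabilities in [0, 1]; outcomes: 0/1 results.
    Returns {bin midpoint: observed frequency of the event in that bin}.
    """
    hits, counts = defaultdict(int), defaultdict(int)
    for p, y in zip(forecasts, outcomes):
        b = min(int(p * bins), bins - 1)  # which probability bucket this falls in
        counts[b] += 1
        hits[b] += y
    return {(b + 0.5) / bins: hits[b] / counts[b] for b in counts}

# Twenty events the market called "90% probable", of which 17 occurred:
chart = calibration_chart([0.9] * 20, [1] * 17 + [0] * 3)  # {0.95: 0.85}
```

Wherever the observed frequency differs from the stated probability -- 85% actual against 90% stated, as in the story -- a trader can profitably bet toward the empirical frequency without any outside information.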

The King was once again disgusted by the way the Fool made money off of the market without contributing any external information. But this time, He felt that He needed a more subtle solution to the problem. Thinking back to his first days of reading about Bayes' Law, the King realized the huge gap between His vision of perfected reasoning and the reality of the crowded, noisy, irrational market. The iron law of the market was "buy low, sell high." It did not follow rational logic. The Fool had proved it: the individual traders were poorly calibrated, and so was the market itself.

What the King needed to do was reform the market, making it a more rational place.

And so it was that the King instituted the Bayesian Law: all bets on the market are required to be Kelly bets made on valid probability estimates. Valid probability estimates are required to be Bayesian updates of previously registered probability estimates.

All traders on the market would now proceed according to Bayes' Law. They would pre-register their probability distributions, pre-specifying what kind of information would update them, and by how much it would update them.

The new ordinance proved burdensome. Only a few traders continued to visit the Seer's Hall. They spent their days in meticulous calculation, updating detailed prior models of grain and weather with all the data which poured in.

Surprisingly, the Fool was amongst the hangers-on, and continued to make a tidy profit, even to the point of driving out some of the remaining traders -- they simply couldn't compete with him.

The King examined the registered probability distribution the Fool was using. It proved puzzling. The Fool's entire probability distribution was based on numbers which were to be posted to a particular tree out by Mulberry road. Updating on these numbers, the Fool was somehow turning a tidy profit. But where were the numbers coming from?

The King's spies found that the numbers were being posted by a secretive group, whose meetings they were unable to infiltrate.

The King had all the attendees arrested, accusing them of running an illegal gambling ring. The Fool was brought in for questioning once more.

"But it wasn't a gambling ring!" the Fool protested. "They merely got together and compiled odds for gambling. They were quite addicted when the Bayesian Law shut their sort out of the Seer's Hall, after all. And I took those odds and used them to bet in the Seer's Hall, perfectly legally."

"And redistributed the winnings?" accused the King.

"As is only fair," agreed the Fool. "But that is not gambling. I simply paid them as consultants."

"You took money from honest Bayesians, and drove them out of my Hall!"

"As is the advantage of Bayesianism, no?" The Fool cocked an eyebrow. "The money flows to he who can give the best odds."

"Take him away!" the king bellowed, waving a hand for the guards.

At that moment, the guards removed their helmets, revealing themselves to be comrades-in-arms with the Fool. The outcasts of the Seer's Hall had foreseen that the King would move against them, and with the power of Futarchy, had prepared well -- they staged a bloodless revolution that day.

The King, his family, and his most loyal staff were forced into exile. They went to stay with a distant cousin of the King, who ruled the nation Ludos, in the next valley over.

The King of Ludos had, upon seeing Estimor's success with prediction markets, set up His own. Unlike the Seer's Hall of Estimor, that of Ludos continued to thrive.

The King in Exile asked his cousin: "What did I do wrong? All I wanted was to serve Estimor. The prediction market worked so well at first. And I only tried to improve it."

The King of Ludos sat in thought for a time, and then spoke. "Cousin, We cannot tell You that You did anything wrong. Revolutions will happen. But We will say this: the many prediction markets of Ludos strengthen each other. Runners go back and forth between them, profiting from arbitrage, and this only makes them stronger. Calibration traders correct any bias of the market, ensuring unbiased results. You tried to outlaw the irrational at your market -- but remember, the wise gambler profits off the foolhardy gambler. Without any novices throwing away their money, none would profit."

"But most of all, cousin, We think You lost sight of the power of betting. It is a truth more fundamental than Bayes' Law that money will flow from the unclever to the clever. You lost Your trust in that system. Even if You had enforced Kelly betting, but left it to each individual trader to set his probability however he liked -- rather than updating via Bayes' Law alone -- you would have been fine. If Bayes' Law were truly the correct way, money would have flowed to those who excelled at it. If not, then money would have flowed elsewhere. But instead you overwhelmed them with the bureaucracy of Bayes -- requiring them to record every little bit of information they used to reach a conclusion."


The arbitrage between different betting halls represented outside view / modest epistemology, trying to reach agreement between different reasoners. It's a questionable thing to include, in terms of the point I'm making, since this is not exactly a thing that happens in logical induction. However, it fits in the allegory so well that I felt I couldn't not include it. One argument for the common prior assumption (an assumption which underpins the Aumann Agreement Theorem, and is closely related to modest-epistemology arguments) is that a bookie can Dutch Book any group of agents who do not have a common prior, via performing arbitrage on their various beliefs.

[Edit: actually, what we can conclude from the analogy is that bets on different markets should converge to the same thing if they ever pay out, which is also true in logical induction.]

The calibration-chart idea, clearly, represented calibration properties [LW(p) · GW(p)].

The idea of the Bayesian Law represented requiring all hypotheses/traders to update in a Bayesian manner. Starting from Bayesian hypothesis testing, one step we can take in the direction of logical induction is to allow hypotheses to themselves make non-Bayesian updates. The overall update between hypotheses would remain Bayesian, but an individual hypothesis could change its mind in a non-Bayesian fashion. A hypothesis would still be required to have a coherent probability distribution at any given time; just, the updates could be non-Bayesian. A fan of Bayes' Law might suppose that, in such a situation, the hypotheses which update according to Bayes' Law would dominate -- in other words, a meta-level Bayesian would learn to also be an object-level Bayesian. But I see no reason to suspect this to be the case. Indeed, in situations where logical uncertainty is relevant, non-Bayesian updates can be used to continue improving one's probability distribution over and above the explicit evidence which comes in. It was this idea -- that we could take one step toward logical induction by being a meta-level Bayesian without being an object-level Bayesian -- which inspired this post (although the allegory didn't end up having such a strong connection with this idea).
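The "meta-level Bayesian, object-level anything" idea can be sketched concretely: the weights over hypotheses update by Bayes' Law, while each hypothesis is free to produce its next forecast however it likes (an illustrative sketch of the general scheme, with invented names; not code from any particular implementation):

```python
def mixture_step(weights, forecasts, outcome):
    """One Bayesian update of the weights over hypotheses.

    Each hypothesis h gave P_h(outcome) this round; weight shifts toward the
    hypotheses that assigned the outcome more probability.  Nothing here
    constrains *how* a hypothesis produces its next forecast -- between
    rounds it may revise itself in an entirely non-Bayesian way.
    """
    likelihoods = [f if outcome == 1 else 1.0 - f for f in forecasts]
    posterior = [w * l for w, l in zip(weights, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Two hypotheses at equal prior; the first assigned 0.9 to the bit that occurred:
weights = mixture_step([0.5, 0.5], [0.9, 0.5], outcome=1)  # -> [9/14, 5/14]
```

The meta-level update is Bayesian regardless of what the object-level hypotheses do, which is the sense in which this is one step from Bayesian hypothesis testing toward logical induction.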

The main point of this post, anyway, is that Bayes' Law would be a bad law. Don't institute a requirement that everyone reason according to it.

21 comments

Comments sorted by top scores.

comment by Donald Hobson (donald-hobson) · 2020-08-20T12:12:10.460Z · LW(p) · GW(p)
One argument for the common prior assumption (an assumption which underpins the Aumann Agreement Theorem, and is closely related to modest-epistemology arguments) is that a bookie can Dutch Book any group of agents who do not have a common prior, via performing arbitrage on their various beliefs.

There exists an agent, H, that believes with certainty, as its prior, that all coins only ever land on heads.

There also exists an agent, T, that is equally confident in tails. (Exists in the mathematical sense that there is some pattern of code that would constitute such an agent, not in the sense that these agents have been built.)

Let's say that H and T, by construction, will always take any bet that they would profit from if all involved coins come up heads or tails respectively.

Consider a bet on a single coin flip that costs 2 if the coin comes up heads, and pays x if the coin lands on tails (for some x > 2). If you would be prepared to take that bet for sufficiently large x, then a bookie can offer you this bet, and offer H a bet that wins 1 if the coin lands heads, and loses x + 1 if the coin lands tails. H will take this bet. So the bookie has Dutch booked the pair of you.
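The arithmetic of this pairwise Dutch book can be checked directly (a sketch; the dropped symbols in the comment are reconstructed here as a tails payout x for you and a tails loss of x + 1 for H, who is certain of heads):

```python
def bookie_net(x, coin):
    """Bookie's combined take from the two bets, for any coin result.

    Bet with you: you pay 2 on heads, receive x on tails.
    Bet with H (certain of heads): H receives 1 on heads, pays x + 1 on tails.
    """
    from_you = 2 if coin == "heads" else -x
    from_h = -1 if coin == "heads" else x + 1
    return from_you + from_h

# Whatever the coin does, the bookie pockets 1 from the pair:
assert bookie_net(10, "heads") == 1 and bookie_net(10, "tails") == 1
```

Neither bet is exploitable on its own; the guaranteed profit only exists against the *group*, which is exactly the point under dispute in this thread.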

If you ever bet at all and your betting decisions don't depend on who you are sitting next to, then you can be part of a group that is Dutch booked.

If you want to avoid ever betting as part of a group that is being Dutch booked, then if you are in the presence of H and T, you can't bet at all, even about things that have nothing to do with coin flips.

If you bet 5 against 5 that gravity won't suddenly disappear, then the bookie can Dutch book H and T for 100, and the three of you have been Dutch booked for at least 95 as a group.

If you have some reason to suspect that a mind isn't stupid, maybe you know that they won a lot of money in a prediction market, maybe you know that they were selected by evolution, ect then you have reason to take what the mind says seriously. If you have strong reasons to think that Alice is a nearly perfect reasoner, then getting Dutch booked when you are grouped with Alice indicates you are probably making a mistake.

Replies from: Richard_Kennaway, abramdemski, ben-lang
comment by Richard_Kennaway · 2020-08-20T13:04:05.124Z · LW(p) · GW(p)

Why should I care if I together with other people are jointly getting Dutch booked, if I myself am not? If my neighbour loses money but I do not, I do not care that "we" lost money, if his affairs have no connection with mine.

Replies from: abramdemski
comment by abramdemski · 2020-08-20T16:03:35.366Z · LW(p) · GW(p)

First off, I am quite sympathetic, and by no means would argue that the Dutch Book for the common prior assumption is as convincing as other Dutch Book Arguments.

However, it's still intriguing.

If you and your neighbor are game-theoretic partners who sometimes cooperate in prisoner's-dilemma like situations, then you might consider this kind of joint Dutch Book concerning. A coalition which does not manage to jointly coordinate to act as one agent is a weaker coalition.

Replies from: donald-hobson, simon
comment by Donald Hobson (donald-hobson) · 2023-03-18T18:13:33.582Z · LW(p) · GW(p)

If the coalition is able to negotiate bets internally, then for any instance of the coalition getting Dutch booked, they can just agree to run that bet without the bookie, and split the bookie's cut.

comment by simon · 2020-08-21T13:35:45.478Z · LW(p) · GW(p)

If everyone else changes to my prior, that's great. But if I change from my prior to their prior, I am just (from the point of view of someone with my prior, which obviously includes myself) making myself vulnerable to be beaten in ordinary betting by other agents that have my prior.

Replies from: abramdemski
comment by abramdemski · 2020-08-21T16:24:42.158Z · LW(p) · GW(p)

This isn't always true!

Let's say your prior is P and mine is Q. I take your argument to be that P always prefers bets made according to P (bets made according to Q are at best just as good). But this is only true if P thinks P knows better than Q.

It's perfectly possible for P to think Q knows better. For example, P might think Q just knows all the facts. Then it must be that P doesn't know Q (or else P would also know all the facts.) But given the opportunity to learn Q, P would prefer to do so; whereupon, the updated P would be equal to Q.

Similar things can happen in less extreme circumstances, where Q is merely expected to know some things that P doesn't. P could still prefer to switch entirely over to Q's beliefs, because they have a higher expected value. It's also possible that P trusts Q only to an extent, so P moves closer to Q but does not move all the way. This can even be true in the Aumann agreement setting: P and Q can both move to a new distribution R, because P has some new information for Q, but Q also has some new information for P. (In general, R need not even be a 'compromise' between P and Q; it could be something totally different.)

So it isn't crazy at all for rational agents to prefer each other's beliefs.

A weaker form of the common prior assumption could assert that this is always the case: two rational agents need not have the same priors, but upon learning each other's priors, would then come to agree. (Either P updates to Q, or Q updates to P, or P and Q together update to some R.)

comment by abramdemski · 2020-08-20T16:15:15.987Z · LW(p) · GW(p)

I don't think H and T are rational agents in the first place, since they violate non-dogmatism: they place zero probability on non-tautologous propositions.

The common prior assumption, if true, is only supposed to apply amongst rational agents.

I would further point out that although I can't use a classic Dutch Book to show H and T are irrational, I can use the relaxed Dutch Books of the sort used in the definition of logical induction -- H and T are irrational because they expose themselves to unbounded exploitation. So I'm broadly using the same rationality framework to rule out H and T, as I am to argue for the common prior assumption.

The claim is more like: two TDT agents should never knowingly disagree about probabilities.

Here's an intuition-pump. If I am sitting next to Alice and we disagree, we should have already bet with each other. Any bookie who comes along and tries to profit off of our disagreement should be unable to, because we've already made all the profitable exchanges we can. We've formed a Critch coalition, in order to coordinate rationally. So our apparent beliefs, going forward, will be a Bayesian mixture of our (would-be) individual beliefs. We will apparently have a common prior, when betting behavior is examined.

Replies from: donald-hobson
comment by Donald Hobson (donald-hobson) · 2020-08-20T17:08:13.516Z · LW(p) · GW(p)

Sure, you can fix H's unbounded downside risk by giving H a finite budget. You can fix the dogmatism by making H assign an ε probability to tails.

If you and H have a chance to bet with each other before going to the bookies, then the bookie won't be able to Dutch book the two of you, because you will have already separated H and his money.

If you can't bet with H directly for some reason, then a bookie can Dutch book you and H, by acting as a middleman and skimming off some money.

comment by Ben (ben-lang) · 2022-08-23T15:32:15.510Z · LW(p) · GW(p)

I don't think this really matters though.

Let's say I am in the same betting hall as a man called "The Idiot". The Idiot has access to some undepletable fortune and will accept any bet (with any odds) that any person proposes to him. Now, whenever I bet on anything at all, the Dutch bookmaker can protect themselves from a loss by making a compensating bet with the Idiot. (Although there are clearly simpler ways of getting money off the Idiot.) Why should I feel worried that the category {me and the Idiot} can be Dutch-booked? It doesn't mean I am losing or being foolish in my bets. At most it just means that I should be betting against the Idiot, not the Dutch bookmaker.

Replies from: donald-hobson
comment by Donald Hobson (donald-hobson) · 2022-08-23T15:46:52.257Z · LW(p) · GW(p)

Correct. This was basically my point.

Replies from: ben-lang
comment by Ben (ben-lang) · 2022-08-23T16:31:18.439Z · LW(p) · GW(p)

Oops. Yes, I have just re-read your comment. I somehow didn't absorb that all-important "if you have reason to think they are at all sensible" clause!

comment by jessicata (jessica.liu.taylor) · 2020-08-21T03:03:25.904Z · LW(p) · GW(p)

The basic point here is that Bayesians lose zero-sum games in the long term. Which is to be expected, because Bayesianism is a non-adversarial epistemology. (Adversarial Bayesianism is simply game theory.)

This sentence is surprising, though: "It is a truth more fundamental than Bayes’ Law that money will flow from the unclever to the clever".

Clearly, what wins zero-sum games wins zero-sum games, but what wins zero-sum games need not correspond to collective epistemology.

As a foundation for epistemology, many things are superior to "might makes right", including Bayes' rule (despite its limitations).

Legislating Bayesianism in an adversarial context is futile; mechanism design is what is needed.

Replies from: abramdemski
comment by abramdemski · 2020-08-21T16:29:36.530Z · LW(p) · GW(p)

Did you read radical probabilism [LW · GW] yet?

I think legislating Bayesianism is just actually a bad idea (even if we could get around the excessive-paperwork-to-show-all-your-evidence problem). I don't think prediction markets are a perfect mechanism, but I think the non-Bayesianism highlighted here is a feature, not a bug. But all my actual arguments for that are in my Radical Probabilism post.

I'm curious what alternative mechanism design you might propose.

comment by Ben Pace (Benito) · 2021-12-27T18:37:56.774Z · LW(p) · GW(p)

This is an extension of the Embedded Agency philosophical position. It is a story told using that understanding, and it is fun and fleshes out lots of parts of Bayesian rationality. I give it +4.

(This review is taken from my post Ben Pace's Controversial Picks for the 2020 Review [LW · GW].)

comment by habryka (habryka4) · 2020-09-12T18:27:22.225Z · LW(p) · GW(p)

Promoted to curated: Man, Abram, you are just writing too many amazing posts. But as I said on the Radical Probabilism post [LW(p) · GW(p)], I wanted to curate that post like 4 times anyways, and this post gives me an excuse to basically curate it again with a different (possibly more accessible) framing.

comment by Дмитрий Зеленский (dmitrii-zelenskii) · 2021-11-20T05:16:23.185Z · LW(p) · GW(p)

You (or, rather, Dr. Hanson) should definitely rename futarchy... I can't stop thinking about it meaning rule of futanaris :D

On a more serious note, I think the allegory fails to disentangle the faults of Bayes's Law being necessary to follow and the faults of the need to maintain the corresponding bureaucracy.

Replies from: abramdemski
comment by abramdemski · 2021-11-24T18:11:40.532Z · LW(p) · GW(p)

I think you are right. The allegory would have been stronger if those mistakes were more tightly connected.

Here is (I think) the reason why I wrote it the way I did. I was going for the analogy of a dogmatic bayesian AI (implemented as such in code) vs a more flexible AI. The dogmatic code takes longer to run in order to calculate an entirely coherent probability distribution, and also throws out some opportunities (such as not-dogmatically-bayesian "models" which correct calibration errors w/o being able to make predictions on their own).

So the unnecessary bureaucracy is a stand-in for the slowness, and throwing out uncertified traders is like throwing out useful but less-rigidly-bayesian models.

If I can think of a way to re-write it to address your critique, I will try.

comment by Orome · 2020-09-12T19:07:41.491Z · LW(p) · GW(p)

It is a truth more fundamental than Bayes' Law that money will flow from the unclever to the clever.

Clearly the words of someone with a monopoly on the use of violence (in all its forms, not just head bashing but endless rules and laws configured to concentrate power and money). Money accumulates in the hands of those with the power to take it from those who lack that power. Only those with that power don't see that.

Replies from: abramdemski, alextes
comment by abramdemski · 2020-09-15T17:04:53.475Z · LW(p) · GW(p)

I'd edit it to be something like "on an even playing field, money will flow from the unclever to the clever", but.... you make a cogent point on how appropriate it is that royalty would not include such a qualification.

comment by alextes · 2020-09-15T11:41:26.924Z · LW(p) · GW(p)

Not sure this contradicts the point the author is trying to make. In today's society, the smart are often the powerful. Not always -- there is definitely more nuance there -- but I'd guess most would agree that where previously those with strength accumulated power over time, it is now the social and clever who accumulate power over time. Especially for a definition of intelligence that posits intelligence is your capacity to conform your surroundings to your will. At that point the only ingredient left is greed / ego, and we'd see money (power) flow from the unclever to the clever 😬.

Replies from: abramdemski
comment by abramdemski · 2020-09-15T17:06:23.768Z · LW(p) · GW(p)

WRT the point of the post, I'm not really talking about society at all, but rather, betting markets.