Willing gamblers, spherical cows, and AIs

post by ChrisHallquist · 2013-04-08T21:30:24.813Z · LW · GW · Legacy · 40 comments

Note: posting this in Main rather than Discussion in light of recent discussion that people don't post in Main enough and their reasons for not doing so aren't necessarily good ones. But I suspect I may be reinventing the wheel here, and someone else has in fact gotten farther on this problem than I have. If so, I'd be very happy if someone could point me to existing discussion of the issue in the comments.

tl;dr: Gambling-based arguments in the philosophy of probability can be seen as depending on a convenient simplification of assuming people are far more willing to gamble than they are in real life. Some justifications for this simplification can be given, but it's unclear to me how far they can go and where the justification starts to become problematic.

In "Intelligence Explosion: Evidence and Import," Luke and Anna mention the fact that, "Except for weather forecasters (Murphy and Winkler 1984), and successful professional gamblers, nearly all of us give inaccurate probability estimates..." When I read this, it struck me as an odd thing to say in a paper on artificial intelligence. I mean, those of us who are not professional accountants tend to make bookkeeping errors, and those of us who are not math, physics, engineering, or economics majors make mistakes on GRE quant questions that we were supposed to have learned how to do in our first two years of high school. Why focus on this particular human failing?

A related point can be made about Dutch Book Arguments in the philosophy of probability. Dutch Book Arguments claim, in a nutshell, that you should reason in accordance with the axioms of probability because if you don't, a clever bookie will be able to take all your money. But another way to prevent a clever bookie from taking all your money is to not gamble. And many people don't, or do so only rarely.

Dutch Book Arguments seem to implicitly make what we might call the "willing gambler assumption": everyone always has a precise probability assignment for every proposition, and they're willing to take any bet which has a non-negative expected value given their probability assignments. (Or perhaps: everyone is always willing to take at least one side of any proposed bet.) Needless to say, even people who gamble a lot generally aren't that eager to gamble.
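
To make the assumption concrete, here is a minimal sketch in Python; the proposition, probability, and payoffs are made up purely for illustration:

```python
# A minimal sketch of the "willing gambler assumption": the agent has a
# precise probability for every proposition and takes any bet whose
# expected dollar value is non-negative.

def expected_value(p, win_amount, lose_amount):
    """Expected dollar value of a bet that pays win_amount with
    probability p and costs lose_amount otherwise."""
    return p * win_amount - (1 - p) * lose_amount

def willing_gambler_accepts(p, win_amount, lose_amount):
    """The idealized agent takes any bet with non-negative expected value."""
    return expected_value(p, win_amount, lose_amount) >= 0

# Illustration: the agent assigns P(rain tomorrow) = 0.3 and is offered
# "win $10 if it rains, lose $4 if it doesn't."
print(expected_value(0.3, 10, 4))           # 0.3*10 - 0.7*4 = 0.2
print(willing_gambler_accepts(0.3, 10, 4))  # True
```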

So how does anyone get away with using Dutch Book arguments for anything? A plausible answer comes from a joke Luke recently told in his article on Fermi estimates:

Milk production at a dairy farm was low, so the farmer asked a local university for help. A multidisciplinary team of professors was assembled, headed by a theoretical physicist. After two weeks of observation and analysis, the physicist told the farmer, "I have the solution, but it only works in the case of spherical cows in a vacuum."

If you've studied physics, you know that physicists don't just use those kinds of approximations when doing Fermi estimates; often such approximations can be counted on to yield results that are in fact very close to reality. So maybe the willing gambler assumption works as a sort of spherical cow that allows philosophers working on issues related to probability to generate important results in spite of the unrealistic nature of the assumption.

Some parts of how this would work are fairly clear. In real life, bets have transaction costs; they take time and effort to set up and collect. But it doesn't seem too bad to ignore that fact in thought experiments. Similarly, in real life money has declining marginal utility; the utility gained by doubling your money is smaller than the utility lost by losing it all. In principle, if you know someone's utility function over money, you can take a bet with zero expected value in dollar terms and replace it with a bet that has zero expected value in utility terms. But ignoring that and just using dollars for your thought experiments seems like an acceptable simplification for convenience's sake.
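
As a rough illustration of the declining-marginal-utility point, here is a sketch that assumes, purely for convenience, a logarithmic utility function over wealth (the wealth and stake figures are invented):

```python
import math

# Sketch: a bet with zero expected value in dollars has negative expected
# value in utility, assuming (purely for illustration) log utility over wealth.

def utility(wealth):
    return math.log(wealth)

wealth, p, stake = 100.0, 0.5, 50.0

# Zero expected value in dollar terms: win $50 or lose $50 at even odds.
ev_dollars = p * stake - (1 - p) * stake
print(ev_dollars)  # 0.0

# Negative expected value in utility terms: the lost $50 hurts more
# than the won $50 helps.
eu_change = (p * (utility(wealth + stake) - utility(wealth))
             + (1 - p) * (utility(wealth - stake) - utility(wealth)))
print(eu_change)  # about -0.14
```

Resizing the stakes so that the utility gained and lost balance out would restore a zero-expected-utility bet, which is the replacement described above.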

Even with those assumptions in place, so that accepting bets with zero expected (dollar) value isn't actively harmful, we might still wonder why our spherical cow gambler should accept them. Answer: because if necessary you could just add one penny to the side of the bet you want the gambler to take, but always having to mention the extra penny is annoying, so you may as well assume the gambler takes any bet with non-negative expected value rather than require positive expected value.

Another thing that keeps people from gambling more in real life is the principle that if you can't spot the sucker in the room, it's probably you. If you're unsure whether an offered bet is favorable to you, the mere fact that someone is offering it to you is pretty strong evidence that it's in their favor. One way to avoid this problem is to stipulate that in Dutch Book Arguments, we just assume the bookie doesn't know anything more about whatever the bets are about than the person being offered the bet, and the person being offered the bet knows this. The bookie has to construct her book primarily based on knowing the other person's propensities to bet. Nick Bostrom explicitly makes such an assumption in a paper on the Sleeping Beauty problem. Maybe other people make this assumption explicitly as well; I don't know.

In this last case, though, it's not totally clear that limiting the bookie's knowledge is all you need to bridge the gap between the willing gambler assumption and how people behave in real life. In real life, people often don't make very exact probability assignments, and may be aware of their confusion about how to make them. Given that, it seems reasonable to hesitate before betting (even if you ignore transaction costs and declining marginal utility, and know that the bookie doesn't know any more about the subject of the bet than you do), because you'd still know the bookie might be trying to exploit your confusion over how to make exact probability assignments.

At an even simpler level, you might adopt a rule: "before making multiple bets on related questions, check to make sure you aren't guaranteeing you'll lose money." After all, real bookies offer odds such that if anyone were stupid enough to bet on each side of a question with the same bookie, they'd be guaranteed to lose money. In a sense, bookies could be interpreted as "money pumping" the public as a whole. But somehow, any single individual will rarely be stupid enough to take both sides of the same bet from the same bookie, in spite of the fact that they're apparently irrational enough to be gambling in the first place.
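
As a rough sketch of how a real bookie's book guarantees this, here are some invented decimal odds for a two-outcome market:

```python
# Sketch of a real bookie's book, with invented decimal odds. The implied
# probabilities sum to more than 1 (the "overround"), so anyone taking
# both sides of the market is guaranteed to lose money.

odds_a = 1.8  # stake $1 on outcome A, get $1.80 back if A happens
odds_b = 1.8  # stake $1 on outcome B, get $1.80 back if B happens

implied_total = 1 / odds_a + 1 / odds_b
print(implied_total)  # about 1.11, i.e. an 11% margin for the bookie

# A bettor who takes both sides stakes $2 but collects $1.80 whichever
# outcome occurs: a guaranteed loss of $0.20.
print(1.8 - 2.0)  # -0.2
```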

In the end, I'm confused about how useful the willing gambler assumption really is when doing philosophy of probability. It certainly seems like worthwhile work gets done based on it, but just how applicable are those results to real life? How do we tell when we should reject a result because the willing gambler assumption causes problems in that particular case? I don't know.

One possible justification for the willing gambler assumption is that even those of us who don't literally gamble, ever, still must make decisions whose outcomes are not certain, and we therefore need to do a decent job of assigning probabilities in those situations. But there are lots of people who are successful in their chosen fields (including fields that require decisions with uncertain outcomes) who aren't weather forecasters or professional gamblers, and who therefore can be expected to make inaccurate probability estimates. Conversely, it doesn't seem that the skills acquired by successful professional gamblers give them much of an edge in other fields. So the relationship between being able to make accurate probability estimates and success in fields that don't specifically require them seems weak.

Another justification for pursuing lines of inquiry based on the willing gambler assumption, a justification that will be particularly salient for people on LessWrong, is that if we want to build an AI based on an idealization of how rational agents think (Bayesianism or whatever), we need tools like the willing gambler assumption to figure out how to get the idealization right. That sounds like a plausible thought at first. But if we flawed humans have any hope of building a good AI, it seems like an AI that's as flawed as (but no more flawed than) humans should also have a hope of self-improving into something better. An AI might be programmed in a way that makes it a bad gambler, yet be aware of this limitation, and be left to decide for itself whether, when it self-improves, to focus on improving its gambling ability or on improving other aspects of itself.

As someone who cares a lot about AI, I find these questions of just how useful various idealizations are for thinking about AI, and possibly one day programming an AI, especially important. Unfortunately, I'm not sure what to say about them, so at this point I'll turn the question over to the comments.

40 comments

Comments sorted by top scores.

comment by Oscar_Cunningham · 2013-04-09T15:25:33.302Z · LW(p) · GW(p)

Dutch-book arguments in fact have the form "be rational or else you'll accept a sure-losing bet OR refuse a sure-winning bet". The second case negates the need for a "willing gambler" assumption.

comment by Qiaochu_Yuan · 2013-04-08T21:50:37.191Z · LW(p) · GW(p)

Humans are not only gambling when another human explicitly offers them a bet. Humans implicitly gamble all the time: for example, when you cross the street, you're gambling that the risk of getting hit by a car and dying doesn't outweigh whatever gain you expect from crossing the street (e.g. getting to school or work). Dutch book arguments in this context are an argument that if an agent doesn't play according to the rules of probability, then under adversarial assumptions the world can screw them over. It's valuable to know what can happen under adversarial assumptions even if you don't expect those assumptions to hold.

Therefore, it seems that making inaccurate probability estimates is compatible with success in fields that require making decisions with uncertain outcomes.

This isn't strong evidence; you're mixing up P(is successful | makes good probability estimates) with P(makes good probability estimates | is successful).

Replies from: Kawoomba, ChrisHallquist, shminux, ChrisHallquist
comment by Kawoomba · 2013-04-09T06:28:14.060Z · LW(p) · GW(p)

Don't you think humans cross the street not because they've weighed the benefits versus the dangers, or some such, but because that's what they've been taught to do, and probability calculations be damned?

When you live in a country where many people drive without seatbelts, you're prone to emulate that behavior. It's not like you're collectively "betting" in a different manner, or evaluating the dangers differently. It's more of a monkey-see, monkey-do heuristic.

Replies from: Qiaochu_Yuan, Richard_Kennaway
comment by Qiaochu_Yuan · 2013-04-09T06:37:50.417Z · LW(p) · GW(p)

Just because you don't understand the game you're playing doesn't mean you're not playing it. The street is offering you a bet, and if you don't understand that, then... well, not much happens, but the bet is still there.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-09T06:42:37.773Z · LW(p) · GW(p)

By the same token, fish in an aquarium - or Braitenberg vehicles - are constantly making bets they don't realize they're making. Swim to this side, be first to the food, but exert energy getting there.

Your perspective is valid, but if the agents refuse to see, or are incapable of seeing, the situation from a betting perspective, you have to ask how useful that perspective is (not the thinking in terms of expected utility, best case, worst case, etcetera, but the "betting" aspect of it). It may be a good intuition pump, as long as we keep in mind that people don't work that way.

Replies from: khafra, drethelin
comment by khafra · 2013-04-09T15:33:01.985Z · LW(p) · GW(p)

Do fish think in terms of expected value? Of course not. Evolutions make bets, and they can't think at all. Refactored Agency is a valuable tool--anything that can be usefully modeled as a goal-seeking process with uncertain knowledge can also be usefully modeled as making bets. How useful is it to view arbitrary things through different models? Well, Will Newsome makes a practice of it. So, it's probably good for having insights, but caveat emptor.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-09T15:48:40.081Z · LW(p) · GW(p)

The more completely the models describe the underlying phenomenon, the more isomorphic all the models should become (in their Occamian formulation), until eventually we're only exchanging variable names.

Replies from: khafra
comment by khafra · 2013-04-09T17:07:23.697Z · LW(p) · GW(p)

Yes; to check your visual acuity, you block off one eye, then open that one and block the other. To check (and improve) your conceptual acuity, you block off everything that isn't an agent, then you block off everything that isn't an algorithm, then you block off everything that isn't an institution, etc.

Unless you can hypercompute, in which case that's probably not a useful heuristic.

comment by drethelin · 2013-04-10T18:21:40.942Z · LW(p) · GW(p)

This is off topic, but I'm really disappointed that Braitenberg vehicles didn't turn out to be wheeled fish tanks that allowed the fish to explore your house.

comment by Richard_Kennaway · 2013-04-09T10:15:26.858Z · LW(p) · GW(p)

Don't you think humans cross the street not because they've weighed the benefits versus the dangers, or some such, but because that's what they've been taught to do, and probability calculations be damned?

What they've been taught to do is weigh the benefits versus the dangers (although there are not necessarily any probability calculations going on). The emphasis in teaching small children how to cross the road is mainly on the dangers, since those will invariably be of a vastly larger scale than the trifling benefit of saving a few seconds by not looking.

Replies from: Kawoomba
comment by Kawoomba · 2013-04-09T10:33:18.794Z · LW(p) · GW(p)

Does "Mommy told me to look for cars, or bad things happen" and "if I don't look before I cross, Mommy will punish me" count as weighing the benefits versus the dangers? If so, we agree.

I just wonder if the bet analogy is the most natural way of carving up reality, as it were.

Why did the rationalist cross the road? - He made a bet. (Badum-tish!)

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-04-09T10:45:55.489Z · LW(p) · GW(p)

Does "Mommy told me to look for cars, or bad things happen" and "if I don't look before I cross, Mommy will punish me" count as weighing the benefits versus the dangers?

Perhaps these things are done differently in different cultures. This is how it is done in the U.K. Notice the emphasis throughout on looking to see if it is safe, not on rules to obey because someone says so and punishment, which figures not at all.

The earlier "Kerb Drill" mentioned in that article was a set of rules: look right, look left, look right again, and if clear, cross. That is why it was superceded.

comment by ChrisHallquist · 2013-04-09T05:27:39.692Z · LW(p) · GW(p)

One thing I should have mentioned earlier: it's one thing to claim that humans implicitly gamble all the time, another to claim that they implicitly assign probabilities when they do. It seems like when people make decisions whose outcomes they aren't sure of, most of the time "they're using heuristics that bypass probability" is a better model of their behavior than "they're implicitly assigning such-and-such probabilities."

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-04-09T05:30:15.277Z · LW(p) · GW(p)

Well, I think that depends on what you mean by "implicitly." As I mentioned in another comment, I think there's a difference between assigning probabilities in System 1 and assigning probabilities in System 2, and that probably many people are good at the former in their domains of expertise but bad at the latter. Which do you mean?

comment by shminux · 2013-04-08T22:05:02.991Z · LW(p) · GW(p)

What would be such adversarial assumptions in your street-crossing example?

Replies from: HonoreDB
comment by HonoreDB · 2013-04-08T23:09:21.190Z · LW(p) · GW(p)

I'm standing at a 4-way intersection. I want to go to the best restaurant at the intersection. To the west is a three-star restaurant, to the north is a two-star restaurant, and to the northwest, requiring two street-crossings, is a four-star restaurant. All of the streets are equally safe to cross except for the one in between the western restaurant and the northern one, which is more dangerous. So going west, then north is strictly dominated by going north, then west. Going north and eating there is strictly dominated by going west and eating there. This means that if I cross one street, and then change my mind about where I want to eat based on the fact that I didn't die, I've been dutch-booked by reality.

That might need a few more elements before it actually restricts you to VNM-rationality.

Replies from: SilasBarta
comment by SilasBarta · 2013-04-23T20:21:05.516Z · LW(p) · GW(p)

Where is reality's corresponding utility gain?

Replies from: HonoreDB
comment by HonoreDB · 2013-04-30T15:46:58.143Z · LW(p) · GW(p)

The bad news is there is none. The good news is that this means, under linear transformation, that there is such a thing as a free lunch!

comment by ChrisHallquist · 2013-04-08T23:59:49.132Z · LW(p) · GW(p)

It's valuable to know what can happen under adversarial assumptions even if you don't expect those assumptions to hold.

That sounds right; the question is the extent of that value, and what it means for doing epistemology and decision theory and so on.

This isn't strong evidence; you're mixing up P(is successful | makes good probability estimates) with P(makes good probability estimates | is successful).

Tweaked the wording, is that better? ("Compatible" was a weasel word anyway.)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-04-09T00:05:57.063Z · LW(p) · GW(p)

Therefore, it seems that the relationship between being able to make accurate probability estimates and success in fields that don't specifically require them is weak.

I would still dispute this claim. My guess of how most fields work is that successful people in those fields have good System 1 intuitions about how their fields work and can make good intuitive probability estimates about various things even if they don't explicitly use Bayes. Many experiments purporting to show that humans are bad at probability may be trying to force humans to solve problems in a format that System 1 didn't evolve to cope with. See, for example, Cosmides and Tooby 1996.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-04-09T00:34:37.540Z · LW(p) · GW(p)

Thanks. I was not familiar with that hypothesis, will have to look at C&T's paper.

comment by lukeprog · 2013-04-08T22:04:39.361Z · LW(p) · GW(p)

Recommend adding a tl;dr.

Replies from: fubarobfusco, MarkusRamikin, ChrisHallquist
comment by fubarobfusco · 2013-04-09T00:25:44.072Z · LW(p) · GW(p)

The somewhat less anti-intellectual words for this are "summary" and "abstract".

Replies from: Eliezer_Yudkowsky, Wei_Dai
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-04-09T19:24:06.223Z · LW(p) · GW(p)

I like TL;DR. It reminds the author of the basic writing principle that nobody wants to read anything you write, they're doing you a favor by reading the first two sentences and if you're still saying boring things by the third sentence that's it. Writing is terrible in proportion to how its circumstances cause writers to ignore this principle, for example school textbooks that children are forced to read, or journal papers that adults are either forced to read or that aren't being written for the sake of producing understanding in anyone.

Replies from: wedrifid, PrawnOfFate
comment by wedrifid · 2013-04-11T11:08:08.827Z · LW(p) · GW(p)

I like TL;DR. It reminds the author of the basic writing principle that nobody wants to read anything you write, they're doing you a favor by reading the first two sentences and if you're still saying boring things by the third sentence that's it. Writing is terrible in proportion to how its circumstances cause writers to ignore this principle, for example school textbooks that children are forced to read, or journal papers that adults are either forced to read or that aren't being written for the sake of producing understanding in anyone.

You present a strong argument that "TL;DR" is an excellent thing to keep in your mind while writing. It is not a terribly good reason to write "TL;DR" as the first section header in a post.

"Too Long; Didn't Read" is just wrong. It doesn't mean "short summary of the key point which can be used to establish whether to bother reading the body". That's what we have words like 'summary', 'abstract', 'introduction' and 'synopsis' for.

comment by PrawnOfFate · 2013-04-10T17:20:05.081Z · LW(p) · GW(p)

Whatever you call them, I like seeing them at the top. Your own postings would be hugely improved by the addition of abstracts.

comment by Wei Dai (Wei_Dai) · 2013-04-11T09:32:23.467Z · LW(p) · GW(p)

"Abstract" is typically used for academic papers, and often has a certain formal structure#Structure). (See also this explanation.) Using it for a LW post sounds pretentious (unless of course the post happens to consist of an academic paper or paper draft).

comment by ChrisHallquist · 2013-04-09T00:01:36.120Z · LW(p) · GW(p)

Added.

comment by ThrustVectoring · 2013-04-09T02:33:45.332Z · LW(p) · GW(p)

Qiaochu_Yuan already made much of the point that I wanted to make. I'd like to add to it that there are a lot of non-gambling examples of things that work like decision-making under uncertainty and probability.

IIRC, Eliezer used an example about an economic pundit trying to allocate preparation time to explaining why the market went up or why it went down. Even if you can't get them to take a bet, they have only so much time, and must divide it between the two cases.

Anyhow, my point is that when you map "willing to spend $X on lottery Y" to "willing to spend X time preparing for eventuality Y", getting dutch-booked looks a lot sillier. You'd be trying to spend more time than you had for all eventualities, for example.
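
A quick sketch of that mapping, with invented numbers (the credences and hours are purely illustrative):

```python
# Sketch of the time-allocation version of a Dutch book: incoherent
# "probabilities" over mutually exclusive outcomes translate into trying
# to spend more preparation time than exists.

hours_available = 10.0

# Incoherent credences: P(market up) + P(market down) > 1.
p_up, p_down = 0.7, 0.6

# Preparing for each eventuality in proportion to its credence:
time_needed = p_up * hours_available + p_down * hours_available
print(time_needed)  # 13.0 hours of preparation out of only 10 available
```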

comment by Sniffnoy · 2013-04-08T22:19:27.990Z · LW(p) · GW(p)

If you're unsure whether an offered bet is favorable to you, the mere fact that someone is offering it to you is pretty strong evidence that it's in their favor.

It seems to me that this is potentially informative -- if you're not certain enough that you're right to be willing to take the bet, this is a signal that maybe you should rethink your probability estimates or position. (Really you'd want to update according to Bayes, of course, but that's hard in general.)

(Spelling nitpick -- you've repeatedly misspelled "lose" as "loose". Would you mind fixing that?)

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-04-08T23:57:23.342Z · LW(p) · GW(p)

It's not a sign that your initial probability estimate was off.

Suppose that... I dunno, elections and sports matches aren't great examples; I'm just going to use an arbitrary proposition p, and it helps if you imagine p isn't the kind of thing people make dumb bets about out of enthusiasm for "their team." If you'd reject an even-odds bet on both p and ~p (even ignoring transaction costs and declining marginal utility, tossing in the extra penny, and so on), plausibly that's because, while you should start off with a probability estimate of 0.5 or so, once you know which side of the bet the other guy wants to take, you should update to give that side a probability higher than 0.5.
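
A quick sketch of that update, with invented likelihoods (the 80%/20% figures are purely hypothetical):

```python
# Sketch of the update described above: your prior for p is 0.5, and you
# suppose a better-informed bettor offers to back p 80% of the time when
# p is true and 20% of the time when it's false.

prior = 0.5
p_offer_given_true = 0.8   # hypothetical
p_offer_given_false = 0.2  # hypothetical

posterior = (p_offer_given_true * prior) / (
    p_offer_given_true * prior + p_offer_given_false * (1 - prior))
print(posterior)  # 0.8: once you see which side they want, even odds is a bad deal
```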

That's the motivation for stipulating that the bookie doesn't know anything more about the proposition they want to bet on than you do.

Also, thanks for catching the spelling errors.

Replies from: jmmcd
comment by jmmcd · 2013-04-10T09:38:55.157Z · LW(p) · GW(p)

"loosing" is still incorrect.

In a sense, bookies could be interpreted as "money pumping" the public as a whole. But somehow, it turns out that any single individual will rarely be stupid enough to take both sides of the same bet from the same bookie, in spite of the fact that they're apparently irrational enough to be gambling in the first place.

Suggest making the link explicit with something like this: "in spite of the fact that they're apparently irrational enough to be part of that public in the first place."

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-04-11T19:48:13.778Z · LW(p) · GW(p)

Gah. Now it should be fixed.

comment by private_messaging · 2013-04-10T17:37:20.892Z · LW(p) · GW(p)

One needs to keep in mind that a lot of the time, the "correct" probability is either very close to 0 or very close to 1; you just can't figure out which because of limited computing time. It does feel like you should use a probability somewhere in the middle in those cases, but a: there's no formalism for doing so, and even more devastatingly, b: spending time on consistent probabilities leaves less time for actual simulation, inference, or whatever it is that you do to find out whether it's very close to 1 or very close to 0.

Replies from: ygert
comment by ygert · 2013-04-10T17:47:05.068Z · LW(p) · GW(p)

Remember that by the Bayesian formulation of probability, there is no such thing as the "correct" probability. All probabilities are conditional on your personal knowledge. Using frequentist language like you are doing just muddles the issue. If you had written your post in the Bayesian formulation, your point would be trivial. (And that, by the way, is the argument for using the Bayesian formulation and not the frequentist one.)

Replies from: private_messaging
comment by private_messaging · 2013-04-10T18:24:05.946Z · LW(p) · GW(p)

Often, you do have enough knowledge to get to something close to 1 or close to 0; you just can't run the computation because it's too expensive.

comment by [deleted] · 2013-04-09T09:46:50.302Z · LW(p) · GW(p)

Another reason people are often reluctant to gamble is that people don't tend to make decisions to maximise utility. Instead, people's evaluation of risk is generally more along the lines of Prospect theory, which makes a "fair" gamble seem unattractive because losses are felt more than gains.

comment by alex_zag_al · 2016-02-23T21:56:36.082Z · LW(p) · GW(p)

I don't know about the role of this assumption in AI, which is what you seem to care most about. But I think I can answer about its role in philosophy.

One thing I want from epistemology is a model of ideally rational reasoning, under uncertainty. One way to eliminate a lot of candidates for such a model is to show that they make some kind of obvious mistake. In this case, the mistake is judging something as a good bet when really it is guaranteed to lose money.