Weird Things About Money

post by abramdemski · 2020-10-03T17:13:48.772Z · LW · GW · 31 comments


1. Money wants to be linear, but wants even more to be logarithmic.

2. Money wants to go negative, but can't.

31 comments

Comments sorted by top scores.

comment by gjm · 2020-10-03T23:33:12.246Z · LW(p) · GW(p)

It's true that diminishing marginal utility can produce some degree of risk-aversion. But there's good reason to think that no plausible utility function can produce the risk-aversion we actually see -- there are theorems along the lines of "if your utility function makes you prefer X to Y then you must also prefer A to B" where pretty much everyone prefers X to Y and pretty much no one prefers A to B.

[EDITED to add:] Ah, found the specific paper I had in mind: "Diminishing Marginal Utility of Wealth Cannot Explain Risk Aversion" by Matthew Rabin. An example from the paper: if you always turn down a 50/50 bet where you could either lose $10 or gain $10.10, and if the only reason is the shape of your utility function, then you should also always turn down a 50/50 bet where you could either lose $1000 or gain all the money in the world. (However much money there is in the world.)

Replies from: johnswentworth, alexander-gietelink-oldenziel, abramdemski
comment by johnswentworth · 2020-10-04T00:20:58.758Z · LW(p) · GW(p)

I didn't believe that claim, so I looked at the paper. The key piece is that you must always turn down the 50/50 lose 10/gain 10.10 bet, no matter how much wealth you have - i.e. even if you had millions or billions of dollars, you'd still turn down the small bet. Considering that assumption, I think the real-world applicability is somewhat more limited than the paper's abstract seems to indicate.
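To illustrate why that assumption is so strong: a log-utility agent, for instance, only rejects the small bet when nearly broke. A quick check (log utility here is just an illustrative choice, not the paper's):

```python
import math

def accepts_small_bet(wealth, lose=10.0, gain=10.10):
    """Does a log-utility agent take the 50/50 lose-$10/gain-$10.10 bet?"""
    expected_u = 0.5 * math.log(wealth + gain) + 0.5 * math.log(wealth - lose)
    return expected_u > math.log(wealth)

# Acceptance reduces to (w + 10.10)(w - 10) > w^2, i.e. w > 10 * 10.10 / 0.10 = $1010,
# so a log-utility agent only rejects this bet below about a thousand dollars of wealth.
print(accepts_small_bet(1000))  # False
print(accepts_small_bet(2000))  # True
```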

That said, there are multiple independent lines of evidence in various contexts suggesting that humans' degree of risk-aversion is too strong to be accounted for by diminishing marginals alone, so I do still think that's true.

Replies from: gjm
comment by gjm · 2020-10-04T14:58:37.040Z · LW(p) · GW(p)

The paper has some more sophisticated examples that make less stringent assumptions. Here are a couple. "Suppose, for instance, we know a risk-averse person turns down 50-50 lose $100/gain $105 bets for any lifetime wealth level less than (say) $350,000, but know nothing about her utility function for wealth levels above $350,000, except that it is not convex. Then we know that from an initial wealth level of $340,000 the person will turn down a 50-50 bet of losing $4,000 and gaining $635,670. If we only know that a person turns down lose $100/gain $125 bets when her lifetime wealth is below $100,000, we also know she will turn down a 50-50 lose $600/gain $36 billion bet beginning from a lifetime wealth of $90,000."

Replies from: Jiro, Jiro
comment by Jiro · 2020-10-05T03:02:55.027Z · LW(p) · GW(p)

Bets have fixed costs to them in addition to the change in utility from the money gained or lost. The smaller the bet, the more those fixed costs dominate. And at some point, even the hassle from just trying to figure out that the bet is a good deal dwarfs the gain in utility from the bet. You may be better off arbitrarily refusing to take all bets below a certain threshold because you gain from not having overhead. Even if you lose out on some good bets by having such a policy, you also spend less overhead on bad bets, which makes up for that loss.

The fixed costs also change arbitrarily; if I have to go to the ATM to get more money because I lost a $10.00 bet, the disutility from that is probably going to dwarf any utility I get from a $0.10 profit, but whether the ATM trip is necessary is essentially random.

Of course you could model those fixed costs as a reduction in utility, in which case the utility function is indeed no longer logarithmic, but you need to be very careful about what conclusions you draw from that. For instance, you can't exploit such fixed costs to money pump someone.

Replies from: gjm
comment by gjm · 2020-10-05T23:55:04.495Z · LW(p) · GW(p)

Yup, I agree with all that, and I think it is one of the reasons for (at least some instances of) loss aversion. I wonder whether there have been attempts to probe loss aversion in ways that get around this issue, maybe by asking subjects to compare scenarios that somehow both have the same overheads.

comment by abramdemski · 2020-10-04T17:08:49.077Z · LW(p) · GW(p)

if you always turn down a 50/50 bet where you could either lose $10 or gain $10.10, and if the only reason is the shape of your utility function, then you should also always turn down a 50/50 bet where you could either lose $1000 or gain all the money in the world. (However much money there is in the world.)

Is the idea supposed to be that humans always turn down such a bet?

Replies from: gjm, koreindian
comment by gjm · 2020-10-04T23:47:22.851Z · LW(p) · GW(p)

The idea is supposed to be that turning down the first sort of bet looks like ordinary risk aversion, the phenomenon that some people think concave utility functions explain; but that if the explanation is the shape of the utility function, then those same people who turn down the first sort of bet -- which I think a lot of people do -- should also turn down the second sort of bet, even though it seems clear that a lot of those people would not turn down a bet that gave them a 50% chance of losing $1k and a 50% chance of winning Jeff Bezos's entire fortune.

(I personally would probably turn down a 50-50 bet between gaining $10.10 and losing $10.00. My consciously-accessible reasons aren't about losing $10 feeling like a bigger deal than gaining $10.10, they're about the "overhead" of making the bet, the possibility that my counterparty doesn't pay up, and the like. And I would absolutely take a 50-50 bet between losing $1k and gaining, say, $1M, again assuming that it had been firmly enough established that no cheating was going on.)

Replies from: abramdemski
comment by abramdemski · 2020-10-05T15:04:46.056Z · LW(p) · GW(p)

But would you continue turning down such bets no matter how big your bankroll is? A serious investor can have a lot of automated systems in place to reduce the overhead of transactions. For example, running a casino can be seen as an automated system for accepting bets with a small edge.

(Similarly, you might not think of a millionaire as having time to sell you a ball point pen with a tiny profit margin. But a ball point pen company is a system for doing so, and a millionaire might own one.)

If you were playing some kind of stock/betting market, you would be wise to write a script to accept such bets up to the Kelly limit, if you could do so.
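As a sketch of how little such a script involves (using the lose-$1000/gain-$1010 bet from the quote above; the formula is just the standard Kelly fraction):

```python
def kelly_fraction(p_win, net_odds):
    """Standard Kelly fraction f* = p - q/b for a bet that pays net_odds
    per unit staked with probability p_win and loses the stake otherwise.
    A non-positive result means the bet has no edge: stake nothing."""
    return max(0.0, p_win - (1.0 - p_win) / net_odds)

def stake(bankroll, p_win, net_odds):
    """How much an automated bettor would put on one such bet."""
    return bankroll * kelly_fraction(p_win, net_odds)

# The 50/50 lose-$1000/gain-$1010 bet: staking 1000 wins 1010, so b = 1.01.
f = kelly_fraction(0.5, 1.01)
print(f)  # ≈ 0.00495: risk about half a percent of the bankroll per bet
```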

Also see my reply to koreindian [LW(p) · GW(p)].

Replies from: gjm
comment by gjm · 2020-10-05T23:58:02.580Z · LW(p) · GW(p)

My bankroll is already enough bigger than $10.10 that shortage of money isn't the reason why I would not take that bet.

I might well take a bet composed of 100 separate $10/$10.10 bets (I'd need to think a bit about the actual distribution of wins and losses before deciding) even though I wouldn't take one of them in isolation, but that's a different bet.

comment by koreindian · 2020-10-04T20:54:15.301Z · LW(p) · GW(p)

Yes, many humans exhibit the former betting behavior but not the latter. Rabin argues that an EU maximizer doing the former will do the latter. Hence, we need to think of humans as something other than EU maximizers.

Replies from: abramdemski
comment by abramdemski · 2020-10-05T14:57:32.238Z · LW(p) · GW(p)

OK.

But humans who work the stock market would write code to vacuum up 1000-to-1010 investments as fast as possible, to take advantage of them before others, so long as they were small enough compared to the bankroll to be approved of by fractional Kelly betting.

Unless the point is that they're so small that it's not worth the time spent writing the code. But then the explanation seems to be perfectly reasonable attention allocation. We could model the attention allocation directly, or we could model them as utility maximizers up to epsilon -- like, they don't reliably pick up expected utility when it's under $20 or so.

I'm not contesting the overall conclusion that humans aren't EV maximizers, but this doesn't seem like a particularly good argument.

comment by Bunthut · 2020-10-04T21:30:08.736Z · LW(p) · GW(p)

Money wants to be linear, but wants even more to be logarithmic

I think this is mixing up two things. First, a diminishing marginal utility in consumption measured in money. This can lead to risk averse behaviour, but it could be any sublinear function, not just logarithmic, and I have seen no reason to think it's logarithmic in actually existing humans.

if you have risk-averse behavior, other agents can exploit you by selling you insurance.

I wouldn't call it "exploit". It's not a money pump that can be repeated arbitrarily often, it's simply a price you pay for stability.

This "money" acts very much like utility, suggesting that utility is supposed to be linear in money.

Only the utility of the agent in question is supposed to be linear in this "money", and that can always be achieved by a monotone transformation. This is quite different from suggesting there's a resource everyone should be linear in under the same scaling.

The second thing is the Kelly criterion. The Kelly criterion exists because money can compound. This is also why it produces specifically a logarithmic structure. Kelly theory recommends using the criterion regardless of the shape of your utility in consumption, if you expect many more games after this one - it is much more like a convergent instrumental goal. So this:

Kelly betting is fully compatible with expected utility maximization, since we can maximize the expectation of the logarithm of money.

is just wrong AFAICT. This is compatible from the side of utility maximization, but not from the side of Kelly as theory. Of course you can always construct a utility function that will behave in a specific way - this isn't saying much.

This means the previous counterpoint was wrong: expected-money bettors profit in expectation from selling insurance to Kelly bettors, but the Kelly bettors eventually dominate the market

Depends on how you define "dominate the market". In most worlds, most (by headcount) of the bettors still around will be Kelly bettors. I even think that weighing by money, in most worlds Kelly bettors would outweigh expectation maximizers. But weighing by money across all worlds, the expectation maximizers win - by definition. The Kelly criterion "almost surely" beats any other strategy when played sufficiently long - but it only wins by some amount in the cases where it wins, and it's infinitely behind in the infinitely unlikely case that it doesn't win.

Kelly betting really is incompatible with expectation maximization. It deliberately takes a lower average. The conflict is essentially over two conflicting infinities: Kelly notes that for any sample size, if there's a long enough duration, Kelly wins. And maximization notes that for any duration, if there's a big enough sample size, maximization wins.
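A toy simulation makes the "lower average, higher typical outcome" point concrete (my own illustrative setup: a bet that doubles risked money with probability 0.75, for which the Kelly fraction is 2p − 1 = 0.5):

```python
import random

def final_wealths(bet_fraction, rounds=20, trials=10000, p=0.75, seed=0):
    """Final wealths (start = 1) when risking bet_fraction each round;
    the risked amount doubles with probability p and is lost otherwise."""
    rng = random.Random(seed)
    out = []
    for _ in range(trials):
        w = 1.0
        for _ in range(rounds):
            stake = w * bet_fraction
            w += stake if rng.random() < p else -stake
        out.append(w)
    return sorted(out)

ev = final_wealths(1.0)     # bet everything: maximizes expected wealth
kelly = final_wealths(0.5)  # Kelly: maximizes expected log-wealth

mean = lambda xs: sum(xs) / len(xs)
print(mean(ev), ev[len(ev) // 2])        # large mean, median 0 (almost all broke)
print(mean(kelly), kelly[len(kelly) // 2])  # smaller mean, comfortable median
```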

Money wants to go negative, but can't.

A lot of what you say here goes into monetary economics, and you should ask someone in the field or at least read up on it before relying on any of this. Probably you shouldn't rely on it even then, if at all avoidable.

Replies from: abramdemski
comment by abramdemski · 2020-10-19T15:07:03.381Z · LW(p) · GW(p)

is just wrong AFAICT. This is compatible from the side of utility maximization, but not from the side of Kelly as theory. Of course you can always construct a utility function that will behave in a specific way - this isn't saying much.

I agree that (1) I'm just constructing a utility function that results in the Kelly behavior, and (2) there's still a conceptual incompatibility between the classic argument for Kelly and EV theory. But I still think it's important to point out that the behavioral recommendations of Kelly do not violate the VNM axioms in any way, so the incompatibility is not as great as it may seem. This is important because it would be nice to reconcile the two philosophies, forging a new philosophy which is more robust than either.

Depends on how you define "dominate the market". In most worlds, most (by headcount) of the bettors still around will be Kelly bettors. I even think that weighing by money, in most worlds Kelly bettors would outweigh expectation maximizers. But weighing by money across all worlds, the expectation maximizers win - by definition.

Right.

Kelly betting really is incompatible with expectation maximization. It deliberately takes a lower average.

And yet it doesn't violate VNM, which means the classic argument for maximizing expected utility goes through. How can this paradox be resolved? By noting that utility is just whatever quantity expectation maximization does go through for, "by definition" (as I said in the post).

The conflict is essentially over two conflicting infinities: Kelly notes that for any sample size, if theres a long enough duration Kelly wins. And maximization notes that for any duration, if theres a big enough sample size maximisation wins.

Right, agreed.

I'm curious if you're taking a side, here, wrt which limit one should take.

To the extent that it's Kelly vs VNM, Kelly seems more practical (applying to real betting), while VNM provides a much more general theory of decision making (since money (or another compounding good) does not need to be present in order for VNM to be relevant).

Replies from: Bunthut
comment by Bunthut · 2020-10-28T11:57:37.479Z · LW(p) · GW(p)

But I still think it's important to point out that the behavioral recommendations of Kelly do not violate the VNM axioms in any way, so the incompatibility is not as great as it may seem.

I think the interesting question is what to do when you expect many more, but only finitely many rounds. It seems like Kelly should somehow gradually transition, until it recommends normal utility maximization in the case of only a single round happening ever. Log utility doesn't do this. I'm not sure I have anything that does though, so maybe it's unfair to ask it from you, but still it seems like a core part of the idea, that the Kelly strategy comes from the compounding, is lost. 

And yet it doesn't violate VNM, which means the classic argument for maximizing expected utility goes through. How can this paradox be resolved? By noting that utility is just whatever quantity expectation maximization does go through for, "by definition".

This is the sort of argument you want to be very suspicious of if you're confused, as I suspect we are. For example, you can now just apply all the arguments that made Kelly seem compelling again, but this time with respect to the new, logarithmic utility function. Do they actually seem less compelling now? A little bit, yes, because I think we really are sublinear in money, and the intuitions related to that went away. But no matter what the utility function, we can always construct bets that are compounding in utility, and then bettors which are Kelly with respect to that utility function will come to dominate the market. So if you do this reverse-inference of utility, the utility function of Kelly bettors will seem to change based on the bets offered.

I'm curious if you're taking a side, here, wrt which limit one should take.

Not really, I think we're too confused to say yet. I do think I understand decisions with bounded utility (all the classical foundations imply bounded utilities, including VNM. This doesn't seem to be well known here). Bounded utility makes maximization a lot more Kelly: it means that the maximizers can no longer have the arbitrarily high pay-offs that are needed to balance the near-certainty of elimination. I also think it should make it not matter which limit you take first, but I don't think that leads to Kelly, either, because the betting structure that leads to Kelly assumes unbounded utility. Perhaps it would end up as a local approximation somewhere.

Now I also think that bounded decision theory is inadequate. I think a decision theory should be able to implement a paperclip maximizer, and it should work in worlds that last infinitely long. But I don't have something that fulfills that. I think there's a good chance the solution doesn't look like utility at all: a theorem that needs its problem to be finite probably won't do well in embedded problems.

Replies from: abramdemski
comment by abramdemski · 2020-10-28T16:25:37.775Z · LW(p) · GW(p)

I think the interesting question is what to do when you expect many more, but only finitely many rounds. It seems like Kelly should somehow gradually transition, until it recommends normal utility maximization in the case of only a single round happening ever. Log utility doesn't do this. I'm not sure I have anything that does though, so maybe it's unfair to ask it from you, but still it seems like a core part of the idea, that the Kelly strategy comes from the compounding, is lost. 

Ah, I see, interesting.

This is the sort of argument you want to be very suspicious of if you're confused, as I suspect we are. For example, you can now just apply all the arguments that made Kelly seem compelling again, but this time with respect to the new, logarithmic utility function. Do they actually seem less compelling now? A little bit, yes, because I think we really are sublinear in money, and the intuitions related to that went away. But no matter what the utility function, we can always construct bets that are compounding in utility, and then bettors which are Kelly with respect to that utility function will come to dominate the market. So if you do this reverse-inference of utility, the utility function of Kelly bettors will seem to change based on the bets offered.

Yeah, I agree with this.

(all the classical foundations imply bounded utilities, including VNM. This doesn't seem to be well known here)

Yeah. I'm generally OK with dropping continuity-type axioms, though, in which case you can have hyperreal/surreal utility to deal with expectations which would otherwise be problematic (the divergent sums which unbounded utility allows). So while I agree that boundedness should be thought of as part of the classical notion of real-valued utility, this doesn't seem like a huge deal to me.

OTOH, logical uncertainty / radical probabilism introduce new reasons to require boundedness for expectations. What is the expectation of the self-referential quantity "one greater than your expectation for this value"? This seems problematic even with hyperreals/surreals. And we could embed such a quantity into a decision problem.

Replies from: Bunthut
comment by Bunthut · 2020-10-28T22:56:59.343Z · LW(p) · GW(p)

I'm generally OK with dropping continuity-type axioms, though, in which case you can have hyperreal/surreal utility to deal with expectations which would otherwise be problematic (the divergent sums which unbounded utility allows).

Have you worked this out somewhere? I'd be interested to see it but I think there are some divergences it can't address. There is for one the Pasadena paradox, which is also a divergent sum but one which doesn't stably lead anywhere, not even to infinity. The second is an apparently circular dominance relation: Imagine you are linear in monetary consumption. You start with $1 which you can either spend or leave in the bank, which doubles it every year even after accounting for your time preference/uncertainty/other finite discounting. Now for every n, leaving it in the bank for n+1 years dominates leaving it for n years, but leaving it in the bank forever gets 0 utility. Note that if we replace money with energy here, this could actually happen in universes not too different from ours.

What is the expectation of the self-referential quantity "one greater than your expectation for this value"?

What is the expectation of the self-referential quantity "one greater than your expectation for this value, except when that would go over the maximum, in which case it's one lower than expectation instead"? Insofar as there is an answer it would have to be "one less than maximum", but that would seem to require uncertainty about what your expectations are.

Replies from: abramdemski
comment by abramdemski · 2020-11-02T15:36:33.125Z · LW(p) · GW(p)

Have you worked this out somewhere? I'd be interested to see it but I think there are some divergences it can't address.

It's a bit of a mess due to some formatting changes porting to LW 2.0, but here it is [LW · GW].

I've gotten the impression over the years that there are a lot of different ways to arrive at the same conclusion, although I unfortunately don't have all my sources lined up in one place.

  • I think if you just drop continuity from VNM you get this kind of picture, because the VNM continuity assumption corresponds to the Archimedean assumption for the reals.
  • I think there's a variant of Cox's theorem which similarly yields hyperreal/surreal probabilities (infinitesimals, not infinities, in that case).
  • If you want to condition on probability zero events, you might do so by rejecting the ratio formula for conditional probabilities, and instead giving a basic axiomatization of conditional probability in its own right. It turns out that, at least under one such axiom system, this is equivalent to allowing infinitesimal probability and keeping the ratio definition of conditional probability.

(Sorry for not having the sources at the ready.)

There is for one the Pasadena paradox, which is also a divergent sum but one which doesn't stably lead anywhere, not even to infinity.

Here's how it works. I have to assign expectations to gambles. I have some consistency requirements in how I do this; for example, if you modify a gamble G by making a probability-p outcome have x less value, then I must think the new gamble is worth p·x less. However, how I assign value to divergent sums is subjective -- it cannot be determined precisely from how I assign value to each of the elements of the sum, because I'm not trying to assume anything like countable additivity.

In a case like the St Petersburg Lottery, I believe I'm required to have some infinite expectation. But it's up to me what it is, since there's no one way to assign expectations in infinite hyperreal/surreal sums.

In a case like the Pasadena paradox, though, I'm thinking I'll be subjectively allowed to assign any expectation whatsoever -- so long as all my other infinite-sum expectations are consistent with the assignment.
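Concretely, the Pasadena game pays (−1)^(n+1)·2^n/n when the first heads lands on toss n (probability 2^−n), so the expected-value terms are (−1)^(n+1)/n -- a conditionally convergent series whose sum depends on the summation order. A quick illustration (the payoff schedule is the standard statement of the paradox, not something from this thread):

```python
import math

def partial_ev(order):
    """Partial sum of the Pasadena expected-value terms (-1)**(n+1)/n."""
    return sum((-1) ** (n + 1) / n for n in order)

N = 100000
natural = list(range(1, N + 1))

# Rearrange: two positive (odd-n) terms for every negative (even-n) term.
odds = [n for n in natural if n % 2 == 1]
evens = [n for n in natural if n % 2 == 0]
rearranged = []
for i in range(len(evens) // 2):
    rearranged.extend([odds[2 * i], odds[2 * i + 1], evens[i]])

print(partial_ev(natural))     # ≈ ln 2 ≈ 0.693
print(partial_ev(rearranged))  # ≈ (3/2) ln 2 ≈ 1.040 -- same terms, different "expectation"
```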

You start with 1$ which you can either spend or leave in the bank, which doubles it every year even after accounting for your time preference/uncertainty/other finite discounting. Now for every n, leaving it in the bank for n+1 years dominates leaving it for n years, but leaving it in the bank forever gets 0 utility. Note that if we replace money with energy here, this could actually happen in universes not too different from ours.

Perhaps you can try to problematize this example for me given what I've written above -- not sure if I've already addressed your essential worry here or not.

What is the expectation of the self-referential quantity "one greater than your expectation for this value, except when that would go over the maximum, in which case it's one lower than expectation instead"? Insofar as there is an answer it would have to be "one less than maximum", but that would seem to require uncertainty about what your expectations are.

Yes, uncertainty about your own expectations is where this takes us. But that seems quite reasonable, especially because we only need a very small amount of uncertainty, as is illustrated in this example.

Replies from: Bunthut
comment by Bunthut · 2020-11-07T21:28:49.696Z · LW(p) · GW(p)

However, how I assign value to divergent sums is subjective -- it cannot be determined precisely from how I assign value to each of the elements of the sum, because I'm not trying to assume anything like countable additivity.

This implies that you believe in the existence of countably infinite bets but not countably infinite dutch booking processes. That seems like a strange/unphysical position to be in - if that were the best treatment of infinity possible, I think infinity is better abandoned. I'm not even sure the framework in your linked post can really be said to contain infinite bets: the only way a bet ever gets evaluated is in a bookie strategy, and no single bookie strategy can be guaranteed to fully evaluate an infinite bet. Is there a single bookie strategy that differentiates the St. Petersburg bet from any finite bet? Because if no, then the agent at least can't distinguish them, which is very close to not existing at all here.

In a case like the St Petersburg Lottery, I believe I'm required to have some infinite expectation. 

Why? I haven't found any finite dutch books against not doing so.

Perhaps you can try to problematize this example for me given what I've written above -- not sure if I've already addressed your essential worry here or not.

I don't think you have. That example doesn't involve any uncertainty or infinite sums. The problem is that for any finite n, waiting n+1 is better than waiting n, but waiting indefinitely is worse than any of them. Formally, the problem is that I have a complete and transitive preference between actions, but no unique best action, just a series that keeps getting better.

Note that you talk about something related in your linked post:

I'm representing preferences on sets only so that I can argue that this reduces to binary preference.

But the proof for that reduction only goes one way: for any preference relation on sets, there's a binary one. My problem is that the inverse does not hold.

comment by Lukas Finnveden (Lanrian) · 2020-10-04T18:49:04.942Z · LW(p) · GW(p)

However, it is a theorem that a diverse market would come to be dominated by Kelly bettors, as Kelly betting maximizes long-term growth rate. This means the previous counterpoint was wrong: expected-money bettors profit in expectation from selling insurance to Kelly bettors, but the Kelly bettors eventually dominate the market.

I haven't seen the theorem, so correct me if I'm wrong, but I'd guess it says that for any fixed number of bettors, there exists a time at which the Kelly bettors dominate the market with arbitrary probability. (Alternate phrasing: a market with a finite number of bettors would be dominated by Kelly bettors over infinite time.) But if we flip it around, we can also say that for any fixed time-horizon, there exists a number of bettors such that the EV-maximizers dominate the market throughout that time with arbitrary probability. (Alternate phrasing: a market with an infinite number of bettors would be dominated by EV-maximizers for any finite time.)

I don't see why we should necessarily prefer the first ordering of the quantifiers over the second.

Replies from: johnswentworth
comment by johnswentworth · 2020-10-04T19:43:33.642Z · LW(p) · GW(p)

But if we flip it around, we can also say that for any fixed time-horizon, there exists a number of bettors such that the EV-maximizers dominate the market throughout that time with arbitrary probability.

The number of bettors isn't the relevant parameter here. The relevant parameter is what fraction of the bettors are Kelly vs EV. However you set it up, the fraction of money in the hands of EV bettors will decrease over long time periods with high probability. If we have some fixed time-horizon, as long as that time horizon is fairly long, EV-maximizers will only dominate the market throughout that time with high probability if the market is essentially all EV-maximizers at the beginning.

An analogy: if one species has higher reproductive fitness than another, will that species eventually dominate? The math for Kelly betting is identical to the usual setup for natural selection models.

Replies from: Lanrian
comment by Lukas Finnveden (Lanrian) · 2020-10-04T20:58:28.393Z · LW(p) · GW(p)

The point with having a large number of bettors is to assume that they all get independent sources of randomness, so at least some will win all their bets. Handwavy math follows:

Assume that we have n EV bettors and n Kelly bettors (each starting with $1), and that they're presented with a string of bets with 0.75 probability of doubling any money they risk. The EV bettors will bet everything at each time-step, while the Kelly bettors will bet half at each time-step. For any timestep t, there will be an n such that approximately a 0.75^t fraction of EV bettors have won all their bets (by the law of large numbers), for a total earning of n · 0.75^t · 2^t = n · 1.5^t. Meanwhile, each Kelly bettor will in expectation multiply their earnings by 1.25 each time-step, and so in expectation have 1.25^t after t timesteps. By the law of large numbers, for a sufficiently large n they will in aggregate have approximately n · 1.25^t. Since 1.5^t > 1.25^t, the EV-maximizers will have more money, and we can get an arbitrarily high probability with an arbitrarily large n.
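The growth factors in that argument can be checked exactly (a minimal sketch of the bookkeeping, using only the setup from the comment above):

```python
def expected_growth(bet_fraction, p=0.75):
    """Per-round expected growth factor when risking bet_fraction of wealth
    on a bet that doubles the risked money with probability p."""
    return p * (1 + bet_fraction) + (1 - p) * (1 - bet_fraction)

# EV bettors risk everything; Kelly bettors risk half.
assert expected_growth(1.0) == 1.5
assert expected_growth(0.5) == 1.25

t = 20
n = 1_000_000
surviving_ev = 0.75 ** t * n   # EV bettors who won every bet so far...
print(surviving_ev * 2 ** t)   # ...together hold n * 1.5**t
print(n * 1.25 ** t)           # aggregate Kelly wealth is about n * 1.25**t
```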

Replies from: johnswentworth
comment by johnswentworth · 2020-10-05T00:30:48.134Z · LW(p) · GW(p)

Ah, I see. The usual derivation of the Kelly criterion explicitly assumes that there is a specific sequence of events on which people are betting (e.g. stock market movements or horse-race outcomes); the players do not get to all bet separately on independent sources of randomness. If they could do that, then it would change the setup completely - it opens the door to agents making profits by trading with each other (in order to diversify their portfolios via risk-trades with other agents). Generally speaking, with idealized agents in economic equilibrium, they should all trade risk until they all effectively have access to the same randomness sources.

Another way to think about it: compare the performance of counterfactual Kelly and EV agents on the same opportunities. In other words, suppose I look at my historical stock picks and ask how I would have performed had I been a Kelly bettor or an EV bettor. With probability approaching 1 over time, Kelly betting will seem like a better idea than EV betting in hindsight.

Replies from: Lanrian
comment by Lukas Finnveden (Lanrian) · 2020-10-05T10:45:46.515Z · LW(p) · GW(p)

Thanks, that way to derive it makes sense! The point about free trade also seems right. With free trade, EV bettors will buy all risk from Kelly bettors until the former is gone with high probability.

So my point only applies to bettors that can't trade. Basically, in almost every market, the majority of resources are controlled by Kelly bettors; but across all markets in the multiverse, the majority of resources are controlled by EV bettors, because they make bets such that they dominate the markets which contain most of the multiverse's resources.

(Or if there's no sufficiently large multiverse, Kelly bettors will dominate with arbitrary probability; but EV bettors will (tautologically) still get the most expected money.)

comment by PeterMcCluskey · 2020-10-06T23:15:43.426Z · LW(p) · GW(p)

But if the economy were a computer program, debt would seem like a big hack. There’s no absolute guarantee that debt can be collected.

I can see how mathematicians would dislike an entity that lacks absolute guarantees, but it seems like a quite normal attribute to encounter in the real world.

We can be in a situation where “no one has enough money”—the great depression was a time when there were too few jobs and too much work left undone.

That's mostly accurate, but it leaves out an important step in the causal chain: the "too little money" meant that the wages which workers were accustomed to getting became too high. For reasons that are likely related to bargaining strategies, workers wouldn't accept (or sometimes weren't allowed to accept) wages that gave them fewer dollars, even when those fewer dollars bought them more goods than they were accustomed to.

In other words, there's a path for the value of money to re-adjust, but there's enough opposition to it that most economists have given up on it.

The scarcity problem would not exist if money could be reliably manufactured through debt. ... So it seems like we want to facilitate negative bank accounts “as much as possible, but not too much”?

I'm unclear what "facilitate" is doing here. "Negative bank accounts" is one way to describe a solution, but deflation meant that pretty much everyone preferred a positive bank account to "borrow and invest".

Central banks know how to manufacture money. The main problems are figuring out the right amounts, and ensuring that central banks create those amounts.

Replies from: ChristianKl
comment by ChristianKl · 2020-10-07T15:00:19.378Z · LW(p) · GW(p)

I can see how mathematicians would dislike an entity that lacks absolute guarantees, but it seems like a quite normal attribute to encounter in the real world.

I think the main point here is that debt has a different quality than 'normal money'. Debt doesn't exist in M0, and only exists in broader monetary aggregates than M0. Going from M0 to M1 and M2 is the hack that allows for negative money.

comment by Lukas Finnveden (Lanrian) · 2020-10-04T19:07:51.049Z · LW(p) · GW(p)

Money-pump arguments, on the other hand, can establish this from other assumptions.

Can you say more about this? Stuart's arguments weren't that convincing to me, absent other assumptions. In particular, it seems like the existence of a contract that exactly cancels out your own contract could increase the value of your own contract; and that there's no guarantee that such a contract exists (or can be made without exposing anyone else to risk that they don't want). Stuart seems to acknowledge this in other parts of the comments, instead referring to the possibility of aggregation.

From this, I'm guessing that you need to assume that the risk is almost independent of the total market value (e.g. because it's small in comparison with the total market value, and independent of all other sources of risk), and there exists an arbitrarily large number of traders whose utility is linear in small amounts of money (that you can spread out the risk between). Are these the necessary assumptions to establish linearity of utility in money?
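If those are the right assumptions, the mechanism is easy to check numerically: with concave utility, the risk premium a trader demands for a 1/n share of a fixed risk scales like 1/n², so the total premium across n traders shrinks like 1/n, and in the limit the risk is priced at its expected value. A sketch under assumed parameters (log utility, wealth 100, a 50/50 ±50 risk split among n traders; all numbers illustrative):

```python
import math

def required_premium(wealth, stake):
    """Premium pi making a log-utility trader indifferent to a 50/50
    +/-stake gamble: E[log(wealth + pi +/- stake)] = log(wealth).
    Solved by bisection on [0, stake]."""
    def surplus(pi):
        return 0.5 * (math.log(wealth + pi + stake)
                      + math.log(wealth + pi - stake)) - math.log(wealth)
    lo, hi = 0.0, stake
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if surplus(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

wealth, total_risk = 100.0, 50.0
for n in (1, 10, 100, 1000):
    total = n * required_premium(wealth, total_risk / n)
    print(n, round(total, 4))  # total premium shrinks roughly like 1/n
```

So "an arbitrarily large number of traders" does most of the work: each trader's share becomes small enough that their utility is locally linear, even if no individual trader has globally linear utility.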

Replies from: abramdemski
comment by abramdemski · 2020-10-05T17:48:28.478Z · LW(p) · GW(p)

I basically think you're right: those arguments are weak. But this post was about me reasoning out some of the details for myself.

You make a good point about independent risk. I had only half-noticed that point when thinking about this.

comment by Dagon · 2020-10-07T16:14:40.104Z · LW(p) · GW(p)

I think this (especially the second part) is missing a fundamental aspect of ... well, not just money, but decision-making.  It's about expectations and projections into the future, not about the current definition or valuation.

Debt is no more a hack than is money itself. Neither actually exists; they are simply contingent future values. Zero is not special in this world.

comment by AaronF (aaron-franklin-esq) · 2020-10-05T23:02:43.965Z · LW(p) · GW(p)

Here is Ole Peters: [Puzzle] "Voluntary insurance contracts constitute a puzzle because they increase the expectation value of one party’s wealth, whereas both parties must sign for such contracts to exist [Answer]: Time averages and expectation values differ because wealth changes are non-ergodic." 

Peters again: "Conceptually, its power derives from a new notion of rationality. Many reasonable models of wealth are non-stationary processes. Observables representing wealth then do not have the ergodic property of Section I, and therefore rationality must not be defined as maximizing expectation values of wealth. Rather, we propose as a null model to define rationality as maximizing the time-average growth of wealth." 

You write: "Kelly betting, on the other hand, assumes a finite bankroll -- and indeed, might have to be abandoned or adjusted to handle negative money." [Negative interest rate?] Can you explain more? Would love to fit this conceptually into Peters's non-ergodic growth-rate theory.
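For concreteness, here is the ergodicity gap Peters describes, in his standard coin-flip example (the multipliers 1.5 and 0.6 are the usual illustrative choice, not numbers from this thread): the ensemble average of wealth grows 5% per step, yet the time-average growth rate is negative, so almost every individual trajectory decays.

```python
import math
import random

up, down, p = 1.5, 0.6, 0.5

# Ensemble average: the expected one-step growth factor
expected_factor = p * up + (1 - p) * down                     # 1.05 > 1
# Time average: the growth rate along a single long trajectory
time_avg_rate = p * math.log(up) + (1 - p) * math.log(down)   # negative

print(expected_factor, math.exp(time_avg_rate))

# A single long trajectory tracks the time average, not the expectation:
rng = random.Random(0)
w = 1.0
for _ in range(10_000):
    w *= up if rng.random() < p else down
print(w < 1.0)  # almost surely True: typical wealth decays
```

Maximizing the time-average growth rate in this multiplicative setting recovers the Kelly criterion, which is presumably why the post's "finite bankroll" caveat matters: the time-average framing breaks down once wealth can go through zero into negative territory.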

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-10-05T04:00:09.610Z · LW(p) · GW(p)

I really like this.  I read part 1 as being about the way the economy or society implicitly imposes additional pressures on individuals' utility functions.  Can you provide a reference for the theorem that Kelly bettors predominate?

ETA: an observation: the arguments for expected value also assume infinite value is possible, which (modulo infinite-ethics-style concerns, a significant caveat...) also isn't realistic.