When betting, consider non-ergodicity and absorbing states

post by Nina Rimsky (NinaR) · 2023-06-01T18:51:48.973Z · LW · GW · 2 comments

This is a link post for https://ninarimsky.substack.com/p/when-betting-consider-non-ergodicity

Contents

  Ergodic and Non-Ergodic Processes
  Absorbing States
  Betting in Ergodic vs. Non-Ergodic Processes
  Go for Long Term Survival
    Diversification
    Dynamic Betting Sizes
    Limit Exposure to Absorbing States
  When to Be Less Risk Averse
  Practical Decision Making given Absorbing States

This post was inspired by listening to this episode of the EconTalk podcast, where Russ Roberts interviews Luca Dellanna.

Understanding non-ergodicity and absorbing states provides vital insights into how to approach betting/decision-making when there are risks of catastrophic outcomes. 

Ergodic and Non-Ergodic Processes

A process is ergodic if it behaves the same when observed over a long period as it does, on average, over many independent realizations at a single point in time - in an ergodic process, time averages are the same as ensemble averages.

Consider a simple coin toss, a classic example of an ergodic process. The probability of getting heads in any given toss is 50%, irrespective of past events. Hence, the average outcome over a long series of tosses will match the individual toss probability.

In an ergodic process, the time average equals the ensemble average as $N$ approaches infinity: $\lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} X_i = \mathbb{E}[X]$. Here, $X_i$ are the individual random variables of the process, $N$ is the number of trials or steps in the process, and $\mathbb{E}[X]$ is the ensemble average. The left-hand side represents the time average, and the right-hand side is the expected value, or ensemble average.
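This equality is easy to check numerically. A quick sketch in Python (the function names are my own) estimates both averages for a fair coin:

```python
import random

def time_average(num_tosses: int, seed: int = 0) -> float:
    """Fraction of heads for ONE player tossing a fair coin many times."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(num_tosses))
    return heads / num_tosses

def ensemble_average(num_players: int, seed: int = 1) -> float:
    """Fraction of heads across MANY players each tossing once."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(num_players))
    return heads / num_players
```

Both estimates converge to the same value, 0.5, which is what makes the coin toss ergodic.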

In contrast, a process is non-ergodic when time averages and ensemble averages differ. This means that the historical path of the process influences future outcomes. For example, in a game of Russian roulette, the outcome is always fatal if played for enough rounds. It's a clear example of a non-ergodic process, as the player cannot repeat the game indefinitely.
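A small sketch makes the contrast concrete. Assuming a six-chamber revolver with one round, re-spun each pull, the ensemble view of a single round looks mild (5 of 6 players survive), but one player's probability of surviving many rounds decays toward zero:

```python
def survival_probability(rounds: int) -> float:
    """Chance that one player survives `rounds` consecutive pulls,
    with a 5/6 survival probability per pull."""
    return (5 / 6) ** rounds
```

The ensemble average over one round (5/6 survival) tells you nothing reassuring about the time average for an individual who keeps playing.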

Absorbing States

An absorbing state in a stochastic process is a state that, once entered, cannot be left. It's called "absorbing" because the system gets trapped in this state. The game of Russian roulette mentioned earlier has a clear absorbing state: death. 

These states are critical in betting because they represent points of no return. If you're betting in a system with an absorbing state, your strategy needs to account for the reality that specific outcomes will permanently impact your capacity to continue.

The expected number of steps before being absorbed in any absorbing state when starting in transient state $i$ is given by the $i$-th entry of the vector $t = (I - Q)^{-1}\mathbf{1}$. Here, $I$ is the identity matrix, $Q$ is the transition matrix restricted to the transient states (i.e., excluding absorbing states), and $\mathbf{1}$ is a column vector of ones.
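As a sketch, this vector can be computed directly by solving the linear system; the toy transition matrix below is my own invention, not from the post:

```python
import numpy as np

# Q: transitions among the transient states of a toy 3-state chain.
# State 0: stays w.p. 0.4, moves to state 1 w.p. 0.5, absorbed w.p. 0.1.
# State 1: stays w.p. 0.6, absorbed w.p. 0.4.
Q = np.array([[0.4, 0.5],
              [0.0, 0.6]])

# Expected steps to absorption from each transient state: t = (I - Q)^{-1} 1
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
```

Solving rather than explicitly inverting `(I - Q)` is the standard numerically stable choice.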

A process with an absorbing state will be non-ergodic if there is a non-zero probability of reaching the absorbing state from the states the process visits. (Strictly speaking, any state from which the absorbing state is reachable is transient rather than recurrent.) Once the absorbing state is entered, the process can never leave it, so it stops visiting all other states infinitely often - a necessary condition for ergodicity. Thus, the non-zero possibility of ending up in an irreversible state disrupts the ergodic property.

Betting in Ergodic vs. Non-Ergodic Processes

Betting strategies in ergodic processes can differ significantly from those in non-ergodic processes. In ergodic processes, where past events do not influence future ones, you can use methods like the Kelly Criterion, a well-known formula used to determine the optimal size of a series of bets, to maximize your expected growth rate over time. The process's memoryless nature means you can base your betting strategy on the known odds without considering the outcomes of previous bets.

The basic formula for the Kelly Criterion is $f^* = \frac{bp - q}{b}$, where $f^*$ is the fraction of the current bankroll to wager, $b$ is the net odds received on the wager (i.e., odds are "b to 1"), $p$ is the probability of winning, and $q = 1 - p$ is the probability of losing.
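The formula is a one-liner in code. For instance, at even odds (b = 1) with a 60% win probability, Kelly says to wager 20% of the bankroll:

```python
def kelly_fraction(b: float, p: float) -> float:
    """Optimal bet fraction f* = (b*p - q) / b, with q = 1 - p."""
    q = 1 - p
    return (b * p - q) / b
```

Note that when the bet has zero edge (b·p = q), the formula returns 0: don't bet at all.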

In contrast, non-ergodic processes require a more careful approach. Here, it's crucial to consider the path dependence and the existence of absorbing states. Strategies like the Kelly Criterion may not be suitable in these cases because they don't account for the possibility of a catastrophic loss. 

Go for Long Term Survival

Maximizing growth isn't always optimal in non-ergodic processes with absorbing states. Sometimes, avoiding the worst outcomes is the winning bet - long-term survival outweighs short-term gains. Therefore, betting strategies in these scenarios should heavily emphasize risk management. 

Here are a few general suggestions for such a betting strategy:

Diversification

Diversification is an effective way to reduce risk exposure by spreading bets across various non-correlated outcomes. This way, if one bet leads to an absorbing state, it won't necessarily result in a total loss, as the other bets may balance out the loss.
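As a toy illustration of why this helps (assuming, optimistically, fully independent bets, with total ruin only if every bet fails), splitting a bankroll across independent bets drives the probability of losing everything down exponentially:

```python
def ruin_probability(loss_prob: float, num_independent_bets: int) -> float:
    """Probability of total ruin when the bankroll is split evenly across
    independent bets, each failing (going to zero) with loss_prob.
    Ruin requires all of them to fail at once."""
    return loss_prob ** num_independent_bets
```

Real bets are rarely fully independent, so this is a best case; correlated bets fail together and diversify far less than this suggests.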

Dynamic Betting Sizes

Adapt your bet size based on your overall wealth and the risk associated with each bet. This approach is akin to the Kelly Criterion but with a cautious modification: you bet a fraction of the optimal Kelly size to buffer against the amplified consequences of losing bets in systems with absorbing states. This strategy is often referred to as "Fractional Kelly Betting."

If you choose to bet a fraction $c$ of the Kelly amount, then under the standard continuous (Gaussian) approximation the expected growth rate of your wealth is $g(c) = (2c - c^2)\,g^*$, where $g^* = \mu^2 / (2\sigma^2)$ is the full-Kelly growth rate, $\mu$ is the expected return of the bet, and $\sigma^2$ is the variance of the outcome of the bet. Betting less than the full Kelly fraction reduces your wealth's expected growth rate, but it also reduces volatility and the risk of ruin: at half-Kelly ($c = 1/2$), you keep 75% of the optimal growth rate with half the volatility.
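A minimal sketch of this trade-off, using the standard continuous approximation in which the growth rate at a fraction c of full Kelly is (2c − c²) times the full-Kelly rate:

```python
def relative_growth_rate(c: float) -> float:
    """Growth rate at fraction c of full Kelly, relative to the
    full-Kelly rate, in the continuous approximation: 2c - c^2."""
    return 2 * c - c ** 2
```

Because volatility scales linearly with c while the growth penalty is only quadratic, half-Kelly buys a large reduction in risk for a modest reduction in growth - which is why fractional Kelly is popular in practice.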

Limit Exposure to Absorbing States

If you can identify the outcomes that would lead to an undesired absorbing state, you should limit your exposure to those outcomes as much as possible. This might involve avoiding specific bets altogether, or stopping betting after a predetermined number of losses.

When to Be Less Risk Averse

In the example of Russian roulette, the absorbing state in question is clearly bad. However, what if the absorbing state is a desirable one? For example, consider the process of applying for jobs. Each individual application may have a low chance of success and require significant effort (hence a negative expected value in an ensemble-average view), but if you do land a job that you enjoy and find fulfilling, you reach a positive absorbing state: you stop applying for jobs and enjoy a significant increase in life satisfaction. You should therefore be willing to take on more risk than you would by default, because this process is non-ergodic in your favor.

The notion of pursuing a positive absorbing state also applies to the realm of entrepreneurship. If an entrepreneur continually seeks promising business opportunities, even if each venture has a low probability of success due to various factors such as market competition and technological challenges, their cumulative chance of reaching a prosperous terminal state rises as they persistently attempt new ventures. Of course, this is only a good strategy if each attempt does not risk bankruptcy or some other kind of bad absorbing state, such as ending up on the wrong side of the law. 

Practical Decision Making given Absorbing States

The typical method of cost-benefit analysis involves examining a specific situation or act and weighing the likelihood of a positive outcome against the cost of action. However, if the decision can be made repeatedly and there are potential absorbing states, it's essential to consider the average outcome over time. Specifically, consider the following:

  1. How many times can I anticipate making this decision before reaching an absorbing state?
  2. What potential absorbing state(s) could I encounter?

If the answer to 1) is a number you could realistically reach, your decision-making should be driven primarily by the characteristics of the absorbing states rather than by the per-decision probabilities of success or failure.

For example, some enjoy the exhilaration of riding a motorcycle, and it's likely that nothing bad will happen on any given ride. However, every ride represents a repeated decision and a chance to reach a terminal state of a severe accident or even death. Based on 2020 data of roughly 468 injuries and 31.64 fatalities per 100 million vehicle miles traveled - about 500 incidents per 100 million miles, or one per 200,000 miles - you can expect, on average, a severe injury or death every 200,000 miles. If each ride is 50 miles on average, that is an expected injury or death after 4,000 rides. This is why I don't want my friends to get motorcycles.
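The back-of-the-envelope arithmetic above, spelled out:

```python
# 2020 US data cited above: incidents per 100 million vehicle miles traveled.
injuries_per_100m_miles = 468
fatalities_per_100m_miles = 31.64

incidents_per_mile = (injuries_per_100m_miles + fatalities_per_100m_miles) / 100_000_000

miles_per_incident = 1 / incidents_per_mile        # ~200,000 miles per severe incident
ride_length_miles = 50
rides_per_incident = miles_per_incident / ride_length_miles  # ~4,000 rides
```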

2 comments


comment by jkraybill · 2023-06-01T23:37:53.203Z · LW(p) · GW(p)

As a poker player, this post is the best articulation I've read that explains why optimal tournament play is so different from optimal cash-game play. Thanks for that!

comment by meijer1973 · 2023-06-02T09:06:09.726Z · LW(p) · GW(p)

Agreed - one of the objectives of a game is to not die during the game. This is also true for possibly fatal experiments like inventing AGI. You have one or a few shots to get it right. But to win, you've got to stay in the game.