You must on average be improving to not guarantee failure.

post by eeegnu · 2022-03-26T05:28:12.690Z · LW · GW · 1 comment


I was thinking about what happens when you run independent Bernoulli trials forever, with probability $p_i$ of success on trial $i$, but you lose the first time you fail. If all the $p_i$ are equal to some constant $p < 1$, then with probability 1 we will eventually fail[1]. Similarly, if the supremum $s = \sup_i p_i$ of the sequence of probabilities is not 1, then we're also guaranteed to fail, since $\prod_{i=1}^{n} p_i \le s^n \to 0$. The condition that $\lim_{i \to \infty} p_i = 1$ is necessary but not sufficient to guarantee that failure isn't certain[2]. That seems to say that not only do we need to improve, but we also can't improve too slowly, or else we're sure to fail.

So just how quickly do we need to improve? Well, we have that $\sum_{n=1}^{\infty} q_n < \infty$ implies $\prod_{n=1}^{\infty} (1 - q_n) > 0$[3], which is much easier to work with (though this is a one-way implication rather than an equivalence, so it can only be used to show that a sequence $p_i$ improves quickly enough). Let $q_i = 1 - p_i$, which can be interpreted as our probability of failure on trial $i$. We know right away that any kind of geometric progression of failure minimization, e.g. $q_{i+1} = \frac{q_i}{2}$, will give us a better-than-0 chance. That says that for any constant $0 < c < 1$, it's sufficient to have $q_{i+1} \le c \cdot q_i$ to not guarantee failure! But surely we can do better than this, i.e. there should be no need to improve at a constant rate. We know that $\sum_{n=1}^{\infty} \frac{1}{n^2}$ converges, so here we need only $\frac{q_{n+1}}{q_n} \le \left(\frac{n}{n+1}\right)^2$, or simply $q_n = \frac{1}{n^2}$: the further out we go, the less we need to improve relative to our previous rate of failure. This analysis was rather quick, and I'm curious if we can do better than this.
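As a numeric sanity check (this sketch is my addition, not from the original post; `survival_probability` and the example schedules are illustrative choices), we can truncate the infinite product $\prod_i (1 - q_i)$, accumulating in log-space, and compare a few improvement schedules:

```python
import math

def survival_probability(q, n_terms=10**6):
    # Approximate the infinite product prod_i (1 - q(i)) by truncating
    # at n_terms, summing logs to avoid floating-point underflow.
    log_prod = 0.0
    for i in range(1, n_terms + 1):
        q_i = q(i)
        if q_i >= 1.0:
            return 0.0  # a certain failure at any single step zeroes the product
        log_prod += math.log1p(-q_i)
    return math.exp(log_prod)

# Geometric improvement q_i = 2^-i: sum converges, so survival is positive.
print(survival_probability(lambda i: 0.5 ** i))          # ~0.2888

# Polynomial improvement q_i = 1/(i+1)^2: sum converges; product telescopes to 1/2.
print(survival_probability(lambda i: 1 / (i + 1) ** 2))  # ~0.5

# Harmonic "improvement" q_i = 1/(i+1): sum diverges, survival tends to 0.
print(survival_probability(lambda i: 1 / (i + 1)))       # ~1e-6, shrinking as n_terms grows
```

The first two schedules satisfy $\sum_i q_i < \infty$ and settle at a positive survival probability; the harmonic schedule does not, and its truncated product keeps shrinking as more terms are included.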

I thought about this problem after thinking about processes where even a single failure would be catastrophic. While in the real world enough leading zeros in an event's probability make it negligible, things are different if you live forever and you're dealing with something whose chance of failure can never be brought to exactly 0%. You need to at least on average improve your odds between samplings, or failure is eventually certain.

  1. ^

    Technically this isn't guaranteed: there's nothing stopping a fair coin from landing on heads forever other than our finite lifespans preventing us from observing it. But I think it's fair to call a 0% chance of success a failure.

  2. ^

    For example, take $p_i$ to be 0.9 (10 times), then 0.99 (100 times), then 0.999 (1000 times), and so on. Here $p_i \to 1$, but the product over each block of trials is an increasingly good approximation of $\frac{1}{e}$, so the product over all trials still tends to 0 (a quick numeric check of this appears after these footnotes).

  3. ^

    https://math.stackexchange.com/questions/209108/infinite-product-problem-sum-p-n-infty-implies-prod-1-p-n0

    Note that we don't care about the case $q_n = 1$, since this means we're guaranteed to fail anyhow.
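To see footnote 2's example numerically, here's a small Python check (my addition, not part of the original post; truncating at 7 blocks is arbitrary). Each block's product lands near $1/e \approx 0.3679$, so the running product decays roughly geometrically toward 0 even though the success probability within later blocks approaches 1:

```python
import math

# Block k multiplies (1 - 10^-k) with itself 10^k times, i.e. 0.9^10,
# then 0.99^100, then 0.999^1000, ... Each block tends to 1/e, so the
# running product over all blocks shrinks roughly like (1/e)^k.
running = 1.0
for k in range(1, 8):
    block = (1 - 10.0 ** -k) ** (10 ** k)
    running *= block
    print(f"0.{'9' * k}^{10 ** k}: block = {block:.6f}, running = {running:.6f}")

print(f"1/e = {math.exp(-1):.6f}")
```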

1 comment


comment by gwern · 2022-03-26T19:17:35.670Z · LW(p) · GW(p)

https://www.fhi.ox.ac.uk/reports/2012-1.pdf and https://www.gwern.net/docs/philosophy/mind/2004-perry.html make similar points about asymptotically converging to constant fractions to avoid almost surely dying.