Has Pascal's Mugging problem been completely solved yet?

post by EniScien · 2022-11-06T12:52:17.811Z · LW · GW · 1 comment

This is a question post.

Contents

  Answers
    8 Noosphere89
    5 JBlack
    3 Dagon
    2 Charlie Steiner
    1 ZT5
    1 Samuel Hapák
1 comment

Have the Kolmogorov complexity penalty and Hanson's leverage penalty been combined into one? And has the more important, general problem of the expected utility sums not converging been solved? (The LessWrong posts I looked at did not mention whether there is a better solution than the "patch" from the time, and reading the concept page also didn't give me more understanding.)
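
(To make the non-convergence concrete, here is a minimal toy sketch of my own; `utility_named_by` is a deliberately crude stand-in for "the largest utility a k-bit hypothesis can name":)

```python
# Toy illustration of non-convergence: the prior on a hypothesis of
# description-length k falls off like 2**-k, but a k-bit hypothesis can
# promise utilities that grow far faster than 2**k, so the
# expected-utility series diverges instead of settling.

def utility_named_by(k: int) -> float:
    # Crude stand-in: even this gross underestimate (2**2**k) of what a
    # k-bit program can describe already outruns the 2**-k prior.
    return float(2 ** (2 ** k))

def partial_expected_utility(n: int) -> float:
    # Sum of prior * promised utility over the first n hypotheses.
    return sum(2.0 ** -k * utility_named_by(k) for k in range(1, n + 1))

for n in range(1, 8):
    print(n, partial_expected_utility(n))  # the partial sums blow up
```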

Answers

answer by Noosphere89 · 2022-11-06T13:32:56.176Z · LW(p) · GW(p)

The issue is that adversarial reasoning is basically impossible for Bayesians, so weird results crop up.

This also exposes a failure of an assumption behind expected value reasoning: that you are logically omniscient and have infinite time to reason about things. That assumption decidedly does not hold in our world; if we accepted logical omniscience, we could brute-force everything by running infinitely complex simulations, thus becoming omnipotent. Pretty obviously, this isn't true, so Pascal's mugging results when we take the approximation as literal truth.

answer by JBlack · 2022-11-07T02:26:42.466Z · LW(p) · GW(p)

It has been solved in many ways, with different people viewing different solutions as having varying degrees of acceptability and relevance. I don't personally know of anyone who views it as "not acceptably solved", but I'd hesitate to say that it's completely solved in the sense of having one single argument that everyone accepts over all of the others.

answer by Dagon · 2022-11-06T16:27:18.875Z · LW(p) · GW(p)

Not solved, as far as I know. I personally have solved it by biting the bullet that some branches of the universe are going to be suboptimal, and I'm going to put most of my effort into more likely cases. It doesn't have to converge for me to give up on it.

I think this generalizes - EVERY real agent will be finite, and will violate Bayesian tenets in some cases, like "no 0 or 1 in your probabilities". At some point of unlikelihood, the probability is treated as 0, and there is no evidence that will convince the agent otherwise.
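
A minimal sketch of that probability-flooring behavior (the EPSILON cutoff here is an arbitrary illustrative choice, not a principled threshold):

```python
EPSILON = 1e-10  # below this, a finite agent simply rounds the probability to 0

def truncated_expected_value(outcomes):
    # outcomes: iterable of (probability, utility) pairs; branches below
    # the cutoff are dropped entirely, Bayesian tenets notwithstanding.
    return sum(p * u for p, u in outcomes if p >= EPSILON)

# The mugger's branch contributes nothing, however large its stated payoff:
mugging = [(1e-50, 3.0 ** 64), (1.0 - 1e-50, 0.0)]
print(truncated_expected_value(mugging))  # 0.0
```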

answer by Charlie Steiner · 2022-11-07T02:36:46.705Z · LW(p) · GW(p)

One prong is that I think people have shifted more towards bounded utility functions (infinite ethics arguments are another reason).
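
A minimal sketch of how a bounded utility function defuses the mugging; the 1 - exp(-x/scale) shape and the numbers are illustrative choices of mine, not anyone's canonical proposal:

```python
import math

def bounded_utility(x: float, scale: float = 1e6) -> float:
    # Monotone in x but capped at 1.0, so no promised payoff can
    # dominate the calculation once its probability is small enough.
    return 1.0 - math.exp(-x / scale)

p_honest = 1e-20      # chance the mugger actually pays out
promised = 3.0 ** 64  # stand-in for an astronomically large reward

# Unbounded utility: the product is huge, so naive EV says "pay up".
print(p_honest * promised)                   # ~3.4e10
# Bounded utility: the term is at most p_honest * 1.0, i.e. negligible.
print(p_honest * bounded_utility(promised))  # ~1e-20
```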

It's also plausible that infrabayesianism helps agents operate in adversarial environments, but maybe not in the obvious way, since you have to set things up so that worst-case reasoning over certain variables gives the right answer.

answer by Victor Novikov (ZT5) · 2022-11-30T11:55:12.799Z · LW(p) · GW(p)

The situation described in Pascal's mugging is OOD (out-of-distribution) for human values. Human values have not been trained/tested on scenarios with tiny probabilities of vast utilities.

What answer does a system that goes OOD give us? It doesn't matter; we are not supposed to use a system in an OOD context.

Naively extrapolating human values too far is not permitted.

Giving an arbitrary/random answer is not permitted.

But we need to make some sort of decision, and we have nothing but our values to guide us.

But our values are not defined for the decision we are trying to make.

And we are not allowed to define our values arbitrarily.

I think the answer is really complex, and involves something like: "taking all our values and meta-values into account, what is the least arbitrary way we can extend our value system into the space in which we are trying to make a decision?"

So, my answer to Pascal's mugging is: human values are probably not yet ready to answer questions like that, at least not in a consistent manner.

comment by Lao Mein (derpherpize) · 2022-11-30T13:45:27.212Z · LW(p) · GW(p)

Pascal's Mugging isn't OOD. It's very much in-distribution for human beings historically - there is always a scammer waiting on a street corner offering a product that gives you extremely high utility, at very low probability, of course (imagine a tonic that claims to cure smallpox). 

Imagine that I, a LessWrong forum user, claimed to be from outside the simulation and capable of offering you infinite utility (I'm from a universe where that's possible) in exchange for a rare Pepe. That's not a hypothetical offer in a thought experiment. I just did. You had to make a decision when you decided to ignore it. You had to incorporate your values into your decision. You have to do so any time you ignore a scammer, Pascal or otherwise. That's a revealed preference. And humanity as a whole is remarkably consistent in applying its values to ignore Pascal's mugger. If Pascal's mugger is OOD for human values, then anything claiming to give infinite/extremely high utility is also OOD for human values, which, depending on your cutoff, definitely includes the Abrahamic religions and may include the Industrial Revolution. But those aren't OOD. They're a part of life.

comment by lc · 2022-11-30T13:53:17.611Z · LW(p) · GW(p)

I don't think the commenter is saying that muggings and charlatans are out of distribution for humans. I think he is saying that actual, genuine high utility+low probability decisions are unlikely to occur naturally. Your example isn't a counterexample because it's not true and you made it up.

comment by Lao Mein (derpherpize) · 2022-11-30T14:06:27.497Z · LW(p) · GW(p)

No, I was actually Pascal's Mugging him. And you. Show them Pepes, mister. Pascal's Mugger is defined as "a guy who is almost certainly lying extorts you for rare Pepes by promising extreme utility values". Just by making the offer, I become a very real instance of Pascal's Mugger.

Similar offers happen every day. Every time a Mormon knocks on your door (or a JW if you're a Mormon), you have to reason about extreme utility values. And of course "genuine high utility+low probability decisions are unlikely to occur naturally"! They are by definition.

comment by Victor Novikov (ZT5) · 2022-11-30T15:09:09.541Z · LW(p) · GW(p)

Hmm. You are absolutely right, I didn't think of all these examples.

Let me rephrase:
I think probabilities on the order of 1/(3^^^3) are OOD for expected utility calculations.
We mostly don't care about expected utility for probabilities that small.

Pascal's mugging is bucketed into either "this is a scam" or "lottery ticket" by human values. And that is fine, unless this results in a contradiction with some of our other values. But I don't think it does.

"Abrahamic religions"

Extremely high utility yes, extremely low probability no. Usually the idea is that you can get the "infinite" reward though hard work, dedication, belief, etc.

answer by Samuel Hapák · 2022-11-06T19:42:59.516Z · LW(p) · GW(p)

My understanding of Pascal's Mugging is the following:

A robber approaches you, promising you lots of utility in exchange for giving him $1. The probability that he is not lying is extremely low, yet the promised utility is extremely high, so you give him the $1.

The above reasoning has one trivial flaw. How do you know that there isn't a person testing your virtues, who would actually give you lots of utility if you refused to give this person $1? What makes you think that receiving lots of utility when you succumb to the robber is more probable than receiving lots of utility if you stand up to him?
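
A minimal arithmetic sketch of that symmetry, with placeholder probabilities of my own; the point is only that with no evidence favoring either tiny-probability story, the astronomical terms cancel and the ordinary $1 decides the matter:

```python
BIG = 3.0 ** 64            # stand-in for the promised utility

p_reward_if_pay = 1e-20    # the robber is honest and rewards compliance
p_reward_if_refuse = 1e-20 # a hidden tester rewards standing up to him

ev_pay = p_reward_if_pay * BIG - 1.0  # minus the dollar you hand over
ev_refuse = p_reward_if_refuse * BIG

# Symmetric tiny-probability stories cancel; keeping the dollar wins:
print(ev_refuse - ev_pay)  # ~1.0
```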

1 comment

Comments sorted by top scores.

comment by Shmi (shminux) · 2022-11-06T19:37:02.969Z · LW(p) · GW(p)

How do you get a general agreement on when a problem of that type is "completely solved"?