Note that a slightly differently worded problem gives the intuitive result:
A_k is the event "I roll a die k times, and it ends with 66, with no earlier 66 sequence".
B_k is the event "I roll a die k times, and it ends with a 6, with one and only one 6 before that (but not necessarily on the roll just before the end: 16236 works)".
C_k is the event "I roll a die k times, and I only get even numbers".
In this case we do have the intuitive result (which is, I think, how most mathematicians intuitively translate this problem):
Σ[k * P(A_k|C_k)] > Σ[k * P(B_k|C_k)]
Now the question is: why aren't the two formulations equivalent? How would you write "expected number of rolls" more formally, in a way that would not yield the above formula, and would reproduce the numbers of your Python program?
(This is what I hate about probability theory: slightly differently worded problems, seemingly equivalent, yield completely different results for no obvious reason.)
Also, the difference between the two processes is not small:
Expected rolls until two 6s in a row (given all even): 2.725588
Expected rolls until second 6 (given all even): 2.999517
vs (n = 10 million):
k P(A_k|C_k) P(B_k|C_k)
-------------------------
1 0.000000 0.000000
2 0.111505 0.111505
3 0.074206 0.148227
4 0.074719 0.148097
5 0.066254 0.130536
6 0.060060 0.108174
7 0.053807 0.086706
8 0.049360 0.067133
9 0.046698 0.050944
10 0.040364 0.038915
11 0.038683 0.029835
12 0.030691 0.024297
13 0.034653 0.011551
14 0.036450 0.014263
15 0.024138 0.006897
16 0.007092 0.007092
17 0.012658 0.000000
18 0.043478 0.000000
19 0.000000 0.000000
20 0.000000 0.000000
Expected values:
Σ[k * P(A_k|C_k)] = 6.259532
Σ[k * P(B_k|C_k)] = 5.739979
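For what it's worth, here is one way to make the difference formal (this framing is mine, not the original post's): conditioning on C_k separately for each k turns every roll into a uniform draw from {2, 4, 6}, so the untruncated sum Σ[k · P(A_k|C_k)] is exactly the expected waiting time for "66" with a fair three-sided die, which is 12 (the table above is truncated at k = 20 with a noisy tail, hence a much smaller sum). The first formulation instead conditions the whole trajectory at once, and gives ≈ 2.73. A sketch (function names are mine):

```python
import random

rng = random.Random(0)

def formulation_1(n=500_000):
    """Expected rolls until '66', conditioning the whole run at once:
    any run containing an odd roll before the first 66 is discarded."""
    total = kept = 0
    for _ in range(n):
        prev, t = 0, 0
        while True:
            r = rng.randint(1, 6)
            t += 1
            if r % 2 == 1:
                t = 0  # an odd roll appeared: the conditioning event fails
                break
            if prev == 6 and r == 6:
                break
            prev = r
        if t:
            total += t
            kept += 1
    return total / kept  # should come out near 2.73

def formulation_2(n=200_000):
    """Untruncated sum over k of k * P(A_k | C_k): conditioning per k makes
    each roll uniform on {2, 4, 6}, so this is just the mean waiting time
    for '66' with a fair three-sided die (1/p + 1/p^2 = 12 for p = 1/3)."""
    total = 0
    for _ in range(n):
        prev, t = 0, 0
        while True:
            r = rng.choice((2, 4, 6))
            t += 1
            if prev == 6 and r == 6:
                break
            prev = r
        total += t
    return total / n  # should come out near 12
```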
I get a strong "our physical model says that spherical cows can move with way less energy by just rolling, thereby proving that real cows are stupid when deciding to walk" vibe here.
Loss aversion is real, and is not especially irrational. It’s simply that your model is way too simplistic to properly take it into account.
If I have $100 lying around, I am just not going to keep it around "just in case some psychology researcher offers me a bet". I am going to split it across roughly three baskets of money: spending, savings, and an emergency fund. The policy of the emergency fund is "as small as possible, but not smaller". In other words: adding to the balance of that emergency fund is low added util, but taking from it is high (negative) util.
The loss from an unexpected bet is mostly going to be taken from the emergency fund (because I can't take back previous spending, and I can't easily take from my savings). On the positive side, any gain will be put into spending or savings.
So the "ratio" you’re measuring is not a sample from a smooth, static "utility of global wealth". I am constantly adjusting my wealth assignment such that, by design and constraint, yes, the disutility of loss is brutal. If I weren’t, I would just be leaving util lying on the ground, so to speak (I could spend or save).
You want to model this?
Ignore spending. Start with a utility function of the form U(W_savings, W_emergency_fund). Notice that dU/dW_emergency_fund is large on the left (taking from an underfunded emergency fund costs a lot of util). Notice that your bet is 1/2 U(W_savings + 110, W_emergency_fund) + 1/2 U(W_savings, W_emergency_fund - 100).
I have not tested, but I’m ready to bet (heh !) that it is relatively trivial to construct a reasonable utility function that says no to the first bet and yes to the second if you follow this model and those assumptions about the utility function.
(There is a slight difficulty here: assuming that my current emergency fund is at its target level, revealed preference shows that obviously dU/dW_savings > dU/dW_emergency_fund. An economist would say that, obviously, U is maximized where dU/dW_savings = dU/dW_emergency_fund.)
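A minimal sketch of that construction, assuming a utility that is linear in savings with a steep quadratic penalty when the emergency fund dips below target. The functional form, the $1000 target, and all constants are mine, purely for illustration; and since the original post's second bet isn't restated here, I compare the same 50/50 lose-$100/win-$110 odds charged to the emergency fund versus charged to savings:

```python
def utility(savings, emergency, target=1000.0, steepness=0.05):
    """Linear in savings, with a steep quadratic penalty once the
    emergency fund falls below its target ("as small as possible,
    but not smaller"). All constants are illustrative."""
    shortfall = max(0.0, target - emergency)
    return savings - steepness * shortfall ** 2

def expected_utility(loss_from, savings=5000.0, emergency=1000.0):
    """50/50 bet: win $110 (always into savings) vs lose $100, taken
    from the basket named by `loss_from`."""
    win = utility(savings + 110, emergency)
    if loss_from == "emergency":
        lose = utility(savings, emergency - 100)
    else:
        lose = utility(savings - 100, emergency)
    return 0.5 * win + 0.5 * lose

# Status quo utility: utility(5000, 1000) = 5000.
# Loss charged to the emergency fund: EU = 4805 -> reject the bet.
# Same odds charged to savings:       EU = 5005 -> accept the bet.
```

Same odds, same global wealth, opposite decision: the "ratio" depends on which basket absorbs the loss, not on a single smooth utility of total wealth.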
From what I can tell, the far right in France supports environmentalism.[1]
Yes (with some minor caveats). It is also pro-choice on abortion (https://www.lemonde.fr/politique/article/2022/11/22/sur-l-ivg-marine-le-pen-change-de-position-et-propose-de-constitutionnaliser-la-loi-veil_6151030_823448.html) (again with some minor caveats), and pro-gun-control (can't find a link for that, sorry; the truth is that nobody argues the pro-gun-rights side at all, so gun control is an across-the-board consensus).
Environmentalism is not partisan in many other countries, including in highly partisan countries like South Korea or France
French here. I think diving into details will shed some light.
Our mainstream right is roughly around your Joe Biden. Maybe a bit further right, but not much. Our mainstream left is roughly around your Bernie Sanders. We just don't have your Republicans in the mainstream. And it turns out that there's not much partisanship over climate change between Biden and Sanders.
This can be observed on other topics. There is no big ideological gap in gun control or abortion in France, because the pro-gun-rights and pro-life positions are just not represented here at all.
I’m not sure how you measure "highly partisan", but I don't think it captures the correct picture, namely the ideological gap between mainstream right and mainstream left.
I think you're trying to point towards multimodal distributions?
If you can decompose P(X) as P(X) = P(X|H_1)P(H_1) + ... + P(X|H_n)P(H_n), and the P(X|H_i) are nice unimodal distributions (like normal distributions), you can end up with a multimodal distribution (when the component modes are well separated).
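A minimal sketch of that decomposition (the weights, means and variances are made up): mixing two well-separated normals already gives two modes.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x):
    """P(X) = P(X|H1)P(H1) + P(X|H2)P(H2): each component is a nice
    unimodal normal, but the mixture is bimodal."""
    return 0.6 * normal_pdf(x, -3.0, 1.0) + 0.4 * normal_pdf(x, 3.0, 1.0)

# The mixture density peaks near -3 and +3, with a valley near 0.
```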
But what can you do with a singular story? Twain and all the witnesses are long dead, and no new observations can be made. All we have is his anecdote and whatever confirmatory recollections may have been recorded by the others in his story.
It is, in principle, reproducible and testable. Ask every husband, wife, sibling, or parent of a soldier involved in an ongoing conflict (such as the Russia/Ukraine or Israel/Palestine wars) to record those "sentiments of dread for a loved one". See if they match recorded casualties.
if you know that the random variable D is Monday
Yes, that's kind of my point. There are two wildly different problems that look the same on the surface, but they are not. One gives the answer of your post, the other gives 1/3. I suspect that your initial confusion is your brain trying to interpret the first problem as an instance of the second. Mine sure did, initially.
In the first one, you go and interview 1000 fathers of two children. You ask them "Do you have at least one boy born on a Monday?". If they answer yes, you then ask them "Do you have two boys?". You want the probability that the second answer is yes, conditioning on the event that the first one is yes. The answer is the one from your post.
In the second one, you send a survey to 1000 fathers of two children. It reads something like this: "1. Do you have at least one boy? 2. Give the weekday of birth of the boy. If you have two, pick either one. 3. Do you have two boys?". Now the question is: conditioning on the event that the first answer is yes, and on the random variable given by the second answer, what is the probability that the third answer is yes? The answer is 1/3.
My main point is that neither answer is counter-intuitive. In the first problem, conditioning on Monday acts like partially selecting a specific child, as in always picking the youngest one (in the sentence "I have two children, and the youngest one is a boy", the probability of two boys is 1/2). With low n, the specificity is low: you're close to the problem without selecting a specific child, and get 1/3. With large n, the specificity is high: you're close to the problem of selecting a specific child (e.g. the youngest one), and get 1/2. In the second problem, the "born on a Monday" piece of information is indeed irrelevant and gets factored out.
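The two protocols can be simulated directly (a sketch; the function name and sample size are mine). The first should land near 13/27 ≈ 0.481, the second near 1/3:

```python
import random

def boy_monday(n=400_000, seed=0):
    """Compare the two survey protocols. Sexes and weekdays are
    i.i.d. uniform; day 0 stands for Monday."""
    rng = random.Random(seed)
    p1 = [0, 0]  # [families answering yes, of which two-boy families]
    p2 = [0, 0]
    for _ in range(n):
        sexes = (rng.random() < 0.5, rng.random() < 0.5)  # True = boy
        days = (rng.randrange(7), rng.randrange(7))
        two_boys = all(sexes)
        # Protocol 1: "Do you have at least one boy born on a Monday?"
        if any(s and d == 0 for s, d in zip(sexes, days)):
            p1[0] += 1
            p1[1] += two_boys
        # Protocol 2: "Do you have a boy? If so, give the weekday of
        # (any) one boy." Condition on the reported day being Monday.
        boys = [i for i in range(2) if sexes[i]]
        if boys and days[rng.choice(boys)] == 0:
            p2[0] += 1
            p2[1] += two_boys
    return p1[1] / p1[0], p2[1] / p2[0]
```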
I don’t think you’re modeling your problem correctly, unless I misunderstood the question you’re trying to answer. You have those following random variables :
X_1 is Bernoulli, first child is a boy
X_2 is Bernoulli, second child is a boy
Y_1 is uniform, weekday of birth of the first child
Y_2 is uniform, weekday of birth of the second child
D is a random variable which corresponds to the weekday in the sentence "one of them is a boy, born on a (D)". There are many ways to construct such a variable, but we only require that if X_1=1 or X_2=1, then D=Y_1 or D=Y_2, and that D=Y_i implies X_i=1.
Then what you're looking for is not P(X_1=1, X_2=1 | (X_1=1, Y_1=Monday) or (X_2=1, Y_2=Monday)) (which, indeed, is not 1/3), but P(X_1=1, X_2=1 | ((X_1=1, D=Y_1) or (X_2=1, D=Y_2)) and D=Monday). This is still 1/3, as illustrated by this Python snippet (I'm too lazy to properly demonstrate it formally): https://gist.github.com/sloonz/faf3565c3ddf059960807ac0e2223200
There was a similar paradox presented on old LessWrong. If someone can manage to find it (a quick Google search returned nothing, but I may have misremembered the exact terms of the problem…), the solution is presented much better there:
Alice, Bob and Charlie are accused of treason. To make an example, one of them, chosen at random, will be executed tomorrow. Alice asks for a guard and gives him a letter with these instructions: "At least one of Bob or Charlie will not be executed. Please give him this letter. If I am to be executed and both live, give the letter to either one of them". The guard leaves, returns, and tells Alice: "I gave the letter to Bob".
Alice is unable to sleep that night: "Before doing this, I had a 1/3 chance of being executed. Now that it's either me or Charlie, I have a 1/2 chance of being executed. I shouldn't have written that letter".
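A quick simulation of that setup (a sketch; the names are mine) shows where Alice's reasoning goes wrong: when Alice is the one to be executed, the guard hands the letter to Bob only half the time, which exactly offsets the update and leaves her probability at 1/3.

```python
import random

def three_prisoners(n=300_000, seed=1):
    """One of three prisoners, chosen uniformly, is executed. The guard
    delivers Alice's letter to whichever of Bob/Charlie will live (if
    both live, to either one at random). Estimate
    P(Alice executed | letter went to Bob)."""
    rng = random.Random(seed)
    alice_dies = letter_to_bob = 0
    for _ in range(n):
        executed = rng.choice(("Alice", "Bob", "Charlie"))
        if executed == "Bob":
            recipient = "Charlie"
        elif executed == "Charlie":
            recipient = "Bob"
        else:  # Alice is executed: both live, guard picks either one
            recipient = rng.choice(("Bob", "Charlie"))
        if recipient == "Bob":
            letter_to_bob += 1
            alice_dies += executed == "Alice"
    return alice_dies / letter_to_bob  # should come out near 1/3
```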