Posts

Do policy makers frequently underestimate non-human threats (ie: epidemic, natural disasters, climate change) when compared to threats that are human in nature (ie: military conflict, economic competition, Cold War)? 2019-06-15T16:29:47.755Z · score: 5 (3 votes)
Why the empirical results of the Traveller’s Dilemma deviate strongly away from the Nash Equilibrium and seems to be close to the social optimum? 2019-05-23T12:36:47.564Z · score: 7 (4 votes)
My poorly done attempt of formulating a game with incomplete information. 2019-04-29T04:36:15.784Z · score: 3 (5 votes)
Could waste heat become an environment problem in the future (centuries)? 2019-04-03T14:48:05.257Z · score: 8 (5 votes)
Parable of the flooding mountain range 2019-03-29T15:07:02.265Z · score: 10 (8 votes)

Comments

Comment by rorschak on Do policy makers frequently underestimate non-human threats (ie: epidemic, natural disasters, climate change) when compared to threats that are human in nature (ie: military conflict, economic competition, Cold War)? · 2019-06-16T04:33:14.051Z · score: 1 (1 votes) · LW · GW

I like your last point a lot. Does it mean that governments/institutions are more interested in protecting the systems they are embedded in than in protecting their constituents? That indeed seems possible and could explain this situation.

I still wonder whether such a thing happens on an individual level as well; that could help shed some light.

Comment by rorschak on Why the empirical results of the Traveller’s Dilemma deviate strongly away from the Nash Equilibrium and seems to be close to the social optimum? · 2019-05-29T06:01:16.550Z · score: 1 (1 votes) · LW · GW

My assumption is that promises are "vague": playing either $99 or $100 fulfils the promise of making a high claim close to $100, so there is no incentive to break it.
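
A quick sanity check of that claim in the standard TD (claims $2-$100, bonus/penalty $2; the exact parameterisation is my assumption): if the other player keeps the vague promise by claiming $100, the best response is $99, which still keeps the promise.

```python
# Standard Traveller's Dilemma payoff (claims $2..$100, bonus/penalty $2).
def payoff(mine, theirs, bonus=2):
    if mine < theirs:
        return mine + bonus    # lower claimant gets their claim plus the bonus
    if mine > theirs:
        return theirs - bonus  # higher claimant gets the lower claim minus the penalty
    return mine                # equal claims: both get the claim

# Best response to the other player claiming $100:
br = max(range(2, 101), key=lambda c: payoff(c, 100))
print(br, payoff(br, 100))  # 99 101 -> still within the vague "close to $100" promise
```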

I think the vagueness is what stops the race to the bottom in TD, in contrast to the dollar auction, where every bid can be outmatched by a tiny increment without any immediate risk of going overboard.

I do think I overcomplicated the matter to avoid modifying the payoff matrix.

Comment by rorschak on Why the empirical results of the Traveller’s Dilemma deviate strongly away from the Nash Equilibrium and seems to be close to the social optimum? · 2019-05-28T09:29:56.797Z · score: 1 (1 votes) · LW · GW

"breaking a promise" or "keeping a promise" has no intrinsic utilities here.

What I am stating is that, under this formulation, if the other player believes your promise and plays the best response to it, then your best response is to keep the promise.

Comment by rorschak on Why the empirical results of the Traveller’s Dilemma deviate strongly away from the Nash Equilibrium and seems to be close to the social optimum? · 2019-05-27T04:38:11.786Z · score: 1 (1 votes) · LW · GW

" in this case, "trust" is equivalent to changing the payout structure to include points for self-image and social cohesion "

I guess I'm just trying to model trust in TD without changing the payoff matrix. The payoff matrix of the "vague" TD works to promote trust: a player has no incentive to break a promise.

Comment by rorschak on Why the empirical results of the Traveller’s Dilemma deviate strongly away from the Nash Equilibrium and seems to be close to the social optimum? · 2019-05-27T04:34:59.663Z · score: 1 (1 votes) · LW · GW

This is true. The issue is that the Nash equilibrium formulation of TD predicts that everyone else will claim $2, which is counter-intuitive and does not match empirical findings.

I'm trying to convince myself that the NE formulation in TD is not entirely rational.

Comment by rorschak on Why the empirical results of the Traveller’s Dilemma deviate strongly away from the Nash Equilibrium and seems to be close to the social optimum? · 2019-05-27T04:32:18.729Z · score: 1 (1 votes) · LW · GW

If Alice claims close to $100 (say, $80), Bob gets a higher payoff by claiming $100 (getting $78) than by claiming $2 (getting $4).
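
Checking those numbers under the standard payoff rule (a minimal sketch; it also shows Bob's actual best response to $80 is $79):

```python
def payoff(mine, theirs, bonus=2):
    if mine < theirs:  return mine + bonus    # lower claim wins the bonus
    if mine > theirs:  return theirs - bonus  # higher claim pays the penalty
    return mine

print(payoff(100, 80))  # 78: Bob claims $100 against Alice's $80
print(payoff(2, 80))    # 4:  Bob claims $2 instead
print(max(range(2, 101), key=lambda c: payoff(c, 80)))  # 79: Bob's best response
```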

Comment by rorschak on Could waste heat become an environment problem in the future (centuries)? · 2019-05-23T11:34:10.675Z · score: 1 (1 votes) · LW · GW

I would assume Kelvin users to outnumber Fahrenheit users on LW.

Comment by rorschak on My poorly done attempt of formulating a game with incomplete information. · 2019-05-05T14:45:06.731Z · score: 1 (1 votes) · LW · GW

I think we should still keep b even with the iterations, since I made the assumption that "degree of loyalty" is a property of S, not entirely the outcome of rational game-playing.

(I still assume S is rational apart from having b in his payoffs.)

Otherwise those kinds of tests probably make little sense.

I also wonder what happens if M doesn't know the repulsiveness of the test for certain, only a distribution over it (e.g. the CIA only knows that on average killing your spouse is pretty repulsive, except this lady here really hates her husband, oops). Could that make a large impact?

I guess I was only trying to figure out whether this "repulsive loyalty test" story that seems to exist in history/mythology/real life in a few different cultures has any basis in logic.

Comment by rorschak on Nash equilibriums can be arbitrarily bad · 2019-05-03T11:32:17.686Z · score: 3 (2 votes) · LW · GW

Thanks, I had forgotten the proof before replying to your comment.

You are correct that in PD (D,C) is Pareto optimal, and so the Nash equilibrium (D,D) is much closer to a Pareto outcome than the Nash equilibrium (0,0) of TD is to its Pareto outcomes (somewhere around each person getting a million pounds, give or take a cent).

It is still strange to see a game with only one round and no collusion land pretty close to the optimum, while its repeated version (the dollar auction) seems to deviate badly from the Pareto outcome.

Comment by rorschak on My poorly done attempt of formulating a game with incomplete information. · 2019-05-03T11:05:39.500Z · score: 2 (2 votes) · LW · GW

Thanks. The final result is somewhat surprising; perhaps it's a quirk of my construction.

Setting r to be higher than v does remove the "undercover agents" that have practically zero obedience, but I didn't realise it's the optimal choice for M.

Comment by rorschak on Nash equilibriums can be arbitrarily bad · 2019-05-03T05:32:42.214Z · score: 2 (2 votes) · LW · GW

I think "everybody launches all nukes" might not be a Nash Equilibrium.

We can argue that once one side has launched its nukes, the other side does not necessarily have an incentive to retaliate: they won't really care whether the enemy gets nuked once they themselves are nuked, and they may even have an incentive not to launch, to prevent the "everybody dies" outcome, which is arguably worse even for someone who is about to die.
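
A toy 2x2 version of that argument, with ordinal payoffs that are purely my assumptions (being nuked is bad, "everybody dies" is assumed strictly worse): under these payoffs, mutual launch is not a Nash equilibrium.

```python
import itertools

# (our move, their move) -> (our payoff, their payoff); all numbers assumed.
payoffs = {
    ("hold",   "hold"):   (  0,   0),   # peace
    ("hold",   "launch"): (-10,  -5),   # we get nuked; enemy survives
    ("launch", "hold"):   ( -5, -10),
    ("launch", "launch"): (-11, -11),   # everybody dies: assumed strictly worst
}
moves = ("hold", "launch")

def is_nash(a, b):
    ours_ok = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in moves)
    theirs_ok = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in moves)
    return ours_ok and theirs_ok

for a, b in itertools.product(moves, repeat=2):
    print(a, b, is_nash(a, b))
# Only (hold, hold) comes out as an equilibrium: given the enemy launches,
# "hold" (-10) beats "launch" (-11), so retaliation is not a best response.
```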

Comment by rorschak on Nash equilibriums can be arbitrarily bad · 2019-05-03T05:28:43.091Z · score: -2 (2 votes) · LW · GW

I haven't found any information yet, but I suspect there is a mixed Nash equilibrium somewhere in TD.
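
Out of curiosity, here is a brute-force support enumeration on a tiny TD (claims $2-$4, bonus $2; the downsizing is my own). It prints only the pure ($2, $2) equilibrium, with no symmetric mixed equilibrium in this small case, though that of course doesn't settle the full game.

```python
import itertools
import numpy as np

claims = [2, 3, 4]                       # a downsized TD, bonus/penalty $2
def payoff(i, j):
    if i < j: return i + 2
    if i > j: return j - 2
    return i

A = np.array([[payoff(i, j) for j in claims] for i in claims], dtype=float)
n = len(claims)

# Symmetric support enumeration: on the support every pure strategy must earn
# the same expected payoff v against the mixture, and nothing outside may beat v.
for k in range(1, n + 1):
    for support in itertools.combinations(range(n), k):
        M = np.zeros((k + 1, k + 1))
        M[:k, :k] = A[np.ix_(support, support)]
        M[:k, k] = -1.0                  # unknown common payoff v
        M[k, :k] = 1.0                   # probabilities sum to 1
        rhs = np.zeros(k + 1); rhs[k] = 1.0
        try:
            sol = np.linalg.solve(M, rhs)
        except np.linalg.LinAlgError:
            continue
        p, v = sol[:k], sol[k]
        if (p < -1e-9).any():
            continue                     # not a valid probability mixture
        full = np.zeros(n); full[list(support)] = p
        if (A @ full <= v + 1e-9).all(): # no profitable pure deviation
            print("equilibrium:", dict(zip([claims[s] for s in support], p.round(3))))
```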

Comment by rorschak on My poorly done attempt of formulating a game with incomplete information. · 2019-05-02T11:27:58.762Z · score: 1 (1 votes) · LW · GW

Spoilers? That sounds intriguing, I'll wait :)

Comment by rorschak on Nash equilibriums can be arbitrarily bad · 2019-05-02T09:54:27.162Z · score: 4 (3 votes) · LW · GW

It is interesting that experimental results of the Traveller's Dilemma deviate strongly from the Nash equilibrium, and in fact land quite close to the Pareto optimal solution.

This is pretty strange for a game that has only one round and no collusion (you'd expect it to end like the Prisoner's Dilemma, no?)

It is rather different from what we would see in the dollar auction, which has no Nash equilibrium but always deviates far from the Pareto optimal solution.

I suspect that this game being one-round-only actually improves the Pareto efficiency of its outcomes:

Maybe if both participants were allowed to change their claims after seeing each other's, they WOULD go into a downward spiral, one cent at a time, until they reach zero or one player gives up at some point and a truce is reached, just as dollar auctions always stop at some point.

When there is only one round, however, there is no way for a player to make their claim exactly 1 or 2 cents below the other player's, and claiming any less than that is suboptimal compared to claiming more than the other player. So perhaps there is an incentive against lowering one's claim all the way to 0 before the game even starts, just as no one would bid $1,000 in the dollar auction's first round.
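
A sketch of that intuition under the standard payoff rule (my parameterisation): if claims could be revised in response to each other, iterated best responses do spiral all the way down; in a single simultaneous round, none of those undercutting steps actually happens.

```python
def payoff(mine, theirs, bonus=2):
    if mine < theirs:  return mine + bonus
    if mine > theirs:  return theirs - bonus
    return mine

# If claims could be revised after seeing each other's, best responses spiral down:
claim = 100
while True:
    br = max(range(2, 101), key=lambda c: payoff(c, claim))
    if br == claim:
        break
    claim = br                 # undercut by $1 each round: 99, 98, ...
print(claim)                   # 2: the Nash claim, reached only via the spiral
```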

Comment by rorschak on [Answer] Why wasn't science invented in China? · 2019-04-27T14:40:42.739Z · score: 3 (2 votes) · LW · GW

I think there are economic factors at play, although they would be more subtle than a plain comparison of "alleged GDP per capita".

I recall that both China and the Middle East went through a process of "de-industrialisation" from the European High Middle Ages to the Early Modern period: essentially, both regions started substituting simple human labour for machines, causing cranes, water mills, etc. to become rarer over time.

And strangely enough, a study showed that while this was happening there was little difference in real wages between Western Europe and the Middle East (so the substitution of labour for capital was not due to low wages), and I guess China wouldn't be too different in this regard.

Why this happened is beyond me; I think susceptibility to nomadic raids/invasions was mentioned.

Comment by rorschak on Could waste heat become an environment problem in the future (centuries)? · 2019-04-05T11:57:38.106Z · score: 1 (1 votes) · LW · GW

This is an interesting study; it seems his numbers are not too far off what I plugged in as a placeholder (that our current energy consumption is within a couple of orders of magnitude of becoming climate-altering).

Though I'm not making sense of the nanobots yet haha
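
For what it's worth, a back-of-the-envelope version of that placeholder, with round numbers that are my own assumptions:

```python
earth_surface_m2 = 5.1e14   # Earth's surface area
world_power_W = 19e12       # ~19 TW global primary energy consumption (rough)
co2_forcing_W_m2 = 2.0      # rough present-day CO2 radiative forcing (W/m^2)

waste_heat = world_power_W / earth_surface_m2
print(f"waste heat: {waste_heat:.3f} W/m^2")          # ~0.037 W/m^2
print(f"ratio: {co2_forcing_W_m2 / waste_heat:.0f}x") # ~50x: within 2 orders of magnitude
```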

Comment by rorschak on Could waste heat become an environment problem in the future (centuries)? · 2019-04-04T05:49:45.971Z · score: 1 (1 votes) · LW · GW

Ah, thanks. So the equilibrium is more robust than I initially assumed; I didn't expect that.

So the issue won't be as pressing as climate change could be, although some kind of ceiling on energy consumption on Earth still exists...

Comment by rorschak on Parable of the flooding mountain range · 2019-04-03T15:48:22.960Z · score: 2 (2 votes) · LW · GW

Oh yes! This makes more sense now.

#humans has decreasing marginal returns, since the main concern for humanity is really the ability to recover, and while that increases with #humans, it is not linear.
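
One toy way to picture that, with an arbitrary saturating curve I made up purely for illustration:

```python
import math

def recovery_ability(n, scale=1e6):
    # Toy concave curve: recovery ability saturates as the number of survivors grows.
    return 1 - math.exp(-n / scale)

for n in (1e5, 1e6, 1e7):
    print(f"{int(n):>8} survivors -> {recovery_ability(n):.3f}")
# 100000 -> 0.095, 1000000 -> 0.632, 10000000 -> 1.000: sharply diminishing returns
```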

I do think individuals have *some* concern about whether humanity in general will survive: since all humans share *some* genes with each individual, the survival and propagation of strangers can still have some utility for an individual human (I'm not sure where I am going here...)

Comment by rorschak on Parable of the flooding mountain range · 2019-03-30T03:01:06.812Z · score: 1 (1 votes) · LW · GW

Ah, I never thought about this being a secretary problem.

Well, initially I used it as an analogy for evolution and didn't think too much about memorising/backtracking.

Oh wait, if the mountaineer has memory of each peak he saw, then he should go back to one of the high peaks he encountered before (assuming the flood hasn't mopped the floor yet, which is a given since he is still exploring). There are probably no irrecoverable rejections here, unlike in the secretary problem.

The second choice is a strange one. I think the entire group taking the best chance on one peak ALSO maximises the expected number of survivors, together with maximising each individual's chance of survival.
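
A small illustration of the trade-off (the numbers are arbitrary, and I assume each peak either survives the flood as a whole or doesn't):

```python
p_best, p_other, n = 0.6, 0.5, 10   # two peaks, ten mountaineers (arbitrary numbers)

# Everyone on the best peak vs. a 5-5 split:
exp_all = n * p_best                                  # 6.0 expected survivors
exp_split = (n / 2) * p_best + (n / 2) * p_other      # 5.5 expected survivors

p_someone_all = p_best                                # 0.6: one shared fate
p_someone_split = 1 - (1 - p_best) * (1 - p_other)    # 0.8: two independent chances

print(exp_all, exp_split)              # grouping wins on expected survivors...
print(p_someone_all, p_someone_split)  # ...but splitting wins on "someone survives"
```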

But it still seems that "a higher chance that someone survives" is something we want to include in the utility calculation when humanity makes choices in the face of a catastrophe.

For example, suppose a coming disaster gives us two choices:

(a): 50% chance that humans will go extinct, 50% chance nothing happens.

(b): 90% chance that 80% of humans will die.

The expected number of deaths under (b) significantly exceeds that under (a), and (a) also has a greater expected number of survivors. But I guess many will agree that (b) is the better option.
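
Spelling out the arithmetic (population normalised to 1):

```python
N = 1.0                          # normalise the current population to 1

exp_deaths_a = 0.5 * N           # (a): 50% chance everyone dies, 50% chance no one does
exp_deaths_b = 0.9 * 0.8 * N     # (b): 90% chance that 80% die

print(exp_deaths_a, exp_deaths_b)         # 0.5 vs 0.72: (b) expects more deaths
print("P(extinction):", 0.5, "vs", 0.0)   # but only (a) risks losing everyone
```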

Comment by rorschak on Parable of the flooding mountain range · 2019-03-30T02:39:42.917Z · score: 2 (2 votes) · LW · GW

Yes, those two pieces can change the situation dramatically (I have tried writing another parable that includes them, but found it a bit difficult).

I'm pondering what the best strategy with communication would be. Initially I thought I could spread them out, have each mountaineer know the location/height of the other mountaineers within a given radius (significantly larger than the visibility in the fog), and add that information to their "move towards the greatest height" algorithm. That might work, but I cannot rigorously show how useful it would be.

Regardless, I think evolution can't do much better than the third scenario: it doesn't seem to backtrack and most likely doesn't communicate.

There is also the fact that my analogy fails to consider that the "environment" changes over time, so the "mountain landscape" will not stay the same when you come back to a place after leaving it. This probably prevents backtracking, but doesn't change the outcome that you'd most likely be stuck on a hilltop that isn't optimal.