Posts

Do policy makers frequently underestimate non-human threats (ie: epidemic, natural disasters, climate change) when compared to threats that are human in nature (ie: military conflict, economic competition, Cold War)? 2019-06-15T16:29:47.755Z
Why the empirical results of the Traveller’s Dilemma deviate strongly away from the Nash Equilibrium and seems to be close to the social optimum? 2019-05-23T12:36:47.564Z
My poorly done attempt of formulating a game with incomplete information. 2019-04-29T04:36:15.784Z
Could waste heat become an environment problem in the future (centuries)? 2019-04-03T14:48:05.257Z
Parable of the flooding mountain range 2019-03-29T15:07:02.265Z

Comments

Comment by RocksBasil (RorscHak) on Will the growing deer prion epidemic spread to humans? Why not? · 2023-09-03T13:54:23.323Z · LW · GW

I was thinking of iatrogenic transmissions, yeah (and prions have been a long-term psychological fear of mine, too... so I've perhaps crawled through more publicly available information about prions than a normal person would).

I wonder if there are any instances of FFI transmitted through the iatrogenic pathway, whether such cases could be distinguished from typical CJD, and whether iatrogenic prions could become a significant issue for healthcare given the difficulty of sterilising prions (more prion disease cases due to an aging population could mean more contaminated medical equipment, and the possible popularisation of brain-computer interfaces might give us some problems too).

Maybe the sample size is too small for us to know.

Comment by RocksBasil (RorscHak) on Will the growing deer prion epidemic spread to humans? Why not? · 2023-06-28T15:29:59.884Z · LW · GW

How much do we know about the presence of prion diseases in other animals we frequently consume? 

A quick search shows that even fish have some variant of the prion protein, so perhaps all vertebrates pose a theoretical risk of carrying a prion disease, although the species barrier will likely be too high for prions of non-mammal origin.

I'm quite concerned about pigs.

Apparently pigs are considered to be prion resistant as no naturally occurring prion diseases among pigs have been identified, but it is possible to infect them with some prion strains under laboratory conditions.

A prion disease in pigs could be very bad for two reasons:

1: Pigs are often fed leftover food from human consumption, which could become a potent vector for prion transmission between pigs.

2: Pork brain, spinal cord, tongue, etc. are frequently consumed in some parts of the world, so if such an illness spread between pigs, some humans would be exposed to large amounts of prion protein.

Fortunately, nothing of this sort has happened with pigs yet, and we have been consuming pork brain (and feeding pigs leftover food) for centuries without known issues (though I doubt pre-modern people would have noticed the pattern, since the incubation period can be very long), so maybe pigs are resistant (enough) to prions that pork is very safe.

Comment by RocksBasil (RorscHak) on Will the growing deer prion epidemic spread to humans? Why not? · 2023-06-28T14:25:31.039Z · LW · GW

"Infectious" means "transmissible between people". As the name suggests, fatal familial insomnia is a genetic condition. (FFI and the others listed are also prion diseases - the prion just emerges on its own without a source prion and no part of the disease is contagious. This is an interesting trait of prions that could not happen with, say, a disease caused by a virus.)

Can someone catch FFI from coming into contact with the neural tissues of a patient with FFI? 

I suspect the FFI mutation causes the patient's body to create prions, but can those prions lead to illness in a person without the FFI gene? If so, then FFI would still be "infectious" in some sense, I suppose.

Comment by RocksBasil (RorscHak) on Do policy makers frequently underestimate non-human threats (ie: epidemic, natural disasters, climate change) when compared to threats that are human in nature (ie: military conflict, economic competition, Cold War)? · 2019-06-16T04:33:14.051Z · LW · GW

I like your last point a lot. Does it mean that governments/institutions are more interested in protecting the systems they are part of than in protecting their constituents? That does seem possible, and it could explain this situation.

I still wonder whether the same thing happens on an individual level as well, which might help shed some light on the question.

Comment by RocksBasil (RorscHak) on Why the empirical results of the Traveller’s Dilemma deviate strongly away from the Nash Equilibrium and seems to be close to the social optimum? · 2019-05-29T06:01:16.550Z · LW · GW

My assumption is that promises are "vague": playing either $99 or $100 fulfils the promise of making a high claim close to $100, so there is no incentive to break it.

I think the vagueness stops the race to the bottom in TD, compared to the dollar auction in which every bid can be outmatched by a tiny step without risking going overboard immediately.

I do think I overcomplicated the matter to avoid modifying the payoff matrix.

Comment by RocksBasil (RorscHak) on Why the empirical results of the Traveller’s Dilemma deviate strongly away from the Nash Equilibrium and seems to be close to the social optimum? · 2019-05-28T09:29:56.797Z · LW · GW

"breaking a promise" or "keeping a promise" has no intrinsic utilities here.

What I state is that under this formulation, if the other player believes your promise and plays the best response to your promise, your best response is to keep the promise.

Comment by RocksBasil (RorscHak) on Why the empirical results of the Traveller’s Dilemma deviate strongly away from the Nash Equilibrium and seems to be close to the social optimum? · 2019-05-27T04:38:11.786Z · LW · GW

" in this case, "trust" is equivalent to changing the payout structure to include points for self-image and social cohesion "

I guess I'm just trying to model trust in TD without changing the payoff matrix. The payoff matrix of the "vague" TD works to promote trust: a player has no incentive to break a promise.

Comment by RocksBasil (RorscHak) on Why the empirical results of the Traveller’s Dilemma deviate strongly away from the Nash Equilibrium and seems to be close to the social optimum? · 2019-05-27T04:34:59.663Z · LW · GW

This is true. The issue is that the Nash Equilibrium formulation of TD predicts that everyone will bid $2, which is counter-intuitive and does not match empirical findings.

I'm trying to convince myself that the NE formulation in TD is not entirely rational.

Comment by RocksBasil (RorscHak) on Why the empirical results of the Traveller’s Dilemma deviate strongly away from the Nash Equilibrium and seems to be close to the social optimum? · 2019-05-27T04:32:18.729Z · LW · GW

If Alice claims close to $100 (say, $80), Bob gets a higher payoff by claiming $100 (getting $78) than by claiming $2 (getting $4).
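
For concreteness, a minimal sketch of this payoff check, assuming the standard traveller's dilemma rules (claims between $2 and $100; the lower claim is paid out, with a $2 bonus to the lower claimant and a $2 penalty to the higher one):

```python
# Minimal payoff check for the Traveller's Dilemma (assumed rules: claims 2..100,
# bonus/penalty of 2; the lower claim sets the base payment for both players).

def td_payoff(my_claim, other_claim, bonus=2):
    """My payoff given both claims."""
    if my_claim < other_claim:
        return my_claim + bonus      # I claimed less: my claim plus the bonus
    if my_claim > other_claim:
        return other_claim - bonus   # I claimed more: their claim minus the penalty
    return my_claim                  # equal claims: both get the claim

alice = 80
print(td_payoff(100, alice))  # Bob claims $100 -> $78
print(td_payoff(2, alice))    # Bob claims $2   -> $4

# Bob's exact best response to $80 is $79 (payoff $81), but the point stands:
# any high claim beats dropping all the way to $2.
print(max(range(2, 101), key=lambda c: td_payoff(c, alice)))  # 79
```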

Comment by RocksBasil (RorscHak) on Could waste heat become an environment problem in the future (centuries)? · 2019-05-23T11:34:10.675Z · LW · GW

I would assume Kelvin users to outnumber Fahrenheit users on LW.

Comment by RocksBasil (RorscHak) on My poorly done attempt of formulating a game with incomplete information. · 2019-05-05T14:45:06.731Z · LW · GW

I think we should still keep b even with the iterations, since I made the assumption that "degrees of loyalty" is a property of S, not entirely the outcome of rational game-playing.

(I still assume S is rational apart from having b in his payoffs.)

Otherwise those kinds of tests probably make little sense.

I also wonder what happens if M doesn't know the repulsiveness of the test for certain, only a distribution over it (ie: the CIA only knows that on average killing your spouse is pretty repulsive, except this lady here really hates her husband, oops). Could that make a large difference?

I guess I was only trying to figure out whether this "repulsive loyalty test" story that seems to exist in history/mythology/real life in a few different cultures has any basis in logic.

Comment by RocksBasil (RorscHak) on Nash equilibriums can be arbitrarily bad · 2019-05-03T11:32:17.686Z · LW · GW

Thanks, I had forgotten the proof before replying to your comment.

You are correct that in PD (D,C) is Pareto, and so the Nash Equilibrium (D,D) is much closer to a Pareto outcome than the Nash Equilibrium (0,0) of TD is to its Pareto outcomes (somewhere around each person getting a million pounds, give or take a cent).

It's still strange to see a game with only one round and no collusion land pretty close to the optimum, while its repeated version (the dollar auction) seems to deviate badly from the Pareto outcome.

Comment by RocksBasil (RorscHak) on My poorly done attempt of formulating a game with incomplete information. · 2019-05-03T11:05:39.500Z · LW · GW

Thanks. The final result is somewhat surprising; perhaps it's a quirk of my construction.

Setting r higher than v does remove the "undercover agents" that have practically 0 obedience, but I hadn't realised it's the optimal choice for M.

Comment by RocksBasil (RorscHak) on Nash equilibriums can be arbitrarily bad · 2019-05-03T05:32:42.214Z · LW · GW

I think "everybody launches all nukes" might not be a Nash Equilibrium.

We can argue that once one side has launched its nukes, the other side does not necessarily have an incentive to retaliate: they won't really care whether the enemy gets nuked after they themselves have been nuked, and they probably have an incentive not to launch, to prevent the "everybody dies" outcome, which can be argued to be negative even for someone who is about to die.

Comment by RocksBasil (RorscHak) on Nash equilibriums can be arbitrarily bad · 2019-05-03T05:28:43.091Z · LW · GW

I haven't found any information yet, but I suspect there is a mixed Nash somewhere in TD.

Comment by RocksBasil (RorscHak) on My poorly done attempt of formulating a game with incomplete information. · 2019-05-02T11:27:58.762Z · LW · GW

Spoilers? That sounds intriguing, I'll wait :)

Comment by RocksBasil (RorscHak) on Nash equilibriums can be arbitrarily bad · 2019-05-02T09:54:27.162Z · LW · GW

It is interesting that the experimental results of the traveller's dilemma deviate strongly from the Nash Equilibrium, and are in fact quite close to the Pareto Optimal Solution.

This is pretty strange for a game that has only one round and no collusion (you'd expect it to end up like the Prisoner's Dilemma, no?)

It is rather different from what we see in the dollar auction, which has no Nash Equilibrium but always deviates far from the Pareto optimal solution.

I suspect that this game being one-round-only actually improves the Pareto efficiency of its outcomes:

Maybe if both participants were allowed to change their bids after seeing each other's, they WOULD go into a downward spiral, one cent at a time, until they reach zero or one player gives up at some point with a truce, just like how dollar auctions always stop at some point.

When there is only one round, however, a player has no way to make their bid exactly 1 or 2 cents less than the other player's, and bidding any less than that is suboptimal compared to bidding more than the other player. So perhaps there is an incentive against lowering one's bid all the way to 0 before the game even starts, just as no one would bid $1000 in the dollar auction's first round.
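
To make that downward-spiral intuition concrete, here is a rough sketch assuming the standard traveller's dilemma rules (claims between $2 and $100, bonus/penalty of $2, in whole dollars rather than cents for simplicity). Mutual best responses undercut by exactly $1 per step, so the spiral only unfolds if the players can keep revising after seeing each other's claim:

```python
# Rough sketch of the iterated "downward spiral" (assumed rules: claims 2..100,
# bonus/penalty of 2; the lower claim sets the base payment for both players).

def td_payoff(my_claim, other_claim, bonus=2):
    if my_claim < other_claim:
        return my_claim + bonus      # lower claimant: own claim plus the bonus
    if my_claim > other_claim:
        return other_claim - bonus   # higher claimant: lower claim minus the penalty
    return my_claim                  # equal claims

def best_response(other_claim):
    """The claim that maximises my payoff against a fixed claim by the other player."""
    return max(range(2, 101), key=lambda c: td_payoff(c, other_claim))

# If players could keep revising, mutual best responses undercut by $1 per step:
claim, trajectory = 100, [100]
while best_response(claim) != claim:
    claim = best_response(claim)
    trajectory.append(claim)

print(trajectory[:4], "...", trajectory[-3:])  # [100, 99, 98, 97] ... [4, 3, 2]
# It takes ~98 rounds of mutual undercutting to reach (2, 2); with a single
# simultaneous bid there is no opportunity for this spiral to play out.
```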

Comment by RocksBasil (RorscHak) on [Answer] Why wasn't science invented in China? · 2019-04-27T14:40:42.739Z · LW · GW

I think there are economic factors at play, although they will be more subtle than a plain comparison of "alleged GDP per capita".

I recall that both China and the Middle East went through a process of "de-industrialisation" from the European High Middle Ages to the Early Modern period. Essentially, both China and the Middle East started substituting simple human labour for machines, causing cranes, water mills, etc to become rarer over time.

And strangely enough, a study showed that while this was happening there was little difference in real wages between Western Europe and the Middle East (so the substitution of capital with labour was not due to low wages), and I guess China wouldn't have been too different in this regard.

Why this happened is beyond me; I think susceptibility to nomadic raids/invasions was mentioned.

Comment by RocksBasil (RorscHak) on Could waste heat become an environment problem in the future (centuries)? · 2019-04-05T11:57:38.106Z · LW · GW

This is an interesting study; it seems that his numbers are not too far off what I plugged in as a placeholder (that our current energy consumption is within a couple of orders of magnitude of becoming climate-altering).

Though I'm not making sense of the nanobots yet haha

Comment by RocksBasil (RorscHak) on Could waste heat become an environment problem in the future (centuries)? · 2019-04-04T05:49:45.971Z · LW · GW

Ah, thanks. The equilibrium is more robust than I initially assumed; I didn't expect that.

So the issue won't be as pressing as climate change could be, although some kind of ceiling for energy consumption on Earth still exists...

Comment by RocksBasil (RorscHak) on Parable of the flooding mountain range · 2019-04-03T15:48:22.960Z · LW · GW

Oh yes! This makes more sense now.

#humans has decreasing marginal returns, since the main concern for #humanity is really the ability to recover, and while that ability increases with #humans, it doesn't increase linearly.

I do think individuals have "some" concern about whether humanity in general will survive: since all humans still share *some* genes with each individual, the survival and propagation of strangers can still have some utility for a human individual (I'm not sure where I'm going here...)

Comment by RocksBasil (RorscHak) on Parable of the flooding mountain range · 2019-03-30T03:01:06.812Z · LW · GW

Ah, I never thought about this being a secretary problem.

Well, initially I used it as an analogy for evolution and didn't think too much about memorising/backtracking.

Oh wait, if the mountaineer remembers each peak he saw, then he should go back to one of the high peaks he encountered before (assuming the flood hasn't mopped the floor yet, which is a given since he is still exploring); there are probably no irrecoverable rejections here, unlike in the secretary problem.

The second choice is a strange one. I think the entire group taking the best chance on one peak ALSO maximises the expected number of survivors, together with maximising each individual's chance of survival.

But it still seems that "a higher chance that someone survives" is something we want to take into account in the utility calculation when humanity makes choices in the face of a catastrophe.

For example, if a coming disaster gives us two choices:

(a): 50% chance that humans will go extinct, 50% chance nothing happens.

(b): 90% chance that 80% of humans will die.

The expected number of deaths from (b) significantly exceeds that from (a), and (a) has a greater expected number of survivors. But I guess many will agree that (b) is the better option to choose.
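
A quick check of the arithmetic behind that comparison, with deaths expressed as expected percentages of the current population:

```python
# Expected outcomes for the two options above, in percent of the current population.

# (a): 50% chance of extinction, 50% chance nothing happens
deaths_a = 0.5 * 100 + 0.5 * 0        # expected deaths: 50% of the population
extinction_a = 0.5                    # chance that nobody survives

# (b): 90% chance that 80% of humans die, 10% chance nothing happens
deaths_b = 0.9 * 80 + 0.1 * 0         # expected deaths: 72% of the population
extinction_b = 0.0                    # humanity survives in every branch

print(deaths_a, deaths_b)              # 50.0 72.0
print(100 - deaths_a, 100 - deaths_b)  # 50.0 28.0  (expected survivors)
print(extinction_a, extinction_b)      # 0.5 0.0
```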

Comment by RocksBasil (RorscHak) on Parable of the flooding mountain range · 2019-03-30T02:39:42.917Z · LW · GW

Yes, those two pieces can change the situation dramatically (and I have tried writing another parable including them, but found it a bit difficult for me)

I'm pondering what the best strategy with communication would be. Initially I thought I could spread them out, with each mountaineer knowing the location/height of the other mountaineers within a given radius (significantly larger than the visibility in the fog), and add that information into their "move towards the greatest height" algorithm. That might work, but I cannot rigorously show how useful it would be.

Regardless, I think evolution can't get much better than the third scenario: it doesn't seem to backtrack and most likely doesn't communicate.

There is also the fact that my analogy fails to consider that the "environment" changes over time, so the "mountain landscape" will not stay the same when you come back to a place after leaving it. This probably prevents backtracking, but doesn't change the outcome that you'd most likely be stuck on a hilltop that isn't the optimum.