I was thinking of iatrogenic transmissions, yeah (and prions have been a long-term psychological fear of mine, too... so perhaps I've crawled through too much publicly available information about prions to be a normal person)
I wonder if there are any instances of FFI transmitted through the iatrogenic pathway, whether they could be distinguished from typical CJD, and whether iatrogenic prions could become a significant issue for healthcare, given the difficulty of sterilising prions (more prion disease cases due to an aging population could mean more contaminated medical equipment, and the possible popularisation of brain-computer interfaces might give us some problems too).
Maybe the sample size is too small for us to know.
How much do we know about the presence of prion diseases in other animals we frequently consume?
A quick search shows that even fish have some variant of the prion protein, so perhaps all vertebrates pose a theoretical risk of carrying a prion disease, although the species barrier is likely too high for prions of non-mammalian origin.
I'm quite concerned about pigs.
Apparently pigs are considered prion-resistant, as no naturally occurring prion disease has been identified in pigs, but it is possible to infect them with some prion strains under laboratory conditions.
A prion disease in pigs could be very bad for two reasons:
1: Pigs are often fed leftover food from human consumption, which could become a potent vector of prion disease between pigs.
2: Pork brain, spine, tongue, etc. are frequently consumed in some parts of the world, so if such an illness spread among pigs, some humans would be exposed to large amounts of prion protein.
Fortunately, nothing of this sort has happened with pigs yet, and we have been consuming pork brain (and feeding pigs leftover food) for centuries without known issues (though I doubt pre-modern people would have noticed the pattern, since the incubation period can be very long), so maybe pigs are resistant (enough) to prions that pork is very safe.
"Infectious" means "transmissible between people". As the name suggests, fatal familial insomnia is a genetic condition. (FFI and the others listed are also prion diseases - the prion just emerges on its own without a source prion and no part of the disease is contagious. This is an interesting trait of prions that could not happen with, say, a disease caused by a virus.)
Can someone catch FFI from coming into contact with the neural tissues of a patient with FFI?
I suspect it's possible that FFI genes cause the patient's body to create prions, but can those prions lead to illness in a person without the FFI gene? If yes, then FFI would still be "infectious" in some sense, I suppose.
I like your last point a lot; does it mean that governments/institutions are more interested in protecting the systems they are part of than their constituents? That indeed seems possible and could explain this situation.
I still wonder whether such a thing happens on an individual level as well, which could help shed some light.
My assumption is that promises are "vague": claiming $99 or $100 both fulfils the promise of giving a high claim close to $100, so there is no incentive to break it.
I think the vagueness stops the race to the bottom in TD, compared to the dollar auction in which every bid can be outmatched by a tiny step without risking going overboard immediately.
I do think I overcomplicated the matter to avoid modifying the payoff matrix.
"breaking a promise" or "keeping a promise" has no intrinsic utilities here.
What I state is that under this formulation, if the other player believes your promise and plays the best response to your promise, your best response is to keep the promise.
" in this case, "trust" is equivalent to changing the payout structure to include points for self-image and social cohesion "
I guess I'm just trying to model trust in TD without changing the payoff matrix. The payoff matrix of the "vague" TD works in promoting trust--a player has no incentive breaking a promise.
This is true. The issue is that the Nash Equilibrium formulation of TD predicts that everyone will bid $2, which is counter-intuitive and does not match empirical findings.
I'm trying to convince myself that the NE formulation in TD is not entirely rational.
If Alice claims close to $100 (say, $80), Bob gets a higher payoff claiming $100 (getting $78) instead of claiming $2 (getting $4).
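To make that arithmetic concrete, here is a minimal sketch of the standard TD payoff rule (claims $2 to $100, $2 bonus/penalty; the function itself is just my illustration):

```python
# Traveler's Dilemma payoff: both get the lower claim; the lower claimant
# also gets a $2 bonus, and the higher claimant pays a $2 penalty.
def td_payoff(mine: int, other: int, bonus: int = 2) -> int:
    if mine == other:
        return mine
    low = min(mine, other)
    return low + bonus if mine < other else low - bonus

print(td_payoff(100, 80))  # 78: claiming $100 against Alice's $80
print(td_payoff(2, 80))    # 4:  claiming $2 against Alice's $80
# (The exact best response to $80 is $79, which pays $81, but $100 still beats $2.)
print(max(range(2, 101), key=lambda c: td_payoff(c, 80)))  # 79
```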
I would assume Kelvin users to outnumber Fahrenheit users on LW.
I think we should still keep b even with the iterations, since I made the assumption that "degrees of loyalty" is a property of S, not entirely the outcome of rational game-playing.
(I still assume S is rational apart from having b in his payoffs.)
Otherwise those kinds of tests probably make little sense.
I also wonder what happens if M doesn't know the repulsiveness of the test for certain, only a distribution over it (i.e. the CIA only knows that on average killing your spouse is pretty repulsive, except this lady here really hates her husband, oops); could that make a large impact?
I guess I was only trying to figure out whether this "repulsive loyalty test" story that seems to exist in history/mythology/real life in a few different cultures has any basis in logic.
Thanks, I had forgotten the proof before replying to your comment.
You are correct that in PD (D,C) is Pareto-optimal, so the Nash Equilibrium (D,D) is much closer to a Pareto outcome than the Nash Equilibrium (0,0) of TD is to its Pareto outcomes (somewhere around each person getting a million pounds, give or take a cent).
It still seems strange to see a game with only one round and no collusion land pretty close to the optimum, while its repeated version (the dollar auction) deviates badly from the Pareto outcome.
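As a sanity check of the Pareto claim, here is a small sketch using the usual textbook PD payoffs (the specific numbers are my assumption, not from this thread):

```python
pd = {  # (row, col) strategies -> (row payoff, col payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def pareto_optimal(outcome):
    p = pd[outcome]
    # Pareto-optimal iff no other outcome is weakly better for both
    # players and strictly better for at least one.
    return not any(
        q[0] >= p[0] and q[1] >= p[1] and q != p
        for o, q in pd.items()
        if o != outcome
    )

for o in pd:
    print(o, pareto_optimal(o))
# Only the equilibrium (D, D) fails; (D, C) is Pareto-optimal because helping
# the column player would have to hurt the row player.
```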
Thanks, the final result is somewhat surprising, perhaps it's a quirk of my construction.
Setting r higher than v does remove the "undercover agents" that have practically 0 obedience, but I didn't know it was the optimal choice for M.
I think "everybody launches all nukes" might not be a Nash Equilibrium.
We can argue that once one side has launched its nukes, the other side does not necessarily have an incentive to retaliate: they won't really care whether the enemy gets nuked once they themselves are nuked, and they may even have an incentive not to launch, to prevent the "everybody dies" outcome, which can be argued to be negative even for someone who is about to die.
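Here is a toy normal-form version of that argument; every payoff number is invented purely to encode the assumptions above (being nuked is catastrophic either way, and "everybody dies" is assumed to be slightly worse still):

```python
from itertools import product

payoffs = {  # (A, B) strategies -> (A payoff, B payoff); all numbers assumed
    ("hold",   "hold"):   (0,    0),
    ("launch", "hold"):   (0,    -100),
    ("hold",   "launch"): (-100, 0),
    ("launch", "launch"): (-101, -101),  # everybody dies: assumed worst for both
}

def is_nash(a: str, b: str) -> bool:
    pa, pb = payoffs[(a, b)]
    a_ok = all(payoffs[(a2, b)][0] <= pa for a2 in ("hold", "launch"))
    b_ok = all(payoffs[(a, b2)][1] <= pb for b2 in ("hold", "launch"))
    return a_ok and b_ok

for a, b in product(("hold", "launch"), repeat=2):
    print(a, b, is_nash(a, b))
# (launch, launch) is not an equilibrium here: given the other side launched,
# holding (-100) beats retaliating (-101). That is exactly the no-retaliation
# argument above.
```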
I haven't found any information yet, but I suspect there is a mixed Nash somewhere in TD.
Spoilers? That sounds intriguing, I'll wait :)
It is interesting that experimental results for the traveller's dilemma seem to deviate strongly from the Nash Equilibrium, and in fact land quite close to the Pareto-optimal solution.
This is pretty strange for a game that has only one round and no collusion (you'd expect it to end up like the Prisoner's Dilemma, no?)
It is rather different from what we see in the dollar auction, which has no Nash Equilibrium but always deviates far from the Pareto-optimal solution.
I suspect that this game being one-round-only actually improves the Pareto efficiency of its outcomes:
Maybe if both participants were allowed to change their bids after seeing each other's bid, they WOULD go into a downward spiral, one cent at a time, until they reach zero or one player gives up at some point with a truce, just like how dollar auctions always stop at some point.
When there is only one round, however, there is no way for a player to make their bid exactly 1 or 2 cents less than the other player's, and bidding any less than that is suboptimal compared to bidding more than the other player. So perhaps there is an incentive against lowering one's bid indefinitely towards 0 before the game even starts, just as no one would bid $1000 in the dollar auction's first round.
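Here is a quick simulation of that hypothetical multi-round variant; the alternating best-response protocol is my own simplification, reusing the payoff rule from the earlier sketch:

```python
# Same payoff rule as in the earlier sketch.
def td_payoff(mine: int, other: int, bonus: int = 2) -> int:
    if mine == other:
        return mine
    low = min(mine, other)
    return low + bonus if mine < other else low - bonus

def best_response(other: int) -> int:
    return max(range(2, 101), key=lambda c: td_payoff(c, other))

# Players alternately undercut each other's visible claim.
a, b = 100, 100
while True:
    new_a = best_response(b)
    new_b = best_response(new_a)
    if (new_a, new_b) == (a, b):  # nobody wants to move any more
        break
    a, b = new_a, new_b
print(a, b)  # 2 2: the downward spiral bottoms out at the minimum claim
```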
I think there are economic factors at play, although it will be more subtle than a plain comparison of "alleged GDP per capita".
I recall that both China and the Middle East went through a process of "de-industrialisation" from the European High Middle Ages to the Early Modern period. Essentially both China and the Middle East started substituting simple human labour for machines, causing cranes, water mills, etc. to become rarer over time.
And strangely enough, a study showed that while this was happening there was little difference in real wages between Western Europe and the Middle East (so the substitution of capital with labour was not due to low wages), and I guess China wouldn't have been too different in this regard.
Why this happened is beyond me; I think susceptibility to nomadic raids/invasions was mentioned.
This is an interesting study; it seems his numbers are not too far off what I plugged in as a placeholder (that our current energy consumption is within a couple of orders of magnitude of becoming climate-altering).
Though I'm not making sense of the nanobots yet haha
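For what it's worth, a back-of-envelope version of that placeholder claim, using round numbers I believe are roughly right (about 18 TW of world primary energy use, about 2.5 W/m^2 of CO2 forcing):

```python
power_w       = 18e12    # world primary energy consumption, roughly 18 TW
earth_area_m2 = 5.1e14   # Earth's surface area
waste_heat    = power_w / earth_area_m2
print(f"direct waste heat: {waste_heat:.3f} W/m^2")              # ~0.035 W/m^2
print(f"vs ~2.5 W/m^2 of CO2 forcing: {2.5 / waste_heat:.0f}x")  # ~70x away
# i.e. direct heating sits within roughly two orders of magnitude of being
# climate-relevant, consistent with the placeholder above.
```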
Ah thanks, so the equilibrium is more robust than I initially assumed; I didn't expect that.
So the issue won't be as pressing as climate change could be, although some kind of ceiling still exists for energy consumption on Earth...
Oh yes! This can make more sense now.
#humans has decreasing marginal returns, since really the main concern for #humanity is the ability to recover, and while that increases with #humans, it is not linear.
I do think individuals have "some" concern about whether humanity in general will survive; since all humans still share *some* genes with each individual, the survival and propagation of strangers can still have some utility for a human individual (I'm not sure where I'm going here...)
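A purely illustrative toy model of that recovery intuition; the functional form and the per-survivor probability p are invented placeholders, not estimates:

```python
def recovery_chance(survivors: int, p: float = 1e-9) -> float:
    """P(at least one viable recovery nucleus), assuming independence."""
    return 1 - (1 - p) ** survivors

for n in (10**6, 10**8, 10**9, 10**10):
    print(f"{n:.0e} survivors -> {recovery_chance(n):.4f}")
# ~0.0010, ~0.0952, ~0.6321, ~1.0000: near-linear at first, then saturating,
# so each extra survivor is worth less -- decreasing marginal returns.
```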
Ah, I never thought about this being a secretary problem.
Well, initially I used it as an analogy for evolution and didn't think too much about memorising/backtracking.
Oh wait, the mountaineer has memories of each peak he saw, so he should go back to one of the high peaks he encountered before (assuming the flood hasn't covered it yet, which is a given since he is still exploring); there are probably no irrecoverable rejections here, unlike in the secretary problem.
The second choice is a strange one. I think the entire group taking the best chance on one peak ALSO maximises the expected number of survivors, along with maximising each individual's chance of survival.
But it still seems that "a higher chance that someone survives" is something we want to include in the utility calculation when humanity makes choices in the face of a catastrophe.
For example, suppose a coming disaster gives us two choices:
(a): 50% chance that humans will go extinct, 50% chance nothing happens.
(b): 90% chance that 80% of humans will die.
The expected number of deaths under (b) significantly exceeds that under (a), and (a) has a greater expected number of survivors. But I guess many will agree that (b) is the better option to choose.
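Checking the arithmetic on those two options (the population size is just a round placeholder):

```python
N = 8e9  # rough world population
deaths_a = 0.5 * N        # (a): 50% chance everyone dies, else no one
deaths_b = 0.9 * 0.8 * N  # (b): 90% chance that 80% die
print(f"{deaths_a:.2e} vs {deaths_b:.2e}")  # 4.00e+09 vs 5.76e+09
# ...but P(extinction) is 0.5 under (a) and 0 under (b), which is why (b)
# can still look like the better choice.
```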
Yes, those two pieces can change the situation dramatically (I have tried writing another parable including them, but found it a bit difficult).
I'm pondering what the best strategy with communication would be. Initially I thought I could spread the mountaineers out so that each one knows the location/height of the others within a given radius (significantly larger than the visibility in the fog) and adds that information into their "move towards the greatest height" algorithm. This might work, but I cannot rigorously show how useful it would be.
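Here is a minimal sketch of that idea; the landscape, the radius, and the "drift toward the highest peer you can hear" rule are all invented for illustration:

```python
import random

def height(x: float) -> float:
    # Placeholder rugged landscape: a low peak at x=3 (height 5)
    # and a taller one at x=10 (height 9).
    return max(0.0, 5 - abs(x - 3)) + max(0.0, 9 - 2 * abs(x - 10))

def step(positions: list[float], radius: float = 6.0, dx: float = 0.1) -> list[float]:
    new = []
    for x in positions:
        # Local hill-climbing move (what a lone mountaineer would do).
        best = max((x - dx, x, x + dx), key=height)
        # If a peer within the radius reports a greater height, drift toward them.
        target = max((y for y in positions if abs(y - x) <= radius), key=height)
        if height(target) > height(best):
            best = x + dx * (1 if target > x else -1)
        new.append(best)
    return new

random.seed(0)
walkers = [random.uniform(0, 14) for _ in range(20)]
for _ in range(300):
    walkers = step(walkers)
print(sorted(round(height(x), 1) for x in walkers)[-5:])
# The best walkers end up on the tall peak; without the peer pull, those
# starting near x=3 would be stuck on the lower one.
```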
Regardless, I think evolution can't do much better than the third scenario; it doesn't seem to backtrack and most likely doesn't communicate.
There is also the fact that my analogy fails to consider that the "environment" changes over time, so the "mountain landscape" will not stay the same when you come back to a place after leaving it. This probably prevents backtracking, but doesn't change the outcome that you'd most likely be stuck on a hilltop that isn't the optimum.