Comments
Have never used a blade. I have always had acne and other skin problems that would make it impractical, plus it was just what my parents introduced me to in adolescence. But definitely not wet.
That's unexpected.
I think that a conception of heroic morality (basically, whether or not to use TDT, or choosing between act and rule utilitarianism) may be at the heart of many of the choices to be cooperative/nice or not. Many people seem to assume that they should always play the hero, while the more virtuous ones who don't make that assumption seem to think that you would never be able to play the hero at all.
As an example, consider assassinating Hitler. It's not clear how Hitler could retaliate in kind -- he is already killing people who disagree with him, and he is a single tyrant while you are an invisible individual. This does not apply, however, if you are in evenly matched factions, say Fascists and Communists.
In the case of the Singularity, I'd say that most people don't weigh small probabilities against very large payoffs.
I think that the answer to this problem is that it will simply be necessary for class oppression to be ended by then.
Does Moldbug actually believe that?
It's possible that we are forced to engage more with people we think are evil.
I'm talking about a much worse scenario.
Already we have been able to keep culture and hope alive in the midst of near-genocidal wars. Excepting mistakes such as a UFAI taking "survival at any cost" seriously, I think that the risk of survival's demands trashing human joy has fallen greatly since 1950 and is unlikely to return.
The simulations look like they might have been developed using the tech from Half-Life 2, but with terrible-quality animations. If the simulations were highly immersive, I might freak out because zombies. They also look less realistic than sequences seen in a number of popular violent video games (some of which offer considerable opportunities to make utilitarian or anti-utilitarian choices).
Telling people with no experience of violent video games to play Mass Effect all the way through, recording all their choices and hesitations, might be interesting for the cost.
One thing worth noting is that these all describe cases where, if the sides took things seriously, they would act much more harshly and heroically. For example, there are very few people using either coercion or effective-altruism-like schemes to save animals (and those who do show major scope insensitivity, or pick sympathetic victims).
In HPMOR, it also penetrates at least some thickness of cover, according to Moody, who also suggests that it needs significant mana. (How much mana? I'm getting the impression that Stupefy is acceptable for Auror-level combat despite being castable by top first-years.)
It also cannot be countered. We don't see much countering in HPMOR, but we do see Susan try to counter an extremely powerful bully's spell in the SPHEW arc.
IDK. Moody suggests that the spell might already be mildly homing or at least very easy to target.
Plus she still might carry secret ancient magic that could be taught to Harry or to someone else without Quirrell needing to.
And that could probably be done with an appropriate False Memory Charm.
He's got to have a time turner.
Potentially, although it would presumably raise huge alarms and might be impossible to use stealthily. Meanwhile, I imagine that the traps are not readily bypassable.
My sense is that both proponents and opponents tend to interpret or use it to mean "seriously, definitely bad" rather than the implied usage of "indicative of a problem".
I guess that's right.
Also, pseudoscientists very, very often seem to have either an agenda, or a desperate desire to escape epicureanism.
I would recommend segmenting it from LW a bit.
I have my doubts about this. You're optimizing for a very narrow window between societies with insufficient revivification tech and societies that are either too post-scarcity or post-Singularity, or have undergone enough cultural or political drift that the money is worthless. And both the slow and fast routes to revivification seem like they would involve a LOT of that.
I agree strongly with 1), with the addition that another one happened in the modern era when engineering prowess, military strength, and highly versatile, effectively truth-seeking science and philosophy finally coincided in Europe and Asia.
I suspect that if neither the singularity nor a disaster occurs, there is likely to be a different huge shift, probably centered on a resurgence of the power-and-control super-science that defined Victorian- through Space-Age technological advancement, or alternatively on some change in the social sphere.
I'd also add that, barring either a singularity or the adoption of a massive amount of AI and automation in society, the rate of change that completely shocks the most privileged and tech-savvy members of society within one lifetime is probably the limiting factor on the rate of technological development. (My view of Kurzweil is that he ignores this, which leads to absurdities such as sub-AI tech developing faster than humans can integrate information and design new things.)
Montaigne (writing in the Renaissance) suggests that hunter-gatherers or early agriculturalists were indeed pretty shocked even by French Renaissance society: they found the acceptance of social hierarchy unthinkable and also (this seems more like a specific cultural thing) were confused by the fear of death.
Doesn't the recursive fic Caelum Est Conterrens explore the horror aspects a bit more?
Yvain has a pretty good story on his blog, too.
Hearing about this makes me fear the unboxability of AI even more.
Even more recently, I think, enlisted men hardly made any decisions at all. Isn't the modern idea of the moderately agenty enlisted man a result of post-WWI squad-based mobile combat?
Also, even without a draft, the lower and upper hierarchies have different induction methods: military academy for the upper hierarchy, etc.
I think proximity also matters. There are no modern romantic heroes, but there are modern heartthrobs with questionable gender politics.
Agreed. Plus the child themself will have a blessed life.
I don't see any answer to this other than "everybody should have kids at the replacement rate".
Plus the fact that even if it's unlikely to work, the expected value can be ridiculously high.
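To make that concrete with illustrative numbers of my own (not from the original discussion): if an intervention has even a 1-in-10,000 chance of averting a disaster that would cost 10^10 lives, its expected value is on the order of 10^10 / 10^4 = 10^6 lives saved, which dwarfs most sure-thing interventions.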
Seconding this. Will likely interest feminists.
Also, many nonrationalists have naive ideas about how being in a Romance means you automagically never have coordination problems.
I can't remember... I think TKAM?
Lots of people responded to list ordering, or film metadata. I was pretty different in that I tried to choose The Less-Wrongiest Film. And failed, though I wasn't alone.
Possible alternate version: An entire city given over to paperclip manufacturing.
Similar possibility: An observation/control room for a large factory/server farm/whatever that is completely automated. The overseer is being replaced. This might do well with some visual reference to Valve's Portal.
Actually, yeah, you can. I think.
I don't think it would work all that well on the unfamiliar.
Hmmm... Here are some that focus on AI risk...
How about some scientists looking at a monitor (turned away from the reader) with mixed expressions of wonder, shock, horror, and confusion? Possibly you could have one of the researchers desperately trying to smash the computer or turn it off, while another tries to hold him back. Sort of capturing the failure of a real-life AI-boxing attempt...
A little more dramatic: A group of police or soldiers who appear to be arresting a server farm. One of them can see a monitor and looks really, really panicked, or defeated, or is just facepalming.
A scary-looking Terminator-like cyborg, but dressed in a tuxedo or something, handing a (turned away) paper to a human. Either the human could be really confused or shocked or something, or the cyborg could be making a creepy "I am not left-handed" smile.
I like the idea of suggesting the recursion of human intelligence to machine superintelligence.
How about a hunter-gatherer working on making a spear or other primitive weapon, or maybe fire, in the foreground, and an ambiguously human-robot-cyborg figure in the background with a machine gun / nuclear generator?
I... wasn't really clear. People will often decide that things are part of themselves in response to threat, even if they were not particularly attached to those things before.
I'd add that often people tend to valueify their attributes and then terminalize those values in response to threat, especially if they have been exposed to contemporary Western identity politics.
I would not consider it one, but gradual and natural evolution (cultural and technological evolution, not genetics and natural selection) might make it one in about a century, mostly through closer coordination and hiveminding.
I do think that many ideas about AI, such as Friendliness, can generalize to groups of people, though.
Yeah, it's actually enough to make me wonder if just forcing information into the country would trigger a rebellion...
Failing to ask people to spend time with me or to work on projects together, even when that was probably expected of me and (not just in hindsight, but at the time) probably had few or no possible negative consequences.
It's more a question of "at least one person chose a non-optimal university so they could be together".
There were methods available for me to learn them. All I had to do was run some freaking low-risk, costless empirical tests to calibrate. My parents were telling me to. Once I reached college I did the tests, and I am now reasonably social.
In modern culture, you get a fair amount of weirdness allowed as long as you are capable of being normal when it counts and are not too self-indulgent with it...
World of squibs.
Seconded. Learning to cook at a minimal level long before going to university has been a great asset to me, and allowed me to learn to cook well very quickly.