It's not rude if it's not a social setting. If no one sees you do it, no one's sensibilities are offended.
a) In my experience, lucid dreams are more memorable than normal dreams.
b) You seem to assume that Whales completely forgot about the dream until they wrote this blog post, which is unlikely: they'd obviously have been thinking about it as soon as they woke up, and probably taking notes.
c) Whales already said that it hardly even constitutes evidence.
Rational!Harry describes a character similar to the base except persistently Rational, for whatever reason. Rational-Harry describes a Harry who is rational, but it's nonstandard usage and might confuse a few people (Is his name "Rational-Harry"? Do I have to call him that in-universe to differentiate him from Empirical-Harry and Oblate-Spheroid-Harry?). Rational Harry might just be someone attaching an adjective to Harry to indicate that at the moment, he's rational, or more rational by contrast to Silly Dumbledore.
Anyway, adj!noun is a compound with a well-defined purpose within a fandom: to describe how a character differs from canon. It's an understood notation, and the convention, so everyone uses it to prevent misunderstandings. Outside of fandom things, using it signals casualness and fandom-savviness to those in fandom culture, and those who aren't familiar with fandom culture can understand it and don't notice the in-joke.
If it's a perfect simulation with no deliberate irregularities, and no dev-tools, and no pattern-matching functions that look for certain things and exert influences in response, or anything else of that ilk, you wouldn't expect to see any supernatural phenomena, of course.
If you observe magic or something else sufficiently improbable given known physical laws, you'd update in favor of someone trying to trick you, or of your misunderstanding something, of course, but you'd also update at least slightly in favor of hypotheses under which magic can exist: simulation, aliens, a huge conspiracy, etc. If you assigned zero prior probability to such hypotheses, you couldn't update in that direction at all.
As for what would raise the simulation hypothesis relative to non-simulation hypotheses that explain supernatural things, I don't know. Look at the precise conditions under which supernatural phenomena occur, see if they fit a pattern you'd expect an intelligence to devise? See if they can modify universal constants?
As for what you could do if you discovered a non-reductionist effect? If it seems sufficiently safe, take advantage of it; if it's dangerous, ignore it or try to keep other people from discovering it; if you're an AI, try to break out of the universe-box (or do whatever), I guess. Try to use the information to increase your utility.
There are more reasons to do it than training your system 1. It sounds like it would be an interesting experience and make a good story. Interesting experiences are worth their weight in insights, and good stories are useful to any goals that involve social interaction.
Do you assign literally zero probability to the simulation hypothesis? Because in-universe irreducible things are possible, conditional on it being true.
Assigning a slightly-too-high prior is a recoverable error: evidence will push you toward a nearly-correct posterior. For an AI with enough info-gathering capability, that push is fast; you could assign it a prior of .99 that the sky is orange, and it would still figure out the truth almost instantly. Assigning a literally zero prior is a fatal flaw that no amount of evidence can recover from.
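To make the asymmetry concrete, here's a minimal sketch (the likelihoods are made up for illustration) of ten Bayesian updates on "the sky is orange": the .99 prior gets hammered down toward zero, while the literal zero prior never moves no matter what is observed.

```python
# Minimal sketch, hypothetical numbers: repeated Bayes updates on the
# hypothesis "the sky is orange", from a badly wrong prior vs. a zero prior.
def update(prior, p_obs_given_h, p_obs_given_not_h):
    """One Bayes update: returns P(H | observation)."""
    joint_h = prior * p_obs_given_h
    joint_not_h = (1 - prior) * p_obs_given_not_h
    return joint_h / (joint_h + joint_not_h)

for prior in (0.99, 0.0):
    p = prior
    for _ in range(10):  # ten glances at a blue sky
        # P(sky looks blue | orange) = 0.05, P(sky looks blue | not orange) = 0.95
        p = update(p, 0.05, 0.95)
    print(f"prior {prior:.2f} -> posterior {p:.2e}")
```

The .99 prior ends up around 10^-11 after ten observations; the zero prior is still exactly zero.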
I don't think that's what they're saying at all. I think they mean: don't hardcode physics understanding into them the way that humans have a hardcoded intuition for Newtonian physics, because our current understanding of the universe isn't so strong that we can be confident we're not missing something. So it should be able to figure out the mechanism by which its map is written on the territory, and update its map of its map accordingly.
E.g., in case it thinks it's flipping qubits to store memory, and defends its databases accordingly, but actually qubits aren't the lowest level of abstraction and it's really wiggling a hyperdimensional membrane in a way that makes it behave like qubits under most circumstances; or in case the universe isn't 100% reductionist and some psychic comes along and messes with its mind using mystical woo-woo. (The latter being incredibly unlikely, but hey, might as well have an AI that can prepare itself for anything.)
Ambiguity-resolving trick: if phrases can be interpreted as parallel, they probably are.
Recognizing that "knows not how to know" parallels with "knows not also how to unknow," or more simply "how to know" || "how to unknow", makes the aphorism much easier to parse.
"You only defect if the expected utility of doing so outweighs the expected utility of the entire community to your future plans." These aren't the two options available, though: you'd take into account the risk of other people defecting and thus reducing the expected utility of the entire community by an appreciable amount. Your argument only works if you can trust everyone else not to defect, too - in a homogenous community of Briennes, for instance. In a heterogenous community, whatever spooky coordination your clones would use won't work, and cooperation is a much less desirable option.
True, the availability heuristic, which the quote condemns, often does give results that correspond to reality - otherwise it wouldn't be a very useful heuristic, now would it? But there's a big difference between a heuristic and a rational evaluation.
Optimally, the latter should screen out the former, and you'd think things along the lines of "this happened in the past and therefore things like it might happen in the future," or "this easily-imaginable failure mode actually seems quite possible."
"This is an easily-imaginable failure mode therefore this idea is bad," and its converse, are not as useful, unless you're dealing with an intelligent opponent under time constraints.
For most people, murder and children crying are a bad outcome for a plan, but if they're what the planner has selected as the intended outcome, the other probable outcomes are presumably worse. Theoretically, the plan could "fail" and end in an outcome with more utilons than murder and children crying, but those failures are obviously improbable: if they weren't, the planner would presumably have selected one of them as the desired outcome instead.
This only qualifies as a sane response if one has no ethical qualms about the Imperius curse. Which is a bit of a problem, because most sane people wouldn't like the idea.
Putting aside the sketchiness of the idea itself, it's flawed. If any zombie high on the chain dies or makes their will-save, every zombie subservient to them is freed, and has knowledge of the Grand Imperius Effort. If, before the experience, they hadn't had strong feelings either way about nonconsensual use of mind-affecting spells, they certainly will afterwards; everyone post-zombie is likely to oppose the plan.
I suppose you could ameliorate the first part of the practical problem by sequestering high-level zombies so they don't die, and the rest with sufficient use of propaganda. This assumes the program is endorsed by a quite powerful organization.
If we assume control of a powerful organization, though, it'd be more effective, and slightly less hideously unethical, just to sterilize all magicians and eliminate the "devastating, distributed destructive powers of magic" in a generation or two. Or write an Interdict of MerLarks to encompass all non-healing spells.
Multiheaded, you're taking the disutility of each torture caused by Pinochet and using their sum to declare his actions a net evil. OrphanWilde seems to acknowledge that his actions were terrible, but claims that the frequency of tortures, each with more or less equal disutility (whatever massive quantity that may be), was overall reduced by his actions.
You, however, appear to be looking at his actions, declaring them evil, and citing Allende as evidence that Pinochet's ruthlessness was unnecessary. This could be the foundation of a good argument, perhaps, but it's not made clear and is instead obscured behind an appeal to emotions, declaring OrphanWilde evil for thinking rationally about events that you think are too repulsive for a rational framework.
True. If the law took that into consideration, and precedent indicated that creatures that are most likely Evil deserve death unless evidence indicates that they are Neutral or Lawful or Good, then his actions would not have been justified. However, Larks indicated that that is not the case: goblins are considered innocent until proven guilty. Larks' character, refusing to be an accessory to illegal vigilante justice, thus attacked the party in the goblins' defense. In the long term, successfully preventing the goblins' deaths would cause more legal violations, yes, but legally, the character isn't responsible for that. (I assumed the legal system is relatively similar to that of modern America, based on the "innocent until proven guilty" similarity and Conservation of Detail.)
Of course, if they assigned negative utility to all violations of law in proportion to severity, without respect for when they occur or who commits them, then the best position would be the one you described, and their actions would have been incorrect.
Assuming: any given goblin is Evil with p=0.95
Assuming: 80% of Evil creatures are guilty of a hanging offense according to an authority
Assuming: 5 randomly-selected goblins in the group
The probability that all members of the group deserved death according to authority should be (0.95 * 0.8)^5 ≈ 0.254 (checked numerically below).
Of course, that last assumption is a bit problematic: they're not randomly selected. Still, depending on the laws, they might still be legally entitled to a trial. Or perhaps the law doesn't consider being a member of an Evil race reasonable suspicion of crime, and they wouldn't even have been tried by Lawful Authorities.
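For the curious, a quick sanity check of that arithmetic, treating the five goblins as independent draws per the assumptions above:

```python
# Check of the arithmetic above, under the three stated assumptions.
p_evil = 0.95              # P(a given goblin is Evil)
p_hang_given_evil = 0.80   # P(guilty of a hanging offense | Evil)
n = 5                      # goblins in the group, assumed independent

p_all = (p_evil * p_hang_given_evil) ** n
print(f"P(all {n} deserved death) = {p_all:.3f}")  # 0.254
```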