Posts

Harry Potter and the Methods of Rationality discussion thread, part 13, chapter 81 2012-03-27T18:07:15.176Z
Musings on probability 2010-03-14T23:17:10.250Z

Comments

Comment by bogdanb on Where I agree and disagree with Eliezer · 2023-07-21T07:02:40.900Z · LW · GW

I’m not sure I understand your weighting argument. Some capabilities are “convergently instrumental” because they are useful for achieving a lot of purposes. I agree that AI construction techniques will target obtaining such capabilities, precisely because they are useful.

But if you gain a certain convergently instrumental capability, it then automatically allows you to do a lot of random stuff. That’s what the words mean. And most of that random stuff will not be safe.

I don’t get what the difference is between “the AI will get convergently instrumental capabilities, and we’ll point those at AI alignment” and “the AI will get very powerful and we’ll just ask it to be aligned”, other than a bit of technical jargon.

As soon as the AI gets sufficiently powerful [convergently instrumental] capabilities, it is already dangerous. You need to point it precisely at a safe target in outcomes-space or you’re in trouble. Just vaguely pointing it “towards AI alignment” is almost certainly not enough; specifying that outcome safely is the problem we started with.

(And you still have the problem that while it’s working on that someone else can point it at something much worse.)

Comment by bogdanb on Where I agree and disagree with Eliezer · 2022-07-02T20:56:45.349Z · LW · GW

Exactly. You can’t generalize from “natural” examples to adversarial examples. If someone is trying hard to lie to you about something, verifying what they say can very well be harder than finding the truth would have been absent their input, particularly when you don’t know whether they want to lie, or about what.

I’m not an expert in any of these fields and I’d welcome correction, but I’d expect verification to be at least as hard as “doing the thing yourself” in cases like espionage, hacking, fraud, and corruption.

Comment by bogdanb on Where I agree and disagree with Eliezer · 2022-07-02T20:24:06.935Z · LW · GW

AI accelerates the timetable for things we know how to point AI at

It also accelerates the timetable for random things that we don’t expect and don’t even try to point the AI at but that just happen to be easier for incrementally-better AI to do.

Since the space of stuff that helps alignment seems much smaller than the space of dangerous things, you’d expect that most of the things the AI randomly accelerates, without us pointing it at them, will be dangerous.

Comment by bogdanb on Parable: The Bomb that doesn't Explode · 2022-06-21T10:04:45.555Z · LW · GW

See above. Don’t become a munitions engineer, and, being aware that someone else will take that role, try to prevent anyone from taking that role. (Hint: That last part is very hard.)

The conclusions might change if planet-destroying bombs are necessary for some good reason, or if you have the option of safely leaving the planet and making sure nobody that comes with you will also want to build planet-destroying bombs. (Hint: That last part is still hard.)

Comment by bogdanb on Bragging Thread May 2015 · 2015-06-02T21:14:26.941Z · LW · GW

For what it’s worth, the grammar and spelling were much better than is usual even for the native-English part of the Internet. That’s probably fainter praise than it deserves: I don’t remember actually noticing any such fault, which probably means there were very few of them.

The phrasing and wording did sound a bit weird, but I guess that’s at least one reason why you’re writing, so congratulations, and I hope you keep it up! I’m quite curious to see where you’ll take it.

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 113 · 2015-02-28T23:19:20.369Z · LW · GW

Indeed, the only obvious “power” Harry has that is (as far as we know) unique to him is Partial Transfiguration. I’m not sure if Voldie “knows it not”; as someone mentioned last chapter, Harry used it to cut trees when he had his angry outburst in the Forbidden Forest, and in Azkaban as well. In the first case Voldie was nearby, allegedly to watch out for Harry, but far enough away to be undetectable via their bond, so it’s possible he didn’t see what exact technique Harry used. And in Azkaban he was allegedly unconscious.

I can’t tell if he could have deduced the technique only by examining the results. (At least for the forest occasion he could have made time to examine the scene carefully, and I imagine that given the circumstances he’d have been very interested to look into anything unusual Harry seemed to be able to do.)

On the plus side, Harry performed PT by essentially knowing that objects don’t exist; so it could well be possible to transfigure a thin slice or thread of air into something strong enough to cut. For that matter, that “illusion of objects” thing should allow a sort of “reverse-Partial” transfiguration, i.e. transfiguring (parts of) many objects into a single thing. Sort of like what he did to the troll’s head, but applied simultaneously to a slice of air, wands, and Death Eaters. Dumbledore explicitly considers it as a candidate against Voldemort (hint: Minerva remembers Dumbledore using transfiguration in combat). And, interestingly, it’s a wordless spell (I’m not even sure if Harry can cast anything else wordlessly), and Harry wouldn’t need to raise his wand, or even move at all, to cast it on air (or on the time-space continuum, or the world wave-function, whatever).

On the minus side, I’m not sure if he could do it fast enough to kill the Death Eaters before he’s stopped. He did get lots of transfiguration training, and using it in anger in the forest suggests he can do it pretty fast, but he is being watched, and IIRC transfiguration is not instantaneous. He probably can’t cast it on Voldie or on his wand, though he might be able to destroy the gun. And Voldemort can certainly find lots of ways to kill him without magic or touching him directly; hell, he probably knows kung fu and such. And even if Harry managed to kill this body, he’d have to find a way to get rid of the Horcruxes. (I still don’t understand exactly what the deal is with those. Would breaking the Resurrection Stone help?)

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 104 · 2015-02-17T22:18:34.337Z · LW · GW

Well, we only know that Harry feels doom when near Q and/or his magic, and that in one case in Azkaban something weird happened when Harry’s Patronus interacted with what appeared to be an Avada Kedavra bolt, and that Q appears to avoid touching Harry.

Normally I’d say that faking the doom sensations for a year, and faking being incapacitated while trying to break someone out of Azkaban, would be too complicated. But in this case...

Comment by bogdanb on The Great Filter is early, or AI is hard · 2015-02-17T20:50:56.478Z · LW · GW

Both good points, thank you.

Comment by bogdanb on The Great Filter is early, or AI is hard · 2015-02-17T20:38:22.699Z · LW · GW

Thank you, that was very interesting!

Comment by bogdanb on Truth vs Utility · 2014-09-05T23:57:56.784Z · LW · GW

I sort of get your point, but I’m curious: can you imagine learning (with thought-experiment certainty) that there is actually no reality at all, in the sense that no matter where you live, it’s simulated by some “parent reality” (which in turn is simulated, etc., ad infinitum)? Would that change your preference?

Comment by bogdanb on The Great Filter is early, or AI is hard · 2014-09-05T23:23:43.718Z · LW · GW

most "earthlike" planets in habitable zones around sunlike stars are on average 1.8 Billion years older than the Earth

How do you know? (Not rhetorical, I have no idea and I’m curious.)

Comment by bogdanb on The Great Filter is early, or AI is hard · 2014-09-05T23:19:41.728Z · LW · GW

If the final goal is of local scope, energy acquisition from out-of-system seems to be mostly irrelevant, considering the delays of space travel and the fast time-scales a strong AI seems likely to operate at. (That is, assuming no FTL and the like.)

Do you have any plausible scenario in mind where an AI would be powerful enough to colonize the universe, but would do it because it needs energy for something inside its system of origin?

I might see one perhaps extending to a few neighboring systems in a very dense cluster for some strange reason, but I can’t imagine likely final goals (again, for its birth star-system) for which it would need to spend hundreds of millennia taking over even a single galaxy, let alone leaving it. (Which is of course no proof there isn’t one; my question above wasn’t rhetorical.)

I can imagine unlikely accidents causing some sort of paperclipper scenario, and maybe vanishingly rare cases where two or more AIs manage to fight each other over long periods of time, but it’s not obvious to me why this class of scenarios should be assigned a lot of probability mass in aggregate.

Comment by bogdanb on Memory is Everything · 2014-09-05T23:06:02.004Z · LW · GW

Honestly, I can’t really find anything significant in this comment I disagree with.

Comment by bogdanb on Memory is Everything · 2014-08-31T13:41:03.782Z · LW · GW

It's a bit like opening a thread arguing that the Spanish inquisition was right for torturing nonbelievers because they acted under the assumption that they could save souls from eternal damnation by doing so.

But the OP didn’t argue in support of torturing people, as far as I can tell. In terms of your analogy, my reading of the OP was a bit like:

“Hey, if the Spanish Inquisition came to you and offered the following two options, would you pick either of them, or refuse both? The options are (1) you’re excommunicated, then you get all the cake you want for a week, then you forget about it, or (2) you’re sanctified, then you’re tortured for a week, then you forget about it. Option (3) means nothing happens, they just leave.”

Which sounds completely different to my ears.

Comment by bogdanb on Memory is Everything · 2014-08-31T08:56:40.168Z · LW · GW

Sure, but then why do you expect that memory and experience would also behave in a common-sense manner? (At least, that’s what I think you were doing in your first comment.)

I interpreted the OP as “I’m confused about memory and experience; let’s try a thought experiment about a very uncommon situation just to see what we think would happen”. And your first comment reads to me as “you picked a bad thought experiment, because you’re not describing a common situation”. Which seems to completely miss the point: the whole purpose of the thought experiment was to investigate the consequences of something very distinct from situations where “common sense” has real experience to rely on.

The part about torturing children I don’t even get at all. Wondering about something seems to me almost the opposite of the philosophy of “doing something because you think you know the answer”. Should we never do thought experiments, because someone might act on mistaken assumptions about those ideas? Not thinking about something before doing it sounds to me like exactly the opposite of the correct strategy.

Comment by bogdanb on The Great Filter is early, or AI is hard · 2014-08-31T07:57:19.089Z · LW · GW

Once AI is developed, it could "easily" colonise the universe.

I was wondering about that. I agree with the “could”, but is there a discussion of how likely it is that it would decide to do that?

Let’s take it as a given that successful development of FAI will eventually lead to lots of colonization. But what about non-FAI? It seems like the most “common” cases of UFAI are mistakes in trying to create an FAI. (In a species with similar psychology to ours, a contender might also be mistakes trying to create military AI, and intentional creation by “destroy the world” extremists or something.)

But if someone is trying to create an FAI, and there is an accident with early prototypes, it seems likely that most of those prototypes would be programmed with only planet-local goals. Similarly, it doesn’t seem likely that intentionally-created weapon-AI would be programmed to care about what happens outside the solar system, unless it’s created by a civilization that already does, or is at least attempting, interstellar travel. Creators that care about safety will probably try to limit the focus, even imperfectly, both to make reasoning easier and to limit damage, and weapons-manufacturers will try to limit the focus for efficiency.

Now, I realize that a badly done AI could decide to colonize the universe even if its creators didn’t program it for that initially, and that simple goals can have that as an unforeseen consequence (like the prototypical paperclip manufacturer). But have we any discussion of how likely that is in a realistic setting? Perhaps the filter is that the vast majority of AIs limit themselves to their original solar system.

Comment by bogdanb on The Great Filter is early, or AI is hard · 2014-08-31T07:38:23.729Z · LW · GW

The problem with that is that life on Earth appeared about 4 billion years ago, while the Milky Way is more than 13 billion years old. If life were somewhat common, we wouldn’t expect to be the first, because there was time for it to evolve several times in succession, and it had lots of solar systems where it could have done it.

A possible answer could be that there was a very strong early filter during the first part of the Milky Way’s existence, and that filter lessened in intensity in the last few billion years.

The only examples I can think of are elemental abundance (perhaps in a young galaxy there are far fewer systems with diverse enough chemical compositions) and supernova frequency (perhaps a young galaxy is sterilized by frequent and large supernovas much more often than an older one). But AFAIK both of those variations can be calculated well enough for a Fermi estimate from what we know, so I’d expect someone who knows the subject much better than I do would have made that point already if they were plausible answers.

Comment by bogdanb on Memory is Everything · 2014-08-31T07:15:10.155Z · LW · GW

Your rephrasing essentially says that you torture an identical copy of a person for a week.

If you read it carefully, my first rephrasing actually says that you torture the original person for a week, and then you (almost) perfectly erase their memories of (and the physical changes from) that week.

This is not changing the nature of the thought experiment in the OP; it is exactly the same experiment, plus a hypothetical example of how it could be achieved technically, because you implied that the experiment in the OP is impossible to achieve and thus ill-posed.

Or, at least, that’s how I interpreted “Of course I'm fighting the hypothetical thought experiment. I think the notion of experience without being affected doesn't make any sense.” I just gave an example of how one can experience something and not be affected. It was a somewhat extreme example, but it seems appropriate when Omega is involved.

Comment by bogdanb on Memory is Everything · 2014-08-25T20:29:05.709Z · LW · GW

It seems rather silly to argue about that, when the thought experiment starts with Omega and bets for amounts of a billion dollars. That allows glossing over a lot of details. Your position is like objecting to a physics thought experiment that assumes frictionless surfaces, while the same thought experiment also assumes mass-less objects.

As a simple example: Omega might make a ridiculously precise scan of your entire body, subject you to the experiment (depending on which branch you chose), then restore each molecule to the same position and state it was in during the initial scan, within the precision limits of that scan. Sure, there’ll be quantum uncertainty and such, but there’s no obvious reason why the differences would be greater than, say, the differences that appear while nodding off for a couple of minutes. Omega even has the option of anesthetizing and freezing you during the scan and restoration, to reduce errors. You’d remember that part of the procedure, but you still wouldn’t be affected by what happened in-between.

(If you think about it, that’s very nearly equivalent to applying the conditions of the bet, with extremely high time acceleration, or while you’re suspended, to a very accurate simulation of yourself. The end effect is the same: an instance of you experiences torture/ultra-pampering for a week, and then an instance of you, which doesn’t remember the first part, experiences gaining/losing a billion dollars.)

Comment by bogdanb on Dark Arts of Rationality · 2014-01-28T19:30:28.271Z · LW · GW

perhaps costly, but worth the price

How about extending the metaphor and calling these techniques "Rituals" (they require a sacrifice, and even though it’s not as “permanent” as in HPMOR, it’s usually dangerous), reserving “Dark” for the arguably-immoral stuff?

Comment by bogdanb on Dark Arts of Rationality · 2014-01-28T19:04:37.513Z · LW · GW

The nice thing about hacking instrumental goals into terminal goals is that while they’re still instrumental you can easily change them.

In your case: You have the TG of becoming fit (BF), and you previously decided on the IG of going to the gym (GG). You’re asking about how to turn GG into a TG, which seems hard.

But notice that you picked GG as an instrument towards attaining BF before thinking about Terminal Goal Hacking (TGH), which suggests it’s not optimal for attaining BF via TGH. The better strategy would be to first ask yourself if another IG would work better for the purpose. For example, you might want to try lots of different sports, especially those that you instinctively find cool, or, if you’re lucky, that you’re good at, which means you might actually adopt them as TGs more-or-less without trying.

(This is what happened to me, although in my case it was accidental. I tried bouldering and it stuck, even though no other sport I’ve tried in the previous 30 years did.)

Part of the trick is to find sports (or other I/TG candidates) that are convenient (close to work or home, not requiring more participants than you have easy access to) and fun, to the point that when you get tired you force yourself to continue because you want to play some more, not because of how buff you want to get. In the sport case, try everything, including variations, not just what’s popular or well known; you might be surprised.

(In my case, I don’t much like climbing tall walls: I get tired, bored, and frustrated, and want to give up when they’re too hard. One might expect bouldering to be the same (it’s basically the same thing, except with much shorter but harder walls), but the effect in my case was completely different: if a problem is too hard I get more motivated to figure out how to climb it. The point is not to try bouldering, but to try variations of sports. E.g., don’t just try tennis and give up; try doubles and singles, try squash, try ping-pong, try real tennis, try badminton; one of those might work.)

Comment by bogdanb on The Limits of Intelligence and Me: Domain Expertise · 2013-12-22T08:53:40.288Z · LW · GW

It doesn’t work if you just click the link, but if you copy the link address and paste it in a browser then it works. (Because there isn’t a referrer header anymore.)
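(For the curious, here is a minimal sketch in Java of the difference between the two requests; the URLs and the hotlink-protection rule are made-up assumptions for illustration, not a description of the actual site in question. Clicking a link sends the originating page in the Referer header, pasting the address sends none.)

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RefererDemo {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Hypothetical resource whose server rejects requests arriving from foreign pages.
            URI resource = URI.create("https://example.com/protected/file.pdf");

            // Clicking a link: the browser sends the page you came from as the Referer header,
            // which a hotlink-protection rule may reject.
            HttpRequest clicked = HttpRequest.newBuilder(resource)
                    .header("Referer", "https://other-site.example/some-post")
                    .GET()
                    .build();

            // Pasting the address into the address bar: no Referer header at all,
            // which such rules typically let through.
            HttpRequest pasted = HttpRequest.newBuilder(resource)
                    .GET()
                    .build();

            System.out.println(client.send(clicked, HttpResponse.BodyHandlers.discarding()).statusCode());
            System.out.println(client.send(pasted, HttpResponse.BodyHandlers.discarding()).statusCode());
        }
    }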

Comment by bogdanb on Lotteries & MWI · 2013-11-25T07:17:33.184Z · LW · GW

Medical issues that make life miserable but can be fixed with ~$1M would be a (bit more concrete) example. Relatively rare, as you said.

Comment by bogdanb on Halloween thread - rationalist's horrors. · 2013-11-02T19:08:58.887Z · LW · GW

I have a rare but recurring dream that resembles very much what you describe.

Comment by bogdanb on Rationality Quotes September 2011 · 2013-10-26T20:14:10.449Z · LW · GW

There's no good reason to assume

I agree, but I’m not sure the examples you gave are good reasons to assume the opposite. They’re certainly evidence of intelligence, and there are even signs of something close to self-awareness (some species apparently can recognize themselves in mirrors).

But emotions are a rather different thing, and I’m rather more reluctant to assume them. (Particularly because I’m even less sure about that word than I am about “intelligence”. But it also just occurred to me that, between people, emotions seem much easier to fake than intelligence, which, stated the other way around, means we’re much worse at detecting them.)

Also, the reason I specifically asked about Cephalopods is that they’re pretty close to as far away from humans as they can be and still be animals; they’re so far away we can’t even find fossil evidence of the closest common ancestor. It still had a nervous system, but it was very simple as far as I can tell (flatworm-level), so I think it’s pretty safe to assume that any high level neuronal structures have evolved completely separately between us and cephalopods.

Which is why I’m reluctant to just assume things like emotions, which in my opinion are harder to prove.

On the other hand, this means any similarity we do find between the two kinds of nervous systems (including, if demonstrated, having emotions) would be pretty good evidence that the common feature is likely universal for any brain based on neurons. (Which can be interesting for things like uploading, artificial neuronal networks, and uplifting.)

Comment by bogdanb on A game of angels and devils · 2013-09-27T22:25:40.646Z · LW · GW

Personally I agree, but if I were a devil I’d just fall in love with the kind of double-think you’d need to pull that off. After all, I wouldn’t actually want to suppress faith, I’d just want to create in people’s minds associations between atheism and nice places like Stalinist Russia. Phrases like “scientific socialism” would just send nice little shivers of pleasure down any nice devil’s spine, wouldn’t they?

Comment by bogdanb on A game of angels and devils · 2013-09-27T20:59:22.347Z · LW · GW

Funny how if I were a devil, and I tried to make the world miserable through faith, and I were getting concerned about those dangerous anti-faith ideas, I’d try to create a horrible political regime based on suppressing faith ;)

Comment by bogdanb on The Up-Goer Five Game: Explaining hard ideas with simple words · 2013-09-09T18:47:50.172Z · LW · GW

I see your point (I sometimes get the same feeling), but if you think about it, it’d be much more astonishing if someone built a universal computer before having the idea of a universal computer. It’s not really common to build something much more complex than a hand ax by accident. Natural phenomena are often discovered like that, but machines are usually imagined a long time before we can actually build them.

Comment by bogdanb on Types of recursion · 2013-09-09T18:01:23.070Z · LW · GW

Everyone can remember a phone number because it's three numbers, where they might have problems remembering ten separate digits

This is slightly irrelevant, but for some reason I can’t figure out at all, pretty much all phone numbers I learned (and, incidentally, the first thirty or so decimals of π) I learned digit-by-digit rather than in groups. The only exception was when I moved to France: I learned my French number digit-by-digit (i.e., five-eight instead of fifty-eight) in my native language, but grouped in pairs (i.e., as two-digit numbers) in French. This isn’t a characteristic of my native language either; nobody else in my family does this.

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 26, chapter 97 · 2013-09-09T08:21:46.862Z · LW · GW

You’re right, I remember now.

Hmm, it still sounds like they should be used more often. If you’re falsely accused and about to be condemned to Azkaban, wouldn’t you sacrifice a portion of your magic if it could compel your accuser to confess? As corrupt as the Wizengamot is, it should still happen on occasion.

Comment by bogdanb on September 2013 Media Thread · 2013-09-09T08:14:28.870Z · LW · GW

Yeah, but I think he was mentioned before (and he shows up in most of the guards books). Vetinari is awesome in kind of an obvious way, but he’s not very relevant outside the city. (Well, except for a few treaties with dwarves and the like.)

In contrast, Granny (and sometimes the other witches) arguably saved the entire world several times. There are other characters who do that, but it’s more... luck I guess. The witches actually know what they’re doing, and work hard to achieve their goals.

(For example, though it’s never explicitly said, I got a very strong suspicion that Granny remained a life-long virgin specifically because she expected that it might be useful against unicorns.)

Comment by bogdanb on September 2013 Media Thread · 2013-09-09T07:06:34.097Z · LW · GW

I’ve seen people here repeatedly mention the city watch books, but I’m surprised the witches books are almost never mentioned. Seriously, am I the only one who thought Granny Weatherwax and her team are basically the most useful people on the disc?

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 26, chapter 97 · 2013-09-08T12:46:42.896Z · LW · GW

Perhaps, although “story logic” can imply parents being willing to sacrifice for their children. That’s a problem with thinking of the world in terms of stories: you can find a trope to justify almost anything. Authors always can (and often do) pull deus ex machinas out of their nether regions.

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98 · 2013-09-08T12:37:54.913Z · LW · GW

I wouldn’t be surprised if it did happen, at least once or twice. After all, it happened with the adults too, e.g. Juergen or whatever his name was.

Comment by bogdanb on Yet more "stupid" questions · 2013-09-08T12:11:06.421Z · LW · GW

Well, yes, but the whole point of building AI is that it works for our gain, including deciding what that means and how to balance it between persons. Basically, if you include in “US legal system” all three branches of government, you can look at it as a very slow AI that uses brains as processor elements. Its friendliness is not quite demonstrated, but fortunately it’s not yet quite godlike.

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 26, chapter 97 · 2013-09-02T22:58:35.010Z · LW · GW

A couple more recent thoughts:

  • Dodging Death Eaters (at least competent ones) on a broom is not something I expect to happen in MoR. Well, not unless it’s rocket-powered, and I wouldn’t expect that to work more than once either.

  • Most of the big, non-line-of-sight weapons we (muggles) have arose for the purpose of killing lots of people in big battles (even though we’re using them for other stuff now), which isn’t really useful for wizards due to their low numbers, but:

  • The Interdict of Merlin is MoR-specific, and at the beginning of the Ministry chapters it is specifically mentioned that its purpose was to prevent what appeared to be the wizard equivalent of a nuclear holocaust. So while magic can probably get really bad, you’re probably right that living wizards in MoR don’t know any more extremely destructive non-line-of-sight spells, or at least such spells are very rare. (Though that doesn’t mean that they aren’t much more powerful than handguns. I expect almost every spell thrown in that Quirrell–auror duel would have been above “high-caliber machine gun” in deadliness, were it not for the shields.)

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98 · 2013-09-02T22:34:39.864Z · LW · GW

Well, she certainly was, and plausibly will be. I’m not quite sure about “is”, but that’s mostly because my intuition seems to think that either evaluating age() on a currently-not-living person should throw an IllegalStateException, or comparing its result with that for living persons should throw a ClassCastException. But that’s probably just me :)
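(Spelled out as actual code, that intuition would look something like the sketch below; the Person class and its fields are made up purely for the joke, not taken from anywhere.)

    // Purely illustrative: a made-up class expressing the intuition above.
    class Person {
        private final int yearsLived;
        private final boolean currentlyLiving;

        Person(int yearsLived, boolean currentlyLiving) {
            this.yearsLived = yearsLived;
            this.currentlyLiving = currentlyLiving;
        }

        int age() {
            // Asking a currently-not-living person for their age is an ill-posed query,
            // not a number you can quietly compare with a living person's age().
            if (!currentlyLiving) {
                throw new IllegalStateException("age() is undefined for a currently-not-living person");
            }
            return yearsLived;
        }
    }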

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98 · 2013-09-02T20:19:20.970Z · LW · GW

Well, regardless of whatever other plotting they do among themselves, all participants actually do have a very good reason to join: their kids still go to Hogwarts, they want to keep them safe but at the same time groom them to inherit the family fortunes, and, as was pointed out explicitly in the chapter, there are still good reasons, both political and safety-related, not to go to the other schools. An (at least temporary) alliance for the protection of their children is actually quite logical for all concerned.

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98 · 2013-09-02T20:06:42.713Z · LW · GW

Well, Hermione is (well, was) slightly older than Harry, and she seemed to have entered the romantic stage already. A couple years to let Harry catch up might not be such a bad thing.

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98 · 2013-09-02T20:03:28.143Z · LW · GW

I agree with your analysis, but I also thought this was intended as a straightforward signal to the other students that “we have to fight for ourselves” is not just the usual adult “lording over” the kids. I think it was meant to reinforce solidarity, defuse instinctive teenage rebellion against “the adults’ rules”, and also reinforce the message that the professors are no longer to be trusted to handle things.

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 26, chapter 97 · 2013-08-28T22:42:25.668Z · LW · GW

I think the rapid expansion when the transfiguration ends would be enough to set it off.

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 26, chapter 97 · 2013-08-28T22:13:49.938Z · LW · GW

Also: Metallic sodium and potassium, as well as phosphorus, are quite reactive with human tissue. For bonus points you can make a mixed projectile with two transfigurations, e.g. a core transfigured from ice surrounded by a shell transfigured from sodium, which will explode once the two transfigurations end.

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 26, chapter 97 · 2013-08-28T22:06:15.499Z · LW · GW

To be fair, it ate her legs, not just her feet.

To be even fairer, that might be just because the legs were bite-sized, and polite trolls are taught by their mothers not to nibble their food.

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 26, chapter 97 · 2013-08-28T22:03:55.771Z · LW · GW

only person Dumbledore knows and has access to that really matters to Harry

Well, he could have killed Harry’s parents. It might not trigger Harry’s “kill death by any means necessary” reaction, but then I don't think anyone would have anticipated that in-universe, given that even Q was surprised by the prophecy.

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 26, chapter 97 · 2013-08-28T21:44:06.371Z · LW · GW

but of higher status than craftsmen and peasants.

I don’t think that was intrinsic to being a merchant, just a consequence of (some of them) being richer.

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 26, chapter 97 · 2013-08-28T21:37:45.336Z · LW · GW

With regards to (2), I think you’re confusing first-year war games with actual combat magic.

Actual “I really want to kill you” spells are probably much more powerful. Fiendfyre for example has at least the destructive potential of a tank, and in canon even Goyle could cast it. (It’s hard to control, but then again so is a tank.) Avada Kedavra can probably kill you even through a nuclear bunker wall, and it can be used by at least some teenagers. Sectumsempra is probably an instant kill against a muggle, even with body armor, and it was invented by Snape while he was still a student.

By contrast, pretty much the most powerful potential weapon normal people (well, outside the US at least) have ready access to is a car, and only a very tiny fraction of people can easily make something much more destructive than a crude bomb. Also, due to the effects of magic on electronics, pretty much everything other than kinetic impactors would be fried by any kind of spell that manages to connect.

We’re never shown really bad stuff, and during a discussion in MoR it’s mentioned that thermonuclear weapons are only a bit worse than most really bad spells, and that Atlantis was erased from time.

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 26, chapter 97 · 2013-08-28T21:14:14.935Z · LW · GW

ask whether wizard shields actually do prevent inert lumps of lead from hitting their caster

Almost certainly they do. Minerva mentions that guns aren’t a big threat to a prepared witch, and even if you assume she’s not really knowledgeable, I’m pretty sure someone would have tried throwing (with magic) hard, heavy things at their opponent during life-and-death fights. Or at least using bows and arrows.

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 26, chapter 97 · 2013-08-28T20:43:29.461Z · LW · GW

In the end, at the highest level, their life is a story

I wouldn’t put it past Eliezer to find a way of having Harry be “the End of the World” literally by just ending the story somehow. But I can’t think of any explanation in that vein for destroying the stars, other than maybe breaking the ceiling in the Hogwarts hall, which doesn’t fit. And style-wise it doesn’t feel right.

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 26, chapter 97 · 2013-08-28T20:36:23.803Z · LW · GW

and Draco can attest to these under Veritaserum!

Technically speaking, Draco can only attest that Harry claimed those things. (Harry’s an Occlumens, and the way Occlumency works in MoR implies that an Occlumens is very good at lying. So he can plausibly claim that he lied to his enemies.)

I don’t remember: does Eliezer allow Unbreakable Vows, or are those nerfed in MoR like Felix Felicis? Because I’m pretty sure even an Occlumens who vows to tell the truth can’t lie without suffering the penalty.

Comment by bogdanb on Harry Potter and the Methods of Rationality discussion thread, part 26, chapter 97 · 2013-08-28T20:26:43.986Z · LW · GW

I think the orbs only come to people (things that think, and can make decisions), and it’s not clear Dementors pass that test. (In particular, Harry leans against that hypothesis. He’s certainly not infallible, but he’s basically the best expert on the subject whose thoughts we have access to.)

Otherwise prophecies mentioning things like life, wands and clothes would attack everyone.