Major spoilers for Madoka Magica, a show where spoilers matter!
Kyubey Shuts Up And Multiplies
Meet Kyubey. Kyubey is a Longtermist.
In the Madokaverse, changes in human emotion somehow produce net energy out of nothing. The Incubators (of which Kyubey is one, pictured above) are an alien species who've discovered a way to farm human emotions for energy.
Most of the Incubators don't feel emotion, and the few that do are considered to be mentally ill. But humans are constantly leaking our juicy, negentropy-positive feelings all over the place. With human angst as a power source, it's possible to prevent the heat death of the universe!
Do the math, people. The suffering of a few teenage girls is nothing compared to pushing back the heat death of the universe.
And this isn't just some Omelas situation where the girls get nothing out of it. They get wishes! Who could object to a cause this noble?
Homura has Something to Protect
If you want to see Homura kicking ass, you could watch up to 2:22 before reading on.
There's something subtle here—something to notice confusion about, even—where is she getting all these guns from?
This is hauntingly sobering when you consider that Homura's magical ability has nothing to do with guns, only with time manipulation. That means all those tens or hundreds of thousands of pounds of explosives and weaponry weren't conjured from nothing the way Mami's guns were; they were individually tracked down and gathered, one after another, by one little girl.
How many hundreds of repetitions did it take to find them all, each time creating a new doomed timeline? How many thousands of hours did she spend looking for where to get them, and how many failed attempts at finding the most effective way to arrange them?
When Kyubey creates a magical girl, he offers them an atomic contract: they gain a sparkly transformation and fight witches for the rest of their life, and in exchange, they're granted a wish.
There's a minor risk here: Kyubey can't actually stop this process; the wish will be granted whether he likes it or not.
(I sure hope the mesa-objective pursued by the human girls is the same as the outer objective (negentropy) pursued by Kyubey. There's no possible way this could go wrong.)
The Incubators were reckless. I'm glad humans would never apply large amounts of optimization power without guarantees for how it's aimed.
Hopefully you've already seen the anime (otherwise, sorry for all the spoilers you just read!) but if you haven't, go watch it now. It's great, and incidentally chock-full of fables like these. (For a bonus fable on Kyoko and the complexity of wishes, see Ep7, 8:05 - 12:14.)
If you have already seen the anime and want to read something with similar themes, I would recommend Qualia The Purple.
Though it isn't spelled out in the show, humans appear to be the only species with feelings. So depending on whether you're a positive or negative utilitarian, a universe full of emotionless beings may or may not be a compelling vision for you.
Rereading a bit of Hieronym's PMMM fanfic "To The Stars" and noticing how much my picture of dath ilan's attempt at competent government was influenced/inspired by Governance there, including the word itself.
To the Stars is an interesting universe in which AI alignment was solved (or, perhaps, made possible at all) via magical girl wish! Quoting (not really a spoiler since this is centuries in the past of the main story):
It'd be nice if, like Kekulé, I could claim to have some neat story, about a dream and some snake eating itself, but mine was more prosaic than that.
I had heard about the Pretoria Scandal, of course, on the day the news broke. To me, it was profoundly disturbing, enough that I ended up lying awake the whole night thinking about it.
It was an embarrassment and a shame that we had been building these intelligences, putting them in control of our machines, with no way to make sure that they would be friendly. It got people killed, and that machine, to its dying day, could never be made to understand what it had done wrong. Oh, it understood that we would disapprove, of course, but it never understood why.
As roboticists, as computer scientists, we had to do better. They had movies, back then, about an AI going rogue and slaughtering millions, and we couldn't guarantee it wouldn't happen. We couldn't. We were just tinkerers, following recipes that had magically worked before, with no understanding of why, or even how to improve the abysmal success rate.
I called a lab meeting the next day, but of course sitting around talking about it one more time didn't help at all. People had been working on the problem for centuries, and one lab discussion wasn't going to perform miracles.
That night, I stayed in late, poring over the datasets with Laplace, [the lab AI,] all those countless AI memory dumps and activity traces, trying to find a pattern: something, anything, so that at least we could understand what made them tick.
Maybe it was the ten or something cups of coffee; I don't know. It was like out of a fairy tale, you know? The very day after Pretoria, no one else in the lab, just me and Laplace talking, and a giant beaker of coffee, and all at once, I saw it. Laplace thought I was going crazy, I was ranting so much. It was so simple!¹
Except it wasn't, of course. It was another year of hard work, slogging through it, trying to explain it properly, make sure we saw all the angles…
And I feel I must say here that it is an absolute travesty that the ACM does not recognize sentient machines as possible award recipients.² Laplace deserves that award as much as I do. It was the one that dug through and analyzed everything, and talked me through what I needed to know, did all the hard grunt work, churning away through the night for years and years. I mean, come on, it's the Turing Award!
¹ The MSY has confirmed that the timing of this insight corresponds strongly with a wish made on the same day. The contractee has requested that she remain anonymous.
² The ACM removed this restriction in 2148.
— Interview with Vladimir Volokhov, Turing Award Recipient, 2146.
(The actual content of the alignment solution is elsewhere described to be something like a chain of AIs designing AIs via a mathematically-provable error-correcting framework, continuing until the output stabilized—for what it's worth.)
Unpopular opinion: I like the ending of the subsequent film.
IMO it's a natural continuation for Homura. After spending decades of subjective time trying to save someone, would you really let them go like that? Homura isn't an altruist; she doesn't care about the lifetime of the universe. She just wants Madoka.