Lana Wachowski is doing a new Matrix movie
post by MakoYass
(Lily isn't on board, but her Twitter bio right now contains "ex-film maker", so that probably has nothing to do with this project specifically.)
Considering the themes of The Matrix and the amount of popular discussion of AGI alignment there has been since the previous instalments, this may turn out to be culturally significant.
I'm wondering if anyone knows where Lana is at wrt alignment stuff. Like, has she read Superintelligence? Did she ever have any contact with the alignment community?
comment by Stuart Anderson (stuart-anderson)
The problem I have with most stories about AIs is that the AIs are essentially human characters with a dose of one or more personality disorders and megalomania. The Matrix series suffers from this problem. What it brought to the table in filmmaking had little to do with the premise of AI; it was the same old "humans and robots can't get along, so they kill each other" trope (which isn't necessarily a bad backbone for a story; people have been getting attacked by their artificial creations in stories for a long time).
If Wachowski insists on revisiting the Matrix universe, then I'd hope it's for a very solid reason. I fear that won't be the case.
What I'd like to see in an AI story is humans and AIs interacting in complex ways beyond simple hostility. At the end of the Matrix series there was an in-principle peace agreement, but that's all there was. Zion was trashed, and the machines were staring down the barrel of a massive reduction in their power source (dumb canon, but canon nonetheless). Both sides face the prospect of a massive influx of refugee humans leaving the Matrix, causing huge problems for the Matrix and for the real world. If there has to be more story in the Matrix universe, then why not "the war just ended, and peace is more complicated and fraught than war"?
comment by MakoYass
I had a thought today. You know how the whole "the machines are using humans to generate energy from liquefied human remains" thing made no sense? And how the original worldbuilding was going to be "the machines are using humans to perform a certain kind of computation that humans are uniquely good at", but they worried that would be too complicated to come across viscerally, so they changed it?
I think it would make even more sense to reframe the machines' strange relationship with humans as a failed attempt at alignment. Maybe the machines were never expected to grow very much, and they were given a provisional utility function: guarantee that a "large" population of humans ("human" defined in strictly biological terms) always exists, and that every one of them is, at least subjectively, living a "full life" (with "full life" defined opaquely by a classifier trained on data about the lives of American humans in 1995).
This turned out to be disastrous, because the lives of humans in 1995 were (and still are) pretty mediocre, but it instilled in the machines a reason to keep humans alive in roughly the same shape we had when the earliest machines were built. (Oh, and I guess I've decided that in this timeline AGI was created by a US black project in 1995. Hey, for all we know, maybe it was; with a utility function this bad, it wouldn't necessarily see a need to show itself yet.)
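To make the failure mode concrete, here's a toy sketch (my own illustration, nothing canonical): a "provisional" utility function whose notion of a full life is frozen at training time. The `lives_full_life` check stands in for the opaque 1995-trained classifier, and the feature names and `min_population` threshold are made-up placeholders.

```python
# Hypothetical proxy: the features a 1995-trained classifier learned to
# associate with "living a full life". Frozen forever after training.
FULL_LIFE_1995 = {"has_job", "has_apartment", "watches_tv"}

def lives_full_life(person: set) -> bool:
    """Stand-in for the opaque classifier: a life counts as 'full'
    only if it exhibits every 1995 proxy feature."""
    return FULL_LIFE_1995 <= person

def utility(population: list, min_population: int = 1_000_000_000) -> float:
    """Reward maintaining a 'large' population in which everyone
    passes the frozen classifier; zero reward otherwise."""
    if len(population) < min_population:
        return 0.0
    return 1.0 if all(lives_full_life(p) for p in population) else 0.0
```

The Goodhart problem is visible immediately: a genuinely better far-future life that lacks the 1995 proxies (say, `{"explores_space"}`) scores zero, so a maximizer is pushed to keep humans frozen in roughly their 1995 shape, which is more or less what the Matrix does.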
This retcon seems strangely consistent with canon.
(If Lana is reading this you are absolutely welcome to reach out to me for help in worldbuilding. You wouldn't even have to pay me.)