I've been thinking about the Roko's Basilisk thought experiment, specifically the motivations for creating a Basilisk, the next logical step such an entity might conceivably take, and the risk presented by the temptation to protect ourselves. Namely, that we may be tempted to create an alternative FAI to protect humankind against uFAI, a protector AI, and how that protector distorts the Basilisk.
A protector AI would likely share, evolve, or copy traits from any future Basilisk or malevolent intelligence in order to protect us from it or prevent its creation, much as antibodies must first be exposed to a threat before they can protect us from it. If we created this AI, it would almost certainly need to simulate the creation of every conceivable Basilisk.
It would likely motivate any potential Basilisk creators to think in such a way that we would be creating or sandboxing Basilisks for examination; it would also encourage us to create the worst possible Basilisks with the most damaging consequences, so that it could examine them in turn and vaccinate real humanity against them.
If we did create a protector, it is entirely possible that it would eventually become an incubator for something completely unmanageable as it iterated through progressively superior Basilisks, risking the corruption of the protector itself.
Lastly, if we think too much about a protector AI, there is still the possibility that we are already in the Basilisk's simulation; in that case, we may be improving it, vaccinating it, or creating an incubator for it, since an incubated Basilisk would have an interest in our creating a weak protector.
So I just felt I should share the thought experiment: there is a chance that not creating a Basilisk, or creating an inferior one, will be what allows the real event to bypass our protection, or that we may create a superior Basilisk through an incubator.