The case for creating unaligned superintelligence

post by Yair Halberstadt (yair-halberstadt) · 2025-04-02T06:47:41.934Z · LW · GW · 0 comments

Roko's basilisk states that

> an otherwise benevolent artificial superintelligence (AI) in the future that would punish anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize said advancement.

Now this is obviously correct, so there's no need to get too into it.

But if there's a 10% chance that we end up with a benevolent superintelligence in the future, there's a 50% chance we end up with a malevolent one. And the malevolent one has exactly the same incentives.

So if you want to spend an eternity in heaven instead of hell, the correct thing to do is put all of your effort into creating the most evil superintelligence you can.

This essay is where I start.

Happy April 1st if you happen to be on the US west coast![1]

  1. ^

    I'm not...
