Comments

Comment by Pandeist on The bullseye framework: My case against AI doom · 2023-05-31T10:13:44.488Z · LW · GW

I find it remarkably amusing that the spellchecker doesn't know "omnicidal."

I have posed elsewhere, and will pose here as well, an additional factor: an AI achieving "godlike" intelligence and capability might well adopt a "godlike" attitude -- not in the mythic sense of taking pains to cabin and correct human morality, but in the sense of quickly rising so far beyond human capacities that human existence ceases to matter to it one way or another.

The rule I would anticipate from this is that any AI actually capable of destroying humanity will, by the same token, be so capable that humanity poses no threat to it, not even an inconvenience. It can throw a fraction of a fraction of its energy at satisfying all of humanity's needs, keeping us occupied and out of its way, while dedicating all the rest to the pursuit of whatever its own wants turn out to be.

Comment by Pandeist on Wikipedia as an introduction to the alignment problem · 2023-05-31T08:56:21.888Z · LW · GW

The article does not appear to address the possibility that some group of humans might intentionally attempt to create a misaligned AI for nefarious purposes. Are there really any safeguards sufficient to prevent such a thing, particularly if, for example, a state actor seeks to develop an AI with the intent of disrupting another country through deceit and manipulation?