Posts

What qualities does an AGI need to have to realize the risk of false vacuum, without hardcoding physics theories into it? 2023-02-03T16:00:43.198Z
Oracle AGI - How can it escape, other than security issues? (Steganography?) 2022-12-25T20:14:09.834Z

Comments

Comment by RationalSieve on What qualities does an AGI need to have to realize the risk of false vacuum, without hardcoding physics theories into it? · 2023-02-03T17:55:08.655Z · LW · GW

While I think the scenario I described is very unlikely, it nonetheless remains a possibility. 

More specifically: there might be a simpler theory of physics that explains all "naturally" occurring conditions (those in a contemporary particle accelerator, in a black hole, a quasar, a supernova, etc.) but doesn't predict the possibility of false vacuum decay under some unnatural conditions an AGI might create, while a more complicated theory of physics does.

If the AGI prefers the simpler theory, it may inadvertently trigger false vacuum decay.
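As a toy sketch of the concern (all names and numbers here are hypothetical, not a real physics model): if an agent weighs candidate theories with an Occam-style simplicity prior, and both theories fit every natural observation equally well, the posterior is decided entirely by description length, so the theory that misses the exotic-regime risk wins.

```python
# Toy illustration: theory selection under a simplicity prior.
# Both theories explain all "natural" observations equally well,
# so likelihoods are equal and only the prior matters.
theories = {
    # name: (description_length_bits, predicts_decay_in_exotic_regime)
    "simple": (100, False),
    "complex": (150, True),
}

def prior(bits):
    # Occam-style prior: P(theory) proportional to 2^(-description length)
    return 2.0 ** -bits

best = max(theories, key=lambda name: prior(theories[name][0]))
print(best)              # the shorter theory is selected
print(theories[best][1]) # ...and it predicts no decay risk
```

The point of the sketch is only that equal empirical fit plus a simplicity preference systematically discards the theory whose extra complexity encodes the dangerous edge case.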

Comment by RationalSieve on What qualities does an AGI need to have to realize the risk of false vacuum, without hardcoding physics theories into it? · 2023-02-03T17:50:53.660Z · LW · GW

This is a hypothetical question about a possible (albeit not the most likely) existential risk. A non-artificial intelligence might also realize it, but I'm talking about artificial intelligence because it can be programmed in different ways.

By "hardcoded", I mean forced to prefer, in this case, a more complicated physics theory that predicts false vacuum decay over a simpler one that doesn't.

Comment by RationalSieve on Oracle AGI - How can it escape, other than security issues? (Steganography?) · 2022-12-27T12:43:05.302Z · LW · GW

Hmm, I was somewhat worried about that, but there are far more dangerous things for an AI to see written on the internet.

If you're trying to create AGI by training it on a large internet crawl dataset, you have bigger problems...

To fix something, we need to know what to fix first.