Posts

Comments

Comment by Christopher “Chris” Upshaw (christopher-chris-upshaw) on Contra EY: Can AGI destroy us without trial & error? · 2022-06-14T21:02:00.050Z · LW · GW

"Either they’re perfectly doable by humans in the present, with no AGI help necessary."

So your argument for why this is a relevant statement is that AI isn't adding danger? That seems to me to be a really odd standard for "perfectly doable"... the actual number of humans who could do those things is not huge, and humans don't usually want to.

Like, either ending the world is easy for humans, in which case AI is dangerous because it will want to, or it's hard for humans, in which case AI is dangerous because it will do it better.

I don't think that works to dismiss that category of risk.

Comment by Christopher “Chris” Upshaw (christopher-chris-upshaw) on AGI Ruin: A List of Lethalities · 2022-06-13T18:09:48.368Z · LW · GW

So what should I do with this information? What option is there for me other than "nod along and go on living my life"?

Comment by Christopher “Chris” Upshaw (christopher-chris-upshaw) on Underappreciated points about utility functions (of both sorts) · 2020-01-05T07:27:03.942Z · LW · GW

I don't believe that infinite gambles are a thing. In fact, they feel almost self-evidently an approximation at best.