Comments

Comment by Jack M on The Main Sources of AI Risk? · 2019-06-11T19:04:22.637Z · LW · GW

I think there are variables we cannot grasp once AI reaches the point of self-teaching. It is folly to assume that humans can control a (theoretically) unbounded intelligence explosion, and using that as a starting assumption is a poor foundation.

I know you allude to this in 1, 8, and 9. However, those points still presume that such variables could be controlled, or at least worked around. As an intelligence explosion unfolds, unforeseen variables multiply, and humans as a species still don't have a perfect solution for all of them, especially since correct data isn't always the answer.

For instance, in the Federal Reserve example, the AI would already be working from a model that is flawed by the standard of the US Constitution: Congress is supposed to control the money supply, not a private entity. As the AI learns this, it becomes aware that it must work within a system that is corrupt from the perspective of the citizens who believe it is sound. Can we account for a situation where the AI discovers, before we do, ways to exploit systems that humans agree on but that are not sustainable? Can it account for the societal lies and tropes we tell ourselves?

What would it mean to try to control for all the disvalue variables when the AI must act within a disvalue model? What does the AI learn then, and how can we ask a superintelligence to perpetuate a system it already knows will fail, even if not within our lifetimes? Does it try to gain the upper hand in the corruption as a way to fix it, or does it keep the system going a little longer than necessary because humans believe it is the right course?

Think about how many situations like this could arise, involving new variables that humans didn't even know existed until something like Moore's Law (a kind of intelligence time travel) was applied to the problem.

Comment by Jack M on The Main Sources of AI Risk? · 2019-06-11T11:56:28.013Z · LW · GW

I had the realization that variables could arise that wouldn't exist without a superintelligence cracking them open. It's an interesting mental exercise to consider problems that could appear or change drastically once intelligence and evolutionary time are removed as barriers, especially when whole new non-human variables are discovered within the problem.