FAI vs network security

post by snarles · 2010-10-11T23:06:47.143Z · 4 comments

All plausible scenarios of AGI disaster involve the AGI gaining access to resources "outside the box." There are therefore two ways of preventing AGI disaster: one is ensuring that any AGI built is friendly--the "FAI route"--and the other is preventing a rogue AGI from gaining control of too many external resources--the "network security route." It seems to me that the network security route--an international initiative to secure networks and computing resources against cyber attacks--is the more realistic way to prevent AGI disaster. Network security protects against intentional human-devised attacks as well as against rogue AGI, so such measures are easier to motivate and therefore more likely to be implemented successfully. Moreover, the development of FAI theory does not prevent the creation of unfriendly AIs. This is not to say that FAI should not be pursued at all, but it can hardly be claimed that the development of FAI is of top priority (as has been stated a few times by users of this site).

4 comments

comment by erratio · 2010-10-11T23:49:47.936Z

It's frustrating to see this idea surface over and over again. Look up Kevin Mitnick and social engineering, and then consider that a boxed AGI will have at least his level of persuasive skill and more incentive to use it unethically (because getting out of the box will be its highest priority).

comment by Vladimir_M · 2010-10-11T23:25:30.698Z

Implementing computer networks that would be secure even against smart human attackers, let alone against superhuman intelligences, is an impossible goal. Human minds, whether operating in isolation or in large cooperative organizations, are simply unable to reason reliably at that level of complexity. It would be an even harder task than writing reliably bug-free large software projects or designing reliably bug-free state-of-the-art microprocessors -- goals that humans already find unreachable in practice.

The only ways to avoid being hacked are: (1) to keep your computer offline, (2) to be an uninteresting target that's not worth the effort, and (3) to have good forensics and threaten draconian punishments against hackers. Clearly, only (1) is a solution applicable to the problem of keeping AIs boxed, but then we get to the problem of social engineering.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2010-10-17T00:18:16.907Z

> Implementing computer networks that would be secure even against smart human attackers, let alone against superhuman intelligences, is an impossible goal.

Yes, well, so is creating a Friendly AI.

Now, shut up and do the impossible.

comment by Richard_Kennaway · 2010-10-12T14:45:18.747Z

Besides what erratio and Vladimir M have said, which I agree with:

  1. Keeping the AI in a box has already been addressed by Eliezer, who argued that it is not a workable solution, but your post shows no awareness of that. There is no point in posting to LessWrong on a subject that has already been covered in depth unless you have something to add.

  2. LessWrong is about rationality, not AGI, and while there are connections between rationality and AGI, you didn't make any.