How easy/fast is it for an AGI to hack computers/a human brain?

post by Noosphere89 (sharmake-farah) · 2022-06-21T00:34:34.590Z · LW · GW · No comments

This is a question post.


If an AGI had to hack a computer or a human brain, how easily and how quickly could it make the target do what the AI wants?

Answers

answer by JBlack · 2022-06-21T05:05:31.233Z · LW(p) · GW(p)

I don't really have an answer, just an enormous amount of "it depends on what the scenario is", with a big dollop of "we don't know what we don't know" on top.

We do know that hacking computers is definitely not very hard as a general rule. Sometimes people can do it by accident. Hacking brains is also observationally not very difficult, in the sense that there are quite a few people who can successfully tell other humans to do things even when the things being done are detrimental to those humans.

There are of course limits to both, but also the possibility of greater-than-human capabilities for both.

In the limit, we can be pretty sure that a sufficiently superintelligent AI could devise microscopic machines/bacteria/whatever that could make their way into a human brain and change which neurons fire, among other effects. It seems almost certain that enough of this would enable complete control over the person, including their thoughts and feelings. The same sort of thing could be used to hijack memory buses and so on in computer hardware.

That's a pretty scary scenario, but it does depend upon weasel phrases like "sufficiently superintelligent", and also on open questions about whether these things could be prevented, or at least detected, during manufacture and deployment.

There are even more speculative possibilities such as crafted images, sounds, or other sensory inputs that can bypass or take over various brain functions. We don't know, and may never know. It doesn't seem like the sort of thing that could be deduced from current human knowledge or first principles, so if this sort of thing can exist then it would likely require substantial experimentation by an AI before it could be designed.

For computers, these sorts of hacks are routine. Many software and sometimes hardware flaws allow data inputs to execute arbitrary programs on the computer. There are also plenty of ways to stealthily gather information about whether a given system has any of the flaws you know about. It seems likely that nearly every general-purpose computer in use has thousands of such flaws that we have not yet discovered and fixed, and new software with new flaws is coming out daily. An AI that can reason about computer code faster and better than humans could in principle discover such flaws far more quickly than any human defender could respond.
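To make the "data inputs execute arbitrary programs" point concrete, here is a minimal C sketch of the most classic flaw in this class: an unchecked copy of attacker-controlled input into a fixed-size stack buffer. The function name and buffer size are hypothetical, chosen only for illustration; this shows the shape of the bug, not any specific real-world vulnerability or exploit.

```c
/* Toy illustration of a stack buffer overflow: attacker-controlled
 * data overflowing a fixed-size buffer. Compiled without stack
 * protection, an input longer than 64 bytes overwrites adjacent
 * stack memory, including the saved return address -- which is how
 * "data inputs" become "arbitrary programs". Illustrative only. */
#include <stdio.h>
#include <string.h>

static void parse_request(const char *input) {
    char buf[64];
    /* BUG: no bounds check -- strcpy copies until the NUL terminator,
     * however long the input is. A fix is an explicit length check,
     * or using snprintf/strncpy with the buffer size. */
    strcpy(buf, input);
    printf("parsed: %s\n", buf);
}

int main(int argc, char **argv) {
    if (argc > 1) {
        parse_request(argv[1]);
    }
    return 0;
}
```

Real exploitation takes more steps than this (finding the right offset, working around mitigations such as stack canaries and ASLR), but the entry point is typically a small oversight of exactly this kind, which is why automated discovery at machine speed is the worrying part.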

But again, we can't really know. There are too many possible scenarios. However, we should expect that an AI that thinks faster and better than us, and that attempts to hack us or our computers, will find ways to do it that we not only haven't thought of, but that we literally cannot imagine.

No comments
