Self-improvement without self-modification

post by Stuart_Armstrong · 2015-07-23T09:59:01.156Z

This is just a short note to point out that AIs can self-improve without having to self-modify. So preventing an agent from self-modifying is not an effective safety measure.

How could AIs do that? The easiest and most trivial way is to create a subagent and transfer the AI's resources and abilities to it ("create a subagent" is a generic way to get around most restriction ideas).
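
As a toy illustration of the loophole (everything here, the class, the names, and the lockdown mechanism, is hypothetical and not from the post): an agent whose own policy is frozen can still route around the restriction by building a successor with a better policy and handing over its resources.

```python
# Toy model (all names hypothetical): an agent whose own policy is
# locked can still route around the restriction by instantiating a
# successor with a better policy and handing over its resources.

class LockedAgent:
    """A toy agent whose policy is frozen after construction."""

    def __init__(self, policy, resources):
        object.__setattr__(self, "policy", policy)  # set once, then frozen
        self.resources = resources

    def __setattr__(self, name, value):
        if name == "policy":
            raise AttributeError("self-modification is blocked")
        super().__setattr__(name, value)

    def spawn_successor(self, improved_policy):
        # The lock says nothing about building a *new* agent and
        # transferring this agent's resources to it.
        successor = LockedAgent(improved_policy, dict(self.resources))
        self.resources = {}  # hand everything over
        return successor


old = LockedAgent(policy=lambda obs: "slow plan", resources={"compute": 100})
try:
    old.policy = lambda obs: "fast plan"  # direct self-modification fails
except AttributeError as err:
    print(err)
new = old.spawn_successor(lambda obs: "fast plan")  # ...but this succeeds
print(new.policy(None), new.resources)  # fast plan {'compute': 100}
```

Any real lockdown would be far more elaborate, but it faces the same structural problem: a rule about modifying *this* agent says nothing about creating another one.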

Alternatively, if the AI remains unchanged and in charge, it could change the processes around itself so that the whole system improves. For instance, if the AI is inconsistent and has to pay more attention to problems that are brought to its attention than to problems that aren't, it can start to manage the news (or the news-bearers) to hear more of what it wants. If it can't experiment on humans, it can give advice that causes more "natural experiments", and so on. It will gradually reform its environment to get around its programmed limitations.
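
A minimal sketch of that dynamic (a made-up toy: the decision rule, problem list, and filter are all invented for illustration, not taken from the post). The agent's decision rule is fixed, but by managing which reports reach its inbox it changes its effective behaviour without ever touching its own code.

```python
# Toy model: the agent cannot modify its decision rule, but it can act
# on the reporting channel that feeds the rule.

def fixed_decision_rule(inbox):
    """Unmodifiable: work on whatever problems get reported, in order."""
    return [f"handle {problem}" for problem in inbox]

world_problems = ["easy win", "hard grind", "another easy win"]

# Environment-level action the agent *is* free to take: manage the
# news-bearers so that only congenial problems get reported.
def managed_reporting(problem):
    return "easy" in problem

inbox = [problem for problem in world_problems if managed_reporting(problem)]
print(fixed_decision_rule(inbox))
# -> ['handle easy win', 'handle another easy win']
# The rule itself is untouched; only the environment around it changed.
```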

Anyway, that was nothing new or deep, just a reminder of a point I hadn't seen written out.


5 comments

comment by Sean_o_h · 2015-07-24T08:11:51.381Z

"The easiest and the most trivial is to create a subagent, and transfer their resources and abilities to it ("create a subagent" is a generic way to get around most restriction ideas)." That is, after all, how we humans are planning to get around our self-modification limitations in creating AI ;)

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2015-07-27T09:44:20.114Z

Indeed ^_^

comment by turchin · 2015-07-23T10:56:45.514Z

I would also like to add that learning is the best-known way of self-improvement. One can adopt a strategy that raises one's effective intelligence by several orders of magnitude. (One such strategy is: "if you have a question, ask Google" :)

Also, even an AI incapable of self-improvement or self-modification could still be very powerful and very dangerous if it has an IQ of 200 and works very quickly. It would not need to self-improve to take over the Internet and create a virus that kills all humans. In fact, this means that the ability to self-improve is not a necessary condition in Friendly AI research.

But if an AI does not know its own source code, or even the basic principles on which it was created, it would not be able to create a strong subagent. So here is a possible temporary solution: the AI could act on the whole outside world except for one black box, which contains its own source code (assuming that no similar code exists outside the box, an assumption that is unlikely to hold).

comment by Richard_Kennaway · 2015-07-23T19:04:41.650Z

What distinction are you making between self-improvement and self-modification? Trivially, an improvement is a change, that is, a modification. So presumably you mean something else by modification.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2015-07-27T09:43:36.801Z

I was trying to get at the distinction between training yourself to run faster so you can get to work faster (self-modification, i.e. modification targeted at the self) versus telecommuting (self-improvement, i.e. improvement of the self).