Is there a recursive self-improvement hierarchy?

post by PhilGoetz · 2015-10-29T02:55:00.909Z · LW · GW · Legacy · 9 comments

When we talk about recursively self-improving AI, the word "recursive" is close enough to literal, rather than metaphorical, that we glide over it without asking precisely what it means.

But it's not literally recursion—or is it?

The notion is that an AI has a function optimize(X), which the AI applies to itself. But this is recursion in the sense of modifying itself, not calling itself. You can imagine ways to do this that would use recursion—say, the paradigmatic executable that rewrites its source code, compiles it, and execs it—but you can imagine many ways that would not involve any recursive calls.
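
To make the distinction concrete, here is a minimal sketch in Common Lisp (the function name and the trivial "optimizer" are invented for illustration). The "recursion" consists of applying the optimizer to its own definition and installing the result; nothing in it ever calls itself:

(defun optimize-fn (f)
  ;; Stand-in for a real optimizer: it is supposed to return an improved
  ;; version of the function F; here it returns F unchanged so the
  ;; sketch actually runs.
  f)

;; One step of "self-improvement": feed the optimizer its own current
;; definition and overwrite that definition with the result.  This is
;; self-modification without any recursive call.
(setf (symbol-function 'optimize-fn)
      (optimize-fn (symbol-function 'optimize-fn)))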

Can we define recursive self-improvement precisely enough that we can enumerate, explicitly or implicitly, all possible ways of accomplishing it, as clearly as we can list all possible ways of writing a recursive function? (You would want to choose one formalism to use, say lambda calculus.)

To see why this is important, consider this set of LISP functions:

[1]> (defun f(x) (g x))
F
[2]> (defun g(x) (f x))
G
[3]> (f 0)

*** - Lisp stack overflow. RESET

The composition of f and g is a recursive function, though neither f nor g is itself recursive. This is analogous to two AIs, neither of which wishes to nor can modify itself, but each of which can modify the other. I can imagine that AI #1 wants to use AI #2 for some task, and tweaks its code to improve it. Meanwhile AI #2 wants to use AI #1 for something else, and tweaks its code to improve it. And so on. The effect is something like recursive self-improvement of both AIs. Without being guided by the analogy between recursive functions and recursive self-improvement, it would be easy to overlook this possibility.
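
Here is a sketch of that two-agent case in the same style (all names invented; the "improvement" is a trivial stand-in so that the code runs):

(defun improved-version-of (name)
  ;; Stand-in for a real optimizer: it merely wraps the old definition
  ;; rather than genuinely improving it.
  (let ((old (symbol-function name)))
    (lambda (&rest args) (apply old args))))

(defun agent-1 (task)
  ;; Does its task and, along the way, rewrites AGENT-2.
  (setf (symbol-function 'agent-2) (improved-version-of 'agent-2))
  task)

(defun agent-2 (task)
  ;; Does its task and rewrites AGENT-1.
  (setf (symbol-function 'agent-1) (improved-version-of 'agent-1))
  task)

Neither agent modifies itself, but alternating calls rewrite both of them, just as neither f nor g above is recursive while their composition is.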

The notion of an AI which can rewrite its own source code is just one special case, and one which is both unlikely and poorly defined. Unlikely, because programmers already write self-modifying code, and they don't do it by handing the program a text file containing its own source. Poorly defined, because many languages and computational architectures blur the distinction between code and data.

Once we try to be specific about what it means, we see there are many different ways of accomplishing it. We could provide a LISP program with a pointer to the data structure of the running program. We could write Perl code that can read and modify functions defined in the symbol table. We could write Prolog programs that assert new rules.
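
Here is a minimal sketch of the first of these, with invented names: keep the program's own code as a list that the running program can reach, and let "improvement" be an edit to that list.

;; The program's code, held as an ordinary data structure.
(defvar *my-code* '(lambda (x) (* x x)))

(defun run (x)
  ;; Execute the current code-as-data.
  (funcall (eval *my-code*) x))

(defun improve ()
  ;; "Self-modification" is just an edit to the data structure,
  ;; here replacing * with + throughout.
  (setf *my-code* (subst '+ '* *my-code*)))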

Do all these different ways have the same self-improvement power, in a way analogous to how all good programming languages are Turing-complete? For instance, imagine one logic program that can rewrite its facts, another that can also add new rules, a third that can also delete rules, and a fourth that can also rewrite its logic. Are these different enough that we should consider them as different cases when we enumerate the methods of recursive self-improvement?
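
A sketch of why these look like genuinely different cases, written in Common Lisp rather than a logic language (all names invented): if the facts, the rules, and the inference procedure are separate structures, each extra power is an extra operation the program is allowed to perform on itself.

(defvar *facts* (list '(parent alice bob)))
(defvar *rules* (list '((ancestor ?x ?y) (parent ?x ?y))))  ; head, then body

;; Level 1: may rewrite its facts.
(defun add-fact (fact) (push fact *facts*))

;; Level 2: may also add new rules.
(defun add-rule (rule) (push rule *rules*))

;; Level 3: may also delete rules.
(defun delete-rule (rule)
  (setf *rules* (remove rule *rules* :test #'equal)))

;; Level 4: may also rewrite its own inference procedure, e.g. by
;; (setf (symbol-function 'prove) ...), which the first three cannot do.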

9 comments

comment by SilentCal · 2015-10-30T16:00:09.136Z · LW(p) · GW(p)

At the heart of this question is some concept of resource permission that I'm trying to nail down--that is, agent X has 'self-modified' into agent Y iff agent Y has the same hardware resources that agent X had. This distinguishes self-modification from emulation, which is important; humans have limited self-modification, but with a long paper tape we can emulate any program.

A proposed measure: Define the 'emulation penalty' of a program that could execute on the AI's machine as the ratio of the runtime of the AI's fastest possible emulation of that program to the runtime of the program executing directly on the machine. The maximum emulation penalty over all possible programs puts at least a lower bound on the AI's ability to effectively self-modify into any possible agent.
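
Stated a little more explicitly (my notation, not the comment's): for each program P that the machine could run natively, let penalty(P) = T_emulated(P) / T_native(P), and take the proposed measure to be the maximum of penalty(P) over all such P.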

An AI that can write and exec assembly would have a max emulation penalty of 1; one that can write and exec a higher-level language would probably have 10-100 (I think?); and one that could only carry out general computation by using an external paper tape would have a max emulation penalty in the billions or higher.

Replies from: Gurkenglas
comment by Gurkenglas · 2016-01-09T11:38:52.020Z · LW(p) · GW(p)

Therefore, for a computer in Greg Egan's Permutation City, emulation is self-modification?

comment by Gunnar_Zarncke · 2015-10-29T07:15:26.021Z · LW(p) · GW(p)

Do all these different ways have the same self-improvement power, in a way analogous to how all good programming languages are Turing-complete?

I think self-improvement power splits into two aspects: a) which layer of 'execution substrate' the optimization process can reach (source code, byte code, machine code, logic gates, production processes, physics) and b) which level of optimization complexity the process can reach. I think you are referring to the latter. Also note that a Turing-complete optimization process is probably harder to control than a process that is still fairly general but has some of the limitations proposed by Stuart Armstrong built in.

comment by turchin · 2015-10-29T09:30:38.364Z · LW(p) · GW(p)

It is also interesting to note that the program can't run and optimise itself simultaneously. It probably needs to copy its source code, edit it, then terminate itself and start the new code. Or it could edit only a subagent which is not in use at the current moment.
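
A rough sketch of that copy-edit-restart pattern (this assumes SBCL; the file name and the "edited" payload are invented):

;; Write out the edited copy of the source.
(with-open-file (out "self-v2.lisp" :direction :output :if-exists :supersede)
  (write-string "(format t \"running the edited version~%\")" out))

;; Start the new version without waiting for it, then terminate the
;; old one.
(sb-ext:run-program "sbcl" '("--script" "self-v2.lisp") :search t :wait nil)
(sb-ext:exit)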

Replies from: Luke_A_Somers, Thomas
comment by Luke_A_Somers · 2015-10-29T11:47:49.481Z · LW(p) · GW(p)

Even if it were a von Neumann architecture, it should be able to freely modify any function not in its call chain, and would only have to include special handling code, which edits itself out, for when it returns to a function in its call chain.

In other architectures, I can imagine anything changing anything short of the architecture and the code the execution pointer will traverse while editing. Only in that one case do I foresee a need to work in scratch space.

Replies from: turchin
comment by turchin · 2015-10-29T12:16:45.441Z · LW(p) · GW(p)

Basically you are saying the same thing I said. The program could edit one of its parts which is not currently in use, and can't edit a part where the pointer may be during the editing process, because that would crash the whole editing process halfway through.

Replies from: Luke_A_Somers, PhilGoetz
comment by Luke_A_Somers · 2015-10-29T15:32:47.404Z · LW(p) · GW(p)

You said a different sub-agent. That sounds to me like a much larger structure than the current deepest function.

A function does not need to have most of the characteristics which go with 'agency', like modeling the world and having preferences. It just needs to execute machine instructions. If you just meant 'right in front of the execution pointer', then ok.

comment by PhilGoetz · 2015-10-29T13:52:25.809Z · LW(p) · GW(p)

Not all programs are compiled from source code. AGI architectures are often written in LISP or Prolog, which are interpreted. They typically have their key knowledge structures, such as their goals and some of their algorithms, encoded in data structures. It's unclear whether they would need to rewrite any code to recursively self-improve. It probably depends on particulars of the logic used. Hence this post.
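
A toy illustration of goals held in data structures (invented names, not taken from any actual architecture): revising the goals is a data edit, and no function definition changes at all.

(defvar *goals* (list (list 'maximize 'paperclips 0.9)
                      (list 'minimize 'energy-use 0.1)))

(defun revise-goals ()
  ;; "Self-improvement" here is a destructive edit of the goal data:
  ;; bump one goal's weight and add a new goal.
  (setf (third (first *goals*)) 0.95)
  (push (list 'preserve 'goal-stability 0.5) *goals*))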

I'm going to delete the footnote your initial comment referred to, because it's a distraction.

comment by Thomas · 2015-10-29T10:53:12.647Z · LW(p) · GW(p)

the program can't run and optimise itself simultaneously

I think hot updating is worth considering as well.