Relevance of the STEPS project to (F)AI teams

post by Gunnar_Zarncke · 2013-08-31T17:27:55.473Z · LW · GW · Legacy · 7 comments

Contents

7 comments

The Viewpoints Research Institute http://www.vpri.org/ (which, by the way, brought us Squeak Etoys) has a project called STEPS Toward the Reinvention of Programming.

The goal is basically to create a complete operating system, tool chain, and program suite in 20,000 lines of code. The reasons for this and the other goals are not relevant to this post, but I recommend taking a look at them anyway.

I have followed the progress of the STEPS project more or less since its beginning in 2007: http://www.vpri.org/pdf/tr2007008_steps.pdf

This project is interesting to the (F)AI community insofar as it contains a complete operating system in a form with essentially no noise or redundancy in its implementation. It is extremely powerful (in terms of expressivity and generality) and thus more amenable to artificial reasoning and optimization than any other system.

The current report http://www.vpri.org/pdf/tr2011004_steps11.pdf contains a very interesting paragraph on page 22:

"An awkward and complicated 'optimization tower of babel' is avoided by giving our DSLs the ability to act on their own implementation and optimizations, flattening the 'tower' into a collection of reflexive and metacircular mechanisms. The line between strategy and implementation, between coordination and computation, is eliminated -- or at least guaranteed to be eliminable whenever necessary."

That means that an AI system running on top of it could, I guess, optimize its own substrate in the way an intelligence explosion would require. And you may guess who might be running AI on top of it.
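To make "reflexive and metacircular" concrete, here is a toy sketch of my own (not STEPS code, and far simpler than their DSLs): a tiny evaluator whose environment exposes the evaluator itself, so the programs it runs can inspect and rewrite parts of their own implementation.

```python
# Toy illustration (not from STEPS): a minimal evaluator whose environment
# contains the evaluator itself, so programs can act on their own implementation.

def evaluate(expr, env):
    """Evaluate a tiny Lisp-like expression: numbers, symbols, (op a b)."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, *args = expr
    fn = evaluate(op, env)
    return fn(*[evaluate(a, env) for a in args])

env = {
    "+": lambda a, b: a + b,
    "*": lambda a, b: a * b,
    # The 'reflexive' hook: expose the evaluator and its environment
    # to the programs it runs.
    "eval": lambda e: evaluate(e, env),
    "env": lambda: env,
}

# An ordinary computation:
print(evaluate(["+", 1, ["*", 2, 3]], env))  # 7

# A program rewriting part of its own implementation:
# swap '+' for a traced version, then use it.
def traced_add(a, b):
    print(f"add({a}, {b})")
    return a + b

env["env"]()["+"] = traced_add
print(evaluate(["+", 10, 20], env))  # prints add(10, 20), then 30
```

In STEPS this idea is applied all the way down, so that the same trick reaches the compilers and optimizers themselves rather than just a toy arithmetic language.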

EDIT: Some seem to think that this post is somewhat inappropriate, but I can't make out why. Because it is too short? Because it is about AI? Because it requires reading up on the project? Because it is about software engineering instead of rationality? I'm a newbie and don't know what to improve, or how. If you downvote, please leave a comment explaining why.

7 comments

Comments sorted by top scores.

comment by Kawoomba · 2013-08-31T19:17:20.781Z · LW(p) · GW(p)

It would be much better to specify the limit in terms of how large the compressed (OS) program may be, rather than in lines of code, since lines of code are too easy to game as a metric.
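A quick sketch of the point (my own illustration): line count collapses the moment you cram statements onto one line, while the compressed size of the same content barely moves.

```python
# Why line count is gameable as a size metric while compressed
# byte size is harder to cheat: same program, two layouts.
import gzip

readable = "x = 1\ny = 2\nz = x + y\nprint(z)\n"
# Same program crammed onto one line: "fewer lines", same content.
crammed = "x = 1; y = 2; z = x + y; print(z)\n"

def loc(src):
    """Count non-empty source lines."""
    return len(src.strip().splitlines())

def gz(src):
    """Compressed size in bytes, a rough proxy for information content."""
    return len(gzip.compress(src.encode()))

print(loc(readable), loc(crammed))  # 4 vs 1: line count collapses
print(gz(readable), gz(crammed))    # compressed sizes stay close
```

The same trick scales up: macro tricks, code generation, or dense one-liners can shrink a 20,000-line budget arbitrarily, but they add back most of the bytes once you compress.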

comment by jmmcd · 2013-09-01T07:02:24.499Z · LW(p) · GW(p)

The unstated assumption is that a non-negligible proportion of the difficulty in creating a self-optimising AI has to do with the compiler toolchain. I guess most people wouldn't agree with that. For one thing, even if the toolchain is a complicated tower of Babel, why isn't it good enough to just optimise one's source code at the top level? Isn't there a limit to how much you can gain by running on top of a perfect O/S?

(BTW, "tower of Babel" is a nice phrase which gets at the sense of unease associated with these long toolchains, e.g. Python - RPython - LLVM - ??? - electrons.)

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2013-09-01T09:00:12.776Z · LW(p) · GW(p)

There are lots of reasons why an optimizable OS and tool chain are relevant:

  • control over the lower-level OS allows for significant performance gains (there have been significant algorithmic improvements in process isolation, scheduling, and e.g. garbage collection at the OS level, all of which improve run-time performance);

  • access to a comparatively simple OS and tool chain allows the AI to spread to other systems. Writing a low-level virus is significantly simpler, more powerful, more effective, and easier to hide than spreading via a text interface;

  • a self-optimizable tool chain of this kind is presumably needed within an AI system anyway, and STEPS proposes a way not only to model but actually to build one.

Replies from: jmmcd
comment by jmmcd · 2013-09-01T10:40:02.376Z · LW(p) · GW(p)

control over the lower level OS allows for significant performance gains

Even if you got a 10^6 speedup (you wouldn't), that gain is not compoundable. So it's irrelevant.

access to a comparatively simple OS and tool chain allows the AI to spread to other systems.

Only if those other systems are kind enough to run the O/S you want them to run.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2013-09-01T10:59:44.903Z · LW(p) · GW(p)

It may be irrelevant in the end, but not in the beginning. I'm not really talking about the runaway phase of some AI but about the hard or non-hard takeoff, where any such factor will weigh heavily: a factor of 10^3 makes the difference between years and hours.
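The arithmetic behind that claim, spelled out (the year figures are just illustrative):

```python
# Back-of-the-envelope check: how a 10^3 speedup turns a
# multi-year computation into a matter of hours.
HOURS_PER_YEAR = 365 * 24  # 8760

def sped_up_hours(years, speedup):
    """Wall-clock hours after applying a constant speedup factor."""
    return years * HOURS_PER_YEAR / speedup

print(sped_up_hours(1, 10**3))  # 1 year  -> 8.76 hours
print(sped_up_hours(5, 10**3))  # 5 years -> 43.8 hours
```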

comment by wuncidunci · 2013-09-02T18:16:53.635Z · LW(p) · GW(p)

Understanding the OS to be able to optimize better sounds somewhat useful to a self-improving AI.

Understanding the OS to be able to reason properly about probabilities of hardware/software failure sounds very important to a self-improving AI that does reflection properly. (Obviously it needs to understand the hardware as well, but you can't understand all the steps between the AI and the hardware if you don't understand the OS.)

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2013-09-02T21:52:06.793Z · LW(p) · GW(p)

If I remember correctly, a VHDL specification of the hardware was also part of the STEPS project.