Comments

Comment by derekz2 on Nonsentient Optimizers · 2008-12-27T15:58:17.000Z · LW · GW

These last two posts were very interesting; I strongly approve of your approach here. Alas, I have no vocabulary with any sort of precision that I could use to make a "nonperson" predicate. I suppose one way to proceed is by thinking of things (most usefully, optimization processes) that are not persons and trying to figure out why... gradually expanding the cases covered as insight into the topic accumulates. Perform a reduction on "nonpersonhood," I guess. I'm not sure that one can succeed in a universal sense, though... plenty of people would say that a thermostat is a person, to some extent, and rejecting that view imposes a particular world-view.

It's certainly worth doing, though. I know I would feel much more comfortable with the idea of making (for example) a very fancy CAD program Friendly than with starting from the viewpoint that the first AI we want to build should be modeled on some sort of personlike, goofy sci-fi AI character. Except Friendly.

Comment by derekz2 on Not Taking Over the World · 2008-12-16T14:05:46.000Z · LW · GW

Oops, Julian Morrison already said something similar.

Comment by derekz2 on Not Taking Over the World · 2008-12-16T14:04:19.000Z · LW · GW

I think Robin's implied suggestion is worth taking seriously: don't be so quick to discard the option of building an AI that can improve itself in certain ways, but not to the point of needing to hardcode something like Coherent Extrapolated Volition. Is it really impossible to make an AI that can become "smarter" in useful ways (including by modifying its own source code, if you like) without it ever needing to make decisions itself that have severe nonlocal effects? If intelligence is an optimization process, perhaps we can choose more carefully what is being optimized until we are intelligent enough to go further.

I suppose one answer is that other people are on the verge of building AIs with unlimited powers, so there is no time to be thinking about limiting goals, powers, and initiative. I don't believe it, but if true, we really are hosed.

It seems to me that if reasoning leads us to conclude that building self-improving AIs is a million-to-one shot at not destroying the world, we could consider not doing it. Find another way.

Comment by derekz2 on Sustained Strong Recursion · 2008-12-06T15:15:14.000Z · LW · GW

Robin, perhaps you could elaborate a little bit... assuming I understand what's going on (I'm always hopeful), the "recursion" here is the introduction of output as a function of "subjective time" (y) instead of "clock time" (t), where, further, y is postulated to be related to t by:

dy/dt = e^y

because the rate of change of y with respect to t is directly related to output (which, as noted above, is said to be an exponential function of y due to Moore's-law-type arguments).

That's seriously "strange". It is very different from a non-"recursive" analysis where, say, dy/dt = e^t. I could imagine you objecting to the validity of this model, or claiming that this type of recursive loop is standard practice. Which of these are you saying, or are you saying something different entirely?
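
For what it's worth, here is a quick worked solution showing just how strange. This is my own sketch, with an illustrative initial condition y(0) = 0 that the post doesn't specify:

```latex
% Separating variables in dy/dt = e^y, with y(0) = 0:
\frac{dy}{dt} = e^{y}
  \;\Longrightarrow\; e^{-y}\,dy = dt
  \;\Longrightarrow\; -e^{-y} = t - 1
  \;\Longrightarrow\; y(t) = -\ln(1 - t)
```

Subjective time diverges at the finite clock time t = 1: a genuine singularity. By contrast, dy/dt = e^t merely gives y(t) = e^t - 1, which grows exponentially but stays finite for all t.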

Comment by derekz2 on Sustained Strong Recursion · 2008-12-06T11:42:42.000Z · LW · GW

Good post, thanks for making it. Besides the issue of whether Intel gets some of the recursive benefits, there is also the question of how FOOMable Intel would be if its engineers ran on its own hardware. Since Intel is embedded in the global economy and chip fabs are monstrously expensive undertakings, speeding up certain design tasks would only go so far. I suppose the answer is that Intel will shortly invent molecular nanotechnology, but it's not really clear to what extent Drexler's vision, or a completely flexible variant, is even possible.

Still, your point here was to illustrate mathematically the way "recursion" of the type you are talking about increases growth, and you did a good job of that.

Comment by derekz2 on Recursive Self-Improvement · 2008-12-02T03:46:23.000Z · LW · GW

From a practical point of view, a "hard takeoff" would seem to be defined by self-improvement and expansion of control at a rate too fast for humans to cope with. As an example of this, it is often put forward as obvious that the AI would invent molecular nanotechnology in a matter of hours.

Yet there is no reason to think it's even possible to improve molecular simulation (required to search molecular process-space) much beyond our current algorithms, which on any near-term hardware are nowhere near up to the task. The only explanation is that you are hypothesizing rather incredible increases in abilities such as this, without any reason to think they are possible.

It's this sort of leap that makes the scenario difficult to believe. Too many miracles seem necessary.

Personally I can entertain the possibility of a "takeoff" (though it is no sure thing that one is possible), but the level of optimization required for a hard takeoff seems unreasonable. Merely compiling a large software project (a trivial transformation) is a lengthy process. There are limits to what a particular computer can do.

Comment by derekz2 on Surprised by Brains · 2008-11-23T14:33:22.000Z · LW · GW

At some random time before human brains appeared, the timing estimate would be much more difficult. Believer's case for imminence is helped a lot by humans' existence.

Comment by derekz2 on Whence Your Abstractions? · 2008-11-20T03:23:16.000Z · LW · GW

This public exercise, where two smart people hunt for the roots of their disagreement on a complex issue, has the potential to be the coolest thing I've seen yet on this blog. I wonder if it's actually possible -- there are so many ways that it could go wrong, humans being what they are. I really appreciate the two of you giving it a try.

Not that you asked for people to take shots from the cheap seats, but Eliezer, your ending question: "I mean... is that really what you want to go with here?" comes across (to me at least) as an unnecessarily belligerent way to proceed, likely to lead the process in unfortunate directions. It's not an argument to win, with points to score.

Comment by derekz2 on Failure By Analogy · 2008-11-18T04:50:42.000Z · LW · GW

Analogies are great guess generators, sources of wondrous creativity. It's very cool that the universe works in such a way that analogy often leads to fruitful ideas, and sometimes to truths.

But determining whether a particular analogy has really led anywhere requires more and different work, because analogy is not a valid form of inference. Nothing is ever true because of an analogy.

Comment by derekz2 on Selling Nonapples · 2008-11-14T00:41:01.000Z · LW · GW

I think subsumption is still popular amongst students and hobbyists.

It does raise an interesting "mini-Friendliness" issue... I'm not really comfortable with the idea of subsumption software systems driving cars around on the road. Robots taking over the world may seem silly to most of the public, but there are definite decisions to be made soon about what criteria we should use to trust complex software systems that make potentially deadly decisions. So far I think there's a sense that the rules robots should use are completely clear, the situations enumerable -- because of the extreme narrowness of the task domains. As tasks become more open-ended, that may not be true for much longer.
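
To make concrete what kind of system is at issue, here is a minimal sketch of a Brooks-style subsumption controller in Python. The layer names and sensor fields are hypothetical, chosen only to give it a driving flavor, not taken from any real system; the point is just that the highest-priority layer whose trigger fires takes over, suppressing everything below it.

```python
# Minimal sketch of a Brooks-style subsumption controller.
# Layers are ordered by priority; the first (highest-priority) layer
# whose trigger condition fires suppresses (subsumes) everything below.
# All names here are hypothetical, for illustration only.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Layer:
    name: str
    applies: Callable[[dict], bool]   # does this layer want control?
    command: Callable[[dict], str]    # what it does when it has control

def subsumption_step(layers: list[Layer], sensors: dict) -> str:
    """Run one control cycle: first applicable layer wins."""
    for layer in layers:  # ordered highest priority first
        if layer.applies(sensors):
            return layer.command(sensors)
    return "idle"

# Hypothetical driving-flavored layers, highest priority first.
layers = [
    Layer("avoid_collision",
          applies=lambda s: s["obstacle_distance_m"] < 5.0,
          command=lambda s: "brake hard"),
    Layer("stay_in_lane",
          applies=lambda s: abs(s["lane_offset_m"]) > 0.5,
          command=lambda s: "steer back toward lane center"),
    Layer("cruise",
          applies=lambda s: True,
          command=lambda s: "hold speed"),
]

print(subsumption_step(layers, {"obstacle_distance_m": 3.0, "lane_offset_m": 0.1}))
# -> "brake hard": the collision layer subsumes the others.
```

Even in this toy, the worry above is visible: everything the controller will ever do safely depends on which trigger conditions the designers thought to enumerate in advance.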