Comments

Comment by jprwg on By default, avoid ambiguous distant situations · 2019-06-02T21:48:21.202Z · LW · GW

I see, thank you. So then, would you say this doesn't (& isn't intended to) answer any question like "whose perspective should be taken into account?", but that it instead assumes some answer to that question has already been specified, & is meant to address what to do given this chosen perspective?

Comment by jprwg on By default, avoid ambiguous distant situations · 2019-05-28T20:56:47.314Z · LW · GW

> I'm trying to synthesise actual human values, not hypothetical other values that other beings might have.

To be clear, when you say "actual human values", do you mean anything different than just "the values of the humans alive today, in the year 2019"? You mention "other beings" - is this meant to include other humans in the past who might have held different values?

Comment by jprwg on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-24T21:54:44.515Z · LW · GW

Perhaps "size of compiled program" would be one way to make a crude complexity estimate. But I definitely would like to be able to better define this metric.

In any case, I don't think the concept of software complexity is meaningless or especially nebulous. A program with a great many different bespoke modules, which all interact in myriad ways and are in turn full of details and special cases and so on, is complex. A program that's essentially just a core algorithm plus a bit of implementation detail is simple.

I do agree that the term "complexity" is often used in unhelpful ways; a common example is the claim that the brain must be astronomically complex purely on the basis of it having so many trillions of connections. Well, a bucket of water has about 6x10^26 hydrogen bonds, but who cares? This is clearly not a remotely useful model of complexity.
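(Back-of-envelope sketch of that figure, assuming a roughly 10-litre bucket and about two donated hydrogen bonds per water molecule:)

```python
AVOGADRO = 6.022e23
grams_of_water = 10_000                            # ~10 litres
molecules = (grams_of_water / 18.0) * AVOGADRO     # molar mass of H2O ~ 18 g/mol
hydrogen_bonds = molecules * 2                     # ~2 bonds donated per molecule
print(f"{hydrogen_bonds:.1e}")                     # ~6.7e26, same order as above
```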

I do think learned complexity makes the general problem of defining complexity harder, since the training data can't count for nothing. Otherwise, you could claim the interpreter is the program, and the program you feed into it is really the training data. So clearly, the simpler and more readily available the training data, the less complexity it adds. And the cheapest, simplest training data of all would be that generated from self-play.
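A toy sketch of the accounting I have in mind, with gzip standing in very crudely for description length; the code string, the external dataset, and the seed are all made-up placeholders:

```python
import gzip

def desc_len(blob: bytes) -> int:
    """gzip size as a very crude stand-in for description length."""
    return len(gzip.compress(blob))

code = b"def policy(state): ..."                       # the learning algorithm itself
expert_games = b"<millions of labelled expert games>"  # external data (enormous in reality)
seed = b"42"                                           # self-play data regenerates from code + seed

total_with_expert_data = desc_len(code) + desc_len(expert_games)
total_with_self_play = desc_len(code) + desc_len(seed)  # the data term is just the seed
print(total_with_expert_data, total_with_self_play)
```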

> Although, the design instructions for the brain can be efficiently compressed, and indeed brains are made from surprisingly simple instructions.

Can you elaborate on this? If this is based on the size of the functional genome, can you assure me that the prenatal environment, or simply biochemistry in general, offers no significant additional capability here?

I'm reminded of gwern's hypothetical interpreter that takes a list of integers and returns corresponding frames of Pirates of the Caribbean (for which I can't now find a cite... I don't think I imagined this?). Clearly the possibility of such an interpreter does not demonstrate that the numbers 0 through 204480 are generically all that's needed to encode Pirates of the Caribbean in full.
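To make the shape of that argument explicit, the hypothetical interpreter might look something like this (purely illustrative; load_all_frames and the filename are invented):

```python
def make_interpreter(video_frames):
    """`video_frames`: every frame of the film, baked into the interpreter."""
    def interpret(program: int):
        return video_frames[program]    # the "program" is just an index
    return interpret

# Hypothetical usage (load_all_frames and the filename are invented):
# interpret = make_interpreter(load_all_frames("pirates_of_the_caribbean.mkv"))
# interpret(204480)  # "runs" one frame, yet the integer itself encodes almost nothing
```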

Comment by jprwg on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-23T23:20:30.097Z · LW · GW

That could represent one step in a general trend of subsuming many detailed systems into fewer simpler systems. Or, it could represent a technology being newly viable, and the simplest applications of it being explored first.

For the former to be the case, this simplification process would need to keep happening at higher and higher abstraction levels. We'd explore a few variations on an AI architecture, then get a new insight that eclipses all these variations, taking the part we were tweaking and turning it into just another parameter for the system to learn by itself. Then we'd try some variations of this new simpler architecture, until we discover an insight that eclipses all these variations, etc. In this way, our AI systems would become increasingly general without any increase in complexity.
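As a cartoon of that pattern in code (PyTorch-flavoured, with the module and its dimensions invented purely for illustration), the hand-tuned knob just becomes a learned parameter:

```python
import torch
import torch.nn as nn

class HandTweakedModel(nn.Module):
    """A knob the researchers tune by hand across papers."""
    def __init__(self):
        super().__init__()
        self.features = nn.Linear(64, 32)
        self.mixing_weight = 0.7              # fixed by hand

    def forward(self, x):
        return self.mixing_weight * self.features(x)

class SubsumedModel(nn.Module):
    """The same knob, turned into just another parameter the system learns."""
    def __init__(self):
        super().__init__()
        self.features = nn.Linear(64, 32)
        self.mixing_weight = nn.Parameter(torch.tensor(0.7))  # learned from data

    def forward(self, x):
        return self.mixing_weight * self.features(x)
```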

Without this kind of continuing trend, I'd expect that increasing capability in NN-based software will have to be achieved in the same way as in regular old software: by integrating more subsystems, covering more edge cases, and generally increasing complexity and detail.

Comment by jprwg on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-23T22:00:35.907Z · LW · GW

> Humans didn't evolve separate specialized modules for doing theoretical physics, chemistry, computer science, etc.; indeed, we didn't undergo selection for any of those capacities at all, they just naturally fell out of a different set of capacities we were being selected for.

Yes, a model of brain modularity in which the modules are fully independent end-to-end mechanisms for doing tasks we never faced in the evolutionary environment is pretty clearly wrong. I don't think anyone would argue otherwise. The plausible version of the modularity model claims the modules or subsystems are specialised for performing relatively narrow subtasks, with a real-world task making use of many modules in concert - like how complex software systems today work.

As an analogy, consider a toolbox. It contains many different tools, and you could reasonably describe it as 'modular'. But this doesn't at all imply that it contains a separate tool for each DIY task: a wardrobe-builder, a chest-of-drawers-builder, and so on. Rather, each tool performs a certain narrow subtask; whole high-level DIY tasks are completed by applying a variety of different tools to different parts of the problem; and of course each tool can be used in solving many different high-level tasks. Generality is achieved by your toolset offering broad enough coverage to enable you to tackle most problems, not by having a single universal thing-doer.
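In code, the toolbox picture looks something like this (the "tools" and tasks are invented, deliberately trivial stubs):

```python
def segment_scene(image):            # narrow subtask: find the objects
    return ["mug", "book"]           # stub

def estimate_grasp(obj):             # narrow subtask: how to pick one up
    return f"grasp({obj})"           # stub

def plan_path(start, goal):          # narrow subtask: how to get there
    return [start, goal]             # stub

def tidy_the_room(image, start):     # one high-level task, composed from many tools
    for obj in segment_scene(image):
        grasp = estimate_grasp(obj)
        plan_path(start, grasp)

def set_the_table(image, start):     # a different task reuses the same tools
    for obj in segment_scene(image):
        plan_path(start, obj)
```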

> ... I also think [HLAI is] likely to be a discrete research target ... You just get all the capabilities at once, and on the path to hitting that threshold you might not get many useful precursor or spin-off technologies.

What's your basis for this view? For example, do you have some strong reason to believe the human brain similarly achieves generality via a single universal mechanism, rather than via the combination of many somewhat-specialised subsystems?

Comment by jprwg on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-22T23:59:09.419Z · LW · GW

> The main thing that would predict slower takeoff is if early AGI systems turn out to be extremely computationally expensive.

Surely that's only under the assumption that Eliezer's conception of AGI (a simple general optimisation algorithm) is right, and Robin's (a big, intricate system comprising very many separate modules) is wrong? Is it just that you think that assumption is pretty certain to be right? Or are you saying that even under the Hansonian model of AI, we'd still get a FOOM anyway?

Comment by jprwg on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-22T23:53:08.811Z · LW · GW

While I find Robin's model more convincing than Eliezer's, I'm still pretty uncertain.

That said, two pieces of evidence that would push me somewhat strongly towards the Yudkowskian view:

  • A fairly confident scientific consensus that the human brain is actually simple and homogeneous after all. This could perhaps be the full blank-slate version of Predictive Processing as Scott Alexander discussed recently, or something along similar lines.

  • Long-run data showing AI systems gradually increasing in capability without any increase in complexity. The AGZ example here might be part of an overall trend in that direction, but as a single data point it really doesn't say much.