Concrete vs Contextual values

post by whpearson · 2009-06-02T09:47:30.233Z · LW · GW · Legacy · 32 comments

The concept of recursive self-improvement is not an accepted idea outside of the futurist community. It just does not seem right in some fashion to some people. I am one of those people, so I'm going to try to explain the kind of instinctive skepticism I have towards it. It hinges on the difference between two sorts of values, a difference I have not seen made explicit before (although it likely has been somewhere): the difference between a concrete and a contextual value.

So let's run down the argument so I can pin down where it goes wrong in my view.

  1. There is a value called intelligence that roughly correlates with the ability to achieve goals in the world (if it does not, then we don't care about intelligence explosions, as they will have negligible impact on the real world™).
  2. All things being equal, a system with more compute power will be more capable than one with less (assuming it can get the requisite power supply). Similarly, systems with algorithms that have better run-time complexities will be more capable. (A rough sketch after this list illustrates the point.)
  3. Computers will be able to do things to increase the values in 2. Therefore they will form a feedback loop and become progressively more and more capable at an ever-increasing rate.
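
To put a rough number on point 2: here is a minimal sketch, with an assumed one-hour time budget and illustrative MIPs figures (none of these numbers are from the post), of how both more compute and a better run-time complexity raise the largest problem size you can handle:

```python
import math

TIME_BUDGET_S = 3600.0  # assume one hour per run (an illustrative choice)

def largest_n(mips: float, cost) -> int:
    """Largest problem size n (found by doubling) whose cost fits the budget."""
    ops_available = mips * 1e6 * TIME_BUDGET_S  # MIPs = millions of instructions per second
    n = 1
    while cost(n * 2) <= ops_available:
        n *= 2
    return n

for mips in (10, 20, 40):
    quadratic = largest_n(mips, lambda n: n ** 2)
    n_log_n = largest_n(mips, lambda n: n * math.log2(n))
    print(f"{mips:3d} MIPs: O(n^2) handles n ~ {quadratic:,}; O(n log n) handles n ~ {n_log_n:,}")
```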

The point where I become unstuck is the phrase "all things being equal", especially what the "all" stands for. Let me run down a similar argument for wealth.

  1. There is a value called wealth that roughly correlates with the ability to acquire goods and services from other people.
  2. All things being equal, a person with more money will be wealthier than one with less.
  3. You are able to put your money in the bank and earn compound interest on it, so your wealth should grow exponentially in time (ignoring taxes).

Step 3 can be wrong here, depending on the rate of interest and the rate of inflation. Because of inflation, each dollar you have in the future is less able to buy goods. That is, the argument in 3 ignores the fact that at different times and in different environments money is worth different amounts of goods; hyperinflation is a stark example of this. So "all things being equal" references the current time and state of the world, and 3 breaks that assumption by allowing time and the world to change.
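
A minimal numeric sketch of this, using made-up rates (5% interest, 7% inflation; illustrative figures, not from the post): the nominal balance compounds upward while its purchasing power in year-0 dollars shrinks.

```python
# Illustrative only: assumed 5% nominal interest and 7% inflation.
interest, inflation = 0.05, 0.07
nominal = 1000.0  # starting balance
for year in range(1, 11):
    nominal *= 1 + interest                    # what the bank statement says
    real = nominal / (1 + inflation) ** year   # deflated to year-0 dollars
    print(f"year {year:2d}: nominal = {nominal:8.2f}, real = {real:8.2f}")
```

Which way the real column moves depends entirely on the two rates, which is the point above.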

Why doesn't the argument work for wealth, when you can get stable recursive growth in the neutrons of a reactor? It is because wealth is a contextual value: it depends on the world around you. As your money grows with compound interest, the world changes so as to make it less valuable without touching your money at all. Nothing can change the number of neutrons in your reactor without physically interacting with them or the reactor in some way. The neutron density is a concrete and containable value, and you can do sensible maths with it.
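
For contrast, the "sensible maths" here is just the standard exponential-growth relation for a chain reaction (textbook form, not something given in the post): with effective multiplication factor k and mean neutron generation time Λ,

```latex
N(t) = N_0 \, k^{\,t/\Lambda}, \qquad \text{runaway (supercritical) growth whenever } k > 1
```

and this holds regardless of what the rest of the world is doing.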

I'd argue that intelligence has a contextual nature as well. A simple example would be a computer chess tournament with a fixed algorithm that used as much in the way of resources as you threw at it. Say you manage to increase the resources for your team steadily by 10 MIPs per year; you will not win more chess games if another team is expanding their capabilities by 20 MIPs per year. That is, despite an increase in raw computing ability, there will not be an increase in your ability to achieve the goal of winning at chess. Another possible example of the contextual nature of intelligence is the case where a system's ability to perform well in the world is affected by other people knowing its source code, and using it to predict and counter its moves.
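
A minimal sketch of the chess example, under an assumed and purely hypothetical win-probability model in which only the ratio of compute matters; the 100 MIPs starting figure and the logistic form are my own stand-ins, not anything from the post:

```python
import math

def win_probability(own_mips: float, rival_mips: float, k: float = 2.0) -> float:
    """Hypothetical model: equal compute means 50/50; only the compute ratio matters."""
    return 1.0 / (1.0 + math.exp(-k * math.log(own_mips / rival_mips)))

own0, rival0 = 100.0, 100.0   # assumed starting compute, in MIPs
for year in range(0, 21, 5):
    own, rival = own0 + 10 * year, rival0 + 20 * year
    p = win_probability(own, rival)
    print(f"year {year:2d}: own = {own:5.0f} MIPs, rival = {rival:5.0f} MIPs, P(win) = {p:.2f}")
```

Under these assumptions your absolute compute triples over twenty years while your expected share of wins falls, which is the sense in which the value is contextual.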

From the view of intelligence as a contextual value, current discussion of recursive self-improvement seems overly simplistic. We need to make explicit the important things in the world that intelligence might depend upon, and then see if we can model the processes such that we still get FOOMs.

Edit: Another example of an intelligence's effectiveness being contextual is the role of knowledge in performing tasks. Knowledge can have an expiration date, after which it becomes less useful. Consider the usefulness of knowledge of current English idioms for writing convincing essays, or of the current bacterial population when trying to develop nano-machines to fight them. So you might have an atomically identical intelligence whose effectiveness varies depending upon the freshness of its knowledge. So there might be conflicts, when trying to shape the future, between expending resources on improving processing power or algorithms and keeping knowledge fresh. It is possible, but unlikely, that an untruth you believe will become true in time (say your estimate for the population of a city was too low, but its growth took it to your belief), but as there are more ways to be wrong than right, knowledge is likely to degrade with time.

32 comments

Comments sorted by top scores.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-06-02T16:34:17.686Z · LW(p) · GW(p)

Both of your attempted counterexamples, in chess and wealth, have the same structure: it is relative chess-playing ability and relative wealth that fail to go FOOM, because they are directly opposed by other FOOMs on the same curve; it is obvious that relative ability cannot go FOOM for all players in a game. Economic growth is still an exponential curve, and so is the chess-playing efficiency of computers compared to stationary targets (that is, us).

The question of relative-ability FOOM versus global-shared FOOM was precisely the substance of the dispute between myself and Robin.

I would also suggest that the distribution of wealthy humans is more fat-tailed than the distribution of computer chess-player efficiencies precisely because of the increased self-interaction of human wealth.

Replies from: whpearson
comment by whpearson · 2009-06-02T17:38:47.682Z · LW(p) · GW(p)

Those weren't all my counterexamples; see the degradation of some knowledge over time.

Economic growth is not currently an exponential upwards curve. So I'm not sure what your point is here.

Is high inflation possible on a global scale, with people's saved wealth shrinking in absolute terms as well? I don't see why not; it just requires a global decrease in output with an increasing population.

I also don't see why people compare the intelligence of computers against humans on their own. Humans are tool users, so why not compare against humans plus their tools (i.e. computers)?

Replies from: Psychohistorian, Matt_Simpson
comment by Psychohistorian · 2009-06-02T18:26:30.132Z · LW(p) · GW(p)

Your money example is a linguistic trick. It uses nominal wealth. Point 3 simply does not hold for nominal wealth. If "all else equal" meant "interest > inflation," then it would say that real wealth is growing exponentially, in which case 3 would indeed always hold. The example as it stands just doesn't prove much, other than that nominal wealth is not real wealth.

More significantly, wealth is purely socially determined; it has no objective value. Intellect has objective abilities. Chess is a terrible example. If you use something non-relative, like, say, deriving general relativity, or building a rocket, or catapult, or what-have-you, higher intelligence will result in a better/more-quickly-made end product. In some cases, the end product won't exist without a certain level of intelligence; if no human had ever had an IQ over 75, I sincerely doubt we'd have general relativity (or electricity, or, well, pretty much anything).

There may be a bit more complexity to the issue of recursive self-improvement, but I really don't see the distinction between contextual and concrete values being nearly as significant as you have claimed.

Replies from: JGWeissman, whpearson
comment by JGWeissman · 2009-06-02T19:07:17.819Z · LW(p) · GW(p)

Your money example is a linguistic trick. It uses nominal wealth.

I would like to expand on this idea. Earning interest on a savings account is not a FOOM, but an attempt to participate in someone else's FOOM. The real economic FOOM comes from using one's resources to develop better resource producing capabilities, e.g. building a factory or tools (which may at some point be obsolete, the trick is to get a good return on the investment before that happens, and then move on to a new investment). A savings account, on the other hand, is able to collect interest by providing loans to people who have plans to make good investments. The interest rate they can collect will be low because there are more savings accounts than skilled investors (and the savings accounts compete with other loan providers, like central banks that can print money for loans, which incidentally also causes the inflation that negates the value of the interest earned).

Generally, it is not a problem that FOOMs may fail due to competition, as one of them will win the competition, just as the serious investors do better than those who use savings accounts.

comment by whpearson · 2009-06-02T20:01:41.241Z · LW(p) · GW(p)

Sorry about the message; I got the wrong meaning of objective.

Yes, IQ has objective properties in the sense that they don't rely on society.

But I think the distinction between objective and subjective is not a good one to keep. Minds and societies are physical things; they may change more mercurially than other physical things, but they are still physical.

comment by Matt_Simpson · 2009-06-02T19:17:26.650Z · LW(p) · GW(p)

Economic growth is not currently an exponential upwards curve. So I'm not sure what your point is here.

Don't mistake a short-term deviation for a long-term trend.

comment by orthonormal · 2009-06-02T17:24:25.477Z · LW(p) · GW(p)

I think the intuition you have for recursive self-improvement is that of a machine running essentially the same program at each stage, just faster. That's not what's meant. Human brains aren't sped-up chimp brains; they have some innate cognitive modules that allow them to learn, communicate, model cause and effect, and change the world in ways chimps simply can't. We don't know what cognitive modules are even possible, let alone useful; but it seems clear there's 'plenty of room at the top', and that a recursively self-improving intelligence could program and then use such modules. Not to go Vernor Vinge on you, but if a few patched-together modules made the difference between chimp and human technology, then a few deliberately engineered ones could leap to what we consider absurd levels of control over the physical world.

Replies from: whpearson
comment by whpearson · 2009-06-02T18:08:04.273Z · LW(p) · GW(p)

I agree there is plenty of room at the top. The question is how we get there.

I avoided the type of scenarios you describe because we don't understand them. We can't quantify them and be sure there will be positive feedback loops.

Replies from: orthonormal
comment by orthonormal · 2009-06-02T20:09:17.111Z · LW(p) · GW(p)

You have to quantify your uncertainty, at least. I consider it highly likely that there are many novel cognitive modules that an intelligence not far beyond human could envision and construct, that would never even occur to us. But not even this is required.

It seems to me implausible that the cognitive framework we humans have is anywhere near optimal, especially given the difficult mind-hacks it takes us to think clearly about basic problems that aren't hardwired in. Some really hard problems are solved in blinding speed without conscious awareness, while some very simple problems just don't have a cognitive organ standing ready, and so need to be (very badly) emulated by verbal areas of the brain. (If we did the mental arithmetic for Bayesian updating the way we do visual processing— or if more complicated hypotheses felt more unlikely— we'd have had spaceflight in 50,000 BC.) We're cobbled together one codon-change at a time, with old areas of the brain crudely repurposed as culture outstrips genetic change. Thus an AI with cognitive architecture on our level, but able to reprogram itself, would have ample room to become much, much smarter than us, even without going into cognitive realms we can't yet imagine— simply by integrating modules we can already construct externally, like calculators, into its reasoning and decision processes. Even this, without the further engineering of novel cognitive architecture, looks sufficient for a FOOM relative to human intelligence.

comment by conchis · 2009-06-02T11:52:12.786Z · LW(p) · GW(p)

Presumably, contextual effects could also be positive rather than negative, and could therefore make FOOMs more likely rather than less? To adapt the game analogy slightly, it depends whether your opponents are improving faster than your teammates (and whether your improving tends to attract or repel better teammates).

Physics probably isn't becoming a tougher opponent over time; society probably is. Part of the trick may lie in working with it rather than against it.

comment by Richard_Kennaway · 2009-06-02T12:24:11.567Z · LW(p) · GW(p)

What is Bill Gates, but wealth gone FOOM?

Replies from: CronoDAS
comment by CronoDAS · 2009-06-03T04:46:17.934Z · LW(p) · GW(p)

Warren Buffett is a better example of "wealth" gone FOOM.

Microsoft went FOOM, but it wasn't its money that went FOOM. It was MS-DOS that went FOOM. Many of Microsoft's other products were only as successful as they are because they were tied to MS-DOS and Windows. For example, the reason everybody uses Microsoft Office today is because, years ago, Microsoft charged PC manufacturers less for the Office + Windows bundle than for Windows alone. Everyone's PC already had Office installed when they bought it, so nobody had to go buy WordPerfect to do word processing or Lotus 1-2-3 to do spreadsheets. And then, once vendor lock-in took hold...

comment by Nick_Novitski · 2009-06-02T11:43:31.672Z · LW(p) · GW(p)

What makes the intelligence cycle zero-sum? What devalues the 10 MIPs advance? After all, the goal is not to earn a living on the prize money brought in by an Incredible Digital Turk, but to design superior probability-space-searching algorithms, using chess as a particular challenge, and then to use them to solve other problems which are not moving targets, like machine vision or materials analysis or... alright, I admit to ignorance here. I just suspect that not all goals for intelligence involve competing with/modeling other growing intelligences.

Technological advances (which seem similar enough to "increases in the ability to achieve goals in the world" to be worthy of a tentative analogy) may help some (the 20 MIPs crowd) disproportionately, but don't they frequently still help everyone who implements them? If people in Africa get cellphones, but people in Europe get supercomputers, all people are still getting an economic advantage relative to their previous selves; they can use resources better than they could previously.

Also, if point 3'' is phrased as vaguely as 3' (perhaps: "Wealthy people are able to do things to increase the values in 2''."), then it seems much more reasonable. Wealth can be used to obtain information and contacts that give a greater relative wealth-growing advantage, such as "Don't just put it all in the bank," or "My cousin's company is about to announce higher-than-expected earnings," or even "Global hyperinflation is coming, transfer assets to precious metals." Conversely (I think), if point 3' had a formulation sufficiently specific to be similarly limited ("Computers can keep having more RAM installed and thus will have more intelligence over time."), I don't see how that would be an indictment of the general case. What am I missing?

comment by PhilGoetz · 2009-06-03T17:37:04.929Z · LW(p) · GW(p)

What are you claiming?

I think your statements imply that humans aren't any more intelligent than chimpanzees, because a human isn't able to rise any higher in human society than a chimpanzee is able to rise in chimp society.

If you're making a claim about the utility of increasing intelligence, and saying that we'd be wise to forgo it, then you need to address the game-theoretic / evolutionary force in favor of it.

Replies from: whpearson
comment by whpearson · 2009-06-05T18:49:28.679Z · LW(p) · GW(p)

I'm claiming that intelligence is a contextual value. That is, in order to talk of the intelligence of something, you also have to reference the context it exists in.

As contexts change over time, positive feedback loops in things like improving processor speed or algorithm efficiency may not actually increase the ability to shape the world how you want.

comment by Z_M_Davis · 2009-06-02T16:32:05.226Z · LW(p) · GW(p)

a system with more compute power will be more capable than one with less (assuming it can get the requisite power supply). Similarly, systems with algorithms that have better run-time complexities will be more capable.

Is sheer computing power really the issue here? General intelligence isn't going to spontaneously emerge just because you built a really big supercomputer running a really efficient word processor.

comment by JamesCole · 2009-06-02T14:12:06.586Z · LW(p) · GW(p)

I'd argue that intelligence has a contextual nature as well. A simple example would be a computer chess tournament with a fixed algorithm that used as much in the way of resources as you threw at it. Say you manage to increase the resources for your team steadily by 10 MIPs per year; you will not win more chess games if another team is expanding their capabilities by 20 MIPs per year.

If you're comparing a randomly selected intelligent system against another randomly selected intelligent system drawn from the same pool, then of course the relative difference isn't going to change as you crank up the general level of intelligence.

But if you compare one of these against anything else as you crank up the general level of intelligence, then it's a whole other story. And these other comparisons are pretty much what's at stake here.

comment by JulianMorrison · 2009-06-02T12:07:13.749Z · LW(p) · GW(p)

You seem to be arguing that one FOOM in the middle of another bunch of faster FOOMs won't be so impressive.

Er, OK, fair enough, and so?

Replies from: whpearson
comment by whpearson · 2009-06-02T13:42:05.193Z · LW(p) · GW(p)

I'm trying to argue more than that; does my edit to the post make it clearer?

Replies from: JulianMorrison
comment by JulianMorrison · 2009-06-02T15:07:03.477Z · LW(p) · GW(p)

A bit. You're arguing that one intelligent system, comprising smarts and knowledge, can be differently effective depending on its context. Smarts might be less effective if opposed by something smarter. Knowledge might be less effective if it's mistaken or incomplete.

So far, not controversial.

What you haven't managed to do is dent the recursive self improvement hypothesis. That is, you haven't shown that "all things aren't equal" between an AI and its improved descendant self.

Replies from: whpearson
comment by whpearson · 2009-06-02T15:59:00.686Z · LW(p) · GW(p)

To me it seems obvious from looking at the history of the earth that the world changes and what might be effective at one point is not necessarily so in the future.

Is it up to me to show that "all things aren't equal", or is it up to you to show that "all things are equal"? Whose opinion should be the default position that needs to be refuted?

I think I have given sufficient real-world examples to at least make further thought into this matter worthwhile. Probably we should both try to argue the other's side or something.

Replies from: orthonormal, timtyler, JulianMorrison
comment by orthonormal · 2009-06-02T17:24:08.632Z · LW(p) · GW(p)

Well, some things change, but the examples we have of general intelligence are all cross-domain enough to handle such change. Human beings are more intelligent than chimps; no plausible change in the environment that leaves both humans and chimps alive will result in chimps developing more optimization power than humans. The scientific community in the modern world does a better job of focusing human intelligence on problem-solving than does a hunter-gatherer religion; no change in the environment that leaves our scientists alive will allow our technology to be surpassed by the combined forces of animist tribes from the African jungles.

comment by timtyler · 2009-06-02T18:59:15.548Z · LW(p) · GW(p)

Repeated asteroid strikes that kill all multicellular creatures would be an example of an environmental change that prevented (or at least delayed) an intelligence explosion.

In a benign environment, nature appears to favour collecting computing elements together. The enormous modern data centres are the most recent example from a long history of intelligence deployments.

comment by JulianMorrison · 2009-06-02T20:56:46.069Z · LW(p) · GW(p)

"Equal" is the default - the rules are simpler. Exceptions need explanations.

Replies from: whpearson
comment by whpearson · 2009-06-02T22:04:24.783Z · LW(p) · GW(p)

I think we might be getting too terse. I have explained some cases where the effectiveness of a collection of atoms at performing goals has a different value depending upon the environment. We need to explain those, so our function

intelligence func(atoms a, environment e)

can't just be the simpler

intelligence func(atoms a)

We need the environment in there sometimes, and we need to explain why it is in there and why not. What would justify making the equal case the default is if, over the space of all environments, more often than not the environment made no difference.

Replies from: JulianMorrison, orthonormal
comment by JulianMorrison · 2009-06-03T01:06:08.105Z · LW(p) · GW(p)

Intelligence in the abstract consumes experience (a much lower-level concept than either atoms or environment) and attempts to compute "understanding" - a predictive model of the underlying rules. Even very high intelligence wouldn't necessarily make a perfect model, given misleading input.

BUT

Intelligence is still a strictly more-is-stronger thing in a predictable universe. Which is what I read you as meaning by "all things being equal". Even if there is a theoretical limit on intelligence, nothing that exists comes remotely close. Even if there are confounding inputs, more intelligence will compensate better. Even if there are adverse circumstances, more intelligence will be better at predicting ahead of time and laying plans. Surprised human: lion gets lunch. Forewarned human: lion becomes a rug.

Replies from: whpearson
comment by whpearson · 2009-06-03T07:53:17.813Z · LW(p) · GW(p)

Intelligence is still a strictly more-is-stronger thing in a predictable universe.

Edit: By definition it is, but we have to be careful about what we say is obviously more intelligent. An animal with a larger, more complex brain might be said to be less intelligent than another if it can't get enough food to feed that brain, because it will not be around to use its brain and steer the future.

This is why all animals' brains aren't being expanded by evolution.

Replies from: JulianMorrison
comment by JulianMorrison · 2009-06-03T09:06:54.254Z · LW(p) · GW(p)

Evolution makes trade-offs for resources. No good having a better brain you can't afford to fuel.

"Predictability" as I used the word means laws of physics that can be inferred from experience. (Versus no laws, or no usable evidence.) Other intelligences don't make the universe unpredictable.

Replies from: whpearson
comment by whpearson · 2009-06-03T09:37:52.980Z · LW(p) · GW(p)

"Predictability" as I used the word means laws of physics that can be inferred from experience. (Versus no laws, or no usable evidence.)

In order to be able to make predictions about the world it is not enough to know just the laws of physics; you have to know the current state.

It is easier to infer the state of some non-intelligences than it is that of intelligences.

comment by orthonormal · 2009-06-02T23:08:27.913Z · LW(p) · GW(p)

What would justify making the equal case the default is if, over the space of all environments, more often than not the environment made no difference.

The environments we encounter are very homogeneous compared to the space of possibilities, enough so that it generally won't flip the ordering of (sufficiently different) minds by intelligence/optimization power. There's no plausible (pre-Singularity) environment in which chimps will suddenly have the technological advantage over humans, though they tie us in the case of global extinction.

Replies from: whpearson
comment by whpearson · 2009-06-03T07:44:54.796Z · LW(p) · GW(p)

Why pick chimps particularly? If there are any environments where humans don't survive and things with less brain power do (e.g. bacteria, beetles), then it indicates that it is not always good to have a big brain.

comment by conchis · 2009-06-02T11:51:23.327Z · LW(p) · GW(p)

In your wealth example, the contextual constraint affects the feedback process (i.e. capability to self-improve) rather than capability to affect the world. This seems crucial, but it's not clear that contextual constraints on intelligence function in this way.