[SEQ RERUN] Aiming at the Target

post by MinibearRex · 2012-10-07T03:44:03.557Z · LW · GW · Legacy · 17 comments

Today's post, Aiming at the Target, was originally published on 26 October 2008. A summary (taken from the LW wiki):

 

When you make plans, you are trying to steer the future into regions higher in your preference ordering.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Belief in Intelligence, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

17 comments

Comments sorted by top scores.

comment by Decius · 2012-10-07T06:13:08.424Z · LW(p) · GW(p)

"The target" is not always something you can aim at and hit. Sometimes there are two outcomes which are very close together in actualizing, but very far apart in desirability.

For example, launching a satellite might be the most desired outcome, while scrapping the project is of intermediate desirability and botching the launch has the lowest desirability. You can't aim away from 'botched launch' very much before you need to shift your aim all the way to 'scrap the project'.

The highest value on the dartboard is the triple-20, but the best place for someone with moderate accuracy to aim is somewhere between triple-16 and the bullseye. If one assumes that a player will hit a random point within a circular area whose radius depends on accuracy, the best point of aim is not a continuous function of the player's accuracy, but jumps from region to region. So it is with aiming at real targets: one doesn't choose the outcome, but chooses the region from which an outcome will be chosen.
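
(A quick Monte Carlo sketch of that jump, using a made-up one-dimensional board and invented point values; only the shape of the result matters.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up 1D "board": a narrow high-value band (like triple-20) sits next
# to a low-value band, with a broad medium-value band elsewhere.
def score(x):
    return np.select(
        [(x >= 0.0) & (x < 0.1),    # narrow, high value
         (x >= 0.1) & (x < 0.5),    # neighbouring low value
         (x >= 0.5) & (x <= 1.0)],  # broad, medium value
        [60.0, 1.0, 25.0],
        default=0.0)                # off the board entirely

def expected_score(aim, sigma, n=20000):
    """Monte Carlo estimate: throws scatter around `aim` with std dev `sigma`."""
    return score(rng.normal(aim, sigma, n)).mean()

aims = np.linspace(0.0, 1.0, 101)
for sigma in (0.01, 0.05, 0.20):
    best = max(aims, key=lambda a: expected_score(a, sigma))
    print(f"sigma={sigma:.2f}  best aim ~ {best:.2f}")
# With good accuracy the best aim sits in the narrow high-value band; past
# some accuracy threshold it jumps to the broad medium band rather than
# sliding over continuously.
```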

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-10-08T14:37:14.597Z · LW(p) · GW(p)

Hence the bit about expected utility.

Replies from: Decius
comment by Decius · 2012-10-08T20:41:19.024Z · LW(p) · GW(p)

What I'm saying is that a rational darts thrower who just hit triple-1 likely has a higher estimate of his own skill than one who just hit triple-14, and the one who just hit the bullseye likely has the lowest estimate of his own skill (assuming all players are maximizing points on one throw).

comment by Maelin · 2012-10-11T05:15:11.860Z · LW(p) · GW(p)

There's some discussion in the original thread about what exactly counts as optimising, but it doesn't seem to have reached a conclusion. I confess I'm struggling to find a definition of optimisation that says "definitely optimisation" about human minds and Deep Blue and evolution, but "definitely not optimisation" about a rock sitting on the ground or water running down a hill, and that feels like an actual reduction rather than something circular or synonymous.

Does anybody have a good working definition of optimisation that captures the things that feel optimisationy?

Replies from: MinibearRex
comment by MinibearRex · 2012-10-12T04:52:43.308Z · LW(p) · GW(p)

Optimization is a hypothesis. It's a complex hypothesis. You get evidence in favor of the hypothesis that water is an optimization process when you see it avoiding local minima and steering itself to the lowest possible place on Earth.

Replies from: Maelin
comment by Maelin · 2012-10-16T00:39:27.051Z · LW(p) · GW(p)

But who says the water has to optimise for "lowest possible place"? Maybe it's just optimising for "occupying local minima". Out of all the possible arrangements of the water molecules in the entire universe that the water might move towards if you fill a bucket from the ocean and then tip it out again, it sure seems to gravitate towards a select few, pun intended.

How can we define optimisation in a way that doesn't let us just say "it's optimising to end up like that" about any process with an end state?

Replies from: MinibearRex
comment by MinibearRex · 2012-10-16T04:24:41.490Z · LW(p) · GW(p)

Because there's a simpler hypothesis (gravity) that not only explains the behavior of water, but also the behavior of other objects, motions of the planets, etc. There is still some tiny amount of probability allocated to the optimization hypothesis, but it loses out to the sheer simplicity and explanatory power of competing hypotheses.

Replies from: Maelin
comment by Maelin · 2012-10-16T04:48:00.878Z · LW(p) · GW(p)

I don't think I'm being clear. I don't understand what it means for something to be, versus not be, an optimisation process. What features or properties distinguish an optimisation process from a non-optimisation process?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-10-16T05:09:18.282Z · LW(p) · GW(p)

How can we define optimisation in a way that doesn't let us just say "it's optimising to end up like that" about any process with an end state? [..] What features or properties distinguish an optimisation process from a not-optimisation process?

Well, OK, suppose we observe that process P causes a system S to transition from state S1 to S2, and observe that S1 is better than S2 for achieving the goals in set G1 while S2 is better than S1 for achieving those in G2. Suppose we lack a definition like the one you're asking for, and naively assert that P is an optimizing process for all the goals in G2. So, for example, we assert that gravity is an optimization process for collecting water in my basement, among other things.

Which, as you say, is unsatisfying.

But what happens next?

If we then apply P to a second system S', and observe that it causes a transition from S'1 to S'2 (favoring some goal set G'2), we are no longer able to say quite so readily that P optimizes for G2. Assuming we can talk about goals in a consistent way between systems, it seems more natural to say that P optimizes for the intersection of G2 and G'2.

If we observe the behavior of P across a wide range of systems, and we discover that the intersection of the goals optimized for by P is a fairly narrow target Gn, eventually we reach a point where if a new system Sx comes along in state Sx1, and we know that achievable state Sx2 is better for achieving Gn, we can confidently predict that P will cause an Sx1->Sx2 transition in Sx even if we don't know what the mechanism of that transition might be.
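
(A toy sketch of that procedure, with made-up goal labels: record which goals each observed transition favored, then intersect across systems to get the candidate narrow target Gn.)

```python
# Goals favored by P's observed transition in each system (labels are invented).
goals_favored = {
    "S":   {"G_a", "G_b", "G_c"},
    "S'":  {"G_b", "G_c", "G_d"},
    "S''": {"G_b", "G_e"},
}

# The candidate narrow target Gn is whatever survives across every system.
Gn = set.intersection(*goals_favored.values())
print(Gn)  # {'G_b'} -- predict P will steer new systems toward G_b
```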

It seems relatively clear to me that once we've reached that point, we have excellent grounds for calling P an optimization process for Gn.

This may not be a necessary condition -- indeed, I suspect it isn't -- but it seems sufficient.

Would you agree?

Replies from: Maelin
comment by Maelin · 2012-10-16T07:39:06.553Z · LW(p) · GW(p)

I'm not sure that this does the job, but I might be misunderstanding:

  • Clippy the paperclip maximiser, being placed in a given system S, transitions the system from state S1 (not many paperclips) to state S2 (many paperclips), and does this reliably across many different systems. We can confidently predict that if we put Clippy in a new system, it will soon end up full of paperclips, even if we aren't sure what the mechanism will be.
  • Water, being placed in a given system S, transitions the system from state S1 (water is anywhere) to state S2 (water occupies local minima and isn't just floating around), and does this reliably across many different systems. We can confidently predict that if we put water in a new system, it will soon end up with a wet floor but not likely a wet ceiling. It just so happens that we do know the mechanism of transition, but that shouldn't matter, I think.

So I feel like this kind of behaviour is actually necessary but isn't sufficient to identify an optimisation process. But I might be missing your point.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-10-16T13:13:20.910Z · LW(p) · GW(p)

Ah, OK. I think I get you now.
I agree that whether we know the mechanism doesn't matter.

Personally, I'm comfortable saying that gravity is an optimization process. The interesting question is what gravity optimizes for. I might conclude, for example, after watching how gravitational fields affect matter, that gravity optimizes for minimizing the distance between sources of mass.

From that it follows that gravity is not a particularly powerful optimization process. I conclude this because I observe many situations where the distance between masses is no longer being minimized, because gravity has only very limited ways of arranging its environment to achieve that goal. And I suspect that one of the things you're looking for, in attempting to arrive at a definition that distinguishes gravity from Clippy, is a notion of optimization power similar to this. In other words, it's possible that your question can be rephrased as "how do we measure optimization power?"
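
(For concreteness, here is a minimal sketch of one way to put a number on that, with a made-up outcome space and scores: ask how small a slice of the preference ordering the process managed to hit, measured in bits.)

```python
import math

def optimization_power_bits(outcomes, achieved, preference):
    """Bits of optimization: -log2 of the fraction of possible outcomes
    that rank at least as high as the achieved one under `preference`."""
    at_least_as_good = sum(1 for o in outcomes
                           if preference(o) >= preference(achieved))
    return -math.log2(at_least_as_good / len(outcomes))

# Toy example: 1000 equally likely outcomes, scored 0..999.
outcomes = range(1000)
print(optimization_power_bits(outcomes, achieved=990, preference=lambda o: o))
# ~6.64 bits: the process steered into the top 1% of its preference ordering.
# A process that lands in the middle of the ordering scores ~1 bit,
# i.e. it barely counts as optimizing at all.
```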

Another possible distinction among optimization processes that we often implicitly talk about here is value-independence. That is, when we talk about AGI, what's being evoked is a powerful optimization process that can optimize for paperclips, or shoes, or smileyfaces, or satisfied humans, or whatever it happens to value. It's just as powerful an optimization process either way. Gravity doesn't seem to have this property. Clippy might or might not.

The general assumption around here is that something as effective as Clippy is using algorithms that are generalizable; I'm not sure I've ever seen the idea of a non-generalizable powerful optimization process even discussed here. I suspect this derives in large part from the site's focus on Bayes' Theorem, which is entirely domain-independent, as the core of intelligence/optimization.

This focus is in-principle separable from the site's focus on optimizing systems, but in practice the two are not explicitly separated during discussion.

Replies from: Maelin
comment by Maelin · 2012-10-18T05:43:24.840Z · LW(p) · GW(p)

But it seems like every process can then be an optimisation process, and when you measure the optimisation power, that really tells you more about whether the 'optimisation target' you selected as your measure is a good fit for the process you're looking at. It tells you more about your interpretation of the optimisation target than it does about the process itself.

Gravity isn't very powerful for minimising distance between sources of mass, but it is very powerful for "making mass move in straight lines through curved spacetime"[1]. For any process at all, you just look at "whatever it actually ends up doing", and then say that was its optimisation target all along, and hey presto, it turns out there are superpowerful optimisation processes everywhere you look, all being hugely successful at making things turn out how (you think) they wanted, provided you think they wanted things the way they actually turned out. If you get to choose your own interpretation of what the optimisation target is, 'optimisation process' doesn't seem like a very useful notion at all.

Also, re: value independence: Evolution seems like a pretty definite candidate for what we want 'optimisation process' to mean, but its values seem to be pretty inextricably baked into the algorithm. You can't reprogram evolution to start optimising for paperclips, for example. It only optimises for whatever genes are selectively favoured by the environment.

[1] insert a more accurate description of what gravity does here if required.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-10-18T14:50:15.338Z · LW(p) · GW(p)

Yes, I agree that on this account every process is technically speaking an optimization process for some target, and I agree that optimization power can only be measured relative to particular target or class of targets.

That said, when we say that evolution, or a human-level intelligence, is an optimization process, we mean something rather more than this: we mean something like an optimization process with a lot of optimization power across a wide range of targets. (And, sure, I agree that if we hold the environment constant, including other evolved lifeforms, then evolution has a fixed target. If we lived in a universe where such constant environments were common, I suspect we would not be inclined to describe evolution as an optimization process.)

I don't see how that makes "optimization process" a useless notion. What do you want to use it for?

Replies from: Maelin
comment by Maelin · 2012-10-23T01:33:00.031Z · LW(p) · GW(p)

(apologies for delayed reply)

I really just want to know what Eliezer means by it. It seems to me like I have some notion of an optimisation process that says "yep, that's definitely an optimisation process" when I think about evolution and human minds and Clippy, and says "nope, that's not really an optimisation process - at least, not one worth the name" about water rolling down a hill and thermodynamics. And I think this notion is sufficiently similar to the one that Eliezer is using. But my attempts to formalise this definition, to reduce it, have failed: I can't find any combination of words that seems to capture the boundary that my intuition is drawing between Clippy and gravity.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-10-23T02:44:06.648Z · LW(p) · GW(p)

Treating the boundary between Clippy and gravity as a quantitative one (optimization power over a wide range of targets, as above) rather than a qualitative one does violence to some of my intuitions as well, but on balance I don't endorse those intuitions. Gravity is, technically speaking, an optimization process... though, as you say, it's not worth the name in a colloquial sense.

Replies from: Maelin
comment by Maelin · 2012-10-23T05:42:51.223Z · LW(p) · GW(p)

I find this very unsatisfying, not least because the optimisation power over a wide range of targets is easily gamed just by dividing any given 'target' of a process into a whole lot of smaller targets and then saying "look at all these different targets that the process optimised for!"

Claiming that optimisation power is defined simply by a process's ability to hit some target from a wide range of starting states, and/or by the range of targets it can hit, seems easily gameable by clever sophistry in how you choose the targets by which you measure its optimisation power. There must be some part of it that separates processes we feel genuinely are good at optimising (like Clippy) from processes that only come out as good at optimising if we select clever targets to measure them by.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-10-23T13:30:14.203Z · LW(p) · GW(p)

We seem to have very different understandings of what constitutes a wide range. A narrow target does not suddenly become a wide range of targets because I choose to subdivide it, any more than I can achieve a diversified stock portfolio by separately investing each dollar into the same company's stock.

So I'm still pretty comfortable with my original stance here: optimization is as optimization does.

That said, I certainly agree that clever sophistry can blur the meaning of our definitions. This seems like a good reason to eschew clever sophistry when analyzing systems I want to interact effectively with.

And I can appreciate finding it unsatisfying. Sometimes the result of careful thinking about a system is that we discover our initial intuitions were incorrect, rather than discovering a more precise or compelling way to express our initial intuitions.

There must be some part of it that separates processes we feel genuinely are good at optimising (like Clippy)

I'm not really sure what you mean by "part" here. But general-purpose optimizers are more interesting than narrow optimizers, and powerful optimizers are more interesting than less powerful optimizers, and if we want to get at what's interesting about them we need more and better tools than just the definition of an "optimization process".