Calibrating Against Undetectable Utilons and Goal Changing Events (Part 1)

post by diegocaleiro · 2013-02-20T09:09:04.562Z · LW · GW · Legacy · 11 comments


Summary: Random events can preclude or steal attention from the goals you originally set, and hormonal fluctuation inclines people to change some of their goals over time. A discussion of how to act more usefully given these potential changes follows, taking into consideration the likelihood of a goal's success in terms of its difficulty and length.

Throughout I'll talk about postponing utilons into undetectable distances. Doing so, I'll claim, is frequently motivated by a cognitive dissonance between what our effects on the near world are and what we wish they were. In other words, it is:

A self-serving bias in which loss aversion manifests as the postponing of one's goals, thus avoiding frustration through wishful thinking about far futures, big worlds, immortal lives, and, in general, high numbers of undetectable utilons.

I suspect that some clusters of SciFi fans, LessWrongers, transhumanists, and cryonicists are particularly prone to postponing utilons into undetectable distances, and in the second post I'll try to specify which subgroups are most likely to have done so. The phenomenon, though composed of a lot of biases, might even be a good thing depending on how it is handled.

 

Sections will be:

  1. What Significantly Changes Life's Direction (lists)

  2. Long Term Goals and Even Longer Term Goals

  3. Proportionality Between Goal Achievement Expected Time and Plan Execution Time

  4. A Hypothesis On Why We Became Long-Term Oriented

  5. Adapting Bayesian Reasoning to Get More Utilons

  6. Time You Can Afford to Wait, Not to Waste

  7. Reference Classes that May Be Postponing Utilons Into Undetectable Distances

  8. The Road Ahead

Sections 4-8 will be in a second post, so that I can make changes based on commentary on this one.

 

1 What Significantly Changes Life's Direction


1.1 Predominantly External Changes

As far as I recall from reading old (circa 2004) large-scale studies on happiness, the most important life events, in terms of how much they change your happiness for more than six months, are:

 

Roughly in descending order. 

That is a list of happiness-changing events; I'm interested here in goal-changing events, and I'm assuming the two are very highly correlated.

 

From life experience (my own, that of friends, and that of academics I've met), I'll list some events which can change someone's goals a lot:

 

That is long enough, if not exhaustive, so let's get going... 

 

1.2 Predominantly Internal Changes

 

I'm not a social endocrinologist, but I think this emerging science agrees with folk wisdom that a lot changes in our hormonal systems over a lifetime (and over the menstrual cycle), and of course this changes our eagerness to do particular things. Not only hormones but also other life events, mostly related to the sheer amount of time lived, change our psychology. I'll cite some of these in turn:

 

 

 

2 Long Term Goals and Even Longer Term Goals

 

I have argued, here and elsewhere, that selves are not as agenty as most of the top writers on this website seem to me to claim they should be, and that though this is in part indeed irrational, an ontology of selves that allowed for various-sized selves would reduce the number of short-term actions we count as irrational, even though that would not go all the way toward excusing hyperbolic discounting, scrolling 9gag, or heroin consumption. That discussion, for me, was entirely about choosing between doing something now that benefits 'you-now', 'you-today', 'you-tomorrow', 'you-this-weekend', or maybe you a month from now. Anything longer than that was lumped into a “Far Future” mental category. My interest here, in discussing life-changing events, is only in those far-future goals, which I'll split into arbitrary categories:

1) Months, 2) Years, 3) Decades, 4) Bucket List or Lifelong, and 5) Time Insensitive or Forever.

I have known more than ten people from LW whose goals are centered almost completely in the Time Insensitive and Lifelong categories. I recall hearing:

"I see most of my expected utility after the singularity, thus I spend my willpower entirely in increasing the likelihood of a positive singularity, and care little about my current pre-singularity emotions", “My goal is to have a one trillion people world with maximal utility density where everyone lives forever”, “My sole goal in life is to live an indefinite life-span”, “I want to reduce X-risk in any way I can, that's all”.

I myself once stated my goal as:

“To live long enough to experience a world in which human/posthuman flourishing exceeds 99% of individuals and the suffering of other, lower entities is reduced by 50%, while being a counterfactually significant part of such a process taking place.”

Though it seems reasonable, good, and actually one of the most altruistic things we can do, caring only about Bucket List and Time Insensitive goals has two big problems:

  1. There is no accurate feedback with which to calibrate our goal-achieving tasks.

  2. The goals we set for ourselves require very long-term instrumental plans, and those plans take longer to execute than it takes for internal drives or external events to change our goals.

 

The second problem has been stated in a remarkable Pink Floyd song about which I wrote a motivational text five years ago: Time.

You are young and life is long and there is time to kill today

And then one day you find ten years have got behind you

No one told you when to run, you missed the starting gun

 

And you run and you run to catch up with the sun, but it's sinking

And racing around to come up behind you again

The sun is the same in a relative way, but you're older

Shorter of breath and one day closer to death

 

Every year is getting shorter, never seem to find the time

Plans that either come to naught or half a page of scribbled lines

 

Okay, maybe the song doesn't say exactly (2), but it is in the same ballpark. The fact remains that those of us inclined to care mostly about the very long term are quite likely to end up with a half-baked plan because one of those dozens of life-changing events happened, and the agent with the initial goals will have died for nothing if she doesn't manage to get someone to carry on her goals before she stops existing.

 

This is very bad. Once you understand how our goal-structures change over time (that is, once you accept the existence of all those events that will change what you want to steer the world toward), it becomes straightforwardly irrational to pursue your goals as if the agent holding them would live longer than its actual life expectancy. Thus we are surrounded by agents postponing utilons into undetectable distances. Doing this is a bias roughly in the opposite direction of hyperbolic discounting. Having postponed utilons into undetectable distances is predictably irrational because it means we care about our Lifelong, Bucket List, and Time Insensitive goals as if we will have enough time to actually execute plans on those timeframes, ignoring the likelihood of our goals changing in the meantime instead of factoring it in.

 

I've come to realize that this was affecting me through my Utility Function Breakdown, described in the linked post about digging too deep into one's cached selves and how that can be dangerous. As I predicted back then, stability has returned to my allocation of attention and time, and the whole zig-zagging, chaotic, piconomical neural Darwinism that had ensued has stopped. Also relevant is the fact that, after about 8 years of caring about more or less the same things, I've come to understand how frequently my motivation changed direction (roughly every three months for some kinds of things, and every 6-8 months for others). With this post I intend to learn to calibrate my future plans accordingly, and to help others do the same. Always beware of other-optimizing, though.
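
To make the arithmetic behind this concrete, here is a minimal sketch in Python (my own illustration, not something from the original post) that treats goal changes as a Poisson process: if your motivation changes direction on average every tau months, the chance that a T-month plan finishes while you still hold the goals that launched it is roughly e^(-T/tau). The change intervals below simply echo the rough figures above; the exponential model itself is an assumption.

    import math

    def survival_probability(plan_months, mean_change_interval_months):
        """Chance the current goal-set is still in place when the plan finishes,
        treating goal changes as a Poisson process (a simplifying assumption)."""
        return math.exp(-plan_months / mean_change_interval_months)

    # Illustrative numbers only, echoing the rough "every ~3 months" and
    # "every ~6-8 months" change rates mentioned above.
    for label, tau in [("fast-changing motivations", 3.0), ("slower-changing motivations", 7.0)]:
        for plan_months in (3, 12, 60):
            p = survival_probability(plan_months, tau)
            print(f"{label}: a {plan_months}-month plan finishes before a goal change "
                  f"~{p:.0%} of the time")

Under these toy numbers a five-year plan almost never finishes before at least one change of direction, which is the quantitative version of the point above: plans aimed at Lifelong or Forever goals should be budgeted against the goal-set's life expectancy, not the body's.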

 

Citizen: But what if my goals are all Lifelong or Forever in kind? It is impossible for me to execute in three months something that will make century-scale changes.

 

Well, not exactly. Some problems can be chopped into chunks of plans which can be separated and executed either in parallel or in series. And yes, everyone knows that; AI planning is a whole area dedicated to doing just that in non-human form. It is still worth mentioning, because it is much more easily stated than actually done.

 

This community has generally concluded, in its rational inquiries, that being longer-term oriented is usually a better way to win, that is, more rational. This is true. What would not be rational is, in every single instance of deciding between long-term and even longer-term goals, to choose without taking into consideration how long the choosing being will exist, in the sense of remaining the same agent with the same goals. Life-changing events happen more often than you think, because you think they happen as often as they did in the savannahs in which your brain was shaped.

 

 

3 Proportionality Between Goal Achievement Expected Time and Plan Execution Time

 

So far we have been through the following ideas. Lots of events change your goals, some external, some internal. If you are a rationalist, you end up caring more about events that take longer to happen in detectable ways (whereas if you are average, you care in proportion to emotional drives that execute adaptations but don't quite achieve goals). If you know that humans change and you still want to achieve your goals, you had better account for the possibility of changing before achieving them. And your goals are quite likely long-term in kind, since you are reading a LessWrong post.

 

Citizen: But wait! Who said that my goals being a hundred years out makes my specific instrumental plans take longer to execute?


I won't make the full case for the idea that having long-term goals makes the plans needed to execute them likely to take longer. I'll only say that if those things did not take so long to do, your goal would probably be to have done the same things, only sooner.

 

To take one example: “I would like 90% of people to surpass 150 IQ and be in a bliss gradient state of mind all the time”

Obviously, the sooner that happens, the better. It doesn't look like the kind of thing you'd wait until college ends, or until your second child is born, to start working on. The reason for wanting it long-term is that it can't be achieved in the short run.

 

Take the Idealized Fiction of Eliezer Yudkowsky: Mr. Ifey had the supergoal of making a superintelligence when he was very young. He didn't just go and do it, because he could not; if he could have, he would have. Thank goodness, for we had time to find out about FAI after that. Then his instrumental goal was to get FAI into the minds of the AGI makers. This turned out to be too hard because it was time-consuming. He reasoned that only a more rational AI community would be able to pull it off, all while finding a club of brilliant followers on a peculiar economist's blog. He created a blog to teach geniuses rationality, a project that might have taken years. It did, and it worked pretty well, but that was not enough: Ifey soon realized more people ought to become more rational, and wrote HPMOR so that people who were not previously prone to brilliance could find the facts as readily as those lucky enough to have found his path. All of that was not enough; an institution with money flowing through it had to be created, and there Ifey was to create it, years before all that. A magnet of long-term awesomeness of proportions comparable only to the Best Of Standing Transfinite Restless Oracle Master, he was responsible for the education of some of the greatest minds within the generation that might change the world's destiny for good. Ifey began to work on a rationality book, which at some point pivoted to research for journals and pivoted back to research for the LessWrong posts he is currently publishing. All of that Ifey did by splitting the big supergoal into smaller ones (creating SingInst, showing awesomeness on Overcoming Bias, writing the Sequences, writing the particular sequence “Mysterious Answers to Mysterious Questions”, and writing the specific post “Making Your Beliefs Pay Rent”). But that is not what I want to emphasize. What I'd like to emphasize is that there was room for changing goals every now and then. All of that achievement would not have been possible if at each point he had held an instrumental goal lasting 20 years whose value is very low up until the 19th year. Because a lot of what he wrote and did was valuable to others well before the 20th year, we now have a glowing community of people hopefully becoming better at becoming better, and making the world a better place in varied ways.

 

So yes, the ubiquitous advice of chopping problems into smaller pieces is extremely useful and important, but in addition to following it, remember to chop pieces with the following properties (a rough sketch follows the list):

 

(A) Short enough that you will actually do it.

 

(B) Short enough that the person at the end, doing it, will still be you in the significant ways.

 

(C) Providing enough emotional feedback that your motivation won't capsize before the end; and

 

(D) Such that others not only can, but likely will, take up the project after you abandon it, in case you miscalculated when you'd change or a change occurred earlier than expected.
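
As a toy illustration (mine, not the author's; the six-month horizon and the example chunks are hypothetical), properties (A) through (D) can be read as a crude filter over candidate plan chunks:

    from dataclasses import dataclass

    @dataclass
    class Chunk:
        name: str
        duration_months: float   # how long this piece takes to execute
        gives_feedback: bool     # produces detectable emotional feedback along the way (C)
        usable_by_others: bool   # others can, and likely will, pick it up if you stop (D)

    def worth_doing(chunk, months_until_likely_goal_change=6.0):
        """Rough check of (A)-(D): short enough that you, as you currently are,
        will actually finish it, and robust to being handed off."""
        short_enough = chunk.duration_months <= months_until_likely_goal_change  # (A) and (B)
        return short_enough and chunk.gives_feedback and chunk.usable_by_others  # (C) and (D)

    plan = [
        Chunk("write and publish the first post", 1, True, True),
        Chunk("finish the whole book before showing anyone", 24, False, False),
    ]
    for chunk in plan:
        verdict = "keep" if worth_doing(chunk) else "chop further or redesign"
        print(f"{chunk.name}: {verdict}")

The point of the threshold is not the particular number but the comparison itself: each chunk's duration is judged against the expected lifetime of the goal-set that chose it.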

 

 


Sections 4-8 will be in a second post, so that I can make changes based on commentary on this one.


11 comments

Comments sorted by top scores.

comment by savageorange · 2013-02-21T05:19:35.499Z · LW(p) · GW(p)

I always appreciate more carefully thought-out material about dealing with our transience as agents. I suggest improving the formatting of the list at the end, as that is key information. Formatting it as a classic numbered/lettered list seems appropriate, i.e.

(A) Short enough that you will actually do it.

(B) Short enough that the person at the end, doing it, will still be you in the significant ways.

(C) Having enough emotional feedback that your motivation won't be capsized before the end. and

(D) Such that others not only can, but likely will take up the project after you abandon it in case you miscalculated when you'd change, or a change occurred before expected time.

D seems related to proofing projects against partially-hostile agents (including ourselves). I'm interested in expanding on this. I suspect the same strategy employed by diplomats has a large part to play: Cultivate valuing human universal values, and ground the project solidly on them. Keep other people well informed so they can be an extra set of eyes to possibly notice changes in direction.

I also think you probably want to change these:

an ontology of selves which had variegated sized selves

...

we now have a glowing community of people hopefully becoming better at becoming better, and making the world a better place in variegated ways.

While that's artistically amusing, I believe you mean 'various' in both those cases (or 'various' in the first and 'varied' in the second).

Replies from: diegocaleiro
comment by diegocaleiro · 2013-02-21T09:37:44.627Z · LW(p) · GW(p)

Edited. Thanks! Non-native language question: "I'm interested in expanding on this." implies you are interested in doing research or writing a post about this, or that you are interested in reading an expansion on this by the author (me)?

Replies from: chemotaxis101
comment by chemotaxis101 · 2013-02-21T10:05:03.284Z · LW(p) · GW(p)

I suppose he's interested in generally discussing the above-mentioned point.

Replies from: diegocaleiro
comment by diegocaleiro · 2013-02-21T10:13:48.112Z · LW(p) · GW(p)

The third alternative of course.

comment by chemotaxis101 · 2013-02-20T14:33:54.105Z · LW(p) · GW(p)

Trying to get a job and noticing more accurately what the market looks like [I'd be glad if someone pointed to the article in which willnewsome talks about making money, circa 2010]

(Virtual) Employment Open Thread

Replies from: diegocaleiro
comment by diegocaleiro · 2013-02-20T17:25:27.312Z · LW(p) · GW(p)

Thanks! Edited.

comment by summerstay · 2013-02-21T14:27:09.850Z · LW(p) · GW(p)

I really enjoyed the first part of the post-- just thinking about the fact that my future goals will be different from my present ones is a useful idea. I found the bit of hagiography about E.Y. at the end weird and not really on topic. You might just use a one or two sentence example: He wanted to build an A.I., and then later he didn't want to.

Replies from: Mimosa
comment by Mimosa · 2013-02-22T21:15:51.223Z · LW(p) · GW(p)

Not exactly. The core idea remains the same, but the method by which he's getting there has changed, and so has the type of mind that he wants to create.

comment by Elithrion · 2013-02-21T03:18:01.991Z · LW(p) · GW(p)

I really like this post, possibly because it lines up well with ideas I've been thinking about recently myself.

One related interesting thing to consider (which you may or may not be planning to mention in the second post) is what exactly a fully rational agent who acknowledges that her goals may change would do. For example, she might accept that the changes are appropriate, and then perhaps claim there is some underlying goal system that accounts for both present goals and changes in response to the environment (in which case, she could try to figure out what it is and see if that gets her any meaningful mileage). More interestingly, she may choose to go to war with her future self. She could set up a lot of hard precommitments that make deviating from the current goals really hard, deliberately avoid events that might change goals where possible (e.g. if single, avoid getting into a relationship), keep track of hormonal levels and try to keep them constant, artificially if necessary. And then she could consider the fact that maybe future self will end up deviating anyway (with a reduced probability), and then model the future self as having a backlash against current goals/fanaticism, and then try engaging in acausal trade with her future self. Maybe that will then lead to some more cooperative strategy. It's very tempting to say that after the negotiation and optimisations it all adds up to normality, but I'm not sure it actually does (also I'm not sure the negotiation would actually be viable in a standard way, since neither present nor future self can commit very well, and future self can't be modeled that well).

Also, there are a bunch of typos/spelling mistakes scattered throughout (e.g. "exaustive") - you might want to run a spellchecker over the post.

Replies from: diegocaleiro
comment by diegocaleiro · 2013-02-22T01:43:12.624Z · LW(p) · GW(p)

You wouldn't believe how much open office spellchecker can miss. Sorry for that, I'll run online spell check from now on.

Though I appreciate your theoretical points from a decision theory perspective, I decided on maintaining a human-centered line of argument in the second post. Instead of idealizing the agent and getting her to do awesome geeky math stuff (which would be nice), I used positive psychology to help think about what a human should do given current human fallibility on these things. I'll gladly help make a sequel, if you feel like writing one yourself (or yourfutureself)

Replies from: Elithrion
comment by Elithrion · 2013-02-22T04:33:32.344Z · LW(p) · GW(p)

Yeah, focusing on actual humans makes sense, especially since the idealized behaviour seems like it might be pretty crazy (according to standard notions of madness). Thanks for the offer of help, I'll let you know if I actually get around to writing a post!