Calibrating Against Undetectable Utilons and Goal Changing Events (part2and1)

post by diegocaleiro · 2013-02-22T01:09:16.987Z · LW · GW · Legacy · 11 comments

Contents

  1 What Significantly Changes Life's Direction
  1.1 Predominantly external changes
  1.2 Predominantly Internal Changes
  2 Long Term Goals and Even Longer Term Goals
  3 Proportionality Between Goal Achievement Expected Time and Plan Execution Time
  4 A Hypothesis On Why We Became Long-Term Oriented
  5 Adapting Bayesian Reasoning to Get More Utilons
  6 Time You Can Afford to Wait, Not to Waste
  7 Reference Classes that May Be Postponing Utilons Into Undetectable Distances
  8 The Road Ahead

Here is the original unchanged post with sections 1-3 and the new sections 4-8. If you read the first post, go straight to section 4.

Summary: Random events can preclude or steal attention from the goals you originally set, and hormonal fluctuation inclines people to change some of their goals over time. A discussion follows on how to act more usefully given those potential changes, taking into consideration the likelihood of a goal's success in terms of its difficulty and length.

 

Throughout I'll talk about postponing utilons into undetectable distances. Doing so, I'll claim, is frequently motivated by a cognitive dissonance between what our effects on the near world are and what we wish they were. In other words, it is:

A self-serving bias in which loss aversion manifests as postponing one's goals, thus avoiding frustration through wishful thinking about far futures, big worlds, immortal lives, and, in general, high numbers of undetectable utilons.

I suspect that some clusters of SciFi fans, LessWrongers, transhumanists, and cryonicists are particularly prone to postponing utilons into undetectable distances, and here I try to think of which subgroups might be most likely to have done so. The phenomenon, though composed of a lot of biases, might even be a good thing depending on how it is handled.

 

Sections will be:

  1. What Significantly Changes Life's Direction (lists)

  2. Long Term Goals and Even Longer Term Goals

  3. Proportionality Between Goal Achievement Expected Time and Plan Execution Time

  4. A Hypothesis On Why We Became Long-Term Oriented

  5. Adapting Bayesian Reasoning to Get More Utilons

  6. Time You Can Afford to Wait, Not to Waste

  7. Reference Classes that May Be Postponing Utilons Into Undetectable Distances

  8. The Road Ahead

 

 

1 What Significantly Changes Life's Direction


1.1 Predominantly external changes

As far as I recall from reading old (circa 2004) large-scale studies on happiness, the most important life events, in terms of how much they change your happiness for more than six months, are: 

 

Roughly in descending order. 

That is a list of happiness-changing events; I'm interested here in goal-changing events, and I'm assuming the two are very highly correlated.

 

From life experience (mine, my friends', and that of academics I've met), I'll list some events which can change someone's goals a lot:

 

That is long enough, if not exhaustive, so let's get going... 

 

1.2 Predominantly Internal Changes

 

I'm not a social endocrinologist, but I think this emerging science agrees with folk wisdom that a lot changes in our hormonal systems during life (and during the menstrual cycle), and of course this changes our eagerness to do particular things. Not only hormones but also other life events, mostly related to the actual amount of time lived, change our psychology. I'll cite some of those in turn:

 

 

 

2 Long Term Goals and Even Longer Term Goals

 

I have argued sometimes, here and elsewhere, that selves are not as agenty as most of the top writers on this website seem to me to claim they should be. Though in part this is indeed irrational, an ontology of selves that had variously sized selves would decrease the amount of short-term action considered irrational, even though that would not go all the way toward compensating for hyperbolic discounting, scrolling 9gag, or heroin consumption. That discussion, for me, was entirely about choosing between doing now something that benefits you-now, you-today, you-tomorrow, you-this-weekend, or maybe you a month from now. Anything longer than that was encompassed in a "Far Future" mental category. My interest here in discussing life-changing events is only in those far-future ones, which I'll split into arbitrary categories:

1) Months, 2) Years, 3) Decades, 4) Bucket List or Lifelong, and 5) Time Insensitive or Forever.

I have known more than ten people from LW whose goals are centered almost completely on the Time Insensitive and Lifelong categories. I recall hearing:

"I see most of my expected utility after the singularity, thus I spend my willpower entirely in increasing the likelihood of a positive singularity, and care little about my current pre-singularity emotions", “My goal is to have a one trillion people world with maximal utility density where everyone lives forever”, “My sole goal in life is to live an indefinite life-span”, “I want to reduce X-risk in any way I can, that's all”.

I myself once stated my goal as:

“To live long enough to experience a world in which human/posthuman flourishing extends to more than 99% of individuals and the suffering of other, lower entities is reduced by 50%, while being a counterfactually significant part of such a process taking place.”

Though it seems reasonable, good, and actually one of the most altruistic things we can do, caring only about Bucket List and Time Insensitive goals has two big problems:

  1. There is no accurate feedback with which to calibrate our goal-achieving tasks.

  2. The goals we set for ourselves require very long-term instrumental plans, which themselves take longer to complete than the time it takes for internal drives or external events to change our goals.

 

The second point has been made in a remarkable Pink Floyd song about which I wrote a motivational text five years ago: Time.

You are young and life is long and there is time to kill today

And then one day you find ten years have got behind you

No one told you when to run, you missed the starting gun

 

And you run and you run to catch up with the sun, but it's sinking

And racing around to come up behind you again

The sun is the same in a relative way, but you're older

Shorter of breath and one day closer to death

 

Every year is getting shorter, never seem to find the time

Plans that either come to naught or half a page of scribbled lines

 

Okay, maybe the song doesn't say exactly (2), but it is in the same ballpark. The fact remains that those of us inclined to care mostly about the very long term are quite likely to end up with a half-baked plan because one of those dozens of life-changing events happened, and the agent with the initial goals will have died in vain if she doesn't manage to get someone to continue her goals before she stops existing.

 

This is very bad. Once you understand how our goal-structures do change over time – that is, once you accept the existence of all those events that will change what you want to steer the world toward – it becomes straightforwardly irrational to pursue your goals as if that agent would live longer than its actual life expectancy. Thus we are surrounded by agents postponing utilons into undetectable distances. Doing this is a kind of bias in the opposite direction of hyperbolic discounting. Having postponed utilons into undetectable distances is predictably irrational because it means we care about our Lifelong, Bucket List, and Time Insensitive goals as if we'd have enough time to actually execute the plans for these timeframes, without factoring in the likelihood of our goals changing in the meantime.

 

I've come to realize that this was affecting me during my Utility Function Breakdown, described in the linked post about digging too deep into one's cached selves and how this can be dangerous. As I predicted back then, stability has returned to my allocation of attention and time, and the whole zig-zagging, chaotic, piconomical neural Darwinism that had ensued has stopped. Also relevant is the fact that after about 8 years of caring about more or less the same things, I've come to understand how frequently my motivation changed direction (roughly every three months for some kinds of things, and every 6-8 months for others). With this post I intend to learn to calibrate my future plans accordingly, and to help others do the same. Always beware of other-optimizing, though.

 

Citizen: But what if my goals are all Lifelong or Forever in kind? It is impossible for me to execute in 3 months something that will make century-scale changes.

 

Well, not exactly. Some problems can be broken into chunks of plans which can be separated and executed either in parallel or in series. And yes, everyone knows that; AI planning is a whole area dedicated to doing just that in non-human form. It is still worth mentioning, because it is far more often acknowledged than actually done.

 

This community has generally concluded, in its rational inquiries, that being longer-term oriented is generally a better way to win, that is, that it is more rational. This is true. What would not be rational is, in every single instance of deciding between long-term or even longer-term goals, to choose without taking into consideration how long the choosing being will exist, in the sense of remaining the same agent with the same goals. Life-changing events happen more often than you think, because you think they happen as often as they did in the savannahs in which your brain was shaped.

 

 

3 Proportionality Between Goal Achievement Expected Time and Plan Execution Time

 

So far we have been through the following ideas. Lots of events change your goals, some externally, some internally. If you are a rationalist, you end up caring more about events that take longer to happen in detectable ways (whereas if you are average, you care in proportion to emotional drives that execute adaptations but don't quite achieve goals). If you know that humans change and you still want to achieve your goals, you'd better account for the possibility of changing before their achievement. And your kinds of goals are quite likely prone to the long term, since you are reading a LessWrong post.

 

Citizen: But wait! Who said that my goals happening in a hundred years makes my specific instrumental plans take longer to be executed?


I won't make the full case for the idea that having long-term goals increases the likelihood that your plans take longer to execute. I'll only say that if it did not take that long to do those things, your goal would probably be to have done the same things, only sooner.

 

To take one example: “I would like 90% of people to surpass 150 IQ and be in a bliss gradient state of mind all the time”

Obviously, the sooner that happens, the better. It doesn't look like the kind of thing you'd wait for college to end to begin working on, or for your second child to be born. The reason for wanting this long-term is that it can't be achieved in the short run.

 

Take the Idealized Fiction of Eliezer Yudkowsky: Mr. Ifey had the supergoal of making a Superintelligence when he was very young. He didn't just go and do it, because he could not; if he could have, he would have. Thank goodness, for we had time to find out about FAI after that. Then his instrumental goal was to get FAI into the minds of the AGI makers. This turned out to be too hard, because it was time-consuming. He reasoned that only a more rational AI community would be able to pull it off, all while finding a club of brilliant followers on a peculiar economist's blog. He created a blog to teach geniuses rationality, a project that might take years. It did, and it worked pretty well, but that was not enough. Ifey soon realized more people ought to be more rational, and wrote HPMOR to make people who were not previously prone to brilliance as able to find the facts as those who were lucky enough to have found his path. All of that was not enough: an institution with money flow had to be created, and there Ifey was to create it, years before all that. A magnet of long-term awesomeness of proportions comparable only to the Best Of Standing Transfinite Restless Oracle Master, he was responsible for the education of some of the greatest within the generation that might change the world's destiny for good. Ifey began to work on a rationality book, which at some point pivoted to research for journals and pivoted back to research for the LessWrong posts he is currently publishing.

All of that Ifey did by splitting that big supergoal into smaller ones (creating SingInst, showing awesomeness on Overcoming Bias, writing the Sequences, writing the particular sequence "Mysterious Answers to Mysterious Questions", and writing the specific post "Making Your Beliefs Pay Rent"). But that is not what I want to emphasize. What I'd like to emphasize is that there was room for changing goals every now and then. All of that achievement would not have been possible if at each point he had had an instrumental goal lasting 20 years whose value was very low up until the 19th year. Because a lot of what he wrote and did was valuable to others before the 20th year, we now have a glowing community of people hopefully becoming better at becoming better, and making the world a better place in varied ways.

 

So yes, the ubiquitous advice of chopping problems into smaller pieces is extremely useful and very important, but in addition to it, remember to chop pieces with the following properties:

 

(A) Short enough that you will actually do it.

 

(B) Short enough that the person at the end, doing it, will still be you in the significant ways.

 

(C) Carrying enough emotional feedback that your motivation won't capsize before the end; and

 

(D) Such that others not only can, but likely will, take up the project after you abandon it, in case you miscalculated when you'd change, or a change occurred earlier than expected.

 

4 A Hypothesis On Why We Became Long-Term Oriented

 

For anyone who has rejoiced in the company of the writings of Derek Parfit, George Ainslie, or Nick Bostrom, there are a lot of very good reasons to become more long-term oriented. I am here to ask you about those reasons: is that your true acceptance?

 

It is not for me. I became longer-term oriented for different reasons. Two obvious ones are genetics expressing in me the kind of person who waits a year for the extra marshmallow while fantasizing about marshmallow worlds and rocking-horse pies, and, secondly, wanting to live thousands of years. But the one I'd like to suggest might be relevant to some here is that I was very bad at making people who were sad or hurt happy. I was not, as they say, empathic. It was a piece of cake bringing folks from a neutral state to joy and bliss, but if someone got angry or sad, especially sad about something I did, I was absolutely powerless about it. This is only one way of not being good with people, not being a people person, etc. So my emotional system, like the tale's Big Bad Wolf, blew, and blew, and blew, until my utilons were comfortably sitting aside in the Far Future, where none of them could look back at my face, cry, and point to me as the cause of the tears.

 

Paradoxically, though understandably, I have since been thankful for that lack of empathy towards those near. In fact, I have claimed, somewhere I forget, that it is the moral responsibility of those with less natural empathy of the giving-to-beggars kind to care about the far future, since so few are within this tiny psychological mindspace of being able to care abstractly while not caring that much visibly/emotionally. We are such a minority that foreign aid seems to be the item of public policy that is most disproportionate between countries (Savulescu, J. - Genetically Enhance Humanity or Face Extinction, 2009 video). Just as the minority of billionaires ought to be more like Bill Gates, Peter Thiel, and Jaan Tallinn, the minority of under-empathic folk ought to be more like an economist doing quantitative analysis in order to save or help in quantitative ways. Let us look at our examples again:

 

“My goal is to have a one trillion people world with maximal utility density where everyone lives forever”, "I see most of my expected utility after the singularity, thus I spend my willpower entirely in increasing the likelihood of a positive singularity, and care little about my current pre-singularity emotions", “I want to reduce X-risk in any way I can, that's all” , “My sole goal in life is to live an indefinite life-span”.

 

So maybe their true (or original) acceptance of the Longterm, like mine, was something like: genes + death sucks + I'd rather interact with people of the future, whose bots in my mind smile, than with the actually meaty folk around me, with all their specific problems, complicated families, and boring Christian relationship problems. This is my hypothesis. Even if true, notice that it does not imply that the Longterm isn't rational; after all, Parfit, Bostrom, and Ainslie are still standing, even after careful scrutiny.

 


5 Adapting Bayesian Reasoning to Get More Utilons

 

Just as many within this community praise Bayesian reasoning but don't explicitly keep track of belief distributions (as far as I recall, of everyone I've met, only Steve Rayhawk and Anna Salamon kept a few beliefs numerically), few if any would need the specific math to calibrate and slide their plans towards 'higher likelihood of achievement' given the probability of their goals changing over time. In other words, few need to do actual math to account intuitively - though not accurately - for changing their plans in such a way that the revised plans are more likely to work than they were before.

 

It doesn't amount to much more than simple multiplication and comparison. If you knew, for a period of time, how likely you are at each point to become someone who no longer has goal X, you should strive for X to be you-independent by the time that transformation is likely to happen. But how likely should the gruesome transformation be at the point in time at which you expect the plan to be over? 10% would probably make you a blogger, 30% a book writer, 50% a Toyota-like company creator, and 90% would probably make you useless, since 90% of the time no legacy would be left of your goals. It would be the rationalist equivalent of having ADHD. And if you also have actual ADHD, the probabilities multiply into a chaotic constant shift of attention in which your goals are not realized 90% of the time even though they last very short times; that is probably why the world looks insane.
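
To make the multiplication concrete, here is a minimal sketch, assuming a constant monthly probability of a goal-changing event. That constant rate, the function name, and all the numbers are my own illustrative assumptions, not anything the argument above commits to (the post itself suggests motivation shifts roughly every 3-8 months).

    # Minimal sketch: how likely is it that the agent holding goal X still
    # exists (same agent, same goal) when a plan of a given length ends?
    # Assumes a constant monthly chance of a goal-changing event, which is
    # a simplification; real hazard rates vary with age and circumstance.

    def goal_survival(p_change_per_month, plan_months):
        """Probability that goal X is still yours after plan_months."""
        return (1.0 - p_change_per_month) ** plan_months

    # Illustrative number: a goal-changing event about every 6 months.
    p_monthly = 1.0 / 6.0

    for months in (3, 6, 12, 24, 60):
        survival = goal_survival(p_monthly, months)
        print(f"{months:>3}-month plan: P(same goal at the end) = {survival:.0%}, "
              f"P(goal changed) = {1 - survival:.0%}")

With these made-up numbers, a one-year plan already leaves an almost 90% chance that the planning agent, in the relevant sense, is gone before it finishes - roughly the 'useless' end of the blogger / book writer / company creator comparison above.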

 

Then again, how likely? My suggestion would be to use a piece of knowledge that comes from that derailed subset of positive psychology, human resources. There is a ratio called the Losada Ratio, a.k.a. the Losada Line.

2.9013

The Schelling Point for talking about it to increase its memetic spreading is 3, so I'll use 3 here. But what is it? From Wikipedia:

The Losada Line, also known as the "Losada ratio," was described by psychologist Marcial Losada while researching the differences in ratios of positivity and negativity between high and low performance teams.[1][2]

The Losada Line represents a positivity/negativity ratio of roughly 2.9, and it marks the lower boundary of the Losada Zone (the upper bound is around 11.6). It was corroborated by Barbara Fredrickson, a psychologist at the University of North Carolina, Chapel Hill, in individuals, and by Waugh and Fredrickson in relationships.[3] They found that the Losada Line separates people who are able to reach a complex understanding of others from those who do not. People who "flourish" are above the Losada Line, and those who "languish" are below it.[4][5] The Losada Line bifurcates the type of dynamics that are possible for an interactive human system. Below it, we find limiting dynamics represented by fixed-point attractors; at or above it, we find innovative dynamics represented by complex order attractors (complexor).

3:1 Compliments per complaint. Three requests per order. Three "That's awesome!" per "Sorry, I didn't like that"

 

Getting back to our need for a Bayesian slide/shift in which we increase how likely our goals are to be achieved: wouldn't it be great if we set out to achieve above the Losada Line, thus keeping ourselves motivated by having reality compliment us within the Losada Zone?

 

3:1 is the ratio of expected successes to failures that I suggest for Longterm folk in the process of dividing their Time Insensitive supergoals - which, granted, may be as unlikely as Kurzweil actually achieving longevity escape velocity - into smaller instrumental goals.

If you agree with most of what has been said so far, and you'd like to be rewarded by your boss, Mr. Reality, in the proportion enjoyed by those who thrive, while doing stuff more useful than playing video games (which are already designed to keep you within the Losada Zone), I suggest you try to do instrumental stuff with a 75% expected likelihood of success.
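
As a toy illustration of how the 75% target constrains chunk size, here is a sketch that combines the goal-survival assumption from the earlier snippet with a made-up execution reliability. Everything in it (the constant monthly change rate, the independence of the two factors, the specific numbers) is my own illustrative assumption, not part of the post's argument.

    import math

    # Sketch: the longest instrumental chunk that keeps expected success at or
    # above the 3:1 Losada-inspired target (75%). A chunk "succeeds" only if
    # (a) you still hold the goal when it ends and (b) the work itself goes
    # right; the two are treated as independent, which is an assumption.

    def max_chunk_months(p_change_per_month, p_execution, target=0.75):
        """Longest chunk (in months) whose expected success stays >= target."""
        if p_execution < target:
            return 0.0  # even an instantaneous chunk cannot hit the target
        return math.log(target / p_execution) / math.log(1.0 - p_change_per_month)

    # Illustrative numbers: a goal-changing event about every 6 months, and a
    # 90% chance of executing a chunk you still care about.
    print(max_chunk_months(1.0 / 6.0, 0.9))  # ~1 month per chunk

Under these invented numbers, the supergoal would need to be carved into roughly month-long pieces, each of which pays off on its own, if Mr. Reality is to keep complimenting you at the 3:1 rate.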

 

Once well calibrated, you'll succeed three out of four times in your endeavours, emotionally, which will keep you away from learned helplessness while still excited about the Longterm, Lifelong, Far Futuristic ideals with lower likelihoods. I would hope this could complement the whole set of anti-akrasia posts on LessWrong.

 

For those who would really like to think about the math involved, take a look at the three introductions to Bayesianism available on LessWrong, and complement them with the idea of the Bayesian shift related to the Doomsday argument, which can be found on Wikipedia or in Bostrom's book Anthropic Bias.
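
The post only points at that literature, but one simple relative of the Doomsday-style shift is Gott's delta-t argument, sketched below as a toy applied to goal persistence. The application to goals, the function name, and the numbers are all my own illustrative assumptions; the only borrowed piece is the standard formula, which assumes the present moment is uniformly distributed over the total lifetime of whatever is being observed.

    # Toy doomsday-style shift (Gott's delta-t argument) applied to goals:
    # if you have held your current goal for t_past months and assume you are
    # at a uniformly random point in its total lifetime, then with confidence c
    # the remaining time t_future satisfies
    #     t_past * (1 - c) / (1 + c)  <=  t_future  <=  t_past * (1 + c) / (1 - c)

    def remaining_goal_time_bounds(t_past_months, confidence=0.5):
        lo = t_past_months * (1 - confidence) / (1 + confidence)
        hi = t_past_months * (1 + confidence) / (1 - confidence)
        return lo, hi

    # Example: you have cared about X for 8 months; with 50% confidence it will
    # keep mattering to you for somewhere between ~2.7 and 24 more months.
    print(remaining_goal_time_bounds(8.0, 0.5))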

 

 

6 Time You Can Afford to Wait, Not to Waste

 

Many of us are still younger than 25. Unless you started trying to achieve goals particularly young, if you are 18 you are likely undergoing a twister of events and hormones, and whatever you guess will end up being your average span of motivation without internal or external interruption won't actually be more than a guess. By 25 you are familiar with yourself, and probably able to get your projects in tune with your own frequency of change. But what should you do before that? One suggestion that comes to mind is to create plans that actually require quite a long time.

For the same reasons that those whose natural empathy is a little less than normal bear a moral responsibility to take care of those a little distant, those who do not yet know whether they are able to set out to do the impossible, with accuracy and consistency over long periods, should probably try doing so. It is better to have false positives than false negatives in this case. Not only would you never know how long you'd last if you settled for the short term, but also the very few who are able to go long-term would never be found.

So if you are young (in your dialect's meaning of 'young'), shut up, do the impossible, save the world. Isn't that the point of the Thiel Fellowship for Twenty Under Twenty anyway?

 

 

7 Reference Classes that May Be Postponing Utilons Into Undetectable Distances

 

Of course LessWrongers are prone to postponing utilons into undetectable distances. But which subsets? I regard the highest-risk groups to be:

 

Cryocrastinators - On the one hand they want to live forever; on the other hand, their plan of doing something about it has never succeeded. This looks like undercalibration.

The subset of Effective Altruists who care mostly about future people/minds they'll never meet - I find myself in this group.

The subset of Singularitarians whose emotions are projected to SL4 and afterwards - who are doing something akin to the ascetic folk who talk about life as if it were set in the future, after death, making them less able to deal with their daily meaty human problems.

 

Sure, there will be a large minority of each group which doesn't fall prey to postponing utilons into undetectable distances, and sure, you belong to that minority, but stressing the point makes it salient enough that if you ever find yourself trying to rationalize about this more than you should, you'll think twice.

 

It has been said to me that if that is what makes an Effective Altruist, so be it! And thank goodness we have them. To which I say: Yes!


Every characteristic has a causal origin, and frequently a story can be told which explains the emergence of that characteristic. Given that we are all made of biases, no wonder some of those stories will have biases in prominent roles. This does not invalidate the ethics of those whose lives were shaped by those stories. In fact, if a transmissible disease of the mind were the sole cause of awesome people in this world, scientists would be trying to engineer mosquitoes to become more, not less, infectious.

 

Does that mean I spent this entire post arguing that something like a bias exists which we'd better not overcome? Or worse yet, is all of this post an attempt to justify caring about those far away even though it would be emotionally much harder - and thus feel more real - to care about those near? Maybe; you be the judge.

 

There are also several other, smaller reference classes that deserve mention: the Big Worlds fans of Tegmark or Vilenkin who think about superintelligence fights, the economics of acausal trade, Permutation City, and so on... 

A good heuristic for checking this is to see whether the person has a low tolerance for frustration plus a lot of undetectable utilons in her worldview. Interestingly, the more undetectable utilons you have, the more it looks like you are just extrapolating the universally accepted ethical idea of expanding your circle of altruism, one that has been called, of all things, your "circle of empathy".

 

8 The Road Ahead

 

In this community, and this is perhaps its greatest advantage, there can never be enough stress on deciding well which direction to take, or which tool to use. If you are about to travel, it is argued, the most important thing is to figure out where you are going and to steer your future in that direction. What I suggest is just as important as the direction in which you decide to travel: figure out your tank's size and how often you will need gas stations on the way there. It is better to arrive at a worse place than to be stymied by an inner force which, even if you can't quite control it, you can at least make much less likely to derail you, and much less damaging when it does.

 


 

11 comments

Comments sorted by top scores.

comment by Giles · 2013-02-22T13:54:22.213Z · LW(p) · GW(p)

This is like a whole sequence condensed into a post.

Replies from: diegocaleiro
comment by diegocaleiro · 2013-02-22T17:48:21.242Z · LW(p) · GW(p)

I was told to divide it into parts before. I may republish it in that form if people would like it. The problem was that several sections are too dependent on others, and those I asked before thought it was worse in parts than as a whole.

Replies from: ModusPonies
comment by ModusPonies · 2013-02-22T20:31:36.470Z · LW(p) · GW(p)

Making this a single post was the right choice. A series of posts would be appropriate for many different ideas that build on each other, whereas this seems like a single idea with many facets.

comment by moridinamael · 2013-02-22T22:10:44.621Z · LW(p) · GW(p)

I'm wondering if there isn't more discussion here because other people don't like this as much as I do, or because they're as blown away as I am and still processing the content.

comment by RomeoStevens · 2013-02-23T05:22:42.321Z · LW(p) · GW(p)

Why would I supplant my near self with my far self when my far self cares way less about my happiness?

edit: this is an honest question not an attempt at humor or snark.

Replies from: diegocaleiro
comment by diegocaleiro · 2017-02-22T12:42:10.180Z · LW(p) · GW(p)

It seems to me the far self is more orthogonal to your happiness. You can try to optimize for maximal long term happiness.

Replies from: RomeoStevens
comment by RomeoStevens · 2017-02-23T08:37:21.885Z · LW(p) · GW(p)

This doesn't seem very feasible to me given both the prediction horizon being short and my preferences changing in ways I would not have predicted. Option value seems like a much better framework for thinking about the future.

comment by maia · 2013-02-23T00:01:00.621Z · LW(p) · GW(p)

I don't quite understand your section on young people. Are you saying that people under 25, despite the fact that their goals are probably changing rapidly, should try to make long term plans? And that these people especially should try to do impossible things?

(I'm in that group, so I am curious.)

Replies from: diegocaleiro
comment by diegocaleiro · 2013-02-23T02:15:46.189Z · LW(p) · GW(p)

I am saying that there will be few in that group who will actually manage to pull off long-term work. Since they have more time ahead of them, they should try, even if their likelihood of success is low anyway. Basically, use that time figuring out if you belong to that minority. If you do, use that in your favour. If you don't, that is fine, you still have time to go.

I was going to say just the above paragraph, but I sense I'm not dealing with the least convenient world here. So I'll take another shot. There are two separate statements: 1) Young people are worse at predicting their internal and external rates of change, and thus should wait before they start predicting that about themselves. 2) Young people have more time, so they should go for the long term, both because they won't learn how long they usually last if they try only short-term projects, and because there are some plans in the world that just have to be long-term, and whoever ends up executing those is probably young, since the plans need a long time to execute (by definition).

Yeah, I sense the contradiction there as much as you do, and I don't know what to do about it... Maybe I'm a bad designer of planners, or maybe Nature sucks and there is not much to be done about it.

Replies from: maia
comment by maia · 2013-02-24T18:17:39.142Z · LW(p) · GW(p)

That makes a lot more sense, actually. Young people have the most time to carry out long-term plans, and there are some plans that take so long that you have to start when you are young to have any hope of completing them.

But young people also have quickly changing goals, so... that sucks.

comment by Shmi (shminux) · 2013-02-22T18:44:48.653Z · LW(p) · GW(p)

Given that your part1 currently sits at a negative karma value (for the record, I did not vote it up or down) and generated almost no interest, I'm wondering why you have decided to publish more of the same.