Why might the future be good?

post by paulfchristiano · 2013-02-27T07:22:09.782Z · 14 comments

Contents

  How much altruism do we expect?
    Does natural selection select for self-interest?
    Historical distortions
      Short-range consequentialism
      Value drift
    So what does natural selection select for?
    What values are we starting with?
  How important is altruism?
  Conclusion

 

(Cross-posted from Rational Altruist. See also recent posts on time-discounting and self-driving cars.)

When talking about the future, I often encounter two (quite different) stories describing why the future might be good:

  1. Decisions will be made by people whose lives are morally valuable and who want the best for themselves. They will bargain amongst each other and create a world that is good to live in. Because my values are roughly aligned with their aggregate preferences, I expect them to create a rich and valuable world (by my lights as well as theirs).
  2. Some people in the future will have altruistic values broadly similar to my own, and will use their influence to create a rich and valuable world (by my lights as well as theirs).

Which of these pictures we take more seriously has implications for what we should do today. I often have object level disagreements which seem to boil down to disagreement about which of these pictures is more important, but rarely do I see serious discussion of that question. (When there is discussion, it seems to turn into a contest of political ideologies rather than facts.)

If we take picture (1) seriously, we may be interested in ensuring that society continues to function smoothly, that people are aware of and pursue what really makes them happy, that governments are effective, markets are efficient, externalities are successfully managed, etc. If we take picture (2) seriously, we are more likely to be concerned with changing what the people of the future value, bolstering the influence of people who share our values, and ensuring that altruists are equipped to embark on their projects successfully.

I'm mostly concerned with the very long run---I am wondering what conditions will prevail for most of the people who live in the future, and I expect most of them to be alive very far from now.

It seems to me that there are two major factors that control the relative importance of pictures (1) and (2): how prominent should we expect altruism to be in the future, and how efficiently are altruistic vs. selfish resources being used to create value? My answer to the second question is mostly vague hand-waving, but I think I have something interesting to say on the first question.

How much altruism do we expect?

I often hear people talking about the future, and the present for that matter, as if we are falling towards a Darwinian attractor of cutthroat competition and vanishing empathy (at least as a default presumption, which might be averted by an extraordinary effort). I think this picture is essentially mistaken, and my median expectation is that the future is much more altruistic than the present.

Does natural selection select for self-interest?

In the world of today, it may seem that humans are essentially driven by self-interest, that this self-interest was a necessary product of evolution, that good deeds are principally pursued instrumentally in service of self-interest, and that altruism only exists at all because it is too hard for humans to maintain a believable sociopathic facade.

If we take this situation and project it towards a future in which evolution has had more time to run its course, creating automations and organizations less and less constrained by folk morality, we may anticipate an outcome in which natural selection has stripped away all empathy in favor of self-interest and effective manipulation. Some may view this outcome as unfortunate but inevitable, others may view it as a catastrophe which we should work to avert, and still others might view it as a positive outcome in which individuals are free to bargain amongst themselves and create a world which serves their collective interest.

But evolution itself does not actually seem to favor self-interest at all. No matter what your values, if you care about the future you are incentivized to survive, to acquire resources for yourself and your descendants, to defend yourself from predation, etc. etc. If I care about filling the universe with happy people and you care about filling the universe with copies of yourself, I'm not going to set out by trying to make people happy while allowing you and your descendants to expand throughout the universe unchecked. Instead, I will pursue a similar strategy of resource acquisition (or coordinate with others to stop your expansion), to ensure that I maintain a reasonable share of the available resources which I can eventually spend to help shape a world I consider valuable. (See here for a similar discussion.)

This doesn't seem to match up with what we've seen historically, so if I claim that it's relevant to the future I have some explaining to do.

Historical distortions

Short-range consequentialism

One reason we haven't seen this phenomenon historically is that animals don't actually make decisions by backwards-chaining from a desired outcome. When animals (including humans) engage in goal-oriented behavior, it tends to be pretty local, without concern for consequences which are distant in time or space. To the extent that animal behavior is goal-oriented at a large scale, those goals are largely an emergent property of an interacting network of drives, heuristics, etc. So we should expect animals to have goals which lead them to multiply and acquire resources, even when those drives are pursued short-sightedly. And indeed, that's what we see. But it's not the fault of evolution alone---it is a product of evolution given nature's inability to create consequentialist reasoners.

Casual observation suggests a similar situation with respect to human organizations---organizations which value expansion for its own sake (or one of its immediate consequences) are able to expand aggressively, while organizations which don't value expansion have a much harder time deciding to expand for instrumental reasons without compromising their values.

Hopefully, this situation is exceptional in history. If humans ever manage to build systems which are properly consequentialist---organizations or automations which are capable of expanding because it is instrumentally useful---we should not expect natural selection to discriminate at all on the basis of those systems' values.

Value drift

Humans' values are also distorted by the process of reproduction. A perfect consequentialist would prefer to have descendants who share their values. (Even if I value diversity or freedom of choice, I would like my children to share at least those values, if I want that freedom and diversity to last more than one generation!) But humans don't have this option---the only way we can expand our influence is by creating very lossy copies. And so each generation is populated by a fresh batch of humans with a fresh set of values, and the values of our ancestors only have an extremely indirect effect on the world of today.

Again, a similar problem afflicts human organizations. If I create a foundation that I would like to persist for generations, the only way it can expand its influence is by hiring new staff. And since those staff have a strong influence over what my foundation will do, the implicit values of my foundation will slowly but surely be pulled back to the values of the pool of human employees that I have to draw from.

These constraints distort evolution, causing selection to act only on those traits which can be reliably passed on from one generation to the next. In particular, this exacerbates the problem from the preceding section---even to the extent that humans can engage in goal-oriented reasoning and expand their own influence instrumentally, these tendencies cannot be very well encoded in genes or passed on to the next generation in other ways. This is perhaps the most fundamental change which would result from the development of machine intelligences. If it were possible to directly control the characteristics and values of the next generation, evolution would be able to act on those characteristics and values directly.

So what does natural selection select for?

If the next generation is created by the current generation, guided by the current generation's values, then the properties of the next generation will be disproportionately affected by those who care most strongly about the future.

In finance: if investors have different time preferences, those who are more patient will earn higher returns and eventually accumulate a disproportionate share of the wealth. In demographics: if some people care more about the future, they may have more kids as a way to influence it, and therefore be overrepresented in future generations. In government: if some people care about what government looks like in 100 years, they will use their political influence to shape what the government looks like in 100 years rather than trying to win victories today.

What natural selection selects for is patience. In a thousand years, given efficient natural selection, the most influential people will be those who today cared what happens in a thousand years. Preferences about what happens to me (at least for a narrow conception of personal identity) will eventually die off, dominated by preferences about what society looks like on the longest timescales.
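
As a toy illustration of the compounding dynamic behind this claim, here is a minimal sketch in Python (mine, not the post's; the return rate and reinvestment fractions are arbitrary assumptions). It shows that when two agents earn identical returns but reinvest different fractions of them, the more patient agent's share of total wealth tends toward 1.

```python
# Minimal sketch: patience (reinvesting rather than consuming returns)
# comes to dominate the wealth share in the long run.
# All parameters are illustrative only.

def wealth_shares(periods=500, r=0.05, reinvest_patient=1.0, reinvest_impatient=0.5):
    patient, impatient = 1.0, 1.0
    for _ in range(periods):
        patient += r * patient * reinvest_patient        # reinvests all returns
        impatient += r * impatient * reinvest_impatient  # consumes half of each return
    total = patient + impatient
    return patient / total, impatient / total

if __name__ == "__main__":
    p_share, i_share = wealth_shares()
    print(f"patient share: {p_share:.4f}, impatient share: {i_share:.4f}")
```

The exact numbers don't matter; the point is only that any persistent difference in effective growth rates compounds, so whoever most consistently converts present resources into future resources ends up controlling most of them.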

I think this picture is reasonably robust. There are ways that natural selection (/ efficient markets) can be frustrated, and I would not be too surprised if these frustrations persisted indefinitely, but nevertheless this dynamic seems like one of the most solid features of an uncertain future.

What values are we starting with?

Most of people's preferences today seem to concern what happens to them in the near term. If we take the above picture seriously, these values will eventually have little influence over society. Then the question becomes: if we focus only on humanity's collective preferences over the long term, what do those preferences look like? (Trying to characterize preferences as "altruistic" or not no longer seems useful as we zoom in.)

This is an empirical question, which I am not very well-equipped to evaluate. But I can make a few observations that ring true to me (though my data is mostly drawn from academics and intellectuals, who may fail to be representative of normal people in important ways even after conditioning on the "forward-looking" part of people's values):

  1. When people think about the far future (and thus when they articulate their preferences for the far future) they seem to engage a different mode of reasoning, more strongly optimized to produce socially praiseworthy (and thus prosocial) judgments. This might be characterized as a bias, but to the extent we can talk about human preferences at all, they seem to be a result of these kinds of processes (and to the extent that I am using my own altruistic values to judge futures, they are produced by a similar process). This effect seems to persist even when we are not directly accountable for our actions.
  2. People mostly endorse their own enlightened preferences, and look unfavorably on attempts to lock in hastily considered values (though they often seem to have overconfident views about what their enlightened preferences will look like, which admittedly might interfere with their attempts at reflection).
  3. I find myself sympathetic to very many people's accounts of their own preferences about the future, even where those accounts differ significantly from my own. I would be surprised if the distribution of moral preferences were too scattered.
  4. To the extent that people care especially about their species, their nation, their family, themselves, etc.: they seem to be sensitive to fairness considerations (and rarely wish e.g. to spend a significant fraction of civilization's resources on themselves), their preferences seem to be only a modest distortion of aggregative values (wanting people with property X to flourish is not so different from wanting people to flourish, if property X is some random characteristic without moral significance), and human preferences seem to drift somewhat reliably in the direction of more universal concern as basic needs are addressed and more considerations are taken into account.

After cutting away all near-term interests, I expect that contemporary humanity's collective preferences are similar to people's stated moral preferences, with significant disagreement on many moral judgments. However, I expect that these values support reflection, that upon reflection the distribution of values is not too broad, and that for the most part these values are reasonably well-aligned. With successful bargaining, I expect a mixture of humanity's long-term interests to be only modestly (perhaps a factor of 10, probably not a factor of 1000) worse than my own values (as judged by my own values).

Moreover, I have strong intuitions to emphasize those parts of my values which are least historically contingent. (I accept that all of my values are contingent, but am happier to accept those values that are contingent on my biological identity than those that are contingent on my experiences as a child, and happier to accept those that are contingent on my experiences as a child than those that are contingent on my current blood sugar.) And I have strong reciprocity intuitions that exacerbate this effect and lead me to be more supportive of my peers' values. These effects make me more optimistic about a world determined by humanity's aggregate preferences than I otherwise would be.

How important is altruism?

(The answer to this question, unlike the first one, depends on your values: how important to what? I will answer from my own perspective. I have roughly aggregative values, and think that the goodness of a world with twice as many happy people is twice as high.)

Even if we know a society's collective preferences, it is not obvious what their relative importance is. At what level of prevalence would the contributions of explicit altruism become the main source of value? If altruists are 10% of the influence-weighted population, do the contributions of the altruists matter? What if altruists are 1% of the population? A priori, it seems clear that the explicit altruists should do at least as much good---on the altruistic account---as any other population (otherwise they could decide to jump ship and become objectivists, or whatever). But beyond that, it isn't clear that altruists should create much more value---even on the altruistic account---than people with other values.

I suspect that explicit altruistic preferences create many times more value than self-interest or other nearly orthogonal preferences. So in addition to expecting a future in which altruistic preferences play a very large role, I think that altruistic preferences would be responsible for most of the value even if they controlled only 1% of the resources.
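
To make the arithmetic behind this suspicion explicit, here is a back-of-the-envelope sketch (the value-per-resource multiplier K is my own assumption for illustration, not a figure from the post): if altruistically directed resources produce K times as much value per unit as other resources, then altruists controlling a fraction f of resources account for a fraction f*K / (f*K + (1 - f)) of total value.

```python
# Back-of-the-envelope sketch: what fraction of total value comes from
# altruistically directed resources, if each unit of those resources
# produces K times as much value as a unit directed by other preferences?
# K is an illustrative assumption, not a number from the post.

def altruist_value_share(f, k):
    """f: fraction of resources controlled by altruists; k: value multiplier."""
    return (f * k) / (f * k + (1.0 - f))

if __name__ == "__main__":
    for k in (10, 100, 1000):
        share = altruist_value_share(0.01, k)
        print(f"1% of resources, K={k}: {share:.0%} of total value")
```

Under these assumptions, a multiplier of 10 leaves altruists with under 10% of the value, but a multiplier of 1000 already puts them above 90%---which is the sense in which 1% of the resources could be responsible for most of the value.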

One significant issue is population growth. Self-interest may lead people to create a world which is good for themselves, but it is unlikely to inspire people to create as many new people as they could, or use resources efficiently to support future generations. But it seems to me that the existence of large populations is a huge source of value. A barren universe is not a happy universe.

A second issue is that population characteristics may also be an important factor in the goodness of the world, and self-interest is unlikely to lead people to ensure that each new generation has the sorts of characteristics which would cause them to lead happy lives. It may happen by good fortune that the future is full of people who are well-positioned to live rich lives, but I don't see any particular reason this would happen. Instead, we might have a future "population" in which almost all resources support automation that doesn't experience anything, or a world full of minds which crave survival but experience no joy, and so on; "self-interest" wouldn't lead any of these populations to change themselves to experience more happiness. It's not clear why we would avoid these outcomes except by a law of nature that said that productive people were happy people (which seems implausible to me) or by coordinating to avoid them.

(If you have different values, such that there is a law [or at least guideline] of nature: "productive people are morally valuable people," then this analysis may not apply to you. I know several such people, but I have a hard time sympathizing with their ethics.)

Conclusion

I think that the goodness of a world is mostly driven by the amount of explicit optimization that is going on to try and make the world good (this is all relative to my values, though a similar analysis seems to carry with respect to other aggregative values). This seems to be true even if relatively little optimization is going on. Fortunately, I also think that the future will be characterized by much higher influence for altruistic values. If I thought altruism was unlikely to win out, I would be concerned with changing that. As it is, I am instead more concerned with ensuring that the future proceeds without disruptions. (Though I still think it is worth it to try and increase the prevalence of altruism faster, most of all because this seems like a good approach to minimizing the probability of undesired disruptions.)

 

14 comments

Comments sorted by top scores.

comment by Wei Dai (Wei_Dai) · 2013-02-27T22:23:26.810Z

If humans ever manage to build systems which are properly consequentialist---organizations or automations which are capable of expanding because it is instrumentally useful---we should not expect natural selection to discriminate at all on the basis of those systems' values.

You seem to be making several more assumptions for your "median future" that you haven't made explicit here. 1) Humans will manage to build such properly consequentialist systems not subject to value drift soon, before too much further evolution has taken place. 2) We will succeed in imbuing such systems with the altruistic values that we still have at that point. 3) Such properly consequentialist systems will be able to either out-compete other entities that are subject to short-range consequentialism and value drift, or at least survive into the far future in an environment with such competitors.

Have you discussed (or seen good arguments made elsewhere) why these are likely to be the case?

Replies from: paulfchristiano
comment by paulfchristiano · 2013-02-27T23:30:25.206Z

I agree. I argued that values about the long term will dominate in the long term, and I suggested that our current long term values are mostly altruistic. But in the short term (particularly during a transition to machine intelligences) our values could change in important ways, and I didn't address that.

I expect we'll handle this ("expect" as in probability >50%, not probability 90%) primarily because we all want the same outcome, and we don't yet see any obstacles clearly enough to project confidently that the obstacles are too hard to overcome. But like I said, it seems like an important thing to work on, directly or indirectly.

I don't quite understand your point (3), which seems like it was addressed. A competitor who isn't able to reason about the future seems like a weak competitor in the long run. It seems like the only way such a competitor can win (again, in the long run) is by securing some irreversible victory like killing everyone else.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2013-02-28T06:38:23.855Z

I expect we'll handle this ("expect" as in probability >50%, not probability 90%) primarily because we all want the same outcome, and we don't yet see any obstacles clearly enough to project confidently that the obstacles are too hard to overcome.

When you say "we all want the same outcome", do you mean we all want consequentialist systems, with our values and not subject to value drift, to be built before too much evolution has taken place? But many AGI researchers seem to prefer working on "heuristic soup" type designs (which makes sense if those AGI researchers are not themselves "properly consequentialist" and don't care strongly about long range outcomes).

I don't quite understand your point (3), which seems like it was addressed.

What I mean is that the kind of value-stable consequentialist that humans can build in the relevant time frame may be too inefficient to survive under competitive pressure from other cognitive/organizational architectures that will exist (even if it can survive as a singleton).

comment by shminux · 2013-02-27T20:00:35.702Z

I don't understand, what is the reason to only consider why the future might be good, as opposed to all possible alternatives?

Replies from: paulfchristiano, timtyler
comment by paulfchristiano · 2013-02-27T20:11:39.680Z

What is the reason to only consider one question, as opposed to all possible questions?

Asking what forces make the future good is relevant if you expect that your ability to influence an effect depends on the magnitude of that effect. Similarly, if you were interested in increasing economic growth, you might ask "what are the main factors driving economic growth?" so that you could see what to contribute to. If you were interested in making people's lives better, you might ask "what forces tend to make their lives good, and how can we support those forces?"

You could also ask "what determines what happens in people's lives?" but hopefully it makes sense to start with "what would make their lives good" if your interest is in making their lives good.

I give some (very vague) reasons why we might care about the difference between pictures 1 and 2.

Replies from: shminux
comment by shminux · 2013-02-27T20:26:56.186Z

I see. I guess my experience from the software development world is that developers are nearly always overly optimistic about nearly every project, and risk analysis and mitigation is the part that is lacking the most. So the questions that have the most impact on the success of a project are of the type "what can go wrong?". Not necessarily x-risk stuff, just your run-of-the-mill terrorism, incompetence, conspiracy, market collapse, resource shortage and such. But I guess there is enough talk of this already.

comment by timtyler · 2013-02-28T00:32:18.350Z

what is the reason to only consider why the future might be good, as opposed to all possible alternatives?

Perhaps as a counterpoint to the doom-saying bias of the many paranoid humans out there, who have already had their say on the topic.

comment by timtyler · 2013-02-27T11:52:45.927Z

When talking about the future, I often encounter two (quite different) stories describing why the future might be good: [...]

3) Evolution in our (benign) universe is progressive, leads to more goodness over time, and is already quite good.

Replies from: Mickydtron, paulfchristiano
comment by Mickydtron · 2013-02-27T18:29:56.625Z

The arguments laid out on the linked page are orthogonal to any questions of value or goodness.

The page's arguments conclude that "life is trying to occupy all space, and to become master of the universe." However, nothing is said as to what "life" will do with its mastery, and thus these arguments are unrelated to the question of why the future might be good, except insofar as most people would rank futures in which life is wiped out as not good.

I believe that it is fairly trivial to show that while evolution is in fact an optimization process, it is not optimizing for goodness. It is a pretty big jump from "evolution has an arrow" to "...and it points where we want it to". In fact, I believe that there is significant evidence that it does not point where we want. As evidence, I point to basically every group selection experiment ever.

I would also disagree that it is "already quite good". While it is certainly not the worst that could be conceived of, there is significant room for improvement. However, the current standing of the universe is less relevant to the article (which I enjoyed, by the way) than that there is room for improvement according to the values that we as people have, which is obvious enough to me as to need no defense.

I also object to the use of the word benign in the comment, as it appears to be there simply to sneak in connotations. On the linked page, the word is used as a synonym for "capable of bearing life", which can be applied to our universe without much controversy. However, when used in a sentence with the words "progressive" and "goodness", it seems that "capable of bearing life" is not the intended definition, and even if it is, it is predictably not the one that a reader will immediately reach for.

Replies from: timtyler
comment by timtyler · 2013-02-28T00:18:48.928Z

I believe that it is fairly trivial to show that while evolution is in fact an optimization process, it is not optimizing for goodness. It is a pretty big jump from "evolution has an arrow" to "...and it points where we want it to". In fact, I believe that there is significant evidence that it does not point where we want. As evidence, I point to basically every group selection experiment ever.

That page is pretty silly. These days, most of the scientists involved agree that kin selection and group selection are equivalent - and cover the same set of phenomena.

The equivalent of your proposal in the language of kin selection is for organisms to become more closely related. That's happening in humans, since humans have come to possess more and more shared memes - allowing cultural kin selection to produce cooperation between them on increasingly-larger scales. Once you properly account for cultural evolution, things do seem to be reasonably on track. Other mechanisms that produce cooperation - such as reciprocity, trade and reputations - are also going global. Essentially, Peter Kropotkin was correct.

I also object to the use of the word benign in the comment, as it appears to be there simply to sneak in connotations.

Hmm. I essentially mean: not being bombarded at a high frequency with meteorites. I'm obliquely referencing Buckminster Fuller's book Approaching the Benign Environment.

comment by paulfchristiano · 2013-02-27T18:11:11.718Z

Yes, there are definitely other possible explanations, though the ones I gave seem most common. But yours seems to beg the question---why does evolution lead to more goodness over time?

Replies from: Qiaochu_Yuan, timtyler
comment by Qiaochu_Yuan · 2013-02-27T19:50:46.424Z

I could be wrong, but I don't think timtyler is claiming that this is a good argument but only that this is an argument that people use.

Replies from: timtyler
comment by timtyler · 2013-02-28T00:02:28.932Z

Evolutionary progress gets thumbs-up from me. The page I originally cited was my own.

comment by timtyler · 2013-02-28T00:01:00.858Z

We don't need to understand why evolution leads to goodness to see that it does so - and predict that it will continue to do so. However, scientists do have some understanding of why. Game theory, adaptation and synergy are some of the factors involved. Also a sufficiently-low frequency of meteorite strikes and cosmic rays is clearly an important factor.