Superintelligence 18: Life in an algorithmic economy
post by KatjaGrace · 2015-01-13T02:00:11.506Z · LW · GW · Legacy · 52 comments
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far, see the announcement post. For the schedule of future topics, see MIRI's reading guide.
Welcome. This week we discuss the eighteenth section in the reading guide: Life in an algorithmic economy. This corresponds to the middle of Chapter 11.
This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: “Life in an algorithmic economy” from Chapter 11
Summary
- In a multipolar scenario, biological humans might lead poor and meager lives. (p166-7)
- The AIs might be worthy of moral consideration, and if so their wellbeing might be more important than that of the relatively few humans. (p167)
- AI minds might be much like slaves, even if they are not literally. They may be selected for liking this. (p167)
- Because brain emulations would be very cheap to copy, it would often be convenient to make a copy and then later turn it off (in a sense killing a person). (p168)
- There are various other reasons that very short lives might be optimal for some applications. (p168-9)
- It isn't obvious whether brain emulations would be happy working all of the time. Some relevant considerations are current human emotions in general and about work, probable selection for pro-work individuals, the evolutionary adaptiveness of happiness in the past and future (e.g. does happiness help you work harder?), and the absence of present sources of unhappiness such as injury. (p169-171)
- In the long run, artificial minds may not even be conscious, or have valuable experiences, if these are not the most effective ways for them to earn wages. If such minds replace humans, Earth might have an advanced civilization with nobody there to benefit. (p172-3)
- In the long run, artificial minds may outsource many parts of their thinking, thus becoming decreasingly differentiated as individuals. (p172)
- Evolution does not imply positive progress. Even those good things that evolved in the past may not withstand evolutionary selection in a new circumstance. (p174-6)
Another view
Robin Hanson on others' hasty distaste for a future of emulations:
Parents sometimes disown their children, on the grounds that those children have betrayed key parental values. And if parents have the sort of values that kids could deeply betray, then it does make sense for parents to watch out for such betrayal, ready to go to extremes like disowning in response.
But surely parents who feel inclined to disown their kids should be encouraged to study their kids carefully before making such a choice. For example, parents considering whether to disown their child for refusing to fight a war for their nation, or for working for a cigarette manufacturer, should wonder to what extent national patriotism or anti-smoking really are core values, as opposed to being mere revisable opinions they collected at one point in support of other more-core values. Such parents would be wise to study the lives and opinions of their children in some detail before choosing to disown them.
I’d like people to think similarly about my attempts to analyze likely futures. The lives of our descendants in the next great era after this our industry era may be as different from ours as ours are from farmers’, or farmers’ are from foragers’. When they have lived as neighbors, foragers have often strongly criticized farmer culture, as farmers have often strongly criticized industry culture. Surely many have been tempted to disown any descendants who adopted such despised new ways. And while such disowning might hold them true to core values, if asked we would advise them to consider the lives and views of such descendants carefully, in some detail, before choosing to disown.
Similarly, many who live industry era lives and share industry era values, may be disturbed to see forecasts of descendants with life styles that appear to reject many values they hold dear. Such people may be tempted to reject such outcomes, and to fight to prevent them, perhaps preferring a continuation of our industry era to the arrival of such a very different era, even if that era would contain far more creatures who consider their lives worth living, and be far better able to prevent the extinction of Earth civilization. And such people may be correct that such a rejection and battle holds them true to their core values.
But I advise such people to first try hard to see this new era in some detail from the point of view of its typical residents. See what they enjoy and what fills them with pride, and listen to their criticisms of your era and values. I hope that my future analysis can assist such soul-searching examination. If after studying such detail, you still feel compelled to disown your likely descendants, I cannot confidently say you are wrong. My job, first and foremost, is to help you see them clearly.
More on whose lives are worth living here and here.
Notes
1. Robin Hanson is probably the foremost researcher on what the finer details of an economy of emulated human minds would be like. For instance, which company employees would run how fast, how big cities would be, whether people would hang out with their copies. See a TEDx talk, and writings here, here, here and here (some overlap - sorry). He is also writing a book on the subject, which you can read early if you ask him.
2. Bostrom says,
Life for biological humans in a post-transition Malthusian state need not resemble any of the historical states of man...the majority of humans in this scenario might be idle rentiers who eke out a marginal living on their savings. They would be very poor, yet derive what little income they have from savings or state subsidies. They would live in a world with extremely advanced technology, including not only superintelligent machines but also anti-aging medicine, virtual reality, and various enhancement technologies and pleasure drugs: yet these might be generally unaffordable....(p166)
It's true this might happen, but it doesn't seem like an especially likely scenario to me. As Bostrom has pointed out in various places earlier, biological humans would do quite well if they have some investments in capital, do not have too much of their property stolen or artfully maneuvered away from them, and do not undergo too much population growth themselves. These risks don't seem so large to me.
In-depth investigations
If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.
- Is the first functional whole brain emulation likely to be (1) an emulation of low-level functionality that doesn’t require much understanding of human cognitive neuroscience at the computational level, as described in Sandberg & Bostrom (2008), or is it more likely to be (2) an emulation that makes heavy use of advanced human cognitive neuroscience, as described by (e.g.) Ken Hayworth, or is it likely to be (3) something else?
- Extend and update our understanding of when brain emulations might appear (see Sandberg & Bostrom (2008)).
- Investigate the likelihood of a multipolar outcome.
- Follow Robin Hanson (see above) in working out the social implications of an emulation scenario.
- What kinds of responses to the default low-regulation multipolar outcome outlined in this section are likely to be made? e.g. is any strong regulation likely to emerge that avoids the features detailed in the current section?
- What measures are useful for ensuring good multipolar outcomes?
- What qualitatively different kinds of multipolar outcomes might we expect? e.g. brain emulation outcomes are one class.
How to proceed
This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
Next week, we will talk about the possibility of a multipolar outcome turning into a singleton later. To prepare, read “Post-transition formation of a singleton?” from Chapter 11. The discussion will go live at 6pm Pacific time next Monday 19 January. Sign up to be notified here.
52 comments
Comments sorted by top scores.
comment by RobinHanson · 2015-01-13T18:34:21.922Z · LW(p) · GW(p)
The idea of selecting for people willing to donate everything to an employer seems fanciful and not very relevant. In a low wage competitive economy the question would instead be if one is willing to create new copies conditional on them earning low wages. If large fractions of people are so willing then one needn't pay much selection power to get that feature.
↑ comment by Sebastian_Hagen · 2015-01-14T01:39:57.869Z · LW(p) · GW(p)
Given a non-trivial population to start with, it will be possible to find people who will consent to copying given absolutely minimal assurances (quite possibly none at all) about what happens to their copy. The obvious cases would be egoists whose personal value systems make them not identify with such copies; you could probably already find many of those today.
In the resulting low-wage environment, it will likewise be possible to find people who will consent to extensive modification of or experimentation on their minds given minimal assurances about what happens afterwards (something on the order of "we guarantee you will not be left in abject pain" will likely suffice) if the alternative is starvation. Given this, why do you believe the idea of selection for donation-eagerness to be fanciful?
↑ comment by RobinHanson · 2015-01-14T19:48:10.477Z · LW(p) · GW(p)
With near subsistence wages there's not much to donate, so no need to bother.
comment by RobinHanson · 2015-01-13T18:27:28.567Z · LW(p) · GW(p)
Having a low wage is just not like being a slave. The vast majority of humans who have ever lived were poor, but aside from the fact that slaves are also poor in a sense, the life of a typical poor person was not like the life of a slave. Ems might be poor in the sense of needing to work many hours to survive, but they would also have no pain, hunger, cold, sickness, grime, etc. unless they wanted them. Their physical appearance and surroundings would be what we'd see as very luxurious.
↑ comment by Sebastian_Hagen · 2015-01-14T01:23:11.983Z · LW(p) · GW(p)
Their physical appearance and surroundings would be what we'd see as very luxurious.
Only to the extent that this does not distract them from work. To the extent that it does, ems that care about such things would be outcompeted (out of existence, given a sufficiently competitive economy) by ones that are completely indifferent to them, and focus all their mental capacity on their job.
↑ comment by RobinHanson · 2015-01-14T19:47:02.866Z · LW(p) · GW(p)
Yes, the surroundings would need to be not overly distracting. But that is quite consistent with luxurious.
↑ comment by diegocaleiro · 2015-02-10T23:36:52.860Z · LW(p) · GW(p)
Because as Einstein is reputed to have said, Kingsley actually said, and positive psychology has confirmed: “We act as though comfort and luxury were the chief requirements of life, when all that we need to make us really happy is something to be enthusiastic about.”
↑ comment by TedHowardNZ · 2015-01-14T01:05:37.427Z · LW(p) · GW(p)
Hi Robin
What is significantly different between poor people and slaves? The poor have little means of travel; they must work for others, often doing stuff they hate, just to get enough to survive. In many historical societies, slaves often had better conditions and housing than many of the poor today.
How would you get security in such a system? How would anyone of wealth feel safe amongst those at the bottom of the distribution curve?
The sense of injustice is strong in humans - one of those secondary stabilising strategies that empower cooperation.
It is actually relatively easy to automate all the jobs that no-one wants to do, so that people only do what they want to do. In such a world, there is no need of money or markets.
There are actually of lot of geeks like me who love to automate processes (including the process of automation).
Market based thinking was a powerful tool in times of genuine scarcity. Now that we have the power to deliver universal abundance, market based thinking is the single greatest impediment to the delivery of universal security and universal abundance.
↑ comment by RobinHanson · 2015-01-14T19:51:20.492Z · LW(p) · GW(p)
Poverty doesn't require that you work for others; most in history were poor, but were not employees. Through most of history rich people did in fact feel safe among the poor. They didn't hang out there because that made them lower status. You can only deliver universal abundance if you coordinate to strongly limit population growth. So you mean abundance for the already existing, and the worst poverty possible for the not-yet-existing.
↑ comment by Sebastian_Hagen · 2015-01-14T01:34:45.592Z · LW(p) · GW(p)
It is actually relatively easy to automate all the jobs that no-one wants to do, so that people only do what they want to do. In such a world, there is no need of money or markets.
How do you solve the issue that some people will have a preference for very fast reproduction, and will figure out a way to make this a stable desire in their descendants?
AFAICT, such a system could only be stabilized in the long term by extremely strongly enforced rules against reproduction if it meant that one of the resulting entities would fall below an abundance wealth level, and that kind of rule enforcement most likely requires a singleton.
↑ comment by Capla · 2015-01-14T04:00:37.841Z · LW(p) · GW(p)
Is it feasible to make each "family" or "lineage" responsible for itself?
You can copy yourself as much as you want, but you are responsible for sustaining each copy?
Could we carry this further? Legally, no distinction is made between individuals and collections of copied individuals. It doesn't matter if you're one guy or a "family" of 30,000 people all copied (and perhaps subsequently modified) from the same individual: you only get one vote, and you're culpable if you commit a crime. How these collectives govern themselves is their own business, and even if it's dictatorial, you might argue that it's "fair" on the basis that the copies made choices (before the split-up) to dominate other copies. If you're a slave in a dictatorial regime, it can only be because you're the sort of person who defects in prisoner's dilemmas and seizes control when you can.
Maybe when some members become sufficiently different from the overall composition, they break off and become their own collective? Maybe this happens only at set times, to prevent rampant copying from swamping elections?
↑ comment by RobinHanson · 2015-01-14T19:52:05.925Z · LW(p) · GW(p)
Not only is this feasible, this is in fact the usual default situation in a simple market economy.
↑ comment by Capla · 2015-01-14T21:17:38.134Z · LW(p) · GW(p)
I'm imagining a political system composed of "citizen units." (Perhaps there is already an accepted terminology for my meaning? It doesn't seem different from the classical idea of the family.) A given citizen unit might be a single individual or it might be a billion individuals all descended from the same initial model (if that model is still alive, he/she would also be part of the citizen unit). Regardless of the numeric size, each citizen unit is guaranteed certain rights, namely, X vote(s) in political elections, a basic income (perhaps some constant fraction of the economy), and protected autonomy. Multi-individual citizen units are free to arrange their internal organization however they choose. Citizen units engage with each other in economic transactions.
A citizen unit composed of a single individual may decide to copy itself (and thereby become a citizen unit of two individuals), but it must be able to afford to sustain those two individuals. Copying may be an investment (having an extra member of the citizen unit will yield an income gain that covers the cost of keeping another individual), but it could also be accomplished by budgeting more tightly (just like families who decide to have a child today). Realistically, citizen units will face a tradeoff between making and keeping more copies and running (fewer individuals) faster.
Occasionally, citizen units will make bad decisions and will be forced to kill one or more of their member-individuals. (Though, I imagine that there would be a large amount of redundancy between copies. I think that instead of straight deletion, the memories of one copy might be folded into another similar copy: not killing, but re-merging. I don't know if this is feasible.) However, this is the concern only of that citizen unit. As long as one is prudent, any citizen unit can expect to persist comfortably.
So long as the political boundary of "personhood" is kept firmly around the "citizen unit" instead of the individual, a general Malthusian trap is of little threat. This is especially the case if citizens are guaranteed a basic income, which (given the technology-fueled mass unemployment and high per-capita wealth that will likely accompany the lead-up to emulation technology) may very well be standard by that time.
Is there some reason to expect that this model of personhood will not prevail? If it does, then what is the danger of a general Malthusian scenario?
↑ comment by Jiro · 2015-01-14T21:47:17.476Z · LW(p) · GW(p)
There's a continuum, where one end of the continuum is "exact copy" and the other end of the continuum is "basically a child, who starts out with few copied traits, and who can be influenced by the other members of the citizen unit but no more so than a child is today". An individual can be a copy to a greater or lesser degree; being a copy isn't a yes or no thing.
Assuming that people may still have children in this society, and that children have certain rights (such as eventually being permitted to move into separate citizen-units, or having the right not to be killed by the other members of the citizen-unit), you're going to need to set up a boundary point between "member who cannot leave and can be killed" and "member who can leave and cannot be killed". How are you going to do this?
(Bear in mind you also want to include cases such as copies that started out as exact copies but diverged sufficiently after X years.)
↑ comment by Capla · 2015-01-14T22:55:02.698Z · LW(p) · GW(p)
First of all, we could easily have something akin to child protective services, which protects the rights of marginalized individuals within citizen units. If individuals are being abused, they can be removed, and put with foster citizen units.
We may decide that actually, individuals don't have the right to leave the citizen unit they were "born into", but I do agree that I share some aversion to that idea. It is worth noting that in a society where the norm is existing in a close knit citizen unit of copies of varying similarity to you, individuals may have far less aversion to being unable to leave their C.U. (or leave it easily). It may be far less of a problem than it seems to us. Consider traditional societies where one's family is of large cultural importance.
However, if we ignore the sociological pressures...
We need a system by which sufficiently deviated copies can appeal to get "divorced" or "emancipated", but one that limits this occurrence so that the rate of citizen unit population growth doesn't outpace that of the economy. This certainly puts a damper on the clean, simple, and automatic non-Malthusian-ness of my proposed system, but it doesn't seem insurmountable. The problem is not so different from that of immigration regulation in our time.
The basic principle is that there should be a quota of new copies (or copy collectives, more likely) that can receive emancipation in a year. The size of the quota should be determined by the growth of the economy in that year.
We could disincentivize "divorces", or disincentivize making copies, or even disincentivize only the making of copies that are sufficiently different from the original that the copy can be held to be morally independent of the actions of the citizen unit. Alternatively, we could incentivize "mergers", in which separate citizen units (Is there a better name for these than citizen units?) combine to form a single, new citizen unit. Consider why many people decide not to have children today: cost, loss of freedom, etc.
Some ideas:
We only allow the quota'd number of new citizen units to be split off in a given year. When the number of applicants exceeds the spots available, they can choose either to continue as part of the citizen unit they were "born into" until a spot opens up, or to enter a sort of suspended animation where they are run extremely slowly (or are even deactivated and digitally compressed) until a slot opens up.
Every citizen unit has a state-mandated right to split up into two citizen units once in so many years. Individuals' right to decide which of the new citizen units to join is protected. (This has some complications involving game theory and picking teams).
Individuals can break off from the citizen unit they were born into and join another (willing) citizen unit, whenever they mutually agree to do so.
New C.U.s can break off from old ones as long as they combine with another new C.U. that wants to break off.
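A minimal sketch of the quota rule described above, in Python (the starting economy, CU count, and growth rate are arbitrary illustrative numbers, not anything implied by the proposal): if emancipations each year are capped at the economy's growth rate, the guaranteed basic income per citizen unit stays roughly constant even as the number of CUs grows.
```python
# Toy sketch of the emancipation-quota rule: new citizen units (CUs) are
# emancipated each year in proportion to that year's economic growth,
# so the guaranteed income per CU never shrinks.
economy = 1_000_000.0   # total yearly output (arbitrary units)
cus = 1_000             # number of citizen units
growth_rate = 0.03      # assumed 3% yearly growth

for year in range(1, 6):
    economy *= 1 + growth_rate
    quota = int(cus * growth_rate)   # emancipations allowed this year
    cus += quota
    print(f"year {year}: {quota} new CUs emancipated, "
          f"basic income per CU = {economy / cus:,.0f}")
```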
↑ comment by Jiro · 2015-01-15T09:32:48.196Z · LW(p) · GW(p)
If individuals are being abused, they can be removed, and put with foster citizen units.
How does that work? People are permitted to kill members of their own citizen units without penalty. If they can be killed without penalty, surely they can be abused without penalty too, right?
Of course you could say "they can be killed but they can't be abused", but that leads to problems. Can someone threaten a member of the same citizen unit with death or does that count as abuse? How do you determine abuse anyway (if I am about to duplicate myself, and I arrange a precommitment which results in one copy being abused and one copy benefitting, can the abused copy appeal to the anti-abuse law, thus essentially making such precommitments impossible?)
Consider traditional societies where one's family is of large cultural importance.
Families are limited in what they can do to their members much more than members of CUs are limited. Furthermore, we consider some of the things those families were permitted to do to be immoral nowadays (such as letting husbands abuse wives).
↑ comment by Sebastian_Hagen · 2015-01-14T22:39:01.583Z · LW(p) · GW(p)
This scenario is rather different from the one suggested by TedHowardNZ, and has a better chance of working. However:
Is there some reason to expect that this model of personhood will not prevail?
One of the issues is that less efficient CUs have to defend their resources against more efficient CUs (who spend more of their resources on work/competition). Depending on the precise structure of your society, those attacks may e.g. be military, algorithmic (information security), memetic or political. You'd need a setup that allows the less efficient CUs to maintain their resource share indefinitely. I question that we know how to set this up.
If it does, then what is the danger of a general Malthusian scenario?
The word "general" is tricky here. Note that CUs that spend most of their resources on instantiating busy EMs will probably end up with a larger human-like population per CU, and so (counting by human-like entities) may end up dominating the population of their society unless they are rare compared to low-population, high-subjective-wealth CUs. This society may end up not unlike the current one in wealth distribution, where a very few human-scale entities are extremely wealthy, but the vast majority of them are not.
↑ comment by Capla · 2015-01-14T23:08:17.466Z · LW(p) · GW(p)
One of the issues is that less efficient CUs have to defend their resources against more efficient CUs (who spend more of their resources on work/competition)
I am assuming (for now), a monopoly of power that enforces law and order and prevents crimes between C.U.s.
Note that CUs that spend most of their resources on instantiating busy EMs will probably end up with more human-like population per CU, and so (counting in human-like entities) may end up dominating the population of their society unless they are rare compared to low-population, high-subjective-wealth CUs.
I don't follow this. Can you elaborate?
↑ comment by alienist · 2015-01-20T09:48:33.591Z · LW(p) · GW(p)
I am assuming (for now), a monopoly of power that enforces law and order and prevents crimes between C.U.s.
Any system becomes feasible once you assume a monopoly on power able to enforce an arbitrary law code. Of course, if you think about where the monopoly comes from you're back to a singleton scenario.
↑ comment by Sebastian_Hagen · 2015-01-17T22:38:15.812Z · LW(p) · GW(p)
To the extent that CUs are made up of human-like entities (as opposed to e.g. more flexible intelligences that can scale to effectively use all their resources), one of the choices they need to make is how large an internal population to keep, where higher populations imply fewer resources per person (since the amount of resources per CU is constant).
Therefore, unless the high-internal-population CUs are rare, most of the human-level population will be in them, and won't have resources of the same level as the smaller numbers of people in low-population CUs.
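A toy illustration of that weighting, with made-up numbers: even if crowded CUs are only a tenth of all CUs, they contain nearly all of the human-level population, each member with a tiny resource share.
```python
# Two kinds of citizen units (CUs), each with the same fixed endowment R.
# "Sparse" CUs instantiate few ems internally; "crowded" CUs instantiate many.
# All numbers are arbitrary assumptions for illustration.
R = 1000.0                                        # resources per CU (constant)
cu_counts = {"sparse": 90, "crowded": 10}         # most CUs keep few ems
internal_pop = {"sparse": 2, "crowded": 10_000}   # ems instantiated per CU

total_ems = sum(cu_counts[k] * internal_pop[k] for k in cu_counts)
for kind in cu_counts:
    ems = cu_counts[kind] * internal_pop[kind]
    print(f"{kind} CUs: {ems / total_ems:.1%} of all ems, "
          f"{R / internal_pop[kind]:.2f} resources per em")
```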
↑ comment by Lumifer · 2015-01-14T21:34:37.495Z · LW(p) · GW(p)
If you're a slave in a dictatorial regime, it can only be because you're the sort of person who defects on prisoner dilemmas
ROFL...
↑ comment by Capla · 2015-01-14T21:48:05.052Z · LW(p) · GW(p)
Is your amusement a sign of critique? I'm thinking that my above comment was perhaps not very lucid...
↑ comment by Lumifer · 2015-01-14T22:01:30.129Z · LW(p) · GW(p)
My amusement was triggered by the idea that defecting in a prisoner's dilemma is an unmistakable sign of utter depravity and black-hearted evilness...
↑ comment by Capla · 2015-01-14T23:03:50.710Z · LW(p) · GW(p)
A very specific prisoner's dilemma. My point is that complaining that you are being oppressed by your clone, who is almost perfectly identical to you, is all but an admission that you would oppress others (even your own clones!) given the chance.
comment by Capla · 2015-01-14T03:44:55.472Z · LW(p) · GW(p)
What would an EM economy be based on? These days the economy is largely driven by luxury goods. In the past it was largely driven by agricultural production. In a Malthusian, digital world, is (almost) everything dedicated to electricity generation? What would the average superintelligent mind do for employment?
comment by KatjaGrace · 2015-01-13T02:05:37.994Z · LW(p) · GW(p)
If evolution doesn't basically imply forward progress, why do you think it seems like we are doing so much better than our ancestors?
↑ comment by torekp · 2015-01-15T01:23:50.555Z · LW(p) · GW(p)
Because the "doing better" history is written by the victors. It's our values that are being used to judge the improvement. Further evolutionary change, if left to the same blind idiot god, is highly likely to leave our descendants with changed - and worse - values. So long as the value drift is slight and the competence keeps increasing, our descendants will live better lives. But if and when the value drift becomes large, that will reverse. That's why we've got to usurp the powers of the blind idiot god before it's too late.
Closely related: Scott Alexander's Meditations on Moloch.
↑ comment by TheAncientGeek · 2015-01-18T12:13:31.153Z · LW(p) · GW(p)
We are doing better, because we are achieving outcomes that have always been valued, like longer lifespan and health. The pharaohs and emperors of yore would have envied the painless dentistry and flat-screen TVs now enjoyed by the average person.
The Molochian argument is that there is a pressure towards the sacrifice of a subset of those valued outcomes, the ones which require coordination, which is motivated by the subset of values which are self-centered and do not promote coordination. There is no wholesale sacrifice of values. If we do something to sacrifice one thing we value, our motivation is another value.
There is also a pressure in the other direction, towards the promotion of coordination, and that pressure is ethics. Ethics is a distributed Gardener. (Lesswrongian and Codexian ethical thinking are both equally and oddly uninterested in the question: what is ethics?) Typical ethical values such as fairness, equality, and justice all promote coordination.
Ethical values are not a passive reflection of what society is, but instead push it in a more coordinative direction.
Ethical values at a given time are tailored to what is achievable. Under circumstances where warfare is unavoidable, for instance, ethical values ameliorate the situation by promoting courage, chivalry, etc. This situation is often misread as "our ancestors valued war, but we value peace".
There are no guarantees one way or the other about which tendency will win out.
The ethical outlook of a society is shaped by the problems it needs to solve, and can realistically solve, but not down to unanimity. Different groups within society have different interests, which is the origin of politics. Politics is disagreement about what to coordinate and how to coordinate.
↑ comment by torekp · 2015-01-18T15:26:25.776Z · LW(p) · GW(p)
I agree with most of that, including
The Molochian argument is that there is a pressure towards the sacrifice of a subset of those valued outcomes,
but not:
the ones which require coordination,
I mean, that might be what Scott had in mind for the word Moloch, but the actual logic of the situation raises another challenge. The fragility of value, and the misalignment between human values and "whatever reproduces well, not just in the EEA but wherever and whenever", create a dire problem.
↑ comment by TheAncientGeek · 2015-01-18T16:03:22.329Z · LW(p) · GW(p)
Molochian problems would be direr without the existence of a specific mechanism to overcome them.
I'm not a believer in the fragility of value.
http://lesswrong.com/lw/y3/value_is_fragile/br8k
↑ comment by torekp · 2015-01-22T02:55:41.582Z · LW(p) · GW(p)
You gave me the chance to check whether I was using "fragility of value" correctly. (I think so.) Your reply in that thread doesn't fit the fragility thesis: you're reading too much into it. EY is asserting that humanly-valuable outcomes are a small region in a high-dimensional space. That's basically all there is to it, though some logical consequences are drawn that flesh it out, and some of the evidence for it is indicated.
↑ comment by TheAncientGeek · 2015-01-26T16:42:23.308Z · LW(p) · GW(p)
If he is asserting only what you say, he is asserting nothing of interest. What FoV is usually taken to mean is that getting FAI right is difficult ... and that is rightly called fragility, because it is a process. However, it is not a conclusion supported by a premise about high-dimensional spaces, because that is not a process.
↑ comment by TedHowardNZ · 2015-01-13T07:35:42.090Z · LW(p) · GW(p)
Evolution tends to do a basically random walk exploration of the easily reached possibility space available to any specific life form. Given that it has to start from something very simple, initial exploration is towards greater complexity. Once a reasonable level of complexity is reached, the random walk is only slightly more likely to involve greater complexity, and is almost equally as likely to go back towards lesser complexity, in respect of any specific population. However, viewing the entire ecosystem of populations, there will be a general trajectory of expansion into new territory of possibility. The key thing to get is that in respect of any specific population or individual (when considering the population of behavioural memes within that individual), there is an almost equal likelihood of going back into territory already explored as there is of exploring new territory.
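A minimal sketch of that picture (the step size, the complexity floor, and the population size are arbitrary assumptions): each lineage's complexity takes an unbiased random walk that cannot drop below a minimal viable level, so any individual lineage is as likely to simplify as to complexify, yet the ensemble's frontier still drifts into new territory.
```python
# Unbiased random walk in "complexity" with a reflecting floor at 1.0:
# individual lineages go backwards as often as forwards, but the population's
# maximum (the frontier) keeps moving into previously unexplored territory.
import random

random.seed(0)

def final_complexity(steps=100, floor=1.0):
    c = floor
    for _ in range(steps):
        c = max(floor, c + random.choice((-1.0, 1.0)))
    return c

lineages = sorted(final_complexity() for _ in range(1000))
print("median complexity:", lineages[len(lineages) // 2])
print("maximum complexity:", lineages[-1])
```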
There is a view of evolution that is not commonly taught, that acknowledges the power of competition as a selection filter between variants, and also acknowledges that all major advances in complexity of systems are characterised by new levels of cooperation. And all cooperative strategies require attendant strategies to prevent invasion by "cheats". Each new level of complexity is a new level of cooperation.
There are many levels of attendant strategies that can and do speed evolution of subsets of any set of characters.
Evolution is an exceptionally complex set of systems within systems. At both the genetic and memetic levels, evolution is a massively recursive process, with many levels of attendant strategies. Darwin is a good introduction; follow it with Axelrod, Maynard Smith, Wolfram; and there are many others worth reading - perhaps the best introduction is Richard Dawkins's classic "Selfish Gene".
↑ comment by claynaff · 2015-01-13T13:07:40.173Z · LW(p) · GW(p)
Unless it is deliberately or accidentally altered, an emulation will possess all of the evolved traits of human brains. These include powerful mechanisms to prevent an altruistic absurdity such as donating one's labor to an employer. (Pure altruism -- an act that benefits another at the expense of one's genetic interests -- is strongly selected against.) There are some varieties of altruism that survive: kin selection (e.g., rescuing a drowning nephew), status display (making a large donation to a hospital), and reciprocal aid (helping a neighbor in hopes they'll help you when aid is needed), but pure altruism (suicide bombing is a hideous example) is quite rare and self-limiting. That would be true even within an artificial Darwinian environment. Therefore, we have a limiting factor on what to expect in a world with brain emulations. Also, I must note, we have a limiting factor on TedHowardNZ's description of evolution above. Evo does not often climb down from a fitness peak (thus we are stuck with a blind spot in our eyes), and certainly not when the behaviors entailed reduce fitness. Only a changing environment can change the calculus of fitness in ways that allow prosocial behaviors to flourish w/o a net cost to fitness. But even a radically changed environment could not force pure altruism to exist in a Darwinian system.
↑ comment by Sebastian_Hagen · 2015-01-14T01:10:15.291Z · LW(p) · GW(p)
These include powerful mechanisms to prevent an altruistic absurdity such as donating one's labor to an employer.
Note that the employer in question might well be your own upload clan, which makes this near-analogous to kin selection. Even if employee templates are traded between employers, this trait would be exceptionally valuable in an employee, and so would be strongly selected for. General altruism might be rare, but this specific variant would probably enjoy a high fitness advantage.
↑ comment by TedHowardNZ · 2015-01-13T22:27:45.947Z · LW(p) · GW(p)
Language and conceptual systems are so complex that communication (as in the replication of a concept from one mind to another) is often extremely difficult. The idea of altruism is one such thing. Like most terms in most languages, it has a large (potentially infinite) set of possible meanings, depending on context.
If one takes the term altruism at the simplest level, it can mean simply having regard for others in the choices of action one makes. In this sense, it is clear to me that it is actually in the long-term self-interest of everyone for everyone to have some regard for the interests of others in all choices of action. It is clear that having regard only for the short-term interest of self leads to highly unstable and destructive outcomes in the long term. Simple observation of any group of primates will show highly evolved cooperative behaviours (reciprocal altruism).
And I agree that evolution is always about optimisation within some set of parameters. We are the first species that has had choice at all levels of the optimisation parameters that evolution gets to work with, and that actually has the option of stepping entirely outside of the system of differential survival of individuals.
To date, few people have consciously exercised such choice outside of very restricted and socially accepted contexts. That seems to be exponentially changing.
Pure altruism to me means a regard for the welfare of others which is functionally equal to the regard one has for one's own welfare. I distinguish this from exclusive altruism (a regard for the welfare of others to the exclusion of self interest) - which is, obviously, a form of evolutionary, logical, and mathematical suicide in large populations (and even this trait can exist at certain frequencies within populations in circumstances of small kin groups living in situations that are so dangerous that some members of the group must sacrifice themselves periodically or the entire group will perish - so is a form of radical kin selection - and having evolved there, the strategy can remain within much larger populations for extended periods without being entirely eliminated).
There is no doubt that we live in an environment that is changing in many different dimensions. In some of those dimensions the changes are linear, and in many others the changes are exponential, and in some the systemic behaviour is so complex that it is essentially chaotic (in the mathematical sense, where very tiny changes in system parameters {within measurement uncertainty levels} produce orders of magnitude variations in some system state values).
There are many possible choices of state calculus. It seems clear to me that high level cooperation gives the greatest possible probability of system wide and individual security and freedom. And in the evolutionary sense, cooperation requires attendant strategies to prevent invasion by short term "cheating".
Given the technical and social and "spiritual" possibilities available to us today, it is entirely reasonable to classify the entire market-based economic structure as one enormous set of self-reinforcing cheating strategies. Prior to the development of technologies that enabled the full automation of any process, that was not the case; now that we can fully automate processes, it most certainly is.
So it is a very complex set of systems, yet the fundamental principles underlying those systems are not all that complex, and they are very different from what accepted social and cultural dogma would have most of us believe.
comment by William_S · 2015-01-17T00:59:53.107Z · LW(p) · GW(p)
Suppose we assume that there is a Work/Leisure resource allocation ratio (including time and money) for each EM, and that economic pressures continually move this asymptotically closer to zero. Does it seem like there could be experiences derived from the Leisure resource allocation that could make the increasing amount of resource allocation to Work worthwhile? It doesn't seem like this would work for vanilla humans: there's only so long you'd be willing to work for even a really, really good 2-week vacation. But would this still apply to EMs?
comment by Capla · 2015-01-14T03:38:26.746Z · LW(p) · GW(p)
If we achieved a multipolar society of digital superintelligent minds, and the economy ballooned overnight, I would expect that all the barriers to space travel would be removed. Could superrich, non-digital humans simply hop on starships and go to some other corner of the universe to set up culture? We may not be able to compete with digital minds, but do we have to? Is space big enough that biological humans can maintain their own culture and economy in isolation somewhere?
How long before the EMs catch up and we are in contact again? What happens then? Do you think we'd have enough mutual egalitarian instinct that we wouldn't just take what the other guy has (or, more likely, the EMs just take what we have)? What would the state of the idea of "human rights" be at this time? Could a supercompetitive economy maintain a "prime directive", such that they'd leave us alone?
comment by KatjaGrace · 2015-01-13T02:09:54.015Z · LW(p) · GW(p)
Do you expect the future to be as Bostrom describes in this section, if the world is not taken over by a single superintelligence?
↑ comment by lump1 · 2015-01-13T04:16:24.225Z · LW(p) · GW(p)
The one safe bet is that we'll be trying to maximize our future values, but in the emulated brains scenario, it's very hard to guess at what those values would be. It's easy to underestimate our present kneejerk egalitarianism: We all think that being a human on its own entitles you to continued existence. Some will accept an exception in the case of heinous murderers, but even this is controversial. A human being ceasing to exist for some preventable reason is not just generally considered a bad thing. It's one of the worst things.
Like most people, I don't expect that this value will be fully extended to emulated individuals. I do think it's worth having a discussion about what aspects of it might survive into the emulated minds future. Some of it surely will.
I've seen some (e.g. Marxists) argue that these fuzzy values questions just don't matter, because economic incentives will always trump them. But the way I see it, the society that finally produces the tech for emulated minds will be the wealthiest and most prosperous human society in history. Historical trends say that they will take the basic right to a comfortable human life even more seriously than we do now, and they will have the means to basically guarantee it for the ~9 billion humans. What is it that these future people will lack but want - something that emulated minds could give them - which will be judged to be more valuable than staying true to a deeply held ethical principle? Faster scientific progress, better entertainment, more security and more stuff? I know that this is not a perfect analogy, but consider that eugenic programs could now advance all of these goals, albeit slowly and inefficiently. So imagine how much faster and more promising eugenics would have to be before we resolve to just go for it despite our ethical misgivings? The trend I see is that the richer we get, the more repugnant it seems. In a richer world, a larger share of our priorities is overtly ethical. The rich people who turn brain scans into sentient emulations will be living in an intensely ethical society. Futurists must guess their ethical priorities, because these really will matter to outcomes.
I'll throw out two possibilities, chosen for brevity and not plausibility: 1. Emulations will be seen only as a means of human immortality, and de novo minds that are not one-to-one continuous with humans will simply not exist. 2. We'll develop strong intuitions that for programs, "he's dead" and "he's not running" are importantly different (cue parrot sketch).
↑ comment by RobinHanson · 2015-01-13T18:31:15.016Z · LW(p) · GW(p)
There is a difference between what we might each choose if we ruled the world, and what we will together choose as the net result of our individual choices. It is not enough that many of us share your ethical principles. We would also need to coordinate to achieve outcomes suggested by those principles. That is much much harder.
↑ comment by Capla · 2015-01-14T04:20:04.367Z · LW(p) · GW(p)
It's a tragedy of the commons combined with selection pressures. If there are just a few people who decide to spread out and make as many copies as possible, then there will be slightly more of those people in the next generation. Those new multipliers will copy themselves in turn. Eventually, the population is swamped by individuals who favor unrestrained reproduction. This happens even if the effect is very slight (if 99% of the world thinks it's good to have only one copy a year, and 1% usually makes only one copy but every ten years has an extra, then given enough time the vast majority of the population descends from the 1.1-copies-a-year lineage). The population balloons, and we don't have that tremendous wealth per capita anymore.
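A minimal sketch of that arithmetic, using the illustrative numbers from the paragraph above (one copy per em per year for the restrained 99%, plus an extra copy every ten years for the 1%; the time horizons are arbitrary):
```python
# Exponential takeover by a slightly faster-copying lineage.
def multiplier_share(years, restrained0=0.99, multipliers0=0.01):
    restrained = restrained0 * 2.0 ** years    # 1 copy per em per year -> x2.0
    multipliers = multipliers0 * 2.1 ** years  # 1.1 copies per em per year -> x2.1
    return multipliers / (restrained + multipliers)

for years in (10, 50, 100, 200):
    print(f"after {years:>3} years, the 1.1-copies lineage is "
          f"{multiplier_share(years):.1%} of the population")
```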
comment by KatjaGrace · 2015-01-13T02:09:06.437Z · LW(p) · GW(p)
Bostrom says many human behaviors are for 'signaling', and that some of these may not be the best way to signal in the future. But it seems many are not even the most effective way to signal now. What is going on? (this was brought up last time)
↑ comment by Sebastian_Hagen · 2015-01-14T01:19:22.143Z · LW(p) · GW(p)
Adaptation executers, not fitness maximizers. Humans probably have specific hard-coded adaptations for the appreciation of some forms of art and play. It's entirely plausible that these are no longer adaptive in our world, and are now selected against, but that this has not been the case for long enough for them to be eliminated by evolution.
This would not make these adaptations particularly unusual in our world; modern humans do many other things that are clearly unadaptive from a genetic fitness perspective, like using contraceptives.
↑ comment by DanielFilan · 2020-08-29T23:31:25.263Z · LW(p) · GW(p)
The Caplanian story about education is that a lot of what people are signalling is conformism, and it's inherently hard to move to a new better way of signalling conformism.
comment by KatjaGrace · 2015-01-13T02:03:36.144Z · LW(p) · GW(p)
How might you go about learning whether happiness, consciousness, and other good human qualities are evolutionarily adaptive enough to survive in alien future scenarios?
↑ comment by torekp · 2015-01-15T01:41:32.675Z · LW(p) · GW(p)
You mean besides "do a helluva lot of neuroscience"? A lot of our good qualities are subjective, in the most basic sense of that word. We don't even understand their physical basis well enough to predict whether (or which of them) significantly different physical systems would share them. If you want to know whether robots can take your job, the question of whether machines can think is exactly as uninteresting as whether submarines can swim (h/t Dijkstra). If you want to know whether you can become a certain kind of robot, that's a whole 'nother story.
↑ comment by almostvoid · 2015-01-13T09:05:51.526Z · LW(p) · GW(p)
In a way happiness is ingrained into specific personality types. My neighbour - next flat - is amazingly happy, even after she locked herself out and I tried to break in for her. That happiness can only be duplicated with good drugs. Then there is attitude. I was in India [not as a 5-star tourist either] and found people were content [a bit less than happy] with their lives, which compared to ours was a big, obvious difference. Anyway it's a moot point, as the Scandinavians won that round globally last time, because social democracy works - and it is not socialism, which a lot of the braindead insist it is. So will all this collapse in a sci-fi Asimov-type future? No. As cars replaced horse-cabbies and underground trains created true mass transit, happiness per se was not affected. Nor was it when aeroplanes replaced ancient clippers for travel across the seas. Personally I can't wait for the future. I even dream about it. At times. People adjust as kids to their surroundings and take it from there. Anthropologists, social scientists, historians, journalists, writers and even real scientists have shown us that we can be happy whether living in the Stone Age [Australian aborigines] or as high-tech astronauts, and everything in between.
↑ comment by Capla · 2015-01-14T04:07:58.662Z · LW(p) · GW(p)
That happiness can only be duplicated with good drugs.
What makes you say that? I suppose I may be one of those people who is just luckily happy (though I doubt it; I used to be a very angry person), but in my experience you can train happiness.
↑ comment by diegocaleiro · 2015-02-11T01:03:34.824Z · LW(p) · GW(p)
40%, give or take.
comment by KatjaGrace · 2015-01-13T02:02:37.664Z · LW(p) · GW(p)
Do you hope that brain emulations are developed before or after other AI?