One possible issue with radically increased lifespan
post by Spectral_Dragon · 2012-05-30T22:24:38.149Z · LW · GW · Legacy · 86 comments
I might need a better title (It has now been updated), but here goes, anyway:
I've been considering this for a while now. Suppose we reach a point where we can live for centuries, maybe even millennia; how do we then keep things in balance? Even assuming we're as efficient as possible, there's a limit to how many resources we can have, meaning a hard limit on the number of people that could exist at any given moment, even if we explore what we can of the galaxy and use every available resource. In a stable population there would have to be roughly the same rate of births and deaths.
How would this be achieved? By somehow limiting lifespan, or the number of children, assuming the technology is available to a majority? Or would this lead to a genespliced, technologically augmented and essentially immortal elite that the poor, unaugmented people would have no chance of measuring up to? I'm sorry if this has already been considered; I'm very uneducated on the topic. If it has, could someone maybe link an analysis of lifespans and the like?
86 comments
Comments sorted by top scores.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-05-31T07:07:31.990Z · LW(p) · GW(p)
If your civilization expands at a cubic rate through the universe, you can have one factor of linear growth for population (each couple of 2 has exactly 2 children when they're 20, then stops reproducing) and one factor of quadratic growth for minds (your mind can grow to size N squared by time N). This can continue until the accelerating expansion of the universe places any other galaxies beyond our reach, at which point some unimaginably huge superintelligent minds will, billions of years later, have to face some unpleasant problems, assuming physics-as-we-know-it cannot be dodged, worked around, or exploited.
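A minimal numeric sketch of that bookkeeping, assuming idealized constant rates (the starting values and function below are illustrative, not from the comment): a cubically growing resource budget divided among a linearly growing population leaves room for quadratically growing minds.

```python
def budget(t, resources_rate=1.0, population_rate=2.0):
    # Toy model: reachable resources grow as t^3, population grows linearly,
    # so the resources available per mind can grow as t^2.
    resources = resources_rate * t ** 3
    population = population_rate * t
    per_mind = resources / population
    return resources, population, per_mind

for t in (10, 100, 1000):
    resources, population, per_mind = budget(t)
    print(f"t={t:5d}  resources={resources:.1e}  minds={population:.1e}  size per mind={per_mind:.1e}")
```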
Meanwhile, PARTY ON THE OTHER SIDE OF THE MILKY WAY! WOO!
Replies from: Mitchell_Porter, None, shminux, Bart119, Spectral_Dragon, shminux↑ comment by Mitchell_Porter · 2012-05-31T13:22:32.227Z · LW(p) · GW(p)
This can continue until the accelerating expansion of the universe places any other galaxies beyond our reach
If dark energy is constant, and if no-one artificially moves more galaxies together, then after 100 billion years, all that's left in our Hubble volume is a merged Andromeda and Milky Way. On a supragalactic scale, the apparent abundance of the universe is somewhat illusory; all those galaxies we can see in the present are set up to fly apart so quickly that no-one gets to be emperor of more than a few of them at once.
It seems no-one has thought through the implications of this for intelligence in the universe. Intelligences may seek to migrate to naturally denser galactic clusters, though they then run the risk of competing with other migrants, depending on the frequency with which they arise in the universe. Intergalactic colonization is either about creating separate super-minds who will eventually pass completely beyond communication, or about trying to send some of the alien galactic mass back to the home galaxy, something which may require burning through the vast majority of the alien galaxy's mass-energy (e.g. to propel a few billion stars back to the home system). There will be periods, lasting billions of years, in which some galaxies are beyond material reach, but visible and thus capable of communicating.
This cosmology of alien paperclip maximizers fighting over a universe of mutually receding galaxies, is not exactly proved by science, but it would be interesting to see it fleshed out.
Replies from: XiXiDu↑ comment by XiXiDu · 2012-05-31T13:43:39.362Z · LW(p) · GW(p)
Also see this timeline of the far future.
↑ comment by [deleted] · 2012-05-31T16:04:26.358Z · LW(p) · GW(p)
If your civilization expands at a cubic rate through the universe
You're picturing the far-future civilization as a ball, whose boundary is expanding at a constant rate. But I think a more plausible picture is a spherical shell. The resources at the center of the ball will be used up, and it will only be cost-effective to transport resources from the boundary inwards to a certain distance. If the dead inner boundary expands at the same rate as the live outer boundary, we'll be experiencing quadratic, not cubic growth.
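For concreteness, a toy comparison of the two pictures (the shell thickness below is an arbitrary illustrative number): a solid ball's volume grows as r^3, while a shell of fixed thickness grows only as r^2 once the radius dwarfs the thickness.

```python
import math

def ball_volume(r):
    return 4 / 3 * math.pi * r ** 3

def shell_volume(r, thickness=10.0):
    # Only the material within `thickness` of the outer boundary is "live".
    inner = max(r - thickness, 0.0)
    return ball_volume(r) - ball_volume(inner)

for r in (100, 1_000, 10_000):
    print(f"r={r:6d}  ball={ball_volume(r):.2e}  shell={shell_volume(r):.2e}")
# Each tenfold increase in r multiplies the ball by ~1000 but the shell by only ~100.
```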
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-05-31T20:07:06.879Z · LW(p) · GW(p)
You know, you're right. I will change my reply accordingly henceforth - linear population growth, linear increase in energy usage / computing power, and quadratic increase in (nonenergetically stored) memories.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2012-05-31T20:14:47.647Z · LW(p) · GW(p)
Don't you get some pretty nasty latency on accessing those memories?
Replies from: faul_sname↑ comment by faul_sname · 2012-06-01T15:11:34.434Z · LW(p) · GW(p)
You get a linear increase in low-latency memory and a quadratic increase in high-latency memory.
Replies from: Armok_GoB↑ comment by Shmi (shminux) · 2012-06-01T00:01:15.130Z · LW(p) · GW(p)
each couple of 2 has exactly 2 children
says a poly...
↑ comment by Bart119 · 2012-05-31T16:30:17.545Z · LW(p) · GW(p)
LW in general seems to favor a very far view. I'm trying to get used to that, and accept it on its own terms. But however useful it may be in itself, a gross mismatch between the farness of views which are taken to be relevant to each other is a problem.
It is widely accepted that spreading population beyond earth (especially in the sense of offloading significant portions of the population) is a development many hundreds of years in the future, right? A lot of extremely difficult challenges have to be overcome to make it feasible. (I for one don't think we'll ever spread much beyond earth; if it were feasible, earlier civilizations would already be here. It's a boring resolution to the Fermi paradox but I think by far the most plausible. But this is in parentheses for a reason).
Extending lifespans dramatically is far more plausible, and something that may happen within decades. If so, we will have to deal with hundreds or thousands of years of dramatically longer lifespans without galactic expansion as a relief of population pressures. It's not a real answer to a serious intermediate-term problem. Among other issues, such a world will set the context within which future developments that would lead to galactic expansion would take place.
The OP's point needs a better answer.
Replies from: RomeoStevens↑ comment by RomeoStevens · 2012-06-01T00:53:59.358Z · LW(p) · GW(p)
Offloading from earth becomes very easy when brains are instantiated on silicon.
↑ comment by Spectral_Dragon · 2012-06-02T13:52:08.247Z · LW(p) · GW(p)
That all sounds rather well, but sort of lights warning lights in the back of my head - that sounds suspiciously like an Applause Light.
My current issue would probably be regulation - it's possible this will all happen within a century. What if it isn't some superintelligent beings billions of years from now who get to handle it? What if WE have to face it, with less-than-rational people wanting to live as long as possible (we might want to too, but hopefully we'd at least try to weigh the consequences), then what? I'm not asking what anyone else should do. I'm asking what WE should do in this situation. Assuming it first becomes available in developed countries, to those who can pay for it, and only gradually spreads - which to me seems most likely - what happens and what can we do about it?
↑ comment by Shmi (shminux) · 2012-05-31T19:22:31.908Z · LW(p) · GW(p)
This can continue until the accelerating expansion of the universe places any other galaxies beyond our reach, at which point some unimaginably huge superintelligent minds will, billions of years later, have to face some unpleasant problems, assuming physics-as-we-know-it cannot be dodged, worked around, or exploited.
Due to my innate, if misguided, belief in the fair universe, I hope that everyone can get their own baby universe to nucleate at will. The mechanism has been proposed before:
The bubble universe model proposes that different regions of this inflationary universe (termed a multiverse) decayed to a true vacuum state at different times, with decaying regions corresponding to "sub"- universes not in causal contact with each other and resulting in different physical laws in different regions which are then subject to "selection", which determines each region's components based upon (dependent on) the survivability of the quantum components within that region. The end result will be a finite number of universes with physical laws consistent within each region of spacetime.
All these "unimaginably huge superintelligent minds" have to do is to control this process of bubbling.
Replies from: faul_sname, jhuffman↑ comment by faul_sname · 2012-06-01T15:13:40.832Z · LW(p) · GW(p)
not in causal contact with each other
I see potential problems here, at least for any humans or social nonhumans...
Replies from: shminux↑ comment by Shmi (shminux) · 2012-06-01T16:53:18.730Z · LW(p) · GW(p)
I suspect that being a demiurge of your own universe can be pretty enticing.
comment by [deleted] · 2012-05-31T00:32:58.158Z · LW(p) · GW(p)
Seriously, why is this post being downvoted? It's a legit question and the OP isn't making any huge mistakes or drawing any stupid conclusions. He's just stating some confusion and asking for links.
I actually feel pretty mad about this.
Replies from: JenniferRM, beriukay, wedrifid↑ comment by JenniferRM · 2012-05-31T02:37:45.927Z · LW(p) · GW(p)
Upvoted after seeing the comment. I thought about downvoting when I came to the thread and thought of doing so for a minute or three. The problem I had was the title's tone of summarizing once and for all what "the consequences of transhumanism are" and then doing the job really really poorly. I have a vague (but declining?) "my tribe"-feeling towards transhumanism and don't like seeing it bashed, or associated with straw-man-like arguments.
I think a title that avoided this inclination could have been something like "Is immortalist demography bleak?" or maybe "I fear very long lives lead to resources crunches and high gini coefficients" or you know... something specific and tentative rather than abstract and final. Basically, good microcontent.
One thing I've just had to get used to is that LWers are bad at voting. Comments I'm proud of are frequently ignored, and comments that I think are cheap tricks frequently get upvoted. Whatever people see first, right after an article, will generally get upvoted much more than normal. It's not because quality comes first when sorting that way, because if you look at ancient posts where the sort order of comments is forced to be chronological, the very first comment will frequently have many upvotes even when it is inane.
I've been wondering how to "fix it" but I have nothing concrete. I fear that it is just that "typical internet users" are habituated to clicking on accessible "like" buttons because that's how they interact with facebook, and internet communities inevitably decay absent heroically good site design/management, and so on.
Replies from: None, Spectral_Dragon, steven0461↑ comment by [deleted] · 2012-05-31T02:51:07.796Z · LW(p) · GW(p)
Comments I'm proud of are frequently ignored, and comments that I think are cheap tricks frequently get upvoted.
This. Very much this. Not a lot of my stuff gets upvoted. Yesterday, I think, I had an existential crisis about it: "oh my god, do I suck?". Yes, that's stupid, but I often find it deeply disturbing that I am not a demigod.
LW is better than reddit, but yeah.
Another observation is that the downvoters seem to come out first. Posts (articles in discussion, specifically) that end up highly voted usually start out hovering around zero or going negative before rising. This post for example.
EDIT: Actually, I'd really like to see some graphs and stats on this from the LW database. Another way to get more useful data would be to allow people to cast a vote for which of their own comments they are most proud of, and see if this vote correlates with the community vote.
↑ comment by Spectral_Dragon · 2012-05-31T12:04:51.028Z · LW(p) · GW(p)
Thank you! That was what I was looking for in a title, I just couldn't seem to find the right words. I'll be editing the title in a minute. I also got pretty intimidated - within 10 minutes I'd lost about a fifth of my total karma and no one would tell me why. That seems to me another weakness - we are too quick to vote and seemingly not good enough at debating some topics and explaining WHY something deserves to be voted up/down.
Replies from: RobertLumley↑ comment by RobertLumley · 2012-05-31T13:44:57.782Z · LW(p) · GW(p)
I was very unhappy to see this downvoted as much as it was, although I thought it may have been because of something in the sequences I hadn't gotten to yet. But I especially try to avoid downvoting new people who are obviously making an effort, as you were. So I'm glad this corrected itself.
Replies from: billswift↑ comment by billswift · 2012-05-31T16:38:18.464Z · LW(p) · GW(p)
On rewarding effort, rather than results and quality:
Actually, they were being treated like children but they had been their whole lives and didn't know the difference.
Treating like an adult: You're fucking up. Here's how to fix it. Now fix it.
Treating like a child: You're trying really hard! Good job! It's not the result that matters, it's just that you try!
(That's actually a functional way to deal with children up to a point. In most cases they can't do a real job. But when they get to the point they can, when they're ready to learn to be adults with adult responsibilities, "it's a good try" should never cut it.)
- John Ringo, The Last Centurion, p.369
↑ comment by steven0461 · 2012-05-31T05:53:38.200Z · LW(p) · GW(p)
I've been wondering how to "fix it" but I have nothing concrete.
Letting go of the assumption that every user account's votes should have the same weight would probably go a long way. I'm not saying such a measure is called for right now; I'm just bringing it up to get people used to the idea if things get worse.
Replies from: RobertLumley, D2AEFEA1, evand↑ comment by RobertLumley · 2012-05-31T13:42:16.548Z · LW(p) · GW(p)
Letting go of the assumption that karma means much above -3 would also go a long way. Karma is just here really to keep trolls away. If there are vast differences in Karma scores posted from around the same time, then maybe that means something. I know personally that the comments and posts I am most proud of are, generally speaking, my least upvoted ones.
To consider an example, this and this were posted around the same time, both to discussion. The former initially received vastly more karma than the second. But the former, while amusing, has virtually no content. The second is a well reasoned, well supported post. Did the former's superior karma mean that it was a better article? Obviously not. That's why the second was promoted and, once it was, eventually overtook the former.
Another obvious example is the sequences. Probably everyone here would agree that at least 75 of the best 100 posts on LW are from the sequences. But, for the most part, they sit at around 10-20 karma. Those that are outside that are the extraordinarily popular ones, which are linked to a lot, and sit at probably around 40 karma. This is not an accurate reflection of their quality versus other articles that I see around 10-40 karma.
I really try (but don't always succeed) to vote karma based on "Is this comment/post at a higher or lower karma score than I think it should have?". If everyone used this, then Karma scores might have some meaning relative to each other. But I don't think many people use this strategy, and the result is that karma scores are skewed towards more read and funnier posts. Which generally tend to be shorter and less substantial.
Replies from: Vladimir_Nesov, D2AEFEA1, army1987, TheOtherDave↑ comment by Vladimir_Nesov · 2012-05-31T14:18:41.605Z · LW(p) · GW(p)
Letting go of the assumption that karma means much above -3 would also go a long way. Karma is just here really to keep trolls away.
When a comment I make is not upvoted to at least +3, I give a moment's consideration to the question of what I did wrong (and delete some of the comments that fail this test).
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-06-01T23:58:41.166Z · LW(p) · GW(p)
Some of your comments should be useful to the elite but not the masses. Such comments are only sometimes voted to +3. E.g., IIRC you regularly make decision theory comments that don't go to +3, so it seems you don't follow this rule even when talking about important things.
(It's only semi-related, but who cares about the votes of the masses anyway? You're here to talk to PCs and potential PCs, which is less than 1% of the LessWrong population. You're beyond the point of rationality where you have to worry about not caring about NPCs becoming a categorical rule followed by everyone. On that note, you should care about the opinion of the churchgoer more, and the LessWronger less. Peace out comrade.)
↑ comment by D2AEFEA1 · 2012-05-31T14:46:32.301Z · LW(p) · GW(p)
Would it be difficult (and useful) to change the voting system inherited from reddit and implement one where casting a vote would rate something on a scale from minus ten to ten, and then average all votes together?
Replies from: Emile, RobertLumley↑ comment by RobertLumley · 2012-05-31T14:52:07.023Z · LW(p) · GW(p)
Difficult? Probably not. Useful is debatable. I'm not sure that the Karma system is important enough to consider in much detail. I just don't see much low hanging fruit there.
↑ comment by A1987dM (army1987) · 2012-08-18T23:20:20.197Z · LW(p) · GW(p)
I really try (but don't always succeed) to vote karma based on "Is this comment/post at a higher or lower karma score than I think it should have?".
So do I.
↑ comment by TheOtherDave · 2012-05-31T13:54:31.486Z · LW(p) · GW(p)
Letting go of the assumption that karma means much above -3 would also go a long way. Karma is just here really to keep trolls away
I wonder how hard it would be to build a LW addon (like the antikibbitzer) that replaced numeric readouts with a tier label (e.g. "A" >=10, "F" for <=-3, etc.), and how using that would affect my experience of LW.
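As a rough sketch of what the mapping itself might look like (the intermediate thresholds are invented for illustration; only the "A" >= 10 and "F" <= -3 endpoints come from the comment above):

```python
# Hypothetical karma-to-tier mapping; thresholds other than A and F are made up.
TIERS = [(10, "A"), (5, "B"), (0, "C"), (-2, "D")]

def karma_to_tier(score: int) -> str:
    for threshold, label in TIERS:
        if score >= threshold:
            return label
    return "F"  # anything at or below -3

print([karma_to_tier(s) for s in (25, 7, 1, -1, -5)])  # ['A', 'B', 'C', 'D', 'F']
```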
Replies from: RobertLumley↑ comment by RobertLumley · 2012-05-31T13:56:48.953Z · LW(p) · GW(p)
I think that would be pretty awkward, since posts would start in the "C" range. I think most people here would consider getting a "C" bad. But tiers make for an interesting concept, if you move away from grades.
Replies from: Emile, TheOtherDave↑ comment by TheOtherDave · 2012-05-31T18:17:36.598Z · LW(p) · GW(p)
Sure; the specific tier thresholds are secondary and might even be user-definable parameters, so people don't need to know what tier they're in on my screen, if knowing that would make them feel bad.
↑ comment by D2AEFEA1 · 2012-05-31T14:56:00.012Z · LW(p) · GW(p)
I would second that. On the other hand, how would you decide what weight to give to someone's vote? Newcomers vs older members? Low vs high karma? I'm not sure a function of both these variables would be sufficient to determine meaningful voting weights (that is, I'm not sure such a simple mechanism would be able to intelligently steer more karma towards good quality posts even if they were hidden, obscure or too subtle).
↑ comment by evand · 2012-05-31T14:33:17.929Z · LW(p) · GW(p)
What if the site just defaulted to a random sort order, so different people are presented with different comments first? That would still tend to bias in favor of older comments getting high presentation rank more. I'm not sure that's such a bad thing, though.
↑ comment by beriukay · 2012-05-31T08:46:12.128Z · LW(p) · GW(p)
I think xkcd is appropriate here.
Replies from: jhuffman
comment by JoshuaZ · 2012-05-31T00:29:57.878Z · LW(p) · GW(p)
This question shouldn't be downvoted as much as it is - it is a legitimate question, although it would probably go better in the open thread than as its own post in the discussion section.
Yes, this has been discussed a fair bit - the main argument in most transhumanist circles when this comes up is that everyone will get the benefits and that birth rates will go down accordingly (possibly by enforcement). In that regard, there's a fair bit of data showing that human birth rates go down naturally as lifespan goes up. There are other responses, but this is the most common. It is important to realize that this issue is unlikely to need to be seriously addressed for a long time.
Replies from: RobertLumley, None, Spectral_Dragon↑ comment by RobertLumley · 2012-05-31T13:49:18.646Z · LW(p) · GW(p)
The volume of comments generated indicates to me that it is too large for an open thread.
↑ comment by [deleted] · 2012-05-31T00:40:28.102Z · LW(p) · GW(p)
the main argument in most transhumanist circles when this comes up is that everyone will get the benefits and that birth rates will go down accordingly ... It is important to realize that it is unlikely that this issue will need to be seriously addressed for a long way off
Stinky motivated stopping. Birth rates aren't the only problem. Personal growth causes the same problem.
Agree about not actually needing to address this yet. Our future selves will be much smarter and will still have plenty of time.
Replies from: Dolores1984↑ comment by Dolores1984 · 2012-06-01T17:44:25.044Z · LW(p) · GW(p)
Speed of light latency puts limits on the maximum size of a brain. Or, rather, it enforces a relationship between the speed of operation of a brain and its size. At a certain point, making a sphere of computronium bigger no longer makes it more effective, since it needs to talk to working components that are physically distant. Granted, it's not a small limit, especially if people want a bunch of copies of themselves.
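A rough sketch of the constraint (the clock rates below are arbitrary illustrative figures): a signal limited to light speed needs about 2R/c for a round trip across a brain of radius R, so components that want to stay in step at frequency f must sit within roughly c/(2f) of each other.

```python
C = 3.0e8  # speed of light, m/s

def sync_radius_m(clock_hz):
    # Largest radius over which a round-trip signal fits inside one clock period.
    return C / (2 * clock_hz)

for hz in (1e3, 1e6, 1e9):  # illustrative "thought rates"
    print(f"{hz:.0e} Hz -> within ~{sync_radius_m(hz):.3g} m")
# Faster minds must either be physically smaller or tolerate laggy, distant components.
```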
↑ comment by Spectral_Dragon · 2012-05-31T12:30:40.800Z · LW(p) · GW(p)
Good point. Should I move it? If so, I don't know how.
I'd really like to see anyone here who really knows what they're talking about (I don't, for example, but I want to know) discuss it here. Currently looking for both a plausible situation and solution.
comment by knb · 2012-05-31T10:24:39.838Z · LW(p) · GW(p)
A lot of people seem to be shrugging this question off, saying basically, "Transhuman minds are ineffable, we can't imagine what they would do." If we have some kind of AI god that rapidly takes over the world after a hard takeoff, then I think that logic basically applies. The world after that point will reflect the values implemented into the AI god.
Robin Hanson has described a different scenario, that I take somewhat more seriously than the AI god scenario.
This long competition has not selected a few idle gods using vast powers to indulge arbitrary whims, or solar system-sized Matrioshka Brains. Instead, frontier life is as hard and tough as the lives of most wild plants and animals nowadays, and of most subsistence-level humans through history. These hard-life descendants, whom we will call “colonists,” can still find value, nobility, and happiness in their lives. For them, their tough competitive life is just the “way things are” and have long been.
Hanson is describing a return to the Malthusian condition that has defined life since the beginning. The assumptions seem fairly strong to me:
- Competition won't cease.
- The darwinian drive to reproduce will remain (because it is adaptive).
- Resources are functionally limited, there is only so much usable energy.
- Life will be hard, there will be lots of people "starving" (e.g. not being able to access enough energy to keep functioning, or not being able to buy replacement parts/server space) at the margins, as it is in the natural environment.
comment by Shmi (shminux) · 2012-05-30T23:23:52.290Z · LW(p) · GW(p)
This reminds me. An interesting question is, assuming constant mass/person, how long until the speed of light becomes a limiting factor? I.e. given a fixed growth rate, at what total population would the colonization speed be approaching the speed of light just to keep the # humans per cubic parsec of space constant? It is clear that this will happen at some point, given the assumptions of constant birth rate and constant body mass, because the volume of colonized space only grows as time cubed, while the population grows exponentially.
Here is a back-of-the-envelope stab at it. I have not googled it beforehand, because it's fun to do one's own estimate.
Assume 1 habitable planet (10^10 people) per cubic light year; then the total number of colonized planets ~ (#years)^3 after the onset of interstellar travel at near light speed. (I am not accounting for relativistic time dilation, though the correction should be small, since only a minority of people are in flight at any given time.) I'm ignoring factors of order 1, such as 4π/3 for the volume of a sphere.
Assume 0.1%/year growth rate. This is about 1/20 of the current birth rate. Total number of occupied planets = 1.001^(#years).
The two numbers become comparable after about 30,000 years. This is less than one third of the size of the Milky Way galaxy (in light years). After that, the population growth will be limited by the known physical laws, long before other galaxies are explored.
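The same estimate as a few lines of code one can tweak (the 0.1%/year growth rate and the one-planet starting population come from the comment above; everything else is illustrative):

```python
def crossover_year(growth=1.001):
    # Population (normalized to the starting planet) vs. carrying capacity in
    # planets, which grows as t^3 once near-lightspeed colonization starts.
    t = 10  # skip the first few years, where both are ~1
    while growth ** t < t ** 3:
        t += 1
    return t

print(crossover_year())  # ~31,000 years, in line with the ~30,000-year figure above
```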
Replies from: JenniferRM↑ comment by JenniferRM · 2012-05-31T01:13:35.768Z · LW(p) · GW(p)
Other physical angles:
If the economy continues to grow at roughly the present rate, using more energy as it does so, when will we be consuming the entire solar energy output each year? And if this energy growth happens on the surface of the earth and heat dissipation works in a naive way then how long till the surface of the earth is as hot as the sun? Answers: A bit less than 1400 years from now to be eating the sun, and a bit less than 1000 years from now till Earth's surface is equally hot, respectively. Blog post citation!
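A back-of-the-envelope version of the first of those numbers (the current power use, growth rate, and solar output below are my assumptions, roughly those used in the cited blog post, not figures from this comment):

```python
import math

current_power = 1.8e13  # assumed current world power use, ~18 TW
solar_output = 3.8e26   # assumed total power output of the Sun, in watts
growth = 0.023          # assumed ~2.3% annual growth in energy use

years = math.log(solar_output / current_power) / math.log(1 + growth)
print(f"~{years:.0f} years")  # roughly 1350, consistent with "a bit less than 1400"
```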
The same blogger did a followup post on the possibility of economic growth that "virtualizes our values" (my terminology, not the blogger's; he calls it "decoupling") so that humanity gets gazillions of dollars worth of something while energy use is fixed by fiat in the model. Necessarily the "fluffy stuff" (his term) somehow takes over the economy such that things like food are negligible to economic activity. With 5% "total economy" growth and up-to-an-asymptote energy growth, by 2080 98 percent of the value of the economy is made up of "fluffy stuff", which seems to imply that real-world food and real-world gasoline would be less than 2% of the economy... which implies that the average paycheck would be spent on very very little food or gas and quite a lot on "fluffy stuff".
The blogger takes this as evidence that the fluffy economy is impossible and (implicitly) that we should just accept that civilization has peaked and should turn into lowered-expectations-hippies, but to me his "ridiculous" energy scenario sounds suspiciously similar to Hanson's em scenario. What use has an em for a real hamburger made out of real beef grown with real grass shined on by real sunshine? Very little use. It would be like having the deed to an extrasolar planet. How awesome would it be to be an em? Very much awesome :-)
Replies from: steven0461, mwengler↑ comment by steven0461 · 2012-05-31T05:51:36.281Z · LW(p) · GW(p)
The blogger takes this as evidence that the fluffy economy is impossible and (implicitly) that we should just accept that civilization has peaked and should turn into lowered-expectations-hippies
See an earlier discussion for some more criticism of the blogger's claim.
↑ comment by mwengler · 2012-05-31T13:20:01.450Z · LW(p) · GW(p)
Ems or other more efficient versions of living intelligence just put off the exponential Malthusian day of reckoning by 100, 1000, or 10,000 years. As long as you have reproducing life, its population will tend to, or "want to", grow exponentially, while I can't think of a reason in the world to expect technical improvements to be exponential.
I also wonder at what point speciation becomes inevitable, or at least extremely likely. Presumably in a world with 10^N times more ems than we now have people, and with very fast em thinking speeds restricting their "coherence length" (the distance over which they have significant communication with other ems within some unit of time meaningful to them) to perhaps tens of km, we would, it seems, have something like 10^M civilizations averaging 10^(N-M) times as complex as our current global civilization, with population size standing in as a rough measure of complexity. Whether ems want to compete or not, at some point you will have slightly more successful or aggressive large civilizations butting up against each other for resources.
In the long run, I think, exponentials dominate. This is the lesson on compound interest I take from Warren Buffett. Further, one of the lessons I take from Matt Ridley's "Rational Optimist" is that the Malthusian limit is the rule, and the last two centuries saw us nearly hitting it a few times, with something like the "Green Revolution" coming along just in time to avoid it. Between what Hanson has to say, what Ridley has to say, and what Buffett has to say (about compound interest, i.e. exponentials), it sure seems likely that in the long run Malthus is the rule and our last one or two centuries have been a transition period between Malthusian equilibria.
comment by orthonormal · 2012-05-31T05:31:45.480Z · LW(p) · GW(p)
The fertility rate is far more important than mortality. You can calculate for yourself that even if humans were immortal, an average fertility below 1 child per parent does not lead to exponential population growth, while average fertility above 1 child per parent means exponential growth even if people keep dying at seventy.
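A toy cohort model of this point (all numbers are illustrative): with immortality, sub-replacement fertility gives a population that converges to a finite limit, while above-replacement fertility grows exponentially even if every cohort eventually dies.

```python
def immortal_total(fertility_per_parent, generations=40):
    cohort, total = 1.0, 1.0
    for _ in range(generations):
        cohort *= fertility_per_parent  # size of each new generation
        total += cohort                 # nobody dies, so cohorts accumulate
    return total

def mortal_cohort(fertility_per_parent, generations=40):
    return fertility_per_parent ** generations  # only the newest cohort is alive

print(immortal_total(0.9))  # ~9.9: bounded, approaching 1 / (1 - 0.9) = 10
print(mortal_cohort(1.1))   # ~45 and still compounding: exponential despite mortality
```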
comment by buybuydandavis · 2012-05-31T01:30:12.087Z · LW(p) · GW(p)
What makes you think we have meaningful opinions to share on the options available to beings that are pushing the carrying capacity of the galaxy?
Replies from: Spectral_Dragon↑ comment by Spectral_Dragon · 2012-05-31T12:15:37.689Z · LW(p) · GW(p)
I don't, but I'm NOT going to stop thinking about something just because smarter beings are considering it as well.
The problem is that I'm worried we'll reach, far too soon, the point of having to choose between long lives with no artificial reduction in birth rates, an elite with an advantage, and most people carrying on as usual. We won't have progressed to superintelligence when this first becomes an issue. It might even be that we've not left the solar system by that time. And most people will definitely be more against letting a possible AI shape the fate of mankind than against the thought of using science (it's a vague term, but I'm not sure what will increase our lifespans most first - tech, drugs or gene splicing).
So we might have to face this issue, even within our own lifetimes. While I still have only a crude understanding, I think it's possible we have to face something like this. At the very least, the amount of resources we have even without increased lifespans will pose problems.
Replies from: Bart119↑ comment by Bart119 · 2012-05-31T16:40:46.473Z · LW(p) · GW(p)
I'm with you on thinking this is a serious issue. I also think the LW community has done a very poor job of dismissing all such concerns, often with derision. A post I made on the subject got downvoted into oblivion, which is OK (community standards and all). I accept some of the criticisms, but expect to bring the issue up again with them better addressed.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2012-05-31T17:10:30.495Z · LW(p) · GW(p)
There were many other reasons to downvote your post, as discussed in a fair bit of detail in the comments.
Replies from: Bart119↑ comment by Bart119 · 2012-05-31T17:19:14.170Z · LW(p) · GW(p)
I understand that. I said it was OK. But I thought Spectral_Dragon in particular might be interested, flaws and all. My observation of derision of such concerns is not about my post, but many other places which I have seen when researching this.
Replies from: Spectral_Dragon↑ comment by Spectral_Dragon · 2012-05-31T20:36:49.177Z · LW(p) · GW(p)
It's interesting, but it doesn't cover the points I'm most concerned about - within a century, it's likely this will become a problem and birth/death rates will have to be regulated. And given that not everyone is rational... how do we do it? Cost, promising not to have kids, or what?
Also, I agree that the human mind might not function at optimum efficiency that long. It's a side point, and can probably be fixed, but... We're NOT adapted to live more than a few millennia at best, maybe even only a few centuries. Though this is only speculation.
comment by [deleted] · 2012-05-30T23:55:07.891Z · LW(p) · GW(p)
I think it's a biggish problem to solve.
The first thing that makes it easier is that I see no reason we ought to increase the number of people very much. I think N billion is enough of a party already. Once we all know each other, maybe we can talk about making more. (for the people who are still alive)
Of course that doesn't make the problem go away. If we want to grow as people continuously, we will eventually hit limits. Especially since we might decide that exponential growth is the only acceptable thing.
We might have to accept that there is a non-infinite amount of utility in the universe. That's no reason to give up, though. We should still make sure we at least hit the maximum possible.
See the fun theory sequence, specifically this one, for an exploration of these issues.
Replies from: mwengler↑ comment by mwengler · 2012-05-31T12:51:02.982Z · LW(p) · GW(p)
If you are part of a group that wants to grow slowly and there are even two intelligences out there who are not on board with your program, your group will have to have a CEV which kills off the dissenters. Otherwise, "compound interest" growth rates of the dissenters will turn your slow-growers into a footnote, a vanishingly small "also-ran" in the evolution of transhumanists.
Replies from: None↑ comment by [deleted] · 2012-05-31T15:47:07.604Z · LW(p) · GW(p)
kills off
This is why we need FAI... There are much better solutions to group disagreements than murder.
Replies from: mwengler↑ comment by mwengler · 2012-06-01T05:55:58.698Z · LW(p) · GW(p)
Sure, alternatives to murder. But are there alternatives to "kill off?" How do you beat a population which consistently out-reproduces your population? Either you make their growth rate < their reproduction rate, or you lose. As much smarter as an FAI might be than is a UNI (Unfriendly Natural Intelligence), the laws of compound interest would apply to both.
Replies from: wedrifid↑ comment by wedrifid · 2012-06-01T06:28:21.869Z · LW(p) · GW(p)
How do you beat a population which consistently out-reproduces your population?
Confine them in a finite space. Wait.
Replies from: mwengler, mwengler↑ comment by mwengler · 2012-06-01T13:14:17.144Z · LW(p) · GW(p)
How do you beat a population which has all the capabilities of your population, EXCEPT they out-reproduce your population?
"Confine them in a finite space. Wait." You assume here that your population has advantages over the population you are confining. Your population has the power, the additional intellect, the grandfathered-in control of more resources, perhaps a persistent AND ENFORCED information advantage over the other population.
Without a significant asymmetry in your population's favor, when you confine them in a finite space, their population grows beyond yours; they now have the motivation and the ability to beat your population, and to resolve in their favor whatever resource or information asymmetry has previously allowed your population to dominate them.
We are not talking about humans vs cockroaches here. Or even if we are, the answer is the same. If humans became threatened by cockroach populations, we would (and do) simply kill them. We also come up with more clever ways of controlling their population, we introduce things into their environment which bring their population growth rate way down, even negative (i.e. we kill them?).
But transhumans A vs transhumans B, where A has decided to reproduce slowly? Unless B makes some correspondingly self-limiting decision, all other things being equal, population A will eventually dominate.
As I write this, I do realize my claim was that population A would need to be willing to kill defectors, to kill people who were obviously population B. I was also assuming an asymmetry: that population A would be willing to kill B, and B not willing, or not able, to kill A. The scenario I was thinking of was that the slow reproducers, at least for now, have the population advantage and therefore the power advantage.
My underlying idea then would be that IF you propose a slow-growth policy for Transhumans, AND you wish to have this proposal survive for a long time, THEN you must PREVENT a significant population of transhumans who are just like you, EXCEPT they are fine with fast growth. You cannot allow a population which has the same fundamental capabilities as you do EXCEPT they have a higher population growth rate, to grow to a population size as big as yours, because if you do, you will soon pass a point where your own survival as a group is beyond your control.
If having the fast reproducers eventually "win," eventually dominate your group, is OK with you, then what is the point of the slow growth prescription in the first place?
If having the fast reproducers eventually win is not OK with you, then what must you assume to even think that your population can produce an asymmetry which will cause the fast reproducers to either 1) stop reproducing even though they want to, and/or 2) die at a rate which counters their reproduction rate?
Either you crush them while you have some temporary advantage, or they eventually beat your population out. There is no third way, is there?
comment by Vaniver · 2012-06-01T17:20:32.549Z · LW(p) · GW(p)
It is unlikely that society will ever neatly divide into "haves" and "have nots"- be suspicious of sharp divisions. There should be lots of boundary cases, and probably a smooth gradient.
The primary question for answering these sorts of questions, I think, is whether modification to existing agents is more or less effective than modification to new agents. It seems more likely to me that genetic engineering can radically increase human lifespans than interventions in already developed humans- and if that's the case, then there won't really be immortal elites because the younger generation will always have rosier prospects than the older generation (unless a cap is reached, of course). Currently, we would expect a 200 year old programmer to run circles around a 20 year old programmer, because experience is really valuable. But if the 20 year old has 10 generations of intelligence boosts that the 200 year old can't get, then the 20 year old is probably going to win- especially if one of those boosts is easier experience transfer!
comment by Kaj_Sotala · 2012-05-31T07:42:54.345Z · LW(p) · GW(p)
My essay on this (go to the original article to see the hyperlinks to some of the references, as I'm too lazy to copy them here):
One of the most common objections against the prospect of radical life extension (RLE) is that of overpopulation. Suppose everyone got to enjoy an eternal physical youth, free from age-related decay. No doubt people would want to have children regardless. With far more births than deaths, wouldn't the Earth quickly become overpopulated?
There are at least two possible ways of avoiding this fate. The first is simply having children later. Even if nobody died of aging, there would still be diseases, accidents and murders. People who've looked at the statistics estimate that with no age-related death, people would on average live to be a thousand before meeting their fate in some way. Theoretically, if everyone just waited to be a thousand before having any kids, then population growth would remain on the same level as it is today.
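For a sense of where an estimate like that comes from (the hazard rate below is my illustrative assumption, not a figure from the essay): with a constant annual risk of death from accidents, disease and murder, expected lifespan is roughly one over that risk.

```python
annual_risk = 0.001              # assumed ~1-in-1000 chance of dying in any given year
print(1 / annual_risk)           # expected lifespan: about 1000 years
print((1 - annual_risk) ** 100)  # ~0.90: chance of getting through the first century
```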
Of course, this is completely unrealistic. Most people aren't going to wait until they are a thousand to have kids. But they might still have them considerably later than they do now. The average age for having your first child has already gone up as lifespans have grown. If you're going to live for a thousand years, why rush with having kids as soon as possible?
Currently there is (at least for women) an effective maximum cap on how high the age for first childbirth can grow, since once a mother's age grows beyond 35 or so, the probability for birth defects goes up radically. However, current reproductive technology has already made pregnancies over the age of 50 a real possibility. At the moment, this frequently requires egg donation, but a rudimentary ability to produce eggs from stem cells may not be that far away, certainly a lot closer than RLE. By the point that we have RLE, we'll likely also have the ability to produce new sperm and eggs from a person's own cells. Combined with an overall better condition of the body brought about by RLE, this seems like it could increase the maximum age for pregnancy indefinitely. With that, the average age for a first birth going up at least a couple of decades doesn't seem all that unrealistic.
Besides the average age for having kids going up, there's the possibility of larger family groups. Must we necessarily have a norm for children being the kids of exactly two adults? As a personal example, my best friend has a daughter who's two years old right now. I've been over there helping take care of the girl a lot, enough to make me feel like she's part of my family as well. Even if I never had children of my own, I already feel something resembling the feelings related to having a child of your own. In addition to growing attached to the children of your close friends, polyamory is also gradually becoming more common and accepted. With romantic relationships involving more than two people we also get children with more than two parent-like figures. Many have a strong desire to pass on their genes, something which can be helped with e.g. the recent creation of 3-parent human embryos.
So with both the prospect of having kids later and a child having more than two parents, I really don't think that the population problem is as hard to solve as some people make it out to be. It should also be noted that it's not like scientists are going to develop RLE one day, and then the next, blam, everyone lives forever. Rather, the technology will be developed in stages. In the early stages, there are going to be a lot of people who have grown far too frail to be helped, and it might take a long time before we hit actuarial escape velocity, so there might simply be an e.g. 10-year bump on people's lifespan and then 20 years could pass before the next major breakthrough.
The treatments may also not be affordable for everyone at first, though it needs to be noted that governments will have a huge incentive to subsidize the treatments for everyone to reduce the healthcare costs of the elderly and to push back the age for retirement. A 2006 article in The Scientist argues that simply slowing aging by seven years would produce large enough of an economic benefit to justify the US investing three billion dollars annually to this research. The commonly heard "but only the rich could live forever" argument against RLE does not, I feel, take into account the actual economic realities (amusingly enough, as its supporters no doubt think they're the economically realistic ones).
So we're going to get a slowly and gradually lengthening average lifespan, which at first probably won't do much more than reverse the population decline that will hit a lot of Western countries soon. The replenishment rate required to keep a population stable is about 2.1 children per woman. The average fertility rate in a lot of industrialized countries is well below this - for instance, 1.58 in Canada, 1.42 in Germany, 1.32 in Italy, 1.20 in Japan and 1.04 in Hong Kong. The EU average is 1.51. Yes, in a lot of poor countries the figures are considerably higher - Niger tops the chart with 7.68 children per woman - but even then the overall world population growth is projected to start declining around 2050 or so.
To give a sense of proportion: suppose that tomorrow, we developed literal immortality and made it instantly available for everyone, so that the death rate would drop to zero in a day, with no adjustment to the birth rate. Even if this completely unrealistic scenario were to take place, the overall US population growth would still only be about half of what it was during the height of the 1950s baby boom! Even in such a completely, utterly unrealistic scenario, it would still take around 53 years for the US population to double - assuming no compensating drop in birth rates in that whole time.
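A quick check of that doubling-time figure (the birth rate below is my assumed value, roughly the US crude birth rate of the period): with deaths at exactly zero, growth equals the birth rate, and the doubling time follows directly.

```python
import math

births_per_1000 = 13.0                           # assumed annual US crude birth rate
growth = births_per_1000 / 1000                  # net growth rate if the death rate is zero
doubling_time = math.log(2) / math.log(1 + growth)
print(f"~{doubling_time:.0f} years to double")   # ~54 years, close to the essay's ~53
```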
We've adapted to increasing lifespans before. Between 1950 and 1990, the percentage of population over 65 almost doubled in Sweden, going from 10.3 to 18.1. (In the United Kingdom it went up from 10.7 to 15.2, in the US from 8.1 to 12.6, and in the more-developed countries overall it went from 7.6 to 12.1.) The beauty of economics is that like all resource consumption, having children is a self-regulating mechanism: if a growing population really does exert a heavy strain on resources, then it will become more expensive to have children, and people will have less of them. The exception is in the less industrialized countries where children are still a net economic benefit for their parents and not a cost, but most of the world is industrializing quickly. Over the last fifty years, the gaps between the rich and poor have gotten smaller and smaller, to the point where people are calling the whole concept of a first world/third world divide a myth. I see no reason to presume that radical life extension and indefinite youths would pose us any problems that we couldn't handle, at least not on the overpopulation front.
Of course, this presumes that we'll remain as basically biological entities. If we develop uploading and the ability to copy minds at will, well... that's a different kettle of fish, with the various evolutionary dynamics involved in that being a much larger potential problem. And of course, if we get uploading, we're probably close to AI and a full-scale intelligence explosion, with all the issues that that involves.
Replies from: mwengler↑ comment by mwengler · 2012-05-31T12:44:21.181Z · LW(p) · GW(p)
The long-lived, slowly reproducing transhumans have to be willing to kill off "dissenters," long-lived transhumans who reproduce at a faster rate, or else they will be a footnote in our evolution, the Neanderthals or also-rans on the path to wherever we get. Either you out-reproduce your competitors or you kill them, or you lose.
It seems to me a lot of future scenarios here depend on a kind of top-down imposed control and uniformity you just don't see among intelligent competitors. It only takes a small number of escapees from the control who pick a strategy that eats the lunch of the top-down controlled group to bring that whole thing to an end in finite time.
Whatever you propose has to be robustly successful against conceivable variations. Long-lived slow reproducers MAY be, if they are ruthless against other intelligences that are not as measured as they are in their reproduction. Are you ready for that?
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2012-05-31T16:59:53.367Z · LW(p) · GW(p)
If some people wish to have lots of children and are willing to endure having an otherwise lower standard of living because of that, that's fine by me. So far birth rates haven't been going down because of top-down control, but because of people adapting to changed economic conditions.
Replies from: knb↑ comment by knb · 2012-05-31T19:27:26.449Z · LW(p) · GW(p)
This strikes me as very naive. Birthrates have been declining for a few decades (in some countries) and you're trying to extrapolate this trend out into the distant future. Meanwhile there are already developed countries that buck this trend. Qatar is richer than any European country, and Qataris have 4 kids per woman. Imagine you had a petri dish of bacteria, and introduce a chemical that stops reproduction of 98% of the types of bacteria inside it, 2% of the bacteria have resistance to this chemical. What you would see is a period of slowing growth, as most of the bacteria stop reproducing. Then there would be a period of decline in the bacteria population, as the non-reproducing bacteria start dying off. Finally, there would be a return to exponential growth, as the 2% fill the petri dish left empty by the sterilized bacteria.
All that is necessary is for a small percentage to be immune and to be able to pass their immunity to their children with some consistency.
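A toy run of the petri-dish analogy (the death and growth rates are made up for illustration): the total dips while the sterilized majority declines, then returns to exponential growth driven entirely by the immune 2%.

```python
sterile, resistant = 98.0, 2.0   # starting split taken from the analogy
death_rate, growth = 0.05, 1.5   # illustrative per-period rates

for period in range(12):
    sterile *= (1 - death_rate)  # non-reproducers slowly die off
    resistant *= growth          # the immune minority keeps compounding
    print(period, round(sterile + resistant, 1))
# Output: a dip in the total, then renewed exponential growth.
```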
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2012-06-01T06:56:25.971Z · LW(p) · GW(p)
Well, Qatar's birthrate is in a decline, too. But that's beside the point, since I don't actually disagree with you. Both your comment and mwengler's reply strike me as arguing a different point from what I was making in the essay.
I was primarily trying to say that life extension won't cause an immediate economic disaster - that yes, although it will impact global demographics, it will take many decades (maybe centuries) for the impact of those changes to propagate through society, which is plenty of time for our economy to adjust. Dealing with gradual change isn't a problem, dealing with sudden and unanticipated shocks is. We've successfully dealt with many such gradual changes before.
In contrast, you two seem to be making the Malthusian argument that in the long term, any population will expand until it reaches the maximum capacity of the environment and each individual makes no more than a subsistence living. And I agree with that, but that has little to do with life extension, since the very same logic would apply regardless of whether life extension was ever invented. Yes, life extension may have the effect that we'll hit population and resource limits faster, but we'd eventually run into them anyway. The main question is whether life extension would accelerate the expansion towards Malthusian limits enough to make the transition period much more painful than what it would otherwise be, and whether that added pain would outweigh the massive reductions in human suffering that age-related decline causes - and I don't see a reason to presume that it would.
Replies from: knb↑ comment by knb · 2012-06-02T07:28:45.604Z · LW(p) · GW(p)
Well, Qatar's birthrate is in a decline, too. But that's beside the point, since I don't actually disagree with you. Both your comment and mwengler's reply strike me as arguing a different point from what I was making in the essay.
I know this is tangential, but I want to point out why the statistic you used is deceptive. Qatar has a huge foreign population (80% of the population) with much lower birthrates than the native Qataris: four kids/woman for Qataris, and two kids/woman for resident foreigners. So the decline in birth rates is mainly caused by two factors related to immigration. The first is that the resident foreigners have relatively fewer women (most migrant workers are male) and therefore lower the "births per 10,000". Second, the women who do immigrate have far lower birth rates than Qataris.
The more important element here that I disagree with is this:
If some people wish to have lots of children and are willing to endure having an otherwise lower standard of living because of that, that's fine by me.
There are externalities here. When people make lots of kids, it doesn't just crowd out resources for their parents. It crowds them out for everybody. At some point, more kids means higher prices (or, in a command economy, smaller rations) for everyone else. I am somewhat sympathetic to the Hansonian sentiment that having a huge number of poor people is better than having a tiny number of idle gods, and that poor people can be happy.
But I do flinch away from the idea that human-level minds should be like dandelion seeds for profligate, reproduction-obsessed future ems.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2012-06-02T14:19:26.061Z · LW(p) · GW(p)
I know this is tangential, but I want to point out why the statistic you used is deceptive.
Ah. Thanks for the correction.
At some point, more kids means higher prices (or, in a command economy, smaller rations) for everyone else.
At some point in the far future, yes. But for now, more kids are AFAIK considered to have positive externalities, and barring uploading or the Singularity that looks to be the case for at least a couple of hundred years.
(Of course, discussing developments a couple of hundred years in the future while making the assumption that we'll remain as basically biological seems kinda silly, but there you have it.)
comment by Gastogh · 2012-05-31T19:50:17.206Z · LW(p) · GW(p)
I decided a while back that the next time there's a LW census, I'm going to suggest adding a question like this: "Ceteris NOT being paribus, if you could make all the people in the world invulnerable to diseases, cellular degeneration and similar aging-related problems by pushing a button, would you push it?" I would be very interested in those results.
How would this be achieved? Somehow limiting lifespan, or children, assuming it's available to a majority? Or would this lead to a genespliced, technologically augmented and essentially immortal elite that the poor, unaugmented ones would have no chance of measuring up to?
Limiting lifespan is a time-buying measure at best, and an ineffective one at that; we already have severely limited lifespans, and that hasn't worked out all that shiny from the sustainability standpoint.
The technological elite scenario seems very likely unless there are other radical advances; having the tech to produce biological immortality isn't the same as being able to provide that service to everyone. As things stand, large swaths of the world still can't even get their hands on simple vaccines, and shipping those across the world is a hell of a lot simpler than genesplicing people. And even if it was as simple as shipping X cans of immortality/enhancement pills to all corners of the world, you'd still have on your hands all the luddites raised to believe that genetic engineering and eternal life are sins against whatever.
Replies from: DanArmak↑ comment by DanArmak · 2012-06-02T11:00:29.425Z · LW(p) · GW(p)
I would push this button, and I predict so would almost everyone else here. (So I'm interested in hearing why someone wouldn't.) Reasons:
- This is the only immediately available way to make myself and my loved ones immortal. Compared to that gain, also making everyone else immortal is much less important, whatever the eventual results. This is sufficient reason in itself.
- Population has been limited in the past by Malthusian limits and might be again. But Malthusian limits don't mean fewer people are born. They mean just as many are born, but most of them starve to death when young (simplifying). Making people immortal wouldn't change that basic behavior. What would change it is making people richer and less religious - each of these is very strongly correlated with number of children. Incidentally, making people biologically immortal would help greatly to wean them from the allegiance of anti-science, anti-progress, and religion.
- All other proposed negative effects of making everyone immortal are mostly speculation. It's as easy to speculate on positive effects. E.g., "people care more about the future and live longer so they become wiser - and so they help fix this problem and institute licensing to have children".
↑ comment by Gastogh · 2012-06-03T19:28:38.560Z · LW(p) · GW(p)
The reason one perhaps shouldn't push the button: unforeseeable (?) unintended consequences.
I expect point number 1 would weigh heavily in anyone's mind when making the choice, but it might turn out to be a harmfully biased option, assuming it even works. As to point two: in the absence of diseases and aging, the population would hit its limits along some other front. Starvation is only the obvious end of the line; the catch is what we might expect to see on the way there, such as rising global tensions, civil unrest, wars (gloves off or otherwise), accelerated environmental decay - all the things that may not seem like such pressing problems now. We could with perfect seriousness ask the question whether the current state of affairs isn't safer for humanity at large than after pressing the button. (I'll confess: I would use people's answers to the original question mostly as a proxy measurement for their general optimism.)
Frankly, I'd argue the exact reverse of point 3. IMO, it takes heavy speculation to avoid any of the risks I mentioned, and it's speculating on the positive effects that seems questionable. The only immediate species-wide benefit would be that world-class expertise in every field of science would stop steadily draining out of the world. Anything like "people will care more about the future" supposes fairly fundamental changes in how people think and behave. I expect birth control regulations would be passed, but would you expect to see them work? How would you expect to see them enforced? My guess is: not in worldwide peace and mutual harmony.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-06-03T19:55:19.996Z · LW(p) · GW(p)
Are you also considering the unforeseen unintended consequences of not pushing the button and concluding that they are preferable? (If so, can you clarify on what basis?)
Without that, it seems to me that uncertainty about the future is just as much a reason to push as to not-push, and therefore neither decision can be justified based on such uncertainty.
Replies from: Gastogh↑ comment by Gastogh · 2012-06-03T20:53:48.027Z · LW(p) · GW(p)
Yes. It's not that the world-as-is is a paradise and we shouldn't do anything to change it, but pushing the button seems like it would rock the boat far more than not pushing it. Where by "rock the boat" I mean "significantly increase the risk of toppling the civilization (and possibly the species, and possibly the planet) in exchange for the short-term warm fuzzies of having fewer people die in the first few years following our decision."
Without that, it seems to me that uncertainty about the future is just as much a reason to push as to not-push, and therefore neither decision can be justified based on such uncertainty.
Uncertainty being just as much a reason to push as not-push seems like another way of saying we might as well flip a coin, which doesn't seem right. Now, I'm not claiming to be running some kind of oracle-like future extrapolation algorithm where I'm confident in saying "catastrophe X will break out in place Y at time Z", but assuming that making the best possible choice gets higher priority than avoiding personal liability, the stakes in this question are high enough that we should choose something. Something more than a coin flip.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-06-03T22:09:18.311Z · LW(p) · GW(p)
If uncertainty is just as much a reason to push as not-push, that doesn't preclude having reasons other than uncertainty to choose one over the other which are better than a coin flip. The question becomes, what reasons ought those be?
That said, if you believe that pushing the button creates greater risk of toppling civilization than not-pushing it, great, that's an excellent reason to not-push the button. But what you have described is not uncertainty, it is confidence in a proposition for as-yet-undisclosed reasons.
Replies from: Gastogh↑ comment by Gastogh · 2012-06-04T13:41:09.165Z · LW(p) · GW(p)
I'm starting to feel I don't know what's meant by uncertainty here. To me, it is not a reason in and of itself either way - to push the button or not. And since it isn't a reason to do one thing or another, I find myself confused by the idea of looking for "reasons other than uncertainty". (Or did I misunderstand that part of your post?) For me it's just a thing I have to reason in the presence of, a fault line to be aware of and minimized to the best of my ability when making predictions.
For the other point, here's some direct disclosure about why I think what I think:
There's plenty of historical precedent for conflict over resources, and a biological immortality pill/button would do nothing to fix the underlying causes behind that phenomenon. One notable source of trouble would be the non-negligible desire people have to produce offspring. So, assuming no fundamental, species-wide changes in how people behave, if there were to be a significant drop in the global death rate, population would spike and resources would rapidly grow scarcer, leading to increased tensions, more and bloodier conflicts, accelerated erosion, etc.
To avoid the previous point, the newfound immortality would need to be balanced out by some other means. Restrictions on people's right to breed would be difficult to sell to the public and equally difficult to enforce. Again, it seems to me that expecting such restrictions to be policed successfully requires more assumptions than expecting them to fail.
Am I misusing the Razor when I use it to back these claims?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-06-04T14:51:18.935Z · LW(p) · GW(p)
Perhaps I confused the issue by introducing the word "uncertainty." I'm happy to drop that word.
You started out by saying "The reason why perhaps not push the button: unforeseeable (?) unintended consequences." My point is that there are unforeseen unintended consequences both to pushing and not-pushing the button, and therefore the existence of those consequences is not a reason to do either.
You are now arguing, instead, that the reason to not-push the button is that the expected consequences of pushing it are poor. You don't actually say that they are worse than the expected consequences of not-pushing it, but if you believe that as well, then (as I said above) that's an excellent reason to not-push the button.
It's just a different reason than you started out citing.
comment by [deleted] · 2012-05-31T05:25:45.300Z · LW(p) · GW(p)
The novels Red Mars, Green Mars and Blue Mars by Kim Stanley Robinson bring up this question and propose an answer, offered here in a simple rot13.com cipher for those who haven't read the books yet.
Gubfr jub erprvir gur ybatrivgl gerngzrag ner fgrevyvmrq ng gur fnzr gvzr.
Replies from: None, billswift
comment by damage · 2012-05-31T08:32:45.873Z · LW(p) · GW(p)
To me, the real turning point is if and when we learn how to precisely control our personalities - in short, reengineering human nature itself. Of course there's the nature vs nurture matter in this, not to mention all the potential factors that even go into a personality, let alone alter it. But I'm 100% against uncontrolled transhumanism, or even mere unregulated genetic modification or augmentation.
Though, let's suppose there were a way to correct obviously harmful behavioral defects with at least a partial genetic basis, particularly behavior every society would see as egregiously harmful, and especially criminal (antisocial personality disorder - such as psychopathy - is supposedly one of these). Would the prospect of even reducing that behavior be worth it?
Replies from: mwengler↑ comment by mwengler · 2012-05-31T12:38:10.964Z · LW(p) · GW(p)
The dissonance is between the modifications you would like to see and the modifications which will dominate. Even if 99.999% want to see a kinder, gentler, less psychopathic human, if there is one a-hole in the bunch who turns up psychopathic aggression and reproductive drive in such a way that the resulting creature does pretty well, his result will dominate.
I would bet that personalities unwilling to kill off the other creatures that are genetically dangerous to them will never, over time, end up on the winning side.
Replies from: billswift↑ comment by billswift · 2012-05-31T16:24:53.213Z · LW(p) · GW(p)
Not dominate, but force a mixed strategy; as I pointed out in another comment last week:
In game theory, whether social or evolutionary, a stable outcome usually (I'm tempted to say almost always) includes some level of cheaters/defectors.
Which requires the majority to have some means of dealing with them when they are encountered.
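The stable-mix claim above is the standard evolutionary-game-theory result; a minimal sketch using the classic Hawk-Dove game (a toy example added here, not billswift's own) shows aggressive "defectors" neither dying out nor taking over, but settling at a stable share set by the cost of fighting. The payoff numbers are invented for illustration.

```python
# Hawk-Dove replicator dynamics: a standard toy model of how evolution
# settles on a stable mix of aggressive and peaceful types.

V, C = 2.0, 5.0  # hypothetical payoffs: value of the contested resource, cost of a fight

def payoffs(p_hawk):
    """Expected payoff to a hawk and to a dove when a fraction p_hawk are hawks."""
    hawk = p_hawk * (V - C) / 2 + (1 - p_hawk) * V
    dove = p_hawk * 0.0 + (1 - p_hawk) * V / 2
    return hawk, dove

p = 0.01  # start with hawks as a rare mutant
for generation in range(2000):
    hawk, dove = payoffs(p)
    mean = p * hawk + (1 - p) * dove
    p += 0.01 * p * (hawk - mean)   # discrete replicator step
    p = min(max(p, 0.0), 1.0)

print(f"stable hawk share = {p:.3f} (analytic prediction V/C = {V / C:.3f})")
```

The equilibrium share of hawks is V/C: raising the cost C of a fight - i.e., the majority getting better at punishing defectors when they are encountered - shrinks the stable fraction of defectors, but does not drive it to zero.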