The basic argument for the feasibility of transhumanism

post by ChrisHallquist · 2012-10-14T08:04:08.557Z · LW · GW · Legacy · 36 comments

Eliezer sometimes talks about how animals on earth are but a tiny dot in the "mind design space." For example, in "Artificial Intelligence as a Positive and Negative Factor in Global Risk," he writes:

The term “Artificial Intelligence” refers to a vastly greater space of possibilities than does the term “Homo sapiens.” When we talk about “AIs” we are really talking about minds-in-general, or optimization processes in general. Imagine a map of mind design space. In one corner, a tiny little circle contains all humans; within a larger tiny circle containing all biological life; and all the rest of the huge map is the space of minds-in-general. The entire map floats in a still vaster space, the space of optimization processes. Natural selection creates complex functional machinery without mindfulness; evolution lies inside the space of optimization processes but outside the circle of minds.

Though Eliezer doesn't stress this point, the argument applies as much to biotechnology as to Artificial Intelligence. You could say, paralleling Eliezer, that when we talk about "biotechnology" we are really talking about living things in general, because life on Earth represents just a tiny subset of all life that could have evolved anywhere in the universe. Biotechnology may allow us to create some of that life that could have evolved but didn't. Extending the point, there's probably an even vaster space of things that are recognizably life but couldn't have evolved, because they sit on tiny islands of possibility not connected to other possible life by a chain of small, beneficial mutations, and are therefore effectively impossible to reach without the conscious planning of a bioengineer.

The argument can be extended further to nanotechnology. Nanotechnology is like life in that both involve doing interesting things with complex arrangements of matter on a very small scale; it's just that visions of nanotechnology tend to involve things that otherwise don't look very much like life at all. So we've got this huge space of "doing interesting things with complex arrangements of matter on a very small scale," of which existing life on Earth is a tiny, tiny fraction, and in which "Artificial Intelligence," "biotechnology," and so on represent much larger subsets.

Generalized in this way, this argument seems to me to be an extremely important one, enough to make it a serious contender for the title "the basic argument for the feasibility* of transhumanism." It suggests a vast space of unexplored possibilities, some of which would involve life on Earth being very different than it is right now. Short of some catastrophe putting a halt to scientific progress, it seems hard to imagine how we could avoid some significant changes of this sort taking place, even without considering specifics involving superhuman AI, mind uploading, and so on.

On Star Trek, this outcome is avoided because a war with genetically enhanced supermen led to the banning of genetic enhancement, but in the real world such regulation is likely to be far from totally effective, no more than current bans on recreational drugs, performance enhancers, or copyright violation are totally effective. Of course, the real reason for the genetic engineering ban on Star Trek is that stories about people fundamentally like us are easier for writers to write and viewers to relate to.

I could ramble on about this for some time, but my reason for writing this post is to bounce ideas off people. In particular:

 

  1. Is there a better candidate for the title "the basic argument for the feasibility of transhumanism"?--and--
  2. What objections can be raised against this argument? I'm looking both for good objections and objections that many people are likely to raise, even if they aren't really any good.

 

*I don't call it an argument for transhumanism, because transhumanism is often defined to involve claims about the desirability of certain developments, which this argument shows nothing about one way or the other.

36 comments

Comments sorted by top scores.

comment by betterthanwell · 2012-10-14T15:12:12.478Z · LW(p) · GW(p)

What objections can be raised against this argument? I'm looking both for good objections and objections that many people are likely to raise, even if they aren't really any good.

I'm not sure if this is an objection many people are likely to raise, or a good one, but in any case, here are my initial thoughts:

Transhumanism is just a set of values, exactly like humanism is a set of values. The feasibility of transhumanism can be shown by compiling a list of the values that are said to qualify someone as a transhumanist, and observing the existence of people with such values, whom we then slap a label on and say: Here is a transhumanist!

Half an hour on Google should probably suffice to persuade the sceptic that transhumanists do in fact exist, and therefore that transhumanism is feasible. And so we're done.


I realize that this is not what you mean when you refer to the feasibility of transhumanism. You want to make an argument for the possibility of "actual transhumans". Something along the lines of: "It is feasible that humans with quantitatively or qualitatively superior abilities, in some domain, relative to some baseline (such as the best, or the average, performance of some collection of humans, perhaps all humans) can exist." Which seems trivially true, for the reasons you mention.

Where are the boundaries of human design space? Who do we decide to put in the plain old human category? Who do we put in the transhuman category — and who is just another human with some novel bonus attribute?

If one goes for such a definition of a transhuman as the one I propose above, are world record holding athletes then weakly transhuman, since they go beyond the previously recorded bounds of human capability in strength, or speed, or endurance?

I'd say yes, but justifying that would require a longer reply. One question one would have to answer is: Who is a human? (The answers one would get to this question have likely changed quite a bit since the label "human" was first invented.)


If one allows the category of things that receive a "yes" in reply to the question "is this one a human?" to change at all, allows it to expand or indeed to grow over time, perhaps by an arbitrary amount (which is exactly what seems, to me at least, to have happened, and to continue to happen), then, perhaps, there will never be a transhuman. Only a growing category of things which one considers to be "human", including some humans that are happier, better, stronger and faster than any current or previously recorded human.

In order to say "this one is a transhuman" one needs to first decide upon some limits to what one will call "human", and then decide, arbitrarily, that whoever goes beyond these limits, we will put into this new category, instead of continuing to relax the boundaries of humanity, so as to include the new cases, as is usual.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2012-10-14T15:50:00.008Z · LW(p) · GW(p)

Wikipedia defines transhumanism as:

Transhumanism, abbreviated as H+ or h+, is an international intellectual and cultural movement that affirms the possibility and desirability of fundamentally transforming the human condition by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities.

So what I mean by "the feasibility of transhumanism" is just the "possibility" half of that definition, setting aside the desirability.

Even granting all that, I suppose you can still quibble about semantics, but I ran through several possible labels for what I had in mind and that seemed the best choice.

comment by NancyLebovitz · 2012-10-14T13:40:21.832Z · LW(p) · GW(p)

An idea at least good enough for sf: biotech is illegal but available. People improve (or "improve") their children and eventually, what's considered to be normal drifts pretty far from what we'd consider to be baseline human.

comment by Vladimir_Nesov · 2012-10-14T13:31:50.052Z · LW(p) · GW(p)

Typos: 'in . the', 'exists in a tiny of life', 'other possibly life', 'an chain'.

Replies from: ChrisHallquist
comment by Mouthwash · 2012-10-24T17:44:24.214Z · LW(p) · GW(p)

Here's an interesting problem: why do we live in this era? Imagine the people that lived before we migrated out of Africa, when the human population was less than 10,000. What were the odds of being one of those people? Lower, at least, than winning the lottery. So we can conclude that the likelihood of existing in a specific era is proportional to the amount of consciousness in existence during that time period.

This presents a major problem for a technological singularity as the odds of living before the singularity turned all matter in the universe into consciousness are virtually nil. So there will be no singularity, and it's almost frightening to imagine what we can conclude from this for our future.
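The proportionality claim can be put in rough numbers. A minimal sketch, assuming naive self-sampling and commonly cited ballpark population figures (the ~100 billion total and ~10,000 snapshot are illustrative assumptions, not sourced estimates):

```python
# Naive self-sampling: treat yourself as a random draw from everyone who
# will ever exist. Then P(being born into a group) is just that group's
# share of all observers. Figures are rough ballpark numbers.

TOTAL_EVER_BORN = 100_000_000_000  # ~100 billion humans ever born (rough)

def p_born_in_era(era_population: int, total: int = TOTAL_EVER_BORN) -> float:
    """Probability of being one of `era_population` people, under naive
    self-sampling over all humans who ever exist."""
    return era_population / total

# A pre-exodus population of ~10,000 gives odds of about 1 in 10 million,
# in the same ballpark as a lottery jackpot.
p = p_born_in_era(10_000)
print(f"P(being one of the ~10,000) = {p:.1e}")
```

Note that this simple ratio is exactly where the reference-class trouble enters: the answer depends entirely on what you take "everyone who will ever exist" to include.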

Replies from: TimS
comment by TimS · 2012-10-24T18:05:18.523Z · LW(p) · GW(p)

This argument is called the Doomsday Argument. It has been discussed several times around these parts (e.g. here).

In a technical sense, the issue revolves around how you think self-sampling should be understood. You might consider looking up the "Sleeping Beauty" problem for more discussion of that point.

In a non-technical sense, there's what might be called the "reference-class problem." On the one hand, the number of people in existence has constantly been increasing over time. On the other, the number of interconnected civilizations seems to be dropping (after the widespread adoption of the internet, one could argue that the number of distinct civilizations currently in existence can be counted on one's fingers and toes). Figuring out the correct reference class has profound effects on the conclusions one reaches using this kind of reasoning.

Replies from: Mouthwash
comment by Mouthwash · 2012-10-24T18:10:06.688Z · LW(p) · GW(p)

Yeah, I knew I couldn't have been the only one to have thought of this.

comment by DanboTheManobo · 2012-10-16T15:45:17.220Z · LW(p) · GW(p)

There are two arguments that are at the heart of criticising the feasibility of transhumanism. One is skeptical about whether we can gain the science to achieve this aim and the other asserts that, whilst the tech may be possible, human beings will use it to kill each other in vast quantities.

The latter seems a more fundamental problem with human nature. You want personalized medicine? That requires the wide distribution of Bio-tech printers - printers that would be just as happy to print out a lethal, tailor-made virus.

This argument is as old as the hills. But other than totalitarian snooping on EVERYBODY, how do you prevent widely distributed uber-tech from being abused?

comment by Mitchell_Porter · 2012-10-15T23:39:40.571Z · LW(p) · GW(p)

I'm looking both for ... objections

As "V_V" implies, the existence of other forms of life and other forms of intelligence does not imply the possibility of radical life extension or of superintelligence.

It is easy enough to imagine a future in which biotechnology permits all sorts of altered lives and altered states without going much beyond the lifespan or intelligence of anything already in the animal kingdom, and in which computers, robots, and computer programs continue to be as brittle as they are now. So history continues and becomes posthuman, but not transhuman.

comment by amitpamin · 2012-10-15T12:52:37.464Z · LW(p) · GW(p)

As has been raised by others, the fact that the design space is large does not imply that the possibilities have a high probability of being actualized.

Your argument shows that there is possibility and, I think, nothing more. But yes, barring existential catastrophe, I don't see how transhumanism is avoidable.

comment by djcb · 2012-10-14T19:56:17.548Z · LW(p) · GW(p)

I don't really see how the argument for the feasibility of H+ has much to do with the size of the design space for life (and AI, and nanotech, ...) as long as it's non-empty. After all, there's a huge design space for impossibilities as well. Or am I misunderstanding the argument?

There are some rather mundane improvements (at least compared to the design space) that would be enough (if realized) to show the feasibility -- say, intelligence augmentation, brain-computer hybrids.

comment by Zaine · 2012-10-14T08:18:56.938Z · LW(p) · GW(p)

Would an EMP effectively disable any implanted nanotechnology? If so, how can nanotechnology be made EMP-proof?

Replies from: saturn, TrE
comment by saturn · 2012-10-14T23:31:32.008Z · LW(p) · GW(p)

EMP destroys equipment by inducing high voltage and current in unshielded conductors, which act as antennas. The amount of energy picked up is related to the length of the conductor, with shorter conductors picking up less energy. Anything small enough to be described as "nanotechnology" would probably be unaffected, as long as it's not connected to unshielded external wiring. (An unmodified human touching a conductor would also experience an electric shock during an EMP.)

Replies from: Zaine
comment by Zaine · 2012-10-15T01:56:36.042Z · LW(p) · GW(p)

Thank you! That makes me very happy.

Replies from: V_V
comment by V_V · 2012-10-15T20:05:04.179Z · LW(p) · GW(p)

Why do you particularly care about EMP?

Replies from: Zaine
comment by Zaine · 2012-10-15T21:23:11.994Z · LW(p) · GW(p)

I was worried that human augmentation might come at the cost of susceptibility to EMPs: tricksters finding it humourous to walk around with controlled-radius EMP devices, troubling augmented humans.

Replies from: V_V
comment by V_V · 2012-10-15T21:40:07.378Z · LW(p) · GW(p)

As opposed to beating random people with a stick, for instance? Try not to worry about unlikely things.

Replies from: Zaine
comment by Zaine · 2012-10-16T06:03:33.224Z · LW(p) · GW(p)

Point taken. But the repercussions of EMP disruption for augmented humans aren't akin to playful beatings with sticks: an augmented eye short-circuiting, an augmented arm rendered heavy dead weight, an artificial heart stopped.

Unless of course you meant stick-beatings of a fatal or maiming nature, in which case I would not call them tricksters but thugs. Sorry if my diction misled.

Replies from: V_V
comment by V_V · 2012-10-16T14:48:35.495Z · LW(p) · GW(p)

to playful beatings with sticks

'playful' beatings with sticks?

an artificial heart stopped

Indeed. Messing with people's implanted devices, whether it is a standard pacemaker or sci-fiesque medical nanotech, would be a severe act of assault, not a prank.

What puzzles me is why you care about this particular type of assault, since other types seem much more likely.

Replies from: Zaine
comment by Zaine · 2012-10-16T20:56:30.523Z · LW(p) · GW(p)

'playful' beatings with sticks?

That was the only way I could reconcile 'beating random people with a stick' and 'tricksters'; 16th century vagabonds in London, for instance, may have found it an amusing pastime.

What puzzles me is why you care about this particular type of assault, since other types seem much more likely.

EMP assaults come across (or came across, if they indeed would not prove problematic) as the largest obstacle to ensuring augmentation's safety from malicious attacks, as it would be difficult to identify a guilty party in a crowd, and attacks might, if the technology develops to allow for it, be relatively easy to carry out.

What types of assault do you consider to have a higher probability?

  • Ah, I think I may understand you now. You mean why do I care about how augmented humans may be attacked, when similar technology could be used to much more nefarious ends? To that I say I do not have the requisite knowledge base for grounding any such speculations in reality.
comment by TrE · 2012-10-14T08:31:03.169Z · LW(p) · GW(p)

EMPs affect only circuitry that can be broken by high voltage. The intersection of this with nanotechnology is not empty, but it is also not all of nanotechnology.

A Faraday cage should be enough to block the effects of an EMP (full-body chainmail, anyone?).

Replies from: Zaine
comment by Zaine · 2012-10-14T08:41:47.150Z · LW(p) · GW(p)

I really wasn't presenting it as an argument, but more as a request for information. I view technology of the sort I imagine nanotech requires as necessitating electricity, which I had thought susceptible to EMPs; I know brains aren't affected, but could not fully explain why.

Replies from: TrE, Luke_A_Somers
comment by TrE · 2012-10-14T09:19:20.324Z · LW(p) · GW(p)

I really wasn't presenting it as an argument, but more a request for more information.

If it appears to you that I didn't treat it as such, I apologize.

Replies from: Zaine
comment by Zaine · 2012-10-14T09:55:27.964Z · LW(p) · GW(p)

It appeared an assumption that I was informed could be inferred from the comment; if I have caused you distress, I too apologise.

comment by Luke_A_Somers · 2012-10-15T17:31:18.891Z · LW(p) · GW(p)

I would be surprised if a human were completely unaffected by an EMP that trashed the electronics around them.

Replies from: gwern
comment by gwern · 2012-10-16T00:54:47.337Z · LW(p) · GW(p)

Nuclear EMP effects had real-world impact damaging electronics, but I never saw any mention of human health damage from the EMP (as opposed to the fallout).

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-10-16T12:56:37.572Z · LW(p) · GW(p)

Street lights are an extreme case - hooked up directly to a very long baseline with no real protection to speak of. Anything capable of taking on, say, a cell phone, would have to be several orders of magnitude stronger.

Replies from: gwern
comment by gwern · 2012-10-16T15:46:16.977Z · LW(p) · GW(p)

The link mentioned that if the detonation had been over the US, the effect itself would have been 6x stronger - quite aside from being closer than 1500 kilometers to places that mattered. And that wasn't even designed to maximize EMP effects in any way.

Besides that, when someone says 'the electronics around them', I think that covers a lot more and more important stuff than one's cellphone.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-10-16T16:32:45.850Z · LW(p) · GW(p)

The context here was EMP to be deployed against nanobots, not power grids. The source will thus be optimized to produce EMP, and to minimize collateral damage against general infrastructure - perhaps by producing smaller pulses closer to the target rather than enormous pulses further away.

In particular, the ability to affect microelectronics is paramount. The ability to take down the grid is irrelevant.

comment by V_V · 2012-10-14T23:00:30.936Z · LW(p) · GW(p)

(replying here because the karma system apparently doesn't allow me to reply in the original subthread)

Standard evolutionary theory. Evolution did not have to take the path it took by any stretch of the imagination.

It could have created a zillion other varieties of bacteria, or rodents, or something else, but I wouldn't call those things transhumans.

While it is certainly plausible that humans can be improved to some extent using genetic engineering, given the state of the evidence there is no reason to believe that the typical transhumanist fantasies such as extreme lifespans or extreme intelligence will be feasible with this approach.

AI and non-biochemical nanotech are even more speculative technologies.

EDIT:

We can't really say whether the space of all possible minds includes something that is substantially different from a human mind and yet as intelligent as, or even more intelligent than, a human at the things humans do (and is also manufacturable by humans, and not so alien that humans can't interact with it in any meaningful way).

Similarly, we can't say whether the space of all possible chemistries includes something substantially different from biochemistry-as-we-know-it that is still capable of sustaining processes and forming structures of complexity and efficiency comparable to our biochemistry, while remaining compatible with the physical and chemical properties of our environment.

comment by [deleted] · 2012-10-14T22:24:14.215Z · LW(p) · GW(p)

The difference in life experiences between the most wealthy and everyone else (much less the least wealthy) might as well be a transhuman drift. Better educated, higher average IQ, longer lives, more fun, better nutrition, better health. The world has honored that diversity for generations. We grumble about those few super-rich, but so far we see few castles surrounded by peasants with pitchforks and torches to drive the monsters out. Super-fame is enough like a psychic mind control power that, again, we can guess a psychic transhuman wouldn't be banned if such a person came to be. Not so much magic but embedded Skype. The super wealthy and super famous... for every John that gets shot and George that gets stabbed we still have a Paul and a Ringo. And a Yoko.

When transhumans appear they will not be banned.

comment by V_V · 2012-10-14T15:40:38.916Z · LW(p) · GW(p)

Actually, it's not even an argument, it's just an assertion: "There are lots of unexplored possibilities." How do you know?

Replies from: ChrisHallquist
comment by ChrisHallquist · 2012-10-14T15:54:25.602Z · LW(p) · GW(p)

Standard evolutionary theory. Evolution did not have to take the path it took by any stretch of the imagination.

Replies from: Vaniver
comment by Vaniver · 2012-10-14T18:40:57.623Z · LW(p) · GW(p)

Standard evolutionary theory. Evolution did not have to take the path it took by any stretch of the imagination.

It didn't have to. But can't we discuss relative likelihoods of paths taken by evolution? It may be that the likely paths all look similar: the trajectory we took was in a 'rut' of broadly similar trajectories. It might instead be that wildly different trajectories are similarly likely, but that seems like something you would need detailed inside experience to judge.

Consider an analogy to thermostats. The range of temperature fields present in a house over some time period is a tiny speck in the space of all possible temperature fields, but it's that narrow because the inputs don't vary all that much and there's a regulating system that tries to keep it narrow. Similarly, mutation could produce a far wider variety of life than we see now, but natural selection pares it down.