"Vulnerable Cyborgs: Learning to Live with our Dragons", Mark Coeckelbergh

post by gwern · 2011-12-04T10:33:41.875Z · LW · GW · Legacy · 11 comments

Contents

  Physical vulnerability
  Material and immaterial vulnerability
  Bodily vulnerability
  Metaphysical vulnerability
  Existential and psychological vulnerabilities
  Social and emotional vulnerability
  Ethical-axiological vulnerability
  'Relational vulnerability'/'Conclusion: Heels and dragons'

"Vulnerable Cyborgs: Learning to Live with our Dragons", Mark Coeckelbergh (university); abstract:

Transhumanist visions appear to aim at invulnerability. We are invited to fight the dragon of death and disease, to shed our old, human bodies, and to live on as invulnerable minds or cyborgs. This paper argues that even if we managed to enhance humans in one of these ways, we would remain highly vulnerable entities given the fundamentally relational and dependent nature of posthuman existence. After discussing the need for minds to be embodied, the issue of disease and death in the infosphere, and problems of psychological, social and axiological vulnerability, I conclude that transhumanist human enhancement would not erase our current vulnerabilities, but instead transform them. Although the struggle against vulnerability is typically human and would probably continue to mark posthumans, we had better recognize that we can never win that fight and that the many dragons that threaten us are part of us. As vulnerable humans and posthumans, we are at once the hero and the dragon.


Bostrom has written a tale about a dragon that terrorizes a kingdom and people who submit to the dragon rather than fighting it. According to Bostrom, the “moral” of the story is that we should fight the dragon, that is, extend the (healthy) human life span and not accept aging as a fact of life (Bostrom 2005, 277). And in The Singularity is Near (2005) Kurzweil has suggested that following the acceleration of information technology, we will become cyborgs, upload ourselves, have nanobots in our bloodstream, and enjoy nonbiological experience. Although not all transhumanist authors explicitly state it, these ideas seem to aim toward invulnerability and immortality: by means of human enhancement technologies, we can transcend our present limited existence and become strong, invulnerable cyborgs or immortal minds living in an eternal, virtual world.

...However, in this paper, I will ask neither the ethical-normative question (Should we develop human enhancement techniques and should we aim for invulnerability?) nor the hermeneutical question (How can we best interpret and understand transhumanism in the light of cultural, religious, and scientific history?). Instead, I ask the question: If and to the extent that transhumanism aims at invulnerability, can it – in principle – reach that aim? The following discussion offers some obvious and some much less obvious reasons why posthumans would remain vulnerable, and why human vulnerability would be transformed rather than diminished or eliminated...However, to focus only on a defense or rejection of what is valuable in humans would leave out of sight the relation between (in)vulnerability and posthuman possibilities. It would lead us back to the ethical-normative questions (Is human enhancement morally acceptable? Is vulnerability something to be valued? Is the transhumanist project acceptable or desirable?), which is not what I want to do in this paper. Moreover, ethical arguments that present the problem as if we have a choice between “natural” humanity and “artificial” posthumanity are based on essentialist assumptions that make a sharp distinction between “what we are” (the natural) and technology (the artificial), whereas this distinction is at least questionable. Perhaps there is no fixed human nature apart from technology, perhaps we are “artificial by nature” (Plessner 1975). If this is so, then the problem is not whether or not we want to transcend the human but how we want to shape that posthuman existence. Should we aim at invulnerability and if so, can we? As indicated before, here I limit the discussion to the “can” question.

Breaking down the potential improvements:

Physical vulnerability

Not only could human enhancement make us immune to current viruses; it could also offer other “immunities,” broadly understood...However, the project of total invulnerability, or even of an overall reduction of vulnerability, is bound to fail. If we consider the history of medical technology, we observe that for every disease new technology helps to prevent or cure, there is at least one new disease that escapes our techno-scientific control. We can win one battle, but we can never win the war. There will always be new diseases, new viruses, and, more generally, new sources of physical vulnerability. Consider also natural disasters caused by floods, earthquakes, volcanic eruptions, and so on.

Moreover, the very means to fight those threats sometimes create new threats themselves. This can happen within the same domain, as is the case with antibiotics that lead to the development of more resistant bacteria, or in another domain, as is the case with new security measures in airports, which are meant as protections against physical harm by terrorism but might pose new (health?) risks. Paradoxically, technologies that are meant to reduce vulnerability often create new vulnerabilities. This is also true for posthuman technologies. For example, posthumans would also be vulnerable to at least some of the risks Bostrom calls “existential risks” (Bostrom 2002), which could wipe out posthumankind. Nanotechnology or nuclear technology could be misused, a superintelligence could take over and annihilate humankind, or technology could cause (further) resource depletion and ecological destruction. Military technologies are meant to protect us, but they can become a threat, making us vulnerable in a new way. We wanted to master nature in order to become less dependent on it, but now we risk destroying the ecology that sustains us. And of course there are many physical threats we cannot foresee – not even in the near future.

Material and immaterial vulnerability

Consider computer viruses. Here the story is similar to that of biological viruses: there are ongoing cycles of threats, counter-measures, and new threats. We can also consider physical damage to computers, although that is much less common. In any case, if we extend ourselves with software and hardware, this creates additional vulnerabilities. We must cope with “software” vulnerability and “hardware” vulnerability. If humans and posthumans live in an “infosphere” (see for example Floridi 2002), this is not a sphere of immunity. Perhaps our vulnerability becomes less material, but we cannot escape it. For instance, a virtual body in a virtual world may well be shielded from biological viruses, but it is vulnerable to at least three kinds of threats.

  1. First, there are threats within the virtual world itself (consider for instance virtual rape), which constitutes virtual vulnerability.
  2. Second, the software programme that provides a platform for the virtual world might be damaged, for example by means of a cyber attack. This can lead to the “death” of the virtual character or entity.
  3. Third, all these processes depend on (material) hardware. The world wide web and its wired and wireless communications rest on material infrastructures without which the web would be impossible. Therefore, if posthumans uploaded themselves into an infosphere and dispensed with their biological bodies, they would not gain invulnerability and immortality but merely transform their vulnerability.

Bodily vulnerability

Minds need bodies. This is in line with contemporary research in cognitive science, which argues that “embodiment” is necessary since minds can develop and function only in interaction with their environment (Lakoff and Johnson 1999 and others). Contemporary robotics takes the same direction, for example in work recognizing that manipulation plays an important role in the development of cognition (Sandini et al. 2004). In his famous 1988 book on “mind children,” Moravec argued that true AI can be achieved only if machines have a body (Moravec 1988)...Thus, uploading and nano-based cyborgization would not dispense with the body but transform it into a virtual body or a nano-body. This would create vulnerabilities that sometimes resemble the vulnerabilities we know today (for instance virtual violence) but also new vulnerabilities.

Metaphysical vulnerability

With this atomism comes the atomist view of death: there is always the possibility of disintegration; neither physical-material objects nor information objects exist forever. Information can disintegrate, and the material conditions for information are vulnerable to disintegration as well. Thus, at a fundamental level everything is vulnerable to disintegration, understood by atomism as a re-organization of elementary particles. This “metaphysical” vulnerability is unavoidable for posthumans, whatever the status of their elementary particles and the organs and systems constituted by these particles (biological or not). According to their own metaphysics, the cyborgs and inforgs that transhumanists and their supporters wish to create would be only temporary orders possessing only temporary stability – if any.

Note, however, that recently both Floridi and contemporary physics seem to move toward a more ecological, holistic metaphysics, which suggests a different definition of death. In information ecologies, perhaps death means the absence of relations, disconnection. Or it means deletion, understood ecologically and holistically as removal from the whole. But in the light of this metaphysics, too, there seems no reason why posthumans would be able to escape death in this sense.

Existential and psychological vulnerabilities

This gives rise to what we may call “indirect” or “second-order” vulnerabilities. For instance, we can become aware of the possibility of disintegration, the possibility of death. We can also become aware of less threatening risks, such as disease. There are many first-order vulnerabilities. Awareness of them renders us extra vulnerable, as opposed to beings that lack the ability to take such distance from themselves. From an existential-phenomenological point of view (which has its roots in work by Heidegger and others), but also from the point of view of common-sense psychology, we must extend the meaning of vulnerability to the sufferings of the mind. Vulnerability awareness itself constitutes a higher-order vulnerability that is typical of humans. In posthumans, we could erase this vulnerability only if we were prepared to abandon the particular higher form of consciousness that we “enjoy.” No transhumanist would seriously consider that solution to the problem.

Social and emotional vulnerability

If I depend on you socially and emotionally, then I am vulnerable to what you say or do. Unless posthumans were to live in complete isolation, without any possibility of inter-posthuman communication, they would be as vulnerable as we are to the sufferings created by social life, although the precise relation between their social life and their emotional make-up might differ...For example, in Houellebecq’s novel the posthumans have a reduced capacity to feel sad, but at the cost of a reduced capacity to desire and to feel joy. More generally, the lesson seems to be: emotional enhancement comes at a high price. Are we prepared to pay it? Even if we succeed in diminishing this kind of vulnerability, we might lose something that is of value to us. This brings me to the next kind of vulnerability.

Ethical-axiological vulnerability

We value not only people and our relationships with them; we are also attached to many other things in life. Caring makes us vulnerable (Nussbaum 1986). We develop ties out of our engagement with humans, animals, objects, buildings, landscapes, and many other things. This renders us vulnerable, since it makes us dependent on (what we experience as) “external” things. We sometimes get emotional about things because we care and because we value. We suffer because we depend on external things...Posthumans could be cognitively equipped to follow this strategy, for instance by means of emotional enhancement that allows more self-control and prevents them from forming overly strong ties to things. If we really wanted to become invulnerable in this respect, we would have to create posthumans who no longer care at all about external things – including other posthumans. That would be “posthumans” who no longer have the ability to care and to value. They would “connect” to others and to things, but they would not really engage with them, since that would render them vulnerable. They would be perfectly rational Stoics, perhaps, but it would be odd to call them “posthumans” at all, since the term “human” would lose its meaning. It is even doubtful whether this extreme form of Stoicism would be possible for any entity that possesses the capacity for valuing and that engages with the world.

'Relational vulnerability'/'Conclusion: Heels and dragons'

The only way to make an entity invulnerable, it turns out, would be to create one that exists in absolute isolation and is absolutely independent of anything else. Such a being seems inconceivable – or would be a particularly strange kind of god. (It would have to be a “philosopher’s” god that could hardly stir any religious feelings. Moreover, this god would not even be a “first mover,” let alone a creator, since that would imply a relation to our world. It is also hard to see how we would be aware of its existence or be able to form an idea about it, given the absence of any relation between us and the god.) Of course we could – if ethically acceptable at all – create posthumans that are less vulnerable in some particular areas, as long as we keep in mind that there are other sources of vulnerability, that new sources of vulnerability will emerge, and that our measures to decrease vulnerability in one area may increase it in another.

If transhumanists accept the results of this discussion, they should carefully reflect on, and redefine, the aims of human enhancement and avoid confusion about how these aims relate to vulnerability. If the aim is invulnerability, then I have offered some reasons why this aim is problematic. If their project has nothing to do with trying to reach invulnerability, then why should we transcend the human? Of course one could formulate no “ultimate” goals and choose less ambitious goals, such as more health and less suffering. For instance, one could use a utilitarian argument and say that we should avoid overall suffering and pain. Harris seems to have taken these routes (Harris 2007). And Bostrom frequently mentions “life extension” as a goal rather than “invulnerability” or “immortality.” But even in these “weakened” or at least more modest forms, the transhumanist project can be interpreted as a particularly hostile response to (human) vulnerability that probably has no parallel in human history.

...Furthermore, this paper suggests that if we can and must make an ethical choice at all, then it is not a choice between vulnerable humans and invulnerable posthumans, or even between vulnerability and invulnerability, but a choice between different forms of humanity and vulnerability. If implemented, human enhancement technologies such as mind uploading will not cancel vulnerability but transform it. As far as ethics is concerned, then, what we need to ask is which new forms of the human we want and how (in)vulnerable we wish to be. But this inquiry is possible only if we first fine-tune our ideas of what is possible in terms of enhancement and (in)vulnerability. To do this requires stretching our moral and technological imaginations.

Moreover, if I’m right about the different forms of posthuman vulnerability discussed above, then we must dispense with the dragon metaphor used by Bostrom: vulnerability is not a matter of “external” dangers that threaten or tyrannize us but have nothing to do with what we are; instead, it is bound up with our relational, technological, and transient kind of being – human or posthuman. If there are dragons, they are part of us. It is our tragic condition that as relational entities we are at once the heel and the arrow, the hero and the dragon.

Before criticizing it, I'd like to point to the introduction where the author lays out his mission: to discuss what problems cannot "in principle" be avoided, what vulnerabilities are "necessary". In other words, he thinks he is laying out fundamental limits, on some level as inexorable and universal as, say, Turing's Halting Theorem.

But he is manifestly doing no such thing! He lists countless 'vulnerabilities' which could easily be circumvented to arbitrary degrees. For example, the computer viruses he puts such stock in: there is no fundamental reason computer viruses must exist. There are many ways they could be eliminated, starting from formal static proofs of security and functionality; the only fundamental limit relevant here would be Turing/Rice's theorem, which applies only if we want to run all possible programs, which we manifestly cannot and do not. Similar points apply to the rest of his software vulnerabilities.
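To make that last point concrete, here is a minimal sketch (my own illustration, all names hypothetical) of a toy 'total' language in which every program halts by construction; restrict the program space this way, and halting, the canonical 'impossible' question, becomes trivially decidable:

```python
# A minimal sketch of the point above: a toy "total" language in which every
# program terminates by construction. Halting is trivially decidable here,
# because loop bounds are static literals -- Turing/Rice only bite when we
# insist on analyzing arbitrary Turing-complete programs.
from dataclasses import dataclass
from typing import Union

@dataclass
class Lit:
    value: int                 # integer literal

@dataclass
class Add:
    left: "Expr"               # sum of two sub-expressions
    right: "Expr"

@dataclass
class Repeat:
    times: int                 # statically known bound -- no unbounded loops
    body: "Expr"

Expr = Union[Lit, Add, Repeat]

def run(e: Expr, acc: int = 0) -> int:
    """Evaluate an expression; terminates on every well-formed program."""
    if isinstance(e, Lit):
        return e.value
    if isinstance(e, Add):
        return run(e.left) + run(e.right)
    if isinstance(e, Repeat):
        for _ in range(e.times):   # bounded iteration: termination guaranteed
            acc += run(e.body)
        return acc
    raise TypeError(f"not a program in this language: {e!r}")

# Adds 1+2 three times into the accumulator: prints 9.
print(run(Repeat(times=3, body=Add(Lit(1), Lit(2)))))
```

Nor is this exotic: Agda's termination checker, seL4's functional-correctness proof, and the eBPF verifier (which rejects any program it cannot prove bounded) all work on the same principle of restricting the program space until the desired properties become checkable.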

I would also like to single out his 'Metaphysical vulnerability'; physicists, SF authors, and transhumanists have, for decades, been outlining a multitude of models and possibilities for true immortality, ranging from Dyson's eternal intelligences to Tipler's Omega Point collapse to baby black-hole universes. To appeal to atomism is already to beg the question (why not run intelligence on waves or more exotic forms of existence; why this particle-chauvinism?).
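For a taste of how such models work, here is a simplified reconstruction of Dyson's argument in "Time without end" (1979), ignoring his complications about waste-heat radiation and hibernation: take the subjective (computational) rate to scale with operating temperature and energy dissipation with its square, and let the mind cool as a power law:

```latex
% Simplified Dyson (1979) scaling sketch (a reconstruction, not from the
% paper under review): subjective rate ~ T, dissipation rate ~ T^2,
% and the mind cools as T(t) ~ t^{-\alpha}.
\[
  \text{subjective time} \sim \int^{\infty}\! T(t)\,dt \sim \int^{\infty}\! t^{-\alpha}\,dt,
  \qquad
  \text{energy spent} \sim \int^{\infty}\! T(t)^{2}\,dt \sim \int^{\infty}\! t^{-2\alpha}\,dt .
\]
% For 1/2 < \alpha \le 1, the first integral diverges while the second
% converges: unboundedly many subjective moments on a finite energy budget.
```

Whatever one thinks of the cosmology (the later discovery of accelerating expansion is unkind to Dyson's open-universe assumption), the sketch shows that 'everything disintegrates eventually' is a physical claim to be argued, not a metaphysical axiom to be assumed.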

This applies again and again - the author supplies no solid proofs from any field, and apparently lacks the imagination or background to conceive of ways to circumvent or dissolve his suggested limits. They may be exotic methods, but they exist; were the author to reply that employing such methods would result in intelligences so alien as to no longer be human, then I should accuse him of begging the question on an even larger scale - of defining the human as desirable and, essentially, as that which is compatible with his chosen limits.

Since that question is at the heart of transhumanism, his paper offers nothing of interest to us.

11 comments

Comments sorted by top scores.

comment by Vladimir_Nesov · 2011-12-04T20:32:27.397Z · LW(p) · GW(p)

The problem with the abstract seems different from what you describe (I only read the abstract). It looks like a kind of fallacy of gray, arguing for the irrelevance of (vast) quantitative improvements by pointing out the (supposed) absence of corresponding absolute qualitative change. It's similar to a popular reaction to the idea of life extension: people point out that it's not possible to live "forever", even though this point doesn't make the improvement from 80 to 800 years any less significant. (It's misleading to bite the bullet and start defending the possibility of immortality, which is unnecessary for the original point.) This pattern matches most of the goals outlined in the abstract.

Replies from: gwern
comment by gwern · 2011-12-05T02:19:51.920Z · LW(p) · GW(p)

That's part of the frustrating thing - there are many parts which do look exactly like the fallacy of grey (thanks for reminding me of the name, I simply couldn't remember it) and he seems to recognize it a bit in some of the later parts like where he describes how a defender of Bostrom might point out that the goal of the fable was to motivate us to eliminate one particularly bad dragon.

But he also took pains to explicitly state at one point his concern with fundamental limits, so anyone who looked at just the abstract or just (all the many) parts that looked like fallacy of grey could instantly be smacked down as 'you clearly did not read my paper carefully, because I am not concerned with the transhumanists' incremental improvements but with the final goal of perfection'.

The paper is muddled enough that I don't think this was deliberate, but it does impress me a little bit.

Replies from: khafra
comment by khafra · 2011-12-05T17:35:54.620Z · LW(p) · GW(p)

Annoyance was the feeling I got, as well. It seems to me that in the places where he does not commit the fallacy of grey, he only restates limits that any LW-style transhumanist understands -- i.e., in an EM scenario without a friendly singleton, there will still be disease, injuries, and death; even given a friendly singleton, with meaningful "continuous improvement" we only get about 28,000 subjective years until the heat death of the universe, etc.

comment by orthonormal · 2011-12-04T19:54:56.437Z · LW(p) · GW(p)

Thanks for doing this; your criticism is precisely what I was thinking a few lines into the piece. To echo the other thing Douglas_Knight said, though, it's helpful to say something at the top that lets people know whether this is a worthwhile read for them. (For instance, the Scooby Doo post's title makes it pretty clear to most people whether or not it's the sort of thing they want to read right now.)

In this case, it would have been relevant to say that (in your analysis) the linked article isn't of interest for any quality insights, but mainly because it's been published in a prestigious journal and thus illustrates the (embarrassingly shallow) current level at which academics publicly engage with transhumanist ideas. (There are more deft/polite/high-status ways to briefly convey this information, of course.)

comment by Richard_Kennaway · 2011-12-04T10:45:34.015Z · LW(p) · GW(p)

Since that question is at the heart of transhumanism, his paper offers nothing of interest to us.

So why bother with it?

Replies from: gwern
comment by gwern · 2011-12-04T11:01:20.114Z · LW(p) · GW(p)

For the same reason you deal with any critic; in particular, this is published in one of the most relevant journals for LW topics. One may not like it or think it is a valuable contribution, but that doesn't mean it's not worth discussing, especially since as far as I can tell, no one has discussed it yet.

(And what's with the high standards? This is in Discussion; this is more relevant than at least a quarter of the other Discussion posts like Scooby Doo or 'Semantic Over-achievers'.)

Replies from: Douglas_Knight
comment by Douglas_Knight · 2011-12-04T14:43:26.330Z · LW(p) · GW(p)

It seems to me that this standard would result in you writing hundreds of similar reviews with the same conclusion. Why did you choose this one? If you write more articles like this, please state the conclusion at the beginning so I can avoid reading it. I can filter other posts by their titles.

Replies from: gwern
comment by gwern · 2011-12-04T16:48:43.495Z · LW(p) · GW(p)

I'm not sure there are hundreds of such articles, but since you asked, I was thinking of doing the other 3 papers in this special JET issue (note the tag); then, if people seemed to find it valuable or it seemed to be leading to good discussions, I might then sporadically do particularly good or interesting ones in the previous issues of JET. Is this a problem?

While I'm asking your permission, perhaps you could tell me in advance what you would think of a chapter by chapter read of Good and Real, or reading through the SL4 archive to produce 'greatest hits' pages of links to and excerpts from the best/most original SL4 emails. (After all, I wouldn't want to annoy you.)

Replies from: Douglas_Knight
comment by Douglas_Knight · 2011-12-04T18:11:50.585Z · LW(p) · GW(p)

"Particularly good or interesting" articles sound like great ones to write about. That's the opposite of "nothing of interest to us." If you can identify "particularly good or interesting" articles, why write about the current ones? They won't be current forever. If you conclude that a chapter of Good and Real is worthless, then I would like to know that at the start of the review. But surely the reason you chose Good and Real for this treatment is because you don't expect that conclusion.

comment by Nisan · 2011-12-04T17:01:51.937Z · LW(p) · GW(p)

Thank you for providing a digest of the article. After reading the abstract, I wanted to know the content of the argument, but I didn't want to read the whole thing. The digest is just perfect.

comment by CronoDAS · 2011-12-05T04:09:00.155Z · LW(p) · GW(p)

/me shrugs

Yeah, most proposed "immortality" methods probably wouldn't survive, say, the Earth falling into a black hole, a sufficiently close gamma ray burst, or the heat death of the universe, but, you know, I don't really care.