Bioconservative and biomoderate singularitarian positions

post by Roko · 2009-06-02T13:19:04.275Z · LW · GW · Legacy · 53 comments

Let us define a singularitarian as a person who considers it likely that some form of smarter-than-human intelligence will be developed within a characteristic timeframe of a century, and that the manner in which this event occurs is important enough to expend effort altering. Given this definition, it is perfectly possible to be a bioconservative singularitarian - that is, someone who:

opposes genetic modification of food crops, the cloning and genetic engineering of livestock and pets, and, most prominently, rejects the genetic, prosthetic, and cognitive modification of human beings to overcome what are broadly perceived as current human biological and cultural limitations.

 - one can accept the (at present only suggestive) factual arguments of Hanson, Yudkowsky, Bostrom etc. that smarter-than-human intelligence is the only long-term alternative to human extinction (this is what one might call an "attractor" argument - that our current state simply isn't stable), whilst taking the axiological and ethical position that our pristine, unenhanced human form is to be held as if it were sacred, and that any modification and/or enhancement of the human form is to be resisted, even if the particular human in question wants to be enhanced. A slightly more individual-freedoms-oriented bioconservative position would be to try very hard to persuade people (subject to certain constraints) to decide not to enhance themselves, or to allow people to enhance themselves only if they are prepared to face derision and criticism from society. A superintelligent singleton could easily implement such a society.

This position seems internally consistent to me, and given the seemingly unstoppable march of technological advancement and its rapid integration into our society (smartphones, Facebook, online dating, YouTube, etc.) via corporate and economic pressure, bioconservative singularitarianism may become the only realistic bioconservative position.

One can even paint a fairly idyllic bioconservative world where human enhancement is impossible and people don't interact with advanced technology any more, they live in some kind of rural or hunter-gatherer world where the majority of suffering and disease (apart from death, perhaps) is eliminated by a superintelligent singleton, and the singleton takes care to ensure that this world is not "disturbed" by too much technology being invented by anyone. Perhaps people live in a way that is rather like one would have found on Tahiti before Europeans got there. There are plenty of people who think that they already live in such a world - they are called theists, and they are mistaken (more about this in another post).

For those with a taste for a little more freedom and a light touch of enhancement, we can define biomoderate singularitarianism, which differs from the above in that it sits somewhere more towards the "risqué" end of the human enhancement spectrum, but it isn't quite transhumanism. As before, we consider a superintelligent singleton running the practical aspects of a society and most of the people in that society being somehow encouraged or persuaded not to enhance themselves too much, so that the society remains a clearly human one. I would consider Banks' Culture to be the prototypical early result of a biomoderate singularity, followed by such incremental changes as one might expect due to what Yudkowsky calls "heaven of the tired peasant" syndrome - many people would get bored of "low-grade" fun after a while. Note that in the Culture, Banks describes people with significant emotional enhancements and the ability to change gender - so this certainly isn't bioconservative, but the fundamentals of human existence are not being pulled apart by such radical developments as mind merging, uploading, wireheading or super-fast radical cognitive enhancement.

Bioconservative and biomoderate singularities are compatible with modern environmentalism, in that the power of a superintelligent AI could be used to eliminate damage to the natural world, and humans could live in almost perfect harmony with nature. Harmony with nature would involve a superintelligence carefully managing biological ecosystems and even controlling the actions of individual animals, plants and microorganisms, as well as informing and guiding the actions of human societies, so that no human was ever seriously harmed by any creature (no-one gets infected by parasites, bacteria or viruses unless they want to be, and no-one is killed by wild animals), and no natural ecosystem is seriously harmed by human activity. A variant on this would have all wild animals becoming tame, so that you could stroll through the forest and pet a wildcat.

A biomoderate singularity is an interesting concept to consider, and I think it has some interesting applications to a Friendly AI strategy. It is also, I feel, something that will be somewhat easier to sell to most other humans than a full-on, shock level 4, radical transhumanist singularity. In fact we can frame the concept of a "biomoderate technological singularity" in fairly normal language: it is simply a very carefully designed self-improving computer system that is used to eliminate the need for humans to do work that they don't (all things considered) want to do.

One might well ask: what does this post have to do with instrumental rationality? Well, due to various historical coincidences, the same small group of people who popularized technologically enabled bio-radical stances such as gender swapping, uploading, cryopreservation, etc. also happen to be the people who popularized ideas about smarter-than-human intelligence. When one small, outspoken group proposes two ideas which sound kind of similar, the rest of the world is highly likely to conflate them.

The situation on the ground is that one of these ideas has a viable politico-cultural future, and the other one doesn't: "bioradical" human modification activates so many "yuck" factors that getting it to fly with educated, secular people is nigh-on impossible, never mind the religious lot. The notion that smarter-than-human intelligence will likely be developed, and that we should try to avoid getting recycled as computronium, is a stretch, but at least it involves only nonobvious factual claims and obvious ethical claims.

It is thus an important rationalist task to separate out these two ideas and make it clear to people that singularitarianism doesn't imply bioradicalism.

See Also: Amputation of Destiny

53 comments

comment by Arenamontanus · 2009-06-02T14:27:38.031Z · LW(p) · GW(p)

From a rhetorical point of view, the biomoderate singularity probably works better than the non-moderate version since it contains fewer outrageous and "silly" elements. It even fits with the "machines of loving grace" ideas about AI worlds that got crowded out by the superintelligent AI memes somewhere in the late '80s.

In practice I doubt people attracted to bioconservatism would go for singularity thinking, simply because they might be dominated by the ingroup, authority and purity moral foundations in Jonathan Haidt's system. If you think that there exists a natural order or a historical structure that shouldn't be overthrown, then a singularity may not be what you want.

I think a common assumption is that singularities are highly contingent affairs, where the range of potential outcomes is enormous. That makes it sensible to try to get a singularity with the right dynamics to get to the outcome set one likes. This assumption is likely based on the fact that the smarter a being is, the more behavioural and mental flexibility it exhibits, and singularities presumably involve lots of smarts. But there could be strong attractors in singularity dynamics, essentially imploding the range of outcomes (e.g. something akin to Robin's cosmic common locusts). In this case the choice might be between just a few attractors - if we even can know them beforehand.

Replies from: orthonormal, Roko
comment by orthonormal · 2009-06-02T18:05:02.427Z · LW(p) · GW(p)

Robin's cosmic common locusts

That's an indie band name if ever I heard one.

(Sorry.)

comment by Roko · 2009-06-02T14:50:32.573Z · LW(p) · GW(p)

From a rhetorical point of view, the biomoderate singularity probably works better than the non-moderate version since it contains fewer outrageous and "silly" elements.

yes, I strongly agree. However, people always seem to find arguments even against the biomoderate version...

Replies from: timtyler
comment by timtyler · 2009-06-02T18:49:21.755Z · LW(p) · GW(p)

You mean like the idea that it is a crude attempt to hinder progress?

Replies from: Roko
comment by Roko · 2009-06-02T22:51:14.039Z · LW(p) · GW(p)

The ones I have heard are basically all of the form "you'd get bored".

comment by Vladimir_Nesov · 2009-06-02T14:05:20.063Z · LW(p) · GW(p)

Moral progress follows human nature more than it does people's mistaken or imprecise ideas about human nature. Ancient Greeks shouldn't get to fix slavery as a law of nature, any more than bioconservatives should get the ability to enforce their views, any more than anyone else.

A person's current moral position shouldn't (and, in a related sense, can't) determine the outcome of establishing FAI. You don't vouch for the particular features of the outcome; you don't predict Kasparov's exact moves; you only predict that the outcome is a win.

This makes bioconservative and singularitarian positions completely independent, the same as the cases of altruist singularitarian, hedonist singularitarian or Nazi singularitarian.

However, there is a bonus: whatever moral position you currently hold, you can find motivation in the fact that to the extent your position is correct, it'll get implemented by the dynamic set forward by FAI (this dynamic might consist in the people's activities), and it'll get protected from existential risk and more subtle moral death spirals (including blind modification of human nature).

Replies from: Roko
comment by Roko · 2009-06-02T14:49:39.830Z · LW(p) · GW(p)

whatever moral position you currently hold, you can find motivation in the fact that to the extent your position is correct,

What do you mean by a moral position being "correct"?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-02T15:04:44.757Z · LW(p) · GW(p)

(Disambiguation attempt:) A correct moral position is e.g. one not leading to confusion about moral content, such as belief that eating babies is a human terminal value.

It's strange to me that you ask this question. Since you moved on from objective morality, I hope you didn't turn into a kind of moral relativist, in particular not accepting that one can be morally wrong.

Replies from: Roko, Z_M_Davis, Psychohistorian, timtyler
comment by Roko · 2009-06-02T23:00:42.887Z · LW(p) · GW(p)

It's strange to me that you ask this question. Since you moved on from objective morality, I hope you didn't turn into a kind of moral relativist, in particular not accepting that one can be morally wrong.

Yes, if I am honest I do believe that an action being "morally wrong" (in the same observer-independent sense that 2+2=5 is "wrong") is a misnomer. Actions can be either acceptable to me or not, but there is no objectivity, and if Vladimir were to announce that he likes fried infants for dinner I could only say that I disapprove, not that he is objectively wrong.

I am not sure whether the standard meaning of the term "moral relativism" describes the above position, but I am certainly no nihilist.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-02T23:10:01.763Z · LW(p) · GW(p)

That's preference, which you can mention, or at least use. The morality about which one can be mistaken runs deeper, and at least partially can be revealed by the right moral arguments: after hearing such arguments, you change your preference, either establishing it where there was none or reversing it. After the change, you see your previous position as having been mistaken, and it's much less plausible that you'll encounter another argument that would move your conclusion back. If I have the right model of a person, I can assert that he is morally mistaken in this sense.

Note that this notion of moral mistake doesn't require there to be an argument that would actually convince that person in a reasonable time, or for there to be no argument that would launch the person down a moral death spiral that would obliterate one's humane morality. Updating preferences on considering new arguments, or on new experience, is a tool intended to show the shape of the concept that I'm trying to communicate.

Replies from: Roko
comment by Roko · 2009-06-02T23:20:33.995Z · LW(p) · GW(p)

after hearing such arguments, you change your preference, either establishing it where there was none or reversing it.

There do exist pieces of sensory data that have the ability to change a human's preferences. For example, consider Stockholm syndrome.

Some less extreme cases would include getting a human to spend time with some other group of humans that s/he dislikes, and finding that they are "not as bad as they seem".

It is far from clear to me that these kinds of processes are indicative of some kind of moral truth, moral progress, or moral mistakes. It's just our brain-architecture behaving the way it does. Unless you think that people who suffer from Stockholm syndrome have discovered the moral truth of the matter (that certain terrorist organizations are justified in kidnapping, robbing banks, etc), or that people who buy Christmas presents instead of sending the money to Oxfam or SIAI are making "moral mistakes".

Talking about a human being having preferences is always going to be an approximation that breaks down in certain cases, such as Stockholm syndrome. Really, we are just meat computers that implement certain mathematically elegant ideas (such as having a "goal") in an imperfect manner. There exist certain pieces of sensory input that have the ability to rewrite our goals, but so what?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-02T23:28:31.726Z · LW(p) · GW(p)

I'm not talking about breaking one's head with a hammer; there are many subtle arguments that you'd recognize as enlightening, propagating preference from where it's obvious to situations that you never connected to that preference, or evoking an emotional response where you didn't expect one. As I said, there obviously are changes that can't be considered positive, or that are just arbitrary reversals, but there are also changes that you can intellectually recognize as improvements.

Replies from: Roko
comment by Roko · 2009-06-04T12:54:06.920Z · LW(p) · GW(p)

As I said, there obviously are changes that can't be considered positive, or that are just arbitrary reversals, but there are also changes that you can intellectually recognize as improvements.

If there exist lots of arbitrary reversals, how do you know whether any particular change is an improvement? Unless you can provide some objective criterion... which we both agree you cannot.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-04T13:56:43.593Z · LW(p) · GW(p)

For some examples of judgments about changes being positive or negative, take a look at Which Parts Are "Me"?. You can look forward to changes in yourself, including the changes in your emotional reactions in response to specific situations, that figure into your preference. When you are aware of such preferred changes, but they are still not implemented, that's akrasia.

Now there are changes that only become apparent when you consider an external argument. Of course, it is you who considers the argument and decides what conclusion to draw from it, but the argument can come from elsewhere. By analogy, you may easily be able to check a solution to an equation while being unable to find it yourself.

The necessity for external moral arguments comes from you not being logically omniscient; their purpose is not in changing your preference directly.

comment by Z_M_Davis · 2009-06-02T16:56:37.938Z · LW(p) · GW(p)

Eating babies is clearly not a terminal value for the vast majority of humans, but if there's someone out there who really likes eating babies, then it is not at all clear to me in what sense we can say that we're right and she's wrong. You assert that moral progress follows from human nature. I ask, how do you know? What experiment falsified the slavery-is-right hypothesis? What sort of evidence would make you abandon your morality?

comment by Psychohistorian · 2009-06-02T18:08:21.110Z · LW(p) · GW(p)

A correct moral position is e.g. one not leading to confusion about moral content, such as belief that eating babies is a human terminal value.

Confusion is a property of the mind. Something that is defined as correct by not causing confusion is thus necessarily subjective. If people (possibly only a person) were different, and it did cause confusion, it would no longer be correct.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-02T18:24:51.919Z · LW(p) · GW(p)

Correctness is a property of the mind as well. And it's not a definition; it's an attempt at disambiguation, with an example. How many disclaimers do I need?

Replies from: Annoyance
comment by Annoyance · 2009-06-02T18:29:24.630Z · LW(p) · GW(p)

Correctness is not a property of minds. It's potentially a property of conclusions, although this cannot be generally known. It's only usefully a property of arguments considered as a whole.

comment by timtyler · 2009-06-02T17:44:14.004Z · LW(p) · GW(p)

Your definition seems strange and counter-intuitive - since surely this leads to all kinds of "evil" moral positions being described as "correct".

comment by MichaelAnissimov · 2009-06-02T21:23:05.807Z · LW(p) · GW(p)

Great post, Roko - here's my response, which is supportive.

Replies from: Roko
comment by Roko · 2009-06-02T23:04:55.585Z · LW(p) · GW(p)

thanks, though few others here seem to consider it to be important. Perhaps it is the case that there are some very important unstated statements of the obvious that need to be made, but when you make them people aren't impressed, because, well, you've just stated the obvious!

comment by CarlShulman · 2009-06-02T20:45:21.854Z · LW(p) · GW(p)

Nick Bostrom's paper says that in the long run we should expect extinction, stagnation, posthumanity, or oscillation. But he describes a global state that uses social control technologies (ubiquitous surveillance, lie detectors, advanced education techniques) to maintain a steady technology level without generally superintelligent machines as falling into the radical change category.

What strong bioconservatism needs to work is a global order/singleton that can maintain its values, not necessarily superintelligent software entities.

Also, while I like the post, I wonder if it would work better for your own blog than Less Wrong, since it doesn't really draw on or develop rationality ideas very much.

Replies from: Roko, JGWeissman
comment by Roko · 2009-06-02T22:49:34.070Z · LW(p) · GW(p)

Also, while I like the post, I wonder if it would work better for your own blog than Less Wrong, since it doesn't really draw on or develop rationality ideas very much.

This is an interesting suggestion; but I would claim that if we get to the stage where a point whose understanding and refinement is crucial to the attainment of our goals - to winning - is deemed unsuitable because it isn't about "rationality", then we have moved away from the true spirit of instrumental rationality.

Replies from: whpearson
comment by whpearson · 2009-06-03T22:04:38.499Z · LW(p) · GW(p)

It seems that what is getting voted up at the moment is mainly generic rationality stuff, not future/planning-oriented stuff (not that I ever expected my stuff to get voted up).

Generic rationality is maybe the only thing we share, and worrying about the future is perhaps only a minority pursuit on LW now.

comment by JGWeissman · 2009-06-02T21:06:00.268Z · LW(p) · GW(p)

Also, while I like the post, I wonder if it would work better for your own blog than Less Wrong, since it doesn't really draw on or develop rationality ideas very much.

I think it is appropriate to have (non-promoted) articles on side topics of interest to large segments of the Less Wrong community.

Replies from: CarlShulman
comment by CarlShulman · 2009-06-03T06:51:17.771Z · LW(p) · GW(p)

Good point about the option on promotion.

comment by JoeShipley · 2009-06-02T20:20:12.004Z · LW(p) · GW(p)

I feel as though, if you are hoping to preserve the specific biological scope of humanity, you have some significant roadblocks on the way. Our species was generated by millions of years of shifting genes, with selection factors ranging from blatant to subtle, and more recently we've stripped out as many selection factors as we can. (For good reason: natural selection is a harsh mistress...)

Malaria etc. is still a selecting factor, as has been documented, but such factors are greatly reduced. In Dawkins' 'The Ancestor's Tale', he tells the story of the Russian silver fox breeding experiment, in which wild foxes were selected for tame characteristics, resulting in foxes that behaved like border collies.

He hypothesizes that humans were subject to a similar non-natural sexual selection, picking for the 'tamest' humans (adult male chimpanzees will kill each other and definitely don't work well in groups, while adolescent chimps can work together in large groups no problem -- this, together with skeletal and other evidence, supports the idea that the species just after the human-chimpanzee concestor was pushed toward neoteny in order to work in larger groups).

The border-collie-foxes ended up having floppy ears, liked being petted, yipped, and enjoyed playing with humans. If Dawkins is correct, we're a bunch of domesticated humans in a similar fashion. When you throw a wrench into natural selection like that, things start to go out of whack instantly, like the constant birth problems pugs have, bloat in basset hounds, and back problems in dachshunds. It's difficult to predict -- a part of the naturally selected whole that had one purpose, modified to another, can have all kinds of unexpected repercussions. Anything that can, 'loves' to do double or triple duty in the body.

So unless you snapshot the human genome the way it is and keep people from randomly reproducing as they like to do, you don't get to maintain a 'pristine' human condition.

Is it preferable to slowly wreck and junk up your genome and species via a more or less unguided (at least in the center of the curve) process, or to attempt to steer it in a humane way, without eugenics, by genetic engineering, even though the consequences could be drastic?

The bottom line is that our species will change no matter what we do. I don't know for sure, but I would prefer thought going into it over neglect and leaving the whole thing up to chaos.

Replies from: Roko, andrewc
comment by Roko · 2009-06-02T22:55:54.599Z · LW(p) · GW(p)

A biomoderate singularity managed by a superintelligent singleton would not have much difficulty with these problems. For example, the singleton could make slight changes to individuals' genetic code to make sure these things didn't go wrong. I don't think people would mind the odd bit of messing around to prevent a gradual decline in people's faculties.

Replies from: JoeShipley
comment by JoeShipley · 2009-06-03T18:30:40.691Z · LW(p) · GW(p)

This is true, I didn't think of this. A superintelligent shepherd. Interesting idea. It just seems so stagnant to me, but I don't have the value meme for it.

comment by andrewc · 2009-06-03T00:12:34.548Z · LW(p) · GW(p)

I don't understand how you can relate health problems in purebred dogs, usually attributed to in-breeding, to a theory of degeneration of current humans. Mongrels ('mutts' in US English?) have a reputation for being healthier, smarter, and longer-lived than most purebreds, and most of them come about due to random stray boy dogs impregnating random stray girl dogs.

I think it's simply false that human reproduction now selects for the 'tamest' humans, whatever that means. Now, as always, human reproduction selects for those who are most able to reproduce.

Did Dawkins actually articulate an argument like the one you present?

Replies from: JoeShipley, JGWeissman
comment by JoeShipley · 2009-06-03T18:05:05.443Z · LW(p) · GW(p)

Well, yes, on pg. 31 of 'The Ancestor's Tale':

  • Back to the Russian fox experiment, which demonstrates the speed with which domestication can happen, and the likelihood that a train of incidental effects would follow in the wake of selection for tameness. It is entirely probable that cattle, pigs, horses, sheep, goats, chickens, geese, ducks and camels followed a course which was just as fast, and just as rich in unexpected side-effects. It also seems plausible that we ourselves evolved down a parallel road of domestication after the Agricultural Revolution, towards our own version of tameness and associated by-product traits. In some cases, the story of our own domestication is clearly written in our genes. The classic example, meticulously documented by William Durham in his book Coevolution, is lactose tolerance... [continued later on the page and then to 32]
  • ...My generalization concerned the human species as a whole and, by implication, the wild Homo sapiens from which we are all descended. It is as if I had said, 'Wolves are big, fierce carnivores that hunt in packs and bay at the moon', knowing full well that Pekineses and Yorkshire terriers belie it. The difference is that we have a separate word, dog, for domestic wolf, but not for domestic human... [continued pg 33]...
  • Is lactose tolerance just the tip of the iceberg? Are our genomes riddled with evidences of domestication, affecting not just our biochemistry but our minds? Like Belyaev's domesticated foxes, and like the domesticated wolves that we call dogs, have we become tamer, more lovable, with the human equivalents of floppy ears, soppy faces and wagging tails? I leave you with that thought, and move hastily on. -Dawkins, 'The Ancestor's Tale'

For what it's worth...

Replies from: andrewc
comment by andrewc · 2009-06-04T00:18:22.690Z · LW(p) · GW(p)

Cheers for that. I might just look it up when I have some time. Still skeptical but it seems more plausible after reading those quotes. The hypothesis of selection for lactose tolerance seems a good place to start.

comment by JGWeissman · 2009-06-03T00:29:05.324Z · LW(p) · GW(p)

I think it's simply false that human reproduction now selects for the 'tamest' humans, whatever that means. Now, as always, human reproduction selects for those who are most able to reproduce.

It does not make sense to say that human reproduction does not select for the 'tamest' humans because it really selects for those most able to reproduce. Those are different levels of abstraction. The question is: are the 'tamest' humans the ones most able to reproduce, and therefore selected for by evolution?

Replies from: JoeShipley, andrewc
comment by JoeShipley · 2009-06-03T18:11:02.640Z · LW(p) · GW(p)

Agreed. One of the interesting points in that Dawkins book is how sexual selection can result in the enhancement of traits that neither increase survivability nor produce more offspring. He talks about 'fashions' spreading within a species, in his personal theory of how humans started walking upright.

Basically, the females or the males start selecting for a particular rare behavior as indicative of something desirable over their lessers, which leads to the males or females exhibiting that trait reproducing, and the trait being reinforced for as long as it is in 'fashion'. Several cases of the way that can run away are presented in the book: testicle size in chimpanzees due to sperm competition, and the incredible sexual dimorphism in elephant seals, which has driven the male to up to 8 times the size of the female. (Only one male in any given group reproduces.)

There's always a reason for any selection, but when you deal with creatures with any kind of mindfulness, sometimes the reasons stem from the minds rather than perfectly from the biology.

comment by andrewc · 2009-06-03T01:46:20.987Z · LW(p) · GW(p)

I don't understand your point about levels of abstraction.

The question is: are the 'tamest' humans the ones most able to reproduce, and therefore selected for by evolution?

Are the most rockin' humans the ones most able to reproduce? In the absence of any visible evidence, my answer to both questions is most likely not. Evidence would require a clear definition of tame (or rockin'). We can mostly agree on what a tame fox is but what is a tame human?

It seems to me that essentially random copulation, with some selection/treatment for serious genetic diseases, is just fine for maintaining biological humans pretty much as-is. I don't know enough about mathematical biology to articulate a quantitative argument for this, but I'd like to hear it, for or against.

Replies from: JGWeissman
comment by JGWeissman · 2009-06-03T04:03:19.115Z · LW(p) · GW(p)

I don't understand your point about levels of abstraction.

Imagine if someone said "This shape is not a rectangle. It is a quadrilateral." You would probably think, "Well, some quadrilaterals are rectangles, so the shape being a quadrilateral does not mean it is not a rectangle." "Quadrilateral" represents a higher level of abstraction than "rectangle" in that it specifies the shape less. Generally, the fact that something is accurately described in a vague manner does not mean it cannot also be accurately described in a more precise manner.

That evolution selects for the most reproductively fit is tautological; it is practically the definition of "reproductively fit". The reason this tautology is useful is that it gets us to ask the question: what more concrete properties must an organism have to be reproductively fit? Here the property of "tameness" has been proposed as such a property, and represents a lower level of abstraction. Though, not much lower, you correctly point out that "Evidence would require a clear definition of tame".

Replies from: JoeShipley
comment by JoeShipley · 2009-06-03T18:12:41.529Z · LW(p) · GW(p)

In this case, tame might mean: "Able to co-exist with other males in your species". Our concestor with chimpanzees probably wasn't, but we had to adapt.

comment by PhilGoetz · 2009-06-03T17:29:34.899Z · LW(p) · GW(p)

Do you manage a community of fish, ants, or microorganisms, providing them with a safe and enjoyable environment, and eliminating as much disease and suffering for them as you can? If you have dogs, do you provide them with a wooded 100-acre lot to run free and hunt on?

If not, why not?

Or do you, maybe, have one dog or one cat that you keep locked up alone all day in your house while you're gone?

Even those animals we claim to love - say, dogs and cats - are very much living for our enjoyment. They live according to our convenience, not in conditions they would have chosen.

Replies from: Roko
comment by Roko · 2009-06-07T11:53:11.729Z · LW(p) · GW(p)

Even those animals we claim to love - say, dogs and cats - are very much living for our enjoyment. They live according to our convenience, not in conditions they would have chosen.

Phil, I don't see the point you are trying to make here. Can you spell it out for me?

Replies from: PhilGoetz
comment by PhilGoetz · 2009-06-10T16:09:01.251Z · LW(p) · GW(p)

You wrote, "One can even paint a fairly idyllic bioconservative world where human enhancement is impossible and people don't interact with advanced technology any more, they live in some kind of rural or hunter-gatherer world where the majority of suffering and disease (apart from death, perhaps) is eliminated by a superintelligent singleton."

If it would make sense for that singleton to look after humans that way, it would make sense for humans to look after dogs, cats, and ants that way. And we don't.

Replies from: Roko, Roko
comment by Roko · 2009-06-10T17:03:17.081Z · LW(p) · GW(p)

If it would make sense for that singleton to look after humans that way, it would make sense for humans to look after dogs, cats, and ants that way

are you trying to reason (by analogy) that because humans don't look after, e.g., ants very carefully, any superintelligent AI would not carefully look after humans?

comment by Roko · 2009-06-10T16:53:52.448Z · LW(p) · GW(p)

And we don't.

Well, as a matter of fact, we do in the case of dogs and cats... people treat them like children!

comment by dclayh · 2009-06-02T20:18:37.823Z · LW(p) · GW(p)

the singleton takes care to ensure that this world is not "disturbed" by too much technology being invented by anyone

I don't see how that would work without periodic "hard resets" (meaning apocalypses). Wanting to innovate and improve things is pretty fundamental to all populations of humans (with the possible exception of the Pirahã).

humans could live in almost perfect harmony with nature.

This seems too vague to be anything but an applause light.

Replies from: Roko, Vladimir_Nesov
comment by Roko · 2009-06-02T23:05:57.973Z · LW(p) · GW(p)

This seems too vague to be anything but an applause light.

I shall elaborate on it

Replies from: dclayh
comment by dclayh · 2009-06-03T00:42:00.708Z · LW(p) · GW(p)

That is indeed clearer: you're just replacing nature with an elaborate simulacrum.

Replies from: Roko
comment by Roko · 2009-06-03T13:36:35.818Z · LW(p) · GW(p)

yes, but that simulacrum is closer to the human ideal conception of nature than the real thing is.

People's concept of "perfect, pristine natural beauty" doesn't include things like the candiru or HIV or Ebola or aggressive venomous sea snakes, etc. The view of nature in our minds is a sanitized one.

Replies from: dclayh
comment by dclayh · 2009-06-03T17:37:30.882Z · LW(p) · GW(p)

I don't disagree.

comment by Vladimir_Nesov · 2009-06-02T21:54:55.967Z · LW(p) · GW(p)

That event could be prohibited.

comment by antisingularity · 2009-06-04T21:50:56.541Z · LW(p) · GW(p)

This is quite an interesting piece, advocating that not everything about the Singularity will be wonderful and that there might be reason to show some restraint. It is sort of the same way that genetic engineering and cloning hold great promise and great peril, and meet acceptance by some and rejection by others. I do not really believe that the Singularity will ever occur, but I am glad that there is this kind of discussion within the community.

http://antisingularity.wordpress.com/

comment by derekz · 2009-06-03T13:09:30.229Z · LW(p) · GW(p)

People on this site love to use fiction to illustrate their points, and a "biomoderate singularity managed by a superintelligent singleton" is very novel-friendly, so that's something!

comment by billswift · 2009-06-02T19:49:58.321Z · LW(p) · GW(p)

"our pristine, unenhanced human form is to be held as if it were sacred"

Bioconservatives ARE theists. They just elevate current evolutionary results to godhood.

Replies from: Roko, byrnema
comment by Roko · 2009-06-02T22:56:34.174Z · LW(p) · GW(p)

Highly nonstandard use of "theist"

comment by byrnema · 2009-06-02T22:20:49.019Z · LW(p) · GW(p)

What's the difference between believing in the meaning of anything and being religious? Do atheists that don't believe in belief object to the apparent concreteness of the theist "God"?