Guardians of the Gene Pool

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-16T20:08:39.000Z · LW · GW · Legacy · 73 comments

Like any educated denizen of the 21st century, you may have heard of World War II.  You may remember that Hitler and the Nazis planned to carry forward a romanticized process of evolution, to breed a new master race, supermen, stronger and smarter than anything that had existed before.

Actually this is a common misconception.  Hitler believed that the Aryan superman had previously existed—the Nordic stereotype, the blond blue-eyed beast of prey—but had been polluted by mingling with impure races.  There had been a racial Fall from Grace.

It says something about the degree to which the concept of progress permeates Western civilization, that one is told about Nazi eugenics and hears "They tried to breed a superhuman."  You, dear reader—if you failed hard enough to endorse coercive eugenics, you would try to create a superhuman.  Because you locate your ideals in your future, not in your past.  Because you are creative.  The thought of breeding back to some Nordic archetype from a thousand years earlier would not even occur to you as a possibility—what, just the Vikings?  That's all?  If you failed hard enough to kill, you would damn well try to reach heights never before reached, or what a waste it would all be, eh?  Well, that's one reason you're not a Nazi, dear reader.

It says something about how difficult it is for the relatively healthy to envision themselves in the shoes of the relatively sick, that we are told of the Nazis, and distort the tale to make them defective transhumanists.

It's the Communists who were the defective transhumanists.  "New Soviet Man" and all that.  The Nazis were quite definitely the bioconservatives of the tale.

73 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by burger_flipper2 · 2007-12-16T21:10:26.000Z · LW(p) · GW(p)

Relatively new to the forum and just watched the 2 1/2 hour Yudkowsky video on Google. Excellent talk that really helped frame some of the posts here for me, though the audience questions were generally a distraction.

My biggest disappointment was that the one question that popped up in my mind while watching, and was actually posed, wasn't answered because it would have taken about 5 minutes. The man who asked was told to pose it again at the end of the talk, but did not.

This was the question about the friendly AI: "Why are you assuming it knows the outcome of its modifications?"

Any pointer to the answer would be much appreciated.

comment by James_D._Miller · 2007-12-16T21:53:36.000Z · LW(p) · GW(p)

The Soviet new "man" that Stalin wanted to create was a half-ape, half-man super-warrior.

See http://news.scotsman.com/ViewArticle.aspx?articleid=2688011

Replies from: James_Miller
comment by James_Miller · 2011-11-26T07:59:49.601Z · LW(p) · GW(p)

I no longer trust the validity of this article.

Replies from: None
comment by [deleted] · 2011-11-26T08:20:26.166Z · LW(p) · GW(p)

Not a True Scotsman, is it?

comment by TGGP4 · 2007-12-16T22:53:07.000Z · LW(p) · GW(p)

This entry reminded me of Blank Slate Asymmetry from Gene Expression. A lot of people would say the difference in our perceptions/opinions results from our general attitude toward progress, but I would suggest that it was contingent on our opposition in war to the Nazis, while many of our elites were rather friendly towards the Soviets.

comment by JulianMorrison · 2007-12-17T02:24:02.000Z · LW(p) · GW(p)

The Soviets weren't what I'd call transhumanists, because their New Man wasn't a definable goal or factual trend, he was a utopian catch-all of projected virtue. A transhumanist will be able to break down his goals ("uploading") into subgoals ("AI and brain scans") and roughly sketch a research path ("symbolic AI") that would either approach the goal, or fail in an informative way ("combinatorial explosion"). The Soviets could do no such thing, because NSM was nothing definable. He would certainly pop up as a consequence of enough experience of Marxism. A timescale couldn't be defined. Success could not be predicted until it was encountered. Keep plugging on and have faith.

I call that religion. It isn't set in the real future. It's set in the same never-never land that contains the Second Coming. Importantly, it doesn't lead to a future-oriented culture, which is more precisely a realist, goal-oriented culture. Nobody works towards NSM (except in propaganda posters) and so he never gets any closer.

comment by Richard_Kulisz · 2007-12-17T03:23:17.000Z · LW(p) · GW(p)

It has nothing to do with poverty of imagination and everything to do with black propaganda. The Soviets were simply never evil enough. And we know that looking forward into the future is evil, therefore the Nazis must have been guilty of that crime. If the Soviets had done it, why it may even have rehabilitated that concept. Can't have that, can we?

The problem isn't that Westerners can't imagine themselves in the shoes of the Romantic Nazis. All to the contrary, the problem is that elite conservative Westerners find it ALL TOO EASY to imagine themselves in the shoes of the Romantic Nazis. So much so that they had to preserve a part of Nazi ideology, to cherish and safeguard it, by separating it from the Nazis themselves.

The Romantics turned the Nazis into Transhumanists because it fit their own agenda.

comment by TGGP4 · 2007-12-17T04:07:23.000Z · LW(p) · GW(p)

It is often forgotten that in the early days of proto-Nazi racial theory the Prussians were said to be the Master Race because they were a combination of German and Slav! Their combination was supposed to be just right from the perspective of Prussians, reminding me of Charles Murray's "Who wants to be an elephant?". Nietzsche also proposed breeding ubermenschen by giving Prussian officers Jewish brides (haven't read him myself, just heard he said this in BG&E).

Replies from: FeepingCreature
comment by FeepingCreature · 2012-06-09T21:36:09.653Z · LW(p) · GW(p)

Beyond Good and Evil, Aphorism 251. Nietzsche is not entirely serious there.

For the record, Nietzsche's concept of the Overman is primarily a spiritual genesis of a post-human creature that would have the strength to freely choose their own values in pursuit of a higher goal (if I understand it correctly). It has little to do with eugenics.

comment by Joseph_Hertzlinger · 2007-12-17T05:29:30.000Z · LW(p) · GW(p)

The example of Communism shows that being future-oriented will not always eliminate the "Guardians of Truth" syndrome. Sometimes it will produce people who guard a specific view of the future.

comment by HughRistik2 · 2007-12-17T09:46:49.000Z · LW(p) · GW(p)

It says something about the degree to which the concept of progress permeates Western civilization, that one is told about Nazi eugenics and hears "They tried to breed a superhuman."

What interests me is the frequent opposition to transhumanism because of transhumanism's supposedly mistaken notion of progress. Just because progress might not be smooth, it doesn't mean that we haven't experienced it in various dimensions. Skeptics about progress seem to have a romanticized view of the past, going along with a quasi-religious notion of a fall from grace (due to technology, "patriarchy," techno-patriarchy, or whatever).

I don't mind if other people want to go back to the days before we had fire, or say, the germ theory of medicine—as long as they don't try to take me with them.

Replies from: stcredzero
comment by stcredzero · 2012-06-03T18:28:28.043Z · LW(p) · GW(p)

I don't mind if other people want to go back to the days before we had fire, or say, the germ theory of medicine as long as they don't try to take me with them.

You should be more specific. If such a group decides to kill off all the technologists first, in a misguided attempt to make their non-technological future safe for them, it could be said they didn't try to take you with them.

comment by Ben_Jones · 2007-12-17T09:49:08.000Z · LW(p) · GW(p)

"Sometimes it will produce people who guard a specific view of the future."

Anyone read Joseph's post (just above) and immediately think 'Singularitarians!'?

Certainly the majority of (though not all) people I've met who assign that word to themselves are closer to being Guardians than Seekers. Fairly natural in that all causes want to be cults, but still likely to be harmful to the cause.

comment by Richard_Hollerith2 · 2007-12-17T13:35:44.000Z · LW(p) · GW(p)

I have not noticed that, Ben.

Not all of us who believe physics-since-1600 and biology-since-1860 have seen unequivocal progress believe there has been unequivocal progress in popular political or moral opinion. The civil-rights movement of the 1950s and 1960s for example clearly represents an increase in the consistency of the application of the ideal of equality, but it constitutes unequivocal progress only if you believe that the spread of the ideal of equality constitutes unequivocal progress.

comment by Caledonian2 · 2007-12-17T13:47:18.000Z · LW(p) · GW(p)

'Progress' is the accumulation of changes towards a pre-defined goal.

Just a few lifespans ago, 'progress' consisted of spreading settlers into sparsely-populated land, killing or driving away the aborigines, draining the wetlands, slaughtering the predators, and converting the ecology into farmland. It was using antibiotics widely and prophylactically, replacing ancient crop strains with monocultures, and designing our communities around the automobile.

'Progress' is the hobgoblin trotted out by everyone who thinks they know what the future should be. Anyone foolish enough to name their political goals 'progressive' ought to be excluded from the political process.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-17T14:53:15.000Z · LW(p) · GW(p)

"Sometimes it will produce people who guard a specific view of the future."

Anyone read Joseph's post (just above) and immediately think 'Singularitarians!'?

Nope and I challenge you to name two examples.

Replies from: pnrjulius
comment by pnrjulius · 2012-06-05T16:52:00.878Z · LW(p) · GW(p)

Ray Kurzweil, who apparently doesn't know what the word "exponential" means, because he thinks an "exponential" growth can have a vertical asymptote: ["As exponential growth continues to accelerate into the first half of the twenty-first century, it will appear to explode into infinity, at least from the limited and linear perspective of contemporary humans."](http://www.kurzweilai.net/the-law-of-accelerating-returns) He does soften it a bit by saying "appear"... but still, exponential growth never goes to infinity, that's just mathematically not how it works.
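To spell out the mathematical distinction (a minimal illustrative sketch; the symbols $x_0$, $k$, $C$, $T$ are not from either author): an exponential is finite at every finite time, whereas only something like hyperbolic growth has a vertical asymptote,

$$x(t) = x_0 e^{kt} < \infty \ \text{ for every finite } t, \qquad x(t) = \frac{C}{T-t} \to \infty \ \text{ as } t \to T^{-},$$

so an "explosion into infinity" at a finite date would require a faster-than-exponential growth law, not an exponential one.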

And Nick Bostrom, who tells people it's totally plausible that we are living in a simulation, ignoring the mountains of evidence that we are not (including literal mountains, come to think of it), as well as the fundamental flaw in any epistemology that non-falsifiably claims reality is an illusion... and then to top it all off, his whole equation relies on the ridiculous assumption that the number of individuals in a simulation is equal to the number of individuals in a real universe (he calls both H). Frankly even the idea that we could be in a simulation stretches the whole idea of what a simulation is to the breaking point---a simulation by definition isn't real, and yet here we are with actual conscious beings, and he's claiming we're simulated. (The argument also relies on the assumption that transhumans would create holocausts for amusement, which means they are apparently psychopaths. And then he says this is the good future, the one we're hoping for?)

So not only have I shown you two examples, I've shown you two of the most prominent individuals in the entire Singularity movement, both of whom make really ridiculous claims that would not be taken seriously if they didn't have an almost religious aura of authority about them. Frankly Eliezer, you're the only prominent Singularitarian who doesn't act like a cult leader.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2012-06-05T19:14:18.292Z · LW(p) · GW(p)

Frankly even the idea that we could be in a simulation stretches the whole idea of what a simulation is to the breaking point---a simulation by definition isn't real, and yet here we are with actual conscious beings, and he's claiming we're simulated.

But we don't have privileged, direct access to the real world anyway; everything you experience now, is, in a certain sense, a "simulation" constructed by your brain. (If you don't like the word simulation, you're welcome to choose another.) When you look at a red book, the reason you think there's a red book out there in the real world is because light reflecting off the book is being absorbed by your eyes and translated into sensory data that is sent to your brain. If we replaced your eyes with some as-yet-science-fictional camera that supplied the exact same data to your optic nerve, you might not notice; you don't have any reason to care whether the information from which your visual field is constructed was gathered by a "real" eye or a merely artificial camera. But then if we put a shutter cap on the camera and started supplying your optic nerve with data that was generated by a computer program rather than by means of measuring light, you again have no particular reason to notice or care. The hypothesis "I'm experiencing the real world" and the hypothesis "I'm being supplied with real-world-like sensory data despite being implemented in some other way" make the same predictions. We might have any number of good reasons to reject the latter hypothesis, but "simulations aren't real by definition" isn't one of them.

his whole equation relies on the ridiculous assumption that the number of individuals in a simulation is equal to the number of individuals in a real universe (he calls both H)

One would imagine that assumption was made only to simplify the presentation; it doesn't affect the core ideas. For example, see Robin Hanson's "I'm a Sim, or You Aren't" for a variation that makes different assumptions about the size of simulations.

comment by Caledonian2 · 2007-12-17T15:04:06.000Z · LW(p) · GW(p)

That's immediately what I thought.

Myself, and Eliezer Yudkowsky. There's your two examples.

comment by steven · 2007-12-17T15:06:37.000Z · LW(p) · GW(p)

"Anyone read Joseph's post (just above) and immediately think 'Singularitarians!'?"

Please report to the nearest termination center.

comment by Joseph_Hertzlinger · 2007-12-17T15:32:12.000Z · LW(p) · GW(p)

There are, of course, many different future visions that could be guarded.

comment by Ben_Jones · 2007-12-17T15:41:53.000Z · LW(p) · GW(p)

A Truth-Guardian is someone who 'guards' an Idea by zapping (in its myriad forms) rather than through rational argument.

Are you willing to tell me that you've never met a Singularitarian who has attacked an opponent's authority (zap), or denigrated another's work (zap), or sought to work on their Idea's strong points to the neglect of its weak points (subtle zap), or acted in an elitist manner in order to confer perceived authority on themself (smug zap), or presented new data in such a way as to strengthen their previous predictions (super Bottom Line zap!)? Have you, Eliezer, never ever guarded your view of the future rather than argued dispassionately, even against a plainly wrong argument?

If you say no, I wholly withdraw my (well-meant) comment. Caledonian can be my second example. :p

The moment anyone makes a biased argument because of their attachment to an Idea, they become a Guardian. Singularitarians are people, and they take criticism, and defend their beliefs, requisitely passionately. Apologies if it seemed as though I was singling anybody out for specific criticism of bias - not my intention. For the record, I'm a firm believer. :)

Replies from: stcredzero
comment by stcredzero · 2012-06-03T18:38:24.952Z · LW(p) · GW(p)

A Truth-Guardian is someone who 'guards' an Idea by zapping (in its myriad forms) rather than through rational argument.

How does this idea relate to "Well tended gardens die by pacifism?"

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-03T19:40:34.612Z · LW(p) · GW(p)

Is that an actual question, or an oblique way of suggesting that the thesis of "Well tended gardens die by pacifism" is promoting a form of Truth-Guardianism, and therefore contradicts the thesis of "Guardians of the Truth", and therefore perhaps both theses are flawed?

If it's the latter: yes, yes, very clever.

Assuming charitably that it's the former, my two cents about how they relate:

  • WTGDBP predicts that where local community norms N1 differ from global norms N2 there's a tendency for N2 to displace N1 whenever the local community interacts with the larger world, and suggests that if I consider N1 superior to N2 I have a moral responsibility to counteract this tendency, which sometimes requires violating N2.

  • GOTT suggests that certain norms which involve punishing attempts to challenge or question certain ideas regardless of how novel, well-formed, or carefully reasoned those challenges/questions are, are bad for communities that embrace them, despite being well-protected from outside norms.

  • Combining the two suggests that when I choose to defend my local community norms against corruption by outside norms, I also have a moral responsibility to be right about the superiority of my community's norms.

Replies from: stcredzero, Viliam_Bur
comment by stcredzero · 2012-06-03T20:18:49.777Z · LW(p) · GW(p)

Is that an actual question

Yes.

or an oblique way of suggesting that the thesis of "Well tended gardens die by pacifism" is promoting a form of Truth-Guardianism, and therefore contradicts the thesis of "Guardians of the Truth", and therefore perhaps both theses are flawed?

Apparent contradictions are often interesting areas of inquiry. Since I had to join my girlfriend for Phở, I only had time to post the one sentence.

It doesn't mean that both theses are flawed. They could be opposing forces. The apparent contradiction might indicate a point where optimization is challenging. This could explain why groups seem doomed to fall to one pathology or another. There's probably positive feedback in either direction, making groups dynamically unstable on this "axis" -- whatever that might be. Maybe this is explained by our group-cohesion mechanisms being designed to help us survive when the next tribe over decides to attack, which would also explain why things too easily devolve into might-makes-right?

Combining the two suggests that when I choose to defend my local community norms against corruption by outside norms, I also have a moral responsibility to be right about the superiority of my community's norms.

The last phrase makes me cautious. I think one has a moral responsibility to respect the truth by seeking the truth. If we look at ideologies, how well do they deal with the notion of superiority? How many past notions of superiority seem barbaric? Is there a way of transcending or sidestepping this notion of superiority altogether?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-03T22:05:44.184Z · LW(p) · GW(p)

Agreed that apparent contradictions are often interesting areas of inquiry.

The only ways I know of to sidestep having to decide which norms best align with my values are to adopt values such that either no community's norms are superior to any others', or such that whatever norms happen to emerge victorious from the interaction of social groups are superior to all the norms they displace. Neither of those tempts me at all, though I know people who endorse both.

If I reject both of those options, I'm left with the possibility that two communities C1 and C2 might exist such that C1's norms are superior to C2's, but the interaction of C1 and C2 results in C1's norms being displaced by C2's.

I don't see a fourth option. Do you?

For example... you say I have a moral responsibility to seek truth, which suggests that if I'm in a community whose values oppose truthseeking in certain areas, I have a moral responsibility to violate my community's norms. No?

Replies from: stcredzero
comment by stcredzero · 2012-06-03T22:33:50.406Z · LW(p) · GW(p)

This has interesting parallels to the Friendly AI problem. For example, one could posit that material wealth might somehow be a suitable arbiter, but I can imagine plenty of situations where C2 displaces C1 (Corporate lobbying?) followed by global ecological catastrophes. Here, dollars take the place of smiley faces strewn across the solar system. Maybe the problem of a sustainably benevolent truth-seeking group is somehow the same problem as FAI on some level?

Replies from: TheOtherDave, pnrjulius
comment by TheOtherDave · 2012-06-04T00:01:21.837Z · LW(p) · GW(p)

Solving Friendliness involves capturing desirable ethical guidelines in a robust and sustainable way, so I'd expect the relationship between Friendliness and sustainably benevolent truth-seeking to depend a lot on the relationship between ethics and truth-seeking. I'd agree that they are thematically related, but very much non-identical.

comment by pnrjulius · 2012-06-09T02:32:36.427Z · LW(p) · GW(p)

Yes! The problem of Friendly Corporate Behavior is an urgent and unsolved one. (Indeed, corporations have many of the attributes of artificial intelligences, though of course not all.)

The sustainably benevolent moral group is not Friendly AI; it is Friendly NI (natural intelligence). The two problems are probably closely related, but I can see a few important differences: NIs had to evolve, so they're going to start out optimized for reproduction. AIs are designed, so they're optimized for whatever you optimize them for.

Replies from: stcredzero
comment by stcredzero · 2012-06-09T15:07:38.469Z · LW(p) · GW(p)

AIs are designed, so they're optimized for whatever you optimize them for.

My prediction: The ones optimized for reproduction are the ones that will be around in the long term.

Replies from: pnrjulius
comment by pnrjulius · 2012-06-11T01:33:51.247Z · LW(p) · GW(p)

Not necessarily, because there's no law saying that AIs have to die. This changes the evolutionary calculus significantly; you don't need to reproduce if you can just keep existing and expand your power over the cosmos.

But you're right, insofar as AIs that rapidly self-destruct and never reproduce are not going to stick around long. (I think this is actually a tautology, but it's a tautology with the character of a mathematical theorem---definitely true, but not obvious or trivial.)

It's also worth considering that there are different constraints between NIs and AIs though. NIs have to change gradually, piece by piece, gene by gene. AIs can be radically overhauled in a single generation. This gives them access to places on the fitness landscape that we could never reach---even places that are in fact evolutionarily stable once you get there.

Replies from: wedrifid, stcredzero
comment by wedrifid · 2012-06-11T01:46:12.635Z · LW(p) · GW(p)

you don't need to reproduce if you can just keep existing and expand your power over the cosmos.

Apart from the practical lightspeed limitations. You do need to reproduce or in some other way split yourself into space-separated parts if you wish to expand your power over a sufficient distance.

Replies from: stcredzero
comment by stcredzero · 2012-06-13T04:48:20.499Z · LW(p) · GW(p)

One of our mind children might read this someday and think, "Distance? What a quaint idea!"

comment by stcredzero · 2012-06-13T04:47:02.059Z · LW(p) · GW(p)

Not necessarily, because there's no law saying that AIs have to die. This changes the evolutionary calculus significantly; you don't need to reproduce if you can just keep existing and expand your power over the cosmos.

As wedrifid pointed out, that depends on what one can do about the lightspeed limit. And thermodynamics. I don't think not dying of old age changes evolution that much. Humans are prone to geriatric diseases because evolution can't do much for us past the reproductive years. Beings without a lifespan won't face that.

I highly doubt that no AI will ever destroy another, though.

It's also worth considering that there are different constraints between NIs and AIs though. NIs have to change gradually, piece by piece, gene by gene. AIs can be radically overhauled in a single generation. This gives them access to places on the fitness landscape that we could never reach---even places that are in fact evolutionarily stable once you get there.

That just means that they'll evolve without the constraints of genetics, much as designs and memes do.

I think it's a mistake to treat superhuman AI as magic. In some contexts it will seem magic, but not all. Human habitations viewed from 10,000 meters look like growths of lichen. In some contexts, some dogs are "smarter" than some people. Human intelligence gives us a tremendous advantage over all other life on Earth, but it is not magic. Superhuman intelligence is not magic. It's just intelligence.

comment by Viliam_Bur · 2012-06-04T11:48:59.012Z · LW(p) · GW(p)

There is some difference between group ideas and group norms, although sometimes these two overlap. There is also a difference between challenging group ideas, and breaking group norms.

An example of a group idea: "It is reasonable to give a million dollars to an organization that will freeze your head when you die, because someone might scan your brain and make a machine simulation of you, and it will be really you."

An example of a group norm: "We should refrain from political examples, personal attacks, irrational arguments, etc."

An example of challenging a group idea: "I think the machine simulation is not really you. Even if it is 'alive', it is a new life form; and your old self is dead."

An example of breaking group norms: "This is so stupid!!! I guess you have also voted for [political party]!"

Sometimes these two things can be confused. For example it can be a group norm to never challenge group ideas (or to limit challenging them to ways that have no chance of succeeding). This should not happen. On the other hand, it is also very frequent to obviously break group norms and then complain about the group's intolerance of challenges to its ideas -- this is a typical pattern for many internet trolls, and the community should be able to recognize it.

An example: "Cryonics does not work, f*** you!" "Downvoted for swearing." "You just downvote me because I disagree with you, f*** you!"

Replies from: TheOtherDave, pnrjulius
comment by TheOtherDave · 2012-06-04T13:33:58.265Z · LW(p) · GW(p)

Yes, agreed with all of this. Though as you suggest, the two can overlap. "Give a million dollars to an organization that will freeze your head when you die" can become a group norm, and "refrain from political examples, personal attacks, irrational arguments, etc." can be a group idea. And as you say, it is common for one to be confused for the other, sometimes deliberately for rhetorical effect.

comment by pnrjulius · 2012-06-09T02:34:49.781Z · LW(p) · GW(p)

Also sometimes the group's norms are as problematic as its ideas; e.g. KKK, Nazis.

But usually the norms are not too bad, it's just the ideas that are ridiculous (moderate religion in a nutshell). So it definitely makes sense to make a distinction for practical purposes.

comment by Ian_C. · 2007-12-17T16:12:46.000Z · LW(p) · GW(p)

"The moment anyone makes a biased argument because of their attachment to an Idea, they become a Guardian."

I think it's more important what happens when the bias is discovered. Does the group in question reward it or try to eliminate it? For example there is corruption in democracies as well as less free forms of government, what makes the difference is what happens when it is discovered.

comment by steven · 2007-12-17T16:50:13.000Z · LW(p) · GW(p)

Ben, of course no one is 100% un-Guardian-like, but you seemed to be claiming Singularitarians were unusually Guardian-like.

comment by Nathan_Myers · 2007-12-17T18:17:10.000Z · LW(p) · GW(p)

Wouldn't that make them "bio-reactionaries" or "bio-romantics"? Or has the equation of "conservatism" (which once denoted an inclination to preserve the status quo) with "reactionism" (desire to re-instate the status quo ante), "romanticism" (promotion of some vanished, idealized past), or raw fascism (power is its own logic) pervaded even these hallowed halls? Do we have a name for what was once called conservatism, or does the concept no longer have any meaningful referent?

Replies from: pnrjulius
comment by pnrjulius · 2012-06-09T02:37:08.807Z · LW(p) · GW(p)

Part of the problem is that reactionaries call themselves "conservative" even though, you're right, they really aren't.

In the US, equal rights for women is really a conservative idea in the original sense, because it's something that our culture has already mostly accepted. People arguing against it aren't conserving the status quo, they are harkening back to some bygone halcyon era.

But think of how weird it sounds to say that feminists are conservative! So I think the term in practice has moved away from its original etymological meaning.

comment by TGGP4 · 2007-12-17T19:55:07.000Z · LW(p) · GW(p)

Nathan Myers, how about "status quo bias"?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-17T20:10:33.000Z · LW(p) · GW(p)

Caledonian a Singularitarian? I doubt he knows what the word means. I don't recall him on the Singularity Institute donors list, or any of the mailing lists or websites. The term denotes activism, not belief - an "environmentalist" is not someone who believes in the existence of the environment.

Ben Jones, if the standard confirmation/disconfirmation bias is regarded as "Guardianship" then the guardian/discoverer distinction loses all meaning even with respect to scientists versus the Inquisition. The question is whether people exhibit their ordinary human biases to defend the status quo (or status quo ante), or to defend their new ideas and innovations. The latter case, though still ordinarily human-biased, is time-oriented toward the future.

comment by Chris · 2007-12-17T20:22:10.000Z · LW(p) · GW(p)

"an "environmentalist" is not someone who believes in the existence of the environment." Non sequitur. An environmentalist is someone who believes in the value of the environment. sloppy, sloppy, sloppy.......

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-17T21:07:38.000Z · LW(p) · GW(p)

Really, Chris. So if I believe in the value of the environment, but believe that it's much less valuable than the use to be gained by paving it over with strip mines, then I'm an "environmentalist"?

In any case it's a moot point. Mark Plus coined the term "Singularitarian", but didn't do much with it; when I decided to build a Singularitarian movement, I asked Mark Plus for ownership of the word and was granted it; and I define the term to involve activism. If you mean something else by the word, feel free to call yourself a "Singularian" or something.

Replies from: danlowlite, Dojan, DimitriK
comment by danlowlite · 2010-10-25T21:46:35.431Z · LW(p) · GW(p)

Were/Are you joking? Seriously. I don't understand how one can own a word. Did I miss something?

I'm not disagreeing that it might involve activism (though I would define activism quite broadly), but how can one "own" a word?

comment by Dojan · 2011-10-18T19:24:32.156Z · LW(p) · GW(p)

Might I suggest open-sourcing the word?

Oh, and also, like, every other word, ever?

comment by DimitriK · 2014-11-12T22:12:00.020Z · LW(p) · GW(p)

I think Chris was talking about value in a relative sense (though ironically was sloppy and left his statement too vague).

What's more surprising here is that you guys are arguing over a definition of environmentalism. Taboo it and you'd probably agree.

Most surprising of all is seeing you claim you own a word, Eliezer. I may have just started reading these sequences but I'm pretty sure there was a post or two on how you can't just define a word how you want.

Ironically enough, you are guarding singularitarianism with your comment. And you're doing it by redefining the word to suit your side. And I'm pretty sure it's a redefinition. The normative use of "Singularitarian" doesn't involve activism. Nor does "environmentalist". You might value one Singularitarian or environmentalist more than another if they are an activist for the cause, but that's another matter.

comment by chris4 · 2007-12-17T21:38:03.000Z · LW(p) · GW(p)

Yudkowsky, I was using the colloquial meaning of the word value, that is, positive value. If you insist, positive value of a healthy environment to promote the interests of, and as defined by, the entity that assesses the value. OK ? No prob for 'ownership' of the label, my issue was with the metaphor. BTW, as I respect the issues raised here, and the expertise of those who raise them, I'd love to see a post on the biases around the concept 'ownership'.

comment by Caledonian2 · 2007-12-18T01:24:16.000Z · LW(p) · GW(p)

Caledonian a Singularitarian? I doubt he knows what the word means. I don't recall him on the Singularity Institute donors list, or any of the mailing lists or websites.

Ahem.

It's wonderful using words in arguments when you get to redefine them. Who's the source for the alternate definition, the one that replaced "one who believes the concept of a Singularity" and the one more complex than "activist for the Singularity"? Hmmm...

I also love that you equate "working toward bringing about the Singularity" with donating money to your Institute or being on a mailing list.

Seekers of truth do not attempt to hardwire goals and evaluation means into entities they create, whether deistic or merely offspring. Only Guardians value their beliefs so much that they attempt to transmit them as arbitrary, received 'wisdom'.

comment by Caledonian2 · 2007-12-18T01:27:46.000Z · LW(p) · GW(p)

I missed this the first time through:

I asked Mark Plus for ownership of the word and was granted it

I... wow. I don't quite know how to respond to a person who makes a statement such as this.

comment by Nick_Tarleton · 2007-12-18T01:48:53.000Z · LW(p) · GW(p)

Seekers of truth do not attempt to hardwire goals and evaluation means into entities they create, whether deistic or merely offspring. Only Guardians value their beliefs so much that they attempt to transmit them as arbitrary, received 'wisdom'.

Values are not "beliefs", "true", or "false". (What about this is so hard to understand?)

comment by Caledonian2 · 2007-12-18T02:17:17.000Z · LW(p) · GW(p)

Values are not "beliefs", "true", or "false". (What about this is so hard to understand?)

To the degree that your claim is true, values are meaningless. They have consequences only to the degree that your claim is false.

The nice thing about opinions is that they mean absolutely nothing.

comment by Peter_de_Blanc · 2007-12-18T06:26:00.000Z · LW(p) · GW(p)

Values (that is, goals of optimizers) are vastly meaningful; they affect the future shape of the universe.

comment by Ben_Jones · 2007-12-18T10:12:55.000Z · LW(p) · GW(p)

A fair point, Eliezer. I'd agree that if it weren't for dis/confirmation biases, nothing would ever get done. If Einstein, when questioned about what he would have done if his special theory was disproved, had said 'meh, I can take it or leave it,' he probably wouldn't have had the drive to discover it in the first place. Attachment to your Big Idea is often what drives us.

That said, I don't see that a Big Idea About The Future is so different from a Big Idea About The Past in terms of value for humanity. Both can be open or closed, pacifistic or violent, inclusive or exclusive. It's what you do with it that counts! Whether the Singularity as currently defined has positive utility for the human race is not a given, nor will opinion on it be unanimous.

I've tried and tried, but I can't think of any other Big Ideas that have stemmed from people looking at where science and technology are going, and extrapolating them to a future point. Perhaps someone who's less hungover can think of one. Office Christmas do last night, still coming around.

Caledonian - I'd say that one of the key concepts in my current understanding of the Singularity is that it's the polar opposite of a hard-wired goal. Surely the very idea is that we don't know what happens inside/beyond a singularity, hence the name?

comment by Ben_Jones · 2007-12-18T10:27:03.000Z · LW(p) · GW(p)

Retrospective apologies for the long post - will keep it brief in future!

comment by Caledonian2 · 2007-12-18T13:53:30.000Z · LW(p) · GW(p)

Caledonian - I'd say that one of the key concepts in my current understanding of the Singularity is that it's the polar opposite of a hard-wired goal. Surely the very idea is that we don't know what happens inside/beyond a singularity, hence the name?

The whole point of attempting a "Friendly AI" is that its proponents believe that it IS possible to exclude entire branches of possibility from an AI's courses of action - that the superhuman intelligence can be made safe. Not merely friendly in a human sense, but favorable to human interests, not 'evil'.

Of course, they cannot provide an objective and rigorous description of what "being in human interests" actually entails, nor can they explain clearly what 'evil' is. But they know it when they see it, apparently. And since many of them seem to believe that 'values' are arbitrary, they've never bothered subjecting what they value to analysis.

Perhaps the possibility that a consequence of an entity being utterly good might be its being utterly unsafe has never occurred to them. And perhaps the possibility that superhuman general intelligence might analyze their values and find them lacking never occurred to them either. That would explain a lot.

Replies from: pnrjulius
comment by pnrjulius · 2012-06-09T02:40:08.737Z · LW(p) · GW(p)

Why would being good make you unsafe?

Replies from: Ben_Welchner
comment by Ben_Welchner · 2012-06-09T03:16:02.836Z · LW(p) · GW(p)

Caledonian hasn't posted anything since 2009, if you said that in hopes of him responding.

comment by Matthew2 · 2007-12-20T09:05:00.000Z · LW(p) · GW(p)

Caledonian said: "Perhaps the possibility that a consequence of an entity being utterly good might be its being utterly unsafe has never occurred to them."

This describes monotheism rather well. It has occurred to me.

comment by Ben_Jones · 2007-12-20T12:34:05.000Z · LW(p) · GW(p)

Caledonian,

Yes, it has occurred to 'them'. I hope you haven't read http://www.singinst.org/AIRisk.pdf, since if you have, you haven't grasped the challenge. The crux isn't excluding branches of possible action by an AI, it's ensuring those avenues aren't attractive options for any reason.

comment by Caledonian2 · 2007-12-20T13:08:45.000Z · LW(p) · GW(p)

The crux isn't excluding branches of possible action by an AI, it's ensuring those avenues aren't attractive options for any reason.

(agog)

Would you care to explain what the distinction between those two states is?

comment by Ben_Jones · 2007-12-20T14:37:26.000Z · LW(p) · GW(p)

Sure - it's the difference between not stealing because you think you'll get caught and go to prison, and not stealing because you think theft is irrational/immoral/wrong/you name it. The first is sociopathy, the second is what we'd term normal human reasoning. Can I assume you believe there is no such thing as a Friendly AI?

comment by Caledonian2 · 2007-12-20T15:44:43.000Z · LW(p) · GW(p)

When you're determining the value structure of a mind, "ensuring those avenues aren't attractive options for any reason" IS excluding them from the set of possible courses of action. The key phrase there is for any reason.

As for the rest of your argument, reasoning is precisely what the normal human does NOT do, and it's hilarious that you think logical arguments are what keeps most people from theft.

comment by Rick_Smith · 2007-12-24T13:54:02.000Z · LW(p) · GW(p)

Caledonian, shouldn't you check up on who currently owns the word 'reasoning' before stating that?

I guess there must be some sort of register somewhere...

comment by thatoliver · 2012-06-06T22:24:38.906Z · LW(p) · GW(p)

A minor semantic point: wouldn't advocating a return to the ancient Nordic race make them racial reactionaries rather than racial conservatives?

Taking the British National Party as an example of a racial conservative group, we see that they endeavour to PRESERVE the white race. They believe the master race (or, in this case, the race that somehow deserves ownership of the UK) is extant, and must be protected. On the other hand, the Nazis wished to RESTORE a racial standard that they believed had been long buried.

comment by Yosarian2 · 2013-01-05T01:29:34.988Z · LW(p) · GW(p)

This is somewhat true. (It gets even stranger when you find out that they were also trying to do similar things with animals, trying to somehow breed dogs back to the first dog ancestor.) However, it's worth noting that the Nazis directly tapped into the common "eugenics" mode of thought in our society, and eugenicists in general were trying to "breed better humans" (by doing things like encouraging the forced sterilization of the insane and the physically disabled, etc.).

Of course, it's still a fundamental fail of an idea all the way around. Sure, you could do artificial selection on humans even without understanding genetics, the same way we did with dogs, but for that to work you'd have to have absolute and total control over the reproduction of entire large human sub-populations for dozens of generations; nothing short of that would work. And there's no way a tyrannical government that absurd manages to stay in power for that long. On the scale of the kinds of stuff eugenicists were actually doing, it simply couldn't have a significant effect on the human gene pool in any plausible time-frame.

If you really wanted to try to breed better humans, and you weren't all-powerful, probably your best bet would be to try to ingrain a powerful and ubiquitous "sexy people are smart" meme into the culture and then keep it alive. If you were able to do that and maintain the meme for 1000 years or so, then sexual selection might start to have an effect to increase the average intelligence of the human race.

Replies from: army1987
comment by A1987dM (army1987) · 2013-01-05T02:16:54.087Z · LW(p) · GW(p)

try to ingrain a powerful and ubiquitous "sexy people are smart" meme into the culture

I've been trying to do that for years. It doesn't seem to be working, as I can't even get laid myself, let alone smart people in general. ;-)

Replies from: Yosarian2
comment by Yosarian2 · 2013-01-05T02:24:25.068Z · LW(p) · GW(p)

Actually, looking at the last couple of decades, I would say that the "smart people are sexy" meme really started to take off around the time that a number of computer science geeks suddenly became some of the richest people in the world.

comment by Jan_Rzymkowski · 2014-05-09T23:07:10.015Z · LW(p) · GW(p)

"if you failed hard enough to endorse coercive eugenics"

This might be found a bit too controversial, but I was tempted to come up with a not-so-revolting coercive eugenics system. Of course it's not needed if there is technology for correcting genes, but let's say we only have circa 1900 technology. It has nothing to do with the point of Eliezer's note, it's just my musing.

Coercive eugenics isn't strictly immoral in itself. It is a way of protecting people not yet born from genetic flaws - possible diseases, etc. But even giving them less than optimal features - intelligence, strength, looks - is quite equivalent to making them stupider, weaker, uglier. If you could give your child a healthy and pleasant life, yet decide to strip him of that, you are hurting him - it's not like his well-being is your property. But can you have YOUR child while eugenics prevents you from breeding? Not in a genetic sense, but it seems deeply flawed to base the parent-child relation simply on genetic code. It's upbringing that matters. An adopted child is in any meaningful way YOUR child. But there are two problems - you can't really use "good gene" people to produce babies for "bad gene" people, and "bad gene" mothers may have problems caring for newborns without the hormonal effects of birth. A way to make eugenics weaker, but overcome these problems, is to limit only men's breeding. When a couple with a "good gene" man wants children - let them. If a couple with a "bad gene" man wants children, then the future mother is impregnated by some (possibly hired) "good gene" man. Normally the couple have protected sex.

It is by no means perfect. But the price for the relative well-being of future people is only for a woman to have sex with someone who is not her husband, and for the husband to be "cheated on". While it seems quite unsettling, that is mainly a matter of our cultural norms. While this might be unpleasant for both, it isn't considerably worse than a woman not being able to drink and smoke through pregnancy. Therefore, such coercive eugenics would gradually improve the gene pool, while not being considerably more evil than forbidding a pregnant woman to smoke cigarettes.

I don't mean to say that such a system would be a good choice. But simply that it would be trading the rights of the living for the rights of the not yet born.

I apologize, if above was inappropriate.

Replies from: Jiro, PetjaY
comment by Jiro · 2014-05-12T20:57:34.044Z · LW(p) · GW(p)

But even giving them less than optimal features - intelligence, strength, looks - is quite equivalent to making them stupider, weaker, uglier.

I don't believe that killing someone is equivalent to letting him die. Why should I believe that making someone stupid is equivalent to letting him be stupid?

Also, cheating on someone to improve the health of the offspring results in a non-identity problem since the offspring is not the same one that would have been created without cheating, so whether the offspring is benefited is questionable.

Replies from: Jan_Rzymkowski
comment by Jan_Rzymkowski · 2014-05-13T15:09:21.869Z · LW(p) · GW(p)

You're right. I went way too far in claiming equivalence.

As for the non-identity problem - I have trouble answering it. I don't want to defend my idea, but I can think of an example where one brings up non-identity and comes to the wrong conclusion: Drinking alcohol while pregnant can cause a fetus to develop brain damage. But such grave brain damage means this baby is not the same one that would have been created if his mother hadn't drunk. So it is questionable that the baby would benefit from its mother's abstinence.

comment by PetjaY · 2014-12-24T21:23:25.642Z · LW(p) · GW(p)

"But can you have YOUR child, while eugenics prevent you from breeding? Not in genetic sense, but it seems deeply flawed to base parent-child relation simply on genetic code. It's upbringing that matters. Adopted child is in any meaningful way YOUR child."

Treating people who are not genetically your children as if they were is a big minus in our evolutionary game these days. It also helps bad behaviour (making children and letting others raise them), so I'd say that it manages to be bad both for yourself and the population, though the second part depends on why the child was given up for adoption.

In general, improving the gene pool would be a good idea, but finding collective solutions for it that don't cause more bad than good seems hard. Also, if our evolution gets rid of the heuristic that sex=children=good, which isn't working anymore, and replaces it with something like "acts that lead to you having children=good", we then get people spending their money smarter, which increases the reproductive success of richer people, who tend to be of above-average intelligence.