Savulescu: "Genetically enhance humanity or face extinction"
post by Roko · 2010-01-10T00:26:56.846Z · 235 comments
In this video, Julian Savulescu from the Uehiro Centre for Practical Ethics argues that human beings are "Unfit for the future" - that radical technological advance, liberal democracy and human nature will combine to make the 21st century the century of global catastrophes, perpetrated by terrorists and psychopaths with tools such as engineered viruses. He goes on to argue that enhanced intelligence and a reduced urge to violence and defection in large commons problems could be achieved using science, and may be a way out for humanity.
Skip to 1:30 to avoid the tedious introduction
Genetically enhance humanity or face extinction - PART 1 from Ethics of the New Biosciences on Vimeo.
Genetically enhance humanity or face extinction - PART 2 from Ethics of the New Biosciences on Vimeo.
Well, I have already said something rather like this. Perhaps this really is a good idea, more important, even, than coding a friendly AI? AI timelines where super-smart AI doesn't get invented until 2060+ would leave enough room for human intelligence enhancement to happen and have an effect. When I collected some SIAI volunteers' opinions on this, most thought that there was a very significant chance that super-smart AI will arrive sooner than that, though.
A large portion of the video consists of pointing out the very strong scientific case that our behavior is a result of the way our brains are structured, and that this means that changes in our behavior are the result of changes in the way our brains are wired.
235 comments
Comments sorted by top scores.
comment by Daniel_Burfoot · 2010-01-10T01:20:45.591Z · LW(p) · GW(p)
When I collected some SIAI volunteers' opinions on this, most thought that there was a very significant chance that super-smart AI will arrive sooner than that, though.
Biased sample!
Replies from: Roko, knb↑ comment by Roko · 2010-01-10T11:24:41.930Z · LW(p) · GW(p)
Yes, it is a biased sample. However, reality is not a democracy: some people have better ideas than others.
Personally, I think that the within-SIAI view of AI takeoff timelines will suffer from bias: there will be an emotional temptation to put down timelines that are too near-term. But I don't know how much to correct for this.
A primitive outside view analysis that I did indicates a ~50% probability of superintelligent AI by 2100.
Replies from: djcb↑ comment by djcb · 2010-01-10T16:16:33.038Z · LW(p) · GW(p)
Could you elaborate a bit on this analysis? It'd be interesting how you arrived at that number.
Replies from: Roko↑ comment by Roko · 2010-01-10T18:12:41.382Z · LW(p) · GW(p)
Take a log-normal prior for when human-level AI will be developed, with t_0 at 1956. Choose the remaining two parameters to line up with the stated beliefs of the first AI researchers - i.e. they expected human-level AI not to arrive within a year, but they seem to have assigned significant probability to it happening by 1970. Then update that prior on the fact that, in 2010, we still have no human-level AI.
This "outside view" model takes into account the evidence provided by the failure of the past 54 years of AI research, and I think it is a reasonable model.
Replies from: djcb↑ comment by djcb · 2010-01-11T12:01:52.153Z · LW(p) · GW(p)
Thanks, that was indeed interesting.
Now, the only point I do not understand yet is how the expectations of the original AI researchers are a factor in this. Do you have some reason to believe that their expectations were too optimistic by a factor of about 10 (1970 vs 2100) rather than some other number?
Replies from: Roko↑ comment by Roko · 2010-01-11T14:34:21.441Z · LW(p) · GW(p)
Now, the only point I do not understand yet is how the expectations of the original AI researchers are a factor in this
They are a factor because their opinions in 1956, before the data had been seen, form a basis for constructing a prior that was not causally affected by the data.
comment by taw · 2010-01-10T04:25:00.390Z · LW(p) · GW(p)
A small dose of outside view shows that it's all nonsense. The idea of the evil terrorist or criminal mastermind is based on nothing - such people don't exist. Virtually all terrorists and criminals are idiots, and neither are interested in maximizing destruction.
See everything Schneier has ever written about it if you need data confirming what I just said.
Replies from: DanArmak, pjeby, wallowinmaya, Vladimir_Nesov, mattnewport, timtyler↑ comment by DanArmak · 2010-01-10T08:19:58.983Z · LW(p) · GW(p)
Virtually all terrorists and criminals are idiots, and neither are interested in maximizing destruction.
We forecast technology becoming more powerful and available to more people with time. As a corollary, the un-maximized destructive power of idiots also grows, eventually enough to cause x-risk scenarios.
↑ comment by pjeby · 2010-01-10T19:10:08.663Z · LW(p) · GW(p)
Virtually all terrorists and criminals are idiots, and neither are interested in maximizing destruction.
What about the recent reports of Muslim terrorists being (degreed) engineers in disproportionate numbers? While there's some suggestion of an economic/cultural explanation, it does indicate that at least some terrorists are people who were able to get engineering degrees.
↑ comment by David Althaus (wallowinmaya) · 2011-06-30T20:05:01.831Z · LW(p) · GW(p)
Virtually all terrorists and criminals are idiots, and neither are interested in maximizing destruction.
Kinda funny - the first terrorist who came to my mind was this guy.
From Wikipedia: Kaczynski was born in Chicago, Illinois, where, as an intellectual child prodigy, he excelled academically from an early age. Kaczynski was accepted into Harvard University at the age of 16, where he earned an undergraduate degree, and later earned a PhD in mathematics from the University of Michigan. He became an assistant professor at the University of California, Berkeley at age 25, but resigned two years later.
It took the FBI 17 years to arrest the Unabomber, and he only got caught because he published a manifesto in the New York Times, which his brother recognized.
Anyway, IMO Savulescu merely says that with further technological progress it could be possible for smart (say, IQ around 130) sociopaths to kill millions of people. Do you really believe that this is impossible?
Replies from: taw↑ comment by taw · 2011-07-02T09:34:04.031Z · LW(p) · GW(p)
Wikipedia describes the Unabomber's feats as a "mail bombing spree that spanned nearly 20 years, killing three people and injuring 23 others".
Three people in twenty years just proves my point that he either never cared about maximizing destruction or was really horrible at it. You can do better in one evening by getting an SUV, filling it with gas canisters for extra effect, and driving it into a school bus at full speed. See Mythbusters for some ideas.
The fact of the matter is that such people don't exist. They're possible in the way that Russell's Teapot is possible.
Replies from: wallowinmaya↑ comment by David Althaus (wallowinmaya) · 2011-07-02T10:18:53.449Z · LW(p) · GW(p)
Yeah, good points, but Kaczynski specifically tried to kill math and science professors, or generally people who contributed to technological progress. He didn't try to kill as many people as possible, so blowing up a school bus was not on his agenda.
Anyway, IMO it is odd to believe that there is less than a 5% probability that some psychopath in the next 50 years could kill millions of people, perhaps through advanced biotechnology (let alone nanotechnology or uFAI). That such feats were nearly impossible in the past does not imply that they will be impossible in the future.
Replies from: taw↑ comment by taw · 2011-07-02T13:06:01.044Z · LW(p) · GW(p)
Unless you believe the distribution of damage done by psychopaths is extremely fat-tailed, the lack of moderately successful ones puts a very tight bound on the probability of an extremely damaging one (see the toy sketch below).
All the "advanced biotech / nanotech / ai" is not going to happen like that. If it happens at all, it will give more power to large groups with enough capital to research and develop them, not to lone psychopaths.
Replies from: wallowinmaya↑ comment by David Althaus (wallowinmaya) · 2011-07-02T14:10:25.978Z · LW(p) · GW(p)
All the "advanced biotech / nanotech / ai" is not going to happen like that. If it happens at all, it will give more power to large groups with enough capital to research and develop them, not to lone psychopaths.
I hope you're right, and I also think that it is more likely than not. But you seem to be overly confident. If we are speculating about the future it is probably wise to widen our confidence intervals...
↑ comment by Vladimir_Nesov · 2010-01-10T14:00:46.222Z · LW(p) · GW(p)
The idea of evil terrorist or criminal mastermind is based on nothing - such people don't exist.
Savulescu explicitly discusses smart sociopaths.
↑ comment by mattnewport · 2010-01-10T20:38:17.234Z · LW(p) · GW(p)
I think Schneier is one of the most intelligent voices in the debate on terrorism but I'm not convinced you sum up his position entirely accurately. I had a browse around his site to see if I could find some specific data to confirm your claim and had trouble finding anything. The best I could find was Portrait of the Modern Terrorist as an Idiot but it doesn't contain actual data. I'm rather confused why you linked to the specific blog post you chose which seems largely unrelated to your claim. Do you have any better links you could share?
Note that in the article I link he states:
There is a real threat of terrorism. And while I'm all in favor of the terrorists' continuing incompetence, I know that some will prove more capable.
↑ comment by timtyler · 2010-01-10T10:36:56.279Z · LW(p) · GW(p)
There was a terrorist attempt only recently:
"Nation on edge after Christmas terrorism attempt"
Replies from: billswift↑ comment by billswift · 2010-01-10T13:10:33.779Z · LW(p) · GW(p)
Read some Schneier. A more accurate headline would be: "Nation on edge after an idiot demonstrates his idiocy". Nearly all terrorism has been carried out by people with serious mental deficiencies - even the 9/11 attacks depended on a lot of luck to succeed. Shit happens, but random opportunities usually aid the competent more than the incompetent. And nearly all criminals and terrorists are of lower intelligence; the few that are reasonably intelligent are seriously lacking in impulse control, which screws up their ability to make and carry through plans. Besides Bruce Schneier's work, see "The Bell Curve" and most of the newer literature on intelligence.
comment by JulianMorrison · 2010-01-10T18:43:44.972Z · LW(p) · GW(p)
So, we could decompile humans, and do FAI to them. Or we could just do FAI. Isn't the latter strictly simpler?
Replies from: Fredrik, Roko↑ comment by Roko · 2010-01-11T17:07:02.965Z · LW(p) · GW(p)
I don't think so. The problem with FAI is that there is an abrupt change, whereas IA is a continuous process with look-ahead: you can test out a modification on just one human mind, so the process can correct any mistakes.
If you get the programming on your seed AI wrong, you're stuffed.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-01-11T17:21:39.851Z · LW(p) · GW(p)
I believe it's almost backwards: with IA, you get small mistakes accumulating into irreversible changes (with all sorts of temptations to declare the result "good enough"), while with FAI you have a chance of getting it absolutely right at some point. The process of designing FAI doesn't involve any abrupt change, the same way as you'd expect for IA. On the other hand, if there is no point with IA where you can "let go" and be sure the result holds the required preference, the "abrupt change" of deploying FAI is the point where you actually win.
comment by Fredrik · 2010-01-10T03:37:28.941Z · LW(p) · GW(p)
X-risk-alleviating AGI just has to be days late to the party for a supervirus created by a terrorist cell to have crashed it. I guess I'd judge against putting all our eggs in the AI basket.
Replies from: Nick_Tarleton, timtyler↑ comment by Nick_Tarleton · 2010-01-10T04:54:06.705Z · LW(p) · GW(p)
"We" aren't deciding where to put all our eggs. The question that matters is how to allocate marginal units of effort. I agree, though, that the answer isn't always "FAI research".
Replies from: billswift↑ comment by billswift · 2010-01-10T13:28:08.123Z · LW(p) · GW(p)
From a thread http://esr.ibiblio.org/?p=1551#comments in Armed and Dangerous:
Andy Freeman Says: January 6th, 2010 at 1:11 am
There’s another factor. Regulation is systemic risk.
Indeed, I have made the argument on a Less Wrong thread about existential risk that the best available mitigation is libertarianism. Not just political, but social libertarianism, by which I meant a wide divergence of lifestyles; the social equivalent of genetic, behavioral dispersion.
The LW community, like most technocratic groups (e.g., socialists), seems to have this belief that there is some perfect cure for any problem. But there isn't always; in fact, for most complex and social problems there isn't. Besides the Hayek mentioned earlier, see Thomas Sowell's "A Conflict of Visions", its sequel "The Vision of the Anointed", and his expansion on Hayek's essay, "Knowledge and Decisions".
There is no way to ensure humanity’s survival, but the centralizing tendency seems a good way to prevent its survival should the SHTF.
Replies from: Wei_Dai, arbimote↑ comment by Wei Dai (Wei_Dai) · 2010-01-10T20:04:03.253Z · LW(p) · GW(p)
Libertarianism decreases some types of existential risk and bad outcomes in general, but increases other types (like uFAI). It also seems to lead to Robin Hanson's ultra-competitive, Malthusian scenario, which many of us would consider to be a dystopia.
Have you already considered these objections, and still think that more libertarianism is desirable at this point? If so, how do you propose to substantially nudge the future in the direction of more libertarianism?
Replies from: billswift↑ comment by billswift · 2010-01-11T15:49:14.745Z · LW(p) · GW(p)
I think you misunderstand Robin's scenario; if we survive, the Malthusian scenario is inevitable after some point.
Replies from: orthonormal↑ comment by orthonormal · 2010-01-12T02:26:38.736Z · LW(p) · GW(p)
Robin outright dismisses the possibility of a singleton (AI, groupmind or political entity) farsighted enough to steer clear of Malthusian scenarios until the universe runs down. I tend to think this dismissal is mistaken, but I could be convinced that there is a rough trichotomy of human futures: extinction, singleton or burning the cosmic commons.
Replies from: billswift, CarlShulman↑ comment by billswift · 2010-01-12T09:26:38.675Z · LW(p) · GW(p)
Of the three possibilities for the far future, the Malthusian scenario is the least bad. A singleton would be worse, and extinction worse yet. That doesn't mean I favor a Malthusian result, just that the alternatives are worse.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-01-14T09:25:45.284Z · LW(p) · GW(p)
I don't agree that there are only three non-negligible possibilities, but putting that aside, why do you think the Malthusian scenario would be better than a singleton? (I believe even Robin thinks that a singleton, if benevolent, would be better than the Malthusian scenario.)
↑ comment by CarlShulman · 2010-01-12T02:38:11.376Z · LW(p) · GW(p)
He says that a singleton is unlikely but not negligibly so.
Replies from: orthonormal↑ comment by orthonormal · 2010-01-12T04:59:02.892Z · LW(p) · GW(p)
Ah, I see that you are right. Thanks.
↑ comment by arbimote · 2010-01-12T04:15:31.491Z · LW(p) · GW(p)
... seems to have this belief that there is some perfect cure for any problem.
There may not be a single strategy that is perfect on its own, but there will always be an optimum course of action, which may be a mixture of strategies (e.g. put $X into nanotech safety, $Y into intelligence enhancement, and $Z into AGI development - see the toy sketch below). You might never have enough information to know the optimal strategy to maximise your utility function, but one still exists, and it is worth trying to estimate it.
I mention this because previously I have heard "there is no perfect solution" as an excuse to give up and abandon systematic/mathematical analysis of a problem, and just settle with some arbitrary suggestion of a "good enough" course of action.
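As a toy sketch of what "an optimal mixture of strategies" could look like, here is a budget allocation across three interventions with entirely made-up, diminishing-returns risk-reduction curves (the curves, coefficients and budget are assumptions for illustration only):

```python
# Toy sketch: allocate a fixed budget across three interventions so as to
# maximize a made-up, concave risk-reduction function.
import numpy as np
from scipy.optimize import minimize

budget = 100.0  # arbitrary units

def risk_reduction(alloc):
    x, y, z = alloc  # nanotech safety, intelligence enhancement, AGI development
    # Concave (diminishing-returns) toy curves; coefficients are arbitrary.
    return 3.0 * np.log1p(x) + 2.0 * np.log1p(y) + 4.0 * np.log1p(z)

result = minimize(
    lambda a: -risk_reduction(a),            # maximize by minimizing the negative
    x0=np.array([budget / 3] * 3),
    bounds=[(0.0, budget)] * 3,
    constraints=[{"type": "eq", "fun": lambda a: a.sum() - budget}],
)
print(result.x)  # the (toy-)optimal mixed allocation
```

The point is only that, given some utility function, an optimal (possibly mixed) allocation exists and can be estimated, however uncertain the inputs.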
Replies from: billswift↑ comment by billswift · 2010-01-12T09:23:02.439Z · LW(p) · GW(p)
It isn't just that there is no "perfect" solution; for many problems there is no solution at all, just a continuing difficulty that must be continually worked through. Claims of some optimal (or even good-enough) solution to these sorts of social problems are usually a means to advance the claimants' agendas, especially when they propose using gov't coercion to force everybody to follow their prescriptions.
Replies from: arbimote↑ comment by arbimote · 2010-01-12T12:11:32.325Z · LW(p) · GW(p)
That claims of this type are sometimes made to advance agendas does not mean we shouldn't make these claims, or that all such claims are false. It means such claims need to be scrutinised more carefully.
I agree that more often than not there is not a simple solution, and people often accept a false simple solution too readily. But the absence of a simple solution does not mean there is no theoretical optimal strategy for continually working through the difficulty.
comment by knb · 2010-01-10T00:48:30.300Z · LW(p) · GW(p)
- Your first link seems to be broken.
- I didn't watch the full video, but does he actually propose how human beings should be made more docile and intelligent? I don't mean a technical method, but rather a political method of ensuring that most of humanity gets these augmentations. This is borderline impossible in a liberal democracy. I think this explains why programming an AI is a more practical approach. Consider how many people are furious because they believe that fluoridated water turns people into docile consumers, or that vaccines give kids autism. Now imagine actually trying to convince people that the government should be allowed to mess around with their brains. And if the government doesn't mandate it, then the most aggressive and dangerous people will simply opt out.
↑ comment by Roko · 2010-01-10T00:52:17.081Z · LW(p) · GW(p)
This is borderline impossible in a liberal democracy... Now imagine actually trying to convince people that the government should be allowed to mess around with their brains
In the Q&A at 15:30, he opines that it will take the first technologically enabled act of mass terrorism to persuade people. I agree: I don't think anything will get done on x-risks until there's a gigadeath event.
Replies from: Fredrik↑ comment by Fredrik · 2010-01-10T03:44:07.472Z · LW(p) · GW(p)
Even in such a scenario, some rotten eggs would probably refuse the smart drug treatment or the gene therapy injection - perhaps exactly those who would be the instigators of extinction events? Or at least the two groups would overlap somewhat, I fear.
I'm starting to think it would be rational to disperse our world-saving drug of choice by means of an engineered virus of our own, or something equally radically effective. But don't quote me on that. Or whatever, go ahead.
Replies from: billswift, ChristianKl, Roko↑ comment by billswift · 2010-01-10T13:20:25.588Z · LW(p) · GW(p)
Not just "rotten eggs" either. If there is one thing that I could nearly guarantee to bring on serious opposition from independent and extremely intelligent people, that is convince people with brains to become "criminals", it is mandating gov't meddling with their brains. I, for example, don't use alcohol or any other recreational drug, I don't use any painkiller stronger than ibuprofen without excrutiating (shingles or major abcess level) pain, most of the more intelligent people I know feel to some extent the same, and I am a libertarian; do you really think I would let people I despise mess around with my mind?
Replies from: None, Fredrik↑ comment by [deleted] · 2015-11-07T06:18:08.352Z · LW(p) · GW(p)
On the topic of shingles, shingles is associated with depression. Should I ask my GP for the vaccine for prevention given that I live in Australia, have had chickenpox, but haven't had shingles?
↑ comment by Fredrik · 2010-01-10T17:26:31.375Z · LW(p) · GW(p)
You don't have to trust the government, you just have to trust the scientists who developed the drug or gene therapy. They are the ones who would be responsible for the drug working as advertised and having negligible side-effects.
But yes, I sympathize with you - I'm just like that myself, actually. Some people wouldn't be able to appreciate the usefulness of the drug, no matter how hard you tried to explain to them that it's safe, helpful and actually globally risk-alleviating. Those who were memetically sealed off from believing that, or just weren't capable of grasping it, would oppose it strongly - possibly enough to go to war with the rest of the world over it.
It would also take time to reach the whole population with a governmentally mandated treatment. There isn't even a world government right now. We are weak and slow. And one comparatively insane man on the run is one too many.
Assuming an efficient treatment for human stupidity could be developed (and assuming that would be a rational solution to our predicament), then the right thing to do would be to deliver it in the manner causing the least social upheaval and opposition. That would be a covert dispersal, most definitely. A globally coordinated release of a weaponized retrovirus, for example.
We still have some time before even that can be accomplished, though. And once that tech gets here, we face the hugely increasing risk of bioterrorism, or just accidental catastrophes at the hands of some clumsy research assistant, before we have a chance to even properly prototype and test our perfect smart drug.
Replies from: mattnewport↑ comment by mattnewport · 2010-01-10T20:41:55.550Z · LW(p) · GW(p)
If I were convinced of the safety and efficacy of an intelligence-enhancing treatment, I would be inclined to take it and use my enhanced intelligence to combat any government attempts to mandate such treatment.
Replies from: Roko, Fredrik↑ comment by Roko · 2010-01-10T21:54:50.785Z · LW(p) · GW(p)
It might only be a small enhancement. +30 IQ points across the board would save the world; +30 to just you would not make much difference.
Replies from: mattnewport, ChristianKl↑ comment by mattnewport · 2010-01-10T22:21:50.551Z · LW(p) · GW(p)
+30 IQ points across the board would save the world
I find that claim highly dubious.
Replies from: Roko↑ comment by ChristianKl · 2010-01-14T12:05:36.691Z · LW(p) · GW(p)
30 additional points of intelligence for everyone could mean that AI gets developed sooner, and therefore that there is less time for FAI research.
The same goes for biological research that might lead to biological weapons.
Replies from: Roko, Roko↑ comment by Roko · 2010-01-15T14:48:11.803Z · LW(p) · GW(p)
My personal suspicion, and what motivates me to think that IA is a good idea, is that the human race is facing a massive commons problem with respect to AGI. Realizing that there is a problem requires a lot of intelligence. If no one, or very few, realize that something is wrong, then it is unlikely that anything will be done about it. If this is the case, it doesn't matter how much time we have: if there's little support for the project of managing the future, little money and little manpower, then even a century or a millennium is not long enough.
Replies from: ChristianKl↑ comment by ChristianKl · 2010-01-15T15:59:03.047Z · LW(p) · GW(p)
The notion that higher IQ means that more money will be allocated to solving FAI is idealistic. Reality is complex, and the reasons for which money gets allocated are often political in nature and depend on whether institutions function right. Even if individuals have a high IQ, that doesn't mean they don't fall into the groupthink of their institution.
Real-world feedback, however, helps people see problems regardless of their intelligence. Real-world feedback provides truth, whereas a high IQ can just mean that you are better at stacking ideas on top of each other.
Replies from: Roko↑ comment by Roko · 2010-01-15T17:20:59.256Z · LW(p) · GW(p)
Christian, FAI is hard because it doesn't necessarily provide any feedback. There are lots of scenarios where the first failed FAI just kills us all.
That's why I am advocating IA as a way to up the odds of the human race producing FAI before uFAI.
But really, the more I think about it, the more I think that we would do better to avoid AGI all together, and build brain emulations. Editing the mental states of ems and watching the results will provide feedback, and will allow us to "look before we jump".
Replies from: ChristianKl, Morendil↑ comment by ChristianKl · 2010-01-16T00:20:35.670Z · LW(p) · GW(p)
Some sub-ideas of a FAI theory might be put to test in artificial intelligence that isn't smart enough to improve itself.
↑ comment by Morendil · 2010-01-15T17:41:32.939Z · LW(p) · GW(p)
"Editing the mental states of ems" sounds ominous. We would (at some point) be dealing with conscious beings, and performing virtual brain surgery on them has ethical implications.
Moreover, it's not clear that controlled experiments on ems, assuming we get past the ethical issues, will yield radical insight on the structure of intelligence, compared to current brain science.
It's a little like being able to observe a program by running it under a debugger, versus examining its binary code (plus manual testing). Yes this is a much better situation, but it's still way more cumbersome than looking at the source code; and that in turn is vastly inferior to constructing a theory of how to write similar programs.
When you say you advocate intelligence augmentation (this really needs a more searchable acronym), do you mean only through genetic means or also through technological "add-ons" ? (By that I mean devices plugging you into Wikipedia or giving you access to advanced math skills in the same way that a calculator boosts your arithmetic.)
Replies from: Roko, Roko↑ comment by Roko · 2010-01-15T17:58:00.047Z · LW(p) · GW(p)
We would (at some point) be dealing with conscious beings, and performing virtual brain surgery on them has ethical implications.
Hopefully volunteers could be found; but in any case, the stakes here are the end of the world - the end justifies the means.
Replies from: Vladimir_Nesov, ciphergoth, Morendil↑ comment by Vladimir_Nesov · 2010-01-15T21:38:45.015Z · LW(p) · GW(p)
To whoever downvoted Roko's comment -- check out the distinction between these ideas:
↑ comment by Paul Crowley (ciphergoth) · 2010-01-16T11:20:15.777Z · LW(p) · GW(p)
I'd volunteer and I'm sure I'm not the only one here.
Replies from: Roko, AdeleneDawner↑ comment by AdeleneDawner · 2010-01-16T11:47:06.576Z · LW(p) · GW(p)
You're not, though I'm not sure I'd be an especially useful data source.
Replies from: RobinZ↑ comment by RobinZ · 2010-01-16T15:38:33.385Z · LW(p) · GW(p)
I've met at least one person who would like a synesthesia on-off switch for their brain - that would make your data useful right there.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2010-01-17T05:49:12.464Z · LW(p) · GW(p)
Looks to me like that'd be one of the more complicated things to pull off, unfortunately. Too bad; I know a few people who'd like that, too.
↑ comment by Roko · 2010-01-15T17:55:33.031Z · LW(p) · GW(p)
Moreover, it's not clear that controlled experiments on ems, assuming we get past the ethical issues, will yield radical insight on the structure of intelligence, compared to current brain science.
It doesn't have to; working ems would be good enough to lift us out of the problematic situation we're in at the moment.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-01-15T21:34:49.953Z · LW(p) · GW(p)
I worry these modified ems won't share our values to a sufficient extent.
Replies from: Roko↑ comment by Roko · 2010-01-15T22:43:37.799Z · LW(p) · GW(p)
It is a valid worry. But under the right conditions, where we take care not to let evolutionary dynamics take hold, we might be able to get a better shot at a friendly singularity than any other way.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-01-15T23:03:21.235Z · LW(p) · GW(p)
Possibly. But I'd rather use selected human geniuses with the right ideas copied and sped up, and wait for them to crack FAI before going further (even if FAI doesn't give a powerful intelligence explosion -- then FAI is simply formalization and preservation of preference, rather than power to enact this preference).
↑ comment by Fredrik · 2010-01-10T22:23:47.106Z · LW(p) · GW(p)
So individual autonomy is more important? I just don't get that. It's what's behind the wheels of the autonomous individuals that matters. It's a hedonic equation. The risk that unaltered humans pose to the happiness and progress of all other individuals might just work out to "way too fracking high".
It's everyone's happiness and progress that matters. If you can raise the floor for everyone, so that we're all just better, what's not to like about giving everybody that treatment?
Replies from: mattnewport↑ comment by mattnewport · 2010-01-10T22:31:25.424Z · LW(p) · GW(p)
If you can raise the floor for everyone, so that we're all just better, what's not to like about giving everybody that treatment?
The same that's not to like about forcing anything on someone against their will because despite their protestations you believe it's in their own best interests. You can justify an awful lot of evil with that line of argument.
Part of the problem is that reality tends not to be as simple as most thought experiments. The premise here is that you have some magic treatment that everyone can be 100% certain is safe and effective. That kind of situation does not arise in the real world. It takes a generally unjustifiable certainty in the correctness of your own beliefs to force something on someone else against their wishes because you think it is in their best interests.
Replies from: SoullessAutomaton, Fredrik↑ comment by SoullessAutomaton · 2010-01-11T00:09:16.567Z · LW(p) · GW(p)
On the other hand, if you look around at the real world it's also pretty obvious that most people frequently do make choices not in their own best interests, or even in line with their own stated goals.
Forcing people to not do stupid things is indeed an easy road to very questionable practices, but a stance that supports leaving people to make objectively bad choices for confused or irrational reasons doesn't really seem much better. "Sure, he may not be aware of the cliff he's about to walk off of, but he chose to walk that way and we shouldn't force him not to against his will." Yeah, that's not evil at all.
Not to mention that, in reality, a lot of stupid decisions negatively impact people other than just the person making them. I'm willing to grant letting people make their own mistakes but I have to draw the line when they start screwing things up for me.
Replies from: mattnewport↑ comment by mattnewport · 2010-01-11T00:47:13.648Z · LW(p) · GW(p)
On the other hand, if you look around at the real world it's also pretty obvious that most people frequently do make choices not in their own best interests, or even in line with their own stated goals.
I find it interesting that you make a distinction between people making choices that are not in their own best interests and choices not in line with their own stated goals. The implication is that some people's stated goals are not in line with their own 'best interests'. While that may be true, presuming that you (or anyone else) are qualified to make that call and override their stated goals in favour of what you judge to be their best interest is a tendency that I consider extremely pernicious.
Forcing people to not do stupid things is indeed an easy road to very questionable practices, but a stance that supports leaving people to make objectively bad choices for confused or irrational reasons doesn't really seem much better. "Sure, he may not be aware of the cliff he's about to walk off of, but he chose to walk that way and we shouldn't force him not to against his will." Yeah, that's not evil at all.
There's a world of difference between informing someone of a perceived danger that you suspect they are unaware of (a cliff they're about to walk off) and forcibly preventing them from taking some action once they have been made aware of your concerns. There is also a world of difference between offering assistance and forcing something on someone to 'help' them against their will.
Incidentally I don't believe there is a general moral obligation to warn someone away from taking an action that you believe may harm them. It may be morally praiseworthy to go out of your way to warn them but it is not 'evil' to refrain from doing so in my opinion.
Not to mention that, in reality, a lot of stupid decisions negatively impact people other than just the person making them. I'm willing to grant letting people make their own mistakes but I have to draw the line when they start screwing things up for me.
In general this is in a different category from the kinds of issues we've been talking about (forcing 'help' on someone who doesn't want it). I have no problem with not allowing people to drive while intoxicated for example to prevent them causing harm to other road users. In most such cases you are not really imposing your will on them, rather you are withholding their access to some resource (public roads in this case) based on certain criteria designed to reduce negative externalities imposed on others.
Where this issue does get a little complicated is when the negative externalities you are trying to prevent cannot be eliminated without forcing something upon others. The current vaccination debate is an example - there should be no problem allowing people to refuse vaccines if they only harmed themselves but they may pose risks to the very old and the very young (who cannot be vaccinated for medical reasons) through their choices. In theory you could resolve this dilemma by denying access to public spaces for people who refused to be vaccinated but there are obvious practical implementation difficulties with that approach.
Replies from: SoullessAutomaton↑ comment by SoullessAutomaton · 2010-01-11T02:59:37.274Z · LW(p) · GW(p)
I find it interesting that you make a distinction between people making choices that are not in their own best interests and choices not in line with their own stated goals.
Generally what I had in mind there is selecting concrete goals without regard for likely consequences, or with incorrect weighting due to, e.g. extreme hyperbolic discounting, or being cognitively impaired. In other words, when someone's expectations about a stated goal are wrong and the actual outcome will be something they personally consider undesirable.
If they really do know what they're getting into and are okay with it, then fine, not my problem.
If it helps, I also have no problem with someone valuing self-determination so highly that they'd rather suffer severe negative consequences than be deprived of choice, since in that case interfering would lead to an outcome they'd like even less, which misses the entire point. I strongly doubt that applies to more than a tiny minority of people, though.
There's a world of difference between informing someone of a perceived danger that you suspect they are unaware of (a cliff they're about to walk off) and forcibly preventing them from taking some action once they have been made aware of your concerns.
Actually making someone aware of a danger they're approaching is often easier said than done. People have a habit of disregarding things they don't want to listen to. What's that Douglas Adams quote? Something like, "Humans are remarkable among species both for having the ability to learn from others' mistakes, and for their consistent disinclination to do so."
Incidentally I don't believe there is a general moral obligation to warn someone away from taking an action that you believe may harm them. It may be morally praiseworthy to go out of your way to warn them but it is not 'evil' to refrain from doing so in my opinion.
I strenuously disagree that inaction is ever morally neutral. Given an opportunity to intervene, choosing to do nothing is still a choice to allow the situation to continue. Passivity is no excuse to dodge moral responsibility for one's choices.
I begin to suspect that may be the root of our actual disagreement here.
In general this is in a different category from the kinds of issues we've been talking about (forcing 'help' on someone who doesn't want it).
It's a completely different issue, actually.
...but there's a huge amount of overlap. Simply by virtue of living in society, almost any choice an individual makes imposes some sort of externality on others, positive or negative. The externalities may be tiny, or diffuse, but still there.
Tying back to the "helping people against their will" issue, for instance: Consider an otherwise successful individual, who one day has an emotional collapse after a romantic relationship fails, goes out and gets extremely drunk. Upon returning home, in a fit of rage, he destroys and throws out a variety of items that were gifts from the ex-lover. Badly hung over, he doesn't show up to work the next day and is fired from his job. He eventually finds a new, lower-paid and less skilled, job, but is now unable to make mortgage payments and loses his house.
On the surface, his actions have harmed only himself. However, consider what the society as a whole has lost: 1) the economic value of his work for the period where he was unemployed; 2) the greater economic value of a skilled, better-paid worker; 3) the wealth represented by the destroyed gifts; 4) the transaction costs and economic inefficiency resulting from the foreclosure, job search, &c.; 5) the value of any other economic activity he would have participated in, had these events not occurred. [0]
A very serious loss? Not really. Certainly, it would be extremely dubious to say the least for some authority to intervene. But the loss remains, and imposes a very real, if small, negative impact on every other individual.
Now, multiply the essence of that scenario by countless individuals; the cumulative foolishness of the masses, reckless and irrational, the costs of their mistakes borne by everyone alike. Justification for micromanaging everyone's lives? No--if only because that doesn't generally work out very well. Yet, lacking a solution doesn't make the problem any less real.
So, to return to the original discussion, with a hypothetical medical procedure to make people smarter and more sensible, or whatever; if it would reduce the losses from minor foolishness, then not forcing people to accept it is equivalent to forcing people to continue paying the costs incurred by those mistakes.
Not to say I wouldn't also be suspicious of such a proposition, but don't pretend that opposing the idea is free. It's not, so long as we're all sharing this society.
Maybe you're happy to pay the costs of allowing other people to make mistakes, but I'm not. It may very well be that the alternatives are worse, but that doesn't make the situation any more pleasant.
Where this issue does get a little complicated is when the negative externalities you are trying to prevent cannot be eliminated without forcing something upon others. The current vaccination debate is an example - there should be no problem allowing people to refuse vaccines if they only harmed themselves but they may pose risks to the very old and the very young (who cannot be vaccinated for medical reasons) through their choices. In theory you could resolve this dilemma by denying access to public spaces for people who refused to be vaccinated but there are obvious practical implementation difficulties with that approach.
Complicated? That's clear as day. People can either accept the vaccine or find another society to live in. Freeloading off of everyone else and objectively endangering those who are truly unable to participate is irresponsible, intolerable, reckless idiocy of staggering proportion.
[0] One might be tempted to argue that many of these aren't really a loss, because someone else will derive value from selling the house, the destroyed items will increase demand for items of that type, &c. This is the mistake of treating wealth as zero-sum, isomorphic to the Broken Window Fallacy, wherein the whole economy takes a net loss even though some individuals may profit.
Replies from: mattnewport↑ comment by mattnewport · 2010-01-11T09:02:53.580Z · LW(p) · GW(p)
In other words, when someone's expectations about a stated goal are wrong and the actual outcome will be something they personally consider undesirable.
Explaining to them why you believe they're making a mistake is justified. Interfering if they choose to continue anyway, not.
I strenuously disagree that inaction is ever morally neutral. Given an opportunity to intervene, choosing to do nothing is still a choice to allow the situation to continue. Passivity is no excuse to dodge moral responsibility for one's choices.
I begin to suspect that may be the root of our actual disagreement here.
I don't recognize a moral responsibility to take action to help others, only a moral responsibility not to take action to harm others. That may indeed be the root of our disagreement.
This is tangential to the original debate though, which is about forcing something on others against their will because you perceive it to be for the good of the collective.
Badly hung over, he doesn't show up to work the next day and is fired from his job.
I don't want to nitpick but if you are free to create a hypothetical example to support your case you should be able to do better than this. What kind of idiot employer would fire someone for missing one day of work? I understand you are trying to make a point that an individual's choices have impacts beyond himself but the weakness of your argument is reflected in the weakness of your example.
This probably ties back again to the root of our disagreement you identified earlier. Your hypothetical individual is not depriving society as a whole of anything because he doesn't owe them anything. People make many suboptimal choices but the benefits we accrue from the wise choices of others are not our god-given right. If we receive a boon due to the actions of others that is to be welcomed. It does not mean that we have a right to demand they labour for the good of the collective at all times.
Complicated? That's clear as day. People can either accept the vaccine or find another society to live in. Freeloading off of everyone else and objectively endangering those who are truly unable to participate is irresponsible, intolerable, reckless idiocy of staggering proportion.
I chose this example because I can recognize a somewhat coherent case for enforcing vaccinations. I still don't think the case is strong enough to justify compulsion. It's not something I have a great deal of interest in however so I haven't looked for a detailed breakdown of the actual risks imposed on those who are not able to be vaccinated. There would be a level at which I could be persuaded but I suspect the actual risk is far below that level. I'm somewhat agnostic on the related issue of whether parents should be allowed to make this decision for their children - I lean that way only because the alternative of allowing the government to make the decision is less palatable. A side benefit is that allowing parents to make the decision probably improves the gene pool to some extent.
↑ comment by Fredrik · 2010-01-11T00:48:08.199Z · LW(p) · GW(p)
I might be wrong in my beliefs about their best interests, but that is a separate issue.
Given the assumption that undergoing the treatment is in everyone's best interests, wouldn't it be rational to forgo autonomous choice? Can we agree on that it would be?
Replies from: mattnewport↑ comment by mattnewport · 2010-01-11T00:55:54.646Z · LW(p) · GW(p)
I might be wrong in my beliefs about their best interests, but that is a separate issue.
It's not a separate issue, it's the issue.
You want me to take as given the assumption that undergoing the treatment is in everyone's best interests but we're debating whether that makes it legitimate to force the treatment on people who are refusing it. Most of them are presumably refusing the treatment because they don't believe it is in their best interests. That fact should make you question your original assumption that the treatment is in everyone's best interests, or you have to bite the bullet and say that you are right, they are wrong and as a result their opinions on the matter can just be ignored.
Replies from: Fredrik↑ comment by Fredrik · 2010-01-11T02:17:57.923Z · LW(p) · GW(p)
Just out of curiosity, are you for or against the Friendly AI project? I tend to think that it might go against the expressed beforehand will of a lot of people, who would rather watch Simpsons and have sex than have their lives radically transformed by some oversized toaster.
Replies from: mattnewport, Nick_Tarleton↑ comment by mattnewport · 2010-01-11T23:51:50.310Z · LW(p) · GW(p)
I think that AI with greater than human intelligence will happen sooner or later and I'd prefer it to be friendly than not so yes, I'm for the Friendly AI project.
In general I don't support attempting to restrict progress or change simply because some people are not comfortable with it. I don't put that in the same category as imposing compulsory intelligence enhancement on someone who doesn't want it.
Replies from: Fredrik↑ comment by Fredrik · 2010-01-12T04:16:31.783Z · LW(p) · GW(p)
Well, the AI would "presume to know" what's in everyone's best interests. How is that different? It's smarter than us, that's it. Self-governance isn't holy.
Replies from: mattnewport↑ comment by mattnewport · 2010-01-12T04:53:01.224Z · LW(p) · GW(p)
An AI that forced anything on humans 'for their own good' against their will would not count as friendly by my definition. A 'friendly AI' project that would be happy building such an AI would actually be an unfriendly AI project in my judgement and I would oppose it. I don't think that the SIAI is working towards such an AI but I am a little wary of the tendency to utilitarian thinking amongst SIAI staff and supporters as I have serious concerns that an AI built on utilitarian moral principles would be decidedly unfriendly by my standards.
Replies from: Fredrik, pdf23ds↑ comment by Fredrik · 2010-01-12T14:59:02.807Z · LW(p) · GW(p)
I definitely seem to have a tendency to utilitarian thinking. Could you give me a reading tip on the ethical philosophy you subscribe to, so that I can evaluate it more in-depth?
Replies from: mattnewport↑ comment by mattnewport · 2010-01-12T21:30:26.942Z · LW(p) · GW(p)
The closest named ethical philosophy I've found to mine is something like Ethical Egoism. It's not close enough to what I believe that I'm comfortable self identifying as an ethical egoist however. I've posted quite a bit here in the past on the topic - a search for my user name and 'ethics' using the custom search will turn up quite a few posts. I've been thinking about writing up a more complete summary at some point but haven't done so yet.
↑ comment by pdf23ds · 2010-01-12T08:51:21.212Z · LW(p) · GW(p)
The category "actions forced on humans 'for their own good' against their will" is not binary. There's actually a large gray area. I'd appreciate it if you would detail where you draw the line. A couple examples near the line: things someone would object to if they knew about them, but which are by no reasonable standard things that are worth them knowing about (largely these would be things people only weakly object to); an AI lobbying a government to implement a broadly supported policy that is opposed by special interests. I suppose the first trades on the grayness in "against their will" and the second in "forced".
↑ comment by Nick_Tarleton · 2010-01-12T15:03:38.447Z · LW(p) · GW(p)
I tend to think that it might go against the expressed beforehand will of a lot of people, who would rather watch Simpsons and have sex than have their lives radically transformed by some oversized toaster.
It doesn't have to radically transform their lives, if they wouldn't want it to upon reflection. FAI ≠ enforced transhumanity.
↑ comment by ChristianKl · 2010-01-14T12:18:01.549Z · LW(p) · GW(p)
Gene therapy of the type we do at the moment always works through an engineered virus. But as the technique progresses, you won't have to be a nation state anymore to do genetic engineering. A small group of super-empowered individuals might be able to do it.
Replies from: Fredrik↑ comment by Fredrik · 2010-01-14T22:55:29.072Z · LW(p) · GW(p)
Right… I might have my chance then to save the world. The problem is, everyone will get access to the technology at roughly the same time, I imagine. What if the military get there first? This has probably been discussed elsewhere here on LW though...
↑ comment by Roko · 2010-01-10T11:26:42.909Z · LW(p) · GW(p)
I suspect that once most people have had themselves or their children cognitively enhanced, you are in much better shape for dealing with the 10% of sticklers in a firm but fair way.
Replies from: mattnewport↑ comment by mattnewport · 2010-01-10T20:43:04.022Z · LW(p) · GW(p)
I'm not sure quite what you're advocating here but 'dealing with the 10% of sticklers in a firm but fair way' has very ominous overtones to me.
Replies from: ChristianKl, Fredrik, Roko↑ comment by ChristianKl · 2010-01-14T12:29:18.632Z · LW(p) · GW(p)
Those people wouldn't get the jobs or university education they would need in order to use dangerous knowledge about how to manufacture artificial viruses, because they wouldn't be smart enough to compete with the rest.
↑ comment by Fredrik · 2010-01-12T15:13:51.006Z · LW(p) · GW(p)
Well, presumably Roko means we would be restricting the freedom of the irrational sticklers - possibly very efficiently due to our superior intelligence - rather than overriding their will entirely (or rather, making informed guesses as to what is in their ultimate interests, and then acting on that).
↑ comment by Roko · 2010-01-10T20:48:24.908Z · LW(p) · GW(p)
presumably you refer to the violation of individuals' rights here - forcing people to undergo some kind of cognitive modification in order to participate in society sounds creepy?
But how would you feel if the first people to undergo the treatments were politicians? They might be enhanced so that they were incapable of lying. Think of the good that could do.
Replies from: Alicorn, mattnewport, ChristianKl, SoullessAutomaton↑ comment by mattnewport · 2010-01-10T20:52:25.450Z · LW(p) · GW(p)
My feeling is that if you rendered politicians incapable of lying it would be hard to distinguish from rendering them incapable of speaking.
If to become a politician you had to undergo some kind of process to enhance intelligence or honesty I wouldn't necessarily object. Becoming a politician is a voluntary choice however and so that's a very different proposition from forcing some kind of treatment on every member of society.
↑ comment by ChristianKl · 2010-01-14T12:25:41.875Z · LW(p) · GW(p)
Simply using a lie detector on politicians might be a much better idea. It's also much easier. Of course, a lie detector doesn't really detect whether someone is lying - but the same goes for any cognitive enhancement.
↑ comment by SoullessAutomaton · 2010-01-11T00:51:19.814Z · LW(p) · GW(p)
presumably you refer to the violation of individuals' rights here - forcing people to undergo some kind of cognitive modification in order to participate in society sounds creepy?
Out of curiosity, what do you have in mind here as "participate in society"?
That is, if someone wants to reject this hypothetical, make-you-smarter-and-nicer cognitive modification, what kind of consequences might they face, and what would they miss out on?
The ethical issues of simply forcing people to accept it are obvious, but most of the alternatives that occur to me don't actually seem that much better. Hence your point about "the people who do get made smarter can figure it out", I guess.
comment by Christian_Szegedy · 2010-01-12T00:12:26.485Z · LW(p) · GW(p)
I am very skeptical about any human gene-engineering proposals (for anything other than targeted medical treatment purposes.)
Even if we disregard superhuman artificial intelligences, there are a lot of more direct and therefore much quicker prospective technologies in sight: electronic/chemical brain-enhancing/control, digital supervision technologies, memetic engineering, etc.
IMO, the prohibitively long turnaround time of large scale genetic engineering and its inherently inexact (indirect) nature makes it inferior to almost any thinkable alternatives.
Replies from: ChristianKl↑ comment by ChristianKl · 2010-01-13T15:10:42.922Z · LW(p) · GW(p)
We had successful trials of gene therapy in the last year that let monkeys see additional colors. We will have the ability to sequence the genomes of all of humanity sometime in the next decade. Within that decade we will also have the tech to do massive testing, correlate the test scores with genes, and develop gene therapy to switch those genes off.
If we don't have ethical problems with doing so, we could probably start pilot trials of genetic engineering with gene therapy at the end of this decade.
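A toy sketch of the "correlate the test scores with genes" step described above, using simulated data (random genotypes and scores; a real analysis would need far more care with population structure, multiple-testing correction, and effect sizes):

```python
# Toy per-variant association between a simulated genotype matrix and simulated
# test scores. All data here are random, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_variants = 1000, 200
genotypes = rng.integers(0, 3, size=(n_people, n_variants))  # 0/1/2 allele counts
scores = rng.normal(100, 15, size=n_people)                  # simulated test scores

p_values = np.array([
    stats.pearsonr(genotypes[:, j], scores)[1] for j in range(n_variants)
])
print(p_values.argsort()[:5])  # variants most strongly associated (here, purely by chance)
```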
comment by timtyler · 2010-01-10T10:30:21.720Z · LW(p) · GW(p)
Much the same tech as is used to make intelligent machines augments human intelligence - by preprocessing its sensory inputs and post-processing its motor outputs.
In general, it's much quicker and easier to change human culture and the human environment than it is to genetically modify human nature.
Replies from: Roko↑ comment by Roko · 2010-01-10T11:27:33.708Z · LW(p) · GW(p)
quicker and easier to change human culture
how?
Replies from: timtyler↑ comment by timtyler · 2010-01-10T12:02:28.834Z · LW(p) · GW(p)
"Richard Dawkins - The Shifting Moral Zeitgeist"
Human culture is more end-user-modifiable than the human genome is - since we created it in the first place.
Replies from: billswift, Roko↑ comment by billswift · 2010-01-10T13:30:21.374Z · LW(p) · GW(p)
The problem is that culture is embedded in the genetic/evolutionary matrix; there are severe limits on what is possible to change culturally.
Replies from: timtyler↑ comment by timtyler · 2010-01-10T13:56:27.460Z · LW(p) · GW(p)
Culture is what separates us from cavemen. They often killed their enemies and ate their brains. Clearly culture can be responsible for a great deal of change in the domain of moral behaviour.
Replies from: pdf23ds, Jack, Fredrik↑ comment by pdf23ds · 2010-01-11T03:30:21.339Z · LW(p) · GW(p)
If Robin Hanson is right, moral progress is simply a luxury we indulge in during this time of plenty.
Replies from: Jayson_Virissimo, timtyler↑ comment by Jayson_Virissimo · 2010-01-12T06:25:13.565Z · LW(p) · GW(p)
Did crime increase significantly during the Great Depression? Wouldn't this potentially be falsifying evidence for Hanson's hypothesis?
Perhaps the Great Depression just wasn't bad enough, but it seems to cast doubt on the hypothesis, at the very least.
Replies from: Technologos↑ comment by Technologos · 2010-01-12T06:40:34.634Z · LW(p) · GW(p)
Crime is down during the current recession. It's possible that the shock simply hasn't been strong enough, but it may be evidence nonetheless.
I think Hanson's hypothesis was more about true catastrophes, though--if some catastrophe devastated civilization and we were thrown back into widespread starvation, people wouldn't worry about morality.
↑ comment by Fredrik · 2010-01-10T22:30:20.660Z · LW(p) · GW(p)
Culture has also produced radical Islam. Just look at http://www.youtube.com/watch?v=xuAAK032kCA to get a bit more pessimistic about the natural evolution of the moral zeitgeist in culture.
Replies from: timtyler↑ comment by timtyler · 2010-01-10T22:40:27.735Z · LW(p) · GW(p)
What fraction of the population, though? Some people are still cannibals. It doesn't mean there hasn't been moral progress. Update 2011-08-04 - the video link is now busted.
Replies from: Blueberry↑ comment by Blueberry · 2010-01-14T22:14:56.693Z · LW(p) · GW(p)
The persistence of the taboo against cannibalism is an example where we haven't made moral progress. There's no good moral reason to treat eating human meat any differently from eating the meat of other animals, once the animals in question are dead, though there may be health reasons. It's just an example of prejudice and unreasonable moral disgust.
↑ comment by Roko · 2010-01-10T12:34:19.831Z · LW(p) · GW(p)
Hmmm. The problem is, I don't think that Dawkins argues that the changes are deliberate, rather that they are part of a random drift. Also, he speaks in terms of changes over 40-100 years. That is hardly "quick", or even "quicker" than the 40-60 years that I claimed would be a minimum requirement for scientific alteration of human nature to work.
Replies from: timtyler↑ comment by timtyler · 2010-01-10T13:19:35.981Z · LW(p) · GW(p)
Personally, I think the changes are rather directional - and represent moral progress. However, that is a whole different issue.
Think how much the human genome has changed in the last 40-100 years to see how much more rapid cultural evolution can be. Culture is likely to continue to evolve much faster than DNA does - due to ethical concerns, and the whole "unmaintainable spaghetti code" business.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-10T13:29:38.688Z · LW(p) · GW(p)
I like today's morals better than those of any other time and I'd prefer if the idea of moral progress was defensible, but I have no good answer to the criticism "well, you would, you are of this time".
Replies from: DanArmak, timtyler↑ comment by DanArmak · 2010-01-10T22:58:13.712Z · LW(p) · GW(p)
I don't think most people living in other times & places privately agreed with their society's public morality, to the same extent that we do today.
For most of history (not prehistory), there was no option for public debate or even for openly stating opinions. Morality was normally handed down from above, from the rulers, as part of a religion. If those people had an opportunity to live in our society and be acclimatized to it, many of them might have preferred our morality. I don't believe the reverse is true, however.
This doesn't prove that our morality is objectively better - it's impossible to prove this, by definition - but it does dismiss the implication of the argument that "you like today's morality because you live today". Only the people who live today are likely to like their time's morality.
Replies from: ciphergoth, ChristianKl, Roko↑ comment by Paul Crowley (ciphergoth) · 2010-01-11T11:13:45.141Z · LW(p) · GW(p)
This doesn't prove that our morality is objectively better - it's impossible to prove this, by definition - but it does dismiss the implication of the argument that "you like today's morality because you live today". Only the people who live today are likely to like their time's morality.
Thanks, this is a good point - and of course, since there's plenty to dislike about a lot of the morality to be found today, there's reason to hope the people of tomorrow will overall like tomorrow's morality even better. As you say, this doesn't lead to objective morality, but it's a happy thought.
↑ comment by ChristianKl · 2010-01-13T15:29:54.983Z · LW(p) · GW(p)
In the Middle Ages in Europe the middle class lived by a much stricter morality than the ruling class when it came to questions such as sex.
Morality was often a way for the powerless to feel that they were better than the ruling class.
↑ comment by timtyler · 2010-01-10T13:59:33.182Z · LW(p) · GW(p)
If drift were a good hypothesis, steps "forwards" (from our POV) would be about as common as steps "backwards". Are those "backwards" steps really that common?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-10T14:06:39.807Z · LW(p) · GW(p)
If we model morality as a one-dimensional scale and change as a random walk, then what you say is true. However, if we model it as a million-dimensional scale on which each step affects only one dimension, after a thousand steps we would expect to find that nearly every step brought us closer to our current position.
EDIT: simulation seems to indicate I'm wrong about this. Will investigate further. EDIT: it was a bug in the simulation. Numpy code available on request.
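A minimal numpy sketch of that simulation, under the simple model described above (one ±1 step along a single randomly chosen dimension per step; the dimension and step counts are illustrative, and this is not the original code):

    import numpy as np

    rng = np.random.default_rng(0)

    n_dims = 1_000_000   # dimensions of the hypothetical "morality space"
    n_steps = 1_000      # random-walk steps

    # Each step picks one dimension uniformly at random and moves +1 or -1 along it.
    dims = rng.integers(0, n_dims, size=n_steps)
    signs = rng.choice([-1, 1], size=n_steps)

    # Only dimensions that ever get touched (at most n_steps of them) matter;
    # untouched dimensions contribute nothing to the distance.
    touched = np.unique(dims)
    index = {d: i for i, d in enumerate(touched)}
    final = np.zeros(len(touched), dtype=int)
    for d, s in zip(dims, signs):
        final[index[d]] += s

    # Replay the walk, counting the steps that reduce the distance to the final position.
    current = np.zeros(len(touched), dtype=int)
    dist2 = int(np.sum(final ** 2))
    closer = 0
    for d, s in zip(dims, signs):
        i = index[d]
        before = (current[i] - final[i]) ** 2
        current[i] += s
        after = (current[i] - final[i]) ** 2
        new_dist2 = dist2 - before + after
        if new_dist2 < dist2:
            closer += 1
        dist2 = new_dist2

    print(f"{closer} of {n_steps} steps moved the walk closer to its final position")

With a million dimensions and only a thousand steps, nearly every step touches a fresh dimension, so nearly every step counts as a move towards the final position - which is the point being made above.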
Replies from: timtyler↑ comment by timtyler · 2010-01-10T18:49:33.728Z · LW(p) · GW(p)
I would regard any claim that the abolition of hanging, burning witches, caning children in schools, torture, stoning, flogging, keel-hauling and the stocks is "morally orthogonal" with considerable suspicion.
Replies from: ChristianKl, ciphergoth↑ comment by ChristianKl · 2010-01-13T15:36:14.265Z · LW(p) · GW(p)
There has been no abolition of torture in the US. Some clever people ran a campaign in the last decade that eroded the consensus that torture is always wrong. At the same time, the US hasn't gone back to burning witches.
Replies from: RobinZ, timtyler↑ comment by RobinZ · 2010-01-13T15:53:19.091Z · LW(p) · GW(p)
There has been no abolition of torture in the US.
That's not the case. The United States signed and ratified the United Nations Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment.
Replies from: ChristianKl↑ comment by ChristianKl · 2010-01-14T21:50:27.331Z · LW(p) · GW(p)
Last year the US blackmailed the UK, demanding that the UK either violate the United Nations Convention against Torture or stop receiving US intelligence about possible terrorist plots that might kill UK citizens. The US under the Obama administration doesn't just violate the convention itself; it also blackmails other countries into violating it.
Replies from: RobinZ↑ comment by timtyler · 2010-01-13T15:43:03.036Z · LW(p) · GW(p)
Right - but it has been banned elsewhere:
http://en.wikipedia.org/wiki/European_Convention_on_Human_Rights#Article_3_-_torture
↑ comment by Paul Crowley (ciphergoth) · 2010-01-10T19:33:04.318Z · LW(p) · GW(p)
I'm happy to see those things abolished too, but since I'm not a moral realist I can't see how to build a useful model of "moral progress".
Replies from: timtyler↑ comment by timtyler · 2010-01-10T20:06:06.052Z · LW(p) · GW(p)
According to:
http://en.wikipedia.org/wiki/Moral_realism
...this involves attributing truth and falsity to moral statements - whereas it seems more realistic to say that moral truth has a subjective component.
However, the idea of moral progress does not mean there is "one true morality".
It just means that some moralities are better than others. The moral landscape could have many peaks - not just one.
I see no problem with the concept of moral progress. The idea that all moralities are of equal merit seems like totally inexcusable cultural relativism to me. Politically correct, perhaps - but also silly.
Morality is about how best to behave. We have a whole bunch of theory from evolutionary biology that relates to that issue - saying what goals organisms have - which actions are most likely to attain them - how individual goals conflict with goals that are seen as acceptable to society - and so on. Some of it will be a reflection of historical accidents - while other parts of it will be shared with most human cultures - and most alien races.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-11T09:01:34.391Z · LW(p) · GW(p)
My position on these things is currently very close to that set out in THE TERRIBLE, HORRIBLE, NO GOOD, VERY BAD TRUTH ABOUT MORALITY AND WHAT TO DO ABOUT IT.
Replies from: timtyler, Blueberry, pdf23ds↑ comment by timtyler · 2010-01-11T17:57:31.754Z · LW(p) · GW(p)
Well, I hope I explained how a denial of "moral realism" was quite compatible with the idea of moral progress.
Since that was your stated reason for denying moral progress, do you disagree with my analysis, or do you have a new reason for objecting to moral progress, or have you changed your mind about it?
I certainly don't think there is anything wrong with the idea of moral progress in principle.
Finding some alien races would throw the most light on the issue of convergent moral evolution - but in the meantime, our history and the behaviour of other animals (e.g. dolphins) do offer some support for the idea, it seems to me.
Conway Morris has good examples of convergent evolution. It is a common phenomenon - and convergent moral evolution would not be particularly surprising.
If moral behaviour arises in a space which is subject to attractors, then some moral systems will be more widespread than others. If there is one big attractor, then moral realism would have a concrete basis.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-11T18:03:10.170Z · LW(p) · GW(p)
No, sorry, I don't see it at all. When you say "some moralities are better than others", better by what yardstick? If you're not a moral realist, then everyone has their own yardstick.
I really recommend against ever using the thought-stopping phrase "political correctness" for any purpose, but I absolutely reject the "cultural relativism" that you attribute to me as a result, by the way. Someone performing a clitorectomy may be doing the right thing by their own lights, but by my lights they're doing totally the wrong thing, and since my lights are what I care about I'm quite happy to step in and stop them if I have the power to, or to see them locked up for it.
Replies from: timtyler, timtyler↑ comment by timtyler · 2010-01-11T18:26:54.027Z · LW(p) · GW(p)
To continue with your analogy, moral realists claim there is one true yardstick. If you deny that, it doesn't mean you can't measure anything, or that all attempts are useless. For example, people could still use yardsticks if they were approximately the same length.
Replies from: ciphergoth, RobinZ↑ comment by Paul Crowley (ciphergoth) · 2010-01-11T20:03:00.591Z · LW(p) · GW(p)
I'm still not catching it. There isn't one true yardstick, but there has been moral progress. I'm guessing that this is against a yardstick which sounds a bit more "objective" when you state it, such as "maximizing happiness" or "maximising human potential" or "reducing hypocrisy" or some such. But you agree that thinking that such a yardstick is a good one is still a subjective, personal value judgement that not everyone will share, and it's still only against such a judgement that there can be moral progress, no?
Replies from: timtyler↑ comment by timtyler · 2010-01-11T21:39:03.879Z · LW(p) · GW(p)
I don't expect everyone to agree about morality. However, there are certainly common elements in the world's moral systems - common in ways that are not explicable by cultural common descent.
Cultural evolution is usually even more blatantly directional than DNA evolution is. One obvious trend in moral evolution is its increase in size. Caveman morality was smaller than most modern moralities.
Cultural evolution also exhibits convergent evolution - like DNA evolution does.
Most likely, like DNA evolution, it will eventually slow down - as it homes in on a deep, isolated optimum.
If there is one such optimum, and many systems eventually find it, moral realism would have a pretty good foundation. If there were many different optima with wildly-different moralities, it would not. Probably an intermediate position is most realistic - with advanced moral systems agreeing on many things - but not everything.
Replies from: Richard_Kennaway, ciphergoth, Richard_Kennaway, Blueberry↑ comment by Richard_Kennaway · 2010-01-13T00:50:50.783Z · LW(p) · GW(p)
(Replying again here rather than at the foot of a nugatory meta-discussion.)
I suggested C.S. Lewis' "The Abolition of Man" as proposing a candidate for an optimum towards which moral systems have gravitated.
C.S. Lewis was, as Tim Tyler points out, a Christian, but I shall trust that we are all rational enough here to not judge the book from secondary data, when the primary source is so short, clearly written, and online. We need not don the leather cloak and posied beak to avoid contamination from the buboes of this devilish theist oozing Christian memes. It is anyway not written from a Christian viewpoint. To provide a summary would be to make soup of the soup. Those who do not wish to read that, are as capable of not reading this, which is neither written from a Christian viewpoint, nor by a Christian.
I am sufficiently persuaded that the eight heads under which he summarises the Tao can be found in all cultures everywhere: these are things that everyone thinks good. One might accuse him of starting from New Testament morality and recognising only that in his other sources, but if so, the defects are primarily of omission. For example, his Tao contains no word in praise of wisdom: such words can be found in the traditions he draws on, but are not prominent in the general doctrines of Christianity (though not absent either). His Tao is silent on temperance, determination, prudence, and excellence.
Those unfamiliar with talk of virtue can consult this handy aide-memoire and judge for themselves which of them are also to be found in all major moral systems and which are parochial. Those who know many languages might also try writing down all the names of virtues they can think of in each language: what do those lists have in common?
Here's an experiment for everyone to try: think it good to eat babies. Don't merely imagine thinking that: actually think it. I do not expect anyone to succeed, any more than you can look at your own blood and see it as green, or decide to believe that two and two make three.
What is the source of this universal experience?
Lewis says that the Tao exists, it is constant, and it is known to all. People and cultures differ only in how well they have apprehended it. It cannot be demonstrated to anyone, only recognised. He does not speculate in this work on where it comes from, but elsewhere he says that it is the voice of God within us. The less virtuous among us are those who hear that voice more faintly; the evil are those who do not hear it at all, or hear it and hate it. I think there will be few takers for that here.
Some -- well, one, at least -- reverse the arrow, saying that God is the good that we do, which presumably makes Satan the evil that we do.
Others say that there are objective moral facts which we discern by our moral sense, just as we discern objective physical facts by our physical senses; in both cases the relationship requires some effort to attain to the objective truth.
Others say, this is how we are made: we are so constituted as to judge some things virtuous, just as we are so constituted as to judge some things red. They may or may not give evpsych explanations of how this came to be, but whatever the explanation, we are stuck with this sense just as much as we are stuck with our experience of colour or of mathematical truth. We may arrive at moral conclusions by thought and experience, but cannot arbitrarily adopt them. Some claim to have discarded them altogether, but then, some people have managed to put their eyes out or shake their brains to pieces.
Come the Singularity, of course, all this goes by the board. Friendliness is an issue beyond just AGI.
↑ comment by Paul Crowley (ciphergoth) · 2010-01-12T12:03:20.105Z · LW(p) · GW(p)
We're still going in circles. Optimal by what measure? By the measure of maximizing the sort of things I value? Morals have definitely got better by that measure. Please, when you reply, don't use words like "best" or "optimal" or "merit" or any such normative phrase without specifying the measure against which you're maximising.
Replies from: timtyler↑ comment by timtyler · 2010-01-12T18:02:48.628Z · LW(p) · GW(p)
Re: "Optimal by what measure? By the measure of maximizes the sort of things I value?"
No!
The basic idea is that some moral systems are better than others - in nature's eyes. I.e. they are more likely to exist in the universe. Invoking nature as arbiter will probably not please those who think that nature favours the immoral - but they should at least agree that nature provides a yardstick with which to measure moral systems.
I don't have access to the details of which moral systems nature favours. If I did - and had a convincing supporting argument - there would probably be fewer debates about morality. However, the moral systems we have seen on the planet so far certainly seem to be pertinent evidence.
Replies from: ciphergoth, Roko↑ comment by Paul Crowley (ciphergoth) · 2010-01-12T19:23:58.229Z · LW(p) · GW(p)
Measured by this standard, moral progress cannot fail to occur. In any case, that's a measure of progress quite orthogonal to what I value, and so of course gives me no reason to celebrate moral progress.
Replies from: timtyler↑ comment by timtyler · 2010-01-12T20:20:38.013Z · LW(p) · GW(p)
Re: "moral progress cannot fail to occur"
Moral degeneration would typically correspond to devolution - which happens in highly radioactive environments, or under frequent meteorite impacts, or other negative local environmental conditions - provided these are avoidable elsewhere.
However, we don't see very much devolution happening on this planet - which explains why I think moral progress is happening.
I am inclined to doubt that nature's values are orthogonal to your own. Nature built you, and you are part of a successful culture produced by a successful species. Nature made you and your values - you can reasonably be expected to agree on a number of things.
Replies from: gregconen, Jack↑ comment by gregconen · 2010-02-13T12:19:51.403Z · LW(p) · GW(p)
I am inclined to doubt that nature's values are orthogonal to your own. Nature built you, and you are part of a successful culture produced by a successful species. Nature made you and your values - you can reasonably be expected to agree on a number of things.
From the perspective of the universe at large, humans are at best an interesting anomaly. Humans, plus all domesticated animals, crops, etc, compose less than 2% of the earth's biomass. The entire biomass is a few parts per billion of the earth (maybe it's important as a surface feature, but life is still outmassed by about a million times by the oceans and a thousand times by the atmosphere). The earth itself is a few parts per million of the solar system, which is one of several billion like it in the galaxy.
All of the mass in this galaxy, and all the other galaxies, quasars, and other visible collections of matter, is outmassed five to ten times by hydrogen atoms in intergalactic space.
And all that, all baryonic matter, composes a few percent of the mass-energy of the universe.
↑ comment by Jack · 2010-01-12T21:04:24.833Z · LW(p) · GW(p)
negative local environmental conditions
Negative?! They're great for the bacteria that survive.
And I suspect those with "devolved" morality would feel the same way.
Replies from: timtyler↑ comment by timtyler · 2010-01-12T21:38:03.612Z · LW(p) · GW(p)
Sufficiently hostile environmental conditions destroy living things by causing error catastrophes / mutational meltdowns. You have to go in the opposite direction to see constructive, adaptive evolution - which is basically what I was talking about.
Most living systems can be expected to seek out those conditions. If they are powerful enough to migrate, they will mostly exist where living is practical, and mostly die out under conditions which are unfavourable.
Replies from: Jack↑ comment by Jack · 2010-01-12T22:31:43.421Z · LW(p) · GW(p)
Sufficiently hostile environmental conditions destroy living things by causing error catastrophes / mutational meltdowns. You have to go in the opposite direction to see constructive, adaptive evolution - which is basically what I was talking about.
If your environment is insufficiently hostile there will be no natural selection at all. Evolution does not have a direction. The life that survives, survives; the life that does not, does not. That's it. Conditions are favorable for some life and unfavorable for others. There are indeed conditions where few complex, macroscopic life forms will develop - but that is because in those conditions it is disadvantageous to be complex or macroscopic. If you live next to an underwater steam vent you're probably the kind of thing that likes to live there and won't do well in Monaco.
Replies from: timtyler, timtyler↑ comment by timtyler · 2010-01-12T22:45:45.381Z · LW(p) · GW(p)
Re: "Evolution does not have a direction."
My essay about that: http://originoflife.net/direction/
See also, the books "Non-Zero" and "Evolution's Arrow".
Replies from: Jack, Jack↑ comment by Jack · 2010-02-13T07:56:03.994Z · LW(p) · GW(p)
There is no reason to associate complexity with moral progress.
Replies from: timtyler↑ comment by timtyler · 2010-02-13T10:50:31.624Z · LW(p) · GW(p)
Sure. The evidence for moral progress is rather different - e.g. see:
"Richard Dawkins - The Shifting Moral Zeitgeist"
Replies from: Jack↑ comment by Jack · 2010-02-13T12:57:56.825Z · LW(p) · GW(p)
Wait a minute. This entire conversation begins with you conflating moral progress and directional evolution.
However, we don't see very much devolution happening on this planet - which explains why I think moral progress is happening.
Is the relationship between biological and ethical evolution just an analogy or something more for you?
Then I say: what you call good biological changes, other organisms would experience as negative changes, and vice versa.
You put forward the thesis that evolution has a direction because life fills more and more niches and is more and more complex. If those are things that are important to you, great. But that doesn't mean any particular organism should be excited about evolution or that there is a fact of the matter about things getting better. If you have the adaptations to survive in a complex, niche-saturated environment, good for your DNA! If you don't, you're dead. If you like complexity, things are getting better. If you don't, things are getting worse. But the 'getting better' or 'getting worse' is in your head. All that is really happening is that things are getting more complex.
And this is the point about the 'shifting moral Zeitgeist' (which is a perfectly fine turn of phrase btw, because it doesn't imply the current moral Zeitgeist is any truer than the last one). Maybe you can identify trends in how values change but that doesn't make the new values better. But since the moral Zeitgeist is defined by the moral beliefs most people hold, most people will always see moral history up to that point in time as progressive. Similarly, most young people will experience moral progress the rest of their lives as the old die out.
Replies from: timtyler, timtyler, timtyler↑ comment by timtyler · 2010-02-13T14:19:45.712Z · LW(p) · GW(p)
I think there is some kind of muddle occurring here.
I cited the material about directional evolution in response to the claim that: "Evolution does not have a direction."
It was not to do with morality, it was to do with whether evolution is directional. I thought I made that pretty clear by quoting the specific point I was responding to.
Evolution is a gigantic optimization mechanism, a fitness maximizer. It operates in a relatively benign environment that permits cumulative evolution - thus the rather obvious evolutionary arrow.
↑ comment by timtyler · 2010-02-13T14:15:28.421Z · LW(p) · GW(p)
Re: "Is the relationship between biological and ethical evolution just an analogy or something more for you?"
Ethics is part of biology, so there is at least some link. Beyond that, I am not sure what sort of analogy you are suggesting. Maybe in some evil parallel universe, morality gets progressively nastier over time. However, I am more concerned with the situation in the world we observe.
The section you quoted is out of context. I was actually explaining how the idea that "moral progress cannot fail to occur" was not a logical consequence of moral evolution - because of the possibility of moral devolution. It really is possible to look back and conclude that your ancestors had better moral standards.
↑ comment by timtyler · 2010-02-13T14:12:12.470Z · LW(p) · GW(p)
We have already discussed the issue of whether organisms can be expected to see history as moral progress on this thread, starting with:
"If drift were a good hypothesis, steps "forwards" (from our POV) would be about as common as steps "backwards"."
↑ comment by Jack · 2010-02-13T07:53:49.302Z · LW(p) · GW(p)
I haven't read the books, though I'm familiar with the thesis. Your essay is afaict a restatement of that thesis. Now, maybe the argument is sufficiently complex that it needs to be made in a book and I'll remain ignorant until I get around to reading one of these books. But it would be convenient if someone could make the argument in few enough words that I don't have to spend a month investigating it.
↑ comment by Roko · 2010-01-12T18:05:16.761Z · LW(p) · GW(p)
The basic idea is that some moral systems are better than others - in nature's eyes. I.e. they are more likely to exist in the universe.
So, "might is right" ...
Replies from: timtyler↑ comment by timtyler · 2010-01-12T18:34:41.869Z · LW(p) · GW(p)
Nature is my candidate for providing an objective basis for morality.
Moral systems that don't exist - or soon won't exist - might have some interest value - but generally, it is not much use being good if you are dead.
"Might is right" does not seem like a terribly good summary of nature's fitness criteria. They are more varied than that - e.g. see the birds of paradise - which are often more beautiful than mighty.
Replies from: Roko↑ comment by Roko · 2010-01-12T21:17:25.237Z · LW(p) · GW(p)
Nature is my candidate for providing an objective basis for morality.
Ah, ok. That is enlightening. Of the Great Remaining Moral Realists, we have:
Tim Tyler: "The basic idea is that some moral systems are better than other - in nature's eyes. I.e. they are more likely to exist in the universe."
Stefan Pernar: "compassion as a rational moral duty irrespective of an agents level of intelligence or available resources."
David Pearce: "Pleasure and pain are intrinsically motivating and objectively Good and Bad, respectively"
Gary Drescher: "Use the Golden Rule: treat others as you would have them treat you"
Replies from: steven0461, thomblake, timtyler↑ comment by steven0461 · 2010-01-12T23:27:06.093Z · LW(p) · GW(p)
Drescher's use of the Golden Rule comes from his views on acausal game-theoretic cooperation, not from moral realism.
Replies from: Roko↑ comment by Roko · 2010-01-12T23:43:28.581Z · LW(p) · GW(p)
But he furthermore thinks that this can be leveraged to create an objective morality.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2010-01-13T20:46:52.537Z · LW(p) · GW(p)
Isn't this a definitional dispute? I don't think Drescher thinks some goal system is privileged in a queer way. Timeless game theory might talk about things that sound suspiciously like objective morality (all timelessly-trading minds effectively having the same compromise goal system?), but which are still mundane facts about the multiverse and counterfactually dependent on the distribution of existing optimizers.
Replies from: Roko↑ comment by timtyler · 2010-01-12T21:46:47.538Z · LW(p) · GW(p)
I don't think Stefan Pernar makes much sense on this topic.
David Pearce's position is more reasonable - and not very different from mine - since pleasure and pain (loosely speaking) are part of what nature uses to motivate and reward action in living things. However, I disagree with David on a number of things - and prefer my position. For example, I am concerned that David will create wireheads.
I don't know about Gary's position - but the Golden Rule is a platitude that most moral thinkers would pay lip service to - though I haven't heard it used as a foundation of moral behaviour before. Superficially, things like sexual differences make the rule not-as-golden-as-all-that.
Also: "Some examples of robust "moral realists" include David Brink, John McDowell, Peter Railton, Geoffrey Sayre-McCord, Michael Smith, Terence Cuneo, Russ Shafer-Landau, G.E. Moore, Ayn Rand, John Finnis, Richard Boyd, Nicholas Sturgeon, and Thomas Nagel."
↑ comment by Richard_Kennaway · 2010-01-12T18:51:11.618Z · LW(p) · GW(p)
If there is one such optimum, and many systems eventually find it, moral realism would have a pretty good foundation.
Here is one proposed candidate for that optimum.
Replies from: timtyler↑ comment by timtyler · 2010-01-12T19:24:17.902Z · LW(p) · GW(p)
That link is to "C.S. Lewis's THE ABOLITION OF MAN".
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2010-01-12T21:14:07.678Z · LW(p) · GW(p)
And I would be interested to know what people think of Lewis' Tao, and the arguments he makes for it.
Replies from: timtyler↑ comment by timtyler · 2010-01-12T22:52:22.456Z · LW(p) · GW(p)
Since:
http://en.wikipedia.org/wiki/C._S._Lewis#Conversion_to_Christianity
...I figure there would need to be clearly-evident redeeming features for anyone here to bother.
Replies from: thomblake↑ comment by thomblake · 2010-01-12T22:56:05.772Z · LW(p) · GW(p)
Meh. If someone being a theist were enough reason to not bother reading their arguments, we wouldn't read much at all.
Replies from: timtyler, ciphergoth↑ comment by timtyler · 2010-01-12T23:23:54.642Z · LW(p) · GW(p)
You have to filter crap out somehow.
Using "christian nutjob" as one of my criteria usually seems to work pretty well for me. Doesn't everyone do that?
Replies from: Blueberry↑ comment by Blueberry · 2010-01-14T22:21:52.685Z · LW(p) · GW(p)
C. S. Lewis is a Christian, but hardly a nutjob. I filter out Christian nutjobs, but not all Christians.
Replies from: timtyler, ciphergoth↑ comment by timtyler · 2010-01-15T07:06:45.304Z · LW(p) · GW(p)
Are there Christian non-nutjobs? It seems to me that Christianity poisons a person's whole world view - rendering them intellectually untrustworthy. If they believe that, they can believe anything.
Looking at:
http://en.wikipedia.org/wiki/C._S._Lewis#The_Christian_apologist
...there seems to be a fair quantity of nutjobbery to me.
↑ comment by Paul Crowley (ciphergoth) · 2010-01-14T23:05:51.606Z · LW(p) · GW(p)
Except insofar as Christianity is a form of nutjobbery, of course.
Replies from: Blueberry↑ comment by Blueberry · 2010-01-14T23:32:01.823Z · LW(p) · GW(p)
Well... yes and no. I wouldn't trust a Christian's ability to do good science, and I don't think a Christian could write an AI (unless the Christianity was purely cultural and ceremonial). But Christians can and do write brilliant articles and essays on non-scientific subjects, especially philosophy. Even though I disagree with much of it, I still appreciate C.S. Lewis's or G. K. Chesterton's philosophical writing, and find it thought-provoking.
Replies from: timtyler↑ comment by timtyler · 2010-01-15T07:12:28.395Z · LW(p) · GW(p)
In this case, the topic was moral realism. You think Christians have some worthwhile input on that? Aren't their views on the topic based on the idea of morality coming from God on tablets of stone?
Replies from: Blueberry↑ comment by Blueberry · 2010-01-15T17:30:12.772Z · LW(p) · GW(p)
Aren't their views on the topic based on the idea of morality coming from God on tablets of stone?
No, no more than we believe that monkeys turn into humans.
Replies from: timtyler↑ comment by timtyler · 2010-01-15T18:02:11.756Z · LW(p) · GW(p)
Christians believe human morality comes from God. That rather obviously disqualifies them from most sensible discussions about morality - since their views on the topic are utter nonsense.
Replies from: Alicorn↑ comment by Alicorn · 2010-01-15T18:07:18.758Z · LW(p) · GW(p)
This isn't fully general to all Christians. For instance, my best friend is a Christian, and after prolonged questioning, I found that her morality boils down to an anti-hypocrisy sentiment and a social-contract-style framework to cover the rest of it. The anti-hypocrisy thing covers self-identified Christians obeying their own religion's rules, but doesn't extend them to anyone else.
↑ comment by Paul Crowley (ciphergoth) · 2010-01-12T23:56:54.935Z · LW(p) · GW(p)
You can't read everything; you have to collect evidence on what's going to be worth reading. A Christian writing on this sort of moral philosophy is a mark against it; I think that Lewis is often interesting, but I plan to go to bed rather than read it, unless I get some extra evidence to push it the other way.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2010-01-13T01:09:03.957Z · LW(p) · GW(p)
FWIW, I recommend it.
AFAIR, that, the Narnia stories, and the Ransom trilogy are the only Lewis I've read. Are there others you have found interesting?
↑ comment by Blueberry · 2010-01-14T22:23:35.903Z · LW(p) · GW(p)
However, there are certainly common elements in the world's moral systems - common in ways that are not explicable by cultural common descent.
They could be explicable by common evolutionary descent: for instance, our ethics probably evolved because it was useful to animals living in large groups or packs with social hierarchies.
If there is one such optimum, and many systems eventually find it, moral realism would have a pretty good foundation.
No, not at all. That optimum may have evolved to be useful under the conditions we live in, but that doesn't mean it's objectively right.
Replies from: timtyler↑ comment by timtyler · 2010-01-16T15:42:56.896Z · LW(p) · GW(p)
You don't seem to be entering into the spirit of this. The idea of there being one optimum which is found from many different starting conditions is not subject to the criticism that its location is a function of accidents in our history.
Rather obviously - since human morality is currently in a state of progressive development - it hasn't reached any globally optimum value yet.
Replies from: Blueberry↑ comment by Blueberry · 2010-01-17T07:02:59.352Z · LW(p) · GW(p)
Maybe I misunderstood your original comment. You seemed to be arguing that moral progress is possible based on convergence. My point was even if it does reach a globally convergent value, that doesn't mean that value is objectively optimal, or the true morality.
In order to talk about moral "progress", or an "optimum" value, you need to first find some objective yardstick. Convergence does not establish that such a yardstick exists.
Replies from: orthonormal, timtyler↑ comment by orthonormal · 2010-01-17T07:20:20.180Z · LW(p) · GW(p)
I agree with your comment, except that there are some meaningful definitions of morality and moral progress that don't require morality to be anything but a property of the agents who feel compelled by it, and which don't just assume that whatever happens is progress.
(In essence, it is possible - though very difficult for human beings - to figure out what the correct extrapolation from our confused notions of morality might be, remembering that the "correct" extrapolation is itself going to be defined in terms of our current morality and aesthetics. This actually ends up going somewhere, because our moral intuitions are a crazy jumble, but our more meta-moral intuitions like non-contradiction and universality are less jumbled than our object-level intuitions.)
↑ comment by timtyler · 2010-01-17T09:47:44.233Z · LW(p) · GW(p)
Well, of course you can define "objectively optimal morality" to mean whatever you want.
My point was that if there is natural evolutionary convergence, then it makes reasonable sense to define "optimal morality" as the morality of the optimal creatures. If there was a better way of behaving (in the eyes of nature), then the supposedly optimal creatures would not be very optimal.
↑ comment by timtyler · 2010-01-11T21:22:02.581Z · LW(p) · GW(p)
I was criticising the idea that "all moralities are of equal merit". I was not attributing that idea to you. Looking at:
http://en.wikipedia.org/wiki/Cultural_relativism
...it looks like I used the wrong term.
http://en.wikipedia.org/wiki/Moral_relativism
...looks slightly better - but still is not quite the concept I was looking for - I give up for the moment.
Replies from: thomblake↑ comment by thomblake · 2010-01-11T21:34:01.098Z · LW(p) · GW(p)
I'm not sure if there's standard jargon for "all moralities are of equal merit" (I'm pretty sure that's isomorphic to moral nihilism, anyway). However, people tend to read various sorts of relativism that way, and it's not uncommon in discourse to see "Cultural relativism" to be associated with such a view.
Replies from: ciphergoth, timtyler↑ comment by Paul Crowley (ciphergoth) · 2010-01-11T23:36:20.837Z · LW(p) · GW(p)
Believing that all moralities are of equal merit is a particularly insane brand of moral realism.
↑ comment by timtyler · 2010-01-11T22:35:39.949Z · LW(p) · GW(p)
What I was thinking of was postmodernism - in particular the sometimes-fashionable postmodern conception that all ideas are equally valid. It is a position sometimes cited in defense of the idea that science is just another belief system.
↑ comment by pdf23ds · 2010-01-12T05:33:49.082Z · LW(p) · GW(p)
I've been reading that (I'm on page 87), and I haven't gotten to a part where he explains how that makes moral progress meaningless. Why not just define moral progress sort of as extrapolated volition (without the "coherent" part)? You don't even have to reference convergent moral evolution.
Replies from: ciphergoth, ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-12T08:47:23.095Z · LW(p) · GW(p)
I don't think he talks about moral progress. But the point is that no matter how abstractly you define the yardstick by which you observe it, if someone else prefers a different yardstick there's no outside way to settle it.
↑ comment by Paul Crowley (ciphergoth) · 2010-01-12T08:22:38.706Z · LW(p) · GW(p)
I don't think it mentions moral progress. It just seems obvious that if there is no absolute morality, then the only measures against which there has been progress are those that we choose.
Replies from: pdf23ds↑ comment by pdf23ds · 2010-01-12T08:36:25.673Z · LW(p) · GW(p)
Of course it isn't "objective" or absolute. I already disclaimed moral realism (by granting arguendo the validity of the linked thesis). Why does it follow that you "can't see how to build a useful model of 'moral progress'"? Must any model of moral progress be universal?
Replies from: Jack, ciphergoth↑ comment by Jack · 2010-01-12T09:03:51.233Z · LW(p) · GW(p)
It is a truism that as the norms of the majority change, the majority of people will see subjective moral progress. That kind of experience is to be expected once you know that moralities change. So when you use the term moral progress, it is reasonable to assume you think there is some measure for that progress other than your own morality. The way you're using the word progress is throwing a couple of us off.
↑ comment by Paul Crowley (ciphergoth) · 2010-01-12T08:52:25.232Z · LW(p) · GW(p)
If you're talking about progress relative to my values, then absolutely there has been huge progress.
Replies from: pdf23ds↑ comment by pdf23ds · 2010-01-12T08:56:54.615Z · LW(p) · GW(p)
I'm not talking specifically about that. Mainly what I'm wondering is what exactly motivated you to say "can't see how ..." in the first place. What makes a measure of progress that you choose (or is chosen based on some coherent subset of human moral values, etc.) somehow ... less valid? not worthy of being used? something else?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-12T11:56:50.418Z · LW(p) · GW(p)
It's possible we're violently agreeing here. By my own moral standards, and by yours, there has definitely been moral progress. Since there are no "higher" moral standards against which ours can be compared, there's no way for my feelings about it to be found objectively wanting.
comment by ChristianKl · 2010-01-14T12:37:48.309Z · LW(p) · GW(p)
The reason we have terrorism is that we don't have a moral consensus that labels killing people as bad. The US does a lot to convince Arabs that killing people is just when there's a good motive.
Switching to a values-based foreign policy, where the West doesn't violate its moral norms in the minds of Arabs, could help us reach a moral consensus against terrorism, but unfortunately that doesn't seem politically viable at the moment.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-14T13:58:45.690Z · LW(p) · GW(p)
I'd find this pleasant to believe, and I've been a longstanding critic of US foreign policy, but:
Terrorism isn't a big problem; it should be a long way down the list of problems the US needs to think about. It's interesting to speculate on what would make a difference to it, but it would be crazy to make it more than a very small influence on foreign policy.
Terrorists are already a long way from the moral consensus, which is one reason they're so rare.
It seems incredibly implausible to me that they're taking their moral lead from the US in any case.
And of course while killing people is bad all other things being equal, almost everyone already believes that; what they believe is that it's defensible in the pursuit of some other good (such as saving lives elsewhere) which I also believe.
↑ comment by ChristianKl · 2010-01-14T21:40:39.933Z · LW(p) · GW(p)
Terrorists usually aren't a long way from the moral consensus of their community. If you take polls asking people in the Middle East what they think of the US, the answers have changed radically in the last ten years.
In Iran, Western ideals of democracy work well enough to destabilize the government a bit. Our values actually work. They are something that people can believe in and draw meaning from.
comment by Zachary_Kurtz · 2010-01-11T19:39:10.075Z · LW(p) · GW(p)
Doomsday predictions have never come true in the past, no matter how much confidence the futurist had. Why should we believe this particular futurist?
Replies from: Vladimir_Nesov, Roko, PhilGoetz↑ comment by Vladimir_Nesov · 2010-01-11T20:22:50.139Z · LW(p) · GW(p)
Doomsday predictions have never come true in the past,
And why would that be?...
Replies from: Roko↑ comment by Roko · 2010-01-11T20:29:57.579Z · LW(p) · GW(p)
Anthropic issues are relevant here.
It is not possible for humans to observe the end of the human race, so lack of that observation is not evidence.
Global catastrophic risks that weren't the extinction of the race have happened. At one point, it is theorized, there were just 500 reproducing females left. That counts as a close shave.
Also, Homo floresiensis and Homo neanderthalensis did, in fact, get wiped out.
Replies from: Zachary_Kurtz↑ comment by Zachary_Kurtz · 2010-01-11T22:24:27.567Z · LW(p) · GW(p)
I don't think pre-modern catastrophes are relevant to this discussion.
The point about the anthropic issues is well taken, but I still contend that we should be skeptical of over-hyped predictions by supposed experts. Especially when they propose solutions that (apparently, to me) reduce 'freedoms.'
There is a grand tradition of them failing.
And, if we do have the anthropic explanation to 'protect us' from doomsday-like outcomes, why should we worry about them?
Can you explain how it is not hypocritical to consider anthropic explanations relevant to previous experiences but not to future ones?
Replies from: Vladimir_Nesov, Roko, Nick_Tarleton↑ comment by Vladimir_Nesov · 2010-01-11T22:47:41.358Z · LW(p) · GW(p)
And, if we do have the anthropic explanation to 'protect us' from doomsday-like outcomes, why should we worry about them?
Can you explain how it is not hypocritical to consider anthropic explanations relevant to previous experiences but not to future ones?
The observation that you currently exist trivially implies that you haven't been destroyed, but doesn't imply that you won't be destroyed. As simple as that.
Replies from: Zachary_Kurtz↑ comment by Zachary_Kurtz · 2010-01-11T22:51:44.996Z · LW(p) · GW(p)
I can't observe myself getting destroyed either, however.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-01-11T22:54:09.280Z · LW(p) · GW(p)
When you close your eyes, the World doesn't go dark.
Replies from: Zachary_Kurtz↑ comment by Zachary_Kurtz · 2010-01-11T22:56:25.780Z · LW(p) · GW(p)
The world probably doesn't go dark. We can't know for sure without using sense data.
Replies from: ciphergoth↑ comment by Roko · 2010-01-11T22:32:04.185Z · LW(p) · GW(p)
Can you explain how it is not hypocritical to consider anthropic explanations relevant to previous experiences but not to future ones?
Anthropics will prevent us from being able, after the event, to observe that the human race has ended. Dead people don't make observations. However, the race will have ended, which many consider to be a bad thing. I suspect that you're confused about what it is that anthropics says: consider reading the LW wiki or Wikipedia on it.
Of course, if you bring Many Worlds QM into this mix, then you have the quantum immortality hypothesis, stating that nothing can kill you. However, I am still a little uncertain of what to make of QI.
Replies from: Zachary_Kurtz↑ comment by Zachary_Kurtz · 2010-01-11T22:34:54.670Z · LW(p) · GW(p)
I think I was equating quantum immortality with anthropic explanations, in general. My mistake.
Replies from: Roko↑ comment by Roko · 2010-01-11T22:45:54.040Z · LW(p) · GW(p)
No problem. QI still does confuse me somewhat. If my reading of the situation is correct, then properly implemented quantum suicide really would win you the lottery, without you especially losing anything. (yes, in the branches where you lose, you no longer exist, but since I am branching at a rate of 10^10^2 or so splits per second, who cares about a factor of 10^6 here or there? Survival for just one extra second would make up for it - the number of "me's" is increasing so quickly that losing 99.999999% of them is negated by waiting a fraction of a second)
Replies from: Wei_Dai, pdf23ds↑ comment by Wei Dai (Wei_Dai) · 2010-01-11T23:27:02.881Z · LW(p) · GW(p)
since I am branching at a rate of 10^10^2 or so splits per second, who cares about a factor of 10^6 here or there?
You're talking about the number of branches, but perhaps the important thing is not that but measure, i.e., squared amplitude. Branching preserves measure, while quantum suicide doesn't, so you can't make up for it by branching more times if what you care about is measure.
It seems clear that on a revealed preference level, people do care about measure, and not the number of branches, since nobody actually attempts quantum suicide, nor do they try to do anything to increase the branching rate.
If you go further and ask why do we/should we care about measure instead of the number of branches, I have to answer I don't know, but I think one clue is that those who do care about the number of branches but not measure will end up in a large number of branches but have small measure, and they will have high algorithmic complexity/low algorithmic probability as a result.
(I may have written more about this in a OB comment, and I'll try to look it up. ETA: Nope, can't find it now.)
Replies from: Roko, Roko↑ comment by Roko · 2010-01-11T23:41:58.805Z · LW(p) · GW(p)
It seems clear that on a revealed preference level, people do care about measure, and not the number of branches, since nobody actually attempts quantum suicide, nor do they try to do anything to increase the branching rate.
Do you think that the thing that, as a historical fact, causes people to not try quantum suicide, is the argument that it decreases measure? I doubt this a lot. Do you think that if people were told that it preserved measure, they would be popping off to do it all the time?
I don't think that people are revealing a preference for measure here. I think that they're revealing that they trust their instinct to not do weird things that look like suicide to their subconscious.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-01-11T23:49:11.645Z · LW(p) · GW(p)
Do you think that the thing that, as a historical fact, causes people to not try quantum suicide, is the argument that it decreases measure?
No, I'm not claiming that. I think people avoid quantum suicide because they fear death. Perhaps we can interpret that as caring about measure, or maybe not. In either case there is still a question of why do we fear death, and whether it makes sense to care about measure. As I said, I don't know the answers, but I think I do have a clue that others don't seem to have noticed yet.
ETA: Or perhaps we should take the fear of death as a hint that we should care about measure, much like how Eliezer considers his altruistic feelings to be a good reason for adopting utilitarianism.
Replies from: jimrandomh, pdf23ds, steven0461↑ comment by jimrandomh · 2010-01-12T03:50:33.623Z · LW(p) · GW(p)
If quantum suicide works, then there's little hurry to use it, since it's not possible to die before getting the chance. Anyone who does have quantum immortality should expect to have it proven to them, by going far enough over the record age if nothing else. So attempting quantum suicide without such proof would be wrong.
↑ comment by pdf23ds · 2010-01-12T08:10:53.806Z · LW(p) · GW(p)
In either case there is still a question of why do we fear death
Um, what? Why did we evolve to fear death? I suspect I'm missing something here.
Or perhaps we should take the fear of death as a hint that we should care about measure
You're converting an "is" to an "ought" there with no explanation, or else I don't know in what sense you're using "should".
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-01-12T08:40:27.703Z · LW(p) · GW(p)
Um, what? Why did we evolve to fear death? I suspect I'm missing something here.
That the way we fear death has the effect of maximizing our measure, but not the number of branches we are in, is perhaps a puzzle. See also http://lesswrong.com/lw/19d/the_anthropic_trilemma/14r8 starting at "But a problem with that".
You're converting an "is" to an "ought" there with no explanation, or else I don't know in what sense you're using "should".
I'm pointing out a possible position one might take, not one that I agree with myself. See http://lesswrong.com/lw/196/boredom_vs_scope_insensitivity/14jn
Replies from: pdf23ds↑ comment by pdf23ds · 2010-01-12T09:11:38.708Z · LW(p) · GW(p)
I'm pointing out a possible position one might take
Yes, but you didn't explain why anyone would want to take that position, and I didn't manage to infer why. One obvious reason, that the fear of death (the fear of a decrease in measure) is some sort of legitimate signal about what matters to many people, prompts the question of why I should care about what evolution has programmed into me. Or perhaps, more subtly, the question of why my morality function should (logically) similarly weight two quite different things--a huge extrinsic decrease in my measure (involuntary death) vs. an self-imposed selective decrease in measure--that were not at all separate as far as evolution is concerned, where only the former was possible in the EEA, and perhaps where upon reflection only the reasons for the former seem intuitively clear.
ETA: Also, I totally don't understand why you think that it's a puzzle that evolution optimized us solely for the branches of reality with the greatest measure.
↑ comment by steven0461 · 2010-01-11T23:53:04.954Z · LW(p) · GW(p)
Have you looked at Jacques Mallah's papers?
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-01-12T00:15:42.841Z · LW(p) · GW(p)
Yes, and I had a discussion with him last year at http://old.nabble.com/language%2C-cloning-and-thought-experiments-tt22185985.html#a22189232 (Thanks for the reminder.)
If you follow the above link, you'll see that I actually took a position that's opposite of my position here: I said that people mostly don't care about measure. I think the lesson here is that A) I have a very bad memory :-) and B) I don't know how to formalize human preferences.
Replies from: Roko↑ comment by Roko · 2010-01-12T00:38:13.890Z · LW(p) · GW(p)
you'll see that I actually took a position that's opposite of my position here: I said that people mostly don't care about measure. I think the lesson here is that A) I have a very bad memory :-) and B) I don't know how to formalize human preferences.
Well, Wei, I certainly agree that formalizing human preferences is tough!
↑ comment by Roko · 2010-01-11T23:36:26.338Z · LW(p) · GW(p)
Branching preserves measure, while quantum suicide doesn't, so you can't make up for it by branching more times if what you care about is measure.
Preserves measure of what, exactly? The integral of the squared amplitude over all arrangements of particles that we classify into the "Roko ALIVE" category?
I.e. it preserves the measure of the set of all arrangements of particles that we classify into the "Roko ALIVE" category.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-01-11T23:42:16.622Z · LW(p) · GW(p)
Yes, something like that.
Replies from: Roko↑ comment by Roko · 2010-01-12T00:14:30.241Z · LW(p) · GW(p)
But, suppose that what you really care about is what you're about to experience next, rather than measure, i.e. the sum of absolute values of all the complex numbers premultiplying all of your branches?
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-01-12T00:43:42.336Z · LW(p) · GW(p)
But, suppose that what you really care about is what you're about to experience next, rather than what the absolute value of the complex number that premultiplies that experience is?
I think this is a more reasonable alternative to "caring about measure" (as opposed to "caring about the number of branches" which is mainly what I was arguing against in my first reply to you in this thread). I'm not sure what I can say about this that might be new to you. I guess I can point out that this is not something that "evolution would do" if mind copying technology were available, but that's another "clue" that I'm not sure what to make of.
Replies from: Roko↑ comment by Roko · 2010-01-12T00:46:32.558Z · LW(p) · GW(p)
I guess I can point out that this is not something that "evolution would do" if mind copying technology were available
Ok, I'll appease the part of me that cares about what my genes want by donating to every sperm bank in the country (an exploit that very few people use), then I'll use the money from that to buy 1000 lottery tickets determined by random qbits, and on with the QS moneymaker ;-)
↑ comment by pdf23ds · 2010-01-12T06:44:29.987Z · LW(p) · GW(p)
I am branching at a rate of 10^10^2 or so splits per second
Source? I'm curious how that's calculated.
without you especially losing anything
Well, if you have anyone who cares deeply about your continued living, then doing so would hurt them deeply in 99.999999% of universes. But if you're completely alone in the world or a sociopath, then go for it! (Actually, I calculated the percentage for the Mega Millions jackpot, which is 1-1/(C(56,5)*46) = 1-1/1.76e8 ≈ 99.9999994%. Doesn't affect your argument, of course.)
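For anyone who wants to check that figure, a minimal Python sketch (assuming the 5-of-56 white balls plus 1-of-46 mega ball format Mega Millions used around 2010):

    from math import comb

    # Mega Millions, circa-2010 format (assumed): choose 5 of 56 white balls plus 1 of 46.
    tickets = comb(56, 5) * 46          # 175,711,536 equally likely combinations
    p_lose = 1 - 1 / tickets
    print(f"{tickets:,} combinations; P(not winning the jackpot) = {p_lose:.7%}")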
Replies from: Roko↑ comment by Nick_Tarleton · 2010-01-11T22:32:21.354Z · LW(p) · GW(p)
The point about the anthropic issues is well taken, but I still contend that we should be skeptical of over-hyped predictions by supposed experts. Especially when they propose solutions that (apparently, to me) reduce 'freedoms.'
This is a legitimate heuristic, but how familiar are you with the object-level reasoning in this case, which IMO is much stronger?
Replies from: Zachary_Kurtz↑ comment by Zachary_Kurtz · 2010-01-11T22:36:14.087Z · LW(p) · GW(p)
not very. Thanks for the link.
comment by Vladimir_Nesov · 2010-01-10T01:05:33.867Z · LW(p) · GW(p)
The video of the talk has two parts, only the first of which was included in the post. Links to both parts:
- Genetically enhance humanity or face extinction - PART 1
- Genetically enhance humanity or face extinction - PART 2
comment by ChristianKl · 2010-01-13T15:18:00.115Z · LW(p) · GW(p)
The key question isn't "Should we do genetic engineering when we know its complete effects?" but "Should we try genetic engineering even when we don't know what result we will get?"
Should we gather centralized databases of the DNA sequences of every human being and mine them for gene data? Is starting genetic engineering now worth the risk of potential side effects? Do we accept the increased inequality that could result from genetic engineering? How do we measure what constitutes a good gene? Low incarceration rates, IQ, EQ?