[LINK] Refuting common objections to cognitive enhancement

post by grouchymusicologist · 2012-02-07T16:52:39.976Z · LW · GW · Legacy · 6 comments

I've tended to think that bioethics is maybe the most profoundly useless field in mainstream philosophy. I might sum it up by saying that it's superficially similar to machine ethics except that the objects of its warnings and cautions are all unambiguously good things, like cognitive enhancements and life extension. In an era when we should by any reasonable measure be making huge amounts of progress on those problems—and in which one might expect bioethicists to be encouraging such research and helping weigh it against yet another dollar sent to the Susan G. Komen foundation or whatever—one mostly hears bioethicists quoted in the newspaper urging science to slow down. As if doubling human lifespans or giving everyone an extra 15 IQ points would in some way run the risk of "destroying that which makes us human" or something.

Anyway, this has basically been my perspective as a newspaper reader; I don't read specialty publications in bioethics. And perhaps it should come as no surprise that mainstream discourse mostly draws on bioethics to reinforce status quo bias, whether or not that's a true reflection of the field. In any case, it was a welcome surprise to see an interview in The Atlantic with Allen Buchanan, apparently an eminent bioethicist (Duke professor, member of the President's Council on Bioethics), devoted entirely to refuting common objections to cognitive enhancement.

Some points Buchanan makes, responding to common worries, include the following:

  • A much smarter human population will probably be morally, as well as cognitively, enhanced—the "evil genius" problem isn't necessarily a realistic one to worry about.

I doubt any of these points will be at all surprising or novel to LW readers, but I was really pleased to see them covered in a mainstream publication, and to know that bioethics has people like Buchanan who are more interested in what we stand to gain from technology than what we stand to lose.
Via Brian Leiter.

6 comments

comment by Grognor · 2012-02-08T14:24:08.170Z · LW(p) · GW(p)

A much smarter human population will probably be morally, as well as cognitively, enhanced—the "evil genius" problem isn't necessarily a realistic one to worry about.

This is something I think I've noticed. If it is true, then why is it true? Some hypotheses:

  • Smart people spend more time reading and less time watching movies and television shows - the ethics in books is superior / is recalled better
  • Smart people read the works of other smart people - only morally sound writing survives over time, so less exposure to unethical stuff
  • Smart people are more likely to notice cognitive dissonance and more likely to do something about it when they are about to commit a questionable act
  • Smart people spend more time thinking about their actions
  • Smart people have better recall of bad things others have done to them and are unwilling to put others through similar situations
  • Smart people spend more time considering the consequences of their actions

And, in the interest of the virtue of evenness (I could just be inventing ways to confirm my preconceptions, after all), some hypotheses about why smart people are less moral than non-smart people:

  • Smart people are isolated and tormented as youngsters and this makes them cynical and bitter
  • Smart people are "too far above" most people to empathize with them
  • Smart people can overcome petty obstacles like "empathy" and "guilt"
  • Smart people only care about the advancement of science, not what's actually good for people
  • Thinking about ethical systems for too long makes you reject them all?

That second list was much harder to come up with than the first one. Here's hoping I'm just plain right, and that's the reason why.

Anyone want to do some science to figure out which, if any, of these guesses is true?
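As a minimal sketch of what the core analysis of such a study might look like - with a made-up cooperation measure and synthetic data standing in for whatever measurements a real study would use:

```python
# Sketch: does a cognitive-ability measure correlate with a behavioral
# measure of morality? All variables and data here are hypothetical
# stand-ins, not real results.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=0)
n = 500

# Hypothetical IQ scores for n participants.
iq = rng.normal(loc=100, scale=15, size=n)

# Hypothetical "cooperation rate" in a repeated public-goods game,
# generated with a small built-in dependence on IQ purely so the example
# has an effect to detect; real values would come from experiments.
cooperation = np.clip(0.5 + 0.002 * (iq - 100) + rng.normal(0, 0.15, n), 0, 1)

r, p = pearsonr(iq, cooperation)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```

Of course, a bare correlation like this couldn't distinguish among the hypotheses above; that would also require measuring the proposed mediators (reading habits, time spent deliberating, and so on).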

Replies from: grouchymusicologist, gwern, Jayson_Virissimo, Dmytry
comment by grouchymusicologist · 2012-02-08T16:09:12.360Z · LW(p) · GW(p)

Interesting ideas -- I can think of a few more. On the "smart = more moral" side:

  • Smart people are more likely to be able to call on Kahneman's System 2 when necessary, which correlates with utilitarian judgments (see this paper by Joshua Greene et al.). Similarly, they're more likely to have the mental resources to resist their worst impulses, if they want to resist them.
  • Note that some of your "smart = less moral" proposals concern a world in which some people are much smarter than others. If cognitive enhancement were widespread, we might get its moral benefits without the drawbacks of smart people suffering social stigmas of various kinds (your first two bullets in the second set).
  • Being much smarter might include being much better at interpersonal skills, increasing empathy for others.
  • Likewise, if there are morality network effects -- as in the tendency for well-organized societies to be less violent -- then a smarter overall population might be very much more moral.

On the "smart = less moral" side:

  • If cognitive enhancement happens such that some people are much, much smarter than others, the temptation for the much smarter people to use their intelligence to take advantage of the less-smart people may be simply too great to resist. Presumably even very, very smart people will have their price.

By and large, I think I'd agree with you that it seems right that a smarter human population would be more moral, but it's by no means certain.

comment by gwern · 2012-02-08T20:49:51.036Z · LW(p) · GW(p)

Smart people tend to be more cooperative and accepting of economic deals, IIRC; following refs from http://lesswrong.com/lw/7e1/rationality_quotes_september_2011/4r37

comment by Jayson_Virissimo · 2012-02-09T08:31:23.187Z · LW(p) · GW(p)
  • Smart people are more moral because they have a greater ability to recognise what is in their self-interest, and being moral is in their self-interest a significant proportion of the time (in other words, morality is instrumentally rational).

Note: I am not affirming this hypothesis; I merely think it is worth considering.

comment by Dmytry · 2012-02-09T07:27:08.197Z · LW(p) · GW(p)

IMO it's fairly straightforward. Morality requires intelligence, just as constructing buildings that do not fail requires intelligence. To decide on an action based on some high-level moral imperative, one needs to think a fair amount.

Most people are just too stupid to be moral. They murdered minorities with their own hands when they were in the right position in the Third Reich; they burned witches. They aren't even facing a choice of whether to be moral or not. They are a hundred percent amoral as far as the big picture goes. They need to obey very direct, simple rules made by others, and the end result might be moral-ish given a good set of rules. They can't reason from their actions to any high-level moral imperative. In a discussion they'll say that you can't either. Hell, they'll say it with emphasis, seeing it as a virtue - the 'it's wrong' kind of can't.

The intelligent people... A normal kid who has grown up under a presumption of mental disability, among the mentally disabled, will play mentally disabled to get slack. In the same way, many intelligent people grow up with that habit, spoiled by the plausible deniability of intent that playing stupid provides.

In light of this, I think that intelligence enhancement, in a culture that does progress towards improved morality, would improve morality.

comment by fburnaby · 2012-02-10T11:57:56.655Z · LW(p) · GW(p)

As a nerd, I have a (usually socially unacceptable) impulse to offer 16 possible ways that some plan could go wrong. It's fun, and on occasion useful. It seems very possible to me that your impression of "the state of bioethics" comes from a selection effect, where bioethicists show off their coolest objections to an obviously good thing.

Actually, in engineering school, I learned the same notion -- "shoot lame puppies early". It's a good plan to look for every possible (for a reasonably narrow definition of "possible") way your design could fail before you move further.

All I'm trying to say is that just because these philosophers are talking about cases that probably don't matter doesn't mean that no one should think about them. On the very small chance that they do matter, the payoffs for having thought about them are large.