Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation

post by Roko · 2009-06-18T15:09:57.954Z · LW · GW · Legacy · 92 comments

Jamais Cascio writes in The Atlantic:

Pandemics. Global warming. Food shortages. No more fossil fuels. What are humans to do? The same thing the species has done before: evolve to meet the challenge. But this time we don’t have to rely on natural evolution to make us smart enough to survive. We can do it ourselves, right now, by harnessing technology and pharmacology to boost our intelligence. Is Google actually making us smarter? ...

 ... Modafinil isn’t the only example; on college campuses, the use of ADD drugs (such as Ritalin and Adderall) as study aids has become almost ubiquitous. But these enhancements are primitive. As the science improves, we could see other kinds of cognitive-modification drugs that boost recall, brain plasticity, even empathy and emotional intelligence. They would start as therapeutic treatments, but end up being used to make us “better than normal.”

Read the whole article here.

This relates to cognitive enhancement as existential risk mitigation, where Anders Sandberg wrote:

Would it actually reduce existential risks? I do not know. But given correlations between long-term orientation, cooperation and intelligence, it seems likely that it might help not just to discover risks, but also in ameliorating them. It might be that other noncognitive factors like fearfulness or some innate discounting rate are more powerful.

The main criticisms of this idea generated in the Less Wrong comments were:

The problem is not that people are stupid. The problem is that people simply don't give a damn. If you don't fix that, I doubt raising IQ will be anywhere near as helpful as you may think. (Psychohistorian)

Yes, this is the key problem that people don't really want to understand. (Robin Hanson)

Making people more rational and more aware of cognitive biases would help much more (many people)

These criticisms really boil down to the same thing: people love their cherished falsehoods! Of course, I cannot disagree with this statement. But it seems to me that smarter people have a lower tolerance for making utterly ridiculous claims in favour of their cherished falsehoods, and will (to some extent) be protected from believing silly things that make them (individually) feel happier but are unsupported by the evidence. Case in point: religion. This study1 states that

Evidence is reviewed pointing to a negative relationship between intelligence and religious belief in the United States and Europe. It is shown that intelligence measured as psychometric g is negatively related to religious belief. We find that in a sample of 137 countries the correlation between national IQ and disbelief in God is 0.60.

Many people in the comments made the claim that making people more intelligent will, due to human self-deceiving tendencies, make people more deluded about the nature of the world. The data concerning religion undermines this hypothesis. There is also direct evidence that more intelligent people are more likely to avoid a whole list of cognitive biases - though far from all (perhaps even far from most?) of them. This paper2 states:

In a further experiment, the authors nonetheless showed that cognitive ability does correlate with the tendency to avoid some rational thinking biases, specifically the tendency to display denominator neglect, probability matching rather than maximizing, belief bias, and matching bias on the 4-card selection task. The authors present a framework for predicting when cognitive ability will and will not correlate with a rational thinking tendency.

Anders Sandberg also suggested the following piece of evidence3 in favour of the hypothesis that increased intelligence leads to more rational political decisions:

Political theory has described a positive linkage between education, cognitive ability and democracy. This assumption is confirmed by positive correlations between education, cognitive ability, and positively valued political conditions (N=183−130). Longitudinal studies at the country level (N=94−16) allow the analysis of causal relationships. It is shown that in the second half of the 20th century, education and intelligence had a strong positive impact on democracy, rule of law and political liberty independent from wealth (GDP) and chosen country sample. One possible mediator of these relationships is the attainment of higher stages of moral judgment fostered by cognitive ability, which is necessary for the function of democratic rules in society. The other mediators for citizens as well as for leaders could be the increased competence and willingness to process and seek information necessary for political decisions due to greater cognitive ability. There are also weaker and less stable reverse effects of the rule of law and political freedom on cognitive ability.

Thus the hypothesis that increasing people's intelligence will make them believe fewer falsehoods and will make them vote for more effective government has at least two pieces of empirical evidence on its side.

 

 


1. Average intelligence predicts atheism rates across 137 nations, Richard Lynn, John Harvey and Helmuth Nyborg, Intelligence, Volume 37, Issue 1

2. On the Relative Independence of Thinking Biases and Cognitive Ability, Keith E. Stanovich, Richard F. West, Journal of Personality and Social Psychology, 2008, Vol. 94, No. 4, 672–695

3. Relevance of education and intelligence for the political development of nations: Democracy, rule of law and political liberty, Heiner Rindermann, Intelligence, Volume 36, Issue 4

92 comments


comment by Arenamontanus · 2009-06-18T18:28:04.978Z · LW(p) · GW(p)

In many debates about cognition enhancement the claim is that it would be bad, because it would produce compounding effects - the rich would use it to get richer, producing a more unequal society. This claim hinges on the assumption that there would be an economic or social threshold to enhancer use, and that it would produce effects that were strongly in favour of just the individual taking the drug.

I think there is good reason to suspect that enhancement has positive externalities - lower costs due to stupidity, individual benefits that produce tax money, perhaps better governance, cooperation and more great ideas. In fact, it might be that these benefits are more powerful than the individual ones. If everybody got 1% smarter, we would not notice much improvement in everyday life, but the economy might grow a few percent and we would get slightly faster technological development and better governance. That might actually turn the problem into a free rider problem: unless you really want to be smarter, taking the enhancer might be a cost to you (risk of side-effects, for example). So you might want everybody else to take the enhancers, and then reap the benefit without the cost.
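The free-rider structure described above can be made concrete with a toy payoff model (all numbers here are invented for illustration): each person pays a private cost to take the enhancer, while the benefit is a shared externality proportional to the fraction of the population that takes it.

```python
# Toy public-goods model of cognitive enhancement (illustrative numbers only).
COST = 1.0            # private cost of taking the enhancer (side-effect risk, etc.)
SHARED_BENEFIT = 5.0  # externality I enjoy if everyone around me is enhanced

def payoff(i_take: bool, fraction_taking: float) -> float:
    """My payoff: I enjoy the shared externality from everyone's enhancement,
    but pay the private cost only if I take the enhancer myself.
    (In a large population my own choice barely moves fraction_taking.)"""
    return SHARED_BENEFIT * fraction_taking - (COST if i_take else 0.0)

# Whatever everyone else does, abstaining is better by exactly COST...
for others in (0.0, 0.5, 1.0):
    assert payoff(False, others) - payoff(True, others) == COST

# ...yet everyone-takes beats nobody-takes: the classic free-rider tension.
print(payoff(True, 1.0), payoff(False, 0.0))  # 4.0 0.0
```

Any model with a private cost and a mostly-shared benefit produces the same incentive structure, regardless of the particular numbers.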

Replies from: JulianMorrison
comment by JulianMorrison · 2009-06-22T15:03:12.199Z · LW(p) · GW(p)

There's a historical IQ enhancer we can use to look for this effect: food.

comment by wuwei · 2009-06-18T17:31:25.845Z · LW(p) · GW(p)

I think many of the most pressing existential risks (e.g. nanotech, biotech and AI accidents) come from the likely actions of moderately intelligent, well-intentioned, and rational humans (compared to the very low baseline). If that is right then increasing the number of such people will increase rather than decrease risk.

Replies from: Roko, HughRistik, Vladimir_Nesov
comment by Roko · 2009-06-18T18:07:48.705Z · LW(p) · GW(p)

And also, this argument is vulnerable to the reversal test. If you think that higher IQ increases existential risk, then you think that lower IQ decreases it. Presumably you don't believe that putting lead in the water supply would decrease existential risks?

Replies from: steven0461, wuwei, wuwei
comment by steven0461 · 2009-06-18T21:52:00.639Z · LW(p) · GW(p)

believing lead in the water supply would decrease existential risks != advocating putting lead in the water supply

Replies from: Roko
comment by Roko · 2009-06-18T22:06:57.246Z · LW(p) · GW(p)

See correction

comment by wuwei · 2009-06-18T23:56:09.156Z · LW(p) · GW(p)

If you decreased the intelligence of everyone to 100 IQ points or lower, I think overall quality of life would decrease but that it would also drastically decrease existential risks.

Edit: On second thought, now that I think about nuclear and biological weapons, I might want to take that back while pointing out that these large threats were predominantly created by quite intelligent, well-intentioned and rational people.

Replies from: steven0461
comment by steven0461 · 2009-06-19T00:17:05.717Z · LW(p) · GW(p)

If you decreased the intelligence of everyone to 100 IQ points or lower, that would probably eliminate all hope for a permanent escape from existential risk. Risk in this scenario might be lower per time unit in the near future, but total risk over all time would approach 100%.
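The distinction between per-period risk and total risk over all time is simple arithmetic (the hazard rates below are invented): a small constant extinction risk per century compounds toward certainty, whereas a riskier world that eventually reaches a permanently safe state caps its total risk.

```python
# Constant per-period extinction risk compounds toward certainty.
# Both hazard rates below are made-up, purely for illustration.
def survival(per_period_risk: float, periods: int) -> float:
    """Probability of surviving `periods` consecutive periods."""
    return (1.0 - per_period_risk) ** periods

# "Dumber but safer" world: 0.5% risk per century, forever.
print(survival(0.005, 100))     # ~0.61 after 100 centuries
print(survival(0.005, 10_000))  # ~2e-22: total risk approaches 100%

# Riskier world (2% per century) that escapes to safety after 5 centuries:
print(survival(0.02, 5))        # ~0.90 total survival, then safe thereafter
```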

Replies from: Roko, Roko, wuwei
comment by Roko · 2009-06-19T01:10:00.881Z · LW(p) · GW(p)

Consider a world without nuclear weapons. What would there be to prevent World War I ad infinitum? As a male of conscriptable age, I would consider such a scenario to be so bad as to be not much better than global thermonuclear war.

Replies from: Vladimir_Nesov, taw
comment by Vladimir_Nesov · 2009-06-19T04:43:50.674Z · LW(p) · GW(p)

Why do you think it's the nuclear weapons that keep the current peace, and not the memory of past wars, and more generally/recently cultural moral progress? This is related to your prediction in the resource depletion scenario.

comment by taw · 2009-06-19T03:58:37.684Z · LW(p) · GW(p)

The list of wars by death toll is very interesting.

There's little evidence for the theory that the threat of global thermonuclear war creates global peace.

  • Even during the world wars, the percentage of people who died of violence seems vastly smaller than in typical hunter-gatherer societies.
  • There were long periods of peace before, most notably 1815-1914, when military technology was essentially equivalent to that of World War I. Before that, the 18th century was relatively bloodless too.
  • One of the top ten most deadly wars happened just a few years ago. So even accepting the premise that the thermonuclear threat prevents war, we face either wide proliferation, or it won't really do much to stop wars.
  • One of the countries with massive nuclear weapons stockpiles suffered total collapse. This might happen again, in the near future most likely to Pakistan or North Korea, but in the longer term to any country.
  • Countries with nuclear weapons have engaged in plenty of conventional wars, mostly on a smaller scale, and have fought each other by proxy.
comment by Roko · 2009-06-19T01:07:04.239Z · LW(p) · GW(p)

I had exactly the same thought.

Also, on a more pragmatic and personal level, increasing average human intelligence increases the probability of immortality and other "surprisingly good" outcomes of humans or other intelligences optimizing our world, such as universal beauty, health, happiness and better quality of life. This needn't be through superintelligence, it could just be through the intelligence/wealth production correlation.

comment by wuwei · 2009-06-19T00:27:49.674Z · LW(p) · GW(p)

That's a good point, but it would be more relevant if this were a policy proposal rather than an epistemic probe.

Replies from: steven0461
comment by steven0461 · 2009-06-19T00:37:50.248Z · LW(p) · GW(p)

I don't see why this being an epistemic probe makes risk per near future time unit more relevant than total risk integrated over time.

The whole thing is kind of academic, because for any realistic policy there'd be specific groups who'd be made smarter than others, and risk effects depend on what those groups are.

comment by wuwei · 2009-06-18T19:08:55.317Z · LW(p) · GW(p)

You seem to be assuming that the relation between IQ and risk must be monotonic.

I think existential risk mitigation is better pursued by helping the most intelligent and rational efforts than by trying to raise the average intelligence or rationality.

Replies from: Roko
comment by Roko · 2009-06-18T20:19:57.924Z · LW(p) · GW(p)

This claim is false - The reversal test does not require the function risk(IQ) to be monotonic. It only requires that the function is locally monotonic around the current IQ value of 100.

comment by HughRistik · 2009-06-18T20:26:04.577Z · LW(p) · GW(p)

I think many of the most pressing existential risks (e.g. nanotech, biotech and AI accidents) come from the likely actions of moderately intelligent, well-intentioned, and rational humans (compared to the very low baseline).

Could you elaborate a bit more on why you think this? Are there any historical examples you are thinking of?

Replies from: wuwei
comment by wuwei · 2009-06-19T00:00:27.632Z · LW(p) · GW(p)

To answer your second question: No, there aren't any historical examples I am thinking of. Do you find many historical examples of existential risks?

Edit: Global nuclear warfare and biological weapons would be the best candidates I can think of.

Replies from: HughRistik
comment by HughRistik · 2009-06-19T05:01:48.903Z · LW(p) · GW(p)

Could you answer my first question, too? Which are the intelligent, well-intentioned, and relatively rational humans you are thinking of? Scientists developing nanotech, biotech, and AI? Policy-makers? Who? How would an example disaster scenario unfold in your view?

Are you saying that the very development of nanotech, biotech, and AI would create an elevated level of existential risk? If so, I would agree. A common counter-argument I've heard is that whether we like it or not, someone is going to make progress in at least one of those areas, and that we should try to be the first movers rather than someone less scrupulous.

In terms of safety, using AI as an example:

World with no AI > World where relatively scrupulous people develop an AI > World where unscrupulous people develop an AI

Think about how the world would be if Russia or Germany had developed nukes before the US.

Global nuclear warfare and biological weapons would be the best candidates I can think of.

Intelligence did allow the development of nukes. Yet given that we already have them, global intelligence would probably decrease the risk of them being used.

Let's assume, for the sake of argument, that the mere development of future nanotech, biotech, and AI doesn't go horribly wrong and create an existential disaster. If so, then the existential risk will lie in how these technologies are used.

I will suggest that there is a certain threshold of intelligence greater than ours where everyone is smart enough not to do globally harmful stunts with nuclear weapons, biotech, nanotech, and AI and/or smart enough to create safeguards where small amounts of intelligent crazy people can't do so either. The trick will be getting to that level of intelligence without mishap.

Replies from: HughRistik
comment by HughRistik · 2009-06-19T05:47:04.441Z · LW(p) · GW(p)

I was reading the Wikipedia Cuban Missile Crisis article, and it does seem that intelligence helped avert catastrophe. There are multiple points where things could have gone wrong but didn't due to people being smart enough not to do something rash. I suggest that even greater intelligence might ensure that situations like this never develop or are resolved.

Here are some interesting parts:

That morning, a U-2 piloted by USAF Major Rudolf Anderson, departed its forward operating location at McCoy AFB, Florida, and at approximately 12:00 p.m. Eastern Standard Time, was shot down by an S-75 Dvina (NATO designation SA-2 Guideline) SAM launched from an emplacement in Cuba. The stress in negotiations between the USSR and the U.S. intensified, and only later was it learned that the decision to fire was made locally by an undetermined Soviet commander on his own authority.

If this guy had been smarter, maybe this mistake would never have been made.

We had to send a U-2 over to gain reconnaissance information on whether the Soviet missiles were becoming operational. We believed that if the U-2 was shot down that—the Cubans didn't have capabilities to shoot it down, the Soviets did—we believed if it was shot down, it would be shot down by a Soviet surface-to-air-missile unit, and that it would represent a decision by the Soviets to escalate the conflict. And therefore, before we sent the U-2 out, we agreed that if it was shot down we wouldn't meet, we'd simply attack. It was shot down on Friday [...]. Fortunately, we changed our mind, we thought "Well, it might have been an accident, we won't attack." Later we learned that Khrushchev had reasoned just as we did: we send over the U-2, if it was shot down, he reasoned we would believe it was an intentional escalation. And therefore, he issued orders to Pliyev, the Soviet commander in Cuba, to instruct all of his batteries not to shoot down the U-2.

Luckily, Kruschev and McNamara were smart enough not to escalate. Their intelligence protected against the risk caused by the stupid Soviet commander.

Arguably the most dangerous moment in the crisis was unrecognized until the Cuban Missile Crisis Havana conference in October 2002, attended by many of the veterans of the crisis, at which it was learned that on October 26, 1962 the USS Beale had tracked and dropped practice depth charges on the B-39, a Soviet Foxtrot-class submarine which was armed with a nuclear torpedo. Running out of air, the Soviet submarine was surrounded by American warships and desperately needed to surface. An argument broke out among three officers on the B-39, including submarine captain Valentin Savitsky, political officer Ivan Semonovich Maslennikov, and chief of staff of the submarine flotilla, Commander Vasiliy Arkhipov. An exhausted Savitsky became furious and ordered that the nuclear torpedo on board be made combat ready. Accounts differ about whether Commander Arkhipov convinced Savitsky not to make the attack, or whether Savitsky himself finally concluded that the only reasonable choice left open to him was to come to the surface.[29]

At the Cuban Missile Crisis Havana conference, Robert McNamara admitted that nuclear war had come much closer than people had thought. Thomas Blanton, director of the National Security Archive, said that "a guy called Vasili Arkhipov saved the world."

Basically, a stupid dude on the sub wanted to use the missile, but a smart dude stopped him.

Yes, existential risk ultimately came from the intelligent developers of nuclear weapons. Yet once the cat was out of the bag, existential risks came from people being stupid, and those risks were counteracted by people being smart. I would expect that more intelligence would be even more helpful in potential disaster situations like this.

The real risk seems to be from weapons developed by smart people falling into the hands of stupid people. Yet if even the stupidest people were smart enough not to play around with mutually assured destruction, then the world would be a safer place.

Replies from: Annoyance
comment by Annoyance · 2009-06-22T22:12:59.014Z · LW(p) · GW(p)

What relationship does the kind of 'smartness' possessed by the individuals in question have with IQ?

I don't think there are good reasons for thinking they're one and the same.

Replies from: MichaelBishop, HughRistik
comment by Mike Bishop (MichaelBishop) · 2009-06-22T23:27:14.073Z · LW(p) · GW(p)

I agree with Annoyance here. My guess is that a higher IQ may help the individuals in the situations HughRistik describes, but this is not the type of evidence we should consider very convincing. In this example, I would guess that differences in the individuals' desire and ability to think through the consequences of their actions are far more important than differences in their IQ. This may be explained by the incentives facing each individual.

Replies from: HughRistik, Annoyance
comment by HughRistik · 2009-06-24T00:04:01.595Z · LW(p) · GW(p)

In this example, I would guess that differences in the individuals' desire and ability to think through the consequences of their actions are far more important than differences in their IQ.

This may be true, but "ability to think through the consequences of actions" is probably not independent of general intelligence. People with higher g are better at thinking through everything. This is what the research I linked to (and much that I didn't link to) shows.

This graph from one of the articles shows that people with higher IQ are less likely to be unemployed, have illegitimate children, live in poverty, or be incarcerated. These life outcomes seem potentially related to considering consequences and planning for the long-term. If intelligence is related to positive individual life outcomes, then it would be unsurprising if it is also related to positive group or world outcomes.

In the case of avoiding use of nuclear weapons, there is probably only a certain threshold of intelligence necessary. Yet from the historical example of the Cuban Missile Crisis, the thinking involved wasn't always trivial:

We had to send a U-2 over to gain reconnaissance information on whether the Soviet missiles were becoming operational. We believed that if the U-2 was shot down that—the Cubans didn't have capabilities to shoot it down, the Soviets did—we believed if it was shot down, it would be shot down by a Soviet surface-to-air-missile unit, and that it would represent a decision by the Soviets to escalate the conflict. And therefore, before we sent the U-2 out, we agreed that if it was shot down we wouldn't meet, we'd simply attack. It was shot down on Friday [...]. Fortunately, we changed our mind, we thought "Well, it might have been an accident, we won't attack." Later we learned that Khrushchev had reasoned just as we did: we send over the U-2, if it was shot down, he reasoned we would believe it was an intentional escalation. And therefore, he issued orders to Pliyev, the Soviet commander in Cuba, to instruct all of his batteries not to shoot down the U-2.

Both sides were constantly guessing the reasoning of the other.

In short, we do have reasons to suspect a relationship between intelligence and restraint with existentially risky technologies. People with higher intelligence don't merely have greater "book smarts," they have better cognitive performance in general and better life and career outcomes on an individual level, which may also extrapolate to a group/world level. Will more research be necessary to make us confident in this notion? Of course, but our current knowledge of intelligence should establish it as probable.

Furthermore, people with higher intelligence probably have a better ability to guess the moves of other people with existentially risky technologies and navigate Prisoners' Dilemmas of mutually assured destruction, as we see in the historical example of the Cuban Missile Crisis. We don't have rigorous scientific evidence for this point yet, though I don't think it's a stretch, and hopefully we will never have a large sample size of existential crises.
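The mutually-assured-destruction logic invoked above can be written down as a tiny one-shot game (the payoffs are invented, just to expose the structure): when a second strike is assured, attacking never improves on holding, whatever the other side does.

```python
# A minimal one-shot MAD game. All payoffs are invented for illustration;
# the only structural assumption is that retaliation is assured, so an
# attacker is destroyed whether or not the other side attacks first.
PAYOFFS = {
    ("hold", "hold"): 0,        # status quo
    ("hold", "attack"): -100,   # destroyed by their first strike
    ("attack", "hold"): -100,   # destroyed by their assured second strike
    ("attack", "attack"): -100, # mutual destruction
}

# Holding weakly dominates attacking: against every opponent move,
# attacking does no better than holding.
for their_move in ("hold", "attack"):
    assert PAYOFFS[("attack", their_move)] <= PAYOFFS[("hold", their_move)]

print("holding weakly dominates attacking")
```

Passing this "test" takes only enough reasoning to see the dominance; failing it requires either not modelling the retaliation at all, or not caring about the payoff.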

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2009-06-24T03:43:17.278Z · LW(p) · GW(p)

I'm not sure we have serious disagreements on this. Research on intelligence enhancement sounds like a good idea, for many reasons. I'm just choosing to emphasize that there are probably other much more effective approaches to reducing existential risks, and its by no means impossible that intelligence enhancement could increase existential risks.

comment by Annoyance · 2009-06-23T16:26:37.304Z · LW(p) · GW(p)

What about the inherent incentive that motivates people even in the absence of strong external factors?

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2009-06-23T21:56:45.548Z · LW(p) · GW(p)

I'm not sure I understand you. Are you referring to the distinction between intrinsic and extrinsic motivation?

Replies from: Annoyance
comment by Annoyance · 2009-06-24T19:41:08.380Z · LW(p) · GW(p)

More like a distinction between different types of intrinsic factors.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2009-06-24T21:28:16.970Z · LW(p) · GW(p)

I still have no idea what you're talking about and how it relates to my comment.

comment by HughRistik · 2009-06-22T23:57:07.284Z · LW(p) · GW(p)

When I said "smartness," I was thinking of general intelligence, the g-factor. As it happens, g does have a high correlation with IQ (0.8 as I recall, though I can't find the source right now). g is a highly general factor related to better performance in many areas including career and general life tasks, not just in academic settings (see p. 342 for a summary of research), so we should hypothesize that nuclear missile restraint is related to g also.

Replies from: conchis
comment by conchis · 2009-06-23T16:40:33.172Z · LW(p) · GW(p)

As it happens, g does have a high correlation with IQ

Someone who knows the details of this is welcome to correct me if I'm wrong, but as I understand it g is a hypothetical construct derived via factor analysis on the components of IQ tests, so it will necessarily have a high correlation with those tests (provided the results of the components are themselves correlated).

Replies from: Annoyance
comment by Annoyance · 2009-06-23T16:48:42.823Z · LW(p) · GW(p)

Correct. g is the degree to which performances on various subtypes of IQ tests are statistically correlated - the degree that performance on one predicts performance on another.

It's a very crude concept, and one that has not been reliably identified as being detectable without use of IQ tests, although several neurophysiologic properties have been suggested as indicating g.
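The factor-analytic picture described in this thread can be sketched numerically (simulated data, not real test norms; the loadings are invented): generate subtest scores that share a single latent ability, then recover that ability as the first principal component of the subtest correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_subtests = 5000, 6

# Simulate scores: each subtest = shared latent ability + independent noise.
# These loadings are made up; real IQ batteries estimate them from data.
g_latent = rng.standard_normal(n_people)
loadings = np.array([0.8, 0.7, 0.75, 0.6, 0.65, 0.7])
noise = rng.standard_normal((n_people, n_subtests))
scores = g_latent[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

# "g" extracted as the first principal component of the correlation matrix.
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
first_pc = eigvecs[:, -1]          # eigenvector of the largest eigenvalue
g_estimate = scores @ first_pc

# The recovered factor closely tracks the latent ability that generated
# the data (sign of an eigenvector is arbitrary, hence abs).
r = abs(np.corrcoef(g_estimate, g_latent)[0, 1])
print(round(r, 2))  # high, around 0.9
```

This is why g necessarily correlates highly with the tests it is extracted from: it is, by construction, the component that best summarizes their shared variance.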

comment by Vladimir_Nesov · 2009-06-18T17:59:27.881Z · LW(p) · GW(p)

That's a kind of giant cheesecake fallacy. Capability increases risk caused by some people, but it also increases the power of other people to mitigate the risks. Knowing about the increase in the capability of these factors doesn't help you in deciding which of them wins.

Replies from: wuwei, saturn, Eliezer_Yudkowsky
comment by wuwei · 2009-06-18T19:07:41.561Z · LW(p) · GW(p)

And I will suggest in turn that you are guilty of the catchy fallacy name fallacy. The giant cheesecake fallacy was originally introduced as applying to those who anthropomorphize minds in general, often slipping from capability to motivation because a given motivation is common in humans.

I'm talking about a certain class of humans and not suggesting that they are actually motivated to bring about bad effects. Rather all it takes is for there to be problems where it is significantly easier to mess things up than to get it right.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-18T19:15:21.219Z · LW(p) · GW(p)

I agree, this doesn't fall clearly under the original concept of giant cheesecake fallacy, but it points to a good non-specious generalization of that concept, for which I gave a self-contained explanation in my comment.

Aside from that, your reply addresses issues irrelevant to my critique of your assertion. It sounds like a soldier-argument.

Replies from: HughRistik
comment by HughRistik · 2009-06-18T20:25:07.318Z · LW(p) · GW(p)

It's not the giant cheesecake fallacy, but Vladimir Nesov is completely correct when he says:

Capability increases risk caused by some people, but it also increases the power of other people to mitigate the risks. Knowing about the increase in the capability of these factors doesn't help you in deciding which of them wins.

Anyone arguing that existential risks are elevated by increasing intelligence must also account for the mitigating factor against existential risk that intelligence also plays.

Replies from: timtyler
comment by timtyler · 2009-06-19T01:34:25.665Z · LW(p) · GW(p)

That is rather easily accounted for, I would think. Attack is easier than defense. It is easier to build a bomb than to defend against bomb attacks; it is easier to build a laser than to defend against laser attacks - and so on.

Replies from: HughRistik
comment by HughRistik · 2009-06-19T05:02:24.653Z · LW(p) · GW(p)

This is true. Yet capability to attack isn't the same thing as actually attacking.

Even at our current level of intelligence, the world is not ravaged by nuclear or biological weapons. Maybe we have just been lucky so far.

All else being equal, smarter people are probably less likely to attack with globally threatening weapons, particularly when mutually assured destruction is a factor. In cases of MAD, attack isn't exactly "easy" when you are ensuring your own destruction as well. There are some crazy people with nukes, but you have to be crazy and stupid to attack in the case of MAD, and nobody so far has that combination of craziness and stupidity. MAD is an IQ test that all humans with nukes have passed so far (the US bombing Japan was not under MAD).

I propose a study:

The participants are a sample of despots randomly assigned to two conditions. The control condition is given an IQ test and some nukes. The experimental condition is given intelligence enhancement, an IQ test, and some nukes. At the end of the experiment, scientists stationed on the moon will measure the effect of the intelligence manipulation on nuke usage.

Replies from: cousin_it
comment by cousin_it · 2009-06-19T10:12:30.998Z · LW(p) · GW(p)

But the US did bomb Japan. For each new existentially threatening tech, the first power to develop it won't be bound by MAD.

Replies from: loqi, Vladimir_Golovin, HughRistik
comment by loqi · 2009-06-19T16:30:12.579Z · LW(p) · GW(p)

And notice that it didn't provoke a nuclear war, and the human race still exists. Nuclear weapons weren't an existential threat until multiple parties obtained them. If MAD isn't a concern in using a given weapon, it doesn't sound like much of an existential threat.

Replies from: cousin_it
comment by cousin_it · 2009-06-19T17:34:56.796Z · LW(p) · GW(p)

If MAD isn't a concern in using a given weapon, it doesn't sound like much of an existential threat.

I don't understand the logic of this sentence. If I create an Earth-destroying bomb in my basement, MAD doesn't apply but it's still an existential threat. Similar reasoning works for nanotech, biotech and AI.

comment by Vladimir_Golovin · 2009-06-19T12:13:43.263Z · LW(p) · GW(p)

There could be cases when an older-generation technology can be used to assure destruction. Say, if the new tech doesn't prevent ICBMs and nuclear explosions, both sides will still be bound by MAD.

comment by HughRistik · 2009-06-19T19:09:52.184Z · LW(p) · GW(p)

This is a problem, but not necessarily an existential risk, which is the topic under discussion. Existential risk has a particular meaning: it must be global, whereas the US bombing Japan was local.

comment by saturn · 2009-06-18T20:26:28.762Z · LW(p) · GW(p)

If we assume that causing risk requires a certain intelligence level and mitigating risks requires a certain (higher) level, changing the distribution of intelligence in a way that enlarges both groups will not, in general, enlarge both by the same factor.
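The unequal-growth point above follows from the shape of the normal distribution, and is easy to check (both thresholds are hypothetical numbers chosen purely for illustration): uniformly shifting everyone's IQ up multiplies the population above a far threshold by a larger factor than the population above a nearer one.

```python
import math

def frac_above(threshold: float, mean: float, sd: float = 15.0) -> float:
    """Fraction of a normal(mean, sd) IQ distribution above a threshold."""
    return 0.5 * math.erfc((threshold - mean) / (sd * math.sqrt(2.0)))

# Hypothetical thresholds: causing a risk needs IQ > 130, mitigating it
# needs IQ > 145. Shift everyone up 5 points and compare growth factors.
for cutoff in (130.0, 145.0):
    before = frac_above(cutoff, 100.0)
    after = frac_above(cutoff, 105.0)
    print(cutoff, round(after / before, 1))  # 130 -> ~2.1x, 145 -> ~2.8x

# The farther tail grows by the larger factor, so a uniform shift favors
# the higher-threshold (here, risk-mitigating) group disproportionately.
```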

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-18T20:32:59.148Z · LW(p) · GW(p)

Obviously. A coin is also going to land on exactly one of its sides (but you don't know which one). Why do you point this out?

Replies from: timtyler
comment by timtyler · 2009-06-19T01:36:29.104Z · LW(p) · GW(p)

That statement shows a way in which the claim that increasing the number of intelligent people will increase rather than decrease risk might be supported.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-06-18T19:56:34.748Z · LW(p) · GW(p)

How the heck is that a giant cheesecake fallacy?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-06-18T20:15:17.720Z · LW(p) · GW(p)

Both are special cases of the following fallacy. A certain factor increases the strength of some possible positive effect, and also the strength of some possible negative effect, with the consequences of these effects taken in isolation being mutually exclusive. An argument is then given that since this factor increases the positive effect (negative effect), the consequences are going to be positive (negative), and therefore the factor itself is instrumentally desirable (undesirable). The argument doesn't recognize the other side of the possible consequences, ignoring the possibility that the opposite effect is going to dominate instead.

Maybe it has another existing name; the analogy seems useful.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-06-19T09:04:19.846Z · LW(p) · GW(p)

Giant cheesecake is about the jump from capability to motive, usually in the presence of anthropomorphism or other reasons to assume the preference without thinking.

This sounds more like a generic problem of technophilia (phobia) - mostly just confirmation bias or standard filtering of arguments. It probably does need a name, though, like Appeal to Selected Possibilities or something like that.

comment by Scott Alexander (Yvain) · 2009-06-18T15:42:37.574Z · LW(p) · GW(p)

Really, really, really doubtful that correlations between national IQ and, well, anything prove anything besides that certain countries are generally better off than others. That correlation is probably just differentiating First World countries from Third World countries in general - the First World has better health and education, and also better government. Although I'm agnostic on the existence of racial IQ differences, those aren't what's going on here, considering the wide variation in success of countries with similar races.

Same with IQ versus religion within and between countries: it's probably just an artifact of religion vs. wealth correlations. I scanned those articles and didn't see anything saying they'd adjusted for wealth; if they did, then I'll start getting excited.

Replies from: Arenamontanus, Roko, Drahflow, Roko
comment by Arenamontanus · 2009-06-18T18:18:58.019Z · LW(p) · GW(p)

The national/regional IQ literature is messy, because there are so many possible (and even likely) feedback loops between wealth, schooling, nutrition, IQ and GDP. Not to mention the rather emotional views of many people on the topic, as well as the lousy quality of some popular datasets. Lots of clever statistical methods have been used, and IQ seems to retain a fair chunk of explanatory weight even after other factors have been taken into account. Some papers have even looked at staggered data to see if IQ works as a predictor of future good effects, which it apparently does.

Whether it would be best to improve IQ, health or wealth directly depends not just on which has the biggest effect, but also on how easy it is and how the feedbacks work.

comment by Roko · 2009-06-18T16:40:11.522Z · LW(p) · GW(p)

it's probably just an artifact of religion vs. wealth correlations.

If religion is negatively correlated with wealth, then presumably one would attach some likelihood to increasing wealth leading to decreased religious belief. We all take cognitive enhancers, get richer, and then stop believing in silly things, like God. This still results in increased IQ leading to truer beliefs.

This is a good old causation/correlation debate; but it seems to me that without further evidence we should take the IQ/religiosity study as weak evidence in favour of the hypothesis that IQ causes non-religiosity, possibly mediated by wealth:

high-IQ -----> non-religiosity

high-IQ -----> high-Wealth ------> non-religiosity

comment by Drahflow · 2009-06-18T16:38:40.369Z · LW(p) · GW(p)

Or intelligent people are just better at getting wealthy.

Replies from: Roko
comment by Roko · 2009-06-18T16:56:48.583Z · LW(p) · GW(p)

This is almost certainly true. Therefore, we have

high-IQ -----> high-Wealth ------> non-religiosity

comment by Roko · 2009-06-18T16:35:04.321Z · LW(p) · GW(p)

Really, really, really doubtful that correlations between national IQ and, well, anything prove anything besides that certain countries are generally better off than others. That correlation is probably just differentiating First World countries from Third World countries in general - the First World has better health and education, and also better government.

The paper you are referring to - reference 3 - "Estimating state IQ: Measurement challenges and preliminary correlates" - is looking at variation over US states, e.g. Alaska, Alabama, ... not countries. You should re-write your comment taking this into account.

comment by Unnamed · 2009-06-18T23:34:16.353Z · LW(p) · GW(p)

The study showing a correlation between "IQ" and quality of government (reference 3) estimated IQ based on the performance of public school 4th and 8th graders on standardized tests in math and reading. With that measure, the opposite causal direction seems far more likely: high quality state government leads to better public schools and thus higher test scores (which the author uses as a proxy for IQ).

State IQ was estimated from the National Assessment of Educational Progress (NAEP) standardized tests for reading and math that are administered to a sample of public school children in each of the 50 states. ... State data were available for grades 4 and 8. ... For each year, for each test, the national mean and standard deviation was used to standardize the test to have a mean of 100 and a standard deviation of 15. This standardization places the scores on the typical metric for IQ tests. The means of the standardized reading scores for grades 4 and 8 were averaged across years as were the means of the standardized math scores. State IQ was defined as the average of mean reading and mean math scores.
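The standardization the paper describes is simple to sketch. A minimal version, with illustrative numbers rather than the actual NAEP data, and with one simplification: the paper standardizes against the national student-level mean and SD, which is approximated here by the mean and SD across the listed state means.

```python
from statistics import mean, stdev

def to_iq_metric(scores):
    """Linearly rescale scores to mean 100, SD 15 (the usual IQ metric)."""
    m, s = mean(scores), stdev(scores)
    return [100 + 15 * (x - m) / s for x in scores]

# Hypothetical mean NAEP scores for three states, one grade, one year:
reading = to_iq_metric([215.0, 220.0, 235.0])
math_ = to_iq_metric([260.0, 270.0, 295.0])

# "State IQ" is then the average of the standardized reading and math means.
state_iq = [(r + m) / 2 for r, m in zip(reading, math_)]
```

Nothing in this rescaling adds information about cognitive ability; it only puts school test scores on an IQ-looking scale, which is exactly why the causal worry above applies.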

Replies from: Arenamontanus, Roko, Roko
comment by Arenamontanus · 2009-06-19T15:55:52.069Z · LW(p) · GW(p)

This is why papers like H. Rindermann, Relevance of Education and Intelligence for the Political Development of Nations: Democracy, Rule of Law and Political Liberty, Intelligence, v36 n4 p306-322 Jul-Aug 2008 are relevant. This one looks at lagged data, trying to infer how much effect schooling, GDP and IQ at time t1 affects schooling, GDP and IQ at time t2.

The bane of this type of study is of course the raw scores - how much cognitive ability is actually measured by school scores, surveys, IQ tests or whatever means are used - and whether averages are telling us something important. One could imagine a model where extreme outliers were the real force of progress (I doubt this one, given that IQ does seem to correlate with a lot of desirable things and likely has network effects, but the data is likely not strong enough to rule out an outlier theory).

Replies from: Roko
comment by Roko · 2009-06-20T00:44:31.298Z · LW(p) · GW(p)

Thanks Anders. It occurs to me at this point that having a personal Anders to back you up with relevant references when in a tight spot is a significant cognitive enhancement.

comment by Roko · 2009-06-19T00:00:06.743Z · LW(p) · GW(p)

This certainly indicates that the opposite causal direction is more likely, given just that evidence.

I suspect that both directions are active; but I would need further evidence to back this up.

comment by Roko · 2009-06-20T00:36:22.570Z · LW(p) · GW(p)

See correction to article

comment by CronoDAS · 2009-06-19T06:54:19.914Z · LW(p) · GW(p)

Many people in the comments made the claim that making people more intelligent will, due to human self-deceiving tendencies, make people more deluded about the nature of the world.

Well, what I meant to say was that, we can't take it for granted that making people smarter won't make them more biased, in the absence of data. It might not seem likely to happen, but we can't assign it a probability of "too small to matter" just yet.

(This post does, indeed, contain relevant data that suggests that smarter people believe fewer absurdities...)

Replies from: Arenamontanus
comment by Arenamontanus · 2009-06-19T16:03:45.548Z · LW(p) · GW(p)

One bias that I think is common among smart, academically minded people like us is that the value of intelligence is overestimated. I certainly think we have some pretty good objective reasons to believe intelligence is good, but we also add biases because we are a self-selected group with a high "need for cognition" trait, in a social environment that rewards cleverness of a particular kind. In the population at large the desire for more IQ is noticeably lower (and I get far more spam about Viagra than Modafinil!).

If I were on the Hypothetical Enhancement Grants Council, I think I would actually support enhancement of communication and cooperative ability slightly more than pure cognition. More cognitive bang for the buck if you can network a lot of minds.

comment by loqi · 2009-06-18T18:37:25.978Z · LW(p) · GW(p)

Though I lean toward agreeing with the conclusion that increased IQ would mitigate existential risk, I've been somewhat skeptical of the assertions you've previously made to that effect. This post provides some pretty reasonable support for your position.

The statement "Can I find some empirical data showing a correlation between IQ and quality of government" does make me curious about your search strategy, though. Did you specifically look for contrary evidence? Are there any other correlations with IQ (besides the old "more scientists to kill us" argument) that might directly or indirectly contribute to risk, rather than reduce it?

Kudos and karma to anyone who can dig up evidence unambiguously contradicting Roko's hypothesis.

Replies from: Roko
comment by Roko · 2009-06-18T20:26:55.452Z · LW(p) · GW(p)

My search strategy was to put "IQ" "religion" etc into google scholar and google. I found no papers that suggested IQ correlates with increased religiosity. I found the reference to good governance by chance; it was a pleasant surprise.

I did not actively look for contradictory evidence.

Replies from: Annoyance, curious
comment by Annoyance · 2009-06-22T15:49:39.078Z · LW(p) · GW(p)

I did not actively look for contradictory evidence.

I hate to discourage you when you're otherwise doing quite well, but the above is a major, major error.

Due to the human tendency towards confirmation bias, it's vastly important that you try to get a sense of the totality of the evidence, with a heavy emphasis on the evidence that contradicts your beliefs. If you have to prioritize, look for the contradicting stuff first.

Replies from: Roko
comment by Roko · 2009-06-22T16:13:51.566Z · LW(p) · GW(p)

I suppose if I thought anyone would do anything with this idea - like if someone said "OK, great idea, we're going to appoint you as an advisor to the new enhancement panel", I'd start getting very cautious and go make damn sure I wasn't wrong.

But as the situation is ... I am not particularly incentivized to do this; and others at LW will probably be better at finding evidence against this than I am.

Replies from: Annoyance
comment by Annoyance · 2009-06-22T21:08:47.938Z · LW(p) · GW(p)

I'd start getting very cautious and go make damn sure I wasn't wrong.

You should be doing that anyway.

But as the situation is ... I am not particularly incentivized to do this

Interesting. Does it bother you that you are not strongly motivated to avoid error?

Replies from: Alicorn
comment by Alicorn · 2009-06-22T21:14:31.790Z · LW(p) · GW(p)

There is a legitimate question of what errors are worth the time to avoid. Roko made a perfectly sensible statement - that it's not his top priority right now to develop immense certitude about this proposition, but it would become a higher priority if the answer became more important. It is entirely possible to spend all of one's time attempting to avoid error (less time necessary to eat etc. to remain alive and eradicate more error in the long run); I notice that you choose to spend a fair amount of your time making smart remarks to others here instead of doing that. Does it bother you that you are at certain times motivated to do things other than avoid some possible instances of error?

Replies from: Annoyance
comment by Annoyance · 2009-06-22T22:10:48.719Z · LW(p) · GW(p)

Positive errors can be avoided by the simple expedient of not committing them. That usually carries very little cost.

Replies from: Alicorn
comment by Alicorn · 2009-06-22T22:23:13.509Z · LW(p) · GW(p)

I agree completely, but this doesn't seem to be Roko's situation: he's simply not performing the positive action of seeking out certain evidence.

Replies from: Annoyance
comment by Annoyance · 2009-06-23T16:31:45.510Z · LW(p) · GW(p)

But that action is a necessary part of producing a conclusion.

Holding a belief, without first going through the stages of searching for relevant data, is a positive error - one that can be avoided by the simple expedient of not reaching a conclusion before an evaluation process is complete. That costs nothing.

Asserting a conclusion is costly, in more than one way.

Replies from: thomblake, Vladimir_Nesov
comment by thomblake · 2009-06-23T16:45:45.553Z · LW(p) · GW(p)

Humans hold beliefs about all sorts of things based on little or no thought at all. It can't really be avoided. It might be an open question whether one should do something about unjustified beliefs one notices one holds. And I don't think there's anything inherently wrong with asserting an unjustified belief.

Of course, I'm even using 'unjustified' above tentatively - it would be better to say "insufficiently justified for the context" in which case the problem goes away - certainly seeing what looks like a flower is sufficient justification for the belief that there is a flower, if nothing turns on it.

Not sure which sort of case Roko's is, though.

comment by Vladimir_Nesov · 2009-06-23T17:50:38.080Z · LW(p) · GW(p)

At each point, you may reach a conclusion with some uncertainty. You expect the conclusion (certainty) to change as you learn more. It would be an error to immediately jump to inadequate levels of certainty, but not to pronounce an uncertain conclusion.

comment by curious · 2009-06-18T20:40:32.678Z · LW(p) · GW(p)

there's also the possibility of causality in the other direction -- that good governance can raise the IQ of a population (through any number of mechanisms -- better nutrition, better health care, better education, etc).

Replies from: Roko
comment by Roko · 2009-06-18T23:00:21.169Z · LW(p) · GW(p)

Again, finding correlation between IQ and quality of government constitutes weak evidence for the claim that increased IQ causes better government. Note that the authors of the paper made this claim too.

comment by aausch · 2009-06-30T21:45:28.349Z · LW(p) · GW(p)

I am slow and lazy today, so please forgive me if I am asking the obvious:

Do the referenced studies control for the process of acquiring education/intelligence, and test for causality?

It seems that plausible competing hypotheses for the correlation between intelligence and, for example, religious belief are:

  • the process of acquiring intelligence leads to removal of biases, rather than actual possession of intelligence leading to removal of biases. If we change to a different process for acquiring intelligence, we may lose side effects.
  • the process of disposing of religious beliefs leads to a more measurable or noticeable level of intelligence.
  • the process of becoming educated in current education systems (and as a result better exposing existing intelligence aptitude) works at eradicating certain sets of beliefs and biases in students

It seems to me that differentiating between data that supports these hypotheses is incredibly hard, and I wonder if the referenced researchers went to the lengths required.

Replies from: aausch
comment by aausch · 2009-07-01T01:02:03.753Z · LW(p) · GW(p)

Doh! I think missed the obvious.

This problem is related to the problem of producing FAI, according to the terms and assumptions that Eliezer has been using.

I'm willing to bet that making a human, with a broken value system, more intelligent (according to some measure of intelligence based on some kind of increased computational ability of the brain), suffers from much the same kinds of problems that throwing more computing power at an improperly designed AI does.

comment by AndrewKemendo · 2009-06-20T11:29:05.212Z · LW(p) · GW(p)

This comment seems to miss the idea:

What happens if such a complex system collapses? Disaster, of course. But don’t forget that we already depend upon enormously complex systems that we no longer even think of as technological. Urbanization, agriculture, and trade were at one time huge innovations. Their collapse (and all of them are now at risk, in different ways, as we have seen in recent months) would be an even greater catastrophe than the collapse of our growing webs of interconnected intelligence.

If the future is in fact what the rest of the article envisions - a world of accurate measures and prudent predictions - then the possibilities for collapse will become fewer and fewer.

Arguing that such expansion will of course lead to a linearly increasing probability of damage resulting in collapse ignores much, if not most, of the science behind cognitive development and AI: risk mitigation and error elimination.

comment by Annoyance · 2009-06-18T15:20:37.543Z · LW(p) · GW(p)

For every Voltaire, there are a hundred Newtons, Increase Mathers, and Descartes. And countless Michael Behes.

And that's just religion. There are more sacred cows than just the traditional religions, more golden idols than could be worshiped by a hundred thousand faiths. Human cognition is a sepulchre, white-washed walls concealing corruption within.

“Religion always leads to rhetorical despotism,” Leto said. “Before the Bene Gesserit, the Jesuits were the best at it…. You learn enough about rhetorical despotism from a study of the Bene Gesserit. Of course, they do not begin by deluding themselves with it…. It leads to self-fulfilling prophecy and justifications for all manner of obscenities. (... ) It shields evil behind walls of self-righteousness which are proof against all arguments against the evil..."

Replies from: Cyan
comment by Cyan · 2009-06-18T15:57:38.889Z · LW(p) · GW(p)

Nice Heart of Darkness reference.

Replies from: gwern, Annoyance
comment by gwern · 2009-06-18T20:44:55.049Z · LW(p) · GW(p)

Hm, where's the Conrad ref? I see a God Emperor of Dune ref (Dune seems pretty popular here, I've noticed), but not that.

Replies from: Cyan
comment by Cyan · 2009-06-18T20:58:38.196Z · LW(p) · GW(p)

It's the whited sepulchre thing; it's one of the central themes of Heart of Darkness. (Google tells me the original quote is from Matthew 23:27).

comment by Annoyance · 2009-06-22T15:57:40.930Z · LW(p) · GW(p)

Thanks.

The important point is that when we look at the topics on which we can know with high confidence what the rational and correct positions are, there are often lots and lots and lots of highly intelligent people who take the wrong positions.

There was a point in history where atheism and antitheism was highly correlated with intelligence - as in Voltaire's day - but intelligence was not at all correlated with atheism or antitheism.

I suspect that's still true. Most 'scientists' are at least atheists, but if you look across all people with above-average intelligence most of them are theists still.

Intelligence gives people the ability to build taller, stronger, and more effective walls. It doesn't seem to help to induce people not to build them in the first place, or to tear down existing ones.

Replies from: orthonormal
comment by orthonormal · 2009-06-22T18:40:21.600Z · LW(p) · GW(p)

There was a point in history where atheism and antitheism was highly correlated with intelligence - as in Voltaire's day - but intelligence was not at all correlated with atheism or antitheism.

You keep using this word "correlated". I do not think it means what you think it means.

Namely, if A is positively correlated with B, then B is positively correlated with A. B does not have to happen the majority of times A happens for this to be the case.

Replies from: Annoyance
comment by Annoyance · 2009-06-22T21:07:40.330Z · LW(p) · GW(p)

Namely, if A is positively correlated with B, then B is positively correlated with A.

I said highly correlated. A corr B means B corr A, but the strength of one correlation doesn't have anything to do with the strength of the other.

Replies from: orthonormal, Cyan
comment by orthonormal · 2009-06-22T21:22:52.538Z · LW(p) · GW(p)

No, you said, as I quoted, that intelligence was not at all correlated with atheism, despite atheism being highly correlated with intelligence. This is uncontroversially and trivially impossible; if p(A|B)≠p(A) where p(A) and p(B) are positive, then p(B|A)≠p(B).

A corr B means B corr A, but the strength of one correlation doesn't have anything to do with the strength of the other.

The coefficient of correlation between A and B is the same as the coefficient of correlation between B and A, so this is false. I believe you mean, rather, that having a positive test for a rare disease can still leave you less than 50% likely to have the disease, while having the disease makes you very likely to test positive for it. However, the correlation is still strong in both directions: your chance of having the disease has jumped from "ridiculously unlikely" to just "unlikely" given that positive test.
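The disease-test example can be made concrete with made-up numbers (a 1% base rate, 95% sensitivity, 90% specificity - all hypothetical): a random positive probably does not have the disease, yet the correlation is real and symmetric.

```python
# Hypothetical rates for a rare disease D and a test T.
p_d = 0.01                # base rate, P(D)
p_pos_given_d = 0.95      # sensitivity, P(T | D)
p_pos_given_not_d = 0.10  # false-positive rate, 1 - specificity

# Total probability and Bayes' theorem.
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)
p_d_given_pos = p_pos_given_d * p_d / p_pos

# P(D | T) is under 10% - a random positive probably lacks the disease -
# yet it is roughly nine times the 1% base rate.
def phi(p_a, p_b, p_ab):
    """Pearson correlation of two binary variables from their marginals and joint."""
    cov = p_ab - p_a * p_b
    return cov / ((p_a * (1 - p_a)) ** 0.5 * (p_b * (1 - p_b)) ** 0.5)

p_both = p_pos_given_d * p_d
corr_dt = phi(p_d, p_pos, p_both)
corr_td = phi(p_pos, p_d, p_both)  # identical: correlation has no direction
```

The asymmetry Annoyance is gesturing at lives entirely in the conditional probabilities P(D | T) vs. P(T | D); the correlation coefficient itself is the same in both directions.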

Replies from: Annoyance
comment by Annoyance · 2009-06-22T21:45:44.644Z · LW(p) · GW(p)

The coefficient of correlation between A and B is the same as the coefficient of correlation between B and A, so this is false.

No, it's not false. The vast majority of intelligent people - educated, knowledgeable people - were once theists of one sort or another. The fact that significantly more of them were atheistic/antitheistic than the general population does not change that choosing one at random was still grossly unlikely to produce an AT/AnT.

If you continue to apply a mathematical model that is not being referenced in this context by my use of language, I'm going to become annoyed with you.

Replies from: orthonormal
comment by orthonormal · 2009-06-22T22:27:04.803Z · LW(p) · GW(p)

The vast majority of intelligent people - educated, knowledgeable people - were once theists of one sort or another. The fact that significantly more of them were atheistic/antitheistic than the general population does not change that choosing one at random was still grossly unlikely to produce an AT/AnT.

So, in other words, you mean precisely what Cyan and I had assumed you meant, but you refuse to acknowledge that the word "correlation" has an unambiguous and universal meaning that differs greatly from your usage of it; if you persist in this, you will misinterpret correlation to mean implication where it does not.

For example, smoking is correlated with lung cancer, but a randomly chosen smoker probably does not have lung cancer.

I don't know what else to say on this topic, other than that this is not a case of you being contrarian: you are simply wrong, and you should do yourself the favor of admitting it.

ETA: I'm going to leave this thread now, as the delicious irony of catching Annoyance in a tangential error is not a worthy feeling for a rationalist to pursue.

Replies from: Annoyance
comment by Annoyance · 2009-06-23T16:22:41.330Z · LW(p) · GW(p)

you refuse to acknowledge that the word "correlation" has an unambiguous and universal meaning

It's not universal. The general language use has a meaning that isn't the same as the statistical. That domain-specific definition does not apply outside statistics.

You are simply wrong.

comment by Cyan · 2009-06-22T21:10:36.336Z · LW(p) · GW(p)

If you mean statistical correlation, then corr(x,y) = corr(y,x). I think you mean something more like implication, e.g., your claim is that at one time in the past, atheist implied intelligent but intelligent did not imply atheist.

Replies from: Annoyance
comment by Annoyance · 2009-06-22T21:13:18.542Z · LW(p) · GW(p)

If the correlation is sufficiently small, it can be lower than the error rate in detecting it.

And though the two concepts are distinct, in this context they're the same. Implication and statistical correlation can be the same when what's implied is a likelihood instead of a certainty.

Replies from: Cyan
comment by Cyan · 2009-06-22T23:06:18.394Z · LW(p) · GW(p)

Implication and statistical correlation can be the same when what's implied is a likelihood instead of a certainty.

I can't tell if I disagree with you in a substantive way or just in your word usage (i.e., semantics). Can you please translate this assertion into math?