Transhumanist Nationalism and AI Politics
post by jacob_cannell · 2015-04-11T18:39:42.133Z
In this article on the looming global AI arms race, Zoltan Istvan writes:
As the 2016 US Presidential candidate for the Transhumanist Party, I don't mind going out on a limb and saying the obvious: I also want AI to belong exclusively to America. Of course, I would hope to share the nonmilitary benefits and wisdom of a superintelligence with the world, as America has done for much of the last century with its groundbreaking innovation and technology. But can you imagine for a moment if AI was developed and launched in, let's say, North Korea, or Iran, or increasingly authoritarian Russia? What if another national power told that superintelligence to break all the secret codes and classified material that America's CIA and NSA use for national security? What if this superintelligence was told to hack into the mainframe computers tied to nuclear warheads, drones, and other dangerous weaponry? What if that superintelligence was told to override all traffic lights, power grids, and water treatment plants in Europe? Or Asia? Or everywhere in the world except for its own country? The possible danger is overwhelming.
Now, to some extent I expect many Americans, on reflection, would at least partly agree with the above statement - and that should be concerning.
Consider the issue from the perspective of Russian, Chinese (or really any foreign) readers with similar levels of national pride.
An equivalent, positionally reflected statement from a foreign perspective might read like this:
I also want AI to belong exclusively to China. Of course, I would hope to share the nonmilitary benefits and wisdom of a superintelligence with the world, as China has done for much of this century with its groundbreaking innovation and technology. But can you imagine for a moment if AI was developed and launched by, let's say, the US NSA, or Israel, or India? ...
On a related note, there was an interesting panel recently with Robin Li (CEO of Baidu), Bill Gates, and Elon Musk. They spent a little time discussing AI superintelligence. Robin Li mentioned that his new head of research, Andrew Ng, doesn't believe superintelligence is an immediate threat. In particular, Ng said: "Worrying about AI risk now is like worrying about overpopulation on Mars." Li also mentioned that he has been advocating for a large Chinese government investment in AI.
16 comments
comment by [deleted] · 2015-04-12T02:30:31.272Z
This perspective puzzled me for a moment. It puzzled me not because Istvan is necessarily wrong, but because his concerns seem so irrelevant. For me, the sentence "superintelligence will belong to the US" takes a while to parse because it doesn't even type-check. Superintelligence will be enough of a game-changer that nations will mean something very different from what they do now, if they exist at all.
Istvan seems like someone modeling the Internet by thinking of a postal system, and then imagining it running really really fast.
Now, a more charitable reading would interpret his AI as some sort of superhuman but non-FOOMed tool AI, in which case his concerns make a bit more sense. But even then, this seems pretty much irrelevant. The US couldn't keep nuclear secrets from the Russians in the 1950s, and that was before the Internet.
↑ comment by dxu · 2015-04-12T03:00:56.051Z
Agree. In particular, this passage here
What if another national power told that superintelligence to break all the secret codes and classified material that America's CIA and NSA use for national security? What if this superintelligence was told to hack into the mainframe computers tied to nuclear warheads, drones, and other dangerous weaponry? What if that superintelligence was told to override all traffic lights, power grids, and water treatment plants in Europe? Or Asia? Or everywhere in the world except for its own country?
makes me think Istvan really doesn't understand what a "superintelligence" (or an "intelligence") is.
↑ comment by Kaj_Sotala · 2015-04-12T15:23:03.983Z
To give him the benefit of the doubt, he might be choosing his arguments based on what he expects his readers to understand. Skimming some of the comments on that article suggests that even this simplified example might have been at excessive inferential distance for some readers.
The "let's hope the first superintelligent belongs to the US" could be steelmanned as "let's hope that the values of the first superintelligence are based on those of Americans rather than the Chinese", which seems reasonable given that there's no guarantee that people from different cultural groups would have compatible values. (Of course, this still leaves the problem that I'd expect there to be plenty of people even within the US who had incompatible values...)
↑ comment by Viliam · 2015-04-13T12:45:02.190Z
For me, the sentence "superintelligence will belong to the US" takes a while to parse because it doesn't even type-check.
It means when the superintelligence starts converting people to paperclips, for sentimental reasons the Americans will be the last ones converted.
Of course, unless it conflicts with some more important objective, such as making more paperclips.
comment by Vaniver · 2015-04-11T20:34:00.435Z
I believe the common opinion of Zoltan Istvan is that he's mostly interested in self-promotion, and so I am not surprised that he is emphasizing the more contentious possibilities.
↑ comment by eternal_neophyte · 2015-04-13T03:47:26.376Z
That's not really a fertile direction of criticism. Whether or not he's engaging in self-promoting provocation doesn't affect the validity of his position. Whether the USA can be trusted as the sole custodian of a superintelligent AI is, however, an interesting question, since American exceptionalism appears to be in decline.
↑ comment by Normal_Anomaly · 2015-04-13T20:31:26.985Z
IAWYC, but disagree on the last sentence: it's not an interesting question because it's a wrong question. Superintelligent AI can't have a "custodian". Geopolitics of non-superintelligent AI that is smarter than a human but won't FOOM is a completely different question, probably best debated by people who speculate about cyberwarfare since it's more their field.
↑ comment by eternal_neophyte · 2015-04-13T21:36:45.888Z
"non-superintelligent AI that is smarter than a human but won't FOOM" ...is most likely a better framing of the issue. I nevertheless think a fooming AI could be owned, so long as we have some channels of control open. That the creation or maintenance of such channels would be difficult doesn't render the idea impossible in theory.
comment by dxu · 2015-04-12T03:08:55.281Z
Istvan appears to be treating the issue of AI largely in the same manner that US politicians treated the space race back in the 1960s: as a competition between nations to see who can do it "first". I think it should be obvious to most LW readers that such an attitude is... problematic.
↑ comment by SanguineEmpiricist · 2015-04-12T05:50:25.324Z
The space race led to a lot of technology, right? Might as well send the money in a good direction.
comment by Shmi (shminux) · 2015-04-11T20:37:36.407Z
What else do you expect a "US Presidential candidate" to say? Also, the guy appears to have more ego than smarts.
comment by passive_fist · 2015-04-12T22:53:39.638Z
As usual, politics is the mind-killer, and being a member of the 'transhumanist' party does not make you exempt from this. I fail to see why it matters which country superintelligence is first developed in, beyond empty nationalistic sentiment.
Everything we have learned from thinking about FAI is that the distinction between friendly/unfriendly AI goes far deeper than the intentions of its creators. The most moral person could wind up developing the most evil UFAI.
Now, to some extent I expect many Americans, on reflection, would at least partly agree with the above statement - and that should be concerning.
I agree.
comment by Normal_Anomaly · 2015-04-13T20:24:58.455Z
My reaction to the first quoted statement was a big "Huh?". The only reason it would matter where superintelligent AI is first developed is that the researchers in different countries might do friendliness more or less well. A UFAI is equally catastrophic no matter who builds it; an AI that is otherwise friendly but has a preference for one country would... what would that even mean? Create eutopia and label it "The United Galaxy of America"? Only take the CEV of Americans instead of everybody? Either way, getting friendliness right means national politics is probably no longer an issue.
Also: I did not vote for this guy in the Transhumanist Party primaries!
comment by Teerth Aloke · 2019-03-06T03:13:26.827Z
I wouldn't trust the USA with a superintelligence, after its conduct in Korea, the nuclear attack on Japan, the bombardment of civilians in Vietnam, and the blockade of Iraq in the 1990s that killed over half a million people. Not to forget the interference in the 1970s and 1980s that installed a large number of military dictatorships, or the support for Pakistan's genocide of Bangladeshis in 1971 (the USA nearly intervened in the war to save the Islamist military dictatorship from the freedom fighters and the secular democratic government of India).
comment by RedMan · 2015-04-18T16:47:55.239Z
What dangerous things can a malign AI superintelligence do that a large enough group of humans with sufficient motivation cannot? All the "horrible threats" listed are things that are well within the ability of large organizations that exist today. So why would an "AI superintelligence" able to execute those actions on its own, or at the direction of its human masters, be more of a problem than the status quo?
comment by [deleted] · 2015-04-13T08:06:49.991Z
I don't really trust America either, but our preferences don't matter much. It will be either an international consortium or America; Russia and others don't really have a big enough equivalent of the Valley for this.