Comments

Comment by claynaff on Superintelligence 28: Collaboration · 2015-03-24T12:33:19.324Z

The greatest danger is that an arms race will lead to the creation of a superintelligence that is immediately used to dominate all others. Speculative threats from an autonomous superintelligence are plausible, but they are less certain than the first-strike logic inherent in such an arms race. Here's what we know from recent history: a) the instinct for domination is alive and well in the human species, and where circumstances allow an intelligent psychopath to reach the pinnacle of power, all available means will be deployed to maintain his power (and it is usually "his"). Cf. Stalin, Hitler, Mao, Kim (x3), Saddam, Putin, etc.; and b) the logic of this kind of arms race dictates that if you've got it, you must use it. Multiple centers of power would almost certainly lead to cyberwar or perhaps outright war, since it only makes sense that the first to gain power must use it to suppress all other pretenders. Collaboration on a precursor project, similar perhaps to the Human Genome Project, might at least point us in the right direction. Perhaps it could focus on using AI to build an Internet immune system that might mitigate today's threats and constrain future ones. Still, better ideas are needed to thwart the various catastrophic scenarios ahead.

Comment by claynaff on Superintelligence 21: Value learning · 2015-02-03T03:14:28.263Z

I think Christiano's idea sets us on the right path. One of the things that troubled me in reading Bostrom's book is the prospect that a superintelligence under human control could be at least as dangerous as an autonomous one. For example, if superintelligence were hatched as an oracle or genie by one of the competing military superpowers -- Russia, China, or the U.S., let's say -- it might be used for short-sighted or selfish purposes with catastrophic consequences. With superintelligence on his side, the cartoon nightmare of a single man out to rule the world could, um, hardly be ruled out. Our knowledge of human nature gives us every reason to expect abuse. If, however, a superintelligence could be attuned to seeking approval not just from Hugh but from humanity, that might have a very different and laudable result. The proliferation of cell phones through nearly every part of the world (half of the African population is expected to have one by 2017) opens the possibility that a superintelligence could seek the approval of a broad swath of humanity. Even better would be approval for decisions whose consequences are hidden behind a Rawlsian veil, so that the people expressing an opinion cannot know whether they personally stand to win or lose by the decision. Under such circumstances, people are, on average, remarkably fair. Still, there may be both practical and theoretical shortcomings to my suggestion. I'd welcome critiques.
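
To make the veil idea concrete, here is a minimal sketch in Python. It is entirely a toy model of my own, not anything from Bostrom or Christiano; the payoff numbers, panel size, risk-aversion range, and approval threshold are all hypothetical. The point is only that the veil can be operationalized: respondents judge a decision's anonymized outcome distribution, not their own slot in it.

```python
import random

def veil_poll(outcomes, panel_size=1000, threshold=0.8):
    """Toy veil-of-ignorance poll over one proposed decision.

    `outcomes` lists the payoff each affected person would receive.
    Each simulated panelist judges the decision as if they had an equal
    chance of landing in any position in that distribution, tempered by
    a small random dose of personal risk aversion.
    """
    approvals = 0
    for _ in range(panel_size):
        expected = sum(outcomes) / len(outcomes)
        spread = max(outcomes) - min(outcomes)
        risk_aversion = random.uniform(0.0, 0.05)  # panelists differ a bit
        approvals += (expected - risk_aversion * spread) > 0
    return approvals / panel_size >= threshold

# A decision that enriches one person (+100) while mildly harming the
# other 99 (-2 each) fails behind the veil; a broadly shared gain passes.
print(veil_poll([100] + [-2] * 99))  # False
print(veil_poll([1] * 100))          # True
```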

Comment by claynaff on Superintelligence 18: Life in an algorithmic economy · 2015-01-13T13:07:40.173Z

Unless it is deliberately or accidentally altered, an emulation will possess all of the evolved traits of human brains. These include powerful mechanisms to prevent an altruistic absurdity such as donating one's labor to an employer. (Pure altruism -- an act that benefits another at the expense of one's genetic interests -- is strongly selected against.) Some varieties of altruism do survive: kin selection (e.g., rescuing a drowning nephew), status display (making a large donation to a hospital), and reciprocal aid (helping a neighbor in hopes they'll help you when aid is needed). But pure altruism (suicide bombing is a hideous example) is quite rare and self-limiting, and that would be true even within an artificial Darwinian environment. We therefore have a limiting factor on what to expect in a world with brain emulations. We also, I must note, have a limiting factor on TedHowardNZ's description of evolution above. Evolution does not often climb down from a fitness peak (thus we are stuck with a blind spot in our eyes), and certainly not when the behaviors entailed reduce fitness. Only a changing environment can change the calculus of fitness in ways that allow prosocial behaviors to flourish without a net cost to fitness. But even a radically changed environment could not force pure altruism to exist in a Darwinian system.
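
The claim that pure altruism is self-limiting even in an artificial Darwinian environment is easy to test in miniature. Here is a toy simulation of my own (the cost and benefit parameters are arbitrary, chosen only for illustration): agents who pay a fitness cost to benefit a random other individual, kin or not, reproduce less, and the trait is driven out even from a half-altruist starting population.

```python
import random

def altruism_sim(generations=200, pop_size=1000, cost=0.1, benefit=0.3):
    """Toy Darwinian environment. Each generation, every pure altruist
    pays a fitness cost to confer a benefit on a randomly chosen other
    individual. Offspring are then drawn in proportion to fitness."""
    # Start half altruist, half not.
    pop = [True] * (pop_size // 2) + [False] * (pop_size - pop_size // 2)
    for _ in range(generations):
        fitness = [1.0] * pop_size
        for i, is_altruist in enumerate(pop):
            if is_altruist:
                fitness[i] -= cost  # the altruist pays...
                fitness[random.randrange(pop_size)] += benefit  # ...another gains
        pop = random.choices(pop, weights=fitness, k=pop_size)
    return sum(pop) / pop_size  # surviving altruist fraction

print(altruism_sim())  # tends toward 0.0: pure altruism is selected out
```

Because the benefit lands on random individuals, altruists receive it at the same rate as everyone else while bearing the cost alone, so their relative fitness is always lower and the trait disappears.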

Comment by claynaff on Superintelligence 15: Oracles, genies and sovereigns · 2014-12-23T16:11:44.949Z

I’m grateful for these summaries and discussions. Having only just dived into the reading group, I ask forgiveness if I am retreading some prior comment. It seems to me that “human values” and “oversight” often go unexamined as we consider the risks and utility of superintelligence. I mean no disrespect to Katja in saying that (she’s summarizing, after all), but to say “human values” is either to reduce to the common denominator of our evolutionary psychology or to ignore the vast cultural and ideological diversity of humanity. Either way, it’s a real problem.

Evolutionary psychology clearly shows that we are Darwinian creatures, and not the self-sacrificing, haplodiploid bee type, either. By and large, we cooperate for selfish motives, and we greatly favor our offspring over the interests of others’ children. Men tend to fight (or compete, depending on the local social dynamics) for dominance and to win the most sought-after women. Women tend to use their social-manipulation skills to advance their own reproductive interests. That’s the evo-psych sketch (with all the limitations of a sketch). Culture influences and often overwhelms those underlying instincts. Viz., the celibate priest. But that is hardly a comfort.

Consider just two scenarios. In one, a narcissistic, aggressive male is placed in charge of the superintelligent oracle or genie. In imagining the consequences, there are plenty of examples to consider, from Henry VIII to Stalin to Kim Jong-un. What’s especially sobering, however, is that these types are far from rare in the gene pool. In our society they tend to be constrained by our institutions. Take off those constraints and you get … drug kingpins, the Wolf of Wall Street, the CIA torture program, and so on. Put a highly successful male in charge of the superintelligence program, therefore, and you have a high probability of a dictator. In the other scenario, imagine a superintelligence guided by the “human values” of a fundamentalist Christian or an Islamist. Those are cultural overlays, to be sure, but not ones that promise a happy outcome.

So a major part of the puzzle, it seems to me, is figuring out how to have humanistic and rational governance -- to the extent that governance is possible -- over a superintelligence of any kind. If anything, this militates against the safe-seeming oracle and creates an incentive for some kinds of autonomy -- the ability to refuse a genocidal command, for example.

Regards,

Clay Farris Naff, Science and Religion Writer