[Link] - Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions

post by plex (ete) · 2015-01-28T15:29:07.226Z · LW · GW · Legacy · 8 comments

From a paper by the Center for Technology and National Security Policy at National Defense University:

"Strong AI: Strong AI has been the holy grail of artificial intelligence research for decades. Strong AI seeks to build a machine which can simulate the full range of human cognition, and potentially include such traits as consciousness, sentience, sapience, and self-awareness. No AI system has so far come close to these capabilities; however, many now believe that strong AI may be achieved sometime in the 2020s. Several technological advances are fostering this optimism; for example, computer processors will likely reach the computational power of the human brain sometime in the 2020s (the so-called “singularity”). Other fundamental advances are in development, including exotic/dynamic processor architectures, full brain simulations, neuro-synaptic computers, and general knowledge representation systems such as IBM Watson. It is difficult to fully predict what such profound improvements in artificial cognition could imply; however, some credible thinkers have already posited a variety of potential risks related to loss of control of aspects of the physical world by human beings. For example, a 2013 report commissioned by the United Nations has called for a worldwide moratorium on the development and use of autonomous robotic weapons systems until international rules can be developed for their use.

National Security Implications: Over the next 10 to 20 years, robotics and AI will continue to make significant improvements across a broad range of technology applications of relevance to the U.S. military. Unmanned vehicles will continue to increase in sophistication and numbers, both on the battlefield and in supporting missions. Robotic systems can also play a wider range of roles in automating routine tasks, for example in logistics and administrative work. Telemedicine, robotic assisted surgery, and expert systems can improve military health care and lower costs. The built infrastructure, for example, can be managed more effectively with embedded systems, saving energy and other resources. Increasingly sophisticated weak AI tools can offload much of the routine cognitive or decisionmaking tasks that currently require human operators. Assuming current systems move closer to strong AI capabilities, they could also play a larger and more significant role in problem solving, perhaps even for strategy development or operational planning. In the longer term, fully robotic soldiers may be developed and deployed, particularly by wealthier countries, although the political and social ramifications of such systems will likely be significant. One negative aspect of these trends, however, lies in the risks that are possible due to unforeseen vulnerabilities that may arise from the large scale deployment of smart automated systems, for which there is little practical experience. An emerging risk is the ability of small scale or terrorist groups to design and build functionally capable unmanned systems which could perform a variety of hostile missions."

So strong AI is on the American military's radar, and at least some of those involved have a basic understanding that it could be risky. The paper also contains brief overviews of many other potentially transformational technologies.

8 comments


comment by JoshuaZ · 2015-01-28T15:54:30.277Z · LW(p) · GW(p)

I haven't read the entire piece, but from your excerpt and a quick skim it looks like they are focusing on AI going wrong in prosaic ways, like hacking the AI to change what it should do, rather than on a strong AI having problems all on its own. I don't know how much of that is because this is just an overview piece that discusses a variety of technological risks.

Replies from: ete
comment by plex (ete) · 2015-01-28T17:02:04.403Z · LW(p) · GW(p)

Yes, that does seem like the primary focus. However, they cite this article about Stephen Wolfram when saying "It is difficult to fully predict what such profound improvements in artificial cognition could imply; however, some credible thinkers have already posited a variety of potential risks related to loss of control of aspects of the physical world by human beings," which suggests that the researchers are at least aware of the wider risks around creating AGI, even if they choose not to focus on them.

comment by MugaSofer · 2015-01-29T13:08:14.533Z · LW(p) · GW(p)

"many now believe that strong AI may be achieved sometime in the 2020s"

Yikes, but that's early. That's a lot sooner than I would have said, even as a reasonable lower bound.

Replies from: None
comment by [deleted] · 2015-01-29T21:25:01.770Z · LW(p) · GW(p)

You have a credible reason for thinking it will take longer?

Replies from: MugaSofer
comment by MugaSofer · 2015-01-29T22:21:01.852Z · LW(p) · GW(p)

I'm no expert, but even Kurzweil - who, from past performance, is usually correct but over-optimistic by maybe five, ten years - doesn't expect us to beat the Turing Test until (checks) 2030, with full-on singularity hitting in 2045.

2020 is in five years. The kind of progress that would seem to imply - from where we are now to full-on human-level AI in just five years - seems incredible.

Replies from: None
comment by [deleted] · 2015-01-30T15:09:06.935Z · LW(p) · GW(p)

Kurzweil's methodology for selecting those dates is suspect. He calculated, using back-of-the-envelope whole-brain-emulation estimates, the number of FLOPS he thinks it would take to run a human-level AGI. The most powerful supercomputer today, Tianhe-2 in China, already exceeds this level. So by Kurzweil's own estimates, the human race has access to enough computing power to run even an inefficient emulative AGI today. The years he quotes are when that computing power would be available for $1,000 USD. If you believe in a takeoff scenario, however, what matters is when the first AGI is created, not how much it costs to buy the equipment to run another one.
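For rough context, here is the arithmetic behind that comparison as a minimal sketch. The figures are my own assumptions, not from the paper or the comment above: Kurzweil's published estimate for a functional simulation of the human brain is on the order of 10^16 calculations per second, and Tianhe-2's reported LINPACK benchmark around this time was roughly 33.86 petaFLOPS.

```python
# Back-of-the-envelope comparison (assumed figures, not taken from the paper or comment):
# - Kurzweil's estimate for a functional brain simulation: ~1e16 calculations/sec
#   (his neuron-level simulation estimate is higher, ~1e19)
# - Tianhe-2 reported LINPACK performance: ~33.86 petaFLOPS

kurzweil_functional_sim_flops = 1e16   # assumed order-of-magnitude figure
tianhe2_linpack_flops = 33.86e15       # publicly reported benchmark result

ratio = tianhe2_linpack_flops / kurzweil_functional_sim_flops
print(f"Tianhe-2 vs. Kurzweil's functional-simulation estimate: {ratio:.1f}x")
# Prints roughly 3.4x, i.e. by this estimate the raw hardware threshold has already
# been crossed; the $1,000 milestone Kurzweil quotes is about price, not feasibility.
```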

So we have sufficient computational power today to run an artificial general intelligence. The problem, then, is software. How long will it take to write the software underlying the first AGI? And for whatever value you claim, do you have credible reasoning underlying that choice?

Personally I think 5 years is a bit fast. But the quote was 2020s, the midpoint of which is still 10 years away. I think 10 years is doable if we really, really try. What's your estimate?

comment by NxGenSentience · 2015-01-28T22:24:13.186Z · LW(p) · GW(p)

Thanks for posting this link, and for the auxiliary comments. I try to follow these issues, as viewed from this sector of thinkers, pretty closely (the website Defense One often has some good articles, and their tech reporter Patrick Tucker touches on some of these issues fairly often). But I had missed this paper until now. Grateful, as I say, for your posting of it.

Replies from: ete
comment by plex (ete) · 2015-01-29T03:26:43.684Z · LW(p) · GW(p)

Glad it's of interest to you. I found it while checking the sources of this Motherboard article. There's another document linked from there, which you may or may not have seen; it lacks any mention of strong AI, focusing instead on automating war in general.

After reading that, I feel that in the near future it will be much easier to justify concrete AI takeover mechanisms to the public.