Centre for the Study of Existential Risk (CSER) at Cambridge makes headlines.
post by betterthanwell · 2012-11-26T20:56:00.183Z · 16 comments
As of an hour ago, I had not yet heard of the Centre for the Study of Existential Risk.
Luke announced it on Less Wrong when the University of Cambridge announced it to the world, back in April:
CSER at Cambridge University joins the others.
Good people involved so far, but the expected output depends hugely on who they pick to run the thing.
CSER is scheduled to launch next year.
Here is a small selection of CSER press coverage from the last two days:
http://www.bbc.co.uk/news/technology-20501091
http://www.guardian.co.uk/education/shortcuts/2012/nov/26/cambridge-university-terminator-studies
http://www.theregister.co.uk/2012/11/26/new_centre_human_extinction_risks/
http://www.slashgear.com/new-ai-think-tank-hopes-to-get-real-on-existential-risk-26258246/
http://www.techradar.com/news/world-of-tech/super-brains-to-guard-against-robot-apocalypse-1115293
http://slashdot.org/topic/bi/cambridge-university-vs-skynet/
http://www.businessinsider.com/researchers-robots-risk-human-civilization-2012-11
http://www.newscientist.com/article/dn22534-megarisks-that-could-drive-us-to-extinction.html
http://news.cnet.com/8301-11386_3-57553993-76/killer-robots-cambridge-brains-to-assess-ai-risk/
http://www.foxnews.com/tech/2012/11/26/terminator-center-to-open-at-cambridge-university/
Google News: All 119 news sources...
Here's an excerpt from one fairly typical story appearing today in the tech tabloid The Register (theregister.co.uk):
Cambridge boffins fear 'Pandora's Unboxing' and RISE of the MACHINES
Boffins at Cambridge University want to set up a new centre to determine what humankind will do when ultra-intelligent machines like the Terminator or HAL pose "extinction-level" risks to our species.
A philosopher, a scientist and a software engineer are proposing the creation of a Centre for the Study of Existential Risk (CSER) to analyse the ultimate risks to the future of mankind - including bio- and nanotech, extreme climate change, nuclear war and artificial intelligence.
Apart from the frequent portrayal of evil - or just misguidedly deadly - AI in science fiction, actual real scientists have also theorised that super-intelligent machines could be a danger to the human race.
Jaan Tallinn, the former software engineer who was one of the founders of Skype, has campaigned for serious discussion of the ethical and safety aspects of artificial general intelligence (AGI).
Tallinn has said that he sometimes feels he is more likely to die from an AI accident than from cancer or heart disease, CSER co-founder and philosopher Huw Price said.
[...]
Humanity’s last invention and our uncertain future
In 1965, Irving John ‘Jack’ Good sat down and wrote a paper for New Scientist called Speculations concerning the first ultra-intelligent machine. Good, a Cambridge-trained mathematician, Bletchley Park cryptographer, pioneering computer scientist and friend of Alan Turing, wrote that in the near future an ultra-intelligent machine would be built. [...]
16 comments
Comments sorted by top scores.
comment by Sean_o_h · 2012-11-27T21:11:30.833Z
Hi,
Let me introduce myself: I'm Sean and I work as project manager at FHI (finally got around to registering!). In posts here I won't be speaking on behalf of FHI unless I explicitly state so (although, like Stuart, I imagine I often will be). I'm not involved officially with CSER, but I'm in communication with them and hope to be keeping up to date with them over the coming months.
A few comments on your observations:
2) CSER have done a deliberate and well-orchestrated "media splash" campaign over the last week, but I believe they're finished with this now. They've got some big names involved and a good support structure in place in Cambridge, which helps.
3) My understanding is that CSER hasn't published anything yet because they don't exist yet in a practical sense - they've been founded but nobody's employed, and they're still gathering seed funding.
4) The Sunday Times article's a bit unfortunate, and the general feeling at FHI is that we're not too impressed by the journalist's work; but please note that the more "controversial" statements are the journalist's own thoughts (which isn't clear everywhere if you skim the article, as I did at first). CSER has some good people behind it, and at the time of writing the FHI plans to support it and collaborate with it where possible - we think it's a very positive development in the field of Xrisk. Even the term getting out there is a positive!
↑ comment by betterthanwell · 2012-11-27T22:46:32.369Z
Welcome, and thanks for the comments.
Even the term getting out there is a positive!
Agreed.
If journalism demands sticking to Hollywood references when communicating a concept, it wouldn't be so bad if journalists at least managed to understand and convey the distinction between:
- The wholly implausible, worse-than-useless "Terminator humanoid hunter-killer robot" scenario.
- The not-completely-far-fetched "Skynet launches every nuke, humanity dies" scenario.
↑ comment by RomeoStevens · 2012-11-28T01:44:15.424Z
I think it works as a hierarchy of increasingly complex models. Readers will stop at whichever rung they are comfortable with depending on their curiosity and background.
My real-life conversations on X-risk tend to go:
Terminator
Drones
Skynet
Specialized AI
General AI
Friendly AI
comment by Manfred · 2012-11-26T22:47:35.882Z
News stories in post: 16
Number with a picture from the movie series Terminator: 8 / 16
Number referencing Terminator in the text (some with a textual reference had no picture, and vice versa): 11 / 16
Popular but not as popular: HAL references.
News stories with no Terminator picture and no textual references to HAL or Arnold Schwarzenegger: 1 / 16, the New Scientist.
↑ comment by wuncidunci · 2012-11-26T23:20:18.758Z
To be fair, the Guardian story only references Terminator in the headline. The body text is written by Lord Martin Rees and is a short but clear description of X-risk without any sci-fi references. It also focuses more on other X-risks; perhaps a difference in opinion amongst the founders?
↑ comment by dbaupp · 2012-11-26T23:49:47.792Z
("Lord Martin Rees is a British cosmologist and astrophysicist. He has been Astronomer Royal since 1995 and Master of Trinity College, Cambridge since 2004. He was President of the Royal Society between 2005 and 2010". For anyone like me who didn't know.)
↑ comment by AlexMennen · 2012-11-27T01:14:26.309Z
Interesting; there is now a member of a national legislature who is publicly concerned about existential risk. I wonder if he's planning to try to use his political power to reduce x-risk. My guess: probably not. He appears to be rather more interested in science than in politics, and I'm not sure to what extent the average member of the House of Lords even has political power.
↑ comment by turchin · 2012-11-27T12:08:26.954Z
By the way, he wrote an excellent book on x-risks.
http://books.google.ru/books/about/The_End_of_the_World.html?id=CLvuO9_lDmwC&redir_esc=y
download: http://www.avturchin.narod.ru/Rees.doc
↑ comment by Sean_o_h · 2012-11-27T21:16:46.864Z
Tallinn and Price are very concerned with AI-related Xrisk. Martin Rees currently considers biological risks his no. 1 concern (which is not to say he's unconcerned by AI); he has famously offered bets on a major (~1 million deaths) bio-related catastrophe occurring in the coming years. http://online.wsj.com/article/SB124121965740478983.html
comment by IlyaShpitser · 2012-11-27T22:00:55.917Z
I remember a post by Hanson (can't seem to find the exact URL at the moment) where he said that academic big names are "risk averse", but if a long-shot topic becomes hot/fashionable, the big names simply move in on the innovators' turf and take over the topic.
comment by lukeprog · 2012-11-27T05:01:38.660Z
There were some serious errors in the coverage of this story in The Sunday Times (UK).
↑ comment by betterthanwell · 2012-11-27T11:30:42.379Z
Yudkowsky seemed to me simplistic in his understanding of moral norms. “You would not kill a baby,” he said to me, implying that was one norm that could easily be programmed into a machine.
“Some people do,” I pointed out, but he didn’t see the full significance. SS officers killed babies routinely because of an adjustment in the society from which they sprang in the form of Nazism. Machines would be much more radically adjusted away from human social norms, however we programmed them.
Wow. This seems an unlikely and even difficult mistake to make in good faith, as opposed to, say, through outright dishonesty.
Update: I told Appleyard of his mistake, and he simply denied that his article had made a mistake on this matter.
Never mind; it seems they don't even try to be honest.
comment by JoshuaFox · 2013-03-20T20:05:14.745Z
An article at CAM, the Cambridge alumni magazine. (H/T my wife, who gets it in hard copy).
Nothing too new, but it is good to see the basic AI x-risk concepts laid out with a minimum of snarkiness in a publication aimed at a closed, elite audience. I think that more reasonable ideas about AI x-risk are gaining social status.