Barack Obama's opinions on near-future AI [Fixed]

post by scarcegreengrass · 2016-10-12T15:46:44.334Z · score: 5 (5 votes) · LW · GW · Legacy · 11 comments

This is a link post for


Comments sorted by top scores.

comment by Lightwave · 2016-10-12T16:48:07.506Z · score: 7 (7 votes) · LW(p) · GW(p)

This is also interesting: Barack Obama on Artificial Intelligence, Autonomous Cars, and the Future of Humanity

comment by scarcegreengrass · 2016-10-13T11:57:11.354Z · score: 3 (3 votes) · LW(p) · GW(p)

Oh, this is much more complete, thanks.

Wow, it's surreal to hear Obama talking about Bostrom, Foom, and biological x risk.

comment by DanArmak · 2016-10-13T23:19:20.108Z · score: 5 (7 votes) · LW(p) · GW(p)

Joi Ito said several things that are unpleasant but are probably believed by most people, and so I am glad for the reminder.

JOI ITO: This may upset some of my students at MIT, but one of my concerns is that it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.

Yes, you would expect non-white, older women who are less comfortable talking to computers to be better suited to dealing with AI friendliness! Their life experience of structural oppression helps them formally encode morals!

ITO: [Temple Grandin] says that Mozart and Einstein and Tesla would all be considered autistic if they were alive today. [...] Even though you probably wouldn’t want Einstein as your kid, saying “OK, I just want a normal kid” is not gonna lead to maximum societal benefit.

I should probably get a daily reminder that most people would not, in fact, want their kid to be as smart, impactful, and successful in life as Einstein, and prefer "normal", not-too-much-above-average kids.

comment by skeptical_lurker · 2016-10-15T13:08:49.689Z · score: 1 (3 votes) · LW(p) · GW(p)

Time to Godwin myself:

1930s Germany: The problem with relativity is that it's developed by Jews. We need an ethnically pure physics.

2010s USA: The problem with AI is that it's developed by white men. We need an ethnically diverse compsci.

comment by scarcegreengrass · 2016-10-14T14:11:15.460Z · score: 1 (1 votes) · LW(p) · GW(p)

Both of those Ito remarks referenced supposedly widespread perspectives. But personally, I have almost never encountered these perspectives before.

comment by turchin · 2016-10-14T15:58:53.129Z · score: 2 (2 votes) · LW(p) · GW(p)

The White House also released a PDF with concrete recommendations:

Some interesting lines:

Recommendation 13: The Federal government should prioritize basic and long-term AI research. The Nation as a whole would benefit from a steady increase in Federal and private-sector AI R&D, with a particular emphasis on basic research and long-term, high-risk research initiatives. Because basic and long-term research especially are areas where the private sector is not likely to invest, Federal investments will be important for R&D in these areas.

Recommendation 18: Schools and universities should include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science.

comment by scarcegreengrass · 2016-10-17T16:00:18.377Z · score: 0 (0 votes) · LW(p) · GW(p)

Wow, very surprising! 13 sounds very MIRI-ish.

comment by skeptical_lurker · 2016-10-15T13:20:04.935Z · score: 1 (1 votes) · LW(p) · GW(p)

Then there could be an algorithm that said, “Go penetrate the nuclear codes and figure out how to launch some missiles.” If that’s its only job, if it’s self-teaching and it’s just a really effective algorithm, then you’ve got problems. I think my directive to my national security team is, don’t worry as much yet about machines taking over the world. Worry about the capacity of either nonstate actors or hostile actors to penetrate systems, and in that sense it is not conceptually different than a lot of the cybersecurity work we’re doing.

Please tell me this isn't an actual possibility. Surely nuclear launch must rely on multi-factor authentication with one-time-pads and code phrases in sealed, physical envelopes. A brain the size of a planet could not break a one-time pad. I know a superhuman AI could probably hack the net, but please tell me that nuclear missiles are not connected to the internet.
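The "brain the size of a planet" claim is information-theoretic rather than computational: with a truly random pad used once, every plaintext of the same length is equally consistent with the observed ciphertext, so no amount of compute helps. A minimal Python sketch of that property (a toy illustration only, with made-up messages; it has nothing to do with real launch systems):

```python
import secrets

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    """One-time-pad encryption: XOR each byte with the pad byte."""
    assert len(pad) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, pad))

msg = b"LAUNCH"
pad = secrets.token_bytes(len(msg))  # truly random, used once
ct = otp_encrypt(msg, pad)

# Decryption is the same XOR with the same pad.
assert otp_encrypt(ct, pad) == msg

# Perfect secrecy: for ANY candidate plaintext of the same length,
# some pad maps it to the observed ciphertext, so the ciphertext
# alone tells an attacker nothing about which message was sent.
other = b"STANDB"
fake_pad = bytes(c ^ o for c, o in zip(ct, other))
assert otp_encrypt(ct, fake_pad) == other
```

The last two lines are the whole point: an eavesdropper holding only `ct` cannot distinguish `msg` from `other`, because both decryptions are possible under some pad.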

But... Obama must know the capacity of America's nuclear security. The best reason I can think of for him to raise this possibility is to confuse America's enemies into thinking that the nuclear weapons are not properly secured, so that they will attack the launch codes, which are actually secure, rather than attempting a lower-tech attack like another September 11.

comment by SithLord13 · 2016-10-17T12:34:57.605Z · score: 1 (1 votes) · LW(p) · GW(p)

I think the best reason for him to raise that possibility is to give a clear analogy. Nukes are undoubtedly air-gapped from the net, and there's no chance anyone with the capacity to penetrate them would think otherwise. It's just an easy-to-grasp way for him to present it to the public.

comment by scarcegreengrass · 2016-10-17T16:08:41.529Z · score: 0 (0 votes) · LW(p) · GW(p)

Well, security isn't really about the attack vectors you are aware of (trying to guess the one-time pad), it's about keeping an eye out for corner cases you are not yet aware of. An extremely sophisticated software system would be more likely to try avenues like causing a diplomatic crisis, manipulating people who have access to the codes, direct observation of the authentication data via specialized hardware, etc.

Also, yes, he was probably speaking informally / inaccurately.

comment by ChristianKl · 2016-10-13T13:10:09.470Z · score: 1 (5 votes) · LW(p) · GW(p)

tl;dr Obama doesn't really know what he's talking about, but tries to use talking points to make sense of the new project.