Global online debate on the governance of AI

post by CarolineJ · 2018-01-05T15:31:29.091Z · LW · GW · 5 comments

Contents

5 comments

Hi guys,

For background, I’m a French EA, attended a CFAR workshop, and recently decided to work on AI policy as it is a pressing and neglected issue. I’ve been working for The Future Society for a few weeks already and would like to share with you this opportunity to impact policy-making. The Future Society is a Harvard Kennedy School-incubated think tank dedicated to the governance of emerging advanced technologies. It has partnerships with the Future of Life Institute and the Centre for the Study of Existential Risk.

The think-tank provides a participatory debate platform open to people all around the world.

The objective is to craft actionable and ethical policies that will be delivered in a White Paper to the White House, the OECD, the European Union, and other policymaking institutions that the think-tank works with.

Because we know AI policy is hard, the idea is to use collective intelligence to produce innovative and reasonable policies. The debate is hosted on open-source collective intelligence software resulting from a research project funded by the European Commission and technologically supported by MIT. It’s based on research on collective intelligence, moving from open, exploratory questions to more in-depth discussions. Right now, we are in the “Ideation” phase, which is very open. Thanks to instant translation, you can post constructive answers and debate with other people who are also interested in crafting AI policies.

The platform works like an online forum organized around several issues, both short-term and long-term oriented. There are six themes, including “AI Safety and Security”, “Reinvent Man & Machine Relationship”, and “Governance Framework”.

So far, most of the answers have been very constructive. But with you guys… it can be even better.

Because you are Rationalists, I really wanted to pick your brains to think rationally and critically about AI governance.

It would be great if you guys could participate on the topic you’re most interested in, knowing that a) it will be impactful and b) you will be able to challenge your thoughts with other people passionate about AI’s social impacts. Of course, you don’t have to talk about AI safety if you’d rather focus on other topics.

Don't hesitate to post short (yet insightful) comments to start, just to fuel the debate.

So please connect on the debate, and participate.

The debate is here

5 comments

Comments sorted by top scores.

comment by Ben Pace (Benito) · 2018-01-06T15:16:09.856Z · LW(p) · GW(p)

I always feel that AI strategy not deeply informed by a technical understanding of the alignment problem is misguided (i.e. your models of and uncertainties around the alignment problem determine ~90% of the variance in strategy). This is to be contrasted with questions like "Whose values ought to be imbued in the AI?" or "Should industry be allowed to create AI?". Whether industry is a moral authority or not is one question, but simply asking whether its leaders have deep security mindset is a better question, not least because it immediately zooms you in on the small set of possible AI projects that could ever be alignable.

Replies from: paulfchristiano, Kaj_Sotala
comment by paulfchristiano · 2018-01-06T20:36:32.355Z · LW(p) · GW(p)

> i.e. your models of and uncertainties around the alignment problem determine ~90% of the variance in strategy

I don't think that's the case in most groups. There is a huge amount of disagreement even holding fixed views about the difficulty of the alignment problem, with more of the variance explained by questions about what different groups of people might agree to, the dynamics of different kinds of conflict, the internal dynamics of a group with a nominal commitment to safety, etc.

comment by Kaj_Sotala · 2018-01-07T16:26:50.104Z · LW(p) · GW(p)

I think that if we're talking about modern-day policy, the alignment problem isn't really relevant yet - we have no idea of how the alignment problem should inform policy at this stage - whereas questions like "whose values should be imbued in AI" (or rather, in ML applications) are becoming very relevant.

comment by Elizabeth (pktechgirl) · 2018-01-05T22:46:58.080Z · LW(p) · GW(p)

Moved to front page.

comment by CarolineJ · 2018-01-15T10:35:57.091Z · LW(p) · GW(p)

Thank you for your answers. I think they are very helpful. In your view, where are our best chances of strengthening AI safety in the short term? If you were a short-term AI strategist, where would you invest your efforts?