[brainstorm] - What should the AGI risk community look like?

post by whpearson · 2017-05-27T13:00:07.341Z · LW · GW · Legacy · 14 comments

I've been thinking for a while about what I would like the AGI risk community to look like. I'm curious what your thoughts are.

I'll be posting my own ideas, but I encourage other people to post theirs.


comment by Raemon · 2017-05-27T20:21:27.583Z · LW(p) · GW(p)

Periodically, there are job openings for positions of power that seem like they might matter quite a bit - executive director of the Partnership on AI, person-in-charge-of-giving-out-government-grants-on-AI-stuff, etc. - and it seems like we have very few people who are remotely qualified for those positions and willing to move to Washington DC or work in a very corporate, traditionally-signalling environment.

I think we need people who are willing to do that, and to acquire the skills and background that would make them marketable for those positions.

Replies from: whpearson
comment by whpearson · 2017-05-27T22:08:27.792Z · LW(p) · GW(p)

Bits of government are changing. A palatable potential strategy would be to get in and make contacts by joining the USDS (US Digital Service). I suspect they will be trying to do some machine learning at some point (fraud/intrusion detection, if nothing else); if someone could position themselves as a subject-matter expert/technical architect on this (with conference talks on topics like AI in government), they might be in a good position.

comment by James_Miller · 2017-05-27T15:27:22.842Z · LW(p) · GW(p)

Members helping each other reach positions of power and influence so we can slightly reduce the probability of our light-cone being destroyed by a paperclip maximizer.

Replies from: whpearson
comment by whpearson · 2017-05-27T16:41:36.006Z · LW(p) · GW(p)

I think humans would naturally help each other in this way if they trust each other. Having it as a cultural norm seems like it would attract people who pay lip service to AGI risk and are just in it for the boost. Perhaps this would work if you had to invest a lot upfront.

I have a natural aversion to this. I'm curious what form you think this would take?

Replies from: James_Miller
comment by James_Miller · 2017-05-27T20:29:38.642Z · LW(p) · GW(p)

It would mostly work on a one-on-one basis where we use connections and personal knowledge to give advice and help.

comment by whpearson · 2017-05-27T13:06:48.285Z · LW(p) · GW(p)

An organization that is solution-agnostic (not a research institute; there are conflicts of interest there) and is dedicated to discovering the current state of AGI research and informing its members. It would not just search under the lampposts of current AI work but would also organize people to look at emerging work. It would output summaries of new developments and might even try to do something like the Doomsday Clock, but for AGI.

comment by madhatter · 2017-05-28T15:43:33.673Z · LW(p) · GW(p)

More specifically, what should the role of government be in AI safety? I understand tukabel's intuition that it should have nothing to do with it, but if an arms race unfortunately occurs, maybe having a government regulatory framework in place is not a terrible idea? Elon Musk seems to think a government regulator for AI is appropriate.

comment by tristanm · 2017-05-27T16:07:00.721Z · LW(p) · GW(p)

The guest list at the Asilomar Conference should give you a pretty good idea of what the AGI risk community already looks like.

My question is: what do you find inadequate about the current state of the AGI risk community?

Replies from: whpearson
comment by whpearson · 2017-05-27T16:33:15.397Z · LW(p) · GW(p)

I'd like more talks like:

"How can we slow the start of the agi arms race?"

"How can we make sure the AGI risk community has an accurate view of what is going on in AGI relevant research (be it neuroscience or AI)?"

Replies from: tristanm
comment by tristanm · 2017-05-27T21:08:32.184Z · LW(p) · GW(p)

FHI has done a fair amount of work trying to determine the implications of different types of policies and strategies regarding AGI development, and the long-term effect of those strategies on risk. One of those issues is openness in AI development, which is especially relevant given the existence of OpenAI, and how the approach to openness may or may not increase the likelihood of an AI arms race under different scenarios.

I think at this point the AGI risk community and the machine learning/neuroscience communities are pretty well connected and aware of each other's overall progress. You'll notice that Demis Hassabis, Ilya Sutskever, Yoshua Bengio, and Yann LeCun, to name just a few, are all experts in machine learning development and were attendees of the Asilomar conference.

Replies from: whpearson
comment by whpearson · 2017-05-27T23:00:03.632Z · LW(p) · GW(p)

Neuroscience != neural networks, and machine learning probably isn't the only bit of current AI work relevant to AGI. This is what I would like.

I'm also interested in who is currently trying to execute on those policies and strategies for minimising an arms race, not just writing research papers.

comment by whpearson · 2017-05-27T13:28:13.318Z · LW(p) · GW(p)

An organisation that regularly surveys AI and AGI researchers/students on safety topics and publishes research into different ways of engaging with them.

comment by tukabel · 2017-05-27T21:02:13.246Z · LW(p) · GW(p)

Let's start instead with what it should NOT look like...

e.g.

  • no government (some would add word "criminals")
  • no evil companies (especially those who try to deceive the victims with "no evil" propaganda)
  • no ideological mindfcukers (imagine mugs from hardcore religious circles shaping the field; it does not matter whether it's a traditional stone-age or dark-age cult or a modern socialist religion)

Replies from: AlexMennen
comment by AlexMennen · 2017-05-28T05:18:49.768Z · LW(p) · GW(p)

"no ideological mindfcukers"

That rules out paying too much attention to the rest of your comment.