AIXSU - AI and X-risk Strategy Unconference
post by David_Kristoffersson · 2019-09-03T11:35:39.283Z · 3 comments
Start: Friday, November 29, 10am
End: Sunday, December 1, 7pm
Location: EA Hotel, 36 York Street, Blackpool
AIXSU is an unconference on AI and existential risk strategy. As it is an unconference, the event will be created by the participants. There will be an empty schedule which you, the participants, will fill up with talks, discussions and more.
AIXSU is inspired by TAISU, a successful AI Safety unconference held at the EA Hotel in August. The AI and existential risk strategy space seems to be in need of more events, and AIXSU hopes to close this gap a bit. The unconference will be three days long.
To enable high-level discussion during the unconference, we require that all participants have some prior involvement with AI or existential risk strategy. AI and existential risk strategy concerns the broad spectrum of problems we need to solve in order for humanity to handle the technological transitions ahead of us. Topics of interest include but are not limited to: Macrostrategy, technological forecasting, technological scenarios, AI safety strategy, AI governance, AI policy, AI ethics, cooperative principles and institutions, and foundational philosophy on the future of humanity. Here is an incomplete list of sufficient criteria:
- You have participated in one of the following and have an interest in strategic questions: Strategy, ideas, and life paths for reducing existential risks; AI Safety Camp; MSFP/AISFP; Human-aligned AI Summer School; Learning-by-doing AI Safety workshop.
- You currently work, or have previously worked or interned, at an established existential risk reduction organization.
- You have published papers or sufficiently high-quality blog posts on strategy-related topics.
- You combine involvement in AI safety or other existential risk work with an interest in strategy. For example, you've worked on AI safety on and off for a few years and also have an active interest in strategy-related questions.
- You are pursuing a possible future in AI strategy or existential risk strategy and have read relevant texts on the topic.
If you feel uncertain about qualifying, please feel free to reach out and we can have a chat about it.
You can participate in the unconference for as many or as few days as you like. You are also welcome to stay longer at the EA Hotel before or after the unconference.
Price: Pay what you want (cost price is £10/person/day).
Food: All meals will be provided by the EA Hotel. All food will be vegan.
Lodging: The EA Hotel has two dorm rooms reserved for AIXSU participants. If the dorm rooms fill up, or if you would like a room of your own, there are many nearby hotels you can book. We will provide information on nearby hotels.
Attendance is on a first-come, first-served basis. Make sure to apply soon if you want to secure your spot.
3 comments
Comments sorted by top scores.
comment by mschons · 2019-10-28T17:55:43.436Z
Earlier this year we hosted an X-Risk Strategy workshop with the Convergence Team in Cologne (https://forum.effectivealtruism.org/posts/cPZ9w2Wxxu2kA9EDg/workshop-strategy-ideas-and-life-paths-for-reducing) with around 20 participants from around Germany.
The overall rating was ~ +2 on a scale from -3 to +3, with 0 indicating an average workshop. Overall, we had very positive feedback from participants with a wide range of backgrounds:
"I believe the workshop has helped me to internalize the goal of reducing X-risks. As a result, I anticipate that I will take more concrete steps towards a career in this area than I otherwise would have."
"I really got a motivational boost, especially thanks to the conversation I had with Justin in the evening.
It has become stronger (just meeting like minded people had a lot of influence) but also the possibility of negative impact is now more prominent and I will take it into account more."
"I am still of the opinion that X-Risk is one of the most important causes to tackle, possibly even stronger now. I enjoyed being among other Effective Altruists and I feel as if I have a better impression of the community now."
"I am now more motivated to self-study and maybe try and build something before continuing my university studies."
...
Negative feedback included 'no major insights' or 'too much focus on helping non-math people understand models', which is somewhat expected in a diverse crowd.
As an organizer of this event I'm likely to be biased, but this event was:
- helpful and valuable to novices (I cannot say that we had real experts around) and educated them about the risks and potential downsides of this topic
- considered valuable by people with a strong interest in AI safety. I also got much positive feedback from people in other EA communities for hosting such an event.
All in all, I would be surprised to see major downsides to this event, and I'm pretty confident that participants will benefit overall.
comment by Jan Kulveit (jan-kulveit) · 2019-09-04T00:41:13.601Z
FWIW, my personal feelings about this:
- I expect this to have possibly large downside risks, being harmful for novices and of unclear value to experts
- while some AI Safety Camp members considered organizing an event focused on strategy, after consulting experts the decision was not to do it
- happenings like this make me update on the value of the EA Hotel in the negative direction
(Apologies for not providing comprehensive justification; I also won't have time for much discussion here. This opinion is purely my personal feeling and not a position of any organization I'm involved with.)
comment by David_Kristoffersson · 2019-09-06T00:55:46.831Z
I expect the event to have no particular downside risks, and to give interesting input and spark ideas in experts and novices alike. Mileage will vary, of course; unconferences foster dynamic discussion and a living agenda. If hosting this event is risky, then AI strategy and forecasting meetups and discussions at EAG would be risky too, and they should also not be hosted.
Other AIXSU attendees and I pay careful attention to potential downside risks. I also think it's important that we don't strangle open intellectual advancement. We need to figure out what we should talk about, not conclude that we shouldn't talk.
AISC: To clarify, AI Safety Camp is different and places greater trust in the judgement of novices, since teams are generally run entirely by novices. The person who proposed running a strategy-focused AISC found the reactions from experts to be mixed. He also reckoned the event would overlap with the existing AI safety camps, since they already include strategy teams.
Potential negative side effects of strategy work are a very important topic. I hope to discuss them with attendees at the unconference!