Apply to HAIST/MAIA’s AI Governance Workshop in DC (Feb 17-20)
post by Phosphorous (everett-smith), Xander Davies (xanderdavies), CMD (Chinmay Deshpande), Paramedic (Fiona_Pollack), tlevin (trevor) · 2023-01-31
The Harvard AI Safety Team and MIT AI Alignment, supported by the Cambridge Boston Alignment Initiative (CBAI), are excited to announce a 3-day workshop on AI Governance, Policy, and Strategy (GPS) on President’s Day weekend (February 17th-20th) in Washington, DC.
The workshop is designed for those looking to explore or accelerate a career in AI-related policymaking or GPS research, and it will have a specific emphasis on mitigating catastrophic risks from advanced AI. Participants will have the opportunity to meet career professionals working in research, technology policy, and national security. Travel and accommodation will be provided.
Applications open today and are due at 11:59 pm EST on February 3rd. Apply now. The application consists of several short-answer questions; we expect it to take 30-40 minutes. We'll release decisions by February 5th.
We expect the application process to be competitive; existential risk or AI safety-oriented groups are encouraged to recommend applicants with this link.
Dates & Logistics
Participants should plan to arrive at the venue (in Arlington, about a 10-minute drive from DCA) by 6pm on Friday, February 17th, and leave around 3:30pm on Monday, February 20th. We will provide housing and food, and will reimburse travel costs.
Audience
We’re targeting the workshop at two different groups:
- Early-career professionals and undergraduate or graduate students (at any university) who work on reducing risks from advanced AI or who have some previous engagement with these risks and are considering a career focused on mitigating them through governance, policy, or strategy.
- Harvard and MIT undergraduate or graduate students who are interested in AI policy and strategy, but who have not previously substantially engaged with catastrophic risks from advanced AI. This group will have previously met for three reading-and-discussion sessions of two hours each, which will provide them with basic background on risks from advanced AI and transformative AI timelines.
We are especially interested in applicants with a technical background who are interested in exploring GPS.
Content
The workshop will cover topics in AI GPS including (but not limited to):
- The inputs to AI progress (hardware, algorithms, and data) and the opportunities these present for governance.
- The strategic implications of the rate of AI progress for states, corporations, and interest groups.
- Possibilities for international coordination on governing AI systems.
- Whether advanced AI systems will tend to produce unipolar or multipolar distributions of power.
Format
The workshop will include a mix of:
- Reading and discussion sessions on policy topics.
- Q&As and social events with AI GPS professionals.
- A role-based simulation of public and private responses to an AI-related crisis.
- Interactive sessions designed to sharpen participants’ practical problem-solving abilities.
If you have any questions, you can comment below or reach out to Everett Smith at ets57[at]georgetown[dot]edu.