Akash's Shortform

post by Akash (akash-wasil) · 2024-04-18T15:44:25.096Z · 5 comments

comment by Akash (akash-wasil) · 2024-04-18T15:44:25.830Z

I think now is a good time for people at labs to seriously consider quitting & getting involved in government/policy efforts.

I don't think everyone should leave labs (obviously). But I would probably hit a button that does something like "everyone on a lab governance team and many technical researchers spend at least 2 hours thinking/writing about the alternative options they have & very seriously consider leaving."

My impression is that lab governance is much less tractable (lab folks have already thought a lot more about AGI) and less promising (competitive pressures are dominating) than government-focused work. 

I think governments remain unsure about what to do, and there's a lot of potential for folks like Daniel K to play a meaningful role in shaping policy, helping natsec folks understand specific threat models, and raising awareness about the specific things governments need to do to mitigate risks.

There may be specific opportunities at labs that are very high-impact, but I think if someone at a lab is "not really sure if what they're doing is making a big difference", I would probably hit a button that allocates them toward government work or government-focused comms work.

Written on a Slack channel in response to discussions about some folks leaving OpenAI. 

comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2024-04-18T17:25:17.512Z

I'd be worried about evaporative cooling. It seems the net result of this would be labs almost completely devoid of people who are earnest about safety.

I agree with you that government pathways to impact are the most plausible and were, until recently, undervalued. I also agree that there are weird competitive pressures at labs.

comment by Akash (akash-wasil) · 2024-04-19T00:18:28.934Z

I do think evaporative cooling is a concern, especially if everyone (or a very significant number of people) left. But I think that, on the margin, more people should be leaving to work in govt.

I also suspect that a lot of systemic incentives will keep a greater-than-optimal proportion of safety-conscious people at labs as opposed to governments (labs pay more, labs are faster and have less bureaucracy, lab people are much more informed about AI, labs are more "cool/fun/fast-paced", lots of govt jobs force you to move locations, etc.)

I also think it depends on the specific lab. E.g., in light of the recent OpenAI departures, I suspect there's a stronger case for staying at OpenAI right now than at DeepMind or Anthropic.

comment by davekasten · 2024-04-18T16:42:56.807Z

I largely agree, but given government hiring timelines, I think there's no dishonor in staying at a lab doing moderately risk-reducing work until you get a hiring offer with an actual start date. This problem is often less bad under the special hiring authorities being used for AI roles, but it's still not ideal.

comment by Akash (akash-wasil) · 2024-04-24T20:14:37.346Z

I'm interested in writing out somewhat detailed intelligence explosion scenarios. The goal would be to investigate what kinds of tools the US government would have for detecting, and intervening in, the early stages of an intelligence explosion.

If you know anyone who has thought about these kinds of questions, whether from an AI community or a US government perspective, please feel free to reach out via LessWrong.