Posts

Technical Risks of (Lethal) Autonomous Weapons Systems 2024-10-23T20:41:13.238Z
UNGA Resolution on AI: 5 Key Takeaways Looking to Future Policy 2024-03-24T12:23:51.698Z
NAIRA - An exercise in regulatory, competitive safety governance [AI Governance Institutional Design idea] 2024-03-19T17:43:43.219Z
Heramb's Shortform 2024-02-06T01:49:19.113Z
Why building ventures in AI Safety is particularly challenging 2023-11-06T16:27:36.535Z
Overcorrecting in AGI Timeline Forecasts in the current AI boom 2023-04-10T16:43:55.174Z

Comments

Comment by Heramb on Heramb's Shortform · 2024-04-28T15:25:35.648Z · LW · GW

Everyone writing policy papers or doing technical work seems to keep generative AI at the back of their mind when framing their work or impact.

This narrow focus on gen AI may well be net-negative for us: we unknowingly or unintentionally ignore the ripple effects of the gen AI boom in other fields (for example, robotics companies getting more funding, which leads to more capabilities, which leads to new types of risks).

And guess who benefits if we do end up getting good evals/standards in place for gen AI? Companies and investors seem like the clear winners, because we then have to go back to the drawing board and advocate for the same kind of safeguards for robotics or a different AI use case/type, all while the development and capability cycles keep maturing.

We seem to be in whack-a-mole territory now because the Overton window has shifted for investors.

Comment by Heramb on Heramb's Shortform · 2024-02-06T01:49:19.203Z · LW · GW

(Copying my quick take from the EA Forum)

I find the Biden chip export controls a step in the right direction, and they also made me update my world model toward compute governance being an impactful lever. However, I am concerned that our goals aren't aligned with theirs: US policymakers' incentives right now are to curb China's tech growth and to fight a trade war, not to pause AI.

This optimization for different incentives will probably create a split between US policymakers and AI safety folks as time goes on.

It also makes China more likely to treat this as a tech race, setting up competitive race dynamics between the US and China that I don't see talked about enough.

Comment by Heramb on Five neglected work areas that could reduce AI risk · 2023-09-24T15:12:24.097Z · LW · GW

Great post! On institutional design, do you have any advice for making a proposal like this less abstract and more valuable?

I can't shake the feeling that just about anyone can whip up a structure/design that considers a couple of the stakeholders. What would a design that actually moves the needle need to have or be able to do?

Comment by Heramb on AI pause/governance advocacy might be net-negative, especially without a focus on explaining x-risk · 2023-09-03T12:34:30.426Z · LW · GW

I agree with the concern about accidentally making it harder for x-risk regulations to be passed; this is probably also something to keep in mind for the part of the community that works on mitigating the misuse of AI.
Here are some concerns I have on this specific point; I am curious what people think about them:

1. Policy Feasibility: Policymakers often operate on short-term electoral cycles, which inherently conflict with the long-term nature of x-risks. This temporal mismatch reduces the likelihood of substantial policy action. Therefore, advocacy strategies should focus on aligning x-risk mitigation with short-term political incentives. 

2. Incrementalism as Bayesian Updating: A step-by-step regulatory approach can serve as real-world Bayesian updating. Initial, simpler policies can act as 'experiments' whose outcomes inform more complex policies (a one-line sketch of the update rule follows this list). This iterative process increases the likelihood of effective long-term strategies.

3. Balanced Multi-Tiered Regulatory Approach: Addressing immediate societal concerns and misuse (like deepfakes) seems necessary for any sweeping AI x-risk regulation, since those concerns are within the Overton window and on constituents' minds. Passing something aimed only at x-risks, and not at the other concerns, would require significant political or social capital.
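As a minimal sketch of the Bayesian-updating analogy in point 2 (my illustration; treating enacted policies as 'evidence' is an informal mapping, not a formal model): each policy experiment supplies evidence $E$ that updates our credence in a candidate regulatory strategy $S$ being effective,

$$P(S \mid E) = \frac{P(E \mid S)\, P(S)}{P(E)}$$

so cheap early policies mostly buy information, sharpening the prior $P(S)$ that later, more complex regulation is built on.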

By establishing regulatory frameworks that address more immediate concerns (based on multivariate utility functions), we can probably lay the groundwork for more complex regulations aimed at existential risks. This is also why I think x-risk policy advocates come off as radical, robotic, or "a bit out there": they are so focused on talking about x-risk that they forget the more immediate, short-term human concerns.

With x-risk regulation, there doesn't seem to be a silver bullet; these things will require intellectual rigour, pragmatic compromise, and iteration (and say hello to policy inertia).

Comment by Heramb on Scholarship: How to Do It Efficiently · 2023-06-26T10:59:42.398Z · LW · GW

Very nice approach! I like the almost algorithmic flow. Other approaches I find important: a) talking for ~20 minutes each with 2-4 people who are working on the problem but are not too far along (so the conversation can have an informal tone); b) talking to 1-2 people who have no idea about it (this gives a bird's-eye view); c) going to a conference to see what kind of language people use and what the presentations at the current edge of development are up to; this also helps form connections (maybe useful for the first two steps).

Comment by Heramb on The AI governance gaps in developing countries · 2023-04-04T13:47:37.276Z · LW · GW

Excellent work! I have also been pretty concerned about gaps in the global AI governance ecosystem, but a bit sceptical of how impactful focusing on developing countries would be. This essay reminds me that LMICs are still part of the ecosystem, and one hole can make the whole bucket leak.

Particularly love the bit on incentivizing checks and balances instead of forcing them on countries!