Posts

ARENA 4.0 Impact Report 2024-11-27T20:51:54.844Z
AI Alignment Research Engineer Accelerator (ARENA): Call for applicants v4.0 2024-07-06T11:34:57.227Z
Announcing the London Initiative for Safe AI (LISA) 2024-02-02T23:17:47.011Z
Reward Hacking from a Causal Perspective 2023-07-21T18:27:39.759Z
Incentives from a causal perspective 2023-07-10T17:16:28.373Z
Agency from a causal perspective 2023-06-30T17:37:58.376Z
Causality: A Brief Introduction 2023-06-20T15:01:39.377Z
Introduction to Towards Causal Foundations of Safe AGI 2023-06-12T17:55:24.406Z

Comments

Comment by James Fox on ARENA 4.0 Impact Report · 2024-12-03T04:41:18.222Z · LW · GW

Thank you for your comment.

We are confident that ARENA's in-person programme is among the most cost-effective technical AI safety training programmes: 
- ARENA is highly selective, and so all of our participants have the latent potential to contribute meaningfully to technical AI safety work
- The marginal cost per participant is relatively low compared to other AI safety programmes since we only cover travel and accommodation expenses for 4-5 weeks (we do not provide stipends)
- The outcomes set out in the above post seem pretty strong (4/33 immediate transitions to AI safety roles and 24/33 more actively pursuing them)
- There are lots of reasons why technical AI safety engineering is not the right career fit for everyone (even for those with the ability). Therefore, I think that 2/33 participants updating against working in AI safety after the programme is actually quite a low attrition rate.
- Apart Hackathons have quite a different theory of change from ARENA. While hackathons can be valuable for some initial exposure, ARENA provides 4 weeks of comprehensive training in cutting-edge AI safety research (e.g., mechanistic interpretability, LLM evaluations, and RLHF implementation) that leads to concrete outputs through week-long capstone projects.

Comment by James Fox on AI Alignment Research Engineer Accelerator (ARENA): Call for applicants v4.0 · 2024-09-05T23:02:27.293Z · LW · GW

Sorry for not seeing this earlier. Hopefully, the first paragraph of the summary answers this question. We're excited about running more ARENA iterations precisely because its track record has been strong.

Comment by James Fox on Utility Maximization = Description Length Minimization · 2023-06-23T14:24:48.716Z · LW · GW

I know you've acknowledged Friston at the end, but I'm commenting for other interested readers' benefit: this is very close to Karl Friston's active inference framework, which posits that all agents minimise the discrepancies (or prediction errors) between their internal representations of the world and their incoming sensory information, through both action and perception.
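To make the connection concrete (my gloss, using the standard variational free energy notation; none of this is from the post itself): writing $q(s)$ for the agent's internal representation of hidden states $s$ and $o$ for its sensory observations, the quantity being minimised is

$$F(q, o) = D_{\mathrm{KL}}\big(q(s)\,\|\,p(s \mid o)\big) - \log p(o) \;\geq\; -\log p(o).$$

Perception minimises $F$ over $q$ (pulling the internal model towards the posterior), while action changes $o$ itself so that the surprisal $-\log p(o)$, i.e. the description length of the observations under the agent's generative model, becomes small. That is roughly the sense in which the free-energy view lines up with the description-length framing of the post.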

Comment by James Fox on Progress on Causal Influence Diagrams · 2021-10-02T13:13:37.270Z · LW · GW

Hi Vanessa, thanks for your question! Sorry for taking a while to reply. The answer is yes if we allow mixed policies (i.e., where an agent can correlate all of its decision rules for different decisions using a shared random bit), but no if we restrict agents to behavioural policies (i.e., where the decision rules for each of an agent's decisions are independent because they cannot access a shared random bit). This is analogous to the difference between mixed and behavioural strategies in extensive-form games, where (in general) a subgame perfect equilibrium (SPE) is only guaranteed to exist in mixed strategies (assuming the game is finite, by Nash's theorem).
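To illustrate that distinction with a toy sketch (my own framing, not from the paper), consider a single agent with two binary decisions whose second decision rule cannot see the first:

```python
import random
from collections import Counter

# Toy illustration (not from the paper): one agent, two binary decisions D1, D2,
# where the rule for D2 cannot observe what was chosen at D1.

def behavioural_policy(p1=0.5, p2=0.5):
    """Behavioural policy: each decision is randomised independently."""
    a1 = int(random.random() < p1)
    a2 = int(random.random() < p2)  # independent of a1: no recall, no shared bit
    return a1, a2

def mixed_policy():
    """Mixed policy: one shared random bit selects a deterministic rule
    for *both* decisions, so the two decisions can be perfectly correlated."""
    b = int(random.random() < 0.5)  # shared random bit
    return b, b                     # deterministic rules: play b at D1 and at D2

if __name__ == "__main__":
    print(Counter(behavioural_policy() for _ in range(10_000)))  # mass on all four pairs
    print(Counter(mixed_policy() for _ in range(10_000)))        # mass only on (0,0) and (1,1)
```

The mixed policy puts all its probability on (0, 0) and (1, 1), whereas any behavioural policy that puts positive probability on both of those outcomes must also put positive probability on (0, 1) and (1, 0), so no behavioural policy reproduces that joint distribution. Equilibria that require this kind of correlation are the ones that can fail to exist once agents are restricted to behavioural policies.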

Note that if all agents in the MAIM have perfect recall (i.e., they remember their previous decisions and the information they knew when making those decisions), then an SPE is guaranteed to exist in behavioural policies. In fact, Koller and Milch showed that only a weaker criterion of "sufficient recall" is needed (https://www.semanticscholar.org/paper/Ignorable-Information-in-Multi-Agent-Scenarios-Milch-Koller/5ea036bad72176389cf23545a881636deadc4946).

In a forthcoming journal paper, we expand significantly on the theoretical underpinnings and advantages of MAIMs, so we will provide more results there.