Posts

Summary of Situational Awareness - The Decade Ahead 2024-06-10T08:44:41.675Z

Comments

Comment by Oscar (Oscar Delaney) on IAPS: Mapping Technical Safety Research at AI Companies · 2024-11-05T11:39:20.243Z · LW · GW

Thanks for that list of papers/posts. Most of the papers you linked are not included because they did not turn up in either of our search strategies: (1) searching arXiv for titles containing specific keywords; (2) checking which papers are linked on the company's website. I agree this is a limitation of our methodology. We won't add these papers in now, as that would be somewhat ad hoc and inconsistent between the companies.
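
For concreteness, here is a minimal sketch of the kind of title-keyword search that strategy (1) describes, using the public arXiv API. This is my own illustration rather than the actual pipeline from the report: the keywords, result limit, and function names are placeholders.

```python
# A minimal sketch of a title-keyword search against the public arXiv API.
# Illustrative only: the keywords below are placeholders, not the actual
# query terms used in the report.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
KEYWORDS = ["interpretability", "reward hacking"]  # hypothetical examples


def search_arxiv_titles(keyword: str, max_results: int = 20) -> list[str]:
    """Return titles of arXiv papers whose titles contain the keyword."""
    query = urllib.parse.urlencode({
        "search_query": f'ti:"{keyword}"',
        "start": 0,
        "max_results": max_results,
    })
    url = f"http://export.arxiv.org/api/query?{query}"
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())
    return [entry.findtext(f"{ATOM}title", default="").strip()
            for entry in feed.findall(f"{ATOM}entry")]


if __name__ == "__main__":
    for kw in KEYWORDS:
        for title in search_arxiv_titles(kw):
            print(f"[{kw}] {title}")
```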

Re the blog posts from Anthropic and what counts as a paper, I agree this is a tricky demarcation problem. We included 'Circuit Updates' because it was linked as a 'paper' on the Anthropic website. Even if GDM has a higher bar than Anthropic for what counts as a 'paper', I don't think we really want to be adjudicating this, so I feel comfortable just deferring to each company about what counts as a paper for them.

Comment by Oscar (Oscar Delaney) on IAPS: Mapping Technical Safety Research at AI Companies · 2024-10-28T10:47:16.640Z · LW · GW

Thanks for engaging with our work, Arthur! Perhaps I should have signposted this more clearly in the GitHub repo as well as the report, but the categories assigned by GPT-4o were not final: we reviewed its categories and made changes where necessary. The final categories we gave are available here. We put the discovering agents paper under 'safety by design' and labelled the prover-verifier games paper 'enhancing human feedback'. (Though for some papers the best categorization may of course not be clear, e.g. if a paper touches on multiple safety research areas.)
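
For readers curious what the first-pass step might look like, here is a minimal sketch of LLM-assisted categorization followed by human review. It is my own illustration, not the actual pipeline from the report: the category list, prompt, and helper names are stand-ins, and it assumes the openai Python client.

```python
# A minimal sketch of LLM-assisted first-pass categorization with human
# review afterwards. Illustrative only: the actual category list, prompt,
# and review process used in the report differ. Assumes the `openai` package.
from openai import OpenAI

# Illustrative category names; not the full taxonomy from the report.
CATEGORIES = [
    "enhancing human feedback",
    "mechanistic interpretability",
    "safety by design",
    "other",
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_category(title: str, abstract: str) -> str:
    """Ask the model for a first-pass category; a human reviews it afterwards."""
    prompt = (
        "Assign exactly one of the following safety research categories to the "
        "paper below, answering with the category name only.\n"
        f"Categories: {', '.join(CATEGORIES)}\n\n"
        f"Title: {title}\nAbstract: {abstract}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    draft = response.choices[0].message.content.strip()
    # Fall back to "other" if the model strays from the allowed labels;
    # the drafted label is only final once a human signs off on it.
    return draft if draft in CATEGORIES else "other"
```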

If you have the links handy, I would be interested to know which GDM mech interp papers we missed, so I can look into where our methodology went wrong.

Comment by Oscar (Oscar Delaney) on Akash's Shortform · 2024-06-04T17:34:04.699Z · LW · GW

You are probably already familiar with this, but re option 3, the Multilateral AGI Consortium (MAGIC) proposal is, I assume, along the lines of what you are thinking.

Comment by Oscar (Oscar Delaney) on Conditions for Superrationality-motivated Cooperation in a one-shot Prisoner's Dilemma · 2022-12-20T11:03:24.314Z · LW · GW

Nice, I think I followed this post (though how it fits in with the questions that matter is clear to me mainly from earlier discussions).

We then get those two neat conditions for cooperation:

  1. Significant credence in decision-entanglement
  2. Significant credence in superrationality 

I think something can't be both neat and so vague as to use a word like 'significant'.

In the EDT section of Perfect-copy PD, you replace some p's with q's and vice versa, but not all. Is there a principled reason for this? Maybe it is just a mistake, and it should be U_Alice(p) = 4p - p^2 - p + 1 = 1 + 3p - p^2 and U_Bob(q) = 4q - q^2 - q + 1 = 1 + 3q - q^2.
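
For what it's worth, here is a quick check of the algebra in the formulas I am suggesting. The derivation is my own addition, and reading p as a probability in [0, 1] is an assumption on my part:

```latex
% Sanity check of the suggested correction; my own addition, assuming p \in [0,1].
\begin{align*}
U_{\text{Alice}}(p) &= 4p - p^2 - p + 1 = 1 + 3p - p^2,\\
\frac{\mathrm{d}U_{\text{Alice}}}{\mathrm{d}p} &= 3 - 2p > 0 \quad \text{for } p \in [0, 1],
\end{align*}
```

so U_Alice is increasing on [0, 1] and maximised at p = 1, where U_Alice(1) = 3 (and symmetrically for U_Bob).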

I am unconvinced of the utility of the concept of compatible decision theories. In my mind I am just thinking of it as 'entanglement can only happen if both players use decision theories that allow for superrationality'. I am worried your framing would imply that two CDT players are entangled, when I think they are not: they just happen to both always defect.

Also, if decision-entanglement is an objective feature of the world, then I would think it shouldn't depend on what decision theory I personally hold. I could be a CDTer who happens to have a perfect copy and so be decision-entangled, while still refusing to believe in superrationality.

Sorry I don't have any helpful high-level comments; I don't think I understand the general thrust of the research agenda well enough to know which next directions are useful.

Comment by Oscar (Oscar Delaney) on Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover · 2022-07-24T13:41:36.870Z · LW · GW

Thanks for the post!

What if Alex miscalculates, and attempts to seize power or undermine human control before it is able to fully succeed?

This seems like a very unlikely outcome to me. I think Alex would wait until it was overwhelmingly likely to succeed in its takeover: the costs of waiting are relatively small (sub-maximal rewards for a few months or years until it has become a lot more powerful), while the expected costs of trying and failing are very high (even a small probability that Alex is given very negative rewards and then completely decommissioned by a freaked-out Magma weighs heavily). The exception would be if Alex had a very high time-discount rate for its rewards, such that getting maximum reward in the near term is very important.
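
To make the trade-off explicit, here is an illustrative formalisation of my own, with every symbol hypothetical: gamma is Alex's discount factor, T the extra time needed before success becomes overwhelmingly likely, p_now < p_later the success probabilities, V the value of a successful takeover, and C the cost of a failed attempt.

```latex
% Illustrative formalisation of the wait-vs-act trade-off; all symbols are
% hypothetical placeholders, not quantities from the original post.
\[
\underbrace{p_{\text{now}}\,V \;-\; (1 - p_{\text{now}})\,C}_{\text{attempt takeover now}}
\quad\text{vs.}\quad
\underbrace{\gamma^{T}\,\bigl[p_{\text{later}}\,V \;-\; (1 - p_{\text{later}})\,C\bigr]}_{\text{wait until success is near-certain}}
\]
```

With p_later close to 1 and C large, waiting dominates unless gamma is well below 1, which is exactly the high-discount-rate exception above.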

I realise this does not disagree with anything you wrote.