Posts

AI safety content you could create 2025-01-06T15:35:56.167Z
Policymakers don't have access to paywalled articles 2025-01-05T10:56:11.495Z
Alignment Is Not All You Need 2025-01-02T17:50:00.486Z
OpenAI’s cybersecurity is probably regulated by NIS Regulations 2024-10-25T11:06:38.392Z
The AI regulator’s toolbox: A list of concrete AI governance practices 2024-08-10T21:15:09.265Z
OHGOOD: A coordination body for compute governance 2024-05-04T12:03:16.716Z
How are voluntary commitments on vulnerability reporting going? 2024-02-22T08:43:56.996Z

Comments

Comment by Adam Jones (domdomegg) on Policymakers don't have access to paywalled articles · 2025-01-06T15:17:21.594Z · LW · GW

How big of an issue is this in practice? For AI in particular, considering that so much contemporary research is published on arXiv, it must be relatively accessible?

I think this is less of an issue for technical AI papers. But I'm finding that more governance researchers (especially people moving from other academic communities) seem intent on publishing in journals that policymakers can't access! I have also sometimes been blocked from easily sharing papers with governance friends because they are behind paywalls. I might see this more because at BlueDot we get a lot of people who are early in their career transition and producing projects they want to publish somewhere.

Comment by Adam Jones (domdomegg) on What’s the short timeline plan? · 2025-01-03T12:52:19.004Z · LW · GW

Thank you for writing this. I've tried to summarize this article (the summary misses some good points made above, but might be useful to people deciding whether to read the full article):

Summary

AGI might be developed by 2027, but we lack clear plans for tackling misalignment risks. This post:

  • calls for better short-timeline AI alignment plans
  • lists promising interventions that could be stacked to reduce risks

This plan focuses on two minimum requirements:

  • Secure model weights and algorithmic secrets
  • Ensure the first AI capable of alignment research isn't scheming

Layer 1 interventions (essential):

  • AI systems should maintain human-legible and faithful reasoning.
    • If achieved, we should monitor this reasoning, particularly for scheming, power-seeking, and broad goal-directedness (using other models or simple probes).
    • If not, we should fall back on control techniques that assume the model might be scheming.
  • Evaluations support other strategies, and give us better awareness of model alignment and capabilities.
  • Information and physical security protects model weights and algorithmic secrets.

Layer 2 interventions (important):

  • Continue improving 'current' alignment methods like RLHF and RLAIF.
  • Maintain research on interpretability, oversight, and "superalignment", and prepare to accelerate this work once we have human-level AI R&D.
  • Increase transparency in AI companies' safety planning (internally, with experts, and publicly).
  • Develop a safety-first culture in AI organizations.

This plan is meant as a starting point, and Marius encourages others to come up with better plans.

Comment by Adam Jones (domdomegg) on What’s the short timeline plan? · 2025-01-03T11:32:43.217Z · LW · GW

At BlueDot we've been thinking about this a fair bit recently, and might be able to help here too. We have also thought a bit about criteria for good plans and the hurdles a plan needs to overcome, and have reviewed a lot of the existing literature on plans.

I've messaged you on Slack.

Comment by Adam Jones (domdomegg) on Alignment Is Not All You Need · 2025-01-02T23:40:56.211Z · LW · GW

Re: Your comments on the power distribution problem

Agreed that multiple powerful, adversarial entities controlling AI seems like not a good plan. And I agree that if the decisive winner of the AI race will not act in humanity's best interests, we are screwed.

But I think this is a problem to address before that happens: we can shape the world today so it's more likely that the winner of the AI race will act in humanity's best interests.

Comment by Adam Jones (domdomegg) on Alignment Is Not All You Need · 2025-01-02T23:34:40.983Z · LW · GW

Re: Your points about alignment solving this.

I agree that if you define alignment as 'get your AI system to act in the best interests of humans', then the alignment problem becomes harder, and solving it is likely sufficient for problems 2 and 3. But I think this bundles more problems together in a way that might be less conducive to solving them.

For loss of control, I was primarily thinking about making systems intent-aligned, by which I mean getting the AI system to try to do what its creators intend. I think this makes dividing these challenges up into subproblems easier (and seems to be what many people are gunning for).

If you do define alignment as human-values alignment, I think "If you fail to implement a working alignment solution, you [the creating organization] die" doesn't hold - I can imagine successfully aligning a system to 'act in the best interests of its creators' working fine for its creators but not being great for the world.

Comment by Adam Jones (domdomegg) on OpenAI’s cybersecurity is probably regulated by NIS Regulations · 2024-10-25T13:41:42.718Z · LW · GW

It should! Fixed, thank you :)

Comment by Adam Jones (domdomegg) on AIS terminology proposal: standardize terms for probability ranges · 2024-09-13T18:27:34.426Z · LW · GW

The UK Government tends to use the PHIA probability yardstick in most of its communications.

This is used very consistently in national security publications. It's also commonly used by other UK Government departments as people frequently move between departments in the civil service, and documents often get reviewed for clearance by national security bodies before public release.

It is less granular than the IPCC terms at the extremes, but the ranges don't overlap. I don't know which is actually better to use in AI safety communications, but being clear about which one you are using in your writing seems a good way to go! In any case, being aware that it's something you'll see in UK Government documents might be useful.
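
For reference, here are the yardstick bands as I remember them, expressed as a rough probability mapping (a sketch from memory, not the official text - the exact percentages and wording should be checked against the published PHIA guidance):

```python
# Approximate PHIA probability yardstick bands (from memory; check the official
# guidance before relying on the exact figures). Note the deliberate gaps between bands.
PHIA_YARDSTICK = {
    "remote chance":         (0.00, 0.05),
    "highly unlikely":       (0.10, 0.20),
    "unlikely":              (0.25, 0.35),
    "realistic possibility": (0.40, 0.50),
    "likely / probable":     (0.55, 0.75),
    "highly likely":         (0.80, 0.90),
    "almost certain":        (0.95, 1.00),
}

def phia_term(p: float) -> str:
    """Return the PHIA term whose band contains (or is closest to) probability p."""
    def distance(band):
        lo, hi = band
        return 0.0 if lo <= p <= hi else min(abs(p - lo), abs(p - hi))
    return min(PHIA_YARDSTICK, key=lambda term: distance(PHIA_YARDSTICK[term]))
```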

Comment by Adam Jones (domdomegg) on The AI regulator’s toolbox: A list of concrete AI governance practices · 2024-09-03T16:40:08.307Z · LW · GW

A comment provided to me by a reader, highlighting 3rd party liability and insurance as interventions too (lightly edited):

Hi! I liked your AI regulator’s toolbox post – very useful to have a comprehensive list like this! I'm not sure exactly what heading it should go under, but I suggest considering adding proposals to greatly increase 3rd party liability (and/or require carrying insurance). A nice intro is here:
https://www.lawfaremedia.org/article/tort-law-and-frontier-ai-governance

Some are explicitly proposing strict liability for catastrophic risks. Gabe Weil has a proposal, summarized here: https://www.lesswrong.com/posts/5e7TrmH7mBwqpZ6ek/tort-law-can-play-an-important-role-in-mitigating-ai-risk

There are also workshop papers on insurance here:
https://www.genlaw.org/2024-icml-papers#liability-and-insurance-for-catastrophic-losses-the-nuclear-power-precedent-and-lessons-for-ai
https://www.genlaw.org/2024-icml-papers#insuring-uninsurable-risks-from-ai-government-as-insurer-of-last-resort

NB: when implemented correctly (i.e. when premiums are accurately risk-priced), insurance premiums are mechanically similar to Pigouvian taxes, internalizing negative externalities. So maybe this goes under the "Other taxes" heading? But that also seems odd. Like taxes, these are certainly incentive alignment strategies (rather than command and control) – maybe that's a better heading? Just spitballing :)
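
To spell out the mechanical similarity: if premiums are accurately risk-priced, the operator pays roughly the expected harm its activity imposes on third parties, which is the same quantity an ideal Pigouvian tax would target. A minimal sketch, with p the probability of a catastrophic loss and D the third-party damage it would cause:

```latex
% Minimal sketch: an accurately risk-priced premium charges the expected external harm,
% which is the quantity an ideal Pigouvian tax would also charge.
\[
  \text{premium} \;\approx\; \mathbb{E}[\text{external harm}] \;=\; p \cdot D \;=\; \text{Pigouvian tax}
\]
```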

Comment by Adam Jones (domdomegg) on Self-Other Overlap: A Neglected Approach to AI Alignment · 2024-08-11T13:01:00.788Z · LW · GW

I don't understand how the experimental setup provides evidence for self-other overlap working.

The reward structure for the blue agent doesn't seem to provide a non-deceptive reason to interact with the red agent. The described "non-deceptive" behaviour (going straight to the goal) doesn't seem to demonstrate awareness of or response to the red agent.

Additionally, my understanding of the training setup is that it tries to make the blue agent's activations the same regardless of whether it observes the red agent or not. This would mean there's effectively no difference when seeing the red agent, i.e. no awareness of it. (This is where I'm most uncertain - I may have misunderstood this! Is it to do with only training on the subset of cases where the blue agent doesn't originally observe the red agent? Or with the KL penalty?) So we seem to be training the blue agent to ignore the red agent.
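
To make my reading concrete, here's a minimal sketch of the kind of objective I think is being described (the `policy(obs, return_activations=True)` interface, the MSE overlap term, and the KL weighting are my assumptions for illustration, not the authors' actual code):

```python
import torch
import torch.nn.functional as F

def self_other_overlap_loss(policy, ref_policy, obs_with_red, obs_red_masked, kl_coeff=0.1):
    """Sketch of the training objective as I understand it (hypothetical interface)."""
    # Activations and action logits when the blue agent can see the red agent...
    acts_other, logits_other = policy(obs_with_red, return_activations=True)
    # ...and when the red agent is masked out of the observation.
    acts_self, _ = policy(obs_red_masked, return_activations=True)

    # Push the internal activations to be the same whether or not the red agent is visible.
    overlap_loss = F.mse_loss(acts_other, acts_self)

    # KL penalty keeping the fine-tuned policy close to the original (pre-fine-tuning) policy.
    with torch.no_grad():
        _, ref_logits = ref_policy(obs_with_red, return_activations=True)
    kl_penalty = F.kl_div(
        F.log_softmax(logits_other, dim=-1),
        F.softmax(ref_logits, dim=-1),
        reduction="batchmean",
    )

    return overlap_loss + kl_coeff * kl_penalty
```

If this is roughly right, the overlap term directly rewards the blue agent's internals for being indifferent to whether the red agent is present, which is the source of my confusion.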

I think what might answer my confusion is "Are we just training the blue agent to ignore the red agent entirely?"

  • If yes, how does this show self-other overlap working?
  • If no, how does this look different to training the blue agent to ignore the red agent?

Alternatively, I think I'd be more convinced by an experiment with a task where the blue agent still obviously needs to react to the red agent. One idea could be to add a non-goal obstacle in the way that applies a penalty to both agents if either goes through it, and that can only be seen by the red agent (which knows it is this specific non-goal). Then the blue agent would have to sense when the red agent is hesitant about going somewhere and try to go around the obstacle (I haven't thought very hard about this, so it might also have problems).

Comment by Adam Jones (domdomegg) on Which skincare products are evidence-based? · 2024-05-23T13:02:29.149Z · LW · GW

Most sunscreen feels horrible and slimy (especially in the US where the FDA has not yet approved the superior formulas available in Europe and Asia).

What superior formulas available in Europe would you recommend?

Comment by Adam Jones (domdomegg) on OHGOOD: A coordination body for compute governance · 2024-05-05T00:05:43.514Z · LW · GW

Thanks for the feedback! The article does include some bits on this, but I don't think LessWrong supports toggle block formatting.

I think individuals probably won't be able to train models themselves that pose advanced misalignment threats before large companies do. In particular, I think we disagree about how likely it is that someone will discover a big algorithmic efficiency trick that enables people to leap forward on this (I don't think this will happen; I think you think it will).

But I do think the catastrophic misuse angle seems fairly plausible - particularly from fine-tuning. I also think an 'incompetent takeover'[1] might be plausible for an individual to trigger. Both of these are probably not well addressed by compute governance (except maybe by stopping large companies from releasing model weights for fine-tuning by individuals).

  1. ^

    I plan to write more about this: I think it's generally underrated as a concept.