Insights from a Lawyer turned AI Safety researcher (ShortForm)

post by Katalina Hernandez (katalina-hernandez) · 2025-03-03T19:14:49.241Z · LW · GW · 5 comments

Contents

  Main Quick Take for debate: "Alignment with human intent" explicitly mentioned in European law
  Main Post for debate: For Policy's Sake: Why We Must Distinguish AI Safety from AI Security in Regulatory Governance
    TL;DR
  Opinion Post: Scaling AI Regulation: Realistically, what Can (and Can't) Be Regulated?

I will use this Shortform to link my posts and Quick Takes:

Main Quick Take for debate: "Alignment with human intent" explicitly mentioned in European law

The AI alignment community had a major victory in the regulatory landscape, and it went unnoticed by many.

The EU AI Act explicitly mentions "alignment with human intent" as a key focus area in relation to regulation of systemic risks.

As far as I know, this is the first time "alignment" has been mentioned in a law or major regulatory text.

It's buried in Recital 110, but it's there.

And it also makes research on AI Control relevant: 

"International approaches have so far identified the need to pay attention to risks from potential intentional misuse or unintended issues of control relating to alignment with human intent".

The EU AI Act also mentions alignment as part of the Technical documentation that AI developers must make publicly available.

This means that alignment is now part of the EU's regulatory vocabulary.

 

Main Post for debate: For Policy's Sake: Why We Must Distinguish AI Safety from AI Security in Regulatory Governance [LW · GW]

๐“๐‹;๐ƒ๐‘

I understand that Safety and Security are two sides of the same coin.

But if we don't clearly articulate the intent behind AI safety evaluations, we risk misallocating stakeholder responsibilities when defining best practices or regulatory standards.

For instance, a provider might point to adversarial robustness testing as evidence of "safety" compliance, when in fact the measure only hardens the model against external threats (security), without addressing the internal model behaviors that could still cause harm to users.

๐ˆ๐Ÿ ๐ซ๐ž๐ ๐ฎ๐ฅ๐š๐ญ๐จ๐ซ๐ฌ ๐œ๐จ๐ง๐Ÿ๐ฅ๐š๐ญ๐ž ๐ญ๐ก๐ž๐ฌ๐ž, ๐ก๐ข๐ ๐ก-๐œ๐š๐ฉ๐š๐›๐ข๐ฅ๐ข๐ญ๐ฒ ๐ฅ๐š๐›๐ฌ ๐ฆ๐ข๐ ๐ก๐ญ "๐ฆ๐ž๐ž๐ญ ๐ญ๐ก๐ž ๐ฅ๐ž๐ญ๐ญ๐ž๐ซ ๐จ๐Ÿ ๐ญ๐ก๐ž ๐ฅ๐š๐ฐ" ๐ฐ๐ก๐ข๐ฅ๐ž ๐›๐ฒ๐ฉ๐š๐ฌ๐ฌ๐ข๐ง๐  ๐ญ๐ก๐ž ๐ฌ๐ฉ๐ข๐ซ๐ข๐ญ ๐จ๐Ÿ ๐ฌ๐š๐Ÿ๐ž๐ญ๐ฒ ๐š๐ฅ๐ญ๐จ๐ ๐ž๐ญ๐ก๐ž๐ซ.

 

Opinion Post: Scaling AI Regulation: Realistically, what Can (and Can't) Be Regulated? [LW · GW]

5 comments

Comments sorted by top scores.

comment by Katalina Hernandez (katalina-hernandez) · 2025-04-14T10:47:13.408Z · LW(p) · GW(p)

The AI alignment community had a major victory in the regulatory landscape, and it went unnoticed by many.

The EU AI Act explicitly mentions "alignment with human intent" as a key focus area in relation to regulation of systemic risks.

As far as I know, this is the first time "alignment" has been mentioned in a law or major regulatory text.

It's buried in Recital 110, but it's there. And it also makes research on AI Control relevant:

"International approaches have so far identified the need to pay attention to risks from potential intentional misuse or unintended issues of control relating to alignment with human intent".

The EU AI Act also mentions alignment as part of the Technical documentation that AI developers must make publicly available.

This means that alignment is now part of the EU's regulatory vocabulary.

But here's the issue: most AI governance professionals and policymakers still don't know what it really means, or how your research connects to it.

I'm trying to build a space where AI Safety and AI Governance communities can actually talk to each other.

If you're curious, I wrote an article about this, aimed at corporate decision-makers who lack literacy in this area.

Would love any feedback, especially from folks thinking about how alignment ideas can scale into the policy domain.

Here is the Substack link (I also posted it on LinkedIn): 

https://open.substack.com/pub/katalinahernandez/p/why-should-ai-governance-professionals?utm_source=share&utm_medium=android&r=1j2joa

My intuition says that this was a push from the Future of Life Institute.

Thoughts? Did you know about this already?

Replies from: Lblack, lucie-philippon
↑ comment by Lucius Bushnaq (Lblack) · 2025-04-14T12:23:00.366Z · LW(p) · GW(p)

I did not know about this already.

Replies from: katalina-hernandez
↑ comment by Katalina Hernandez (katalina-hernandez) · 2025-04-14T12:29:45.571Z · LW(p) · GW(p)

I don't think it's been widely discussed within AI Safety forums. Do you have any other comments, though? Epistemic pessimism is welcome XD. But I did think that this was at least update-worthy.

↑ comment by Lucie Philippon (lucie-philippon) · 2025-04-14T17:52:54.020Z · LW(p) · GW(p)

I did not know about this either. Do you know whether the EAs in the EU Commission know about it?

Replies from: katalina-hernandez
↑ comment by Katalina Hernandez (katalina-hernandez) · 2025-04-14T18:33:35.237Z · LW(p) · GW(p)

Hi Lucie, thanks so much for your comment!

I'm not very involved with the Effective Altruism community myself. I did post the same Quick Take on the EA Forum today, but I haven't received any responses there yet, so I can't really say for sure how widely known this is.

For context: I'm a lawyer working in AI governance and data protection, and I've also been doing independent AI safety research from a policy angle. That's how I came across this, just by going through the full text of the AI Act as part of my research.

My guess is that some of the EAs working closely on policy probably do know about it, and influenced this text too! But it doesn't seem to have been broadly highlighted or discussed in alignment forums so far, which is why I thought it might be worth flagging.

Happy to share more if helpful, or to connect further on this.