Scaling AI Regulation: Realistically, What Can (and Can’t) Be Regulated?

post by Katalina Hernandez (katalina-hernandez) · 2025-03-11


There’s a certain frustration that comes with watching people regulate a field they don’t understand.

If you’ve ever spent time in AI policy, you’ve probably had moments where you wished policymakers, lawyers, or corporate compliance teams would just listen.

That they would put aside grandstanding, drop the vague legalese, and ask the right questions before drafting rules that will shape the future.

Maybe you’ve watched them frame AI risk in ways that are sadly disconnected from reality. Maybe you’ve seen them focus on what sounds good on paper, rather than what would actually work in practice. 

Maybe you’ve been in meetings where regulators ask how we can “prove” that AI won’t be misused, or that it will follow human expectations, without realizing that’s a set of unsolved problems...

Maybe, at some point, you’ve wondered:

I am not an ML engineer or an alignment researcher. My background is in law and AI governance, and I just spend a lot of time thinking about these gaps.

I believe the only way regulation can work is if it follows safety research, rather than pretending compliance structures alone will make AI safer.

I know that many in this community are skeptical, if not outright dismissive, of regulatory efforts like the EU AI Act.

And to be honest, the more I work in policy, the more I develop some of that skepticism too. 

Not about AI governance per se, but about approaches that rigidly separate Policy and Governance from Safety in silos that do not interact. 

If policymakers do not follow along once we mention basic terms such as "mechanistic interpretability", "scalable oversight", or "alignment constraints", how can regulation even begin to be effective?

That said, regulation is happening, albeit slowly and with flaws. Still, it’s going to shape AI deployment whether we like it or not. 

And from where I stand, we should not be content with AI regulations being drafted without meaningful input from Safety researchers.

Without technical literacy, and without accounting for the actual limits of AI systems to be transparent and to meet "legal definitions" of explainability.

I’m posting here because I want to hear from you:

Is regulation completely useless? 

Should we assume that AI safety is an unsolvable governance problem and that researchers will have to figure it out on their own? Or are there specific things you think could be useful if done right?

If regulators were actually listening to you, what would you tell them? 

If you had control over policy, what would you mandate or prohibit? What interventions could actually move the needle?

How do you see the "Brussels effect" playing out? 

Could the EU’s approach influence AI policy elsewhere, or is this just another case of performative compliance?

Are sandboxes a step in the right direction? 

The EU AI Act mandates that every member state establish at least one regulatory sandbox by mid-2026.

Essentially, these are controlled environments where AI-related innovation can be tested under regulatory oversight. Is this a waste of time, or do you see potential in a structured approach to AI experimentation?

The EU AI Act introduces some oversight obligations.

But it does not mandate that funds be allocated to specific areas of research such as mechanistic interpretability, alignment, or even "safety research" in generic terms.

Most of the obligations imposed on AI developers are about drafting transparent technical documentation for external auditing and submitting it to regulatory bodies. Nothing too different from what companies like Anthropic or OpenAI already do. 

This is where my personal skepticism comes from: the EU had an opportunity to influence the pace of safety research and innovation. Instead, it emphasized obligations to "prove compliance with the Act" and did not provide meaningful guidance on what would constitute adequate technical compliance benchmarks.

So, in your view:

Are we regulating the right things? If not, what would real AI safety regulation look like? Should we even try? Or is this all just buying time?

I’m asking all of this in good faith. I’m not here to defend regulation; I’m stress-testing it.

I also know LessWrong is not the place for sugarcoating. So if you think governance is a lost cause, if you think the AI Act is a bureaucratic disaster, if you think policymakers will always be 10 steps behind... I want to hear it.

But I also want to hear if there’s hope.

Where do you think regulation could actually help? If you were forced to design a policy framework that wasn’t useless, what would it look like? If anything in AI governance or someone's policy work has ever impressed you, what was it?

I know governance people and safety people often work in silos. That’s why I want to hear your perspectives.

One relevant provision that caught my attention is Article 68 of the EU AI Act, which mandates the creation of a Scientific Panel of Independent Experts.

This panel will be responsible for advising the AI Office on systemic risks, oversight methodologies, and classification of general-purpose AI models. Notably, the selection criteria emphasize expertise in AI, independence from providers, and the ability to operate objectively.

Would any of you consider being part of this panel? 

I don’t believe residency in the EU is a requirement, but I can confirm this through my contacts at the AI Office. Given the skepticism toward governance structures, do you think this panel has any potential to shape meaningful oversight, or is this just another bureaucratic layer without real enforcement power?


Tear this apart. Tell me why I’m wrong. Or, if there’s even one thing you think could work, tell me what that is.

Let me know if you'd like to see a bigger breakdown of the different provisions of the EU AI Act from a Safety perspective, and which areas are the most problematic!

Looking forward to the discussion.
