The Milton Friedman Model of Policy Change

post by JohnofCharleston · 2025-03-04T00:38:56.778Z · LW · GW · 2 comments


One-line summary: Most policy change outside a prior Overton Window comes about by policy advocates skillfully exploiting a crisis.

 

In the last year or so, I’ve had dozens of conversations about the DC policy community. People unfamiliar with this community often share a flawed assumption: that reaching policymakers, and having a fair opportunity to convince them of your ideas, is difficult. As “we”[1] have taken more of an interest in public policy, and politics has taken more of an interest in us, I think it’s important to get the building blocks right.

Policymakers are much easier to reach than most people think. You can just schedule meetings with congressional staff, without deep credentials.[2] Meeting with the members themselves is not much harder. Executive Branch agencies have a bit more of a moat, but they still openly solicit public feedback.[3] These discussions will often go well. By now policymakers at every level have been introduced to our arguments; many seem to agree in principle… and nothing seems to happen.

Those from outside DC worry that they haven’t met the right people, that they haven’t gotten the right kind of “yes”, or that lobbyists are working at cross purposes from the shadows. That isn’t it at all. Policymakers are mostly waiting for an opening, a crisis, when the issue will naturally come up. They often believe that pushing before then is pointless, and reasonably fear that trying anyway can be counterproductive.

A Model of Policy Change

“There is enormous inertia — a tyranny of the status quo — in private and especially governmental arrangements. Only a crisis — actual or perceived — produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes politically inevitable.”[4] 

Milton Friedman

That quote is the clearest framing of the problem I’ve found; every sentence is doing work. This is what people who want to make policy change are up against, especially when that policy change is outside the current Overton Window. Epistemically, I believe his framing at about 90% strength. I quibble with Friedman’s assertion that only crises can produce real change. But I agree this model explains most major policy change, and I still struggle to find good counter-examples.

Crises Can Be Schelling Points

This theory, which also underlies Rahm Emanuel’s pithier “Never let a good crisis go to waste,” is widely believed in DC. It’s how policies previously outside the Overton Window have been passed hastily, as in the response to the 2008 financial crisis.[5]

It’s also why policies don’t always have to be closely related to the crisis that spawned them, like FDA reforms after thalidomide.[6] 

Policy change is, at its core, a coordination problem. In a system with many veto points, like the US federal government, there is a strong presumption for doing nothing. That presumption should usually be ours as well: the country has done well with its existing policy in most areas, and even in areas where existing policies are far from optimal, random changes to those policies are usually more harmful than helpful.

Avoid Being Seen As “Not Serious”

Policymakers themselves face serious bottlenecks. There is less division between policy-makers and policy-executors than people think. Congress is primarily a policy-setting body, but it also participates in foreign policy, conducts investigations, and makes substantive budgetary determinations. At the other end of Pennsylvania Avenue, the Executive Branch has ended up with much more rule-making authority than James Madison intended. Those long-term responsibilities have to be balanced against the crisis of the day. No one wants to start a policy-making project when it won’t “get traction.”

Many people misunderstand the problem with pushing for policy that’s outside the Overton Window. It would be difficult to find a policymaker in DC who isn’t happy to share a heresy or two with you, a person they’ve just met. The taboo policy preference isn’t the problem; it’s the implication that you don’t understand their constraints.

Unless you know what you’re doing and explain your theory of change, asking a policymaker for help in moving an Overton Window is a bigger ask than you may realize. You’re inadvertently asking them to do things the hard, tedious way that almost never works. By making the ask at all, you’re signaling that either you don’t understand how most big policy change happens, or that you misunderstand how radical your suggested policy is. Because policymakers are so easy to reach, they have conversations like that relatively often. Once they slot you into their “not serious” bucket, they’ll remain agreeable, but won’t be open to further policy suggestions from you.

What Crises Can We Predict?

The takeaway from this model is that people who want radical policy change need to be flexible and adaptable. They need to:

  • Develop concrete, implementable policy proposals before a crisis hits.
  • Keep those proposals alive and available until, as Friedman put it, the politically impossible becomes politically inevitable.
  • When a crisis creates an opening, move quickly and argue that they called it.

At the “called it” step, when you argue that you predicted this and that your policy would have prevented/addressed/mitigated the crisis, it helps if it’s true.

What crises, real or perceived, might surprise policymakers in the next few years? Can we predict the smoke?[7] Can we write good, implementable policy proposals to address those crises?

If so, we should call our shots: publish our predictions and proposals somewhere we can refer back to them later. They may come in handy when we least expect it.[8]

  1. ^

    Left deliberately undefined, so I don’t get yelled at. Again.

  2. ^
  3. ^

     Up to and including the White House requesting comment on behalf of the Office of Science and Technology Policy: https://www.whitehouse.gov/briefings-statements/2025/02/public-comment-invited-on-artificial-intelligence-action-plan/

  4. ^

     “Capitalism and Freedom”, 1982 edition from University of Chicago Press, pages xiii-xiv.

  5. ^

     “Stress Test: Reflections on Financial Crises”, 2014 edition from Crown, New York.

  6. ^
  7. ^
  8. ^

2 comments


comment by mako yass (MakoYass) · 2025-03-04T03:44:31.661Z · LW(p) · GW(p)

I wonder what the crisis will be.

I think it's quite likely that if there is a crisis that leads to a beneficial response, it'll be one of these three:

  • An undeployed privately developed system, neither clearly aligned nor misaligned, either:
    • passes the Humanity's Last Exam benchmark, demonstrating ASI, and the developers go to Congress and say "we have a godlike creature here, you can all talk to it if you don't believe us, it's time to act accordingly."
    • not quite doing that, but demonstrating dangerous capability levels in red-teaming, i.e., replication ability, the ability to operate independently, passing the hardest versions of the Turing test, getting access to biolabs, etc. And METR, and hopefully their client, go to Congress and say "this AI stuff is a very dangerous situation, and now we can prove it."
  • A deployed military (beyond-frontier) system demonstrates such generality that, e.g., Palmer Luckey (possibly specifically Palmer Luckey) has to go to Congress and confess something like "that thing we were building for coordinating military operations and providing deterrence, it turns out it can also coordinate other really beneficial tasks, like disaster relief, mining, carbon drawdown, research, you know, curing cancer? But we aren't being asked to use it for those tasks. So, what are we supposed to do? Shouldn't we be using it for that kind of thing?" This could lead to some mildly dystopian outcomes, or not; I don't think Congress or the emerging post-prime defence research scene is evil, and I think it's pretty likely they'd decide to share it with the world (though I doubt they'd seek direct input from the rest of the world on how it should be aligned).

Some of the crises I expect, I guess, won't be recognized as crises. Boiled-frog situations.

  • A private system passes those tests, but instead of doing the responsible thing and raising the alarm, the company just treats it like a normal release and sells it. (and the die is rolled and we live or we don't.)

Or crises in the deployment of AI that reinforce the "AI as tool" frame so deeply that it becomes harder to discuss preparations for AI as independent agents:

  • Automated invasion: a country is successfully invaded, disarmed, controlled, and reshaped with almost entirely automated systems and minimal human presence from the invading side. Probable in Gaza or Taiwan.
    • It's hard to imagine a useful policy response to this. I can only imagine this leading to reactions like "Wow. So dystopian and oppressive. They Should Not have done that and we should write them some sternly worded letters at the UN. Also let's build stronger AI weapons so that they can't do that to us."
  • A terrorist attack or a targeted assassination using lethal autonomous weapons.
    • I expect this to be treated as if it's just a new kind of bomb.

comment by davekasten · 2025-03-04T01:10:30.268Z · LW(p) · GW(p)

This essay earns a read for the line, "It would be difficult to find a policymaker in DC who isn’t happy to share a heresy or two with you, a person they’ve just met" alone.

I would amplify to suggest that while many things are outside the Overton Window, policymakers are also aware of the concept of slowly moving the Overton Window, and if you explicitly admit you're doing that, they're usually on board (see, e.g., the conservative legal movement, the renewable energy movement, etc.). It's mostly when you don't realize that's what you're proposing that you trigger a dismissive response.