DOGE Might Be Worth Influencing
post by LTM · 2025-04-11T00:40:34.866Z · LW · GW · 1 comment
This is a link post for https://routecause.substack.com/p/doge-might-be-worth-influencing
TL;DR DOGE is central to initiatives important to senior decision makers in the US government. There are legitimate concerns about their longevity of influence, politicised nature, and independence from existing agendas. However, their proximity to decision-makers and growing administrative authority makes them an unusually accessible channel for advancing AI safety considerations. Their influence could significantly shape America's approach during critical technological developments, offering a cost-effective point of intervention.
Introduction
DOGE, the Department of Government Efficiency, has attracted widespread attention in recent months. Its secretive nature, apparent influence over President Trump, and the questionable credentials and experience of its members have made the quasi-department both an object of ridicule and a source of genuine concern.
Despite this, it presents a unique opportunity for raising the political salience of AI safety concerns through influencing US government policy. DOGE's staff skews notably young and technologically capable (although much of the information about the semi-government department is difficult to verify). They have shown remarkable willingness to upend decades of convention and to participate in the execution of initiatives that would have been unthinkable under any previous administration. [1]
Most significantly, DOGE appears to have direct access to the highest echelons of the US government. Influencing a government is hard, with most estimates of the total cost of US lobbying settling in the low billions of dollars a year. Yet this small cohort of relatively inexperienced 20-somethings has found itself with the ear of, at the very least, the richest man in the world, and possibly much of Trump's cabinet.
DOGE offers two distinct paths to shaping a beneficial post-AGI future for humanity.
- Channeling their significant discretion in how specific policies are executed across the US government.
- Using their proximity to senior officials to raise AI safety awareness among those who will make critical decisions during technological transitions.
Directly influencing the US government
Although new, small, and widely criticised, DOGE has demonstrated a remarkable capacity to acquire decision-making authority and divert significant resources. In their brief existence, they have already gained temporary administrative privileges over the centralised payment system of the US government, orchestrated cuts to USAID, and threatened further disruption across state-dependent enterprises. Most of these actions likely originated as directives from Trump, Musk, and other senior figures in the US government, with DOGE merely implementing rather than guiding. However, this distinction may prove less important than their growing operational authority.
The implementation of any grand vision inevitably requires thousands of granular decisions abstracted away from the level above. In both the private sector and in government, this creates enormous influence opportunities for those positioned between those with the power to make decisions and the technical specialists with the skills to implement them. These intermediate layers in a hierarchy shape outcomes through countless small acts of discretion that cumulatively define how an abstract directive manifests in practice.
DOGE forms a kind of middleware between the President's inner circle and the traditional machinery of government. Conventional initiatives will reliably continue to flow through established channels. But when personal agendas require demonstrated personal loyalty to be reliably executed, DOGE is ideally placed to become the preferred vehicle. In these scenarios, the personal judgement of a small cohort without substantial governmental expertise acquires great importance.
At the advent of powerful AI, the White House may want to pursue policies falling into the latter category: ideas which cut against norms or formal constraints on executive power, and which could only be meaningfully executed by a loyal, technically aware team whom Trump has come to trust. If AGI is developed during Trump's presidency, before 2028, the safety consciousness of those implementing these unconventional directives could determine whether humanity has a future at all.
Even for initiatives going through the conventional government apparatus, the first steps and initial direction may increasingly pass through this unofficial middleware. Early decisions set precedents and establish processes which can prove difficult to modify later. A cost-cutting programme targeting agencies such as USAISI, or the branches of the DoD involved in monitoring foreign AI development, should be guided as much as possible by individuals who understand AI's technical trajectory and its existential implications for humanity. Those individuals may well be current members of DOGE staff, and improving their understanding of the threat AI poses and the governance interventions being pursued could save valuable programmes from being sacrificed in the name of efficiency.
Raising the Salience of AI Safety
Alternatively, they could raise the salience of AI safety concerns within the upper echelons of the US government. Trump has spoken highly of DOGE staffers, highlighting their intelligence and generally supporting the project. He likely has a reasonable amount of contact with them, and certainly has a lot of contact with Musk, who seems to lead the project personally. Should AI safety perspectives become widely embraced within the DOGE team, these ideas might be meaningfully considered rather than immediately dismissed.
This strategy seems quite nebulous for a few reasons. Trump acts on impulse and is not the type to carefully consider expert advice. Yet despite his impulsive nature, most of his successfully implemented and long-lasting policies are ones he had been promising for years. Although the exact implementation of tariffs may have been a surprise, they reflect convictions about trade which he has expressed publicly since well before he seriously entered politics. Given this pattern, he may not be receptive to the radically unfamiliar and technically abstract concerns of his underlings getting in the way of his reshaping of the world.
Regardless of Trump’s narrow interests, the administration inevitably confronts issues extending beyond the President’s core priorities. The emergence of powerful AI, while it touches on concerns around China and the labour market, has no ready answer within the platform Trump campaigned on. In these domains, outside perspectives to which senior administration figures are frequently exposed may carry significant influence. This creates natural openings for technically proficient DOGE staffers to introduce safety initiatives into conversations where priorities have not yet been set in stone.
Reasons For / Against
These two pathways, directly applying discretion in implementation and acting as intellectual conduits, represent the primary theories of change. Whether pursuing this opportunity is worth your time depends on how you feel about the exact structure of DOGE and how it fits into the White House and broader federal government.
Why DOGE presents an opportunity:
- Their backgrounds align with AI safety awareness. Young, technically educated software engineers are far more likely to take emerging technological risks seriously than both traditional government officials and the general public.
- DOGE brings legitimate technical credentials. While credentials, technical experience, and the trust of people in power don't mean everything, they do imply a level of technical literacy not often found in government circles. This expertise differential creates natural influence opportunities, as decision-makers frequently defer to specialists on complex matters beyond their own technical understanding.
- Small teams are more efficient to influence. Influencing practical policy usually requires convincing a very large number of people across a complex bureaucracy. Initiatives frequently stall amid departmental friction, procedural hurdles, and external red tape. In contrast, this small team has been able to exert at least some influence on governmental operations within months. If you are looking to influence the US government, this is likely a very cheap and quick way of trying to do so.
Why DOGE is not worth the investment:
- Their influence may be short-lived. Although when Trump announced the creation of DOGE he said the unit would last until at least July 4th 2026, he isn’t known to honour his public declarations.
- Their controversial reputation creates association risks. Exact metrics are hard to come by, but it seems safe to assume that DOGE is widely disliked among America’s political and economic leadership. Associating AI safety policies with DOGE initiatives risks unnecessarily politicising these issues when broad and bipartisan coordination is most critical. This polarisation could undermine the consensus building needed for effective governance during rapid technological change.
- Personal loyalty to Trump and Musk was a selecting factor. While most DOGE staff likely have a high degree of technical competence, so do many people not trying to rewrite the social security system from scratch. They were chosen for more than just technical knowledge, and their loyalty may make them unwilling to advocate positions unpopular with those they look up to. Particularly in times of crisis, they may not be willing to act against their patrons in order to pursue the best interests of humanity.
- DOGE has a specific role. The team exists to find fraud, reduce expenditure, and increase executive control over the US government’s financial outgoings. This goal is dear to many of the important people DOGE could be in a position to influence. This mandate may restrict their ability to prioritise other concerns, even ones which seem far more pressing to them as individuals.
Informal Influence Matters
If DOGE or its successor programmes become a fixture of this administration, their staff members are likely to be physically near the halls of power during rapidly evolving situations. This creates rare opportunities for influence at critical times.
The handful of individuals with direct access to the President, or to his social media presence, or who simply happen to be on hand when a critical decision is made, have the potential to change the course of a crisis.
Consider the aftermath of Joe Biden’s disastrous final debate performance. The question of whether he would run persisted for weeks without resolution. The conclusive announcement ending his campaign came not through formal White House channels or a televised address, but through a single tweet. A message, almost certainly not drafted by Biden himself, effectively ended his candidacy outright. If the matter had still been undecided at that point, the tweet turned his withdrawal from a painful possibility into an irreversible political reality.
An individual with nothing more than access privileges to Biden’s Twitter account effectively put him out of the race, changing the course of American democracy in the process. This informal actor may have been properly instructed and acting according to their place in an organisation, but it’s hard to tell for sure. They may simply have decided to do what they thought everyone supported but no one else was brave enough to do.
There are millions of those kinds of informal power wielders all around the world. And it seems like we know the names of several who operate in the highest offices of the most AI-enabled country in the world.
How to ‘Influence’
You can try to change the functioning of DOGE in two ways. Talk to them, or join them. [2]
Many people who work for DOGE have been publicly identified, formally or otherwise. Giving them information about the state of AI in the world and the risks it poses to humanity is a legally and ethically sound way to improve decision making within the US government. Investing in technical safety research and implementing international regulatory measures is in everyone’s interests. Ensuring DOGE staff understand how to best advance these objectives is worthwhile engagement.
A significantly more risky alternative is to directly seek employment within the quasi-department. This would open up much more potential for direct impact, but could ruin your personal credibility depending on what DOGE goes on to do and how they are perceived by the public.
Good Value Influence
Through its proximity to powerful people in the US government and its role in enacting their vision, DOGE has found itself in an abnormally influential position. With its young and technically proficient staff, the organisation represents an unusually accessible channel for introducing AI safety concerns into federal decision making. Ensuring they have the tools needed to guide the world’s foremost superpower towards safe AI development is a relatively cheap and easy way to prepare us all for a crisis.
There are risks associated with intentionally raising the profile of AI safety concerns with DOGE. Most notably, this runs the risk of politicising AI safety in the eyes of the US political elite and the voting public. Such a project could also exacerbate race dynamics by further convincing decision makers in the US government that AI development is a threat, and so the US should make sure they stay ahead.
Despite this, the practical reality is that providing DOGE personnel with information about frontier AI development and potential safety frameworks is a surprisingly cheap and realistic intervention. Influencing the US government as a whole is practically impossible, and DOGE might be the avenue to making a reasonable attempt.
Conclusion
DOGE's distinctive position near the heart of the US government presents a key opportunity for raising the political salience of AI safety considerations. Direct outreach to DOGE personnel could inadvertently politicise safety concerns or accelerate race dynamics. However, it represents a highly cost-efficient pathway for shaping how the world's most powerful nation approaches emerging AI systems. Through their direct implementation of White House policies and the advice they may provide to senior decision makers, DOGE could use an improved understanding of the dangers of powerful AI to help push humanity that bit closer to the bright future we deserve.
[1] Emailing every federal employee asking them to explain their work, giving yourself the ability to edit the federal payments system on a whim, and trying to rewrite the social security system from scratch. These feel more like the antics of an SF software startup than a serious government department.
[2] To clarify, I am not advocating for applying “influence” in a manipulative or power-seeking sense. The dangers of AGI as currently developed are discussed very publicly, as they should be, along with technical and governance approaches for risk mitigation. Encouraging governments, businesses, and the public to take these risks seriously represents a wholly legitimate effort to give humanity the best post-AGI future possible.
Comments
comment by Viliam · 2025-04-11T08:34:41.547Z · LW(p) · GW(p)
I guess I have a different model of DOGE. In my opinion, their actual purpose is to provide a pretext to fire any government employee who might pose an obstacle to Trump. For example, suppose that you start investigating some illegal activity and... surprise!, the next day you and a dozen other randomly selected people are fired in the name of fighting bureaucracy and increasing government efficiency... and no one will pay attention to this, because too many things like that happen every day.