Why the 2024 election matters, the AI risk case for Harris, & what you can do to help

post by Alex Lintz (alex-lintz) · 2024-09-24T19:32:46.893Z · LW · GW · 2 comments

Contents

  Executive Summary
    This election matters more than most give it credit for
    This election is more tractable than most believe
    Donating & volunteering are likely among the best ways to spend money and time this year
  Introduction
  The 2024 election is much more important than other elections
    This is an especially critical period for AI
    Assessing the candidates on their ability & willingness to mitigate catastrophic risks from AI
        Other characteristics relevant to navigating risks from AI:
    A second Trump term would likely be far more damaging for liberal democracy than the last
  Influencing the US election is tractable
    US presidential elections are often close
        US presidential elections are often surprisingly close.
        Given how close US elections tend to be, many efforts can be counterfactually responsible for flipping the election.
    Having an impact on the election is relatively straightforward
        The election is more tractable than a lot of other work.
    There is still low-hanging fruit
        Most political donations don't go to effective organizations.
        Many promising strategies have not been sufficiently explored or are underfunded.
        Even many top organizations aren’t operating very efficiently.
    Electoral politics has poor incentives and talent retention
        Top talent is not well-incentivized to work in electoral politics 
        Efficacy is often poorly incentivized
  Should you donate to the election?
    What’s the chance that donations flip the election?
    Overall effects on existential risk: Trump vs Harris
        This quick guesswork implies a ~15% (or 1.25 percentage point) reduction in the likelihood of an existential catastrophe due to AGI within the next 8 years if Harris wins.
    Assessing the relative value of donations
    Downside risks & clarity of impact
    Conclusion & recommendations
  Donation, volunteer, & fundraising opportunities
    Top recommendations
    Get involved
  Appendix
    How the model works: Estimating the probability of $10 million flipping the election
    The opportunity cost of a Trump presidency
    The candidates on some EA cause areas
      Artificial Intelligence
        Candidates’ Strengths and AI
        Positions of the candidates on AI
        Trump has expressed some concerns about AI but some of his top supporters seem to be vocal AI accelerationists with a strong interest in shaping AI policy in a second Trump administration.
        Harris tends to focus on present harms, but has expressed some concern about existential risk.
        Expected impact of the candidates on AI policy
      Pandemic response and biosecurity
        The Biden Administration has taken action on biosecurity which Trump has expressed plans to reverse if reelected.
        Trump presided over the beginning of the Covid pandemic and had, at best, a mixed record in terms of handling it. He may appoint known anti-vaccine activist RFK Jr. to a leading role in his administration.
      Global health
      Climate change
        Trump is clearly not going to take positive action on climate change and is likely to roll back much of the progress made under Biden
      Nuclear Risk
        Trump probably exacerbated nuclear risk in his first term and would likely do so again in a second term.
        Harris seems likely to continue with the status quo on US nuclear policy, perhaps making marginal progress on increasing safety.
      Farm animal welfare
        Harris appears to be a strong advocate of animal welfare, even addressing some issues for farmed animal welfare.
        We know little about Trump’s views on animal welfare but it seems unlikely we’ll see any positive developments on farm animal welfare from his administration.
      US-China & international relations
        The next administration will inherit a tense international situation which could easily get worse. AI has the potential to add further challenges.
        Harris, assuming she acts similarly to Biden, is likely to perform well when it comes to international relations.
        While Trump had some foreign policy successes, his second term would likely degrade US alliances and could open up new security problems for the US and its allies around the world.
[Image: Jason Vines, 2015. Creative Commons]

Epistemic status: I’ve attempted to be objective in my comparison of candidates but the result is surely far from perfect. Also, given the election is so soon, I haven’t had time to solicit nearly as much feedback as I would have liked. Please take our conclusions with a grain of salt. 

[Note: This is similar to a post [EA · GW] I wrote on the EA Forum but contains important revisions and more information on cost effectiveness & AI risk concerns.][2]

Executive Summary

I have spent recent months investigating and working on the election with a group of volunteers[1], many of whom come from AI safety circles. This piece makes the case both that electing Harris is incredibly important and that it's still possible to meaningfully affect her chances of victory. Taking into account key considerations (e.g. AI risk concerns and the potential for short timelines) and using some rough modeling, we conclude that contributing to the 2024 US election may be among the most impactful uses of time and money currently available. 

This election matters more than most give it credit for

This election is more tractable than most believe

Donating & volunteering are likely among the best ways to spend money and time this year


Introduction

The US is the world’s most powerful state and the only superpower that is also a liberal democracy. If one of the presidential candidates is expected to be much worse on important issues (e.g., protecting liberal democracy, AI safety, biosecurity, climate change, global health, or animal welfare), and if the election is expected to be very close, then contributing to helping the better candidate win has the potential for extraordinary value.

We know of no lever, other than the US presidential election, that allows as many individuals to clearly influence such important trajectory changes. Several community members have dropped other AI work to stop Trump from winning this year. 

The 2024 election is much more important than other elections

This is an especially critical period for AI

Many leaders and employees at frontier AI labs have discussed having TAI timelines before 2030 (Sam Altman: 4-5 years, Dario Amodei: Human level in 2 years, Shane Legg: 50% AGI by 2028). This means the next President has a good chance of presiding over a decisive period for addressing AI risk concerns.

If we reach AGI in the next 4 years, there will be many key decisions to be made around AI governance, deployment, and international cooperation. The ability of the president to think critically, learn from advisors, and work with competent people may be crucial for humanity's future. Even if timelines are longer than 4 years, the next few years will be important for putting in place protections, safety funding, and legislation that can mitigate risk down the line.

Assessing the candidates on their ability & willingness to mitigate catastrophic risks from AI

While other factors matter a lot when it comes to the value of a presidency, let’s focus for now on how we expect the candidates to fare on preventing existential risk from AI. To add a degree of objectivity and specificity, we’ll focus on individual characteristics we’d like to see in a world leader governing the transition to potentially dangerous AI systems.

Appreciation of risks from AI: Unclear

Willingness to listen to AI-concerned advisors: Moderate Harris win

General competence: Moderate Harris win

Sense of caution and taking general risks seriously (e.g. about deployment): Moderate Harris win  

Willingness to do things outside the Overton Window (which might be necessary to ensure AI goes well): Moderate Trump win 

Smart and able to discern good arguments: Strong Harris win

Respect for democratic norms: Strong Harris win

Not overly motivated by power[4], someone who is unlikely to use powerful AI for personal or political gain: Strong Harris win

Competence in executing projects, not prone to bureaucratic red tape: Moderate Trump win

Other characteristics relevant to navigating risks from AI:

Ultimately, people will rank the candidates on each of these metrics differently and the relative importance of each metric is unclear.

In our eyes some of the more important factors in Harris’ favor are her relative lack of power-seeking, the general competence of her & her team, her respect for democratic institutions, and the likelihood that she will listen to reasonable advisors about AI safety. On the other hand, Trump seems able and willing to execute on unusual policies and actions to a greater degree than Harris. It’s also possible he could swing hard toward being concerned about AI safety, though his position ultimately seems higher variance and it’s unclear whether he’d take reasonable actions were he convinced.  

Though Trump has some advantages, we think his strong power-seeking drive, lack of respect for democratic institutions, and general lack of competence are essentially disqualifying all on their own. We think these characteristics represent too great a risk when considering that the next president could be responsible for guiding the country through a transition to AGI. Overall, we believe Harris is considerably more likely to safely manage the transition to powerful AI systems compared to Trump.[6] 

A second Trump term would likely be far more damaging for liberal democracy than the last

Another key consideration for the value of a presidency is the likelihood that the president would weaken democratic institutions. Democracy welcomes free and open debate of ideas. It challenges leaders to benefit constituents — or be quickly, peacefully booted. Authoritarianism, meanwhile, has more often let an insulated group of elites sit back as famines, genocides, and other catastrophes unfolded around them — sometimes at their command. We expect a second Trump presidency to be much worse for liberal democracy, both domestically and globally (see the section on US-China and international relations below), than a Harris presidency.

In his first term, Trump and his allies were held back by sane civil servants who rejected their dangerous ideas, such as launching nuclear weapons or using the military to overturn the election. Trump’s failure to overturn the 2020 election has shaped his second-term agenda, which is now aimed at destroying checks and balances through unprecedented power over the military, courts, and key agencies. His plans reflect these new priorities:

  1. Execute a massive purge of independent civil servants: Trump has mentioned several times his desire to reinstate the “Schedule F” executive order. This would give him the power to fire up to 50,000 civil servants who have traditionally checked the president’s power, including in legal, regulatory, and military contexts.
  2. Assemble vetted loyalists: Trump’s allies have spent tens of millions on “Project 2025”, one goal of which is to screen Trump loyalists to replace independent civil servants. In their own words, “Our goal is to assemble an army of aligned, vetted, trained, and prepared conservatives to go to work on Day One to deconstruct the Administrative State.”
  3. Expand presidential control: As reported, “Project 2025 proposes that the entire federal bureaucracy, including independent agencies such as the Department of Justice, be placed under direct presidential control.”
  4. Appoint more loyal judges: While this happened in his first term as well, it’s become apparent just how impactful the appointment of judges loyal to Trump has been, at the Supreme Court, appellate, and district levels. Both Trump’s immunity ruling & Cannon’s ruling to throw out Trump’s classified documents case have been highly unusual & appear politically motivated.

Another key change from Trump’s last term is the presence of an extremely Trump-friendly Supreme Court which has, among other things, granted the president unprecedented immunity from the law. It remains to be seen just how much immunity Trump would have, but in her dissenting opinion, Justice Sotomayor wrote, “[When the president] uses his official powers in any way, under the majority's reasoning, he now will be insulated from criminal prosecution… [if he] orders the Navy's Seal Team 6 to assassinate a political rival? Immune.” It seems likely that Trump would be legally able to take or offer bribes, and it’s possible he couldn’t be prosecuted even for organizing a military coup. While the full implications remain unclear, experts are near-universally concerned about this ruling and its implications for how Trump could behave in a second term.

It’s worth spelling out just how concerning and forceful Trump’s attempts to overturn the 2020 election were. He pressured state officials not to certify the election, considered using the military to seize ballots in an effort to ‘prove fraud’, tried to use the DoJ to legitimize his fraud claims, pressured his VP to dispute the election results, and ultimately was responsible for a violent march on the Capitol.

Perhaps most damning, Trump tried to implement a fake electors plot in which he asked Mike Pence to certify electoral college votes that falsely claimed Trump had won the election in states he had lost. Dozens of the false electors have since been criminally indicted. Had Pence been willing to go along with the plot, it’s unclear what would have happened. Unfortunately, no such check is likely to exist in 2028 should an attempt be made to circumvent the process on behalf of the Republican nominee: Vance has said he would have gone along with the plot had he been VP at the time.

While each statement has been deniable, Trump has alluded to attempting to stay in power beyond a second term, saying things like, “We are going to win four more years. And then after that, we’ll go for another four years because they spied on my campaign. We should get a redo of four years.” Characteristically, he has also said he wouldn’t attempt a third term. It appears highly likely that Trump will take actions to further degrade democracy in the United States. The possibility that he attempts to illegally tamper with the 2028 election should be taken seriously. 

Influencing the US election is tractable

US presidential elections are often close

US presidential elections are often surprisingly close.

There’s a good chance the 2024 election will be very close too (i.e. likely decided by <300,000 votes)[7]

Given how close US elections tend to be, many efforts can be counterfactually responsible for flipping the election.

Having an impact on the election is relatively straightforward

The election is more tractable than a lot of other work.

There is still low-hanging fruit

Estimates of how cost-effectively top RCT-tested interventions generate net swing-state votes this election range from several hundred to several thousand dollars per vote.[8] Top non-RCT-able interventions could be even better.

Most political donations don't go to effective organizations.

Many promising strategies have not been sufficiently explored or are underfunded.

Even many top organizations aren’t operating very efficiently.

Electoral politics has poor incentives and talent retention

From an outside view it’s reasonable to believe that, given the amount of money spent each cycle, some kind of efficient market should exist within electoral politics. However, there are a number of characteristics of the sector which make it less efficient than one might think.

Top talent is not well-incentivized to work in electoral politics 

Efficacy is often poorly incentivized

Should you donate to the election?

What’s the chance that donations flip the election?

To get a better sense of the tractability of influencing the election, we made a model to estimate a lower bound for the effectiveness of donations. The model makes the following key assumptions:

Our model predicts that $10 million in donations to the most effective organizations would have between a 0.11% and 0.46% chance of flipping the election. Our mainline estimate is that $10 million would have a 0.16% chance of changing the outcome in Harris’ favor.

For more details on the model, see the Appendix.
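To give a flavor of the calculation (this is a minimal sketch, not the model itself, and all inputs here are assumptions chosen for illustration from the ranges in footnotes 8 and 16):

```python
# Illustrative back-of-the-envelope version of the flip-probability estimate.
# All inputs are assumed for illustration, not the model's actual figures.

donation = 10_000_000                   # dollars donated to effective organizations
cost_per_net_vote = 1_500               # assumed, within the ~$700-2,500 range in footnote 8
p_vote_flips_election = 1 / 4_000_000   # assumed, between the ~1-in-6M (generic swing state)
                                        # and ~1-in-3M (Pennsylvania) figures in footnote 16

net_votes = donation / cost_per_net_vote
p_flip = net_votes * p_vote_flips_election

print(f"{net_votes:,.0f} net votes -> {p_flip:.2%} chance of flipping the election")
# ~6,667 net votes -> ~0.17%, in the same ballpark as the 0.16% mainline estimate above
```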

Overall effects on existential risk: Trump vs Harris

The following is a rough attempt to estimate the effect of a Harris victory on mitigating existential risk from an AGI catastrophe within the next 8 years. This is based on extremely rough, made-up numbers, but we think it’s a useful exercise.  

Here are some numbers (feel free to make a copy to run your own numbers).

This quick guesswork implies a ~15% (or 1.25 percentage point) reduction in the likelihood of an existential catastrophe due to AGI within the next 8 years if Harris wins.

Other large contributing impacts are the effect of a Trump administration on our ability to survive AGI after 2032, the probability of nuclear war, our ability to reduce the chance of catastrophic biorisks, influence on future election cycles, and the risk of stable totalitarianism. We can also consider the potentially decreased value of the future should a Trump administration guide us into the post-AGI future.

If we conservatively assume these other risk factors are equivalent in value to the risk from AGI over the next 8 years, we estimate a Harris victory would reduce existential risk by 2.5 percentage points.
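As a sketch of how these pieces fit together, the arithmetic below uses made-up inputs chosen only so that the headline figures come out; the actual inputs are in the linked spreadsheet:

```python
# Hypothetical inputs chosen to reproduce the ~1.25pp / ~15% figures above;
# they are not the post's actual numbers.

p_agi_by_2032 = 0.25          # assumed P(AGI within the next 8 years)
p_doom_given_trump = 0.33     # assumed P(doom | Trump victory & AGI by 2032), cf. footnote 13
p_doom_given_harris = 0.28    # assumed P(doom | Harris victory & AGI by 2032), cf. footnote 12

risk_trump = p_agi_by_2032 * p_doom_given_trump      # ~8.3%
risk_harris = p_agi_by_2032 * p_doom_given_harris    # ~7.0%

abs_reduction = risk_trump - risk_harris             # ~1.25 percentage points
rel_reduction = abs_reduction / risk_trump           # ~15%
total_reduction = 2 * abs_reduction                  # ~2.5pp, doubling to account for the
                                                     # other risk factors discussed above

print(f"AGI-only: {abs_reduction:.2%} absolute ({rel_reduction:.0%} relative); "
      f"total: {total_reduction:.2%}")
```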

Assessing the relative value of donations

Combining these estimates with the earlier model of the effect of donations on flipping the election, the implied overall decrease in existential risk is 0.003-0.01 percentage points for each $10 million donated to improve Harris’ chance of victory. As an intuition check on what these numbers mean, we can calculate the theoretical impact of spending all the money committed to EA causes (let’s assume $26 billion) at this level of impact. Doing so would be similar in value to a 10-30 percentage point reduction in existential risk. It’s not clear how that compares with other causes, but it seems likely worthwhile to spend all EA money in exchange for a guaranteed 10 percentage point x-risk reduction. Of course, the election can’t productively absorb $26 billion (we know of $4 million in funding gaps we’re especially excited about and expect at least another $20 million could be absorbed productively).
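Spelled out, the combination of the two estimates looks like this (a rough check using the figures from this post; the $26 billion EA pot is the assumption stated above):

```python
# Combine the flip-probability range with the 2.5pp risk-reduction estimate.

p_flip_low, p_flip_main, p_flip_high = 0.0011, 0.0016, 0.0046  # per $10M donated (see model above)
xrisk_reduction_if_flipped = 0.025                              # 2.5 percentage points

for p_flip in (p_flip_low, p_flip_main, p_flip_high):
    print(f"{p_flip * xrisk_reduction_if_flipped:.4%} x-risk reduction per $10M")
    # ~0.003%, ~0.004%, ~0.01%

# Intuition check: spending an assumed $26B EA pot at this marginal rate
ea_pot = 26_000_000_000
for p_flip in (p_flip_main, p_flip_high):
    total = (ea_pot / 10_000_000) * p_flip * xrisk_reduction_if_flipped
    print(f"{total:.0%} equivalent total x-risk reduction")
    # ~10% and ~30%
```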

It’s challenging to compare donation opportunities against significantly different cause areas. However, considering marginal donation opportunities for the election and other top causes, we feel reasonably confident that election donations are among the best opportunities for donations currently available.

Downside risks & clarity of impact

The primary potential downside of donations to the election is that the EA & Rationalist communities could become more politicized and, as a result, less impactful under a potential Trump administration. We think this is a real concern and are wary of politicizing discussions around AI safety. We expect posts like this won’t have large effects on the margin and are likely worthwhile in expectation, especially given other posts arguing for Trump [EA · GW]. We’re doing this in part because there are so few posts about the election on Rationalist or EA sites.

On the other hand, perhaps we’re fundamentally wrong about which candidate is best. For instance, we could be wrong about Trump’s impact on the probability of existential risk. Perhaps it will be crucially important that our next president carries out some highly unusual and risky actions quickly; in a case like that, a Trump administration may be a better fit. Or perhaps Elon Musk will influence the next Trump administration’s AI policy to prioritize x-risk mitigation, as suggested by his support of SB 1047 (though his other actions and beliefs are cause for concern on this front).

Conclusion & recommendations

There is of course a high degree of uncertainty but we lean toward more donations from the rationalist and EA communities being worthwhile. We feel confident that at least $20 million would be worth donating but haven’t yet analyzed the extent to which cost effectiveness decreases after that point.

It’s worth noting as well that the election is an unusual opportunity in that it can absorb a lot of money in ways that AI governance or safety cannot.

Donation, volunteer, & fundraising opportunities

Top recommendations

We believe these recommendations, put together over the last few months by a team with considerable expertise in election impact evaluations, are the best publicly shareable resource for donors available today. Their top recommendations include the following:

The recommendations we can make publicly are, by necessity, sparse on details due to the dual-use nature of the research, among other things. If you’re interested in more information on which organizations to donate to and why, please reach out to ee.interventions@gmail.com. The authors of these recommendations would be happy to send more information about alternative donation opportunities and the methodology used to determine the best organizations.

The election as an intervention to improve AI governance is unique in that it’s relatively easy to fundraise among donors not concerned about existential risk. If you know high net-worth individuals who might be interested in a similar (but less AI-oriented) analysis, consider sending them our Substack post on this topic (sample email draft here). Please do reach out if you’re interested in doing this and we'd be happy to help!

Get involved

If you’re interested in getting involved (even for an hour or two a week), check out this collection of opportunities and fill out this form to let us know your background, interests, and availability. Once you do, we can share opportunities with you that we believe are likely to be particularly impactful. These will include opportunities with established organizations, but also options to get involved with projects run by our team (depending on funding, there may be paid options).


Appendix

How the model works: Estimating the probability of $10 million flipping the election

The opportunity cost of a Trump presidency

As we detail below, Trump is likely to actively cause great harm (relative to the regulatory status quo) in key EA cause areas. But even if Trump did not actively cause harm (e.g., by repealing Biden’s executive order on AI, which he has vowed to do on day one), his presidency would still be extremely bad in opportunity-cost terms. He would be the center of societal attention for at least four years, and the entire political conversation would revolve around his agenda and rhetoric (e.g., attempts to grab power across the branches of government, “revenge” against his political opponents, the border/immigration, culture war topics, etc.). An enormous amount of progressive resources would be bound up resisting Trump’s agenda instead of improving public policy.

The candidates on some EA cause areas

The following are quick assessments, often based on limited information. Given how little we know of Harris’ current policy views and intentions, much of our assessment assumes she’ll behave similarly to Biden on key issues (which seems likely).

Artificial Intelligence

Candidates’ Strengths and AI

With how quickly AI is advancing, arguably what matters most is not the candidates' current stances on AI, but how the candidates will respond to new evidence of risks:

For the reasons in our sections on the candidates' personal character and international relations below, the answers strongly favor Harris.

Positions of the candidates on AI

There are similarities in how both candidates have discussed AI so far, with both candidates discussing global leadership and staying ahead of China as a priority when discussing AI, and pointing out its potential for causing harm. There are also important differences:

Trump has expressed some concerns about AI but some of his top supporters seem to be vocal AI accelerationists with a strong interest in shaping AI policy in a second Trump administration.

Harris tends to focus on present harms, but has expressed some concern about existential risk.

Expected impact of the candidates on AI policy

The competence and integrity of the administration, relationships with other countries and labs, respect for science and strong arguments, freedom from self-interest and corruption, and other intangibles strongly point toward transformative AI going better under a Harris administration than under Trump.

Pandemic response and biosecurity

While we don’t know much about Harris’ opinions about biosecurity, the Biden administration has taken reasonable steps to address the risks from pandemics. Our best guess is that Harris would continue in a similar vein.

The Biden Administration has taken action on biosecurity which Trump has expressed plans to reverse if reelected.

Trump presided over the beginning of the Covid pandemic and had, at best, a mixed record in terms of handling it. He may appoint known anti-vaccine activist RFK Jr. to a leading role in his administration.

Global health

A proxy for the candidates’ track records on global health is how much money their administrations asked Congress to approve for global health programs. (How much money Congress actually approved probably depended less on the candidates and more on Congress.) The most recent budget request explains that these programs work “to combat infectious diseases, prevent child and maternal deaths, bolster nutrition, control the HIV/AIDS epidemic, and build the capacity of partner countries to prevent, detect, and respond to future infectious disease outbreaks to prevent them from becoming national or global emergencies.”

On average, the Biden Administration requested $4.4 billion more per year for global health than the Trump Administration[9]. For reference, GiveWell moved about $1 billion (including funding from Open Philanthropy) in 2022. While it’s unclear how Harris will compare to Biden on international aid spending, it seems highly likely she’ll allocate billions more. If she spends at the same level as Biden (and Trump reverts to his prior spending), getting her into office would lead to ~$16 billion going to international aid that otherwise wouldn’t have.

Given the above model and assumptions, each dollar spent on the election likely nets about $2.60 of USG global health spending. Assuming a 0.16% chance of flipping the election by spending $10 million, this implies an expected return of roughly $26 million in foreign aid spending alone. If we assume US government global health spending is 1/10th as effective as GiveWell top charities, that would imply election giving is about 25% as effective as GiveWell on global health alone. The fact that election spending is in the same ballpark as GiveWell in terms of effectiveness in this domain alone is evidence that election donations are likely worthwhile.  
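For transparency, here is the arithmetic behind that paragraph, spelled out as a minimal sketch (the ~$16 billion figure comes from the previous paragraph and the 1/10th effectiveness ratio is the assumption stated above):

```python
# Global-health value of election spending, using the figures in this section.

extra_aid_over_term = 16_000_000_000   # ~$16B extra USG global health spending over a term if Harris wins
donation = 10_000_000
p_flip = 0.0016                        # mainline chance that $10M flips the election

expected_aid = p_flip * extra_aid_over_term       # ~$26M in expected foreign aid spending
leverage = expected_aid / donation                # ~$2.6 of aid per dollar donated

relative_effectiveness = 0.10                     # assume USG spending is 1/10th as effective as GiveWell
vs_givewell = leverage * relative_effectiveness   # ~0.26, i.e. ~25% as effective as GiveWell

print(f"${expected_aid:,.0f} expected aid; ~{vs_givewell:.0%} of GiveWell effectiveness per dollar")
```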

Climate change

We don’t yet know the specifics of Harris’ climate platform, but she has long prioritized climate change.

Trump is clearly not going to take positive action on climate change and is likely to roll back much of the progress made under Biden

Nuclear Risk

Trump probably exacerbated nuclear risk in his first term and would likely do so again in a second term.

Harris seems likely to continue with the status quo on US nuclear policy, perhaps making marginal progress on increasing safety.

Farm animal welfare

Harris appears to be a strong advocate of animal welfare, even addressing some issues for farmed animal welfare. 

We know little about Trump’s views on animal welfare but it seems unlikely we’ll see any positive developments on farm animal welfare from his administration. 

US-China & international relations

The next administration will inherit a tense international situation which could easily get worse. AI has the potential to add further challenges.

Harris, assuming she acts similarly to Biden, is likely to perform well when it comes to international relations.

While Trump had some foreign policy successes, his second term would likely degrade US alliances and could open up new security problems for the US and its allies around the world.


  1. ^

    We’re a group of researchers and activists who dropped our other work (mostly related to AI governance) or are volunteering to help beat Trump this year. We've built a strong network and contributed to a few high-leverage projects we believe are impacting the election. Some team members would rather not be named publicly, mostly due to concerns about a public partisan record, especially under a Trump administration. 

  2. ^

    If you already read that piece the most important additions are found in the sections: 'Should you donate to the election?' & 'Assessing the candidates...'

  3. ^

     "What scared Kelly even more than the tweets was the fact that behind closed doors in the Oval Office, Trump continued to talk as if he wanted to go to war. He cavalierly discussed the idea of using a nuclear weapon against North Korea, saying that if he took such an action, the administration could blame someone else for it to absolve itself of responsibility". NBC News

  4. ^

     The concern here is similar to that raised in Reducing long-term risks from malevolent actors [EA · GW]

  5. ^

    I expect people will disagree on whether this is good or bad.

  6. ^

     In a survey run in March of this year, the mean prediction of 26 AI safety and governance experts was that the expected value of the future given a Biden victory would be 25% greater than given a Trump victory (the median answer was 9%). We suspect respondents would give similar answers for a Harris administration today.

  7. ^

The reason US elections are generally so close is that the electoral college system means only a few states with evenly split political demographics actually matter for the election. That means the entire presidential race, affecting 333 million people, comes down to just a few states with around 30 million voters between them.

  8. ^

Experts we've talked to estimate that the cost per net vote was somewhere between $400 and $10,000 under Biden (post-debate, getting votes for Biden was rough). Under Harris, the cost per net vote was likely between $700 and $2,000 a month ago, but that has gone up to perhaps $1,000-2,500 now.  

  9. ^

    This was relayed to us by someone from one of the largest progressive political organizations in the country. You’ll need to have access to the Analyst Institute in order to see it but, if you do, you can find some relevant research here.

  10. ^

    You can find the relevant information here, though again only if you have access through the Analyst Institute.

  11. ^

    This was very clearly the case in Hillary’s 2016 campaign. Obama managed this well in 2008 but his team struggled with it in 2012.

  12. ^

     P(doom from AGI before 2032 | Harris victory & AGI in 2024-32)

  13. ^

     P(doom from AGI before 2032 | Trump victory & AGI in 2024-32)

  14. ^

     This is a fairly conservative estimate that attempts to take into account that many funding gaps are likely to be filled. A reasonable first-guess estimate would be more like $1000 but that doesn’t account for the fact that large funders may jump in to fill the gap if not enough donations are received.

  15. ^

These are from a top pollster but are now at least 3 weeks old (i.e. from before the debate). Using updated numbers slightly increases the likely effect size, but not by much. 

  16. ^

This estimate is similar to, if somewhat more optimistic than, a recent EA Forum post [EA · GW] on the topic. They estimated generic swing state votes as having a 1 in 6 million chance of flipping the election and Pennsylvania votes as 1 in 3 million.

2 comments

Comments sorted by top scores.

comment by Brendan Long (korin43) · 2024-09-24T20:53:19.524Z · LW(p) · GW(p)

Biden and Harris have credibly committed to help Taiwan. Trump appears much more isolationist and less likely to intervene, which might make China more likely to invade.

I personally think it's good for us to protect friendly countries like this, but isn't China invading Taiwan good for AI risk, since destroying the main source of advanced chips would slow down timelines?

You also mention Trump's anti-democratic tendencies, which seem bad for standard reasons, but not really relevant to AI existential risk (except to the extent that he might stay in power and continue making bad decisions 4+ years out).

comment by RHollerith (rhollerith_dot_com) · 2024-09-24T21:06:05.270Z · LW(p) · GW(p)

You spend a whole section on the health of US democracy. Do you think if US democracy gets worse, then risks from AI get bigger?

It would seem to me that if the US system gets more autocratic, then it becomes slightly easier to slow down the AI juggernaut because fewer people would need to be convinced that the juggernaut is too dangerous to be allowed to continue.

Compare with climate change: the main reason high taxes on gasoline haven't been imposed in the US like they have in Europe is that US lawmakers have been more afraid of getting voted out of office by voters angry that it cost them more to drive their big pickup trucks and SUVs than their counterparts in Europe have been. "Less democracy" in the US would've resulted in a more robust response to climate change! I don't see anything about the AI situation that makes me expect a different outcome there: i.e., I expect "robust democracy" to interfere with a robust response to the AI juggernaut, especially in a few years when it becomes clear to most people just how useful AI-based products and services can be.

Another related argument is that elites don't want themselves and their children to be killed by the AI juggernaut any more than the masses do: it's not like castles (1000 years ago) or taxes on luxury goods where the interests of the elites are fundamentally opposed to the interests of the masses. Elites (being smarter) are easier to explain the danger to than the masses are, so the more control we can give the elites relative to the masses, the better our chances of surviving the AI juggernaut, it seems to me. But IMHO we've strayed too far from the original topic of Harris vs Trump, and one sign that we've strayed too far is that IMHO Harris's winning would strengthen US elites a little more than a Trump win would.

Along with direct harms, a single war relevant to US interests could absorb much of the nation’s political attention and vast material resources for months or years. This is particularly dangerous during times as technologically critical as ours

I agree that a war would absorb much of the nation's political attention, but I believe that that effect would be more than cancelled out by how much easier it would become to pass new laws or new regulations. To give an example, the railroads were in 1914 as central to the US economy as the Internet is today -- or close to it -- and Washington nationalized the railroads during WWI, something that simply would not have been politically possible during peacetime.

You write that a war would consume "vast material resources for months or years". Please explain what good material resources do, e.g., in the hands of governments, to help stop or render safe the AI juggernaut? It seems to me that if we could somehow reduce worldwide material-resource availability by a factor of 5 or even 10, our chances of surviving AI get much better: resources would need to be focused on maintaining the basic infrastructure keeping people safe and alive (e.g., mechanized farming, police forces) with the result that there wouldn't be any resources left over to do huge AI training runs or to keep on fabbing ever more efficient GPUs.

I hope I am not being perceived as an intrinsically authoritarian person who has seized on the danger from AI as an opportunity to advocate for his favored policy of authoritarianism. As soon as the danger from AI is passed, I will go right back to being what in the US is called a moderate libertarian. But I can't help but notice that we would be a lot more likely to survive the AI juggernaut if all of the world's governments were as authoritarian as for example the Kremlin is. That's just the logic of the situation. AI is a truly revolutionary technology. Americans (and Western Europeans to a slightly lesser extent) are comfortable with revolutionary changes; Russia and China much less so. In fact, there is a good chance that as soon as Moscow and Beijing are assured that they will have "enough access to AI" to create a truly effective system of surveilling their own populations, they'll lose interest in AI as long as they don't think they need to continue to invest in it in order to stay competitive militarily and economically with the West. AI is capable of transforming society in rapid, powerful, revolutionary ways -- which means that all other things being equal, Beijing (whose main goal is to avoid anything that might be described as a revolution) and Moscow will tend to want to suppress it as much as practical.

The kind of control and power Moscow and Beijing (and Tehran) have over their respective populations is highly useful for stopping those populations from contributing to the AI juggernaut. In contrast, American democracy and the American commitment to liberty is not much good at stopping anything. (And the US government was specifically designed by the Founding Fathers to make it a lot of hard work to impose any curb or control on the freedom of the American people). America's freedom, particularly economic and intellectual freedom, in contrast, is highly helpful to the Enemy, namely, those working to make AI more powerful. If only more of the world's countries were like Russia, China and Iran!

I used to be a huge admirer of the US Founding Fathers. Now that I know how dangerous the AI juggernaut is, I wish that Thomas Jefferson had choked on a chicken bone and died before he had the chance to exert any influence on the form of any government! (If the danger from AI is ever successfully circumnavigated, I will probably go right back to being an admirer of Thomas Jefferson.) It seemed like a great idea at the time, but now that we know how dangerous AI is, we can see in retrospect that it was a bad idea and that the architects of the governmental systems of Russia, China and Iran were in a very real sense "more correct": those choices of governmental architectures make it easier for humanity to survive the AI gotcha (which was completely hidden from any possibility of human perception at the time those governmental architectural decisions were made, but still, right is right).

I feel that the people who recognize the AI juggernaut for the potent danger that it is are compartmentalizing their awareness of the danger in a regrettable way. Maybe a little exercise would be helpful. Do you admire the inventors of the transistor? Still? I used to, but no longer do. If William Shockley had slipped on a banana peel and hit his head on something sharp and died before he had a chance to assist in the invention of the transistor, that would have been a good thing, I now believe, because the invention of the transistor would have been delayed -- by an expected 5 years in my estimation -- giving humanity more time to become collectively wiser before it must confront the great danger of AI. Of course, we cannot hold Shockley morally responsible because there is no way he could have known about the AI danger. But still, if your awareness of the potency of the danger from AI doesn't cause you to radically re-evaluate the goodness or badness of the invention of the transistor, then you're showing a regrettable lapse in rationality IMHO. Ditto the development of the internet. Ditto most advances in computing. The Soviets distrusted information technology. The Soviets were right (for the wrong reason, but right is still right)!

(The OP might not believe that AI is a tremendous danger and has used AI safety to help him argue for something he cares about more -- in which case this comment is not addressed to him, but rather to those readers who consider AI risks to be the primary consideration in this conversation.)