post by [deleted] · link post

Comments sorted by top scores.

comment by Brendan Long (korin43) · 2024-09-24T20:53:19.524Z · LW(p) · GW(p)

Biden and Harris have credibly committed to help Taiwan. Trump appears much more isolationist and less likely to intervene, which might make China more likely to invade.

I personally think it's good for us to protect friendly countries like this, but wouldn't China invading Taiwan reduce AI risk, since destroying the main source of advanced chips would slow down timelines?

You also mention Trump's anti-democratic tendencies, which seem bad for standard reasons, but not really relevant to AI existential risk (except to the extent that he might stay in power and continue making bad decisions 4+ years out).

Replies from: alex-lintz
comment by Alex Lintz (alex-lintz) · 2024-09-25T18:29:16.779Z · LW(p) · GW(p)

Some reasons why the anti-democratic tendencies might matter:

  • This might be the guy in charge of deploying AGI and negotiating with other nations about it. I think we should be very concerned about the values of the person with the most power over this process. Caring about democracy could itself matter a lot here, but disregard for it is also a signal of a general lack of values, and of thinking about values, which seems concerning in someone who can make decisions about governing AGI with massive downstream effects.
  • I think his anti-democratic tendencies also display his intense power-hunger. It seems dangerous to have someone with this characteristic wielding power over the development of an incredibly powerful technology that could be used for all kinds of nefarious purposes. 
  • There is also some small chance that Trump either attempts to seize power or manipulates the next election in favor of a supporter of his. I think this probably increases the chances of a far less competent and value-aligned person taking the helm in 2028.
  • In general, the public seems pretty bought-in on AI risk being a real issue and is interested in regulation. Having democratic instincts would perhaps push in the direction of good regulation (though the relationship here seems a little less clear).

As for Taiwan, I worry that Trump's strategic ambiguity has a few too many dashes of ambiguity on this front, which could increase the chance of crises that escalate into something much worse. To be honest, though, I don't have strong opinions about how Trump vs. Kamala would fare on Taiwan.

comment by RHollerith (rhollerith_dot_com) · 2024-09-24T21:06:05.270Z · LW(p) · GW(p)

You spend a whole section on the health of US democracy. Do you think if US democracy gets worse, then risks from AI get bigger?

It would seem to me that if the US system gets more autocratic, then it becomes slightly easier to slow down the AI juggernaut because fewer people would need to be convinced that the juggernaut is too dangerous to be allowed to continue.

Compare with climate change: the main reason high taxes on gasoline haven't been imposed in the US as they have in Europe is that US lawmakers have been more afraid than their European counterparts of being voted out of office by voters angry about what it costs them to drive their big pickup trucks and SUVs. "Less democracy" in the US would've resulted in a more robust response to climate change! I don't see anything about the AI situation that makes me expect a different outcome there: i.e., I expect "robust democracy" to interfere with a robust response to the AI juggernaut, especially in a few years when it becomes clear to most people just how useful AI-based products and services can be.

Another related argument is that elites don't want themselves and their children to be killed by the AI juggernaut any more than the masses do: it's not like castles (1000 years ago) or taxes on luxury goods, where the interests of the elites are fundamentally opposed to the interests of the masses. The danger is also easier to explain to elites (being smarter) than to the masses, so the more control we can give the elites relative to the masses, the better our chances of surviving the AI juggernaut, it seems to me. But IMHO we've strayed too far from the original topic of Harris vs Trump, and one sign that we've strayed too far is that Harris's winning would strengthen US elites a little more than a Trump win would.

Along with direct harms, a single war relevant to US interests could absorb much of the nation’s political attention and vast material resources for months or years. This is particularly dangerous during times as technologically critical as ours.

I agree that a war would absorb much of the nation's political attention, but I believe that that effect would be more than cancelled out by how much easier it would become to pass new laws or new regulations. To give an example, the railroads were in 1914 as central to the US economy as the Internet is today -- or close to it -- and Washington nationalized the railroads during WWI, something that simply would not have been politically possible during peacetime.

You write that a war would consume "vast material resources for months or years". Please explain what good material resources do, e.g., in the hands of governments, to help stop the AI juggernaut or render it safe. It seems to me that if we could somehow reduce worldwide material-resource availability by a factor of 5 or even 10, our chances of surviving AI would get much better: resources would need to be focused on maintaining the basic infrastructure that keeps people safe and alive (e.g., mechanized farming, police forces), with the result that there wouldn't be any resources left over to do huge AI training runs or to keep fabbing ever more efficient GPUs.

I hope I am not being perceived as an intrinsically authoritarian person who has seized on the danger from AI as an opportunity to advocate for his favored policy of authoritarianism. As soon as the danger from AI is passed, I will go right back to being what in the US is called a moderate libertarian. But I can't help but notice that we would be a lot more likely to survive the AI juggernaut if all of the world's governments were as authoritarian as, for example, the Kremlin is. That's just the logic of the situation. AI is a truly revolutionary technology. Americans (and Western Europeans to a slightly lesser extent) are comfortable with revolutionary changes; Russia and China much less so. In fact, there is a good chance that as soon as Moscow and Beijing are assured that they will have "enough access to AI" to create a truly effective system of surveilling their own populations, they'll lose interest in AI as long as they don't think they need to continue to invest in it in order to stay competitive militarily and economically with the West. AI is capable of transforming society in rapid, powerful, revolutionary ways -- which means that all other things being equal, Beijing (whose main goal is to avoid anything that might be described as a revolution) and Moscow will tend to want to suppress it as much as practical.

The kind of control and power Moscow and Beijing (and Tehran) have over their respective populations is highly useful for stopping those populations from contributing to the AI juggernaut. In contrast, American democracy and the American commitment to liberty make the US relatively bad at using political power to stop some project or activity being done by its population. (And the US government was specifically designed by the Founding Fathers to make it a lot of hard work to impose any curb or control on the freedom of the American people.) America's freedom, particularly its economic and intellectual freedom, is highly helpful to the Enemy, namely, those working to make AI more powerful. If only more of the world's countries were like Russia, China and Iran!

I used to be a huge admirer of the US Founding Fathers. Now that I know how dangerous the AI juggernaut is, I wish that Thomas Jefferson had choked on a chicken bone and died before he had the chance to exert any influence on the form of any government! (In the unlikely event that the danger from AI is successfully circumnavigated in my lifetime, I will probably go right back to being an admirer of Thomas Jefferson.) The American design seemed like a great idea at the time, but now that we know how dangerous AI is, we can see in retrospect that it was a bad idea, and that the architects of the governmental systems of Russia, China and Iran were in a very real sense "more correct": those choices of governmental architecture make it easier for humanity to survive the AI gotcha (which was completely hidden from any possibility of human perception at the time those architectural decisions were made, but still, right is right).

I feel that the people who recognize the AI juggernaut for the potent danger that it is are compartmentalizing their awareness of the danger in a regrettable way. Maybe a little exercise would be helpful. Do you admire the inventors of the transistor? Still? I used to, but no longer do. If William Shockley had slipped on a banana peel and hit his head on something sharp and died before he had a chance to assist in the invention of the transistor, that would have been a good thing, I now believe, because the invention of the transistor would have been delayed -- by perhaps 5 years, in my estimation -- giving humanity more time to become collectively wiser before having to confront the great danger of AI. Of course, we cannot hold Shockley morally responsible, because there is no way he could have known about the AI danger. But still, if your awareness of the potency of the danger from AI doesn't cause you to radically re-evaluate the goodness or badness of the invention of the transistor, then you're showing a regrettable lapse in rationality IMHO. Ditto most advances in computing. The Soviets distrusted information technology. The Soviets were right -- probably for the wrong reason, but right is still right, and no one who recognizes AI for the potent danger it is should continue to use the Soviet distrust of info tech as a point against them.

(This comment is addressed to those readers who consider AI to be so dangerous as to make AI risk the primary consideration in this conversation. I say that to account for the possibility that the OP cares mainly about more mundane political concerns and brought up AI safety because he (wrongly, IMHO) believes it will help him make his argument.)

Replies from: mhampton
comment by mhampton · 2024-10-26T16:14:34.022Z · LW(p) · GW(p)

Your reasoning makes sense with regard to how a more authoritarian government would make it more likely that we avoid x-risk. But how do you weigh that against the possibility, which the post author alludes to, that an AGI that is intent-aligned (but willing to accept harmful commands) would be more likely to create s-risks in the hands of an authoritarian state?

Also, what do you make of the author's comment below [LW(p) · GW(p)]?

  • In general, the public seems pretty bought-in on AI risk being a real issue and is interested in regulation. Having democratic instincts would perhaps push in the direction of good regulation (though the relationship here seems a little less clear).
Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2024-11-15T05:24:33.969Z · LW(p) · GW(p)

Some people are more concerned about s-risk than extinction risk, and I certainly don't want to dismiss them or imply that their concerns are mistaken or invalid, but I just find it a lot less likely that the AI project will lead to massive human suffering than that it will lead to human extinction.

the public seems pretty bought-in on AI risk being a real issue and is interested in regulation.

There's a huge gulf between people's expressing concern about AI to pollsters and the kind of regulations and shutdowns that would actually avert extinction. The people whose careers would be set back by many years if they had to find employment outside the AI field (including the "safety" people), together with the people who've invested a few hundred billion dollars into AI, form a powerful lobbying group in opposition to the members of the general public who tell pollsters they are concerned.

I don't actually know enough about the authoritarian countries (e.g., Russia, China, Iran) to predict with any confidence how likely they are to prevent their populations from contributing to human extinction through AI. I can't help but notice though that so far the US and the UK have done the most to advance the AI project. Also, the government's deciding to shut down movements and technological trends is much more normalized and accepted in Russia, China and Iran than it is in the West, particularly in the US.

I don't have any prescriptions really. I just think that the OP (titled "why the 2024 election matters, the AI risk case for Harris, & what you can do to help", currently standing at 23 points) is badly thought out and badly reasoned, and I wish I had called for readers to downvote it because it encourages people to see everything through the Dem-v-Rep lens (even AI extinction risk, whose causal dependence on the election we don't actually know) without contributing anything significant.

comment by mhampton · 2024-10-30T21:22:24.492Z · LW(p) · GW(p)

This is a comprehensive, nuanced, and well-written post. A few questions:

How likely do you think it is that, under a Harris administration, AI labs will successfully lobby Democrats to kill safety-oriented policies, as happened with SB 1047 at the state level? Even if Harris is on net better than Trump, this could greatly reduce the expected value of her presidency from an x-risk perspective.

Related to the above, is it fair to say that under either party there will need to be advocacy/lobbying for safety-focused policies on AI? If so, how do you make tradeoffs between this and the election? I.e., if someone has $x to donate, what percentage should they give to the election vs. other AI safety causes?

How much of your assessment of the difference in AI risk between Harris and Trump is due to the concrete AI policies you expect each of them to push, vs. how much is due to differences in competence and respect for democracy? 

I can't find much information about the Movement Labs quiz and how it helps Harris win. Could you elaborate, privately if needed? If the quiz is simply matching voters with the candidate who best matches their values, is it because it will be distributed to voters who lean Democrat, or does its effectiveness come through a different path?

comment by momom2 (amaury-lorin) · 2024-09-25T13:12:43.874Z · LW(p) · GW(p)

  • Probability of existential catastrophe before 2032 assuming AGI arrives in that period and Harris wins[12] = 30%

  • Probability of existential catastrophe before 2032 assuming AGI arrives in that period and Trump wins[13] = 35%

A lot of your AI-risk reason to support Harris seems to hinge on this, which I find very shaky. How wide are your confidence intervals here?
My own guesses are much fuzzier. By your argument, if my intuition were .2 vs .5 it would be an overwhelming case for Harris, but I'm unfamiliar enough with the topic that it could easily be the reverse.

I would greatly appreciate more details on how you reached your numbers (and, if they're vibes, your reasons for trusting those vibes).
Alternatively, I feel like I should somehow discount the strength of the AI-risk reason based on how likely I think these numbers are to more or less hold true, but I don't know a principled way to do that.
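
One rough way to make that discounting principled (my own sketch; the notation $q$, $p_H$, $p_T$ is mine, not the post's): let $p_H$ and $p_T$ be the true probabilities of catastrophe conditional on AGI arriving before 2032 under a Harris or Trump win, and let $q$ be your credence that the post's point estimates are roughly right. The quantity the election argument turns on is then

\[
\Delta = \Pr(\text{AGI before 2032}) \cdot \Big( q\,(0.35 - 0.30) + (1 - q)\,\mathbb{E}\big[p_T - p_H \mid \text{estimates wrong}\big] \Big).
\]

If, when the estimates are wrong, the sign could just as easily be reversed (so the conditional expectation is near zero), then the strength of the AI-risk case for Harris scales roughly linearly with $q$.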