Why Should I Assume CCP AGI is Worse Than USG AGI?

post by Tomás B. (Bjartur Tómas) · 2025-04-19T14:47:52.167Z · LW · GW · 16 comments

Though, given my doomerism, I think the natsec framing of the AGI race is likely wrongheaded, let me grant for the sake of argument the Dario/Leopold/Altman frame: that AGI will be aligned with the national interest of a great power. These people seem to take it as axiomatic that a USG AGI will be better in some way than a CCP AGI. Has anyone written a justification for this assumption?

I am neither an American citizen nor a Chinese citizen.

What would it mean for an AGI to be aligned with "Democracy," or "Confucianism," or "Marxism with Chinese characteristics," or "the American Constitution"? Contingent on a world where such an entity exists and is compatible with my existence, what would my life be like in a weird transhuman future as a non-citizen in each system? Why should I expect a USG AGI to be better than a CCP AGI? It does not seem super obvious to me that I should cheer for either party over the other. And if the intelligence of the governing class is of any relevance to the likelihood of a positive outcome, um, the CCP seems to have the USG beat hands down.

Comments sorted by top scores.

comment by Tenoke · 2025-04-19T18:41:46.826Z · LW(p) · GW(p)

A Western AI is much more likely to be democratic and to rank humanity's values a bit higher up. A Chinese one is much more likely to put CCP values and control higher up.

But yes, if it's the current US administration specifically, neither option is that optimistic.

Replies from: Haiku, MichaelDickens, martinkunev
comment by Haiku · 2025-04-19T19:57:52.677Z · LW(p) · GW(p)

I don't know what it would mean for AI to "be democratic." People in a democratic system can use tool AI, but if ASI is created, there will be no room for human decision-making on any level of abstraction that the AI cares about. I suppose it's possible for an ASI to focus its efforts solely on maintaining a democratic system, without making any object-level decisions itself. But I don't think anyone is even trying to build such a thing.

If intent-aligned ASI is successfully created, the first step is always "take over the world," which isn't a very democratic thing to do. That doesn't necessarily mean there is a better alternative, but I do so wish that AI industry leaders would stop making overtures to democracy out of the other side of their mouth. For most singularitarians, this is and always has been about securing or summoning ultimate power and ushering in a permanent galactic utopia.

Replies from: Tenoke
comment by Tenoke · 2025-04-19T20:05:03.786Z · LW(p) · GW(p)

Democratic in the 'favouring or characterized by social equality; egalitarian' sense (one of the definitions from Google), rather than about elections or whatever.

For example, I recently wrote a Short Story of my Day in 2035, set in a scenario where things continue mostly along current trends and we get a positive AGI. There, people influenced the initial values - mainly via The Spec - and can in theory vote to make some changes to The Spec that governs the AI's general values, but in practice by that point AGI controls everything and the values are more or less set in stone. Still, it overall mostly tries to fulfil people's desires (overly optimistic that we go this route, I know).

I'd call that more democratic than one that upholds CCP values specifically.

comment by MichaelDickens · 2025-04-19T22:08:33.676Z · LW(p) · GW(p)
  1. I strongly suspect that a Trump-controlled AGI would not respect democracy.
  2. I strongly suspect that an Altman-controlled AGI would not respect democracy.
  3. I have my doubts about the other heads of AI companies.
comment by martinkunev · 2025-04-19T21:52:46.183Z · LW(p) · GW(p)

Western AI is much more likely to be democratic

This sounds like "western AI is better because it is much more likely to have western values"


I don't understand what you mean by "humanity's values". Also, one could maybe argue that "democratic" societies are those where actions are taken based on whether the majority of people can be manipulated to support them.

comment by Vladimir_Nesov · 2025-04-19T15:49:28.763Z · LW(p) · GW(p)

The state of the geopolitical board will influence how the pre-ASI chaos unfolds, and how the pre-ASI AGIs behave. Less plausibly intentions of the humans in charge might influence something about the path-dependent characteristics of ASI (by the time it takes control). But given the state of the "science" and lack of the will to be appropriately cautious and wait a few centuries before taking the leap, it seems more likely that the outcome will be randomly sampled from approximately the same distribution regardless of who sets off the intelligence explosion.

comment by Rafael Harth (sil-ver) · 2025-04-19T15:54:37.022Z · LW(p) · GW(p)

I've also noticed this assumption. I myself don't have it, at all. My first thought has always been something like "If we actually get AGI then preventing terrible outcomes will probably require drastic actions, and if anything I have less faith in the US government to take those". Which is a pretty different approach from just assuming that AGI developed under a government will automatically lead to a world with that government's values. But this is a very uncertain take and it wouldn't surprise me if someone smart could change my mind pretty quickly.

comment by Darklight · 2025-04-19T14:56:41.477Z · LW(p) · GW(p)

It seems like it would depend pretty strongly on which side you view as having a closer alignment with human values generally. That probably depends a lot on your worldview and it would be very hard to be unbiased about this.

There was actually a post about almost this exact question [EA · GW] on the EA Forums a while back. You may want to peruse some of the comments there.

comment by sanyer (santeri-koivula) · 2025-04-19T18:29:57.629Z · LW(p) · GW(p)

I don't think it's possible to align AGI with democracy. AGI, or at least ASI, is an inherently political technology. The power structures that ASI creates within a democratic system would likely destroy the system from within. Whichever group would end up controlling an ASI would get decisive strategic advantage over everyone else within the country, which would undermine the checks and balances that make democracy a democracy.

comment by AnthonyC · 2025-04-19T19:55:21.043Z · LW(p) · GW(p)

As things stand today, if AGI is created (aligned or not) in the US, it won't be by the USG or agents of the USG. It'll be by a private or public company. Depending on the path to get there, there will be more or less USG influence of some sort. But if we're going to assume the AGI is aligned to something deliberate, I wouldn't assume AGI built in the US is aligned to the current administration, or at least significantly less so than the degree to which I'd assume AGI built in China by a Chinese company would be aligned to the current CCP.

For more concrete reasons regarding national ideals, the US has a stronger tradition of self-determination and shifting values over time, plausibly reducing risk of lock-in. It has a stronger tradition (modern conservative politics notwithstanding) of immigration and openness.

In other words, it matters a lot whether the aligned US-built AGI is aligned to the Trump administration, the Constitution, the combined writings of the US founding fathers and renowned leaders and thinkers, the current consensus of the leadership at Google or OpenAI, the overall gestalt opinions of the English-language internet, or something else. I don't have enough understanding to make a similar list of possibilities for China, but some of the things I'd expect it would include don't seem terrible. For example, I don't think a genuinely-aligned Confucian sovereign AGI is anywhere near the worst outcome we could get.

comment by O O (o-o) · 2025-04-19T18:55:27.263Z · LW(p) · GW(p)

Chinese culture is just less sympathetic in general. China practically has no concept of philanthropy or animal welfare. It is also pretty explicitly ethnonationalist. You don't hear about these things because the Chinese government has banned dissent and walled off its inhabitants.

However, I think the Hong Kong reunification is going better than I'd have expected given the 2019 protests. You'd expect mass social upheaval, but people are just either satisfied or moderately dissatisfied.

comment by Mis-Understandings (robert-k) · 2025-04-19T15:42:10.733Z · LW(p) · GW(p)

I am neither an American citizen nor a Chinese citizen.

does not describe most people who make that argument.

Most of these people are US citizens, or could be. Under liberalism/democracy those sorts of people get a say in the future, so they think AGI will be better if it gives those sorts of people a say.

Most people talking about the USG AGI have structural investments in the US, which are better and give them more chances to bid on not destroying the world (many are citizens or are in the US block). Since the US government is expected to treat other stakeholders in its previous block better than China treats members of its block, it is better for people who are only US-aligned if the US gets more powerful, since it will probably support its traditional allies even when it is vastly more powerful, as it did during the early Cold War. (This was obvious last year; it is no longer obvious.)

In short, the USG was committed to international liberalism, which is a great thing for AGI to have for various reasons that are hard to articulate, but basically of the form that liberals are committed to not doing crazy stuff.

People who can't reason well about the CCP's internal ideologies and political conflicts (like me), and so can't predict the ideological alignment of its AGI, think that a USG AGI will use the frames of international liberalism (which don't let you get away with terrible things even if you are powerful), and worry about frames of international realism (which they assign to China, since they cannot tell, and which hold that if you have the power you must/should/can use it to do anything, including ruining everybody else).

In summary, if you are not an American citizen, do not trust the US natsec framing. A lot of it is carryover from times when the US liberal international block (the global international order) was stronger, and as a block framing it is better only if the US block is bigger, which at the time it was.

Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2025-04-19T20:04:04.629Z · LW(p) · GW(p)

Since the US government is expected to treat other stakeholders in its previous block better than China treats members of its block

At the risk of getting too into politics...

IMO, this was maybe-true for the previous administrations, but is completely false for the current one. All people making the argument based on something like this reasoning need to update.

Previous administrations were more or less dead inertial bureaucracies. Those actually might have carried on acting in democracy-ish ways even when facing outside-context events/situations, such as suddenly having access to overwhelming ASI power. Not necessarily because they were particularly "nice", as such, but because they weren't agenty enough to do something too out-of-character compared to their previous democracy-LARP behavior.

I still wouldn't have bet on them acting in pro-humanity ways (I would've expected some more agenty/power-hungry governmental subsystem to grab the power, circumventing e. g. the inertial low-agency Presidential administration). But there was at least a reasonable story there.

The current administration seems much more agenty: much more willing to push the boundaries of what's allowed and deliberately erode the constraints on what it can do. I think it doesn't generalize to boring democracy-ish behavior out-of-distribution, I think it eagerly grabs and exploits the overwhelming power. It's already chomping at the bit to do so.

Replies from: robert-k
comment by Mis-Understandings (robert-k) · 2025-04-19T20:52:10.765Z · LW(p) · GW(p)

I don't think that people in the natsec camp have made that update, since they have been talking this line for a while.

But the dead organization framing matters here.

In short, people think that democratic institutions are not dead (especially electoralism). If AGI is "democratic", that live institution, in which they are a stakeholder, will have the power to choose to do fine stuff (and might generalize to treating everybody as a stakeholder), which is +EV, especially for them.

They also expect that China as a live actor will try to kill all other actors if given the chance. 

comment by MattJ · 2025-04-19T22:03:25.094Z · LW(p) · GW(p)

We don't want an ASI to be "democratic". We want it to be "moral". Many people in the West conflate the two words, thinking that democratic and moral are the same thing, but they are not. Democracy is a certain system of organizing a state. Morality is how people and (in the future) an ASI behave towards one another.

There are no obvious reasons why an autocratic state would care more or less about a future ASI being immoral, but an argument can be made that autocratic states will be more cautious and put more restrictions on the development of an ASI, because autocrats usually fear any kind of opposition, and an ASI could be a powerful adversary in itself or in the hands of powerful competitors.

comment by martinkunev · 2025-04-19T22:01:26.387Z · LW(p) · GW(p)

To add to the discussion, my impression is that many people in the US believe they have some moral superiority or know what is good for other people. The whole "we need a manhattan project for AI" discourse is reminiscent of calling for global domination. Also, doing things for the public good is controversial in the US as it can infringe on individual freedom.

This makes me really uncertain as to which AGI would be better (assuming somebody controls it).