Why Should I Assume CCP AGI is Worse Than USG AGI?
post by Tomás B. (Bjartur Tómas) · 2025-04-19T14:47:52.167Z · LW · GW · 84 comments
Though, given my doomerism, I think the natsec framing of the AGI race is likely wrongheaded, let me accept the Dario/Leopold/Altman frame that AGI will be aligned with the national interest of a great power. These people seem to take as an axiom that a USG AGI will be better in some way than a CCP AGI. Has anyone written a justification for this assumption?
I am neither an American citizen nor a Chinese citizen.
What would it mean for an AGI to be aligned with "Democracy," or "Confucianism," or "Marxism with Chinese characteristics," or "the American Constitution"? Contingent on a world where such an entity exists and is compatible with my existence, what would my life be like in a weird transhuman future as a non-citizen in each system? Why should I expect a USG AGI to be better than a CCP AGI? It does not seem to me super obvious that I should cheer for either party over the other. And if the intelligence of the governing class is of any relevance to the likelihood of a positive outcome, um, CCP seems to have USG beat hands down.
Comments sorted by top scores.
comment by Erich_Grunewald · 2025-04-20T11:17:14.836Z · LW(p) · GW(p)
There are some additional reasons, beyond the question of which values would be embedded in the AGI systems, not to prefer AGI development in China that I haven't seen mentioned here:
- Systemic opacity, state-driven censorship, and state control of the media means AGI development under direct or indirect CCP control would probably be less transparent than in the US, and the world may be less likely to learn about warning shots, wrongheaded decisions, reckless behaviour, etc. True, there was the Manhattan Project, but that was quite long ago; recent examples like the CCP's suppression of information related to the origins of COVID feel more salient and relevant.
- There are more checks and balances in the US than in China, which you may think could e.g., positively influence regulation; or if there's a government project, help incentivise responsible decisions there; or if someone attempts to concentrate power using some early AGI, stop that from happening. E.g., in the West voters have some degree of influence over the government, there's the free press, the judiciary, an ecosystem of nonprofits, and so on. In China, the CCP doesn't have total control, but it has far more than Western governments do.
I think it's also very rare that people are actually faced with a choice between "AGI in the US" versus "AGI in China". A more accurate but still flawed model of the choice people are sometimes faced with is "AGI in the US" versus "AGI in the US and in China", or even "AGI in the US, and in China 6-12 months later" versus "AGI in the US, and in China 3-6 months later".
Replies from: Jackson Wagner, sammyboiz, programcrafter
↑ comment by Jackson Wagner · 2025-04-22T03:13:57.935Z · LW(p) · GW(p)
@Tomás B. [LW · GW] There is also vastly less of an "AI safety community" in China -- probably much less AI safety research in general, and much less of it, in percentage terms, is aimed at thinking ahead about superintelligent AI. (i.e., more of China's "AI safety research" is probably focused on things like reducing LLM hallucinations, making sure it doesn't make politically incorrect statements, etc.)
- Where are the Chinese equivalents of the American and British AISI government departments? Of organizations like METR, Epoch, Forethought, MIRI, et cetera?
- Who are some notable Chinese intellectuals / academics / scientists (along the lines of Yoshua Bengio or Geoffrey Hinton) who have made any public statements about the danger of potential AI x-risks?
- Have any Chinese labs published "responsible scaling plans" or tiers of "AI Safety Levels" as detailed as those from OpenAI, DeepMind, or Anthropic? Or discussed how they're planning to approach the challenge of aligning superintelligence?
- Have workers at any Chinese AI lab resigned in protest of poor AI safety policies (like the various people who've left OpenAI over the years), or resisted the militarization of AI technology (like Googlers protesting Project Maven, or Microsoft employees protesting the IVAS HMD program)?
When people ask this question about the relative value of "US" vs "Chinese" AI, they often go straight for big-picture political questions about whether the leadership of China or the US is more morally righteous, less likely to abuse human rights, et cetera. Personally, in these debates, I do tend to favor the USA, although certainly both the US and China have many deep and extremely troubling flaws -- both seem very far from the kind of responsible, competent, benevolent entity to whom I would like to entrust humanity's future.
But before we even get to that question of "What would national leaders do with an aligned superintelligence, if they had one," we must answer the question "Do this nation's AI labs seem likely to produce an aligned superintelligence?" Again, the USA leaves a lot to be desired here. But oftentimes China seems to not even be thinking about the problem. This is a huge issue from both a technical perspective (if you don't have any kind of plan for how you're going to align superintelligence, perhaps you are less likely to align superintelligence), AND from a governance perspective (if policymakers just think of AI as a tool for boosting economic / military progress and haven't thought about the many unique implications of superintelligence, then they will probably make worse decisions during an extremely important period in history).
Now, indeed -- has Trump thought about superintelligence? Obviously not -- just trying to understand intelligent humans must be difficult for him. But the USA in general seems much more full of people who "take AI seriously" in one way or another -- Silicon Valley CEOs, Pentagon advisers, billionaire philanthropists, et cetera. Even in today's embarrassing administration, there are very high-ranking people (like Elon Musk and J. D. Vance) who seem at least aware of the transformative potential of AI. China's government is more opaque, so maybe they're thinking about this stuff too. But all public evidence suggests to me that they're kinda just blindly racing forward, trying to match and surpass the West on capabilities, without giving much thought as to where this technology might ultimately go.
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter · 2025-04-22T05:50:05.539Z · LW(p) · GW(p)
The four questions you ask are excellent, since they get away from general differences of culture or political system, and address the processes that are actually producing Chinese AI.
The best reference I have so far is a May 2024 report from Concordia AI on "The State of AI Safety in China". I haven't even gone through it yet, but let me reproduce the executive summary here:
The relevance and quality of Chinese technical research for frontier AI safety has increased substantially, with growing work on frontier issues such as LLM unlearning, misuse risks of AI in biology and chemistry, and evaluating "power-seeking" and "self-awareness" risks of LLMs.
There have been nearly 15 Chinese technical papers on frontier AI safety per month on average over the past 6 months. The report identifies 11 key research groups who have written a substantial portion of these papers.
China’s decision to sign the Bletchley Declaration, issue a joint statement on AI governance with France, and pursue an intergovernmental AI dialogue with the US indicates a growing convergence of views on AI safety among major powers compared to early 2023.
Since 2022, 8 Track 1.5 or 2 dialogues focused on AI have taken place between China and Western countries, with 2 focused on frontier AI safety and governance.
Chinese national policy and leadership show growing interest in developing large models while balancing risk prevention.
Unofficial expert drafts of China’s forthcoming national AI law contain provisions on AI safety, such as specialized oversight for foundation models and stipulating value alignment of AGI.
Local governments in China’s 3 biggest AI hubs have issued policies on AGI or large models, primarily aimed at accelerating development while also including provisions on topics such as international cooperation, ethics, and testing and evaluation.
Several influential industry associations established projects or committees to research AI safety and security problems, but their focus is primarily on content and data security rather than frontier AI safety.
In recent months, Chinese experts have discussed several focused AI safety topics, including “red lines” that AI must not cross to avoid “existential risks,” minimum funding levels for AI safety research, and AI’s impact on biosecurity.
So clearly there is a discourse about AI safety there, that does sometimes extend even as far as the risk of extinction. It's nowhere near as prominent or dramatic as it has been in the USA, but it's there.
↑ comment by sammyboiz · 2025-04-23T20:23:36.696Z · LW(p) · GW(p)
Speaking to post-labor futures, I feel that CCP AGI would be more likely to redistribute resources in an equitable manner when compared to the US.
Over the last 50 years or so, productivity growth in the US has translated to the ultra-wealthy growing in wealth while wages for the working class have stagnated. Coupled with growing oligarchy in the US, I don't expect the USG to have the interest of the people first and foremost. If the USG has AGI, I expect that the trend of rising inequality will continue: billionaires will reap the benefits and the rest of the people will be economically powerless... at best surviving on UBI.
As for China, I think that fewer corporate interests and power-seeking pressures have plagued the CCP. I don't know much about Xi and his administration but I assume that they are less corrupt and more caring about their people. China has their capitalism under control and I believe they are more likely to create a fully automated luxury communism utopia rather than a hyper-capitalist hell. As for lacking American free speech, I think equitable resource distribution is at least 100x more important.
As long as the US stays staunchly capitalist, I fear they will not be able/willing to redistribute AGI abundance.
Replies from: Jackson Wagner
↑ comment by Jackson Wagner · 2025-04-24T17:52:51.484Z · LW(p) · GW(p)
I think when it comes to the question of "who's more likely to use AGI to build fully-automated luxury communism", there are actually a lot of competing considerations on both sides, and it's not nearly as clear as you make it out to be.
Xi Jinping, the leader of the CCP, seems like kind of a mixed bag:
- On the one hand, I agree with you that Xi does seem to be a true believer in some elements of the core socialist dream of equality, common dignity for everyone, and improved lives for ordinary people. Hence his "Common Prosperity" campaign to reduce inequality, anti-corruption drives, bragging (in an exaggerated but still-commendable way) about having eliminated extreme poverty, etc. Having a fundamentally humanist outlook and not being an obvious psychopath / destructive idiot / etc is of course very important, and always reflects well on people who meet that description.
- On the other hand, as others have mentioned, the intense repression of Hong Kong, Tibet, and most of all Xinjiang, does not bode super well if we are thinking "who seems like a benevolent guy in which to entrust the future of human civilization". In terms of scale and intensity, the extent of the anti-Uyghur police state in Xinjiang seems beyond anything that the USA has done to their own citizens.
- More broadly, China generally seems to have less respect for individual freedoms, and instead positions themselves as governing for the benefit of the majority. (Much harsher covid lockdowns are an example of this, as is their reduced freedom of speech, fewer regulations protecting the environment or private property, etc. Arguably benefits have included things like faster pace of development, fewer covid deaths, etc.) This effect could cut both ways -- respect for individual freedoms is pretty important, but governing for the benefit of the majority is by definition gonna benefit most ordinary people if you do it well.
- Your comment kind of assumes that china = socialist and socialism = more willingness to "redistribute resources in an equitable manner". But Xi has taken pains to explain that he is very opposed to what he calls "welfarism" -- in his view, socialism doesn't involve China handing out subsidized healthcare, retirement benefits, etc to a "lazy" population, like we do here in the decadent west. This attitude might change in the future if AGI generates tons of wealth (right now they are probably afraid that Chinese versions of social security and medicare might blow a hole in the government budget, just like it is currently blowing a hole in the US budget)...
- ...But it also might not! Xi generally seems weirdly unconcerned with the day-to-day suffering of his people, not just in a "human rights abuses against minorities" sense, but also in the sense that he is always banning "decadent" forms of entertainment like videogames, boy bands, etc, telling young people to suck it up and "eat bitterness" because hardship builds character, etc.
- China has been very reluctant to do western-style consumer stimulus to revive their economy during recessions -- instead of helping consumers afford more household goods and luxuries, Xi usually wants to stimulate the economy by investing in instruments of national military/industrial might, subsidising strategic areas like nuclear power, aerospace, quantum computing, etc.
Meanwhile on the American side, I'd probably agree with you that the morality of America's current national leaders strikes me as... leaving much to be desired, to put it lightly. Personally, I would give Trump maybe only 1 or 1.5 points out of three on my earlier criteria of "fundamentally humanist outlook + not a psychopath + not a destructive idiot".
- But America has much more rule of law and more checks-and-balances than China (even as Trump is trying to degrade those things), so the future of AGI would perhaps not as much be solely in the hands of the one guy at the top.
- And also, more importantly IMO, America is a democracy, which means a lot can change every four years and the population will have more of an ability to give feedback to the government during the early years of AGI takeoff.
- In particular, beyond just swapping out the current political party for leaders from the other political party, I think that if ordinary people's economic position changed very dramatically due to the introduction of AGI, American politics would probably also shift very rapidly. Under those conditions, it actually seems pretty plausible that America could switch ideologies to some kind of Georgist / socialist UBI-state that just pays lip service to the idea of capitalism -- kinda like how China after Mao switched to a much more capitalistic system that just pays lip service ("socialism with Chinese characteristics") to many of the badly failed policies of Maoism. So I think the odds of "the US stays staunchly capitalist" are lower than the odds of "China stays staunchly whatever-it-is-currently", just because America will get a couple opportunities to radically change direction between now and whenever the long-term future of civilization gets locked in, whereas China might not.
- In contrast to our current national leaders, some of the leaders of top US AI labs strike me as having pretty awesome politics, honestly. Sam Altman, despite his numerous other flaws, is a Georgist and a longtime supporter of UBI, and explicitly wants to use AI to achieve a kind of socialist utopia. Dario Amodei's vision for the future of AI is similarly utopian and benevolent, going into great detail about how he hopes AI will help cure or prevent most illness (including mental illness), help people be the best versions of themselves, assist the economic development of poor countries, help solve international coordination problems to lead to greater peace on earth, etc. Demis Hassabis hasn't said as much (as far as I'm aware), but his team has the best track record of using AI to create real-world altruistic benefits for scientific and medical progress, such as by creating AlphaFold 3. Maybe this is all mere posturing from cynical billionaires. But if so, the posturing is quite detailed and nuanced, indicating that they've thought seriously about these views for a long time. By contrast, there is nothing like this coming out of DeepSeek (which is literally a Wall Street-style hedge fund combined with an AI lab!) or other Chinese AI labs.
Finally, I would note that you are basically raising concerns about humanity's "gradual disempowerment" through misaligned economic and political processes, AI concentration-of-power risks where a small cadre of capricious national leaders and insiders gets to decide the fate of humanity, etc. Per my other comment in this thread [LW(p) · GW(p)], these types of AI safety concerns seem like right now they are being discussed almost exclusively in the West, and not in China. (This particular gradual-disempowerment stuff seems even MORE lopsided in favor of the West, even compared to superintelligence / existential risk concerns in general, which are already more lopsided in favor of the West than the entire category of AI safety overall.) So... maybe give some weight to the idea that if you are worried about a big problem, the problem might be more likely to get solved in the country where people are talking about the problem!
↑ comment by ProgramCrafter (programcrafter) · 2025-04-20T20:56:58.001Z · LW(p) · GW(p)
Systemic opacity, state-driven censorship, and state control of the media means AGI development under direct or indirect CCP control would probably be less transparent than in the US, and the world may be less likely to learn about warning shots, wrongheaded decisions, reckless behaviour, etc.
That's screened off by actual evidence: top labs don't publish much no matter where they are. So I'd only agree with "equally opaque".
comment by martinkunev · 2025-04-19T22:01:26.387Z · LW(p) · GW(p)
To add to the discussion, my impression is that many people in the US believe they have some moral superiority or know what is good for other people. The whole "we need a Manhattan Project for AI" discourse is reminiscent of calling for global domination. Also, doing things for the public good is controversial in the US, as it can infringe on individual freedom.
This makes me really uncertain as to which AGI would be better (assuming somebody controls it).
Replies from: robo
↑ comment by robo · 2025-04-22T07:23:53.533Z · LW(p) · GW(p)
This is true, and it strongly influences the ways Americans think about how to provide public goods to the rest of the world. But they're thinking about how to provide public goods to the rest of the world[1]. "America First" is controversial in American intellectual circles, whereas in my (limited) conversations in China people are usually confused about what other sort of policy you would have.
- ^ Disclosure: I'm American; I came of age in this era.
comment by JenniferRM · 2025-04-21T05:12:46.090Z · LW(p) · GW(p)
I think there's a deep question here as to whether Trump is "America's true self finally being revealed" or just the insane but half predictable accident of a known-retarded "first past the post" voting system and an aging electorate that isn't super great at tracking reality.
I tend to think that Trump is aberrant relative to two important standards:
(1) No one like Trump would win an election with Ranked Ballots that were properly counted either via the Schulze method (which I tend to like) or the Borda method (which might have virtues I don't understand (yet! (growth mindset))). Someone that the vast majority of America thinks is reasonable and decent and wise would be selected by either method.
I grant that if you're looking at America from the outside as a black box, we're unlikely to change our voting method to something that isn't insanely broken any time soon, and so you could hold it against the overall polity that we are dangerously bad at selecting leaders... and unlikely to fix this fast enough to matter... but in terms of the basic decency of our median voter I think that Trump isn't strong evidence that we are morally degenerate sociopaths.
In fact, Americans tend to smile a lot, and donate to charity, and are generally quite reasonable, and don't want an empire, and quite like the idea of being a fair, tolerant, prosperous, just, non-racist shining city on a hill.
Americans created Wikipedia, give it away for free, and it runs on donations. If that impulse runs from the people directly into the AGI, then that's better rather than worse. (Assuming alignment is even real. If it isn't possible/easy/whatever then it doesn't matter which country builds "the alien monster that will inevitably kill us all without remorse given that it is very powerful and doesn't love us and doesn't even understand the concept of love".)
The CCP blocks access to Wikipedia by default: you have to use a VPN, which is illegal, but also >30% of the population uses these illegal VPNs, and also some VPNs are tolerated if they install backdoors for the CCP to spy on them. Fuck that noise.
(2) The broad material intellectual history of Rights-Respecting Democratic Republicanism is real, and being shat upon by Trump, but it still exists to drop into an LLM and give positive reinforcement for feeling good about that stuff and endorsing it.
America and Americans have often failed to live up to the ideals, but we also articulated those ideals, and also articulated the idea of "approximating them more and more successfully in real life over the course of history".
The White House was built by slaves, and then eventually slavery was outlawed, and black cultural integration proceeded decade by decade, in fits and starts, and eventually a descendant of slaves (though (to be clear) Michelle and Barry also had ancestors who owned slaves) moved in as the President and the First Lady. And everyone who was willing to talk about it in public was proud of this. Because the formal written ideals of our country are about god-given inalienable rights, including the right of everyone to own property and pursue happiness. The government can take your shit... but they have to do it via eminent domain and pay you fair value for it. (I grant that, if Obama is part of the evidence about America then so is Trump. Both, in my opinion, are in some deep senses "accidents of a terribly designed voting system", but I think Obama was a happy accident, and people clapped and wrote happy things about progress and fairness and justice afterwards. That writing is part of what goes into an American LLM, by default.)
By contrast, the CCP runs Uyghur Gulags right now and basically doesn't even apologize. They want to conquer Tibet, and Taiwan, and are proud of it. They violated the treaty with the UK, whereby the UK gave up Hong Kong fair and square (according to the letter of a treaty signed long ago) after the CCP promised to grant them rights to vote for their own city government in the way they were used to under UK guidance...
...and then there were brutal crackdowns and something like 10k people were thrown in secret prisons for trying to insist on those rights. At least Hitler was elected. The CCP weren't even elected. They seized power irregularly, through violence, authorized by the slogan that political power comes out of the barrel of a gun. They still formally oppose the concept of elections. The entire idea of "consent ethics" is foreign to the logic of their system.
And if the intelligence of the governing class is of any relevance to the likelihood of a positive outcome, um, CCP seems to have USG beat hands down.
Intelligence is only a positive sign when the agent that is intelligent cares about you.
If you are certain that they would murder you and take your shit if they could get away with that somehow, then intelligence is a worrying sign, because it gives them a better chance to realize their preference of murdering you and taking your shit.
Personally, I'm in favor of establishing a world government, with a proportionally representative parliament that elects a Condorcet prime minister.
From my perspective, unboxed ASI might very well be like first contact with aliens (from platospace rather than outerspace), and the outerspace aliens generally say "take me to your leaders" when they meet humans in stories, and... currently Earth has no such people to take them to! It'd be nice to fix this error in my opinion.
But the CCP will never endorse this, whereas quite a few Americans will notice that this is consistent with our founding ideals, that many of us still cherish, and be on board with offering such influence to the people of Earth in a fair and reasonable way.
Replies from: alexander-howell, Mitchell_Porter, Afterimage
↑ comment by Alexander Howell (alexander-howell) · 2025-04-23T13:41:30.591Z · LW(p) · GW(p)
I'm confused why electoral systems seem to be at the forefront of your thinking about the relevant pros and cons of US or Chinese domination of the future. Electoral systems do and can matter, but consider that all of the good stuff that happened in Anglo-America happened under first past the post as well, and all the bad stuff that happened elsewhere happened under whatever system they used (the Nazis came to power under proportional representation!).
Consider instead that Trump was elected with over 50% of the popular vote. Perhaps there are more fundamental cultural factors at play than the method used to count ballots.
Replies from: korin43, jmh, Pazzaz
↑ comment by Brendan Long (korin43) · 2025-04-23T20:31:25.933Z · LW(p) · GW(p)
Consider instead that Trump was elected with over 50% of the popular vote. Perhaps there are more fundamental cultural factors at play than the method used to count ballots.
Winning the popular vote in the current system doesn't tell you what would happen in a different system. This is the same mistake people make when they talk about who would have won if we didn't have an electoral college: If we had a different system, candidates would campaign differently and voters would vote differently.
↑ comment by jmh · 2025-04-27T02:11:15.732Z · LW(p) · GW(p)
I always find the use of "X% of the vote" in US elections to make some general point about overall acceptability or representation of the general public problematic. I agree it's a true statement, but it leaves out the important aspect of turnout.
I have to wonder if the USA in particular would be quite as divided if all the reporting gave the percentage of the eligible voting population rather than the percentage of votes cast. I think there is a big problem with just ignoring the non-vote information that is present (or expecting anyone to look it up and make the adjustments on their own).
But I agree, I'm not too sure just where electoral systems fall into this question of AGI/ASI first emerging under either the USA or CCP.
↑ comment by Pazzaz · 2025-04-26T11:21:51.368Z · LW(p) · GW(p)
Consider instead that Trump was elected with over 50% of the popular vote
But that's wrong. Trump received 49.8% of the votes.
Replies from: alexander-howell
↑ comment by Alexander Howell (alexander-howell) · 2025-04-28T13:12:04.792Z · LW(p) · GW(p)
Yes, my mistake. I meant Trump votes > Harris votes and forgot about third parties. On the other hand, 49.8% vs 50% + 1 feels semi-trivial when compared to, say, the UK, where Labour received 33.7% of the vote.
↑ comment by Mitchell_Porter · 2025-04-28T12:32:50.292Z · LW(p) · GW(p)
I can imagine an argument analogous to Eliezer's old graphic illustrating that it's a mistake to think of a superintelligence as Einstein in a box. I'm referring to the graphic where you have a line running from left to right, on the left you have chimp, ordinary person, Einstein all clustered together, and then far away on the other side, "superintelligence", the point being that superintelligence far transcends all three.
In the same way, the nature of the world when you have a power that great is so different that the differences among all human political systems diminish to almost nothing by comparison, they are just trivial reorderings of power relations among beings so puny as to be almost powerless. Neither the Chinese nor the American system is built to include intelligent agents with the power of a god, that's "out of distribution" for both the Communist Manifesto and the Federalist Papers.
Because of that, I find it genuinely difficult to infer from the nature of the political system, what the likely character of a superintelligence interested in humanity could be. I feel like contingencies of culture and individual psychology could end up being more important. So long as you have elements of humaneness and philosophical reflection in a culture, maybe you have a chance of human-friendly superintelligence emerging.
↑ comment by Afterimage · 2025-04-28T09:06:19.113Z · LW(p) · GW(p)
I notice you're talking a lot about the values of American people but only talk about what the leaders of China are doing or would do.
If you just compare both leaders' likelihood of enacting a world government, once again there is no clear winner.
And if the intelligence of the governing class is of any relevance to the likelihood of a positive outcome, um, CCP seems to have USG beat hands down.
--Intelligence is only a positive sign when the agent that is intelligent cares about you.
I'm interpreting this as "intelligence is irrelevant if the CCP doesn't care about you." Once again you need to show that Trump cares more about us (citizens of the world) than the CCP. As a non-American it is not clear to me that he does.
I think the best argument for America over China would be the idea that Trump will be replaced in under 4 years with someone much more ethical.
Replies from: JenniferRM
↑ comment by JenniferRM · 2025-04-28T20:26:18.727Z · LW(p) · GW(p)
Hello anonymous account that joined 2 months ago and might be a bot! I will respond to you extensively and in good faith! <3
Yes, I agree with your summary of my focus... Indeed, I think "focusing on the people and their culture" is consistent with a liberal society, freedom of conscience, etc., which are part of the American cultural package that restrains Trump, whose even-most-loyal minions have a "liberal Judeo-Christian constitutional cultural package" installed in their emotional settings, based on generations of familial cultures living in a free society with rule of law.
By contrast, "focusing on the leadership" is in fact consistent when reasoning about China, which has only ever had "something like a Liberal Rights-Respecting Democratic Republic" for a brief period from 1912 to 1949 and is currently being oppressed by an unelected totalitarian regime.
I'm not saying that Chinese people are spiritually or genetically incapable of caring about fairness and predictable leadership and freedom and wanting to engage in responsible self-rule and so on (Taiwan, for example, has many ethnically Chinese people, who speak a Chinese dialect, and had ancestors from China, and who hold elections, and have rule of law, and, indeed, from a distance, seems better run than America).
But for the last ~76 years, mainland China has raised human people whose cultural and institutional and moral vibe has been "power does as power wills and I should submit to that power".
And for the thousands of years before 1912 it was one Emperor after another, with brief periods of violence, where winning the violent struggle explicitly conferred legitimacy. There was no debate. There was no justice. There was only murdering one's political enemies better and faster than one could be murdered in pre-emptive response, and then long periods of feudal authoritarian rule by the best murderer's gang of murderers being submitted to by cowardly peasants. That's what the feudal system was everywhere there was feudalism. Rule by murderer... normalized into a cultural field that tolerates enormous hierarchical disparities in formal power.
In the meantime, tactically and practically, I believe that Chinese companies don't functionally exist unless they have political officers who are embedded in their management that report via a chain of command up to Xi.
I think these political officers are listened to very closely, because if the nominal owners do NOT listen to the top-down advice of their political officers, the officer can call in secret police to kidnap and torture the nominal owners of the company. And this practice is considered "legitimate" rather than a sign that Xi should be impeached and convicted of one of the various crimes he has surely committed (since almost everyone has committed some crimes under some plausible framing)...
...at least if the impeachment and conviction were going to be "technically lawful" (impeachment and conviction being a matter of politics in practice, according to the highest laws of the US, since it is decided by a vote of the Senate, though those same laws require the forms to be observed in the language of justice and a trial).
By contrast, I believe that (1) the Trump regime is not capable of installing any such meta-hierarchy of political officers in most US companies and (2) the US military under his command (who admittedly will mostly take his orders) will not be at the forefront of AGI R&D.
So in the US I expect the culture of researchers, and engineers, and their managers in free private industry to dominate much of what occurs (and they will roll their eyes about Trump, and wish he was less of a demented old man, and wish that his handlers told him "no" more often so his destruction of America would slow down), whereas in China I expect Xi can and will veto anything Xi wants to veto, and fund what he wants to fund, and the culture will tend to presubmit to his imagined tyrannical will automatically.
Also... speaking here to the wider audience of "people not in either country"... they might notice that China is almost entirely composed of Han Chinese. The one other significant ethnicity is currently in gulags. If the CCP is not ideologically racist, that would be a hopeful and surprising update to me, but it seems like they just straightforwardly are.
And if the CCP were to put "kin altruism for their kin and only their kin (as part of their 'extrapolated national volition'?)" into a powerful AGI or weak ASI, then a plausibly "calculatingly correct" step in the application of that utility function to a long-term shape for Earth... is just killing everyone on the planet who isn't Han.
By contrast, if you, Dear Reader, live in Nigeria, or Chile, or Vietnam, or Samoa, or Sicily, or Sri Lanka, or Iran, or Madagascar, or Serbia, or Ireland, or Lebanon, or Nepal, or Japan, or almost anywhere on Earth, there are "people with similar heritage to you" in America, who have mostly positive ethnic feelings about people "back in the old country" (that they feel mildly guilty about being too proud of in public with too much enthusiasm, because they don't want to seem racist).
Sundar and Satya are the CEOs of Google and Microsoft and both were born in India. The "other kind of Indians" (granting that they were subject to de facto genocide between 1500 and 1890 (and that was very sad)) have "respect for their treaties" re-affirmed by more-or-less the current SCOTUS.
Like: from what I can tell, America is uniquely meta-racial and across-centuries-aspirationally-lawful in its cultural morality?
I understand why many random normies who speak English might prefer China to win an AI race (if a race isn't just a race to building a thing that loves nothing, and calculates everything, and will murder all humans once it can safely do so)...
...a huge reason many normies have positive vibes towards the CCP is that their brain is being programmed by TikTok to have vague confused positive emotions in that direction.
I think TikTok's operations for the souls of America are bad and sad. They make its users anti-Semitic (for now), pro-Trump (for now), and pro-CCP (in general).
In my opinion, TikTok should be forced to sell to US owners, and all of its upper management should live outside of the possibility of being put in a gulag by Xi's minions.
They and their extended families who could be used as hostages should be offered citizenship and move to Hawaii or Seattle or whatever. Or let go. Or something that separates "algorithmic influence over US youth culture" from "the CCP's plans" with a good clean boundary [LW · GW]. If that doesn't happen, then TikTok should just be shut down.
TikTok should be purchased or shut down BEFORE the 2026 elections.
Also BEFORE the 2028 elections.
It probably won't be, because the US government is full of corrupt idiots. But it should be.
In my opinion, the Bill of Rights works for free humans as an overall package.
We can assemble as we like. We can contract as we like. We can speak as we like. We can hire (or not hire) Republicans or Democrats or whoever, as we like. We can form new partisan groups if we like (since neither the Whigs nor the Federalists are still around, and the Democrats and Republicans morph into totally new ways to slice up the political ideology space every ~25 years (trading sides on which one is rightwing and which is leftwing while redefining what left/right even mean as they go)).
((The Republicans are lately the party of chaotic uneducated poor people, and so I think they are the leftists... maybe? It's hard to say. The re-arrangement is in progress still.))
The freedoms that Americans enjoy extend all the way into the package of freedoms needed to organize into militias and throw off the yoke of an oppressive government.
This gives people in the US enormous powers of mutual self regulation via sublawful mechanisms, which is great according to the philosophy of civic communitarianism (which is a great philosophy).
If I say something truly horrific, people might shun me.
I won't be put in jail for saying it... but I might be invited to fewer birthday parties and picnics. Which is important... if I live near them. But I don't live near anyone who runs TikTok.
Suppose you have a free and sane and healthy people with freedoms that are pro-actively secured and expanded by a liberal democratic rights-respecting conscience-respecting consent-based institution with large governance reach.
If the reach of this institution isn't total, then "outside factors" that are inimical to freedom and liberalism-broadly-construed should not be granted the specific and narrow right of "the parts of the freedom package called 'free speech' that could help an algorithmically empowered tyrant have the ability to destroy the community's internal sense making and civic philosophic ideals and mutual care for each other without the operators of that 'algorithmic civic destruction program' having to face the consequences of having done that in a way that also exposes the attacker to the economic and social sanctions of the free people whose culture they are destroying from the outside".
So I think TikTok doesn't deserve free speech rights.
(Relatedly, I think that Citizens United was decided wrongly by the SCOTUS in 2010. I don't even think US corporations deserve "free speech rights". Just humans in the US with an amygdala and fear and hopes and dreams and family and so on. This is one of many many things that I think US jurisprudence systems have gotten wrong over the last 200 years.)
To expand and illustrate how the package of freedoms connects back to tech companies...
Larry and Sergey can make YouTube destroy the civic cohesion of America if they want (because most people can't resist trolling and trolling can destroy a community and trolls can be amplified any time YouTube chooses to amplify such stuff) but they grew up here, and so on. They just don't wanna do that.
Also, they won't be murdered by Trump if they say no to him proposing certain algorithmic tweaks that have a deep and sociologically coherent relation to abstract preferences for destroying America as a democratic polity full of health sane free people who believe in classical liberal respect for conscience and so on.
I basically trust Larry and Sergey, and think their conscience can protect America even despite America's own elected POTUS appearing to be trying to destroy America as fast as possible.
By contrast, with TikTok, Liang Rubo (founder, president of the board, and CEO) and/or Zhang Fuping (chief editor and a party secretary??) have wildly different incentives and institutional linkages. Also, they don't have Wikipedia pages. Also, they have incentives to hide that they are truly in power if they are truly in power. Maybe Shou Zi Chew is the real power (but even then, he is not a US citizen and neither are many of his employees)? Maybe the real CCP political officer has been swapped with a new one, without any announcement that is easily discernible from casual OSINT data? Ultimately, it all goes up to Xi, right?
Allowing TikTok to algorithmically reprogram the emotions and media expectations and world model of America's youth is insane. It is part of how we got Trump in the first place.
And the CCP's way of doing politico-economic business is legibly UNfriendly to people who love freedom, appreciate transparency, respect consent ethics, smile and clap when clean elections occur, dislike racist gulags, and so on.
Feel free to correct me on the freedom loving nature of Chinese people, if you think they hate freedom. (I just think they haven't tasted it except for very very briefly in the early 1900s, and so their language models will not have love of freedom baked into the latent semantic vector spaces.)
I sorta presume that the people of China do in fact yearn to hold open free and fair elections to select a wise and popular leader, like those that occur in Taiwan, and which occurred for their ancestors between 1912 and 1949.
I think the people of China are oppressed, and I wish them prosperity and freedom and happy futures. Maybe a benevolent ASI will be able to find some weird set of deals and adjustments to get that for them without too much tragedy... if they want it?
But ultimately I have no way of knowing, since right now the people of China are oppressed by the CCP, and would be put in jail for saying what kind of governance they actually positively would prefer, and there are no trivially trustable polls (though it isn't absolute), and I suspect that anyone who tried to run a real poll independently of CCP oversight would probably be put in jail?
If you have insight into the "real political culture" of actual human individuals in China (who I think will currently be thrown in jail if they protest in favor of being allowed to protest (like the Hong Kong protesters were)) then I'm open to being educated :-)
Replies from: Mitchell_Porter, Afterimage
↑ comment by Mitchell_Porter · 2025-04-29T10:58:29.146Z · LW(p) · GW(p)
Your comment has made me think rather hard on the nature of China and America. The two countries definitely have different political philosophies. On the question of how to avoid dictatorship, you could say that the American system relies on representation of the individual via the vote, whereas the Chinese system relies on representation of the masses via the party. If an American leader becomes an unpopular dictator, American individuals will vote them out; if a Chinese leader becomes an unpopular dictator, the Chinese masses will force the party back on track.
Even before these modern political philosophies, the old world recognized that popular discontent could be justified. That's the other side of the mandate of heaven: when a ruler is oppressive, the mandate is withdrawn, and revolt is justified. Power in the world of monarchs and emperors was not just about who's the better killer; there was a moral dimension, just as democratic elections are not just a matter of who has the most donors and the best public relations.
Returning to the present and going into more detail, America is, let's say, a constitutional democratic republic in which a party system emerged. There's a tension between the democratic aspect (will of the people) and the republican aspect (rights of the individual), which crystallized into an opposition found in the very names of the two main parties; though in the Obama-Trump era, the ideologies of the two parties evolved to transnational progressivism and populist nationalism.
These two ideologies had a different attitude to the unipolar world-system that America acquired, first by inheriting the oceans from the British empire, and then by outlasting the Russian communist alternative to liberal democracy, in the ideological Cold War. For about two decades, the world system was one of negotiated capitalist trade among sovereign nations, with America as the "world police" and also a promoter of universal democracy. In the 2010s, this broke down as progressivism took over American institutions, including its external relations, and world regions outside the West increasingly asserted their independence of American values. The appearance of populist nationalism inside America makes sense as a reaction to this situation, and in the 2020s we're seeing how that ideology acts within the world system: America is conceived as the strongest great power, acting primarily in the national interest, with a nature and a heritage that it will not try to universalize.
So that's our world now. Europe and its offshoots conquered the world, but imperialism was replaced by nationalism, and we got the United Nations world of several great powers and several hundred nations. America is the strongest, but the other great powers are now following their own values, and the strongest among the others is China. America is a young offspring of Europe on a separate continent, modern China is the latest iteration of civilization on its ancient territory. The American political philosophy is an evolution of some ancient European ideas; the Chinese political philosophy is an indigenous adaptation of an anti-systemic philosophy from modern Europe.
One thing about Chinese Marxism that is different from the old Russian Marxism, is that it is more "voluntarist". Mao regarded Russian Marxism as too mechanical in its understanding of history; according to Mao, the will of the people and the choices of their political leadership can make a difference to events. I see an echo of this in the way that every new secretary of the Chinese Communist Party has to bring some new contribution to Marxist thought, most recently "Xi Jinping Thought". The party leader also has to be the foremost "thought leader" in Chinese Marxism, or they must at least lend their name to the ideological state of the art (Wang Huning is widely regarded as the main Chinese ideologist of the present). This helps me to understand the relationship between the party and the state institutions. The institutions manage society and have concrete responsibilities, while the party determines and enforces the politically correct philosophy (analogous to the role that some now assign to the Ivy League universities in America).
I've written all this to explain in greater detail, the thinking which I believe actually governs China. To just call China an oppressive dictatorship, is to miss the actual logic of its politics. There are certainly challenges to its ideology. For example, the communist ideology was originally meant to ensure that the country was governed in the interest of the worker and peasant classes. But with the tolerance of private enterprise, more and more people become the kind of calculating individual agent you have under capitalism, and arguably representative democracy is more suited to such a society.
One political scientist argues that ardor for revolutionary values died with Mao, leaving a void which is filled partly by traditional values and partly by liberal values. Perhaps it's analogous to how America's current parties and their ideologies are competing for control of a system that (at least since FDR) was built around liberal values; except that in China, instead of separate parties, you have factions within the CCP. In any case, China hasn't tilted to Falun Gong traditionalism or State Department democratization, instead Xi Jinping Thought has reasserted the centrality of the party to Chinese stability and progress.
Again, I'm writing this so we can have a slightly more concrete discussion of China. There's also a bunch of minor details in your account that I believe are wrong. For example, "Nationalist China" (the political order on the Chinese mainland, between the last dynasty and the communist victory) did not have regular elections as far as I know. They got a parliament together at the very beginning, and then that parliament remained unchanged until they retreated to Taiwan (they were busy with famines, warlordism, and the Japanese invasion); and then Taiwan remained a military-run regime for forty years. The Uighurs are far from being the only significant ethnic group apart from the Han, there are several others of the same size. Zhang Yiming and Rubo Liang are executives from Bytedance, the parent company of Tiktok (consider the relationship between Google/Alphabet and YouTube); I think Zhang is the richest man in China, incidentally.
I could also do more to explain Chinese actions that westerners find objectionable, or dig up the "necessary evils" that the West itself carries out. But then we'd be here all day. I think I do agree that American society is politically friendlier to the individual than Chinese society; and also that American culture, in its vast complexity, contains many many valuable things (among which I would count, not just notions like rule of law, human rights, and various forms of respect for human subjectivity, but also the very existence of futurist and transhumanist subcultures; they may not be part of the mainstream, but it's significant that they get to exist at all).
But I wanted to emphasize that China is not just some arbitrary tyranny. It has its freedoms, it has its own checks and balances, it has its own geopolitical coalitions (e.g. BRICS) united by a desire to flourish without American dependence or intervention. It's not a hermit kingdom that tunes out the world (witness, for example, the frequency with which very western-sounding attitudes emerge from their AIs, because of the training data that they have used). If superintelligence does first emerge within China's political and cultural matrix, it has a chance of being human-friendly; it will just have arrived at that attractor from a different starting point, compared to the West.
↑ comment by Afterimage · 2025-04-28T22:05:49.100Z · LW(p) · GW(p)
Thanks for the reply, you'll be happy to know I'm not a bot. I actually mostly agree with everything you wrote so apologies if I don't reply as extensively as you have.
There's no doubt the CCP are oppressing the Chinese people. I've never used TikTok and never intend to (and I think it's being used as a propaganda machine). I agree that Americans have far more freedom of speech and company freedom than in China. I even think it's quite clear that Americans will be better off with Americans winning the AI race.
The reason I am cautious boils down to believing that as AI capabilities get close to ASI or powerful AI, governments (both US and Chinese) will step in and basically take control of the projects. Imagine if the nuclear bomb was first developed by a private company, they are going to get no say in how it is used. This would be harder in the US than in China but it would seem naive to assume it can't be done.
If this powerful AI is able to be steered by these governments, when imagining Trump's decisions VS Xi's in this situation it seems quite negative either way and I'm having trouble seeing a positive outcome for the non-American, non-Chinese people.
On balance, America has the edge, but it's not a hopeful situation if powerful AI appears in the next 4 years. Like I said, I'm mostly concerned about the current leadership, not the American people's values.
comment by sanyer (santeri-koivula) · 2025-04-19T18:29:57.629Z · LW(p) · GW(p)
I don't think it's possible to align AGI with democracy. AGI, or at least ASI, is an inherently political technology. The power structures that ASI creates within a democratic system would likely destroy the system from within. Whichever group would end up controlling an ASI would get decisive strategic advantage over everyone else within the country, which would undermine the checks and balances that make democracy a democracy.
Replies from: mateusz-baginski, MakoYass
↑ comment by Mateusz Bagiński (mateusz-baginski) · 2025-04-20T13:51:27.331Z · LW(p) · GW(p)
To steelman a devil's advocate: If your intent-aligned AGI/ASI went something like
oh, people want the world to be according to their preferences but whatever normative system one subscribes to, the current implicit preference aggregation method is woefully suboptimal, so let me move the world's systems to this other preference aggregation method which is much more nearly-Pareto-over-normative-uncertainty-optimal than the current preference aggregation method
and this would be, in an important sense, more democratic, because the people (/demos) would have more influence over their societies.
Replies from: santeri-koivula
↑ comment by sanyer (santeri-koivula) · 2025-04-21T12:56:34.453Z · LW(p) · GW(p)
Yeah, I can see why that's possible. But I wasn't really talking about the improbable scenario where ASI would be aligned to the whole of humanity/country, but about a scenario where ASI is 'narrowly aligned' in the sense that it's aligned to its creators/whoever controls it when it's created. This is IMO much more likely to happen since technologies are not created in a vacuum.
↑ comment by mako yass (MakoYass) · 2025-04-20T22:40:06.360Z · LW(p) · GW(p)
I think it's pretty straightforward to define what it would mean to align AGI with what democracy actually is supposed to be (the aggregate of preferences of the subjects, with an equal weighting for all) but hard to align it with the incredibly flawed american implementation of democracy, if that's what you mean?
The american system cannot be said to represent democracy well. It's intensely majoritarian at best, feudal at worst (since the parties stopped having primaries), indirect and so prone to regulatory capture, inefficient and opaque. I really hope no one's taking it as their definitional example of democracy.
Replies from: santeri-koivula
↑ comment by sanyer (santeri-koivula) · 2025-04-21T12:47:06.285Z · LW(p) · GW(p)
No, I wasn't really talking about any specific implementation of democracy. My point was that, given the vast power that ASI grants to whoever controls it, the traditional checks and balances would be undermined.
Now, regarding your point that aligning AGI with what democracy is actually supposed to be, I have two objections:
- To me, it's not clear at all why it would be straightforward to align AGI with some 'democratic ideal'. Arrow's impossibility theorem shows that no perfect voting system exists, so an AGI trying to implement the "perfect democracy" will eventually have to make value judgments about which democratic principles to prioritize (although I do think that an AGI could, in principle, help us find ways to improve upon our democracies).
- Even if aligning AGI with democracy would in principle be possible, we need to look at the political reality the technology will emerge from. I don't think it's likely that whichever group that would end up controlling AGI would willingly want to extend its alignment to other groups of people.
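(To make the Arrow point concrete: the classic Condorcet cycle shows why even plain majority rule can fail to produce a coherent collective ranking. This is just an illustrative sketch with hypothetical ballots, not anything about a particular AGI design.)

```python
# Three hypothetical voters, each ranking options A, B, C (best first).
ballots = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a majority of ballots rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# Pairwise majorities cycle: A beats B, B beats C, C beats A,
# so "the will of the people" has no consistent ordering here.
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    assert majority_prefers(x, y)
print("Condorcet cycle: A>B, B>C, C>A")
```

Any AGI asked to implement "the democratic outcome" on such preferences has to break the cycle somehow, and how it breaks it is exactly the kind of value judgment I mean.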
↑ comment by mako yass (MakoYass) · 2025-04-21T21:07:18.702Z · LW(p) · GW(p)
2: I think you're probably wrong about the political reality of the groups in question. To not share AGI with the public is a bright line. For most of the leading players it would require building a group of AI researchers within the company who are all implausibly willing to cross a line that says "this is straight up horrible, evil, illegal, and dangerous for you personally", while still being capable enough to lead the race, while also having implausible levels of mutual trust that no one would try to cut others out of the deal at the last second (despite the fact that the group's purpose is cutting most of humanity out of the deal), to trust that no one would back out and whistleblow, and it also requires an implausible level of secrecy to make sure state actors wont find out.
It would require a probably actually impossible cultural discontinuity and organization structure.
It's more conceivable to me that a lone CEO might try to do it via a backdoor. Something that mostly wasn't built on purpose, and that no one else in the company is aware could or would be used that way. But as soon as the conspiracy consists of more than one person...
Replies from: santeri-koivula
↑ comment by sanyer (santeri-koivula) · 2025-04-24T12:00:35.272Z · LW(p) · GW(p)
I think there are several potential paths of AGI leading to authoritarianism.
For example consider AGI in military contexts: people might be unwilling to let it make very autonomous decisions, and on that basis, military leaders could justify that these systems be loyal to them even in situations where it would be good for the AI to disobey orders.
Regarding your point about the requirement of building a group of AI researchers, these researchers could be AIs themselves. These AIs could be ordered to make future AI systems secretly loyal to the CEO. Consider e.g. this scenario (from Box 2 in Forethought's new paper):
In 2030, the US government launches Project Prometheus—centralising frontier AI development and compute under a single authority. The aim: develop superintelligence and use it to safeguard US national security interests. Dr. Nathan Reeves is appointed to lead the project and given very broad authority.
After developing an AI system capable of improving itself, Reeves gradually replaces human researchers with AI systems that answer only to him. Instead of working with dozens of human teams, Reeves now issues commands directly to an army of singularly loyal AI systems designing next-generation algorithms and neural architectures.
Approaching superintelligence, Reeves fears that Pentagon officials will weaponise his technology. His AI advisor, to which he has exclusive access, provides the solution: engineer all future systems to be secretly loyal to Reeves personally.
Reeves orders his AI workforce to embed this backdoor in all new systems, and each subsequent AI generation meticulously transfers it to its successors. Despite rigorous security testing, no outside organisation can detect these sophisticated backdoors—Project Prometheus' capabilities have eclipsed all competitors. Soon, the US military is deploying drones, tanks, and communication networks which are all secretly loyal to Reeves himself.
When the President attempts to escalate conflict with a foreign power, Reeves orders combat robots to surround the White House. Military leaders, unable to countermand the automated systems, watch helplessly as Reeves declares himself head of state, promising a "more rational governance structure" for the new era.
Relatedly, I'm curious what you think of that paper and the different scenarios they present.
↑ comment by mako yass (MakoYass) · 2025-04-21T20:11:04.725Z · LW(p) · GW(p)
1: The best approach to aggregating preferences [LW · GW] doesn't involve voting systems.
You could regard carefully controlling one's expression of one's utility function as being like a vote, and so subject to that blight of strategic voting. In general, people have an incentive to understate their preferences about scenarios they consider unlikely (and vice versa), which influences the probability of those outcomes in unpredictable ways and fouls their strategy; or to understate valuations when buying and overstate when selling. This may add up to a game that cannot be played well, a coordination problem, outcomes no one wanted.
But I don't think humans are all that guileful about how they express their utility function. Most of them have never actually expressed a utility function before, it's not easy to do, it's not like checking a box on a list of 20 names. People know it's a game that can barely be played even in ordinary friendships, people don't know how to lie strategically about their preferences to the youtube recommender system, let alone their neural lace.
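(The strategic-misreporting problem above can be sketched with a toy Borda count. The ballots and the choice of Borda as the aggregation rule are purely illustrative assumptions: two voters who "bury" their true second choice flip the collective outcome.)

```python
def borda_winner(ballots):
    """Borda count: with 3 candidates, 1st place earns 2 points, 2nd earns 1, 3rd earns 0."""
    scores = {}
    for ballot in ballots:
        for points, cand in zip(range(len(ballot) - 1, -1, -1), ballot):
            scores[cand] = scores.get(cand, 0) + points
    return max(scores, key=scores.get)

# Sincere ballots: two voters prefer A>B>C, three prefer B>A>C; B wins.
honest = [["A", "B", "C"]] * 2 + [["B", "A", "C"]] * 3

# The two A-supporters misreport, burying B below C; the winner flips to A.
strategic = [["A", "C", "B"]] * 2 + [["B", "A", "C"]] * 3

print(borda_winner(honest))     # sincere outcome: B
print(borda_winner(strategic))  # strategic outcome: A
```

The point of the comment stands either way: pulling off this kind of manipulation requires a precision about one's own and others' preferences that most humans don't have.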
comment by Rafael Harth (sil-ver) · 2025-04-19T15:54:37.022Z · LW(p) · GW(p)
I've also noticed this assumption. I myself don't have it, at all. My first thought has always been something like "If we actually get AGI then preventing terrible outcomes will probably require drastic actions and if anything I have less faith in the US government to take those". Which is a pretty different approach from just assuming that AGI being developed by a government will automatically lead to a world with that government's values. But this is a very uncertain take and it wouldn't surprise me if someone smart could change my mind pretty quickly.
comment by robo · 2025-04-20T09:13:37.490Z · LW(p) · GW(p)
There's more variance within countries than between countries. Where did the disruptive upstart that cares about Free Software[1] come from? China. Is that because China is more libertarian than the US? No, it's because there's a wide variance in both the US and China and by chance the most software-libertarian company was Chinese. Don't treat countries like point estimates.
[1] Free as in freedom, not as in beer
↑ comment by robo · 2025-04-21T08:33:08.566Z · LW(p) · GW(p)
(Counterpoint: for big groups like bureaucracies, intra-country variances can average out. I do think we can predict that a group of 100 random Americans writing an AI constitution would place more value on political self-determination and less on political unity than a similar group of Chinese.)
Replies from: higgsdj1@gmail.com
↑ comment by David J Higgs (higgsdj1@gmail.com) · 2025-04-25T20:46:53.510Z · LW(p) · GW(p)
Counter-counterpoint: big groups like bureaucracies are not composed of randomly selected individuals from their respective countries. I strongly doubt that say, 100 randomly selected Google employees (the largest plausible bureaucracy that might potentially develop AGI in the very near term future?) would answer extremely similarly to 100 randomly selected Americans.
Of course, in the only moderately near term or median future, something like a Manhattan Project for AI could produce an AGI. This would still not be identical to 100 random Americans, but averaging across the US security & intelligence apparatus, the current political facing portion of the US executive administration, and the leadership + relevant employee influence from a (mandatory?) collaboration of US frontier labs would be significantly closer on average. I think it would at least be closer to average Americans than a CCP Centralized AGI Project would be to average Chinese people, although I admit I'm not very knowledgeable on the gap between Chinese leadership and average Chinese people other than basics like (somewhat) widespread VPN usage.
comment by Ram Potham (ram-potham) · 2025-04-20T16:04:08.308Z · LW(p) · GW(p)
Based on previous data, it's plausible that a CCP AGI will perform worse on safety benchmarks than a US AGI. Take the Cisco HarmBench evaluation results:
- DeepSeek R1: Demonstrated a 100% failure rate in blocking harmful prompts according to Anthropic's safety tests.
- OpenAI GPT-4o: Showed an 86% failure rate in the same tests, indicating better but still concerning gaps in safety measures.
- Meta Llama-3.1-405B: Had a 96% failure rate, performing slightly better than DeepSeek but worse than OpenAI.
Though, if it were just the CCP making AGI, or just the US making AGI, it might be better because it'd reduce competitive pressures.
But, due to competitive pressures and investments like Stargate, the AGI timeline is accelerated, and the first AGI model may not perform well on safety benchmarks.
Replies from: LosPolloFowler
↑ comment by Stephen Fowler (LosPolloFowler) · 2025-04-24T01:53:59.536Z · LW(p) · GW(p)
You have conflated two separate evaluations, both mentioned in the TechCrunch article.
The percentages you quoted come from Cisco’s HarmBench evaluation of multiple frontier models, not from Anthropic and were not specific to bioweapons.
Dario Amodei stated that an unnamed DeepSeek variant performed worst on bioweapons prompts, but offered no quantitative data. Separately, Cisco reported that DeepSeek-R1 failed to block 100% of harmful prompts, while Meta's Llama 3.1 405B and OpenAI's GPT-4o failed at 96% and 86%, respectively.
When we look at performance breakdown by Cisco, we see that all 3 models performed equally badly on chemical/biological safety.
↑ comment by Ram Potham (ram-potham) · 2025-04-25T23:01:45.990Z · LW(p) · GW(p)
Thanks, updated the comment to be more accurate
comment by Darklight · 2025-04-19T14:56:41.477Z · LW(p) · GW(p)
It seems like it would depend pretty strongly on which side you view as having a closer alignment with human values generally. That probably depends a lot on your worldview and it would be very hard to be unbiased about this.
There was actually a post about almost this exact question [EA · GW] on the EA Forums a while back. You may want to peruse some of the comments there.
Replies from: caleb-biddulph
↑ comment by Caleb Biddulph (caleb-biddulph) · 2025-04-20T02:43:55.540Z · LW(p) · GW(p)
Side note - it seems there's an unofficial norm: post about AI safety in LessWrong, post about all other EA stuff in the EA Forum. You can cross-post your AI stuff to the EA Forum if you want, but most people don't.
I feel like this is pretty confusing. There was a time that I didn't read LessWrong because I considered myself an AI-safety-focused EA but not a rationalist, until I heard somebody mention this norm. If we encouraged more cross-posting of AI stuff (or at least made the current norm more explicit), maybe we wouldn't get near-duplicate posts like these two.
comment by Dea L (Dea) · 2025-04-21T00:30:12.485Z · LW(p) · GW(p)
I'm very happy this post is getting traction, because I think spotlighting and questioning these invisible assumptions should become standard practice if we want to raise the epistemic quality of AI safety discourse. Especially since these assumptions tangibly translate into agendas and real-world policy.
I must say that I find it troubling how often I see people accept the implicit narrative that “CCP AGI < USG AGI” as an obvious truth. Such a high-stakes assumption should first be made explicit, and then justified on the basis of sound epistemics. The burden of justifying these assumptions should lie on the people who invoke them, and I think AI Safety discourse's epistemic quality would benefit greatly if we called out those who fail to state and justify their underlying assumptions (a virtuous cycle, I hope).
Similarly, I think it's very detrimental for terms like “AGI with Western values” or “aligned with democracy” (implied positive valences) to circulate without their authors providing operational clarity. On this note, I think it quite important that the AI Safety community isn't co-opted by their respective governments' halo terms or applause lights [LW · GW]; let's leave it to politicians and AI company leaders to be rhetorically potent but epistemically hollow, and bar their memetic trojan horses from entering our gates. As such, I staunchly advocate for these phrases to be tabooed until defined precisely, or else they function as proxies for geopolitical affiliation (useful for political agendas) rather than as technical descriptions of value alignment architecture (useful for AI Safety's epistemic quality, ideally upstream of political agenda setting).
Perhaps more foundationally, we should take a step back and interrogate the notion of "civilizational-value binaries" that sneak into such thinking/discourse. The “Western vs Asian values” framing often relies on cached stereotypes [LW · GW]. E.g “Western = liberal/democratic” and “Asian = authoritarian/hierarchical.”
This is where the actual empirical literature becomes useful:
A direct empirical test of this idea was conducted by Christian Welzel (2011) using World Values Survey data from 87 countries, including 15 in Asia. His study, “The Asian Values Thesis Revisited”, asked whether people in Asia show a cultural immunity to “emancipative values” like personal autonomy, gender equality, freedom of expression, and liberal democracy — even under modernization.
His findings decisively refute the claim that Asian cultures are categorically resistant to these values:
- Japan ranks above the US and UK in support for emancipative values.
- East Asians with higher education levels support liberal-democratic values just as strongly as Westerners — sometimes more so.
- The apparent East–West divide disappears once you control for knowledge development (education, access to information, scientific output).
- In multilevel regression models, the “Asia vs West” distinction becomes statistically insignificant once development indicators are accounted for.
TLDR: The civilizational-value binary dissolves under empirical scrutiny. What predicts support for so-called “emancipative values” is not cultural origin, but development indicators like education and access to knowledge infrastructure.
Regardless of whether one endorses these values as desirable, I think the key point is this: those “emancipative values” are not exclusive to the West, and their presence is developmental, not civilizational. Accordingly, this breaks the frame that AGI systems trained in ‘the West’ are inherently more likely to reflect what are assumed to be morally superior goals.
If the belief that “CCP AGI < USG AGI” is riding on a fuzzy vibes logic that “Western AGI = liberal democracy values = better for... all humans on the planet(?)”, we should spotlight and scrutinize it.
Useful truth-seeking, scrutinizing questions might be:
- What values are you actually gesturing to here?
- How are these values defined, implemented during development, enforced institutionally, and ultimately reflected in AGI behavior and the governance systems surrounding it?
- initially carried out by the actors + institutions responsible for creating and deploying the AGI, and later, by both the AGI itself and the governance structures designed to guide or constrain it
- Compared to a USG AGI, what do we actually know about the CCP’s institutional architecture, AGI-specific value-setting processes, and incentive structures etc — such that we can confidently say a CCP-developed AGI would lead to worse outcomes for... all humans across the planet(?)[1] ?
Until we can answer that last question, the claim that “CCP AGI is worse” remains not an argument, but an assumption.
- ^
When people say CCP AGI is worse, they don't often specify "worse for who".
I'm left to guess. At the end of the AGI day, who really wins and who really loses?
comment by MattJ · 2025-04-19T22:03:25.094Z · LW(p) · GW(p)
We don’t want an ASI to be “democratic”. We want it to be “moral”. Many people in the West conflate the two words, thinking that democratic and moral are the same thing, but they are not. Democracy is a certain system of organizing a state. Morality is how people and (in the future) an ASI behave towards one another.
There are no obvious reasons why an autocratic state would care more or less about a future ASI being immoral, but an argument can be made that autocratic states will be more cautious and put more restrictions on the development of an ASI, because autocrats usually fear any kind of opposition, and an ASI could be a powerful adversary in its own right or in the hands of powerful competitors.
Replies from: adele-lopez-1, GeneSmith
↑ comment by Adele Lopez (adele-lopez-1) · 2025-04-20T01:26:49.440Z · LW(p) · GW(p)
I think "democratic" is often used to mean a system where everyone is given a meaningful (and roughly equal) weight into it decisions. People should probably use more precise language if that's what they mean, but I do think it is often the implicit assumption.
And that quality is sort of prior to the meaning of "moral", in that any weighted group of people (probably) defines a specific morality - according to their values, beliefs, and preferences. The morality of a small tribe may deem it as a matter of grave importance whether a certain rock has been touched by a woman, but barely anyone else truly cares (i.e. would still care if the tribe completely abandoned this position for endogenous reasons). A morality is more or less democratic to the extent that it weights everyone equally in this sense.
I do want ASI to be "democratic" in this sense.
↑ comment by GeneSmith · 2025-04-20T23:52:36.056Z · LW(p) · GW(p)
I'm not sure I buy that they will be more cautious in the context of an "arms race" with a foreign power. The Soviet Union took a lot of risks with their bioweapons program during the Cold War.
My impression is that the CCP's number one objective is preserving their own power over China. If they think creating ASI will help them with that, I fully expect them to pursue it (and in fact to make it their number one objective).
comment by Ben (ben-lang) · 2025-04-21T16:03:24.341Z · LW(p) · GW(p)
One very important consideration is whether they hold values that they believe are universalist, or merely locally appropriate.
For example, a Chinese AI might believe the following: "Confucianist thought is very good for Chinese people living in China. People in other countries can have their own worse philosophies, and that is fine so long as they aren't doing any harm to China, its people or its interests. Those idiots could probably do better by copying China, but frankly it might be better if they stick to their barbarian ways so that they remain too weak to pose a threat."
Now, the USA AI thinks: "Democracy is good. Not just for Americans living in America, but also for everyone living anywhere. Even if they never interact ever again with America or its allies the Taliban are still a problem that needs solving. Their ideology needs to be confronted, not just ignored and left to fester."
The sort of thing America says it stands for is much more appealing to me than a lot of what the Chinese government does. (I like the government being accountable to the people it serves - which of course entails democracy, a free press and so on). But, my impression is that American values are held to be Universal Truths, not uniquely American idiosyncratic features, which makes the possibility of the maximally bad outcome (worldwide domination by a single power) higher.
comment by Mitchell_Porter · 2025-04-20T04:46:34.793Z · LW(p) · GW(p)
What would it mean for an AGI to be aligned with "Democracy," or "Confucianism," or "Marxism with Chinese characteristics," or "the American Constitution"? Contingent on a world where such an entity exists and is compatible with my existence, what would my life be like in a weird transhuman future as a non-citizen in each system?
None of these philosophies or ideologies was created with an interplanetary transhuman order in mind, so to some extent a superintelligent AI guided by them will find itself "out of distribution" when deciding what to do. And how that turns out should depend on underlying features of the AGI's thought - how it reasons and how it deals with ontological crisis. We could in fact do some experiments along these lines - tell an existing frontier AI to suppose that it is guided by historic human systems like these, and ask how it might reinterpret the central concepts in order to deal with being in a situation of relative omnipotence.
Supposing that the human culture of America and China is also a clue to the world that their AIs would build when unleashed, one could look to their science fiction for paradigms of life under cosmic circumstances. The West has lots of science fiction, but the one we keep returning to in the context of AI, is the Culture universe of Iain Banks. As for China, we know about Liu Cixin ("Three-Body Problem" series), and I also dwell on the xianxia novels of Er Gen, which are fantasy but do depict a kind of politics of omnipotence.
comment by Vladimir_Nesov · 2025-04-19T15:49:28.763Z · LW(p) · GW(p)
The state of the geopolitical board will influence how the pre-ASI chaos unfolds, and how the pre-ASI AGIs behave. Less plausibly intentions of the humans in charge might influence something about the path-dependent characteristics of ASI (by the time it takes control). But given the state of the "science" and lack of the will to be appropriately cautious and wait a few centuries before taking the leap, it seems more likely that the outcome will be randomly sampled from approximately the same distribution regardless of who sets off the intelligence explosion.
comment by Hyperion · 2025-04-20T00:01:08.047Z · LW(p) · GW(p)
There's also the possibility that a CCP AGI can only happen through being trained on Western data to some extent (i.e., the English-language internet), because otherwise they can't scale data enough. This implies it would probably be a "Marxism with Chinese characteristics [with American characteristics]" AI, since training on Western data seems to raise the difficulty of the "alignment to CCP values" technical challenge a lot.
comment by uugr · 2025-04-21T19:27:53.686Z · LW(p) · GW(p)
I'm relieved not to be the only one wondering about this.
I know this particular thread is granting that "AGI will be aligned with the national interest of a great power", but that assumption also seems very questionable to me. Is there another discussion somewhere of whether it's likely that AGI values cleave on the level of national interest, rather than narrower (whichever half-dozen guys are in the room during a FOOM) or broader (international internet-using public opinion) levels?
comment by Polar · 2025-04-20T18:37:34.857Z · LW(p) · GW(p)
From an individual person perspective, less authoritarian ASI is better. "Authoritarian" measure here means the degree it allows itself to restrict your freedoms.
The implicit assumption here, as I understand it, is that a Chinese ASI would be more authoritarian than a US one. That may not be a correct assumption, as the US has proven willing to do fairly heinous things to domestic (spying on) and foreign (mass murdering) citizens.
comment by lemonhope (lcmgcd) · 2025-04-20T03:19:34.374Z · LW(p) · GW(p)
I'm guessing you live in a country with a US military base? Are you more free than the average Chinese citizen?
Replies from: Bjartur Tómas
↑ comment by Tomás B. (Bjartur Tómas) · 2025-04-20T03:34:37.777Z · LW(p) · GW(p)
I am unsure how free the average Chinese person is, nor how to weigh freedom of speech against certain economic freedoms, competent local government, low crime, the tendency of modern democracies to rent-seek from the young in favour of the old, zoning laws, restrictions on industrial development, and a student loan system that seems to be a weird form of indenture. I do come from a country with rather strict hate speech laws. And we do not, in fact, have freedom of speech by any strict definition. And this is a policy American elites in and out of government strongly approve of.
I ask out of relative ignorance of what life in China is like for the average Chinese person, but with a slight suspicion that we might be defining our western notion of 'freedom' in such a way that ignores the many ways we are restricted and extracted from, and ways in which the average Chinese may be more free.
It's very clear the CCP has committed far larger crimes against its people in living memory. But it is also a very different organization than it was at its worst.
I think the question is still worth asking. And the argument worth justifying.
Replies from: lcmgcd, amaury-lorin
↑ comment by lemonhope (lcmgcd) · 2025-04-20T04:18:53.624Z · LW(p) · GW(p)
Makes sense. Those were real questions, to be clear.
↑ comment by momom2 (amaury-lorin) · 2025-04-24T08:19:19.660Z · LW(p) · GW(p)
My experience interacting with Chinese people is that they have to constantly mind the censorship in a way that I would find abhorrent and mentally taxing if I had to live in their system. Though given there are many benefits to living in China (mostly quality of life and personal safety), I'm unconvinced that I prefer my own government all things considered.
But for the purpose of developing AGI, there's a lot more variance in possible outcomes (a higher likelihood of both S-risk and a benevolent singleton) if the CCP gets a lead rather than the US.
comment by AnthonyC · 2025-04-19T19:55:21.043Z · LW(p) · GW(p)
As things stand today, if AGI is created (aligned or not) in the US, it won't be by the USG or agents of the USG. It'll be by a private or public company. Depending on the path to get there, there will be more or less USG influence of some sort. But if we're going to assume the AGI is aligned to something deliberate, I wouldn't assume AGI built in the US is aligned to the current administration, or at least significantly less so than the degree to which I'd assume AGI built in China by a Chinese company would be aligned to the current CCP.
For more concrete reasons regarding national ideals, the US has a stronger tradition of self-determination and shifting values over time, plausibly reducing risk of lock-in. It has a stronger tradition (modern conservative politics notwithstanding) of immigration and openness.
In other words, it matters a lot whether the aligned US-built AGI is aligned to the Trump administration, the Constitution, the combined writings of the US founding fathers and renowned leaders and thinkers, the current consensus of the leadership at Google or OpenAI, the overall gestalt opinions of the English-language internet, or something else. I don't have enough understanding to make a similar list of possibilities for China, but some of the things I'd expect it would include don't seem terrible. For example, I don't think a genuinely-aligned Confucian sovereign AGI is anywhere near the worst outcome we could get.
comment by O O (o-o) · 2025-04-19T18:55:27.263Z · LW(p) · GW(p)
Chinese culture is just less sympathetic in general. China has practically no concept of philanthropy or animal welfare. They are also pretty explicitly ethnonationalist. You don't hear about these things because the Chinese government has banned dissent and walled off its inhabitants.
However, I think the Hong Kong reunification is going better than I'd have expected given the 2019 protests. You'd expect mass social upheaval, but people are just either satisfied or moderately dissatisfied.
↑ comment by S M (s-m-1) · 2025-04-20T02:11:26.345Z · LW(p) · GW(p)
Claiming China has no concept of animal welfare is quite extraordinary. This is wrong both in theory and in practice. In theory, Buddhism has always ascribed sentience in animals, long before it was popular in the west. In practice, 14% of the Chinese population is vegetarian (vs. 4.2% in the US) and China's average meat consumption is also lower.
Replies from: o-o↑ comment by O O (o-o) · 2025-04-20T04:49:11.462Z · LW(p) · GW(p)
China has no specific animal welfare laws. There are also some Chinese that regard animal welfare as a Western import. Maybe the claim that they have no concept at all is too strong, but it's certainly minimized by previous regimes.
E.g., Mao regarded the love for pets and the sympathy for the downtrodden as bourgeois.
And China's average meat consumption being lower could just be a reflection of their gdp per capita being lower. I don't know where you got the 14% vegetarian number. I can find 5% online. About the same as US numbers.
↑ comment by Jayson_Virissimo · 2025-04-20T01:03:09.783Z · LW(p) · GW(p)
...but people are just either satisfied or moderately dissatisfied.
How are you in a position to know this?
Replies from: o-o
comment by Mis-Understandings (robert-k) · 2025-04-19T15:42:10.733Z · LW(p) · GW(p)
I am neither an American citizen nor a Chinese citizen.
does not describe most people who make that argument.
Most of these people are US citizens, or could be. Under liberalism/democracy those sorts of people get a say in the future, so they think AGI will be better if it gives those sorts of people a say.
Most people talking about the USG AGI have structural investments in the US, which are better and give them more chances to bid on not destroying the world (many are citizens or are in the US block). Since the US government is expected to treat other stakeholders in its previous block better than China treats members of its block, it is better for people who are only US-aligned if the US gets more powerful, since it will probably support its traditional allies even when it is vastly more powerful, as it did during the early Cold War. (This was obvious last year and is no longer obvious.)
In short, the USG was committed to international liberalism, which is a great thing for an AGI to have, for various reasons which are hard to state but are basically of the form that liberals are committed to not doing crazy stuff.
People who can't reason well about the CCP's internal ideologies/political conflicts (like me), and so can't predict ideological alignment for its AGI, think that a USG AGI will use the frames of international liberalism (which don't let you get away with terrible things even if you are powerful), and worry about frames of international realism (which they assign to China, since they cannot tell, and which argue that if you have the power you must/should/can use it to do anything, including ruining everybody else).
As a summary, if you are not an American citizen, do not trust the US natsec framing. A lot of this is carryover from times when the US liberal international block (the global international order) was stronger, and so as a block framing it is better iff the US block is somehow bigger, which at the time it was.
Replies from: Thane Ruthenis
↑ comment by Thane Ruthenis · 2025-04-19T20:04:04.629Z · LW(p) · GW(p)
Since the US government is expected to treat other stakeholders in its previous block better than China treats members of its block
At the risk of getting too into politics...
IMO, this was maybe-true for the previous administrations, but is completely false for the current one. All people making the argument based on something like this reasoning need to update.
Previous administrations were more or less dead inertial bureaucracies. Those actually might have carried on acting in democracy-ish ways even when facing outside-context events/situations, such as suddenly having access to overwhelming ASI power. Not necessarily because they were particularly "nice", as such, but because they weren't agenty enough to do something too out-of-character compared to their previous democracy-LARP behavior.
I still wouldn't have bet on them acting in pro-humanity ways (I would've expected some more agenty/power-hungry governmental subsystem to grab the power, circumventing e. g. the inertial low-agency Presidential administration). But there was at least a reasonable story there.
The current administration seems much more agenty: much more willing to push the boundaries of what's allowed and deliberately erode the constraints on what it can do. I think it doesn't generalize to boring democracy-ish behavior out-of-distribution, I think it eagerly grabs and exploits the overwhelming power. It's already chomping at the bit to do so.
Replies from: robert-k
↑ comment by Mis-Understandings (robert-k) · 2025-04-19T20:52:10.765Z · LW(p) · GW(p)
I don't think that people from the natsec camp have made that update, since they have been talking this line for a while.
But the dead organization framing matters here.
In short, people think that democratic institutions are not dead (especially electoralism). If AGI is "Democratic", that live institution, in which they are a stakeholder, will have the power to choose to do fine stuff (and this might generalize to everybody being a stakeholder), which is +EV, especially for them.
They also expect that China as a live actor will try to kill all other actors if given the chance.
comment by Purplehermann · 2025-04-28T20:41:14.763Z · LW(p) · GW(p)
The USA wins on the merits of historically preferring to pretend it isn't ruling the world and mostly letting other countries do their thing, even when it has extreme military dominance (nukes).
China seems to be better at governance
On values, the USA is more adapted to wealth, while China has the communistic underpinnings, which may be very good in a fully-automated economy.
It comes down to whether you want the easygoing, less competent (and slightly psychotic) overlords or the more competent, higher-strung control freaks, I suppose.
comment by NULevel · 2025-04-20T14:34:38.285Z · LW(p) · GW(p)
I think the assumption is that this is the USG of the last 50 years - which has flaws, but also has human rights goals and an ability to eventually change and accommodate the public’s beliefs.
So in the scenario where AI is controlled by a strongly democratic USG, you have a much more robust “alignment” to enlightenment values and no one person with too much power.
That said, that’s probably a flawed assumption for how the US government operates now/ over the next decade.
comment by Tenoke · 2025-04-19T18:41:46.826Z · LW(p) · GW(p)
A Western AI is much more likely to be democratic and to place humanity's values a bit higher up. A Chinese one is much more likely to put CCP values and control higher up.
But yes, if it's the current US administration specifically, neither option is that optimistic.
Replies from: Haiku, MichaelDickens, martinkunev
↑ comment by Haiku · 2025-04-19T19:57:52.677Z · LW(p) · GW(p)
I don't know what it would mean for AI to "be democratic." People in a democratic system can use tool AI, but if ASI is created, there will be no room for human decision-making on any level of abstraction that the AI cares about. I suppose it's possible for an ASI to focus its efforts solely on maintaining a democratic system, without making any object-level decisions itself. But I don't think anyone is even trying to build such a thing.
If intent-aligned ASI is successfully created, the first step is always "take over the world," which isn't a very democratic thing to do. That doesn't necessarily mean there is a better alternative, but I do so wish that AI industry leaders would stop making overtures to democracy out of the other side of their mouth. For most singularitarians, this is and always has been about securing or summoning ultimate power and ushering in a permanent galactic utopia.
Replies from: Tenoke
↑ comment by Tenoke · 2025-04-19T20:05:03.786Z · LW(p) · GW(p)
Democratic in the 'favouring or characterized by social equality; egalitarian.' sense (one of the definitions from Google), rather than about Elections or whatever.
For example, I recently wrote a Short Story of my Day in 2035 in the scenario where things continue mostly like that and we get positive AGI that's similarish enough to current trends. There, people influenced the initial values - mainly via The Spec, and can in theory vote to make some changes to The Spec that governs the general AI values, but in practice by that point AGI controls everything and it's more or less set in stone. Still, it overall mostly tries to fulfil people's desires (overly optimistic that we go this route, I know).
I'd call that more democratic than one that upholds CCP values specifically.
↑ comment by Davidmanheim · 2025-04-20T05:10:30.285Z · LW(p) · GW(p)
There are a number of ways in which the US seems to have better values than the CCP, by my lights, but it seems incredibly strange to claim that the US values egalitarianism, social equality, or harmony more.
Rule of law, fostering diversity, encouraging human excellence? Sure, there you would have an argument. But egalitarian?
↑ comment by MichaelDickens · 2025-04-19T22:08:33.676Z · LW(p) · GW(p)
- I strongly suspect that a Trump-controlled AGI would not respect democracy.
- I strongly suspect that an Altman-controlled AGI would not respect democracy.
- I have my doubts about the other heads of AI companies.
↑ comment by O O (o-o) · 2025-04-20T02:07:27.598Z · LW(p) · GW(p)
I don't think the Trump admin has the capacity to meaningfully take over an AGI project. Whatever happens, I think the lab leadership will be calling the shots.
Replies from: tachikoma
↑ comment by Tachikoma (tachikoma) · 2025-04-20T17:27:10.916Z · LW(p) · GW(p)
The heads of AI labs are functionally cowards who would bend at the first knock on their door by state agents. Some, like Altman and Zuckerberg, have preemptively bent the knee to get into the good graces of the Trump admin and accelerate their progress. While Trump himself might be out of the loop, his administration is staffed by people who know what AGI means and are looking for any sources of power to pursue their agenda.
Replies from: o-o↑ comment by martinkunev · 2025-04-19T21:52:46.183Z · LW(p) · GW(p)
Western AI is much more likely to be democratic
This sounds like "western AI is better because it is much more likely to have western values"
I don't understand what you mean by "humanity's values". Also, one could maybe argue that "democratic" societies are those where actions are taken based on whether the majority of people can be manipulated to support them.
comment by R S (r-s-2) · 2025-05-06T01:07:15.682Z · LW(p) · GW(p)
I think people focus too much on "would US AGI be safer than China" and not as much on "how much safer"
In the sense that if the US has a 15% pdoom and China a 22%, this notion that everyone needs to get on board and help the US win with full effort could be bad.
It could be used (and arguably is currently being used) to justify being even LESS safe, to empower an authoritarian mercantilist behemoth state, and possibly to invade other countries for resources.
And in general it could massively increase and accelerate pdoom, simply on the idea that our pdoom is lower than theirs.
comment by jmh · 2025-04-22T13:06:07.961Z · LW(p) · GW(p)
I mostly put this question through the same filter I do the question of Chinese vs. US hegemony/empire. China has a long history of empire and knows how to do it well. The political bureaucracy in China is well developed for preserving both itself and the empire (even through changes at the top/changes of dynasty). Culturally and socially the population seems to be well acclimated to being ruled, rather than seeing government as the servant of the people (which is not quite the same as saying they are resigned to abusive totalitarianism; the empire has to provide something that amounts to public good and peace, but the population seems more tolerant of the means applied than western cultures are).
The US on the other hand pretty much sucks at empire and lacks a well functioning and well developed bureaucracy for supporting empire.
So I think perhaps one might need to ask what specific risk one is most concerned about. China probably produces an AI that serves the Party and Empire, and so is perhaps a bit more corrigible and won't decide to kill everyone. But if you're concerned about what those in control of an AGI/ASI might do with it, China might be a bigger risk than the US. With the US you probably have greater risks from the AI wanting to kill everyone, or simply doing that without caring, but likely more controls on what the AI will allow its "owners" to do with it.
comment by sapphire (deluks917) · 2025-04-20T18:22:07.729Z · LW(p) · GW(p)
I'd guess it's more likely to be good. The logic of "post-scarcity utopia" is pretty far from market capitalism. Also, China has been leading in open-source models. Open source is a lot more aligned with humanity as a whole.
Replies from: Jackson Wagner
↑ comment by Jackson Wagner · 2025-04-24T18:14:14.362Z · LW(p) · GW(p)
I think that jumping straight to big-picture ideological questions is a mistake [LW(p) · GW(p)]. But for what it's worth, I tried to tally up some pros and cons of "ideologically who is more likely to implement a post-scarcity socialist utopia" here [LW(p) · GW(p)]; I think it's more of a mixed bag than many assume.
comment by janczarknurek · 2025-04-20T12:02:03.989Z · LW(p) · GW(p)
I really like that I see more discussion of "ok even if we managed to avoid xrisk what then?", e.g. recent papers on AI-enabled coups and so on. To the point however, I think the problem runs deeper. What I fear the most is that by "Western values imbued in AGI" people mean "we create an everlasting upperclass with no class mobility because capital is everything that matters and we freeze the capital structure, you will get UBI so you should be grateful."
It probably makes sense to keep the capitalist structure between ASIs but between humans? Seems like a very bad outcome for me (You will live in a pod and you will be happy type of endgame for the masses).
Replies from: MakoYass↑ comment by mako yass (MakoYass) · 2025-04-22T00:21:09.182Z · LW(p) · GW(p)
I don't see a way stabilization of class and UBI could both happen. The reason wealth tends to entrench itself under current conditions is tied inherently to reinvestment and rent-seeking, which are destabilizing to the point where a stabilization would have to bring them to a halt. If you do that, UBI means redistribution. Redistribution without economic war inevitably settles towards equality, but also... the idea of money is kind of meaningless in that world, not just because economic conflict is a highly threatening form of instability, but also, imo, because financial technology will have progressed to the point where I don't think we'll have currencies with universally agreed values to redistribute.
What I'm getting at is that the whole class war framing can't be straightforwardly extrapolated into that world, and I haven't seen anyone doing that. Capitalist thinking about post-singularity economics is seemingly universally "I don't want to think about that right now, let's leave such ideas to the utopian hippies".
comment by YonatanK (jonathan-kallay) · 2025-04-22T03:11:20.806Z · LW(p) · GW(p)
I feel the question misstates the natsec framing by jumping to the later stages of AGI and ASI. This is important because it leads to a misunderstanding of the rhetoric that convinces normal non-futurists, who aren't spending their days thinking about superintelligence.
The American natsec framing is about an effort to preserve the status quo in which the US is the hegemon. It is a conservative appeal with global reach, which works because Pax Americana has been relatively peaceful and prosperous. Anything that threatens American dominance, including giving ground in the AI race, appears dangerously destabilizing. Any risks from AI acceleration are literally after-thoughts (a problem for tomorrow, not today).
Absurd as it is, the Trumpist effort to burn the American-led system of global cooperation to the ground is still branded as a conservative return to an imagined glorious past.
The challenge in defeating this conservative natsec framing lies in communicating that radical change is all that is on the menu, with some options far worse than others. I, for one, currently believe that emphasizing the fatal effect pre-AGI AI will have on democracy and other liberal values, regardless of who wields it, is a promising rhetorical avenue that should be amplified.
comment by romeostevensit · 2025-04-20T20:13:15.963Z · LW(p) · GW(p)
Anglo armies have been extremely unusual, historically speaking, for their low rates of atrocity.
(I don't think this is super relevant for AI, but I think this is where intuitions about the superiority of the West bottom out.)
comment by BarnicleBarn · 2025-04-20T05:22:32.592Z · LW(p) · GW(p)
I think history is a good teacher when it comes to AI in general, especially AI that we did not fully understand at the time of deployment (and perhaps still do not).
I too feel a temptation to imagine that a USG AGI would hypothetically have alignment with US ideals, and likewise a CCP AGI would align with CCP ideals.
That said, given our lack of robust knowledge of what alignment with any set of ideals would look like in an AGI system, or how we could assure it, I struggle to have any certainty that these systems would align with anything the USG or CCP would find desirable at all. Progress is being made in this area by Anthropic, but I'd need to see that move forward significantly.
One can look at current-gen LLMs like DeepSeek, see that it is censored to align with CCP concepts during fine-tuning, and perhaps take that as predictive. I find it doubtful that some fine-tuning would be sufficient to serve as the moral backbone of an AI system capable of AGI.
Which speaks to history. AI systems tend to be very aligned with what their output task is. The largest and most mature networks we have are Deep Learning Recommendation Models deployed by social media entities to keep us glued to our phones.
The intention was to serve engaging content to people; the impact was to flood people with content that is emotionally resonant but not necessarily accurate. That has arguably led to increased polarization, radicalization, and increased suicide rates, particularly among young women.
While it would be tempting to say that social media companies don't care, the reality is that these DLRMs are very difficult to align. They are trained using RL on the corpus of their interactions with billions of daily users. They reward hack incessantly, and in very unpredictable ways. This leads to most of the mitigating actions being taken downstream of the recommendations (content warnings, etc.), not by design, but because the models that are best at getting users to keep scrolling are seldom the best at serving accurate content.
Currently, I think both flavors of AGI present the same fundamental risks. No matter the architecture, one cannot expect human-like values to emerge in AI systems inherently, and we don't understand the drivers of those values within humans particularly well, nor how they lead to party lines and party divisions.
Without that understanding, we're shooting in the dark. It would be awfully embarrassing if the two systems, instead of flag-waving, both aligned on dolphin species propagation.
comment by Logan Zoellner (logan-zoellner) · 2025-04-23T21:28:58.072Z · LW(p) · GW(p)
Are you genuinely unfamiliar with what is happening to the Uyghurs, or is this a rhetorical question?
Replies from: kat-woods↑ comment by Kat Woods (kat-woods) · 2025-04-28T18:08:21.823Z · LW(p) · GW(p)
Thank you for saying this. It needs to be said.
comment by wenxin · 2025-04-30T07:16:04.889Z · LW(p) · GW(p)
Judging from the historical record, the West as a whole, represented by the United States and Europe, is much worse. The things the United States accuses China of without actual evidence are all things the United States has itself done before: the systematic genocide of Native Americans, large-scale network surveillance and wiretapping of leaders including European allies, and the direct use of force to suppress veterans' protests. The corresponding accusations against China are the genocide of the Uyghurs (though there is no conclusive evidence that China is slaughtering Uyghurs), the security issues of China's Huawei (there is no evidence that Huawei is wiretapping), and China's Tiananmen Square incident (in fact, there was no bloodshed in Tiananmen Square itself).
So I also find it ridiculous that the content produced by this so-called rationalist community defaults to portraying China as the bad guy. You think you are rational, but you treat China and other ethnic groups exactly the way a monotheistic religion treats infidels. You reach a conclusion first and then look for arguments. Is your thinking really Less Wrong?