My Assessment of the Chinese AI Safety Community

post by Lao Mein (derpherpize) · 2023-04-25T04:21:19.274Z · LW · GW · 94 comments


I've heard people be somewhat optimistic about this AI guideline from China. They think that this means Beijing is willing to participate in an AI disarmament treaty due to concerns over AI risk. Eliezer noted that China is where the US was a decade ago in regards to AI safety awareness, and expressed genuine hope that his idea of an AI pause could happen with Chinese buy-in.

I also note that no one expressing these views understands China well. This is a PR statement. It is a list of feel-good statements that Beijing publishes after any international event. No one in China is talking about it. They're talking about how much the Baidu LLM sucks in comparison to ChatGPT. I think most arguments about how this statement is meaningful are based fundamentally on ignorance - "I don't know how Beijing operates or thinks, so maybe they agree with my stance on AI risk!" 

Remember that these are regulatory guidelines. Even if they all become law and are strictly enforced, they are simply regulations on AI data usage and training - not a signal of willingness to enter an AI-reduction treaty. It is far more likely that Beijing sees near-term AI as a potential threat to stability that needs to be addressed with regulation. A domestic regulation framework for nuclear power is not a strong signal for a willingness to engage in nuclear arms reduction.

Maybe it is true that AI risk awareness in China is where it was in the US in 2004. But the US's 2004 state was also similar to its 1954 state, so the comparison might not mean that much. And we are not Americans. Weird ideas are penalized a lot more harshly here. Do you really think that a scientist is going to walk up to his friend from the Politburo and say "Hey, I know AI is a central priority of ours, but there are a few fringe scientists in the US asking for treaties limiting AI, right as they are trying their hardest to cripple our own AI development. Yes, I believe they are acting in good faith, they're even promising not to widen the current AI gap they have with us!"? Well, China isn't in this race for parity or to be second best. China wants to win. But that's for another post.

Remember that Chinese scientists are used to interfacing with our Western counterparts and know the right words - "diversity", "inclusion", "no conflict of interest" - that it takes to get our papers published. Just because someone at Beida makes a statement in one of their papers doesn't mean the intelligentsia is taking this seriously. I've looked through the EA/Rationalist/AI Safety forums in China, and they're mostly populated by expats or people physically outside of China. Most posts are in English, and they're just repeating/translating Western AI Safety concepts. A "moonshot idea" I saw brought up is getting Yudkowsky's Harry Potter fanfiction translated into Chinese (please never ever do this). The only significant AI safety group is Anyuan (安远), and they're only working on field-building. There is also only one group doing technical alignment work in China; its founder was paying for everything out of pocket and was unable to navigate Western non-profit funding. I've still not figured out why he wasn't getting funding from Chinese EA people (my theory is that each side assumed that if funding was needed, the other would have already made contact).

You can't just hope an entire field into being in China. Chinese EAs have been doing field-building for the past 5+ years, and I see no field. If things keep on this trajectory, it will be the same in 5 more years. The main reason I could find is the lack of interfaces: people who can navigate both the Western EA sphere and the Chinese technical sphere. In many ways, the very concept of EA is foreign and repulsive to the Chinese mindset - I've heard Chinese people describe an American's reason to go to college (wanting to change the world) as childishly naive and utterly impractical. This is a very common view here, and I think it makes approaching alignment from an altruistic perspective doomed to fail. However, there are many bright Chinese students and recently laid-off researchers who are eager to get into the "next big thing", especially on the ground floor. Maybe we can work with that.

I mostly made this post because I want to brainstorm possible ideas/solutions, so please comment if you have any insights.

Edit: I would really appreciate it if someone could get me on a podcast to discuss these ideas.

94 comments

Comments sorted by top scores.

comment by lionhearted (Sebastian Marshall) (lionhearted) · 2023-04-26T14:26:22.798Z · LW(p) · GW(p)

I'm a Westerner, but did business in China, have quite a few Chinese friends and acquaintances, and have studied a fair amount of classical and modern Chinese culture, governance, law, etc.

Most of what you're saying makes sense given my experience, and a lot of Western ideas are generally regarded either as "sounds nice but is hypocritical and not what Westerners actually do" (a common viewpoint until ~10 years ago) or, more recently, as "actually no, many young Westerners are sincere about their ideas - they're just crazy in an ideological way about things that can't and won't work". (白左, etc.)

The one place I might disagree with you is that I think mainland Chinese leadership tends to have two qualities that might be favorable towards understanding and mitigating AI risk:

(1) The majority of senior Chinese political leadership are engineers and seem intrinsically more open to having conversations along science and engineering lines than the majority of Western leadership. Pathos-based arguments, especially emerging from Western intellectuals, do not get much uptake in China and aren't persuasive. But concerns around safety, second-order effects, third-order effects, complex system dynamics, causality, etc., grounded in scientific, mathematical, and engineering principles, seem to be engaged with easily at face value in private conversations, and with a level of technical sophistication such that there doesn't need to be as much reliance on industry leaders and specialists to explain and contextualize diagrams, concepts, technologies, etc. Senior Chinese leadership also seem to be better - this is just my opinion - at identifying credible and non-credible sources of technical information and identifying experts who make sound arguments grounded in causality. This is a very large advantage.

(2) In recent decades, mainland Chinese leadership has been able both to operate on longer timescales - credibly making and implementing multi-decade plans and running them - and to make rapid changes in technology adoption, regulation, and economic markets once a decision has been made in an area. The most common examples we see in the West are videos of skyscrapers being constructed very rapidly, but my personal example is that I remember paying my rent with shoeboxes full of 100-renminbi notes during Hu Jintao's chairmanship and being quite shocked when China went nearly cashless almost overnight.

I think those two factors - a genuine understanding of engineering and technical causality, combined with greater viability for engaging in both longer- and shorter-timescale action - are important points worth mentioning.

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2023-04-27T02:51:55.964Z · LW(p) · GW(p)

I fully endorse this post.

comment by simeon_c (WayZ) · 2023-04-26T13:33:38.221Z · LW(p) · GW(p)

Thanks for writing this.

Overall, I don't like the post much in its current form. There's ~0 evidence (e.g. from Chinese newspapers) and very little actual argumentation. I like that you give us a local view, but a few links to back your claims would be very, very appreciated. Right now it's hard to update on your post, given that the claims are very empirical and come without any external sources.

A more minor point: "A domestic regulation framework for nuclear power is not a strong signal for a willingness to engage in nuclear arms reduction" - I disagree with this statement. I think it's definitely a signal.

comment by Cervera · 2023-04-25T07:01:18.188Z · LW(p) · GW(p)

A "moonshot idea" I saw brought up is getting Yudkowsky's Harry Potter fanfiction translated into Chinese (please never ever do this).

 

Can you expand on this? Why would it be a bad idea? I have interacted with mainland chinese people (outside of china) and I'm not really making the connection. 

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2023-04-25T11:55:27.695Z · LW(p) · GW(p)

Let's just say that weirdness in China is very different from weirdness in the West. AI safety isn't even a weird concept here. It's something people talk about, briefly think over, then mostly forget, like Peter Thiel's new book. People are generally receptive to it. What AI safety needs to get traction in the Chinese idea sphere is to rapidly disassociate with really really weird ideas like EA. EA is like trying to shove a square peg into the round hole of Chinese psychology. It's a really bad sign that the AI Safety toehold in China is clustered around EA.

Rationality is pretty weird too, and is honestly just extra baggage. Why add it to the conversation? 

We don't need rationality or EA to get Chinese to care about AI safety. Trying to import the Western EA-AI safety-Rationality memeplex wholesale is both unnecessary and detrimental.

Replies from: avturchin, ryan_b, TrevorWiesinger, meena-kumar, sean-h, Algon, hairyfigment, Sodium
comment by avturchin · 2023-04-25T15:19:02.676Z · LW(p) · GW(p)

Interestingly, Yud is attractive to the Russian mindset (similarly to Karl Marx). I once heard 12-year-old children discussing HPMOR on the beach, and their parents were not rationalists.

Replies from: RomanS
comment by RomanS · 2023-04-27T12:55:20.814Z · LW(p) · GW(p)

From my observations, the Chinese mindset is much more different from the American one than the Russian mindset. 

In comparison to the Chinese, Russians are just slightly unusual Europeans, with mostly the same values, norms, and worldviews as Americans.

I attribute it to the 3000+ years of relatively strong cultural isolation of China, and to the many centuries of Chinese rulers running all kinds of social engineering (including purposeful modification of the language according to political goals).

comment by ryan_b · 2023-04-25T13:34:16.852Z · LW(p) · GW(p)

We don't need rationality or EA to get Chinese to care about AI safety. Trying to import the Western EA-AI safety-Rationality memeplex wholesale is both unnecessary and detrimental.

I agree with this, not out of any particular expertise on China, but instead because porting a whole memeplex across a big cultural boundary is always unnecessary and detrimental.

comment by trevor (TrevorWiesinger) · 2023-04-26T02:45:18.564Z · LW(p) · GW(p)

I just want to note that rationality can fit into the Chinese idea sphere, very neatly; it's just that it's not effortless to figure out how to make it work. 

The current form, e.g. the Sequences, is wildly inappropriate. Even worse, a large proportion of the core ideas would have to be cut out. But if you focus on things like human intelligence amplification, forecasting, and cognitive biases, it will probably fit into the scene very cleanly.

I'm not willing to give any details, until I can talk with some people and get solid estimates on the odds of bad outcomes, like the risk that rationality will spread, but AI safety doesn't, and then the opportunity is lost. The "baggage" thing you mentioned is worth serious consideration, of course. But I want to clarify that yes, EA won't fit, but rationality can (if done right, which is not easy but also not hard), please don't rule it out prematurely.

Replies from: RomanS
comment by RomanS · 2023-04-27T18:21:01.141Z · LW(p) · GW(p)

I just want to note that rationality can fit into the Chinese idea sphere, very neatly

I agree. Some starting points (kudos to GPT-4): 

Confucius (551-479 BCE) - taught that people should think critically and rationally about their actions and decisions in order to lead a life of harmony and virtue. One of his famous sayings is, "When you know a thing, to hold that you know it; and when you do not know a thing, to allow that you do not know it - this is knowledge."

Mencius (372-289 BCE) - believed that individuals can cultivate their moral and intellectual capabilities through rational thinking and learning. Mencius emphasized the importance of moral reasoning and introspection in making ethical decisions.

Mozi (470-391 BCE) - advocated for a rational and pragmatic approach to decision-making. He argued that people should evaluate the potential consequences of their actions based on the benefits or harms they would bring to society. Mozi's philosophy encouraged rational thinking and objective analysis in the pursuit of social harmony and the greater good.

Zhuangzi (369-286 BCE) - believed that individuals should cultivate their understanding of the natural world and develop their innate abilities to think and reason. Zhuangzi encouraged the cultivation of a clear and unbiased mind in order to achieve harmony with the Dao, or the natural order of the universe.

Xunzi (312-230 BCE) - believed that people must be taught to act morally and rationally. Xunzi emphasized the importance of education, self-discipline, and reflection in developing moral character and rational decision-making abilities.

comment by Meena Kumar (meena-kumar) · 2023-04-26T12:11:03.792Z · LW(p) · GW(p)

I think a similar thing is true in India.

comment by Sean H (sean-h) · 2023-04-27T19:26:18.612Z · LW(p) · GW(p)

I think the most surprising part about your post (and the best part I guess) is discovering how many people in the West have a very poor understanding of how the CCP (and Chinese politics) work. Do you have good newsletters / sites that people could follow?

Replies from: lb_rv
comment by lb_rv · 2023-04-28T15:40:22.789Z · LW(p) · GW(p)

Not about the CCP or politics, but I've found Chinese Doom Scroll tremendously useful as a window into Chinese culture and ways of thinking. It's a daily translation of popular Weibo posts that the author encounters while doom-scrolling.

comment by Algon · 2023-04-25T13:20:48.868Z · LW(p) · GW(p)

I wonder if Yud would be willing to write a rat fic aimed at Chinese audiences? He seems to read a bunch of xianxia, so he's probably absorbed some of the memes of China's male youth. Maybe a fanfic of "Oh my god! Earthlings are insane!" would be a good choice, based on my impression of the novel's themes and what its readership is like.

EDIT: I think the rationality angle is important for making progress on AI safety, but I'm not sure which parts are necessary. Also, what part of HPMOR would make it especially bad for Chinese audiences? The libertarian sympathies? The transhumanism doesn't seem like it would be that harmful, given the popularity of novels like Embers Ad Infinitum - which is another novel Yud could write a fanfic for.

Replies from: derpherpize, meena-kumar
comment by Lao Mein (derpherpize) · 2023-04-25T13:43:35.527Z · LW(p) · GW(p)

The most common response I got when I talked to coworkers about AI risk wasn't denial or an attempt to minimize the problem. It was generally something like "That sounds really interesting. If a company working on the problem was paying a lot, I would consider jumping ship." And then a shrug before they went back to their tasks. I don't see how rationality helps with anything. We know what the problem is, and just want to be paid to solve it.

I can't really explain why HPMOR is insanely cringe in a Chinese context to someone without the cultural background. It's not something you can argue people out of. Just trust me on this one.

Replies from: Bjartur Tómas, rank-biserial, ChristianKl, Algon
comment by Tomás B. (Bjartur Tómas) · 2023-04-25T16:38:56.282Z · LW(p) · GW(p)

Is it "insanely cringe" for different reasons than it is "insanely cringe" for English audiences? I suspect most Americans, if exposed to it, would describe it as cringe. There is much about it that is cringe, and I say this with some love. 

comment by rank-biserial · 2023-04-25T16:27:41.621Z · LW(p) · GW(p)

"That sounds really interesting. If a company working on the problem was paying a lot, I would consider jumping ship."

The Chinese stated preferences here closely track Western revealed preferences. Americans are more likely to dismiss AI risk post-hoc in order to justify making more money, whereas it seems that Chinese people are less likely to sacrifice their epistemic integrity in order to feel like a Good Guy. Hire people, and pay them money! [LW · GW]

Replies from: Bjartur Tómas
comment by Tomás B. (Bjartur Tómas) · 2023-04-25T16:40:52.701Z · LW(p) · GW(p)

Pay Terry Tao his 10 million dollars!

comment by ChristianKl · 2023-04-26T14:17:44.463Z · LW(p) · GW(p)

The most common response I got when I talked to coworkers about AI risk wasn't denial or an attempt to minimize the problem. It was generally something like "That sounds really interesting. If a company working on the problem was paying a lot, I would consider jumping ship."

If that's true, I would assume that the people who work on creating the AI guidelines understand the problem. This would in turn suggest that they take reasonable steps to address it.

Is your model that the people writing the guidelines would be well-intentioned but lack the political power to actually enforce useful guidelines?

comment by Algon · 2023-04-25T14:01:53.212Z · LW(p) · GW(p)

The most common response I got when I talked to coworkers about AI risk wasn't denial or an attempt to minimize the problem. It was generally something like "That sounds really interesting. If a company working on the problem was paying a lot, I would consider jumping ship." And then a shrug before they went back to their tasks. I don't see how rationality helps with anything. We know what the problem is, and just want to be paid to solve it.

Yeah, but a lot of people can say that w/o producing good work. There have been a number of complaints about field-building attempts bringing in low-quality people who don't make progress on the core problem. Now, I am NOT saying you need to read the Sequences and be a member of our cult to make progress. But the models in there do seem important to seeing what is hard about alignment. Many smart people have these models themselves, drawing from the same sources Yudkowsky did. But many smart people don't have these models and bounce off of alignment.

I can't really explain why HPMOR is insanely cringe in a Chinese context to someone without the cultural background. It's not something you can argue people out of. Just trust me on this one.

I can sort of trust you that HPMOR is insanely cringe. I'm still not sure if a variant wouldn't work, because I don't have your model. Maybe I'll talk to some Chinese friends about this and get their opinion. You may be living in a bubble and not realize it. It happens to everyone at some point, and China must have a lot of bubbles.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-04-25T14:53:02.560Z · LW(p) · GW(p)

I can sort of trust you that HPMOR is insanely cringe.

The private sentiment of folks who read through all of it would probably be some degree of 'cringe' too.

I couldn't even make it halfway, though I am fascinated by imaginative fanfiction, as it became too much of a childish power fantasy for me to ignore and suspend my disbelief while reading.

Replies from: Algon
comment by Algon · 2023-04-25T14:59:03.628Z · LW(p) · GW(p)

Yeah, you've got a point there. And yet, HPMOR is popular. Lots of people love it, and got into LW and the rat community that way. You yourself may not have, but that's evidence in favour of high variance. So I remain unsure if something like HPMOR could work in China too. Why assume there'd be less variance in response there?

Replies from: aristide-twain, meena-kumar
comment by astridain (aristide-twain) · 2023-04-26T10:13:25.424Z · LW(p) · GW(p)

It's about trade-offs. HPMOR/an equally cringey analogue will attract a certain sector of weird people into the community who can then be redirected towards A.I. stuff — but it will repel a majority of novices because it "taints" the A.I. stuff with cringiness by association.

This is a reasonable trade-off if:

  1. the kind of weird people who'll get into HPMOR are also the kind of weird people who'd be useful to A.I. safety;
  2. the normies were already likely to dismiss the A.I. stuff with or without the added load of cringe.

In the West, 1. is true because there's a strong association between techy people and niche fandom, so even though weird nerds are a minority, they might represent a substantial fraction of the people you want to reach. And 2. is kind of true for a related reason, which is that "nerds" are viewed as generally cringe even if they don't specifically talk about HP fanfiction; it's already assumed that someone who thinks about computers all day is probably the kind of cringe person who'd be big into a semi-self-insert HP fanfiction.

But in China, from @Lao Mein [LW · GW]'s testimony, 1. is definitely not true (a lot of the people we want to reach would be on Team "this sounds weird and cringe, I'm not touching it") and 2. is possibly not true (if computer experts ≠ fandom nerds in Chinese popular consciousness, it may be easier to get broad audiences to listen to a non-nerdy computer expert talking about A.I.). 

comment by Meena Kumar (meena-kumar) · 2023-04-26T12:12:18.629Z · LW(p) · GW(p)

HPMOR is weird and attracts weird people.

comment by Meena Kumar (meena-kumar) · 2023-04-26T12:11:33.525Z · LW(p) · GW(p)

Yudkowsky is not the right person to start this stuff in China.

comment by hairyfigment · 2023-04-26T18:48:27.813Z · LW(p) · GW(p)

All of that makes sense except the inclusion of "EA," which sounds backwards. I highly doubt Chinese people object to the idea of doing good for the community, so why would they object to helping people do more good, according to our best knowledge?

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2023-04-27T02:50:41.886Z · LW(p) · GW(p)

Yes. We hold volunteering in contempt.

Replies from: hairyfigment
comment by hairyfigment · 2023-04-27T04:19:58.408Z · LW(p) · GW(p)

See, that makes it sound like my initial response to the OP was basically right, and you don't understand the argument being made here. At least one Western reading of these new guidelines was that, if they meant anything, then the bureaucratic obstacle they posed for AGI would greatly reduce the threat thereof. This wouldn't matter if people were happy to show initiative - but if everyone involved thinks volunteering is stupid, then whose job is it to make sure the official rules against a competitive AI project won't stop it from going forward? What does that person reliably get for doing the job?

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2023-04-27T08:20:56.241Z · LW(p) · GW(p)

Volunteering to work extra hard at your job and break things is highly valued (and rewarded). Volunteering at your local charity is childishly naive. If your labor/time/intelligence was truly worth anything, you wouldn't be giving it away for free.

Replies from: Raemon
comment by Raemon · 2023-04-27T16:40:32.567Z · LW(p) · GW(p)

Hmm.

It's noteworthy that EA also thinks volunteering at your local charity is naive. EA often markets itself in the West to the sort of people who by default might donate to a local charity, but I think many of the ideas could be substantially reworked in vibe. (I guess the college student wanting to change the world being seen as naive is more centrally EA, though.)

That all said, I basically don't think EA is necessary to make the case for AI risk, but I do think you need above-average rationality to do real alignment work.

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2023-04-27T18:06:34.715Z · LW(p) · GW(p)

Donating to charity (regardless of effectiveness) is also viewed as naive. There is low empathy for poor people since many of us remember what it was like to have a much lower salary just a few years ago. I had to cut down on eating out and alcohol in order to live on $12,000 a year, but overall it was fine. I could still afford good food and expensive hobbies. There was a time when my entire family lived on a fraction of that, and it was completely fine. So why should I donate my hard-earned money to others who are living better than that?

Replies from: Richard_Kennaway, Raemon
comment by Richard_Kennaway · 2023-05-02T10:03:22.227Z · LW(p) · GW(p)

I am curious to know how Peter Singer is viewed in China (or would be, if he isn't yet known there). How would this talk go down with a Chinese audience? (The video isn't playing for me, but there is a transcript.)

The thing is, while I reject his entire framework, I have never seen anyone give an argument against it - to argue that supererogation is a thing, that moral duty is bounded and owed first to those nearest to you, that having more than someone else does not imply a duty to give the excess away, and so on. These things are said, but in my reading they are never more than asserted.

Replies from: ChristianKl, derpherpize
comment by ChristianKl · 2023-05-02T11:03:14.229Z · LW(p) · GW(p)

When it comes to helping strangers, the Chinese help strangers a lot less. There are articles like: https://medium.com/shanghai-living/4-31-why-people-would-usually-not-help-you-in-an-accident-in-china-c50972e28a82 about Chinese not helping strangers who have an accident.

When it comes to the accident that Peter Singer refers to, the article talks about it as:

There were a couple of extreme cases where Chinese people refusing to help led to the death of the person in need. Such is the case of Wang Yue, a two-year-old girl who was wandering alone in a narrow alley, because her mother was busy doing the laundry. She was run over by two cars, and 18 people who passed by the area didn't even look at her. Later a public anonymous survey revealed that 71% thought that the people who passed by didn't stop to help her because they were afraid of getting into trouble.

The article goes on to say:

That's not the only case: in 2011 an 88-year-old Chinese man fell in the street and broke his nose. People passed him by, no one helped him, and he died suffocating in his own blood. An anonymous poll gave the same result: people didn't blame those who didn't help, because the recent cases show that if you try to help someone you can get into trouble.

When Peter Singer talks to the TED audience, he can assume that everyone will say they would have done something. In China, most people empathize with the people who did nothing.

Replies from: Algon
comment by Algon · 2023-05-02T11:42:30.896Z · LW(p) · GW(p)

An example of the dangers of helping in China from the article that ChristianKl talked about.

While individualism in China is a big thing, this situation is more related to the fear of being accused as the one responsible for the accident, even when you just tried to help.

The most popular case happened in the city of Nanjing, a city located to the west of Shanghai. The year was 2006, when Xu Shoulan, an old lady trying to get out of a bus, fell and broke her femur. Peng Yu was passing by and helped her, taking her to the hospital and giving her ¥200 (~30 USD) to pay for her treatment. After the first diagnosis Xu needed femur replacement surgery, but she refused to pay for it herself, so she demanded that Peng pay, as according to her he was responsible for the accident. She sued him, and after six months she won; Peng had to cover all the medical expenses of the old lady. The court stated that "no one would, in good conscience, help someone unless they felt guilty".

While this incident wasn't the first, it was very well known, and it showed one of the unwritten rules of China. [I bolded this] If you help someone, it's because you feel guilty about what happened, so in some way you were or are involved in the accident or incident.

I don't know enough about the situation to guess why this norm exists, or even if it actually exists. But if so, it seems like a bad equilibrium.

Replies from: ChristianKl
comment by ChristianKl · 2023-05-02T14:02:09.886Z · LW(p) · GW(p)

Historically, the problem seems to be that most Communist government initiatives to get people to be more altruistic result in them LARPing and not being more altruistic. 

At the moment the Chinese government solution seems to be: "Give everybody social credit scores that measure how altruistic they are and hopefully that will get everyone to be more altruistic". 

Replies from: Algon
comment by Algon · 2023-05-02T16:18:55.758Z · LW(p) · GW(p)

I don't get how LARPing altruism results in the norm that "if you're helping, you're guilty". Unless people went out of their way to cause problems and act like saviors in order to seem altruistic? That might be possible, but it also sounds difficult to execute, and it isn't the only way people could Goodhart altruism metrics.

Replies from: ChristianKl
comment by ChristianKl · 2023-05-02T17:20:38.985Z · LW(p) · GW(p)

If most altruism is LARPing, then people who do altruistic things become suspect.

If I remember right, there are cases in former communist countries where people who always cooperate in the ultimatum game get punished for it.

Communism isn't the only factor here but I would expect that it's one meaningful factor. 

comment by Lao Mein (derpherpize) · 2023-05-02T10:13:45.836Z · LW(p) · GW(p)

"OK, I guess I'm evil then."

comment by Raemon · 2023-04-27T18:09:06.618Z · LW(p) · GW(p)

Fair 'nuff.

comment by Sodium · 2023-04-26T23:51:00.792Z · LW(p) · GW(p)
comment by Haoxing Du (haoxing-du) · 2023-04-28T15:56:58.315Z · LW(p) · GW(p)

Thanks for writing this post! I want to note a different perspective, though unlike OP, I have not lived in China since 2015 and am certainly more out of touch with how the country is today.

I do observe some of the same dynamics that OP describes, but I want to point out that China is a really big country with inherently diverse perspectives, even in the current political environment. I don't see the dynamics described in this post as necessarily dominant, and certainly not the only ones. I know a lot of young people, both in my social circle and online, who share many Western progressive values such as the pursuit of equality, freedom, and altruism. I see many people trying their best to live a meaningful life and do good for the world. (Of course, many people are not thinking about this at all, but that is the same everywhere. It's not like these concerns are that mainstream in the West.) As a small piece of evidence, 三联生活周刊 did an interview with me about AI safety recently, and it got 100k+ views on WeChat and only positive comments. I've also had a few people reach out to me expressing interest in EA/AI safety since the interview came out.

You can't just hope an entire field into being in China. Chinese EAs have been doing field-building for the past 5+ years, and I see no field.

Implying that they are simply "hoping the field into being" is really unfair to the Chinese EAs doing field building. Even in the US, EA was much less mainstream 5 years ago.

The main reason I could find is the lack of interfaces: people who can navigate both the Western EA sphere and the Chinese technical sphere.

I agree this is a major bottleneck.

comment by 142857 · 2023-04-26T00:15:30.695Z · LW(p) · GW(p)

A "moonshot idea" I saw brought up is getting Yudkowsky's Harry Potter fanfiction translated into Chinese (please never ever do this).

This has already been done, and it has pretty good reviews and some discussions.

I've looked through the EA/Rationalist/AI Safety forums in China

If these are public, could you post the links to them?

there is only one group doing technical alignment work in China

Do you know the name of the group, and what kinds of approaches they are taking toward technical alignment?

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2023-04-26T00:24:16.167Z · LW(p) · GW(p)

If these are public, could you post the links to them?

Tian-xia forums are invite-only and mostly expats. I should probably dig deeper to find native Chinese discussions.

 Do you know the name of the group, and what kinds of approaches they are taking toward technical alignment?

CSAGI. Unfortunately, their website (csagi.org) has been dead for a while. It was founded by Zhu Xiaohu, who mentioned bisimulation and reinforcement learning.

Replies from: zhu-xiaohu
comment by Zhu Xiaohu (zhu-xiaohu) · 2023-04-28T07:46:09.548Z · LW(p) · GW(p)

Hi. Thanks for mentioning us. 

Unlike the main labs and companies in China, we are doing fundamental research on the ontological crisis problem, using model theory from mathematical logic to try to set a new base for analyzing and preventing the crisis.

Due to our lack of funding and limited intellectual resources, progress is slower, but we will share our work when ready.

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2023-04-28T08:54:56.783Z · LW(p) · GW(p)

Have you asked Chinese EA people for funding? They mentioned that they would be interested in funding you.

comment by johnswentworth · 2023-04-28T21:39:42.613Z · LW(p) · GW(p)

This seems to be arguing against a starry-eyed idealist case for an "AI disarmament treaty", but not really against a cynical/realistic case. (At first I was going to say "arguing against a strawman", but no, there are in fact lots of starry-eyed idealists in alignment.)

Here's my cynical/realistic case for an "AI disarmament treaty" (or something vaguely in that cluster) with China. As the post notes, the regulations mostly provide evidence that Beijing sees near-term AI as a potential threat to stability that needs to be addressed with regulation. For purposes of an AI treaty, that's plausibly all we need. Near-term AI is a potential threat to stability from the CCP's perspective. That's true whether the AI is built in China or somewhere else; American-built AI is still a threat to stability. So presumably the CCP would love for the US to adopt rules limiting new LLMs. If the US comes along and says "we'll block training of big new AIs if you do", the CCP's answer is plausibly "yes definitely that sounds excellent".

And sure, China won't be working much on AI safety. That's fine; the point of an "AI disarmament treaty" is not to get literally everyone working on safety. They don't even need to be motivated by X-risk. If they're willing to commit to not build big new AI, then it doesn't really matter whether they're doing that for the same reasons we want it.

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2023-04-29T05:01:39.273Z · LW(p) · GW(p)

Parity in AI isn't what China is after - China doesn't want to preserve the status quo. We want to win. We want AI hegemony. We want to be years ahead of the US in terms of our AIs. And frankly, we're not that far behind - the recent Baidu LLMs perform somewhere between GPT-2 and GPT-3. To tie is to lose. Stopping the race now is the same as losing.

I also don't see how LLMs can destabilize China in the near term. Spam/propaganda isn't a big issue, since you need to submit your real-life ID in order to post on Chinese sites.

Replies from: ChristianKl
comment by ChristianKl · 2023-04-29T14:12:53.507Z · LW(p) · GW(p)

If that's true, how do you explain the proposed guidelines that make it harder to train big models in China? The proposed guidelines suggest that the Cyberspace Administration of China believes that there are reasons that warrant slowing down LLM progress in China. 

comment by Chris_Leong · 2023-04-25T10:55:01.115Z · LW(p) · GW(p)

If only we could spread the meme of irresponsible Western powers charging head-first into building AGI without thinking through the consequences and how wise the Chinese regulation is in contrast.

I think it'll be easier in a few years to demonstrate the foolishness of open-source AI.

Replies from: Erich_Grunewald, MakoYass
comment by Erich_Grunewald · 2023-04-25T17:44:34.708Z · LW(p) · GW(p)

If only we could spread the meme of irresponsible Western powers charging head-first into building AGI without thinking through the consequences and how wise the Chinese regulation is in contrast.

That sort of strategy seems like it could easily backfire, where people only pick up the first part of that statement ("irresponsible Western powers charging head-first into building AGI") and think "oh, that means we need to speed up". Or maybe that's what you mean by "if only" -- that it's hard to spread even weakly nuanced messages?

comment by mako yass (MakoYass) · 2023-05-02T04:11:23.943Z · LW(p) · GW(p)

I'd be worried about the meme reaching China and venting off any pressure they might have been under to actually develop solid regulation. Why would a politician push for better regulation - and wouldn't it be a lot harder - if American AI people are saying they already trust you?

comment by ryan_b · 2023-04-25T15:12:29.986Z · LW(p) · GW(p)

Since we are talking regulation and/or treaties, both government concerns, is there anything to the idea of working in the context of party slogans (by which I mean tifa [提法])?

What I have in mind is something like this:

  • Review the arguments about AGI in English
  • Look for conceptual overlap with slogans on various topics (I expect these to be national development, national security, and the socialist conception of historical laws, by way of example).
  • See if any can be updated gracefully to talk about AGI specifically.

I think this might work because:

  • It will sever AGI from EA/Rationalism
  • It will put the AGI arguments in a context more familiar to Chinese officials and party members at least

It will not work if:

  • Making obvious adjustments to slogans in order to associate a specific issue with it is culturally nonsensical, or viewed as presumptuous or otherwise low-status
  • It literally does not work linguistically in a "For make benefit glorious nation of Kazakstan" sort of way.

That being said, I think it might be a good exercise at least from the English side as a direct attempt at working with some Chinese conceptualization of the motivations for the problem of alignment. A few examples of what I mean, with A being the English idea we want to communicate and B being a modified slogan that points at the same problem:

A) We do not know how to build an AI that knows human values.

B) We do not know how to build an AI with Chinese characteristics.

A) An AGI will be more capable than all of humanity put together.

B) The backwards will be beaten, and all nations will be backwards compared to AGI.

A) Unaligned AGI will cause catastrophe or human extinction by default.

B) Unaligned AGI is a member of the hostile forces by default.

Replies from: Algon, derpherpize
comment by Algon · 2023-04-25T17:18:43.998Z · LW(p) · GW(p)

My worry about this is the same as my worry about making AI not-kill-everyone-ism palatable to politicians in the West. Will they get the actual, core problem, and so understand which policies will not help? Reading your suggested changes, I might pattern-match AGI to a new and powerful nation. This interpretation has an implicitly human framing, viewing the advent of AGI as the birth of a new hegemon.


EDIT: That is to say, I'm not optimistic about confronting the mind-killing nature of political discourse, and trying to engage in that sort of discourse seems like a very hard battle. I am aware that I'm not giving a practical alternative though. So keep on thinking about this topic, please. 

Replies from: ryan_b
comment by ryan_b · 2023-04-25T20:54:28.035Z · LW(p) · GW(p)

My empathy runs deep. To provide a little more context for sloganeering as done inside China's Communist Party: this is different from slogans in American or European politics (which are almost totally tied to election campaigns); instead they work like this:

Phrases like these are extremely important to Communist Party politics and policy. Governing a party with more than 90 million members presents a dizzying coordination program. One way in which the Center manages this challenge is through the promulgation of slogans - also known by their Chinese term, tifa [提法]. The goal of a slogan is to package leadership priorities, strategic assessments, historical judgments, and policy programs in a phrase small enough to circulate throughout the propaganda system. The ideal tifa is vague enough for cadres to easily adapt to their own sphere of responsibility but specific enough to unify the work priorities of millions of party cadres and state bureaucrats.

Which of course does not resolve the problem of political competition:

Historically the role tifa play in governing China has made these slogans a central battleground for political competition. Many slogans do not just signal policy priorities, but loyalty to particular factions or patronage networks. From the outside it can be difficult to discern whether shifting slogans represent the victory of an idea or of a faction.

But I do have the sense that, at least in the case of party slogans, it is about what the priorities are and who executes them and the detailed implementation is usually a separate question.

Replies from: Algon
comment by Algon · 2023-04-25T21:38:54.336Z · LW(p) · GW(p)

But I do have the sense that, at least in the case of party slogans, it is about what the priorities are and who executes them and the detailed implementation is usually a separate question.

I don't understand why this helps. Who executes a priority, and what exactly a priority is, seem greatly correlated with the space of detailed implementations of a policy. Look at what happened with Drexlerian nanotech: the term got hijacked by people who called their pre-existing work nanotech in order to obtain resources from the US government that were earmarked for "nanotech". Why wouldn't something similar happen for AI not-kill-everyoneism? People argue over what exactly the priority is ("the AI must have Chinese characteristics" vs. "the AI must be rewarded for having Chinese characteristics and obeying the law") and who executes it (curious, brilliant people who can work on the core of the problem vs. bureaucrat clout-chasers). So what if the detailed implementation is a separate question? The front has already collapsed.

I admit that I don't see what has made you excited about this idea, and understand if you don't want to spend the effort conveying it at the moment. I also admit to being confused: I realized that part of where the nanotech-AI analogy might fail is in the pressures US vs. Chinese politicians face, and how the battles over priorities are fought. Another area where it might fail is that I don't know in what context "sloganeering" is done. Who is the audience for this? How does the existence of a dictator like Xi affect things? I've not really thought about it.

Replies from: ryan_b
comment by ryan_b · 2023-04-26T16:08:17.608Z · LW(p) · GW(p)

Another area where it might fail is that I don't know in what context "sloganeering" is done. Who is the audience for this? How does the existence of a dictator like Xi affect things?

This is the crux of the matter, I think: the slogans to which I am pointing are those used inside the Communist Party of China for the purpose of coordinating party members and bureaucrats, who are the audience. Xi has introduced several of the slogans in current use, and has tried and failed to introduce others. That is to say, they are how the Chinese government talks to itself, and Xi is at the center of the conversation.

I focused on the slogans because I have some clue how this system works, but I don't have a notion about the Chinese language in general, or Chinese culture in general, or the technical culture specifically. So all I've done here is take the idea "alignment should be more of a priority in China" and the idea "I know one way the Chinese government talks about priorities" and bash 'em together like a toddler making their dolls kiss.

The challenge is the part that is exciting to me, frankly. Communicating an important problem across cultural lines is hard, and impressive when done well, and provides me a certain aesthetic pleasure. It is definitely not the case that I have analyzed the problem at length, or done similar things before and concluded on priors that this will be an effective method.

Edit: Putting the slogans into a more LessWrong context, tifa are directly a solution to the problem described in You Get About Five Words [LW · GW].

comment by Lao Mein (derpherpize) · 2023-04-27T02:49:22.950Z · LW(p) · GW(p)

Party slogans are LARP. Westerners trying to convince us of things LARPing with our LARP is... Actually, just don't do it.

Replies from: ryan_b
comment by ryan_b · 2023-04-27T13:46:50.279Z · LW(p) · GW(p)

How do ideas usually enter the party sphere, out of curiosity?

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2023-04-27T17:35:53.646Z · LW(p) · GW(p)

Expert opinion, conversations with their engineer friends, the "general mood" among scientists in a particular field, and opinion pieces written by influential writers. That sort of thing. 

comment by otto.barten (otto-barten) · 2023-05-01T00:29:39.143Z · LW(p) · GW(p)

@Roman_Yampolskiy [LW · GW] and I published a piece on AI xrisk in a Chinese academic newspaper: http://www.cssn.cn/skgz/bwyc/202303/t20230306_5601326.shtml

We were approached after our piece in Time and asked to write for them (we also gave quotes to another provincial newspaper). I have the impression (I've also lived and worked in China) that leading Chinese decision makers and intellectuals (or perhaps their children) read Western news sources like Time, NYTimes, Economist, etc. AI xrisk is currently probably mostly unknown in China, and people who stumble upon it might have trouble believing it (as they have in the West). But if/when we have a real conversation about AI xrisk in the West, I think the information will seep into China as well, and I'm somewhat hopeful that if this happens, it could help prepare China for cooperation to reduce xrisk. In the end, no one wants to die.

Curious about your takes though, I'm of course not Chinese. Thanks for the write-up!

comment by RobertM (T3t) · 2023-04-25T05:42:05.201Z · LW(p) · GW(p)

I've heard people be somewhat optimistic about this AI guideline from China. They think that this means Beijing is willing to participate in an AI disarmament treaty due to concerns over AI risk.

I'm curious where you've seen this.  My impression from reading the takes of people working on the governance side of things is that this is mostly being interpreted as a positive sign because it (hopefully) relaxes race dynamics in the US. "Oh, look, we don't even need to try all that hard, no need to rush to the finish line."  I haven't seen anyone serious making a claim that this is downstream of any awareness of x-risk concerns, let alone intent to mitigate them.

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2023-04-25T07:32:22.444Z · LW(p) · GW(p)

I've only seen vaguely positive vibes from people who needed Google Translate in order to understand it, like this post from Zvi. He comes to the conclusion that "All of that points, once again, to an eager partner in a negotiation." This isn't obvious at all. Again, a willingness to regulate nuclear power is not a strong signal of a willingness to participate in a nuclear disarmament treaty - all states will eventually have some level of nuclear regulation.

comment by jacob_cannell · 2023-04-25T04:27:12.142Z · LW(p) · GW(p)

Thanks, I'm now more curious about Chinese viewpoints on AI and the singularity in general.

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2023-04-25T05:18:50.793Z · LW(p) · GW(p)

"Damn it, we're falling behind! GPT4 is way better than anything we have."

OpenAI's ban on Chinese users has really hurt public knowledge of GPT-4, for what that's worth. The small amount of effort it takes to get a US phone number was enough to discourage casual users from getting hands-on experience, although there are now clone websites using the OpenAI API going up and down all the time. But yeah, awareness of just how good it is isn't as mainstream as in the US.

As far as I can tell, GPT-4/ChatGPT works great in Chinese, even without fine-tuning. And it blows the Chinese-specialized models from Baidu and friends out of the water. It seems like a bit of a Sputnik moment.

Replies from: jacob_cannell
comment by jacob_cannell · 2023-04-25T06:10:58.358Z · LW(p) · GW(p)

Interesting - is OpenAI actually banning Chinese users, or is the Great Firewall banning OpenAI? I can't find a quick answer from Google; instead I'm getting reports that it's a mutual ban/restriction? I can't immediately see why it would be in OpenAI's interest not to allow at least paying Chinese users.

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2023-04-25T07:24:33.566Z · LW(p) · GW(p)

OpenAI does not accept account creation with a Chinese/HK phone number (specifically noting that they are blocking service in those areas). I also cannot access the ChatGPT website from a mainland IP, but can from HK. I vaguely remember OpenAI citing US law as a reason they don't allow Chinese users access - maybe legislation passed as part of the chip ban?

Replies from: Erich_Grunewald
comment by Erich_Grunewald · 2023-04-25T18:10:42.087Z · LW(p) · GW(p)

I vaguely remember OpenAI citing US law as a reason they don't allow Chinese users access, maybe legislation passed as part of the chip ban?

Nah, the export controls don't cover this sort of thing. They just cover chips, devices that contain chips (i.e. GPUs and AI ASICs), and equipment/materials/software/information used to make those. (I don't know the actual reason for OpenAI's not allowing Chinese customers, though.)

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-04-25T22:14:17.329Z · LW(p) · GW(p)

I don't know the actual reason for OpenAI's not allowing Chinese customers, though.

 

Any online service hosted outside of mainland China that wants to sell within it has to meet quite onerous regulations. I've thought for some time about why these regulations exist and develop to the point of being harmful (and why they exist nearly everywhere).

It's partly by design, similar to the EU's regulatory apparatus that created GDPR, partly because of the natural distrust government officials have toward the intentions of foreign actors, and partly because of office dynamics: scoring points against foreigners is low-hanging fruit for the career prospects of middle-management government officials in many, many government offices.

If their attempts succeed in delivering proof of misdeeds, or a public apology, then it's a guaranteed promotion.

If their attempts fail, how will the overseas stakeholders retaliate against these nameless official(s) without pointing the finger at the department/division/ministry/government/China... as a whole? 

If they do anyways, it would turn into a status fight, thus facilitating a promotion for whoever proposed it.

And then some stricter regulations will be imposed as retaliation, which increases the power and authority of the instigators.

i.e. in either case the instigator wins, so it becomes a ratcheting mechanism that encourages ever more extreme proposals and tit-for-tat behaviour - and that's even before the geopolitics come into play.

In this sense increasing geopolitical tension might ironically reduce the day-to-day risk of being on the unfortunate end of such schemes, since higher level officials will be extra motivated to make sure their subordinates are more disciplined.

comment by Jordan Schneider (jordan-schneider) · 2023-05-04T12:03:04.967Z · LW(p) · GW(p)

Hi! I run a reasonably popular podcast (ChinaTalk) and am also very frustrated with the misconceptions people have around this topic! Please reach out - I'm at jordan@chinatalk.media

comment by Mitchell_Porter · 2023-04-26T10:18:15.965Z · LW(p) · GW(p)

Thanks very much for this dose of reality. So maybe a Western analogy for the attitude to "AI safety" at Chinese companies is that, at first, it will be comparable to the attitude at Meta. What I mean by this: Microsoft works with OpenAI, and Google works with Anthropic, so they both work with organizations that at least talk about the danger of AI takeover, alongside more mundane concerns. But as far as I can tell, Meta does not officially acknowledge AI takeover as a real risk at all. The closest thing to an official Meta policy on the risk of AI takeover is Yann LeCun on Twitter, telling Eliezer that it's a fantasy.

I found an article from the "National Business Daily", from the start of this April, in which a few Chinese academics comment on the petition calling for a 6-month pause on advanced AI. They say things like: GPT-4 just does what humans tell it to do; self-awareness is the crucial threshold, and GPT-4 is nowhere near self-awareness; and one even says that the human level of intelligence is like "the speed of light" for Turing-machine IQ, a threshold that cannot be passed... Again, I actually find that rather similar to how Western researchers react to questions like: can AI surpass us, and what will happen if it does? Unlike more mundane questions of computer science, there's no consensus of verified knowledge that can reliably answer those questions for us, so people just fall back on their individual intuitions and private theories.

Nonetheless, AI is fast-moving and I think that, as Chinese systems approach the performance of GPT-3 and GPT-4, it will unavoidably dawn on some of the people involved, that this is potentially creating autonomous intelligent beings that can equal or surpass any human; and so they will start to take public positions on what that means, the desirability of it, the inevitability of it, and so on. I can, for example, imagine a vigorous public debate over the creation of artificial persons, with some saying, just don't do it at all, others saying, they should have the rights of a citizen, still others saying, we should enforce Asimov laws and not feel bad about doing so, and so on. 

comment by mako yass (MakoYass) · 2023-04-28T04:10:14.044Z · LW(p) · GW(p)

Do you really think that a scientist is going to walk up to his friend from the Politburo and say "Hey, I know AI is a central priority of ours, but there are a few fringe scientists in the US asking for treaties limiting AI, right as they are trying their hardest to cripple our own AI development. Yes, I believe they are acting in good faith"

This part caught my attention. What happens if you do this is that your Politburo friend answers "that's obviously dangerous propaganda, so I'm going to generate counterarguments against it so that no one entertains it ever again, because we can't afford to have those sorts of doubts around when we're so far behind."

But of course, that reaction is not healthy. An international alignment treaty should be seen as a way for China to convince the US to let them catch up, in a sense, by getting an equal seat at the table, which they're otherwise not going to get. If they want to survive this, they'd do well to note that.

comment by ChristianKl · 2023-04-27T16:14:05.145Z · LW(p) · GW(p)

It is far more likely that Beijing sees near-term AI as a potential threat to stability that needs to be addressed with regulation.

This suggests that Beijing is making regulations that are driven by short-term thinking and not careful thinking over longer timelines. 

Generally, my perspective of Chinese decision-making is that government policy is written by people who are able to think over longer timelines. 

To me "AGI has to be aligned with CEV" and the sentence of "AGI has to be aligned with socialist values" sound very similar to me. Anything that's an existential threat to humanity is also an existential threat to the CCP.

Even if they all become law and are strictly enforced, they are simply regulations on AI data usage and training.

If the US created similar rules and enforced them, it would do a lot more than the six-month training pause. The rules prevent the training of large language models on random internet data and allow training only on high-quality data.

If you want to slow down dangerous AI development without slowing down AI alignment work, those policies sound a lot smarter than what the letter calling for the AI training pause proposes.

Do you really think that a scientist is going to walk up to his friend from the Politburo and say "Hey, I know AI is a central priority of ours, but there are a few fringe scientists in the US asking for treaties limiting AI, right as they are doing their hardest to cripple our own AI development. Yes, I believe they are acting in good faith, they're even promising to not widen the current AI gap they have with us!"

No, but they might say "There's a call for US labs to stop training larger AI models, can we take actions to support that proposal to slow down US AI development? Our analysts at the Cyberspace Administration of China believe that AI could be very dangerous if its development isn't well regulated. Can we push for an international treaty that makes the recommendations of the Cyberspace Administration of China also binding on other countries?"

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2023-04-27T17:40:07.558Z · LW(p) · GW(p)

Chinese experts simply don't think it's a real issue yet. There is no law of physics that says our long-term assessment has to match that of the West. Just because you think it's in our best interest doesn't mean we have to agree.

Replies from: ChristianKl
comment by ChristianKl · 2023-04-28T00:00:01.552Z · LW(p) · GW(p)

Do we have a good reason to assume that the people at the Cyberspace Administration of China are open and transparent about their long-term assessments? 

What kind of experts do you think would say something different publicly if such assessments play into their policy?

comment by GeneSmith · 2023-04-27T04:19:16.427Z · LW(p) · GW(p)

Let us know when the podcast with you goes up!

comment by ChristianKl · 2023-04-26T16:27:39.251Z · LW(p) · GW(p)

When it comes to having an international treaty, whether or not China will agree depends a lot on the content of the treaty.

If the Chinese put their guidelines into practice, they might be open to a treaty that basically says all companies have to abide by those guidelines.

The proposed requirements on training data quality could prevent models like GPT-3 and GPT-4 from being trained.

Legal liability for anything the model does is also an effective way to slow down AI development.

comment by ShenZhen · 2023-04-26T03:59:40.498Z · LW(p) · GW(p)

Old lurker, new account! Need to go to work soon, so very, very quick high-level thoughts:

I feel like we share some of the same frustration at mainstream/western EA, and at other alignment speculators without a deep understanding of China. I can crudely gesture at their wrong inferences, stemming from assumptions orthogonal to the truth and implying fundamental misconceptions in their model, but in the end it comes down to context: much of this is sub-verbal, subtle differences that I can't succinctly convey without putting in a few months of effort first. Often when I talk to even ABC (American-born Chinese) EAs, I want to shout: JUST LIVE IN CHINA FOR TWENTY YEARS AND I WON'T NEED TO CONVINCE YOU.

Having said that, I disagree with some of these conclusions: HPMOR is cringe, yes, but HPMOR is also glorious, c'mon. It's polarizing in Chinese, yes, but it's also polarizing in English - a good selector for the target audience. Translating it to build the alignment field is not exactly zero expected utility, but pretty close.

My impression of Chinese EA field-building is that it's still severely under-resourced, in both funding and talent (especially talent). A team of roughly 10 people (of various degrees of Chinese background) has been plugging away at it for a few years, but they need much more help. The marginal impact is through the roof for the handful of people who are capable of helping.

comment by NirViaje (nirviaje) · 2024-04-29T09:30:26.862Z · LW(p) · GW(p)

I've looked through the EA/Rationalist/AI Safety forums in China

Hi, do you know Concordia AI in Beijing? They are pretty active:
https://concordia-ai.com/
https://www.zhihu.com/org/an-yuan-ai-6/posts

comment by Vael Gates · 2023-05-01T21:12:59.334Z · LW(p) · GW(p)

Does Anyuan(安远) have a website? I haven't heard of them and am curious. (I've heard of Concordia Consulting https://concordia-consulting.com/ and Tianxia https://www.tian-xia.com/.)

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2023-05-02T10:12:43.242Z · LW(p) · GW(p)

My mistake, 安远 is Concordia's Chinese name.

comment by angmoh · 2023-05-01T01:47:30.139Z · LW(p) · GW(p)

Good post. 

For Westerners looking to get a palatable foothold on the priorities and viewpoints of the CCP (and Xi), I endorse "The Avoidable War", written last year by Kevin Rudd (former Prime Minister of Australia, speaks Mandarin, has worked in China, loads of diplomatic experience - imo about as good a person as exists to interpret Chinese grand strategy and explain it from a Western viewpoint). The book is (imo, for a politician) impressively objective in its analysis.

Some good stuff in there explaining the nature of Chinese cynicism about foreign motivations, echoing some of what is written in this post, but with a bit more historical background and strategic context.

comment by Joshua Clancy (joshua-clancy) · 2023-04-30T03:12:08.928Z · LW(p) · GW(p)

I did a master's in data science at Tsinghua University in Beijing. Maybe I'm a little biased, but I thought they knew their stuff. Very math-heavy. At the time (2020), the entire department seemed to think graph networks, with graph-based convolutions and attention, were the way forward toward advanced AI. I still think this is a reasonable thought. No mention of AI safety, though I did not know about the community (or the concern) then.
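
For anyone who hasn't met these: here's a minimal sketch of the kind of layer the department was excited about - a graph convolution where each node updates itself by aggregating its neighbors' features, weighted by learned attention (GAT-style). This is my own illustrative simplification (single head, plain numpy, no LeakyReLU on the logits), not any particular paper's implementation; all names and shapes here are assumptions.

```python
import numpy as np

def graph_attention_layer(H, A, W, a):
    """Simplified single-head graph-attention (GAT-style) layer.

    H: (n, d) node features; A: (n, n) adjacency matrix with
    self-loops (1 = edge); W: (d, d_out) projection; a: (2*d_out,)
    attention parameters. Returns updated (n, d_out) node features.
    """
    Z = H @ W                      # linearly project node features
    n = Z.shape[0]
    # attention logit for each pair (i, j): dot(a, [z_i ; z_j])
    logits = np.array([[a @ np.concatenate([Z[i], Z[j]])
                        for j in range(n)] for i in range(n)])
    logits = np.where(A > 0, logits, -np.inf)      # attend to neighbors only
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return np.maximum(weights @ Z, 0.0)            # aggregate, then ReLU

# toy usage: 4 nodes in a ring, random (untrained) parameters
rng = np.random.default_rng(0)
A = np.eye(4) + np.roll(np.eye(4), 1, axis=1) + np.roll(np.eye(4), -1, axis=1)
H = rng.normal(size=(4, 3))
out = graph_attention_layer(H, A, rng.normal(size=(3, 2)), rng.normal(size=(4,)))
print(out.shape)  # (4, 2)
```

The point is just that "convolution" on a graph means neighbor-weighted aggregation rather than a sliding window over a grid; stacking a few such layers with trained W and a gives the class of models in question.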

comment by Meena Kumar (meena-kumar) · 2023-04-26T12:14:12.052Z · LW(p) · GW(p)

The main thing seems to be money. How do we get more money into alignment in a way that is not detrimental or repulsive to the people we want to attract to work on it?

comment by Ariel G. (ariel-g) · 2023-04-26T07:52:23.790Z · LW(p) · GW(p)

I'm trying to think of ideas here. As a recap of what I think the post says:

  • China is still very much in the race
  • There is very little AI safety activity there, possibly due to a lack of reception of EA ideas.

^let me know if I am understanding correctly.

Some ideas/thoughts:

  • It seems to me that many in AI Safety or in other specific "cause areas" are already dissociating from EA, though not much from LW.
  • I am not sure we should expect mainstream adoption of AI Safety ideas (it's not really mainstream in the West, and neither are EA or LW).
  • It seems like there are some communication issues (the org looking for funding) that this post can help with.
  • To me it is super interesting to hear that there is less resistance to the ideas of AI Safety in China, though I don't want to fully believe that yet. Also, I'm not sure the AI Safety field is people-bottlenecked right now; it seems we currently don't know what to do with more people, really.
  • Still, it's clear that we need a strong field in China. Perhaps less alignment-focused, and more governance? Though my impression from your post is that governance is less doable there - maybe I am misunderstanding.

I might have more thoughts later on.

(for context, I have recently become involved in governance work for the EU AI Act)

Replies from: derpherpize, Lichdar
comment by Lao Mein (derpherpize) · 2023-04-27T02:39:43.326Z · LW(p) · GW(p)

I'm saying that AI Safety really can't take off as a field in China without a cadre of well-paid people working on technical alignment. If people were working on, say, interpretability here for a high wage ($50,000-$100,000 is considered a lot for a PhD in the field), it would gain prestige and people would take it seriously. Otherwise it just sounds like LARP. That's how you do field-building in China: you don't go around making speeches, you hire people.

My gut feeling is that hiring 10 expats in Beijing to do "field-building" gets less field-building done than hiring 10 college grads in Shanghai to do technical alignment work.

comment by Lichdar · 2023-04-26T14:19:18.558Z · LW(p) · GW(p)

My experience is that the Chinese (I am one) will disassociate from the "strange parts" of EA, such as mind uploading or the minimization of suffering or even life extension: the basic conservative argument for the continuation of the species and life as human beings is what works.

CN is fundamentally conservative in that sense. The complications are not many and largely revolve around:

  1. How is this good for the Party?

  2. How is this good for "keeping things in harmony / close to nature / Taoism"?

  3. Emotional appeals for safety and effort. The story that effort = worth is strong there, and devaluation of human effort leads to reactions of disgust.

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2023-04-27T02:42:41.411Z · LW(p) · GW(p)

Mind uploading and life extension are far easier sells than ending factory farming, in my experience. It's not the tech part we find cringe, but the altruism part. The Chinese soul wants to make money, be cool, and work in a cool field. People trying to sell me altruism are probably part of a doomsday cult (e.g. Catholicism).

Replies from: Lichdar
comment by Lichdar · 2023-04-27T14:42:33.948Z · LW(p) · GW(p)

I think the lack-of-altruism part comes from the desire to compete and be superior (via effort). There's no broad desire for altruism as the West understands it. How would you be better than others?

The upside is that "speciesism" is the norm. There would be zero worry about wanting to give AI "fair rights." I do think there is some consideration for "humankind", but individual rights (for humans or animals) are strictly a "nice to have" - unnecessary and often harmful.

comment by Victor Lecomte (victor-lecomte) · 2023-04-26T01:01:36.493Z · LW(p) · GW(p)

It sounds like you're skeptical about EA field-building because most Chinese people find "changing the world" childishly naive and impractical. Do you think object-level x-risk field-building is equally doomed?

For example, if 看理想 (an offshoot of publishing house 理想国 that produces audio programs about culture and society) came out with an audio program about x-risks (say, 1-2 episodes about each of nuclear war, climate change, AI safety, and biosecurity), I think the audience would be fairly receptive to it. 梁文道, one of the leaders of 看理想, has shared on his podcast 八分 (at the end of episode 114) that a big part of his worldview is “先天下之忧而忧,后天下之乐而乐” (“worry before the rest of the world worries, and rejoice only after the rest of the world rejoices”), a well-known quote describing ideals of Confucian governance, which has similarities with EA ideals.

In general, I guess I would have expected Chinese people to be pretty receptive to altruism given the emphasis on contributing (贡献) to the greater good in the party line (e.g. studying the good example of Lei Feng), which gets reflected a lot in media/textbooks/etc. But maybe I spend too much time consuming Chinese media and not enough time talking to actual Chinese people.

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2023-04-27T02:42:55.700Z · LW(p) · GW(p)

We know we can change the world - it's not even a question when your grandparents are literal peasant farmers, your parents are industrial workers, and you're a scientist. It's just that altruism itself is cringe. Volunteering is cringe. I don't know how to explain this to Westerners in a way they can understand.

Replies from: Lichdar, victor-lecomte
comment by Lichdar · 2023-04-27T14:46:43.570Z · LW(p) · GW(p)

For Westerners, I think the explanation can go like this. Say you want to do an open source thing, and spend hours on it. Okay.

But if you are in China, you could have used those hours to make money for your family.

Why would you take care of not-family unless you had too much money and time?

I do think the Chinese care about "humanity" in my experience, but The Three-Body Problem shows a typical version of it: survival of the mass via individual cruelties.

Like I said, though, I think the upside is that the Chinese are naturally unreceptive to anything like rights for AI.

comment by Victor Lecomte (victor-lecomte) · 2023-04-27T03:31:15.746Z · LW(p) · GW(p)

I would still like to try and understand, if that's okay. :)

Would you say the following captures some of it?

When you're a kid, altruism/volunteering is what adults / teachers / the government keep telling you "nice kids" do, so it's perceived as something uncool that you need to grow out of, and is only done by people who don't think for themselves and don't realize how the world really works.

Replies from: derpherpize
comment by Lao Mein (derpherpize) · 2023-04-27T08:27:15.692Z · LW(p) · GW(p)

That's pretty close to it. And your parents and teachers regularly take you aside and remind you that it's all LARP. I also suspect that the desire to engage in charitable behavior and political organization is motivated by cultural neuroses. Why should you go vote or politically organize or give your time away for free when you can so obviously improve your life by putting effort into your career? Christian/liberal guilt, I assume.