Survey: How Do Elite Chinese Students Feel About the Risks of AI?

post by Nick Corvino (nick-corvino) · 2024-09-02T18:11:11.867Z · LW · GW · 13 comments

Contents

  Intro
  Key Takeaways
  Background
  Report
  Will AI do more harm than good for society? 
  Concerns about the effects of AI in daily life 
  Should the development of large-scale AI systems be paused for at least six months worldwide? 
  Should the Chinese government regulate AI?
  Cooperation between the U.S. and China 
  Misaligned AI compared to other existential threats 
  Risk of human extinction from AI 
  Could AI be more intelligent than humans? 
  Most concerning risks posed by AI 
  Reflection 
  Methodology 

Intro

In April 2024, my colleague and I (both affiliated with Peking University) conducted a survey involving 510 students from Tsinghua University and 518 students from Peking University—China's two top academic institutions. Our focus was on their perspectives regarding the frontier risks of artificial intelligence.

In the People’s Republic of China (PRC), publicly accessible survey data on AI is relatively rare, so we hope this report provides some valuable insights into how people in the PRC are thinking about AI (especially the risks). Throughout this post, I’ll do my best to weave in other data reflecting the broader Chinese sentiment toward AI. 

For similar research, check out The Center for Long-Term Artificial Intelligence, YouGov, Monmouth University, The Artificial Intelligence Policy Institute, and, notably, a poll conducted by Rethink Priorities, which closely informed our survey design.

You can read the full report published in the Jamestown Foundation’s China Brief here: Survey: How Do Elite Chinese Students Feel About the Risks of AI?

 

Key Takeaways

- Students are optimistic: 80% agreed that AI will do more good than harm for society, and only 31.2% worry about its effects in their daily lives.
- Respondents leaned against a six-month worldwide pause on large-scale AI development (43.29% opposed vs. 35.16% in favor).
- An overwhelming 85.3% agreed that the Chinese government should regulate AI.
- 60.7% believe AI will not be developed safely without cooperation between the U.S. and China.
- Misaligned AI ranked last among potential causes of human extinction, and 63.71% doubted AI could threaten humanity's survival; 50% believe AI will eventually be more intelligent than humans.
- Of the specific risks posed by AI, surveillance was the most concerning to students.

Background

As the recent decision (决定) document from the Third Plenum meetings in July made clear, AI is one of eight technologies that the Chinese Communist Party (CCP) leadership sees as critical for achieving “Chinese-style modernization (中国式现代化),” and is central to the strategy of centering the country’s economic future around breakthroughs in frontier science (People’s Daily, July 22). The PRC also seeks to shape international norms on AI, including on AI risks. In October 2023, Xi Jinping announced a “Global AI Governance Initiative (全球人工智能治理倡议)” (CAC, October 18, 2023).

Tsinghua and Peking University are by far the two most prestigious universities in the PRC, and many of their graduates will be highly influential in shaping the country's future. These students may also be some of China's most informed citizens on the societal implications of AI, with both schools housing prominent programs on generative AI and safe AI development. 

We collected 1,028 valid responses, with 49.61% of respondents attending Tsinghua University and 50.39% attending Peking University. See the Methodology section for further information on sampling procedures and considerations. 

 

Report

Will AI do more harm than good for society? 

Respondents strongly believed that AI's benefits outweigh the risks: 80% of respondents agreed or strongly agreed with the statement that AI will be more beneficial than harmful for society. Only 7.49% believed the harms could outweigh the benefits, while 12.46% remained neutral or uncertain.

Our results closely align with a 2022 Ipsos survey in which 78% of Chinese respondents viewed AI's benefits as outweighing the drawbacks, the most optimistic of all countries polled. This sharply contrasts with Western sentiment, where polls suggest majorities worry more about transformative AI's dangers than its upsides. (In the same Ipsos survey, only 35 percent of Americans believed AI offers more benefits than harms.)

China currently seems to be one of the most optimistic countries about the upsides of AI—if not the most optimistic. 

 

Women tend to be less optimistic about AI than men, and our study revealed a similar gender divide, with male-identifying students displaying slightly greater optimism about AI's societal impact than their female-identifying counterparts. While 82.9% of males somewhat or strongly agreed that AI will do more good than harm for society, only 75.8% of females shared this positive outlook.

Concerns about the effects of AI in daily life 

In line with their optimism about the benefits of AI, respondents tended not to be worried about the effects of AI in their daily lives: 49.12% of respondents reported being worried only a little or not at all, while 31.2% reported concern and 20.13% were neutral or uncertain. 

 

Perhaps more than any other country, China currently deploys AI across daily life, in domains such as surveillance, healthcare, transportation, and education. For this reason, we first wanted to gauge a relatively broad indication of how much students actively worry about AI, and then to get a separate indication of their perceived likelihood of more specific risks later on. We therefore used the exact wording of the Rethink Priorities survey of U.S. adults (with their permission), which found that the majority (72%) of US adults worry little or not at all about AI in their daily lives. Our data shows a similar trend toward being unconcerned. 

 

Should the development of large-scale AI systems be paused for at least six months worldwide? 

Respondents leaned against pausing large-scale AI systems: 43.29% disagreed or strongly disagreed with the claim that AI development should be paused, while 35.16% agreed and 21% remained neutral or uncertain. 

 

This question was inspired by the open letter issued by the Future of Life Institute in March 2023, urging AI labs to suspend development of systems more powerful than GPT-4 for a minimum of six months to address potential safety concerns. The letter was signed by influential figures such as Elon Musk, Steve Wozniak, and Stuart Russell.

When a YouGov poll put this question to respondents in the United States, 58-61% (depending on framing) supported and 19-23% opposed a pause on certain kinds of AI development. Similarly, when Rethink Priorities replicated the question for US adults, altering the framing from ">1000" to "some" technology leaders signing the open letter, they estimated that 51% of US adults would support a pause and 25% would oppose it. Both surveys show a stronger desire for a pause than our results. 

When the Center for Long-Term AI asked an exclusively Chinese sample a similar question about 'Pausing Giant AI Experiments,' 27.4% of respondents supported pausing the training of AI systems more powerful than GPT-4 for at least six months, and 5.65% supported a six-month pause on all large AI model research. However, when a less specific question was asked ("Do you support the ethics, safety, and governance framework being mandatory for every large AI model used in social services?"), 90.81% of participants expressed support.

Note: Our survey was conducted approximately one year after the open letter, meaning it was not as fresh on the respondents' minds. 

 

Should the Chinese government regulate AI?

In our survey's most pronounced result, 85.3% of respondents agreed or strongly agreed with the claim that AI should be regulated by the Chinese government, with only 6.03% disagreeing and 8.65% remaining neutral or uncertain. 

 

A Harris-MITRE poll conducted in November 2022 estimated that 82% of US adults would support such regulation. A January 2023 Monmouth University poll estimated that 55% of Americans favored having "a federal agency regulate the use of artificial intelligence similar to how the FDA regulates the approval of drugs and medical devices," with only 41% opposed. Using similar question framing, Rethink Priorities estimated that a sizeable majority (70% of US adults) would favor federal regulation of AI, with 21% opposed.

We chose not to specify a particular government agency that would oversee AI regulation, as the regulatory landscape in China differs from the US. Even so, our results still reflected a comparably high demand from Chinese students for the government to implement oversight and control measures.

While China has shown little hesitation in regulating AI applications it deems unsafe, it has also hypercharged development efforts, attempting to provide top labs like Baidu and Tencent with the resources to compete against Western labs such as OpenAI, Google, and Anthropic. 

 

Cooperation between the U.S. and China 

Students believe AI will not be developed safely without cooperation between the U.S. and China. 60.7% of respondents disagreed with the claim that AI would be developed safely absent such cooperation, while 25.68% agreed and 13.62% remained neutral or uncertain.

A similar question was put to American voters in a survey conducted by the Artificial Intelligence Policy Institute (AIPI), asking whether respondents would support the U.S. and China agreeing to ban AI in drone warfare: 59% supported such an agreement and only 20% opposed it. However, in another AIPI poll, 71% of US adults, including 69% of Democrats and 78% of Republicans, disapproved of Nvidia selling high-performance chips to China, while just 18% approved, underscoring the difficulty of navigating cooperation. 

AI has been a topic of increasing prominence in China-U.S. diplomacy. It was one of the main topics at the November 2023 Woodside Summit between Chinese President Xi Jinping and U.S. President Joe Biden in San Francisco, which spawned a commitment to a series of bilateral talks between the two countries on the development of AI. China has long complained about U.S. export controls on advanced chips and semiconductors, seeing them as obstacles to its AI development. The U.S., meanwhile, justifies these restrictions by citing concerns over China's potential misuse of AI technologies and, implicitly, the aim of hurting China economically.

 

Misaligned AI compared to other existential threats 

When asked about the most likely causes of human extinction, misaligned artificial intelligence received the lowest score. Nuclear war, natural disaster, climate change, and pandemics all proved more concerning for students. 

 

Misaligned AI received the fewest first-place votes of the available options and the lowest aggregate score when combining all ordinal rankings. 

We asked a nearly identical question to the one Rethink Priorities asked and received similar results, with AI also receiving the lowest ranking in their survey. The only difference was that in their survey, "climate change" ranked above "asteroid impact" as the second most concerning existential risk. This is possibly because our question grouped all natural disasters into a single option, citing both an asteroid collision and supervolcanoes as examples.

 

Risk of human extinction from AI 

A notable minority of respondents were concerned about the possibility of AI threatening the existence of the human race: 17.32% agreed or strongly agreed that this is possible, while 63.71% disagreed or strongly disagreed and 18.97% remained neutral or uncertain.

We translated this question almost verbatim from a YouGov poll of 1,000 U.S. adults. Results from the YouGov poll suggested high estimates of the likelihood of extinction caused by AI: 17% reported it 'very likely' and an additional 27% reported it 'somewhat likely'. When Rethink Priorities replicated the survey question, they obtained lower estimates, but they chose to make their questions time-bound (e.g., the likelihood of AI causing human extinction in the next 10 or 50 years). Because we believed most students would not draw a meaningful distinction between different time ranges, we left the question temporally unbounded. 

 

Could AI be more intelligent than humans? 

50% of respondents agreed or strongly agreed with the claim that AI will eventually be more intelligent than humans, while 32.59% disagreed and 17.51% remained neutral or uncertain.

When Rethink Priorities asked this question, they estimated that 67% of US adults think it moderately, highly, or extremely likely that AI will become more intelligent than people. The Center for Long-Term AI asked a related question of young and middle-aged Chinese AI students and scholars, but worded it in terms of "Strong AI" (强人工智能), a catch-all term encompassing Artificial General Intelligence, Human-Level AI, and Superintelligence. Of their participants, 76% believed Strong AI could be achieved, although most believed it could not be achieved before 2050, and around 90% believed it would arrive "after 2120." Both surveys show a higher reported likelihood of smarter-than-human intelligence than our results. 

 

Given Tsinghua University's more scientific orientation and its students' higher reported familiarity with AI, we analyzed the responses across universities. Our findings indicate that Tsinghua students were less likely than their Peking University counterparts to believe AI would eventually surpass human intelligence.

 

Most concerning risks posed by AI 

When asked which potential risks posed by AI are the most concerning, surveillance was the most common answer. 

 

The use of AI for surveillance received the most first-place votes of the available options (26.64%). It was followed by existential risk, misinformation, wealth inequality, increased political tension, and various issues related to bias (e.g., race, age, or gender), with the welfare of AI entities receiving the fewest first-place votes. 

When aggregating ordinal rankings, surveillance also received the highest total score, followed by misinformation, existential risk, increased political tension, wealth inequality, and various issues related to bias, with the welfare of AI entities receiving the lowest total score.  
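To make the aggregation concrete, a Borda-style count is one simple way to combine ordinal rankings into a single score. The sketch below is illustrative only, using invented ballots and a reduced option set, and should not be read as our exact scoring procedure:

```python
from collections import defaultdict

# Invented ballots for illustration: each list orders risks from most
# to least concerning for one hypothetical respondent.
ballots = [
    ["surveillance", "misinformation", "existential risk"],
    ["misinformation", "surveillance", "existential risk"],
    ["surveillance", "existential risk", "misinformation"],
]

def borda_scores(ballots):
    """Give n-1 points to a first-place vote, n-2 to second, and so on."""
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for rank, option in enumerate(ballot):
            scores[option] += n - 1 - rank
    return dict(scores)

print(borda_scores(ballots))
# {'surveillance': 5, 'misinformation': 3, 'existential risk': 1}
```

Under a scheme like this, an option can lead on raw first-place votes yet sit lower in the aggregate (or vice versa), which is why the two orderings above differ slightly.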

China has actively invested in surveillance in recent years, bolstering extensive CCTV and digital monitoring systems in its major cities. Although our results don’t show how students view the upsides of AI surveillance, they do suggest students are concerned about the potential downsides. 

 

Reflection 

Little publicly available survey data assesses how citizens in the PRC feel about the risks of AI, so it is difficult to know to what extent our survey of Tsinghua and Peking University students reflects the views of the broader Chinese population. However, our results suggest that students are broadly less concerned about the risks of AI than people in the United States and Europe. 

Chinese students' optimism about the benefits of AI aligns more closely with sentiments found in the developing nations of the Global South, which tend to view the technology's potential in a more positive light overall. In terms of socioeconomic conditions and geopolitical standing, China currently finds itself between the developed and developing world, and it will be interesting to see how this shapes its views on AI in the coming years.

Among the major players in the global AI race, China's stance on addressing the risks of the technology remains the least clear. While some argue that the PRC does not take AI safety as seriously as Western nations, the country has taken notable steps to address these concerns in recent years.

Last year, the PRC co-signed the Bletchley Declaration at the UK AI Safety Summit, calling for enhanced international cooperation on developing safe and responsible AI systems. Within the PRC, the government has implemented regulations restricting the use of technologies like deepfakes and harmful recommendation algorithms. The Cybersecurity Association of China (CSAC) announced an AI safety and security governance expert committee in October 2023. Major tech hubs like Shanghai, Guangdong, and Beijing (which alone hosts over half of China's large language models) have initiated efforts to establish benchmarks and assessments for evaluating the safety of AI applications. These measures indicate China's recognition of the need to manage risks as AI capabilities rapidly advance, though the full extent and effectiveness of China's AI safety initiatives remain to be seen.

It's important to note that our survey was conducted roughly one year after many of the other studies we've used for comparative data. In that time, the field of AI has advanced significantly. Even between the dates the survey was administered (April 18-20, 2024) and this write-up's initial publication (August 23), major developments unfolded, such as the release of OpenAI's multimodal GPT-4o model and the first bilateral talks on AI between the U.S. and China.

Given the scarcity of data on how Chinese citizens perceive the risks of AI, we hope that future research will further investigate perspectives in China. It would be interesting to investigate how different age demographics, urban and rural populations, or people working in different industries in the PRC feel about AI.

 

Methodology 

To administer the survey, we leveraged the "Treehole (树洞)" online platforms, which are exclusive to each university and can be accessed only by current students. Respondents provided their WeChat IDs to receive monetary compensation (3-20 RMB ($0.42-$2.80) per participant, randomly assigned). Respondents were also asked to state their university, and their IP addresses were detected so that responses originating outside the two universities could be marked invalid. These measures prevented multiple responses from single accounts and responses from bots. 
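As a rough illustration of that validation logic, the sketch below deduplicates WeChat IDs and flags responses from off-campus IP addresses. The field names and IP ranges are assumptions for illustration, not the exact checks we ran:

```python
import ipaddress

# Assumed campus IP blocks for illustration; the real ranges we matched
# against are not reproduced here.
CAMPUS_NETWORKS = [
    ipaddress.ip_network("162.105.0.0/16"),  # assumed PKU block
    ipaddress.ip_network("166.111.0.0/16"),  # assumed Tsinghua block
]

def is_valid(response, seen_wechat_ids):
    """Reject duplicate WeChat IDs and responses from off-campus IPs."""
    if response["wechat_id"] in seen_wechat_ids:
        return False
    ip = ipaddress.ip_address(response["ip"])
    if not any(ip in net for net in CAMPUS_NETWORKS):
        return False
    seen_wechat_ids.add(response["wechat_id"])
    return True

responses = [{"wechat_id": "abc", "ip": "162.105.1.23"},
             {"wechat_id": "abc", "ip": "162.105.1.23"},  # duplicate: dropped
             {"wechat_id": "xyz", "ip": "8.8.8.8"}]       # off-campus: dropped
seen = set()
valid = [r for r in responses if is_valid(r, seen)]
print(len(valid))  # 1
```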

One key uncertainty, however, is whether the gender demographics of the survey accurately reflect the composition of Tsinghua and PKU. Survey respondents reported a gender breakdown of 59.73 percent male and 40.27 percent female. Neither university publicly discloses its official gender demographics, so a definitive comparison between the sample and the student population is not possible. Analysis of indirect sources, however, such as departmental announcements, blog posts, and other websites, led us to conclude that the likely gender ratio is approximately 60 percent male and 40 percent female. Using this as our baseline assumption before conducting the survey, we found that the results aligned with the estimated ratio, so we judged post-stratification of the dataset to be unnecessary.
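For concreteness, that decision can be framed as comparing post-stratification weights (population share divided by sample share) against 1; weights near 1 mean reweighting would barely move any estimate. A minimal sketch, assuming the 60/40 baseline estimate above:

```python
# Assumed campus-wide gender ratio (our indirect estimate, not an official figure).
population_share = {"male": 0.60, "female": 0.40}
# Gender breakdown observed in the survey sample.
sample_share = {"male": 0.5973, "female": 0.4027}

# Post-stratification weight = population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)
# male ~1.005, female ~0.993: close enough to 1 that reweighting
# would barely change any estimate.
```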

Finally, a note on the translation process. Given the significant structural differences between Mandarin Chinese and English, we could not translate all terms literally. For example, "transformative AI" was translated as "前沿人工智能" (frontier AI), a more commonly used phrase conveying a similar meaning. However, we structured the framing of each question so that the answer someone would give in either language would be the same, attempting to ensure language-independent responses despite disparities in phrasing. You can find the Chinese version of the report here: 调查:中国顶尖大学的学生如何看待人工智能风险? ("Survey: How Do Students at China's Top Universities View the Risks of AI?")

13 comments

Comments sorted by top scores.

comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2024-09-02T18:29:27.057Z · LW(p) · GW(p)

Thank you for doing this work. This is a valuable piece of evidence.

comment by ChristianKl · 2024-09-07T10:23:19.767Z · LW(p) · GW(p)

Was there any process where you had to get this report approved by any Chinese authority or were you completely free in conducting and publishing this survey?

Replies from: nick-corvino
comment by Nick Corvino (nick-corvino) · 2024-09-10T05:33:30.353Z · LW(p) · GW(p)

Nope, we did it all on our own and no one ever gave us trouble or seemed suspicious. (FWIW I'm a US citizen and my co-researcher is a Chinese citizen.) 

comment by Diziet · 2024-09-05T19:31:43.287Z · LW(p) · GW(p)

I wonder how much the survey paying a few RMB and asking for a WeChat ID influenced the results. Great work conducting and sharing this poll though!

While this survey’s responses are anonymous, respondents did submit their WeChat IDs so that they could be remunerated for participation.

Replies from: nick-corvino
comment by Nick Corvino (nick-corvino) · 2024-09-07T02:24:32.979Z · LW(p) · GW(p)

me too. 

Anecdotally, one thing I noticed (and was a little surprised by) was that every Chinese student we interacted with about the survey didn't seem to be worried about filling it out at all. Like, it was incredibly casual for them, and they were mostly just amused because conducting a personal survey in China is so uncommon. But that's just vibes; who knows. 

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2024-09-04T12:45:35.088Z · LW(p) · GW(p)

What makes you say this: "However, our results suggest that students are broadly less concerned about the risks of AI than people in the United States and Europe"? 

Replies from: nick-corvino
comment by Nick Corvino (nick-corvino) · 2024-09-05T04:59:00.275Z · LW(p) · GW(p)

From most of the survey data collected from Chinese citizens so far (e.g., this survey, The Center for Long-Term Artificial Intelligence, this Ipsos survey, and a few other polls), they're consistently reporting low concern on AI risks compared to Westerners. Sometimes this could be due to framing, but our survey explicitly translated many of these Western surveys' questions so we could directly compare results. To be fair, we don't have nearly as much data from China. 

Replies from: capybaralet
comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2024-09-14T22:59:52.009Z · LW(p) · GW(p)

OK, so it's not really just your results?  You are aggregating across these studies (and presumably ones of "Westerners" as well)?  I do wonder how directly comparable things are... Did you make an effort to translate a study or questions from studies, or are the questions just independently conceived and formulated? 

comment by momom2 (amaury-lorin) · 2024-09-05T23:20:14.537Z · LW(p) · GW(p)

This is very interesting!
I did not expect that Chinese would be more optimistic about benefits than worried about risks and that they would rank it so low as an existential risk. 
This is in contrast with posts I see on social media and articles showcasing safety institutes and discussing doomer opinions, which gave me the impression that Chinese academia was generally more concerned about AI risk and especially existential risk than the US.

I'm not sure how to reconcile this survey's results with my previous model. Was I just wrong and updating too much on anecdotal evidence?
How representative of policymakers and of influential scientists do you think these results are?

Replies from: nick-corvino
comment by Nick Corvino (nick-corvino) · 2024-09-07T02:19:20.955Z · LW(p) · GW(p)

I think the crux is that the thoughts of the CCP and Chinese citizens don't necessarily have to have a strong correlation - in many ways they can be orthogonal, and sometimes even negatively correlated (like when the gov trades off on personal freedoms for national security).  

I think recent trends suggest the Chinese gov / Xi Jinping are taking risks (especially the tail risks) more seriously, and have done some promising AI safety stuff. Still mostly unclear, tho. Highly recommend checking out Concordia AI's The State of AI Safety in China Spring 2024 Report

comment by Kabir Kumar (kabir-kumar) · 2024-09-05T18:50:24.234Z · LW(p) · GW(p)

This is very useful, thank you. 
Something that might be interesting to add at the end of surveys such as these:
"How much has this survey changed your mind on things?" - sometimes just being asked a question about something can change your mind on it, would be interesting to see if it happens and how much so.

comment by Review Bot · 2024-09-04T04:19:35.536Z · LW(p) · GW(p)

The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?

Replies from: kabir-kumar
comment by Kabir Kumar (kabir-kumar) · 2024-09-05T18:55:03.303Z · LW(p) · GW(p)

rather than this, there should just be a better karma system, imo. 
one way to improve it - have the voting buttons for comments be on the opposite side of the username

comment by jmh · 2024-09-06T01:40:05.275Z · LW(p) · GW(p)