[Translation] AI Generated Fake News is Taking Over my Family Group Chat

post by mushroomsoup · 2025-01-30T20:24:22.175Z · LW · GW · 0 comments

Contents

  AI generated fake news, the tears of victims, and web traffic
  Who is behind AI-generated fake news
  Are our emotions also fabricated?

Translation of AI假新闻,霸占我的家族群 by 张文曦 (Wenxi Zhang) originally posted on 新周刊

Translator's Note: This article appears on a list in the latest post of Jeffrey Ding's ChinAI Newsletter, among articles he found interesting but had not yet translated. Jeffrey summarized the article as follows.

How has the Chinese internet dealt with fake news stories, including those that play off the shock value of an earthquake in Tibet? This author worries about the ability of their elderly family members to discern fact from fiction.

AI generated fake news, the tears of victims, and web traffic

Once again AI-generated tear jerkers have garnered undue sympathy (lit. stolen tears).

At 9:05 am on January 7th, a 6.8-magnitude earthquake struck Dingri County, Shigatse City, Tibet. Due to the high magnitude of the earthquake and the altitude of the city, the disaster received continuous coverage online. Among the many articles about the disaster and disaster relief, one video caught everyone's attention: a child wearing a colorful hat and matching sweater, pinned under a collapsed building, face and body covered in dust. It's hard not to be moved by such a high-contrast, dramatic image.

"This child made me cry all day." "Is this child doing okay now? Does anyone know?"... The plight of the child in the video tugged on the heartstrings of countless netizens.

When many content creators reposted the video, they alluded to the Shigatse earthquake through phrases such as "Praying for all those in the earthquake-stricken region. Stay safe." This led viewers to think that the child was a victim of the Shigatse earthquake. Many people reposted, followed, and prayed for the affected child.

A clip of the viral video of the child wearing a colorful hat crushed under rubble. (screenshot from social media)

But as early as November 2024, this video had already appeared online, where its creator claimed that it was AI-generated. When interviewed, the creator said that the original intent of the video was to reflect the devastation wrought by war on the lives of civilians, not to spread rumors.

However, through the splicing of text and video, this short video became associated with the otherwise unrelated Shigatse earthquake. The posters used stock content to piggyback on the virality of the disaster, garnering sympathy and web traffic.

If the AI-generated picture of the child under the rubble, accompanied by information about the earthquake, was an insensitive collage, then pictures and videos generated by AI based on current events are another kind of "fabrication".

Starting on January 7, 2025, local time, many regions of Los Angeles, California erupted in flames one after another. While these fires continued to spread, a video circulated widely on social media showing the Hollywood sign engulfed by wildfires.

The AI-generated image circulating online of the Hollywood sign burning. In the picture "Hollywood" has one too many "O"s. (screenshot from social media)

In reality, the sign was undamaged according to on-the-ground reporting by AFP reporters based in Los Angeles. The global fire monitoring system FIRMS also did not show any fires in the region. All the videos circulating online which showed the burning Hollywood sign were AI-generated.

The phenomenon of AI technology polluting news content appeared a few years ago. "A train hit a worker repairing the tracks in Gansu, killing nine", "An explosion occurred in the Pentagon", "A 70-year-old man in the town of Huayang, Wuhua County, Guangdong Province is in a coma after a beating. His grandson committed suicide by jumping into a river. The attacker was sentenced to nine years and ten months at first trial"... These AI fake news articles, often difficult to identify with the naked eye, focus on topics of popular interest such as urban and rural development, class conflict, gender conflicts, and international affairs.

An AI-generated image made from the prompt "Trump falling while getting arrested". (Twitter)

Faced with the incendiary nature of these fake pictures and videos, people's first inclination is to react emotionally. When angry, shocked, or despairing, people often forget to ask the most basic of questions: Is it true?

In an era where virality reigns supreme, the proliferation of false information has become a global problem. "Post-truth" was selected as Oxford Dictionaries' 2016 Word of the Year. In the court of public opinion, facts are no longer the priority; vibes and pathos are now the main vectors of influence.

Emotions drive people to react reflexively to highly triggering content rather than analyze it carefully after calm deliberation. In the images above, the child trapped in the ruins has six fingers and the "Hollywood" sign has an extra "O", but these obvious inconsistencies have not attracted much attention.

Who is behind AI-generated fake news

From "seeing is believing, hearing is deceiving" to "seeing maybe deceiving". From "pictures present truths" to "pictures maybe deceptive". AI technology has repeatedly pushed the boundary of truth in reporting. So, who is producing all this AI-generated slop and why do they do it?

With the aid of AI technology, the cost and the technical know-how required to generate fake news are not as high as people imagine.

On July 4, 2023, Shaoxing police in Zhejiang arrested a group using ChatGPT to generate fake news. One member of the gang had no prior exposure to computers and only a junior high school diploma[1]. She used ChatGPT to learn how to automatically generate "scripts" and related video content; eventually, she was able to produce fake videos with music and subtitles in a single click. The quickest could be generated in a minute.

According to the police, the team compiled trending topics, generated videos through AI software, and then released them on video-sharing accounts. "Income is tied to engagement," one of them answered when the police asked whether publishing false content was profitable.

It only takes a few seconds for AI software to generate text, images, and videos related to a given prompt. (doubao AI software)

In another case, disclosed in April 2024, those involved used a writing software called "Emerald Ink" to sift social media for popular topics and automatically generate manuscripts by replacing words with synonyms and rearranging words and phrases. The number of articles generated this way could reach upwards of 190,000 every day. These patchwork articles were then posted to more than 6,000 accounts, where they reached end users, whose engagement translated into channel rewards from the platform.

In short, AI-generated fake news makes money by "reducing costs and increasing efficiency": lower production costs mean higher profit margins.

Verifying the authenticity of a piece of news requires careful fact-checking. This typically requires evaluating the source of information, reporting suspicious content, identifying claims which need to be fact-checked, and tracing the propagation path of a particular piece of fake news. Creating pseudo-realistic news or pictures, however, only requires the use of AI tools via a few commands and a few seconds of waiting time.

Audience demand drives the impetus to create AI-generated fake news. While the younger generations may verify the authenticity of a piece of information by cross referencing multiple sources, older members of a family may not be so vigilant. Those of the older generation, who grew up with traditional media, presuppose the veracity of the content shown therein. What's more, in their world view, videos cannot be faked and are instead a record of true events.

Thus, sensational fake news will always have an audience. Those interested in international politics will be riled up by the fabricated remarks of AI Trump; those who are health-conscious will enthusiastically order wafer protein bars promoted by AI Zhang Wenhong[2]; even those without hobbies will tag everyone when reposting disaster reports and human interest stories of dubious veracity on the family group chat.

As a result, while the truth is still putting on its shoes, AI-generated fake news has already taken the express train of clickbait out the door.

Are our emotions also fabricated?

In an era when AI can easily generate realistic content, people's emotional bandwidth becomes preoccupied by fake news. The depth of feeling of people today, compared with that of people in the past, is night and day.

Philosopher Benjamin[3] keenly captured "shock" as the unique mental modality of modern people. He believed that anonymized feelings of shock and stimulation can be disruptive and trigger critical engagement. Taking the film industry as an example, Benjamin criticized it for constantly inciting shock and surprise in its audience, making the search for excitement a public pastime.

In the era of artificial intelligence, the impact of the media has not petered out, but rather intensified. This is especially true for mass-produced AI fake news.

Trained on curated prompts, the AI can create content that adapts to internet trends and different modes of communication. It is intelligent enough to quickly grasp the mix of tragedy and breaking news which will capture people’s hearts. The audience’s reaction has already been predetermined by the AI-generated fake news.

For humanity, it is second nature to care about current events and the people and things happening far away. However, is our compassion not diluted when it gets triggered by AI-generated content? In the face of content which may or may not be true, we cannot be sure whether there really is a child in distress or someone waiting for help.

When we are deeply moved by something we see or hear, but are then told that it is all made up, will we gradually become numb and indifferent? Just like the villagers in "The Boy Who Cried Wolf", next time we may no longer feel any compassion for tragedies happening far away. This may be the most troubling consequence of AI-generated fake news.

  1. ^

    Junior high in China is from seventh to ninth grade inclusive. 

  2. ^

Zhang Wenhong is a high-ranking doctor in the CCP. He is the director of several prominent hospitals and health-related committees in the Chinese Medical Association.

  3. ^

    This is Walter Benjamin. I don't know much about philosophy so someone will have to let me know if this is consistent with his writings. 
