AI has enhanced China’s US disinformation campaigns while Washington has scaled back its ability to counter them
Generative artificial intelligence (AI) platforms have detected increasing use of their tools to improve the scale and effectiveness of disinformation campaigns targeting the United States. Bot networks flood online discourse with posts on both sides of domestic US policy issues while promoting Beijing’s narratives around foreign policy issues including Xinjiang, Hong Kong, Ukraine and Gaza. Even as AI enhances information operations’ credibility and reach, Washington has dialled back anti-disinformation capabilities.
What’s next
Advancements in generative AI will continue to expand and signal-boost Beijing’s information operations on Western social media platforms by facilitating the rapid creation of large numbers of fraudulent profiles and synthetic content. Such operations will aim to amplify political and cultural polarisation within US society and online discourse to undermine civic cohesion and democratic confidence as part of China’s broader grey-zone warfare operations against its strategic adversaries.
Subsidiary Impacts
- China-backed threat actors will be emboldened by certain US social media firms’ easing of anti-disinformation measures on their platforms.
- Relevant non-government stakeholders may need to assume a higher anti-disinformation burden following US reversals of some previous efforts.
- Although the impact of China’s information operations has been limited to date, their longer-term effects could be more extensive.
Analysis
A report released by OpenAI in June revealed how China-backed actors have used the company’s generative AI tools to enhance information operations that target the United States and aim to sow and deepen divisions within US society.
Since at least 2017, similar operations attributed to China have spread and promoted disinformation on Western social media platforms including X, Facebook and TikTok, to exploit political polarisation in the United States and other democratic adversaries while promoting narratives favourable to China (see CHINA: Info operations against Taiwan will intensify – July 16, 2025).
The OpenAI report exposed activities the company characterises as a “covert influence operation”, likely conducted by China-linked malicious actors. It alleges that those actors used OpenAI’s models to generate short social media posts in both English and Chinese for publication on TikTok, X, Facebook and Reddit. Once published on these platforms by what the report called “main” fake accounts, the posts would then be replied to by other fraudulent accounts to make them appear organic.
Core elements
China-linked threat actors have allegedly used generative AI to create social media posts
The posts targeted issues including a Taiwanese video game with an anti-Communist Party theme and a Pakistani activist who had openly criticised Chinese investment.
Some of the posts notably supported arguments both for and against the Trump administration’s shutdown of the US Agency for International Development, the federal agency responsible for administering foreign aid (see US: Dismantling USAID is part of “America First” plan – February 13, 2025).
OpenAI’s recent discovery of China-linked actors abusing its technologies to enhance influence campaigns illustrates how such campaigns typically operate today.
Bot activity
China-connected disinformation campaigns are mostly run through bots. Usually created in bulk, these accounts typically first attempt to mimic authentic social media users’ behaviour by, for example, posting landscape photos and inspirational quotes.
In a relatively new development, some bots post heavily about cryptocurrency news and updates to attract traders and enthusiasts.
Chinese disinformation operations on Western social media were first discovered around 2017. Operations were then mainly tasked with propagating Beijing’s approved narratives on domestic Chinese issues, including alleged human rights violations in the Xinjiang region and pro-democracy protests in Hong Kong. Chinese dissidents residing overseas have often become targets of these campaigns, receiving harassing and insulting comments.
The bots operate in groups, and within each group there is usually a main poster whose content, once published, is quickly replied to or reposted by other bots to increase engagement figures.
Operations evolve
Chinese information operations have gradually begun to penetrate online discourse over global issues affecting the United States and other Western countries. For example, bot posts tend to criticise the US role in the Ukraine war and Gaza conflict, often characterising Washington as a major aggressor in both.
Chinese information operations have also been detected on Western social media during elections in the United States (see INT: AI will be a growing threat to elections globally – May 8, 2025). Unlike Russian operations, which often amplify narratives and disinformation favouring one side, Chinese campaigns look to sow and deepen divisions by supporting and criticising candidates from both parties.
Chinese disinformation operations use dual amplification favouring both sides to sow distrust
Also in contrast to Russian disinformation, which is sometimes amplified by influential figures in US politics and online discourse, Chinese disinformation rarely appears to make a major impact — its content is not widely shared, and even when post engagement looks high, researchers attribute that to sharing by other Chinese disinformation bots instead of organic circulation.
US national security challenges
Chinese information operations on Western social media have not yet been able to generate tangible threats within the United States. However, despite efforts from social media companies and other stakeholders, their low-cost nature practically guarantees that they will never be completely eradicated, making them a low-likelihood but potentially high-impact risk — all it takes is one piece of disinformation going viral to cause considerable damage.
Bots and fraudulent posts are merely one facet of China’s broader information manipulation apparatus. Chinese diplomats and officials routinely post and repost false information on Western social media, sometimes directly sharing content from disinformation bots.
Chinese state media, a centrepiece in Beijing’s information warfare toolbox, are easily accessible in the United States. After Russia launched its invasion of Ukraine in 2022, Chinese state media outlets regularly amplified allegations made by Moscow against Washington and often rebroadcast content from Russian state media.
Chinese social media remain the go-to source of information and news for many Chinese speakers and immigrants residing in the United States. These platforms are strictly regulated and censored by the Chinese government, and so have little incentive to detect and remove disinformation targeting the United States.
Meanwhile, the United States no longer has a centralised means of tackling disinformation following Washington’s failure to renew funding for the State Department’s Global Engagement Center in December 2024 and the subsequent closure of its successor office, the Counter Foreign Information Manipulation and Interference hub, in April 2025.
This degrades US counter-disinformation capabilities and places the onus on private-sector stakeholders and dispersed government entities to identify and mitigate disinformation-related risks originating from China and other adversarial threat actors. The Foreign Malign Influence Center continues to play a more limited role in countering disinformation, but even that will eventually be scaled down and absorbed into other parts of the US intelligence community following recent instructions by Director of National Intelligence Tulsi Gabbard.