China's social-media attacks are part of a larger 'cognitive warfare' campaign

The phrase “cognitive warfare” doesn’t often appear in news stories, but it’s the crucial concept behind China’s latest efforts to use social media to target its foes.

Recent stories have ranged from Meta’s “Biggest Single Takedown” of thousands of false-front accounts on Facebook, Instagram, TikTok, X, and Substack to an effort to spread disinformation about the Hawaii fires to a campaign that used AI-generated images to amplify divisive U.S. political topics. Researchers and officials expect similar efforts to target the 2024 U.S. election, as well as any conflict over Taiwan.

Chinese government and military writings say cognitive operations aim to “capture the mind” of one’s foes, shaping an adversary’s thoughts and perceptions and consequently their decisions and actions. Unlike U.S. defense documents and strategic thinkers, the People’s Liberation Army puts cognitive warfare on par with the other domains of warfare like air, sea, and space, and believes it key to victory—particularly victory without war.

Social media platforms are viewed as the main battlefield of this fight. China, through extensive research and development of its own platforms, understands the power of social media to shape narratives and cognition around events and actions. When a typical user spends 2.5 hours a day on social media—36 full days out of the year, 5.5 years in an average lifespan—it is perhaps no surprise that the Chinese Communist Party believes it can, over time, shape and even control the cognition of individuals and whole societies.

A recent PLA Daily article lays out four social-media tactics, dubbed “confrontational actions”: Information Disturbance, Discourse Competition, Public Opinion Blackout, and Block Information. The goal is to achieve an “invisible manipulation” and "invisible embedding" of information production “to shape the target audience's macro framework for recognizing, defining, and understanding events,” write Duan Wenling and Liu Jiali, professors of the Military Propaganda Teaching and Research Department of the School of Political Science at China’s National Defense University.

Information Disturbance. The authors describe it as “publishing specific information on social media to influence the target audience's understanding of the real combat situation, and then shape their positions and change their actions.” Information Disturbance uses official social media accounts (such as CGTN, Global Times, and Xinhua News) to push and shape a narrative in specific ways.

While these official channels have recently taken on a more strident “Wolf Warrior” tone, Information Disturbance is not just about appearing strong, the analysts advise. Indeed, they cite how during 2014’s “Twitter War” between the Israel Defense Forces and the Palestinian Qassam Brigades, the Palestinians managed to “win international support by portraying an image of being weak and the victim.” The tactic, which predates social media, is reminiscent of Deng Xiaoping’s Tao Guang Yang Hui, literally translated as “Hide brightness, nourish obscurity.” Under this official CCP messaging, China crafted a specific narrative for the United States (and the West more broadly): that China was a humble nation focused on economic development and friendly relationships with other countries. This narrative was powerful for decades; it shaped U.S. and other nations’ policy toward China.

Discourse Competition. The second type is a much more subtle and gradual approach to shaping cognition. The authors describe a “trolling strategy”: “spreading narratives through social media and online comments, gradually affecting public perception, and then helping achieve war or political goals.”

Here, the idea is to “fuel the flames” of existing biases and manipulate emotional psychology to influence and deepen a desired narrative. The authors cite the incredible influence that “invisible manipulation” and “invisible embedding” can have on social media platforms such as Facebook and Twitter during international events, and recommend that recommendation algorithms be used to push more and more information to target audiences with the desired biases. Over time, the emotion and bias will grow, and the targeted users will reject information that does not align with their perspective.

Public Opinion Blackout. This tactic aims to flood social media with a specific narrative to influence the direction of public opinion. The main tools to “blackout” public opinion are bots that drive the narrative viral, stamping out alternative views and news. Of note, given the growing use of AI in Chinese influence operations, the authors reference studies showing that a common and effective method of exerting cognitive influence is to use machine learning to mine user emotions and prejudices, screen and target the most susceptible audiences, and then quickly and intensively “shoot” customized “spiritual ammunition” at the target group.

This aligns with another PLA article, “How ChatGPT will Affect the Future of Warfare,” in which the authors write that generative AI can “efficiently generate massive amounts of fake news, fake pictures, and even fake videos to confuse the public” at a societal scale. The idea is to create a “flooding of lies,” while dissemination by Internet trolls creates “altered facts,” confusing the target audience’s cognition of what is true and playing on emotions of fear, anxiety, and suspicion. The end-state for the targeted society is an atmosphere of insecurity, uncertainty, and mistrust.

Block Information. The fourth type focuses on “carrying out technical attacks, blockades, and even physical destruction of the enemy's information communication channels.” The goal is to monopolize and control information flow by preventing an adversary from disseminating information. In this tactic, and in none of the others, the Chinese analysts believe the United States holds a huge advantage. They cite, for example, that in 2009 the U.S. government authorized Microsoft to cut off the Internet instant-messaging ports of Syria, Iran, Cuba, and other countries, paralyzing their networks and trying to “erase” them from the world Internet. The authors also note that in 2022 Facebook announced restrictions on some media in Russia, Iran, and other countries, but claim, falsely, that the company did so to delete posts negative toward the United States, so that the U.S. could gain an advantage in “cognitive confrontation.”

However, this disparity in power over the network is changing. With the rise in popularity of TikTok, it is conceivable that China can shape narratives and block negative information. For example, in 2019 TikTok reportedly suspended the account of a 17-year-old user in New Jersey after she posted a viral video criticizing the Chinese government’s treatment of the Uyghur ethnic minority. China has also demonstrated its influence over the Silicon Valley owners of popular social media platforms. Examples range from Mark Zuckerberg literally asking Xi what he should name his daughter to Elon Musk’s financial dependence on Communist China’s willingness to manufacture and sell Tesla cars. Indeed, NewsGuard has found that since Musk purchased Twitter, engagement with Chinese, Russian, and Iranian disinformation sources has soared by roughly 70 percent.

China has also begun to seek greater influence over the next versions of the Internet, where its analysts describe incredible potential to better control how the CCP’s story is told. While the U.S. lacks an overall strategy or policy for the metaverse (which uses augmented and virtual reality technologies), the Chinese Ministry of Industry and Information Technology released in 2022 a five-year action plan to lead in this space. The plan includes investing in 100 “core” companies and forming “10 public service platforms” by 2026.

China did not invent the internet, but it seeks to be at the forefront of its future as a means of not just communication and commerce but conflict. Its own analysts openly discuss the potential power of this space to achieve regime goals not previously possible. The question is not whether it will wage cognitive warfare, but whether its targets’ minds and networks are ready.

Opinions, conclusions, and recommendations expressed or implied within are solely those of the author(s) and do not necessarily represent the views of the Air University, the Department of the Air Force, the Department of Defense, or any other U.S. government agency. 

Peter Singer: Strategist at New America, a Professor of Practice at Arizona State University, and Founder & Managing Partner at Useful Fiction LLC, a company specializing in strategic narrative
