Pentagon Document: U.S. Wants to “Suppress Dissenting Arguments” Using AI Propaganda

The United States hopes to use machine learning to create and distribute propaganda overseas in a bid to “influence foreign target audiences” and “suppress dissenting arguments,” according to a U.S. Special Operations Command document reviewed by The Intercept.

The document, a sort of special operations wishlist of near-future military technology, reveals new details about a broad variety of capabilities that SOCOM hopes to purchase within the next five to seven years, including state-of-the-art cameras, sensors, directed energy weapons, and other gadgets to help operators find and kill their quarry. Among the tech it wants to procure is machine-learning software that can be used for information warfare.

To bolster its “Advanced Technology Augmentations to Military Information Support Operations” — also known as MISO — SOCOM is looking for a contractor that can “Provide a capability leveraging agentic AI or multi-LLM agent systems with specialized roles to increase the scale of influence operations.”

So-called “agentic” systems use machine-learning models that purportedly operate with minimal human instruction or oversight. These systems can be used in conjunction with large language models, or LLMs, like ChatGPT, which generate text based on user prompts. While much marketing hype orbits around agentic systems and LLMs for their potential to execute mundane tasks like online shopping and booking tickets, SOCOM believes the techniques could be well suited for running an autonomous propaganda outfit.

“The information environment moves too fast for military remembers [sic] to adequately engage and influence an audience on the internet,” the document notes. “Having a program built to support our objectives can enable us to control narratives and influence audiences in real time.”

Laws and Pentagon policy generally prohibit military propaganda campaigns from targeting U.S. audiences, but the porous nature of the internet makes that difficult to ensure.

In a statement, SOCOM spokesperson Dan Lessard acknowledged that SOCOM is pursuing “cutting-edge, AI-enabled capabilities.”

“All AI-enabled capabilities are developed and employed under the Department of Defense’s Responsible AI framework, which ensures accountability and transparency by requiring human oversight and decision-making,” he told The Intercept. “USSOCOM’s internet-based MISO efforts are aligned with U.S. law and policy. These operations do not target the American public and are designed to support national security objectives in the face of increasingly complex global challenges.”

Tools like OpenAI’s ChatGPT or Google’s Gemini have surged in popularity despite their propensity for factual errors and other erratic outputs. But their ability to immediately churn out text on virtually any subject, written in virtually any tone — from casual trolling to pseudo-academic — could mark a major leap forward for internet propagandists. These tools give users the potential to fine-tune messaging for any number of audiences without the time or cost of human labor.

Whether AI-generated propaganda works remains an open question, but the practice has already been amply documented in the wild. In May 2024, OpenAI issued a report revealing efforts by Iranian, Chinese, and Russian actors to use the company’s tools to engage in covert influence campaigns, but found none had been particularly successful. In comments before the 2023 Senate AI Insight Forum, Jessica Brandt of the Brookings Institution warned “LLMs could increase the personalization, and therefore the persuasiveness, of information campaigns.” In an online ecosystem filled with AI information warfare campaigns, “skepticism about the existence of objective truth is likely to increase,” she cautioned. A 2024 study published in the academic journal PNAS Nexus found that “language models can generate text that is nearly as persuasive for US audiences as content we sourced from real-world foreign covert propaganda campaigns.”

Unsurprisingly, the national security establishment is now insisting that the threat posed by this technology in the hands of foreign powers, namely Russia and China, is most dire.

“The Era of A.I. Propaganda Has Arrived, and America Must Act,” warned a recent New York Times opinion essay on GoLaxy, software created by the Chinese firm Beijing Thinker that was originally used to play the board game Go. Co-authors Brett Benson, a political science professor at Vanderbilt University, and Brett Goldstein, a former Department of Defense official, paint a grim picture showing GoLaxy as an emerging leader in state-aligned influence campaigns.

GoLaxy, they caution, is able to scan public social media content and produce bespoke propaganda campaigns. “The company privately claims that it can use a new technology to reshape and influence public opinion on behalf of the Chinese government,” according to a companion piece by Times national security reporter Julian Barnes headlined “China Turns to A.I. in Information Warfare.” The news item strikes a similarly stark tone: “GoLaxy can quickly craft responses that reinforce the Chinese government’s views and counter opposing arguments. Once put into use, such posts could drown out organic debate with propaganda.” According to these materials, the Times says, GoLaxy has “undertaken influence campaigns in Hong Kong and Taiwan, and collected data on members of Congress and other influential Americans.”

To respond to this foreign threat, Benson and Goldstein argue a “coordinated response” across government, academia, and the private sector is necessary. They describe this response as defensive in nature: mapping and countering foreign AI propaganda.

That’s not what the document from the Special Operations Forces Acquisition, Technology, and Logistics Center suggests the Pentagon is seeking.

The material shows SOCOM believes it needs technology that closely matches the reported Chinese capabilities, with bots scouring and ingesting large volumes of internet chatter to better persuade a targeted population, or an individual, on any given subject.

SOCOM says it specifically wants “automated systems to scrape the information environment, analyze the situation and respond with messages that are in line with MISO objectives. This technology should be able to respond to post(s), suppress dissenting arguments, and produce source material that can be referenced to support friendly arguments and messages.”

The Pentagon is paying especially close attention to those who might call out its propaganda efforts.

“This program should also be able to access profiles, networks, and systems of individuals or groups that are attempting to counter or discredit our messages,” the document notes. “The capability should utilize information gained to create a more targeted message to influence that specific individual or group.”

SOCOM anticipates using generative systems to both craft propaganda messaging and simulate how this propaganda will be received once sent into the wild, the document notes. SOCOM hopes it will use “agentic systems that replicate specific knowledge, skills, abilities, personality traits, and sociocultural attributes required for different roles of individuals comprising a team,” before moving on to “brainstorm and test operational campaigns against agent‐based replicas of individuals and groups.” These simulations are more elaborate than focus groups, calling instead for “comprehensive models of entire societies to enable MISO planners to use these models to experiment or test various multiple scenarios.”

The SOCOM wishlist continues to include a need for offensive deepfake capabilities, first reported by The Intercept in 2023.

The prospect of LLMs creating an infinite firehose of expertly crafted propaganda has been met with alarm — but generally in the context of the United States as target, not perpetrator.

A 2023 publication by the State Department-funded nonprofit Freedom House warned of “The Repressive Power of Artificial Intelligence,” predicting “AI-assisted disinformation campaigns will skyrocket as malicious actors develop additional ways to bypass safeguards and exploit open-source models.” Warning that “Generative AI draws authoritarian attention,” the Freedom House report cites potential use by China and Russia, but only mentions domestic use of the technology in a brief section about the presidential campaigns of Ron DeSantis and Donald Trump, as well as a deepfake video of Joe Biden manipulated to depict the former president making transphobic comments.

The extent to which an automated propaganda machine capable of global reach warrants public concern depends on the scope of its application, according to Andrew Lohn, former director for emerging technology on the National Security Council.

“I would not be so concerned if some foreign soldiers are wrongly convinced that our special operation is going to happen Wednesday morning by helicopter from the east rather than Tuesday night by boat from the west,” said Lohn, now a senior fellow at Georgetown’s Center for Security and Emerging Technology.

The military has a history of manipulating civilian populations for political or ideological purposes. A troubling example was uncovered in 2024, when Reuters reported the Defense Department had operated a clandestine anti-vax social media campaign to undercut public confidence in the Chinese Covid vaccine, fearing that its success might draw Asian countries closer to a major geopolitical rival. Pentagon-created tweets portrayed the Chinese Sinovac-CoronaVac shot, which the World Health Organization had described as “safe and effective,” as “fake” and untrustworthy. According to the Reuters report, then-Special Operations Command Pacific General Jonathan Braga “pressed his bosses in Washington to fight back in the so-called information space” by backing the clandestine propaganda campaign.

William Marcellino, a behavioral scientist at the RAND Corporation focusing on the geopolitics of machine-learning systems and Pentagon procurement, told The Intercept such systems are being built out of necessity. “Regimes like those from China and Russia are engaged in AI-enabled, at-scale malign influence efforts,” he said. State-affiliated groups in China, he warned, “have explicitly designed AI at-scale systems for public opinion warfare.”

“Countering those campaigns likely requires AI at-scale responses,” he said.

SOCOM has in recent years been public about its desire for AI-created propaganda systems. These statements suggest a broader interest that includes influence operations aimed at entire populations, not just operations narrowly tailored to military personnel.

In 2019, a senior Pentagon special operations official spoke at a defense symposium of the country’s “need to move beyond our 20th century approach to messaging and start looking at influence as an integral aspect of modern irregular warfare.” The official noted that this “will also require new partnerships beyond traditional actors, throughout the world, through efforts to amplify voices of [non-governmental organizations] and individual citizens who bring transparency to malign activities of our competitors.” The following year, then-SOCOM commander Gen. Richard Clarke described his interest in using AI to achieve these ends.

“As we look at the ability to influence and shape in this [information] environment, we’re going to have to have artificial intelligence and machine learning tools,” Clarke said in 2020 remarks first reported by National Defense Magazine, “specifically for information ops that hit a very broad portfolio, because we’re going to have to understand how the adversary is thinking, how the population is thinking, and work in these spaces.”

Heidy Khlaaf, chief scientist at the AI Now Institute and former safety engineer at OpenAI, warned against a fighting-fire-with-fire approach: “Framing the use of generative and agentic AI as merely a mitigation to adversaries’ use is a misrepresentation of this technology, as offensive and defensive uses are really two sides of the same coin and would allow them to use it precisely in the same way that adversaries do.”

Automated online influence campaigns might wind up having lackluster results, according to Emerson Brooking, a senior fellow at the Atlantic Council’s Digital Forensic Research Lab. “Russia has been using AI programs to automate its influence operations. The program is not very good,” he said.

The tendency of LLMs to fabricate falsehoods and perpetuate preconceptions when prompted by users could also prove a major liability, Brooking warned. “Tasked with figuring out the ‘hearts and minds’ of a complex and understudied country, they may lean heavily on an AI to help them, which will be likely to tell them what they already want to hear,” he said.

Khlaaf added that “agentic” systems, heavily marketed by tech firms as independent digital brains, are still error-prone and unpredictable. “The introduction of agentic AI in these disinformation campaigns adds a layer of both safety and security concerns, as several research results have demonstrated how easily we can compromise and divert the behavior of agentic AI,” she told The Intercept. “With these security issues unresolved, [SOCOM] risks that their campaigns are not only compromised, but that they produce material that was not intended.”

Brooking, who previously worked as an adviser to the Office of the Under Secretary of Defense for Policy on cybersecurity matters, also pointed to the mixed track record of prior U.S. online propaganda efforts. In 2022, researchers revealed a network of Twitter and Facebook accounts secretly operated by U.S. Central Command that had been pushing bogus news articles containing anti-Russian and anti-Iranian talking points. The network, which failed to gain traction on either social network, quickly became an embarrassment for the Pentagon.

“We know from other public reporting that the U.S. has long sought to ‘suppress dissenting arguments’ and generate positive press in certain areas of operation,” he said. “We also know that these efforts have not worked very well and can be deeply embarrassing or counterproductive when revealed to the American public. AI tends to make these campaigns stupider, not more effective.”