‘The Intercept’: Pentagon wants to “suppress dissenting arguments” using AI propaganda

11:42 12.09.2025

The U.S. is interested in acquiring machine-learning technology to carry out AI-generated propaganda campaigns overseas.

The United States hopes to use machine learning to create and distribute propaganda overseas in a bid to “influence foreign target audiences” and “suppress dissenting arguments,” according to a U.S. Special Operations Command document reviewed by ‘The Intercept’.

The document, a sort of special operations wishlist of near-future military technology, reveals new details about a broad variety of capabilities that SOCOM hopes to purchase within the next five to seven years, including state-of-the-art cameras, sensors, directed energy weapons, and other gadgets to help operators find and kill their quarry. Among the tech it wants to procure is machine-learning software that can be used for information warfare.

To bolster its “Advanced Technology Augmentations to Military Information Support Operations” — also known as MISO — SOCOM is looking for a contractor that can “Provide a capability leveraging agentic AI or multi‐LLM agent systems with specialized roles to increase the scale of influence operations.”

So-called “agentic” systems use machine-learning models that purportedly operate with minimal human instruction or oversight. These systems can be used in conjunction with large language models, or LLMs, like ChatGPT, which generate text based on user prompts. While much of the marketing hype around agentic systems and LLMs centers on their potential to execute mundane tasks like online shopping and booking tickets, SOCOM believes the techniques could be well suited to running an autonomous propaganda outfit.

“The information environment moves too fast for military remembers [sic] to adequately engage and influence an audience on the internet,” the document notes. “Having a program built to support our objectives can enable us to control narratives and influence audiences in real time.”

Laws and Pentagon policy generally prohibit military propaganda campaigns from targeting U.S. audiences, but the porous nature of the internet makes that difficult to ensure.

In a statement, SOCOM spokesperson Dan Lessard acknowledged that SOCOM is pursuing “cutting-edge, AI-enabled capabilities.”

“All AI-enabled capabilities are developed and employed under the Department of Defense’s Responsible AI framework, which ensures accountability and transparency by requiring human oversight and decision-making,” he told The Intercept. “USSOCOM’s internet-based MISO efforts are aligned with U.S. law and policy. These operations do not target the American public and are designed to support national security objectives in the face of increasingly complex global challenges.”

Tools like OpenAI’s ChatGPT or Google’s Gemini have surged in popularity despite their propensity for factual errors and other erratic outputs. But their ability to immediately churn out text on virtually any subject, written in virtually any tone — from casual trolling to pseudo-academic — could mark a major leap forward for internet propagandists. These tools give users the potential to fine-tune messaging for any number of audiences without the time or cost of human labor.

SOCOM says it specifically wants “automated systems to scrape the information environment, analyze the situation and respond with messages that are in line with MISO objectives. This technology should be able to respond to post(s), suppress dissenting arguments, and produce source material that can be referenced to support friendly arguments and messages.”

The Pentagon is paying especially close attention to those who might call out its propaganda efforts.

“This program should also be able to access profiles, networks, and systems of individuals or groups that are attempting to counter or discredit our messages,” the document notes. “The capability should utilize information gained to create a more targeted message to influence that specific individual or group.”

SOCOM anticipates using generative systems to both craft propaganda messaging and simulate how this propaganda will be received once sent into the wild, the document notes.

The SOCOM wishlist continues to include a need for offensive deepfake capabilities, first reported by The Intercept in 2023.

The prospect of LLMs creating an infinite firehose of expertly crafted propaganda has been met with alarm — but generally in the context of the United States as target, not perpetrator.

SOCOM has in recent years been public about its desire for AI-created propaganda systems. These statements suggest a broader interest that includes influence operations against entire populations, as opposed to operations narrowly tailored toward military personnel.

Automated online influence campaigns might wind up having lackluster results, according to Emerson Brooking, a senior fellow at the Atlantic Council’s Digital Forensic Research Lab.

Brooking, who previously worked as an adviser on cybersecurity matters to the Office of the Under Secretary of Defense for Policy, also pointed to the mixed track record of prior U.S. online propaganda efforts. In 2022, researchers revealed a network of Twitter and Facebook accounts secretly operated by U.S. Central Command that had been pushing bogus news articles containing anti-Russian and anti-Iranian talking points. The network, which failed to gain traction on either social network, quickly became an embarrassment for the Pentagon.
