DOD ‘Social Engineering’ Program Developed Bots Capable Of Psychological Warfare
The Department of Defense funded a “large scale social deception” program, according to public spending disclosures. The Federalist has uncovered documents showing how the federal government used “social engineering” programs to develop networks of fraudulent social media accounts capable of violating Americans’ rights to speech and privacy online — and, potentially, of waging psychological warfare.
The DOD awarded more than $9.1 million to Thomson Reuters Special Services (TRSS) for “ACTIVE SOCIAL ENGINEERING DEFENSE… LARGE SCALE SOCIAL DECEPTION” starting in 2018, according to government funding disclosures. Of the total amount promised, the federal government reportedly paid — or “outlayed” — more than $268,000 for the project.
TRSS is a subsidiary of Thomson Reuters, which owns the leftist media outlet Reuters. According to the company’s website, it offers “scalable solutions to governments and global institutions,” and its leadership “leverages real-world experience in the US Intelligence Community, Department of Defense, law enforcement and the private sector.” The Air Force awarded the contract, and the DOD’s Defense Advanced Research Projects Agency (DARPA) — the military’s shady research branch — funded the work in “large scale social deception.”
A DARPA funding document shows the Air Force’s 711th Human Performance Wing, Human Effectiveness Division, enabled this project in 2018 — initially promising just $1 million before increasing the obligated amount to more than $9.1 million. The division, according to its website, is “composed of a diverse group of scientists and engineers studying developing technologies specific to the human element of warfighting.”
Many other contracts were part of the same project, dedicated to building a network of phony online accounts, which the government would supposedly use to defend against “social engineering” attacks. While this may seem harmless, the mechanics of the system leave it open to potential use against Americans.
Funding ‘Social Engineering Defense’
The $9.1 million contract was for “Active Social Engineering Defense” — a DARPA program that is “now complete,” according to its webpage. It aimed to “develop the core technology to enable the capability to automatically identify, disrupt, and investigate social engineering attacks.” It professed to target mainly scam and phishing attempts.
The contract’s “program manager” was DARPA scientist Joshua Elliott, the documents show. He was a DARPA program manager from 2017 to 2023, according to his LinkedIn. Before that, he studied “socio-technical change” and spent 10 years in academia working on subjects such as “computational climate economics.” At DARPA, he was allowed to “program” $600 million in federal research and development funding, according to the Federation of American Scientists. Afterward, Elliott worked for the radical group Quadrature Climate Foundation, and more recently, Renaissance Philanthropy — started by a former staffer for Presidents Bill Clinton and Barack Obama.
Also in 2018, as part of the “Active Social Engineering Defense” program, the government promised funding for other contracts — also under Elliott — including $2.5 million for HRL Laboratories, $8.5 million for SRI International, $7.1 million for the University of Southern California, $4.2 million for Raytheon BBN, $507.9 million for the MITRE Corporation, and $2.4 million for the Canadian Commercial Corporation (Canada’s government contracting agency).
In 2019, according to the DARPA document, the government promised “active social engineering defense” funding to additional groups, including nearly $1 million for Carnegie Mellon University, $9.5 million for Northrop Grumman, more than $774,000 for Purdue University, nearly $1.9 million for the State University of New York Research Foundation, and $1.3 million for the infamous University of California, Berkeley.
Elliott was the program manager in fiscal year 2018, but government records show DARPA Program Manager Walter Weiss took over in fiscal year 2019. A 2021 Georgetown study said the “active social engineering defense” program would “build systems that can detect social engineering attacks and respond autonomously to trick adversaries into revealing identifying information.”
Many of these groups played integral roles in the “active social engineering defense” project, which enabled a massive network of fake, government-controlled online accounts to ensnare scammers, sometimes collecting information for further investigation. But there is apparently nothing to stop these networks from violating Americans’ rights to privacy and speech.
Networks Of Phony Online Accounts
HRL created “defense systems against social engineering attackers” as part of DARPA’s “Active Social Engineering Defense” program. On a webpage from 2018, the group boasted that its system (“CHESS”) would “exploit attackers’ methods by drawing them in with automated responses” to capture their data, operating “across various media, including email, social media, and text messages.”
“CHESS seeks to activate virtual bots that act on behalf of victims and control communications with the attacker across all media,” the webpage reads. Its system would “gather as much personal information on an attacker as possible, including identifying individual bad actors and any agencies that might be behind them.”
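To make the mechanics concrete: based only on HRL’s public description, the core of such an engagement bot could look something like the sketch below. Every pattern, canned reply, and name in it is an illustrative assumption, not HRL’s actual code.

```python
import re

# Hypothetical sketch of a CHESS-style engagement bot, based only on HRL's
# public description. All patterns, replies, and names are assumptions.

# Identifiers an automated responder might try to capture from an attacker.
IDENTIFIER_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "url": re.compile(r"https?://\S+"),
}

# Low-effort responses whose only job is to keep the attacker talking.
CANNED_REPLIES = [
    "That sounds interesting. How would I get started?",
    "I'm not sure I follow. Can you send me more details?",
    "What's the best way to reach you directly?",
]

def engage(attacker_message: str, turn: int) -> tuple[str, dict]:
    """Return an automated reply plus any identifiers extracted so far."""
    extracted = {}
    for label, pattern in IDENTIFIER_PATTERNS.items():
        if hits := pattern.findall(attacker_message):
            extracted[label] = hits
    reply = CANNED_REPLIES[turn % len(CANNED_REPLIES)]
    return reply, extracted

reply, info = engage("Wire the fee today. Contact acct-admin@example.com or +1 555 010 9999", 0)
print(reply)  # keeps the conversation alive
print(info)   # {'email': ['acct-admin@example.com'], 'phone': ['+1 555 010 9999']}
```

Even at this toy scale, the dual function is visible: the same loop that stalls a scammer also harvests identifying details from whoever it is pointed at.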
While this may appear benign, it is a shocking admission — the government sponsored a massive network of fake social media accounts that it could manipulate, capable of capturing users’ data.
“This DARPA contract involved the development and deployment of technology to create and manage fake social media accounts at scale,” The Federalist’s CEO Sean Davis posted on X. “This is far more insidious than a simple government payment to a news agency.”
As part of the program, SRI developed a similar system, but with a more ominous moniker — “Project NEMESIS.” According to the Defense Technical Information Center, the system was “capable of integrating multiple dialog generation strategies for … integration into a live defensive service.” The group was “directed by DARPA” to develop NEMESIS, which proved “comparable to what can be achieved by human testers, but scalable to much larger populations.”
NEMESIS — to which data firm Jataware contributed — “integrates all elements of our detection, dialog engagement, and attribution services,” according to an Air Force Research Laboratory document. “Our team has demonstrated the creation and management of a multi-virtual-persona social media interaction system, which provides a strong foundation for our understanding of how to construct the key elements of Nemesis’ virtual persona management,” one document reads. “Our dialog management framework was integrated with a set of services that managed synthetic social media accounts.”
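The Air Force document describes that architecture only in prose. Structurally, “multiple dialog generation strategies” feeding a set of managed synthetic personas might look roughly like this sketch; every class, strategy, and account handle here is hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

# Structural sketch of a NEMESIS-style multi-persona dialog manager, inferred
# from the document's prose description. Everything here is hypothetical.

@dataclass
class Persona:
    handle: str                          # synthetic account name
    style: str                           # speaking style applied to replies
    history: list[str] = field(default_factory=list)

def echo_strategy(message: str) -> str:
    return f"Interesting point about '{message[:30]}'. Tell me more?"

def probe_strategy(message: str) -> str:
    return "Where did you first hear about this?"

# "Multiple dialog generation strategies" as a registry the manager can
# switch between per conversation.
STRATEGIES: dict[str, Callable[[str], str]] = {
    "echo": echo_strategy,
    "probe": probe_strategy,
}

class DialogManager:
    """Routes incoming messages to a persona and a chosen strategy."""
    def __init__(self, personas: list[Persona]):
        self.personas = {p.handle: p for p in personas}

    def respond(self, handle: str, incoming: str, strategy: str) -> str:
        persona = self.personas[handle]
        reply = f"[{persona.style}] " + STRATEGIES[strategy](incoming)
        persona.history.append(reply)    # per-account conversation state
        return reply

mgr = DialogManager([Persona("gardener_52", "folksy"), Persona("j_data", "terse")])
print(mgr.respond("gardener_52", "Nobody is covering this story", "probe"))
```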
Again, while baiting scammers or foreign adversaries to capture their information sounds appealing, the system apparently enables the government to use its phony online network to target Americans. NEMESIS engages “adversaries” who do things like share so-called “disinformation” — or speech not approved by the regime.
The University of Southern California created another system as part of this program — called PIRANHA, according to an Air Force Research Laboratory document. “The PIRANHA team focused on methods to augment neural dialogue approaches worked on by other teams on the A[ctive] S[ocial] E[ngineering] D[efense] program.”
PIRANHA “gathers information and performs external vetting” to identify targets deemed threatening, using “clues from the content of a message and… information obtained from search engines and social media.” It explores any URLs linked in a message, “examine[s] the style of a message,” and “promotes agenda pushing in automated responses” to help with “gathering more information to feed back into external vetting.”
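The lab document describes that vetting pipeline only at a high level. A purely illustrative reading of it, with invented scoring weights and a stubbed-out lookup standing in for the real search-engine and social media queries, might look like this:

```python
import re
from urllib.parse import urlparse

# Hypothetical sketch of a PIRANHA-style "external vetting" pass. The weights,
# flags, and blocklist are invented; a real system would query live services.

URL_RE = re.compile(r"https?://\S+")

def external_lookup(domain: str) -> bool:
    """Stand-in for the external search-engine and social media checks the
    document describes."""
    known_bad = {"login-verify-account.example"}  # invented blocklist
    return domain in known_bad

def vet_message(text: str) -> dict:
    urls = URL_RE.findall(text)
    # "Clues from the content of a message": crude style features.
    style_flags = {
        "urgent_language": any(w in text.lower() for w in ("urgent", "immediately")),
        "credential_request": "password" in text.lower(),
    }
    # "Explores any URLs linked in a message."
    suspicious_domains = [u for u in urls if external_lookup(urlparse(u).netloc)]
    score = sum(style_flags.values()) + 2 * len(suspicious_domains)
    return {"urls": urls, "style": style_flags, "threat_score": score}

print(vet_message("URGENT: confirm your password at https://login-verify-account.example/reset"))
```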
Raytheon developed yet another similar system, called SIENNA — which explicitly deployed a network of phony online accounts to gather users’ information.
SIENNA used “construction and deployment of a bot framework driven by conversational technology,” which Raytheon members originally developed for videogames, according to an online project description. When the system sees an interaction deemed hostile, it “will deploy a set of bots to engage and investigate the attackers.”
“Each bot has a role, goals, and speaking style (its persona)… to exploit what it knows so far about the nature and goals of each attacker,” the project description reads. “The bots’ true purpose is to engage, build trust, provide fake information, and most importantly to elicit information from the attacker and waste their time and resources.”
Raytheon created two technologies, according to an Air Force Research Laboratory document: “SIENNA-Bot,” a “chatbot designed to converse with an interlocutor,” and “Cervantes,” which engages in “domain-specific dialogue development” and uses “quests, i.e., series of questions of increasing complexity intended to elicit information from the interlocutor.”

[Image] The pipeline from a user deemed threatening, to the SIENNA chatbot, to information gathering. (Screenshot | Air Force Research Laboratory)
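The records do not show what one of Cervantes’ “quests” looks like in practice, but “a series of questions of increasing complexity” suggests something like the following sketch, in which every question and the escalation rule are invented for illustration:

```python
# Hypothetical sketch of a Cervantes-style "quest": scripted questions that
# escalate from small talk toward identifying information. All content invented.

QUEST = [
    "Thanks for reaching out. Who am I speaking with?",           # low stakes
    "Which department did you say you were calling from?",
    "What's a direct number or email where I can confirm that?",  # identifying info
    "Can you send the paperwork from your official account?",     # highest stakes
]

def next_question(turn: int, last_reply_informative: bool) -> str:
    """Escalate only while the interlocutor keeps engaging; otherwise fall
    back a step to rebuild rapport before the riskier asks."""
    step = min(turn, len(QUEST) - 1)
    if not last_reply_informative:
        step = max(step - 1, 0)
    return QUEST[step]

print(next_question(0, True))   # opening question
print(next_question(3, True))   # the big ask, four turns in
```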
The government also worked with MITRE for “active social engineering defense” — but, while still under DARPA Program Manager Joshua Elliott, this contract was overseen by the Army Communications-Electronics Command instead of the Air Force, according to the DARPA funding document.
A MITRE slideshow from May 2023 — published by the Naval Postgraduate School — discusses “Social Engineering Impacts On Government Acquisition.” It said a “social engineering attack” on a contractor could cause an “adverse effect on future government acquisitions,” and recommended “[u]tilization of AI [artificial intelligence] and ML [machine learning] tools.”
MITRE published a paper on the subject in October 2022. It recommended tactics including “AI and ML” and “partnership with the government and private industry technology…”
“Leveraging automated, AI- and ML-enabled threat detection, reporting, and mitigation… can take the form of funneling attackers to a hollow Potemkin network, a ‘vulnerable and publicly accessible’ chatbot posing as an acquisition officer,” the paper reads.
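The paper offers no implementation details, but the funneling it describes is conceptually simple, as this hypothetical sketch shows; the flagging rule and the decoy’s script are assumptions:

```python
# Hypothetical sketch of the "hollow Potemkin network" idea: flagged senders
# are routed to a decoy chatbot instead of a real acquisition officer.

DECOY_SCRIPT = [
    "This is the acquisition office. Which contract are you asking about?",
    "I'll need your company's CAGE code to look that up.",
    "Our system is slow today. Can you resend your details?",
]

def route(sender_flagged: bool, turn: int) -> str:
    if sender_flagged:
        # Funnel to the decoy: plausible-sounding, time-wasting, instrumented.
        return DECOY_SCRIPT[turn % len(DECOY_SCRIPT)]
    return "FORWARD_TO_HUMAN_OFFICER"

print(route(True, 0))    # attacker sees the decoy
print(route(False, 0))   # legitimate traffic passes through
```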
The paper’s authors presented it at the Naval Postgraduate School’s Acquisition Research Symposium in May 2023.
MITRE “worked closely” with the Cogsec Collaborative, “which built and connected groups responding to perceived disinformation at no charge,” according to InfluenceWatch. Meanwhile, MITRE allegedly “developed a parallel framework… which employed similar techniques and tactics.”
As for the promised $2.4 million contract with the Canadian Commercial Corporation (shown in public grant disclosures and the DARPA funding document), there appears to exist another contract with an identical ID number but a different recipient.
The other contract — also with contract number FA865018C7889 — was with Uncharted Software, a company based in Toronto, Canada, and ran from September 2018 to October 2018. It, too, was part of the “Active Social Engineering Defense” project. Uncharted worked with “proven Defense Advanced Research Projects Agency (DARPA) collaborators,” and the project was overseen by the Air Force’s 711th Human Performance Wing.
Uncharted created a system called “ReCourse,” which would “coordinate, monitor and selectively moderate automated, conversational, enterprise-scale bots for defense against social engineering,” according to the contract. The company designed a “human-in-the-loop” system, and ReCourse would “shape bot tactics at the global enterprise level.”
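The contract language suggests bots that draft rather than post, with a human operator approving, editing, or discarding each message and setting tactics fleet-wide. A bare-bones, entirely hypothetical sketch of that loop:

```python
from queue import Queue

# Hypothetical sketch of ReCourse-style human-in-the-loop moderation. The
# queue, tactic switch, and function names are all assumptions.

REVIEW_QUEUE: Queue = Queue()
GLOBAL_TACTIC = {"mode": "passive"}   # "shape bot tactics at the global enterprise level"

def bot_draft(bot_id: str, reply: str) -> None:
    """Bots never post directly; drafts wait for a human decision."""
    REVIEW_QUEUE.put((bot_id, GLOBAL_TACTIC["mode"], reply))

def operator_review(approve: bool, edited_reply: str | None = None):
    """A human approves, edits, or discards the next queued draft."""
    bot_id, mode, reply = REVIEW_QUEUE.get()
    if not approve:
        return None                          # the reply never goes out
    return (bot_id, edited_reply or reply)   # post as drafted or as edited

bot_draft("bot-07", "Could you share a link to that claim?")
print(operator_review(approve=True))  # ('bot-07', 'Could you share a link to that claim?')
```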
Uncharted even developed a fake online profile named “Gabby” and documented the process of setting up such accounts.

[Image] Uncharted’s bot account “Gabby.” (Screenshot | Air Force Research Laboratory)

[Image] The process of setting up a network of bot accounts. (Screenshot | Air Force Research Laboratory)

The different social engineering systems — including NEMESIS and the University of Southern California’s aforementioned PIRANHA — were tested against one another for accuracy. The “Friend/Foe” test, shown in the Air Force records, was particularly concerning.

[Image] How the different systems performed at identifying “friend” or “foe” online. (Screenshot | Air Force Research Laboratory)
ReCourse posted the highest accuracy rate for classifying friendly or malicious accounts, at just 35 percent, and it still had a six percent “false alarm” rate. NEMESIS posted the lowest, at zero percent, with a 19 percent “false alarm” rate. One system even produced a 49 percent “false alarm” rate.
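For context, the records do not define how those figures were computed. Under the conventional reading, accuracy measured over all classified accounts and a “false alarm” meaning a friendly account wrongly flagged as a foe, the arithmetic works like this toy example, with invented labels:

```python
# Conventional scoring for a friend/foe test. The toy labels are invented;
# the Air Force records do not define the metrics, so this is the standard
# reading rather than a certainty.

def score(predicted: list[str], actual: list[str]) -> tuple[float, float]:
    correct = sum(p == a for p, a in zip(predicted, actual))
    accuracy = correct / len(actual)
    friends = [i for i, a in enumerate(actual) if a == "friend"]
    # False alarm: a friendly account classified as a foe.
    false_alarms = sum(predicted[i] == "foe" for i in friends)
    return accuracy, false_alarms / len(friends)

actual    = ["friend", "friend", "friend", "friend", "foe", "foe"]
predicted = ["friend", "foe",    "friend", "friend", "foe", "friend"]
acc, far = score(predicted, actual)
print(f"accuracy {acc:.0%}, false alarm rate {far:.0%}")  # accuracy 67%, false alarm rate 25%
```

By that reading, 35 percent accuracy means even the best-performing system misclassified roughly two of every three accounts it judged.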
So even if — given the benefit of the doubt — these systems simply engage scammers and malicious foreign actors, they may not even work as intended. But regardless, the government has spent millions on creating a phony, regime-controlled online network capable of accessing Americans’ information, stifling their speech, and engaging in psychological warfare.
Logan Washburn is a staff writer covering election integrity. He is a spring 2025 fellow of The College Fix. He graduated from Hillsdale College, served as Christopher Rufo’s editorial assistant, and has bylines in The Wall Street Journal, The Tennessean, and The Daily Caller. Logan is from Central Oregon but now lives in rural Michigan.