U.S. Government Funded ‘Human-AI Teaming’ Research Monitoring ‘Social Media Messages’ During Covid Lockdowns
The National Science Foundation’s Division of Civil, Mechanical, and Manufacturing Innovation awarded nearly $70,000 to three universities in 2020 for a collaborative research grant titled “Human-AI Teaming for Big Data Analytics to Enhance Response to the COVID-19 Pandemic.”
According to documents reviewed by The Federalist, the NSF allocated $24,498 to Brigham Young University; $24,266 to George Mason University; and $20,262 to the University of Texas at Austin for the project.
“Social media data can provide important clues and local knowledge that can help emergency managers and responders better comprehend and capture the evolving nature of many disasters,” the project’s abstract states. The funding enabled researchers to study how artificial intelligence algorithms can help authorities track, and potentially act on, Americans’ online speech.
The project sought to advance the field of “human-machine learning” by analyzing how human researchers and AI can collaborate to “comprehend social media patterns during an evolving disaster,” using “social media messages” to develop algorithmic surveillance practices. The stated purpose of the research was to help first responders mitigate crises.
The researchers claim their findings will “help emergency managers better train their volunteers who comb through social media using their understanding of the local knowledge and built environment to help machines see new patterns in data.” In other words, when the artificial intelligence cannot interpret regionally distinctive terms (references to minor landmarks, colloquialisms, and the like), human researchers supply the missing context.
Keri Stephens, a faculty member in the Moody College of Communication at the University of Texas at Austin and a principal investigator for the study, told The Federalist that researchers utilized “publically available data from Twitter,” filtering it with various keywords and the platform’s built-in geofencing capabilities to focus on specific areas and communications. Researchers were especially interested in communications discussing “risky” or “preventative” behaviors, she said. For instance, if a hospital was overflowing with Covid patients, a post made in the corresponding area encouraging people to stay away might be flagged and fed into the algorithm.
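The researchers’ code is not public, but a minimal Python sketch suggests the kind of keyword-and-geofence filtering Stephens describes. The keywords, bounding-box coordinates, and post structure below are illustrative assumptions, not the project’s actual parameters.

# Illustrative sketch of keyword-plus-geofence filtering of social media
# posts. All keywords, coordinates, and data below are assumptions for
# demonstration, not the project's actual parameters.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    lat: float
    lon: float

# Hypothetical keywords signaling "risky" or "preventative" behavior.
KEYWORDS = {"overflowing", "stay away", "quarantine", "shelter in place"}

# Hypothetical bounding box (roughly the Austin, Texas area) standing in
# for the platform's geofencing.
GEOFENCE = {"min_lat": 30.1, "max_lat": 30.5, "min_lon": -98.0, "max_lon": -97.5}

def in_geofence(post: Post) -> bool:
    """True if the post's coordinates fall inside the bounding box."""
    return (GEOFENCE["min_lat"] <= post.lat <= GEOFENCE["max_lat"]
            and GEOFENCE["min_lon"] <= post.lon <= GEOFENCE["max_lon"])

def matches_keywords(post: Post) -> bool:
    """True if the post mentions any keyword of interest."""
    text = post.text.lower()
    return any(kw in text for kw in KEYWORDS)

def flag_posts(posts: list[Post]) -> list[Post]:
    """Keep posts that are both inside the geofence and on-topic."""
    return [p for p in posts if in_geofence(p) and matches_keywords(p)]

if __name__ == "__main__":
    sample = [
        Post("The ER is overflowing, please stay away tonight", 30.27, -97.74),
        Post("Great tacos downtown", 30.27, -97.74),
        Post("Hospital overflowing here too", 40.71, -74.01),  # outside geofence
    ]
    for p in flag_posts(sample):
        print("flagged:", p.text)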
Research focused on “capturing ephemeral data from a variety of social media sources” and, using “different sampling algorithms for active (machine) learning paradigms,” explored “how humans understand, process, and interpret social media messages.”
Machine learning is typically employed when relying on humans for complex problem-solving and data interpretation would be prohibitively expensive, slow, or otherwise impractical. Predictive algorithms and artificial neural networks fill the gaps and gradually learn to perform tasks autonomously. In the “active learning” paradigms the researchers cite, the model additionally flags the messages it is least certain about and routes them to a human for labeling, so each round of human feedback sharpens its future judgments.
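The grant materials do not spell out the sampling algorithms, but uncertainty sampling, a common active learning strategy, illustrates the loop: the model scores unlabeled posts, and the ones it is least confident about go to a human annotator whose labels retrain it. The classifier, features, and toy data in this Python sketch are assumptions, not the researchers’ actual pipeline.

# Illustrative uncertainty-sampling loop using scikit-learn. The model,
# features, and data are assumptions, not the researchers' pipeline.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical human-labeled seed posts: 1 = disaster-relevant, 0 = not.
labeled_texts = [
    "hospital overflowing stay away",
    "roads flooded avoid downtown",
    "great concert last night",
    "new coffee shop just opened",
]
labels = [1, 1, 0, 0]

# Unlabeled stream the machine must triage.
unlabeled_texts = [
    "the ER is packed tonight",
    "love this weather",
    "shelter in place near the bridge",
    "taco truck on south first",
]

vectorizer = TfidfVectorizer()
X_labeled = vectorizer.fit_transform(labeled_texts)
model = LogisticRegression().fit(X_labeled, labels)

X_unlabeled = vectorizer.transform(unlabeled_texts)
proba = model.predict_proba(X_unlabeled)[:, 1]

# Uncertainty is the distance from a 50/50 prediction; the least confident
# posts go to a human annotator, whose labels are added to the training set.
uncertainty = 1 - np.abs(proba - 0.5) * 2
ranked = sorted(zip(unlabeled_texts, uncertainty), key=lambda t: -t[1])
for text, u in ranked:
    print(f"uncertainty={u:.2f}  ->  {text}")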
The project’s outcomes report states that researchers “extensively collaborated with the Community Emergency Response Teams (CERTs) led by Montgomery County CERT in the Washington D.C. Metro region for social media filtering to find relevant information for COVID-19 pandemic response.”
Taken at face value, there is little reason for concern in giving first responders more tools to respond to emergencies. The recent Hawaiian wildfires illustrate the appeal: if roads are blocked by debris or communities are unable to escape, and people are live-tweeting their experience, the ability to rapidly interpret and respond to their concerns would be invaluable.
In that scenario, “human-AI teaming” could prove genuinely useful.
However, there is no guarantee this new method of surveillance will be used solely for the stated purpose of disaster mitigation.
Missouri v. Biden and the “Twitter Files” demonstrated that the federal government and social media companies frequently work hand-in-glove to monitor and suppress the speech of Americans.
Twitter, for instance, used the Hamilton 68 digital dashboard to algorithmically monitor and blacklist accounts suspected of amplifying Russian “disinformation” that was inconvenient for the Democrat establishment and failed presidential candidate Hillary Clinton.
[READ: By Exposing Hamilton 68, The ‘Twitter Files’ Proved The Deep State Is A Weapon Aimed Directly At You]
Technology and practices supposedly created for the betterment of American society are often used as political cudgels to silence and disenfranchise the political outgroup — typically people who are conservative and religious. This synthesis of public and private power frequently has dire consequences for these communities.
Santa Clara County Public Health Department, for instance, improperly obtained geolocation data from the data broker SafeGraph to levy exorbitant fines against Calvary Chapel Church for defying Covid restrictions. Had the local government not obtained this data in violation of the company’s terms of service, the church would not be facing more than $1 million in fines.
During Covid, the Centers for Disease Control and Prevention purchased similar geolocation data from SafeGraph to monitor millions of Americans’ travel patterns for compliance with lockdown protocols.
The CDC “planned to use the data to analyze compliance with curfews” and “track patterns of people visiting K-12 schools,” according to the Daily Mail.
Given the federal government’s persistent habit of working with Big Tech to censor and monitor Americans’ online activity, it’s likely any progress made in “human-AI teaming” will be co-opted to suppress information deemed inconvenient to permanent Washington.
[READ: ‘Facebook Files’ 2.0 Reveal White House Pressured Facebook To Censor ‘True’ Content]
Brigham Young University and George Mason University did not return a request for comment by publication time.
Samuel Mangold-Lenett is a staff editor at The Federalist. His writing has been featured in the Daily Wire, Townhall, The American Spectator, and other outlets. He is a 2022 Claremont Institute Publius Fellow. Follow him on Twitter @smlenett.