To Avoid An AI Catastrophe, Regulators Must Ensure Trustworthy AI, Experts Tell Lawmakers

AI’s greatest opportunity and its biggest danger are the same for the civilian world as for the military: the potential to reveal or obscure truth. That power could mean the difference between fair and unfair elections, or between life and death in battle.

On Tuesday, key innovators in AI appeared before lawmakers to discuss how new tools are delivering value to investors and makers—even as they are used to sow confusion and deceive people. 

On the one hand, new artificial intelligence tools are proving useful not just for writing essays but for a sort of prescience. A group of Harvard and MIT researchers recently demonstrated that generative AI models can predict, more accurately than polls, how people will vote based on the media that they consume. Clearspeed, whose products are used by the U.S. military, insurance companies, and other customers, offers an AI tool that uses sentiment analysis to tell whether an interviewee is responding truthfully to yes-or-no questions. Steve Wisotzki, a former Navy SEAL who manages government relations for the company, told Defense One the tech was pioneered to help U.S. and Afghan forces detect insider threats. Lots of new data has helped the company refine the model, Wisotzki said, to the point where it can pinpoint the “risk” that an interview subject is lying with more than 97 percent accuracy, based on fewer than ten questions.
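
Clearspeed has not disclosed how its model works, so the following is only a rough sketch of the general idea: scoring a yes-or-no voice response on a handful of features and flagging answers above a risk threshold. The feature names, weights, and threshold are hypothetical placeholders, not the company’s method.

```python
# Illustrative sketch only: Clearspeed has not published its model, and the
# feature names, weights, and threshold below are hypothetical placeholders
# for how a yes/no voice response might be scored for "risk."
from dataclasses import dataclass


@dataclass
class Response:
    question_id: str
    pitch_variance: float    # hypothetical acoustic feature, normalized 0-1
    response_latency: float  # seconds between question end and answer
    energy_shift: float      # hypothetical change in vocal energy, 0-1


def risk_score(resp: Response) -> float:
    """Combine a few acoustic features into a 0-1 risk score (toy weights)."""
    score = (0.4 * resp.pitch_variance
             + 0.3 * min(resp.response_latency / 3.0, 1.0)
             + 0.3 * resp.energy_shift)
    return max(0.0, min(1.0, score))


def flag_interview(responses: list[Response], threshold: float = 0.7) -> list[str]:
    """Return the question IDs whose answers exceed the risk threshold."""
    return [r.question_id for r in responses if risk_score(r) >= threshold]


if __name__ == "__main__":
    answers = [
        Response("q1", pitch_variance=0.2, response_latency=0.5, energy_shift=0.1),
        Response("q2", pitch_variance=0.8, response_latency=2.5, energy_shift=0.7),
    ]
    print(flag_interview(answers))  # prints ['q2']
```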

But the use of AI to obscure truth is also growing. In testimony to lawmakers, Gary Marcus, a neuroscientist, author, and entrepreneur, described potential threats posed by the simple tools already available. “They can and will create persuasive lies at a scale humanity has never seen before. Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems. Democracy itself is threatened. Chatbots will also clandestinely shape our opinions, potentially exceeding what social media can do. Choices about datasets that AI companies use will have enormous unseen influence; those who choose the data will make the rules shaping society in subtle but powerful ways.”

Sam Altman, a co-founder of OpenAI, maker of the popular new tool ChatGPT, told lawmakers: “My worst fears are that we…the technology industry cause significant harm to the world…It’s why we started the company. It’s a big part of why I’m here today.”

Some key military officials share those concerns about the trustworthiness of AI tools even as the military seeks to use AI in a wide variety of areas. 

Lisa Sanders, the director of science and technology for U.S. Special Operations Command, told Defense One, “We can’t ignore [tools like ChatGPT] because…the genie is out of the bottle.”

Effective operations in the gray zones of the future will mean “discovering ground truth kind of faster than the environment might want you to, faster than I might be able to, that sort of super capability to…arrive at high certainty of ground truth faster than the layperson,” Sanders told Defense One in an exclusive interview during the Global SOF event in Tampa, Florida.

SOCOM is working to merge sensor data, open-source data, and artificial intelligence to help operators in gray-zone environments. Wearing special glasses or headphones, SOF troops will better grasp local population sentiment, understand languages without internet-connected translators, and, essentially, predict various events. But such tools can turn from asset to liability if they are not trustworthy. Sanders said a key concern is “how do I validate either my conclusions that I’ve made or even the raw data source.” 
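
SOCOM has not described a specific architecture, but Sanders’ validation concern suggests that any fusion step would need to keep provenance attached to its conclusions. The sketch below is a hypothetical illustration: it merges observations by claim, averages the upstream confidences, and flags conclusions that rest on a single source so an operator can trace a judgment back to the raw data. All source names and data are made up.

```python
# Illustrative sketch only: a toy fusion step that tags every conclusion with
# the sources and confidences behind it, so an operator can trace a judgment
# back to the raw data. Source names and claims are hypothetical.
from dataclasses import dataclass


@dataclass
class Observation:
    source: str        # e.g. "acoustic_sensor", "open_source_report"
    claim: str         # e.g. "crowd forming at market"
    confidence: float  # 0-1, assumed to come from the upstream model


def fuse(observations: list[Observation], min_sources: int = 2) -> dict:
    """Group observations by claim, averaging confidence and keeping provenance."""
    by_claim: dict[str, list[Observation]] = {}
    for obs in observations:
        by_claim.setdefault(obs.claim, []).append(obs)

    fused = {}
    for claim, group in by_claim.items():
        avg_conf = sum(o.confidence for o in group) / len(group)
        fused[claim] = {
            "confidence": round(avg_conf, 2),
            "sources": [o.source for o in group],
            # flag single-source conclusions for the operator to validate
            "corroborated": len(group) >= min_sources,
        }
    return fused
```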

That job of validating truth won’t just fall to someone back home who has written the software and structured the data before deployment. Some of it will fall to the operator or his or her team in the moment of conflict or potential conflict. So how do you give them enough faith in the product that they can use it in tense moments without giving that operator too much confidence in a tool that only seems perfect?

Sanders said special operators will need to be able to “understand and tweak those algorithms…especially because we know a lot of this is coming from a non-military non-national-security background. It’s going to have inherent biases. So being able to see where those biases are taking us down a path that we need to adjust, or to identify the biases that we’re putting out, I think is important. And how do you do that without having scientists with you the whole way? How can we as the end user gain confidence and trust in that tool in order to appropriately utilize it to help us do our mission?”
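
One way an end user might surface the kind of bias Sanders describes, without a data scientist on hand, is to compare a model’s error rate across subgroups of a small labeled sample. The sketch below is purely illustrative; the group labels and data are hypothetical.

```python
# Illustrative sketch only: compare a model's error rate across subgroups of a
# small labeled sample to spot-check for bias. Group names and data are made up.
from collections import defaultdict


def error_rate_by_group(examples: list[dict]) -> dict[str, float]:
    """examples: [{'group': ..., 'prediction': ..., 'label': ...}, ...]"""
    errors, totals = defaultdict(int), defaultdict(int)
    for ex in examples:
        totals[ex["group"]] += 1
        if ex["prediction"] != ex["label"]:
            errors[ex["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}


sample = [
    {"group": "dialect_a", "prediction": 1, "label": 1},
    {"group": "dialect_a", "prediction": 0, "label": 1},
    {"group": "dialect_b", "prediction": 1, "label": 1},
    {"group": "dialect_b", "prediction": 1, "label": 1},
]
print(error_rate_by_group(sample))  # a large gap between groups warrants a closer look
```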

But reliance on those “non-military, non-national security background” companies to produce products that are safe and well-vetted is not a good idea unless Congress can compel them to forthrightly describe how their models actually work and what data goes into them. Congress must also ensure that outside government agencies and independent researchers can evaluate them, experts testified Tuesday.

OpenAI’s Altman proposed that the United States push to create an international monitoring organization, similar perhaps to the IAEA, to watch the development of AI tools. He suggested the U.S. government set up standards and licensing bodies. Companies that abuse AI tools or create risks for the public could have their licenses to use or develop such products revoked.

Not everyone in the tech community loves that idea. On Sunday, Eric Schmidt, a former CEO of Google who now runs a secretive defense-focused venture fund, suggested that AI at the scale of a company like Google and Microsoft is far too complex to be effectively regulated by anyone except for industry itself. 

But another former Googler, Meredith Whittaker, pushed back on Twitter: “The tech industry failing to police itself while regulators enabled is how we got where we are. I say this as someone who worked for Google for over 13 years. Don’t be fooled by powerful men making self-interested pronouncements in reasonable tones.”

Marcus, the neuroscientist, sounded a similar plea: to avoid the worst of AI abuse, the U.S. government must urgently regulate data sources, mandate algorithm explainability, and provide general governmental and civil oversight.

“The big tech companies’ preferred plan boils down to: trust us. But why should we? The sums of money at stake are mind-boggling,” he said. “Missions drift. OpenAI’s original mission statement proclaimed ‘our goal is to advance AI in the way that’s most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.’ Seven years later, they’re largely beholden to Microsoft, embroiled in part of an epic battle of search engines that routinely make things up. And that’s forced Alphabet to rush out products and de-emphasize safety. Humanity has taken a backseat.”

Defense One
