
As adversaries harness AI, tech firms peer through chat logs to catch them

Asking large language models for information on intelligence agencies, using them to fix scripting errors and develop code to hack into systems, translating technical jargon and querying ways to hide inside networks — those are some of the AI-driven methods used by a China-backed hacking collective that has historically spent its time targeting U.S. defense contractors and government agencies.

It’s just one of several instances outlined in a Microsoft and OpenAI analysis released Wednesday that details what AI and security researchers have feared for at least the past year: nation-state hackers have started experimenting with large language models to help them carry out cyberattacks.

Those detections, which the tech giant linked to a slew of other major state-backed hacking groups tied to Iran, North Korea and Russia, weren’t conjured out of thin air. OpenAI — in which Microsoft is a major investor — worked with Microsoft’s Threat Intelligence arm to analyze the chat histories of the malicious accounts and terminate them.

It’s evidence that, despite broad pushes from big-name tech and AI firms to enshrine privacy standards in their products, companies with AI offerings are likely to screen chat logs and user interactions for abuses, especially for cyber and national security reasons, cybersecurity and privacy experts told Nextgov/FCW.

“There are privacy concerns. And that was the almost unacknowledged theme within the report,” said Eric Noonan, CEO of CyberSheath, an IT security provider that offers compliance services. “The ability to observe this activity by bad actors implies that there was some level of surveillance or monitoring.”

“Microsoft and OpenAI take action against known, malicious actors — those among the more than 300 threat actors Microsoft Threat Intelligence continually tracks, including 160 nation-state actors, 50 ransomware groups, and many others,” said Microsoft’s threat intelligence strategy director Sherrod DeGrippo. 

“Our focus is on recognizing and blocking these known malicious identities, attributes and infrastructure that we see active across the threat space,” DeGrippo said. “As noted in our blog post, these actions are guided by the principles we announced around protecting platforms and customers.” 

Researchers and officials have warned over the past year that the advent of consumer-facing AI chat tools like OpenAI’s ChatGPT or Microsoft’s Copilot may help established nation-state hackers supercharge their capabilities or enable inexperienced cybercriminals to automate and deploy programs that can take down websites or exfiltrate sensitive information from networks. 

Security officials have often emphasized that the new-wave AI element of cyberwarfare is double-edged, helping defenders as much as it may help attackers. But the old-school approach the two companies took here amounted to sweeping through message history and taking down the affiliated accounts, much as social media giants analyze posts for hateful content and terminate users.

That dynamic will likely continue in future cyber and AI research, which means users can expect tech firms to sometimes dive into chat history and draw linkages to cybercriminals or state-affiliated hacking collectives. To some, it shouldn’t come as a surprise.

“AI privacy concerns are well warranted with what we’ve already seen with examples of AI use cases, where data used in learning models are leaked or discovered inappropriately without appropriate guardrails,” said Ken Dunham, a director with the Threat Research unit at cloud security firm Qualys.

Privacy concerns are often raised with any large corporation that offers services online, though many privacy laws recognize times when firms work to mitigate fraud or address security lapses, said Cody Venzke, senior policy counsel for privacy and technology at the American Civil Liberties Union.

The problem lies in tech firms’ transparency about how they use any collected data, which is often unclear, he said.

“Does [sweeping AI chat logs], for example, stop threat actors and create mitigations for cybersecurity threats, or is this sort of information being used elsewhere?” Venzke said. 

OpenAI, the maker of ChatGPT, for instance, holds onto conversations and uses that data to better tune its models. Last year the company released a feature that lets users turn off chat history retention after concerns were raised over how sensitive data might spill into other users’ chats.

The Federal Trade Commission warned Wednesday that it will crack down on companies that quietly change privacy policies to mine user data for training AI models. FTC officials declined to comment for this story.  

Big tech firms and their affiliated AI or cybersecurity offerings will likely release similar reports that involve scanning chat logs to detect hackers, Venzke and Noonan said. Though it might not amount to full-scale monitoring, cybersecurity giants like Mandiant — whose parent company, Google, recently rebranded its Bard AI chatbot — will be incentivized to use every tool at their disposal in the name of cybersecurity.

“We’re seeing increased concerns across society, lawmakers and policymakers about the capabilities of large language models and emerging AI, and I think some companies are trying to respond to that,” Venzke said.

Defense One


