US must harness generative AI in competition against China, think tank warns
The United States must increase its federal spending on research and development and build out new government bodies, or change existing ones, if it wants to lead the globe in setting norms for how generative AI is developed and used, according to a new report from the Special Competitive Studies Project, or SCSP.
Generative AI could accelerate the discovery of new drugs and cybersecurity solutions, enable radically better computer networking, and improve public understanding. But in the hands of an adversary, it could lead to more cyberattacks, the development of new bioweapons, and far more effective disinformation campaigns.
The report lays out numerous recommendations for policymakers. Among the broader ones: the United States should “lead in convening a range of stakeholders to set responsible rules, both domestically and internationally,” for AI development; “accelerate its efforts to make quality, large datasets more widely available to U.S. organizations, particularly academia”; and “proactively incorporate GenAI into its daily work or risk falling behind.”
The Defense Department recently launched a task force to explore how to incorporate AI into various operations, but according to the report’s authors, generative AI could be useful across a wide range of government activities, particularly improving diplomacy.
“We do need to adopt new tools into government in terms of how we execute on our statecraft with foreign policy … We can actually capitalize on a lot of existing data, a lot of existing tools that the Department of State and other US government agencies have,” to reveal new insights about how the United States deals with other nations, Joe Wang, the senior director for foreign policy at SCSP, told Defense One.
On the defensive side, the United States also needs to take steps to prevent adversaries like China from using generative AI to manipulate political discourse, such as establishing a governmental body to call out AI-generated media and influence campaigns, the report states. The U.S. government doesn’t have a great track record of fighting disinformation, in part because certain candidates and causes often benefit from malign foreign influence. A modest proposal last year to establish a board within the Department of Homeland Security to track state-backed disinformation received so much pushback that DHS canceled the idea almost as quickly as it was announced.
But the proposed AI monitoring body “would solely alert of incidents that they’re seeing in the information domain as it pertains to synthetic media and federal elections. So this would be more of a public service type of function to avoid getting into some of those sticky situations that we saw with the disinformation governance board,” Meaghan Waff, associate director for intelligence, told Defense One.
One of the most important steps the government could take is investing more federal money in research and development. “The U.S. government should aim to increase federal R&D funding to one percent of the total U.S. GDP by 2026. Reaching this target means federal R&D funding would need to increase by approximately $83 billion over the next two years,” the report states. Current federal R&D spending is 0.66% of total U.S. GDP.
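For readers who want to sanity-check that figure, here is a rough back-of-the-envelope calculation in Python. This is only an illustrative sketch: the GDP baseline below is an assumption, not a number taken from the report, and the resulting dollar gap shifts with whatever GDP estimate and timeline one plugs in.

```python
# Back-of-the-envelope check of the report's R&D funding gap.
# NOTE: the GDP figure is an assumed round number for illustration,
# not a value drawn from the SCSP report.
gdp = 27.0e12           # assumed U.S. nominal GDP, in dollars
current_share = 0.0066  # current federal R&D spending as a share of GDP
target_share = 0.01     # the report's proposed target share by 2026

additional_spending = (target_share - current_share) * gdp
print(f"Additional annual federal R&D implied: ~${additional_spending / 1e9:.0f} billion")
# With a ~$27 trillion GDP assumption this lands above the report's ~$83 billion;
# a smaller GDP baseline or a phased two-year ramp brings the figures closer.
```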
The U.S. must also use the United Nations and other international forums to push for responsible use of generative AI, and continue to work with China on the issue where the two countries can identify mutual interests, such as preventing individuals from using generative AI to develop new bioweapons or launch cyberattacks that could disrupt global financial systems.
“Just as we had to win the Cold War with strategic stability between the U.S. and the Soviet Union as responsible leaders, we need to have a way to dialogue on the issues that have that sort of global risk [with China],” Wang said. “Gen AI [should be one] of the agenda items that the United States builds into the foreign policy dialogue that the Secretary of State has with his counterpart, into the commercial dialogue that the Secretary of Commerce sets with her counterpart, etc.”
But before the government can build out a multi-faceted approach to leading in generative AI, it has to set up regulatory frameworks around AI and rebuild lost trust between the American public and technology-sector leaders.
The EU has proposed sweeping AI regulations that could limit the future of AI development, while China has placed no similar restrictions on itself. “So it’s a question of U.S. firms being able to operate in European markets, obviously, and the revenue that can be generated from that being put into R&D and thinking about keeping that kind of technological edge,” said Will Mooreland, a director for foreign policy at SCSP.
The United States needs to be aware of such efforts without necessarily following them.
“I think any small ways that you can increase transparency, for example, even if the public knows when they’re interacting with an AI system, would go a long way towards increasing public trust,” said Rama Elluru, a senior director at SCSP.