
AI-powered F-16 impresses ride-along SECAF in dogfight

After riding in the front seat of an F-16 fighter jet controlled by artificial intelligence, Air Force Secretary Frank Kendall said he can see a future where AI agents will fly in war—and will do it better than humans. 

Computers “don’t get tired. They don’t get scared. They’re relentless. It’s easy to see a situation where they’re going to be able to do this job, generally speaking, better than humans can do. They also can handle large amounts of data,” Kendall said Wednesday during an AI and national security conference hosted by the Special Competitive Studies Project.

The secretary spent an hour in an X-62A VISTA, an F-16 fighter jet modified to test and train AI software, on May 2 at Edwards Air Force Base in California, flying in various combat scenarios. At one point during the flight, Kendall said, the machine-guided F-16 was chasing a crewed one in a circle. Each pilot was trying to fly the airplane better than the other, to get into a position where they could launch a missile.  

The automated jet was up against a “very good” pilot with 2,000 or 3,000 hours of experience—and the contest was roughly even. But if the AI agent had gone up against a pilot with less experience, the human would’ve lost, he said. 
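The stalemate Kendall describes follows from basic turn-circle geometry: two jets with the same speed and turn rate, each pulling its nose toward the other, never gain a lasting angular advantage. The toy simulation below is a hypothetical sketch of that geometry only, not the software flown on the X-62A; the speed and turn-rate values are illustrative assumptions.

    # Toy turn-circle illustration (hypothetical; not the X-62A's AI agent).
    # Two aircraft at fixed speed and turn rate each steer toward the other.
    import math

    SPEED = 250.0                  # m/s, assumed constant airspeed
    TURN_RATE = math.radians(18)   # rad/s, assumed max sustained turn rate
    DT = 0.1                       # s, simulation time step

    def step(pos, heading, target):
        """Turn at most TURN_RATE*DT toward the target, then fly forward."""
        desired = math.atan2(target[1] - pos[1], target[0] - pos[0])
        err = (desired - heading + math.pi) % (2 * math.pi) - math.pi
        heading += max(-TURN_RATE * DT, min(TURN_RATE * DT, err))
        return (pos[0] + SPEED * DT * math.cos(heading),
                pos[1] + SPEED * DT * math.sin(heading)), heading

    # Start 3 km apart on crossing headings, then fight for 60 seconds.
    a, ha = (0.0, 0.0), 0.0
    b, hb = (3000.0, 0.0), math.pi / 2
    for _ in range(600):
        a, ha = step(a, ha, b)
        b, hb = step(b, hb, a)

    # With identical performance, neither jet closes to a firing position;
    # the chase settles into the circling fight Kendall describes.
    print(f"separation after 60 s: {math.dist(a, b):.0f} m")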

“There are just inherent limitations on human beings, and when we can build machines that can do these jobs better than people can do them, the machines are going to do the job,” Kendall said. 

But many concerns remain about the ethics of using this technology in warfare—and what might happen if the Pentagon used lethal robots on the battlefield without human operators. 

The Pentagon will adhere to the laws of armed conflict, but the U.S. still needs to figure out how to apply these norms to automated machines, Kendall said.  

“At the end of the day, human beings are still responsible for creating, testing and putting those machines out and using them, so we have to figure out how to hold those people accountable to ensure that we have compliance with the norms that we all agree to,” he said. 

U.S. adversaries may decide to employ these weapons without considering collateral damage, because there’s an “operational advantage” to doing so, Kendall said. 

“We are seeing very vivid applications of these concerns right now in at least two major conflicts going on in the world today, and we’ve had this experience in the counterterrorism, counterinsurgency fights we were involved in, and we made some serious mistakes where we did engagements that we should not have made, but we were trying hard to follow the rules,” Kendall said.  

The U.S. must ensure its automated weapons don’t cause any more collateral damage “than necessary,” he said.  

“We won’t always be perfect about that, but we’re going to work really hard at it. We’re going to try very hard to implement those rules. I can assure you that,” he said.  

The Air Force first held an AI-versus-human F-16 dogfight in September at Edwards, and while officials wouldn’t say who came out on top, they said the AI agents performed well in various offensive and defensive combat sets.

The service wants to develop this technology quickly as it moves forward on plans to field a fleet of AI-enabled drones, called collaborative combat aircraft, or CCAs, that would fly alongside manned fighter jets by the end of this decade. 

“We’ll have uncrewed aircraft that are carrying weapons in the force by about 2030,” Kendall said.

Defense One

