Timon Harz

December 12, 2024

US Department of Defense takes on deepfakes: this technology is meant to safeguard national security in cyberspace

For the US Department of Defense, defending against deepfakes is crucial. The selected startup's technology looks for telltale details in deepfakes.

The US Department of Defense has invested $2.4 million over two years in the deepfake detection technology of the startup Hive AI. It is the first time the department's Defense Innovation Unit, whose mission is to accelerate the adoption of new technologies in the US defense sector, has contracted for such a capability. Hive AI's models are designed to recognize AI-generated video, image, and audio content.

Although deepfakes have been around for nearly a decade, generative AI has made them easier to create and more realistic-looking than ever, leaving them ripe for misuse in disinformation campaigns or fraud. Defending against these types of threats is now critical to national security, says Captain Anthony Bustamante, project manager and cyberwarfare operator at the Defense Innovation Unit.

Hive AI: Defending against deepfakes is "existential"

"This work is an important step in strengthening our information advantage in the fight against sophisticated disinformation campaigns and synthetic media threats," said Bustamante. Hive was selected from a pool of 36 companies to test its deepfake detection and attribution technology with the Department of Defense. The contract could enable the department to detect and combat AI deceptions at scale. Defending against deepfakes is "existential," says Kevin Guo, CEO of Hive AI. "This is the evolution of cyberwarfare."

Hive's technology has been trained on a large amount of content, some AI-generated and some not. It detects signals and patterns in AI-generated content that are invisible to the human eye but can be detected by an AI model.

Looking for patterns in deepfakes

"It turns out that any image generated by one of these generators contains this type of pattern if you know where to look for it," Guo says. The Hive team is constantly tracking new models and updating its technology accordingly.
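Guo's remark about generator-specific patterns can be made concrete with a toy example. The article does not describe Hive's actual method, but a common idea in published detection research is that image generators leave statistical traces, for instance periodic artifacts from upsampling layers, that show up in an image's frequency spectrum. The sketch below is purely illustrative (the score function and synthetic images are invented for this example) and assumes nothing about Hive's proprietary models:

```python
import numpy as np

def highfreq_ratio(image: np.ndarray) -> float:
    """Toy artifact score: fraction of spectral energy in high frequencies."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2)
    # Energy far from the spectrum's center is "high frequency".
    high = spectrum[dist > min(h, w) // 4].sum()
    return float(high / spectrum.sum())

rng = np.random.default_rng(0)

# A smooth "natural-like" image: noise passed through a simple low-pass filter.
smooth = rng.random((64, 64))
for _ in range(3):
    smooth = (smooth + np.roll(smooth, 1, axis=0) + np.roll(smooth, 1, axis=1)) / 3

# The same image with a faint periodic checkerboard trace added,
# mimicking the kind of regular artifact an upsampling layer can leave.
checker = np.indices((64, 64)).sum(axis=0) % 2
artifacted = smooth + 0.5 * checker

print(highfreq_ratio(smooth), highfreq_ratio(artifacted))
```

The periodic trace concentrates energy at high spatial frequencies, so the artifacted image scores measurably higher, which is the "pattern, if you know where to look for it" intuition in miniature. Real detectors are learned classifiers over far richer features, and, as the researchers below note, such statistical cues can be deliberately suppressed by an attacker.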

The tools and methods developed through this initiative have the potential to be adapted for broader use to not only address defense-specific challenges but also protect civilian entities from disinformation, fraud and deception, the Defense Department said in a statement.

According to Siwei Lyu, a professor of computer science and engineering at the University at Buffalo, Hive's technology offers cutting-edge performance in detecting AI-generated content. He was not involved in Hive's work but has tested the detection tools.

Ben Zhao, a professor at the University of Chicago who has also independently evaluated Hive AI's deepfake technology, agrees, but points out that it is far from foolproof.

Hive's deepfake detection isn't perfect either

"Hive is certainly better than most commercial companies and some of the research techniques we tested, but we also showed that it's not difficult to bypass," Zhao says. The team found that attackers can manipulate images in ways that evade Hive's detection. And given the rapid development of generative AI technologies, it is not yet certain how such detectors will perform in the real-world scenarios the defense sector might face, Lyu adds.

Guo says Hive is making its models available to the Defense Department so the department can use the tools offline and on its own devices, preventing sensitive information from leaking out.

But when an outside attack in the form of deepfakes threatens, off-the-shelf products aren't enough, Zhao says: "There's very little they can do to protect themselves against unforeseen state-level attacks."


Press contact

Timon Harz

oneboardhq@outlook.com
