
DARPA releases deepfake detection tools and starts collaborative challenges to develop more

DARPA releases deepfake tools to help counter fake AI pictures, voices, and news. (Source: DARPA)
DARPA has released its deepfake detection tools developed in its Semantic Forensics program to counter AI threats. Nearly two dozen tools help detect deepfake photos and videos as well as AI-generated text and voice. Collaborative AI FORCE challenges released every four weeks encourage public researchers to help develop innovative solutions to the generative AI threat.

DARPA (US Defense Advanced Research Projects Agency) has released its deepfake detection tools developed in its Semantic Forensics (SemaFor) program to the public. The agency has made available nearly two dozen tools under permissive or open licensing to detect deepfake photos and videos as well as AI-generated text and voice. To encourage continued development, a series of collaborative challenges will be released every four weeks under the AI Forensics Open Research Challenge Evaluations (AI FORCE) program to help develop new tools and algorithms to counter the AI threat.

The AI Pandora’s box was confirmed open when Google engineer Blake Lemoine claimed in 2022 that Google's LaMDA AI had become sentient. Regardless of how artificial life is defined, users of AI can certainly see that its abilities are vast and, in some tasks, exceed those of humans. Readers who have seen The Terminator or Ghost in the Shell will also recognize that the threat of AI, whether initiated by its users or on its own, is great.

DARPA’s mission “to make pivotal investments in breakthrough technologies for national security” has led it to target the problem of AI threats under its SemaFor program. The agency has developed an Analytic Catalog of 19 components, with more to be added over time. All components are released under permissive or open licenses in an effort to put these counter-AI tools in the public's hands.

These tools include code and algorithms developed to detect AI threats such as fake news and AI-generated text posted on Twitter and elsewhere, fake robocalls imitating President Biden, and fake images of the Pope created by generative tools such as Midjourney.

DARPA also hopes to collaborate with public researchers and developers through its AI FORCE program by targeting a new area of interest every four weeks to encourage innovation. Readers who want to develop counter-AI tools will need a fast Nvidia graphics card.

Deepfake Defense Tech Ready for Commercialization, Transition

To foster transition of Semantic Forensics technologies to industry and government, DARPA to launch efforts that will bolster defenses against manipulated media

[email protected] /14/2024

The threat of manipulated media has steadily increased as automated manipulation technologies become more accessible, and social media continues to provide a ripe environment for viral content sharing.

The speed, scale, and breadth at which massive disinformation campaigns can unfold require computational defenses and automated algorithms to help humans discern what content is real and what’s been manipulated or synthesized, why, and how.

Through the Semantic Forensics (SemaFor) program, and previously the Media Forensics program, DARPA’s research investments in detecting, attributing, and characterizing manipulated and synthesized media, known as deepfakes, have resulted in hundreds of analytics and methods that can help organizations and individuals protect themselves against the multitude of threats of manipulated media.

With SemaFor in its final phase, DARPA’s investments have systemically driven down developmental risks – paving the way for a new era of defenses against the mounting threat of deepfakes. Now, the agency is calling on the broader community – including commercial industry and academia doing research in this space – to leverage these investments.

To support this transition, the agency is launching two new efforts to help the broader community continue the momentum of defense against manipulated media.

The first comprises an analytic catalog containing open-source resources developed under SemaFor for use by researchers and industry. As capabilities mature and become available, they will be added to this repository.

The second will be an open community research effort called AI Forensics Open Research Challenge Evaluation (AI FORCE), which aims to develop innovative and robust machine learning, or deep learning, models that can accurately detect synthetic AI-generated images. Via a series of mini challenges, AI FORCE asks participants to build models that can discern between authentic images, including ones that may have been manipulated or edited using non-AI methods, and fully synthetic AI-generated images. This effort will launch the week of March 18 and will be linked from the SemaFor program page. Those seeking a notification may sign up for the Information Innovation Office newsletter.
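As a rough illustration of the binary decision these mini challenges call for, the toy sketch below scores an image by the fraction of its spectral energy in high frequencies, one crude signal sometimes examined in synthetic-image forensics. This is purely an illustrative assumption, not DARPA's method: the FFT heuristic, the cutoff radius, and the threshold are all arbitrary choices standing in for the trained detectors that AI FORCE actually solicits.

```python
import numpy as np

def high_freq_ratio(img: np.ndarray) -> float:
    """Fraction of 2-D spectral energy outside a central low-frequency disk."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(spectrum) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2
    radius = min(h, w) // 8  # low-frequency cutoff (arbitrary illustrative choice)
    yy, xx = np.ogrid[:h, :w]
    low_band = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return float(power[~low_band].sum() / power.sum())

def label_image(img: np.ndarray, threshold: float = 0.01) -> str:
    """Toy decision rule (hypothetical threshold): flag unusually smooth
    spectra as synthetic, everything else as authentic."""
    return "synthetic" if high_freq_ratio(img) < threshold else "authentic"
```

A real AI FORCE submission would replace this hand-built statistic with a learned model, and would also need to treat images edited by non-AI means as authentic, per the challenge rules above.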

According to DARPA and SemaFor researchers, a concerted effort across the commercial sector, media organizations, external researchers and developers, and policymakers is needed to develop and deploy solutions that combat the threats of manipulated media. SemaFor is providing the tools and methods necessary to help people in this problem space.

“Our investments have seeded an opportunity space that is timely, necessary, and poised to grow,” said Dr. Wil Corvey, DARPA’s Semantic Forensics program manager. “With the help of industry and academia around the world, the Semantic Forensics program is ready to share what we’ve started to bolster the ecosystem needed to defend authenticity in a digital world.”

For more information on the above efforts and resources, visit https://www.darpa.mil/program/semantic-forensics and https://semanticforensics.com/.

To learn more about the SemaFor program and for an overview of resulting technologies, check out the Voices from DARPA episode on “Demystifying Deepfakes” at https://www.darpa.mil/news-events/2023-06-16.

David Chien, 2024-06-02 (Update: 2024-06-02)