Hosted our Authenticity & Provenance in the Age of Generative AI (APAI) workshop at ICCV 2025 in Honolulu this year. We had a fantastic range of papers and talks, from watermarking NeRFs to semantically aligned VLM-based deepfake detection, and many great entries to our SAFE Synthetic Video Detection Challenge 2025.
AI Policy-Academia Advisory in Washington DC
I joined a group of researchers from Berkeley AI Research (BAIR) at the Defense Innovation Unit HQ in Crystal City, where we met with an array of policymakers from the DoD, DoE, DHS, DoS, the IC, Congress, and more to talk AI policy and help bridge the infamous technology-policy gap.
How AI Voice Clones Fool Humans: Our Latest Work in Nature Scientific Reports
Out in Nature Scientific Reports today, our work on detecting AI-powered voice clones. TL;DR: AI-generated voices are nearly indistinguishable from human voices. Listeners could spot a 'fake' only 60% of the time, barely above the 50% accuracy of random guessing.
Releasing The DeepSpeak Dataset
Tired of using the usual poor-quality, non-consensual, and limited deepfake datasets found online, we decided to make our own. Now in its second iteration, the dataset comprises over 50 hours of footage of 500 diverse individuals (recorded with their consent) and 50 hours of deepfake video footage.
Invited to the White House to talk AI Voice Cloning
My PI, Hany Farid, and I were delighted to join the discussion about AI voice cloning technologies at The White House in January.
Conference: Presenting our Voice Cloning Work at the IEEE International Workshop on Information Forensics and Security
How do you detect a cloned voice? The simple answer… deep learning. Hugely enjoyed presenting our different detection algorithms and the relative benefits of each at the 2023 IEEE International Workshop on Information Forensics and Security (WIFS) in Nuremberg, Germany.
Working for OpenAI at the Confidence-Building Measures for Artificial Intelligence Workshop, 2023
Selected as a student facilitator for the Berkeley Risk and Security Lab & OpenAI workshop on Confidence-Building Measures for Artificial Intelligence (Jan 2023, paper released August 2023).
Paper: Detecting Real vs. Deep-fake Cloned Voices
Recently, I was fortunate to work with Romit Barua and Gautham Koorma on a project exploring the relative performance of computational approaches to cloned-voice detection. Synthetic-voice cloning technologies have seen significant advances in recent years, giving rise to a range of potential harms.
Conference: Presenting at the 2023 Nobel Prize Summit
Excited, shocked (!), and honored to share that our work in deepfake detection was recognized at the Nobel Prize Summit by the Digital Public Goods Alliance and the United Nations Development Programme (UNDP) as part of their campaign combating disinformation.
We were invited to attend the summit and participate in the most enriching and enlightening conversations about information integrity. We met world leaders, Laureates, policymakers, and technologists from the international community who are creating real change in their fields. We were also invited to the Royal Norwegian Embassy to present our work.
We were selected as one of an array of open-source innovations aiming to enhance information integrity worldwide. More info at bit.ly/Info-Pollution. Thank you to our lab: Romit Barua, Gautham Koorma, and Hany Farid at the UC Berkeley School of Information, and a special shout-out to Nick Merrill for sharing this amazing opportunity.
Paper: Published in Nature Scientific Reports!
Hany & I have been working for over a year devising a study to examine how well height and weight can be estimated from a single image. We compare state-of-the-art AI, computer vision, and 3D modeling techniques with estimates from experts and non-experts. The results are surprising.
Female Experiences of Online Abuse, Sexism and Trolling: Call for Participants
Online abuse, including trolling, is an increasingly prevalent issue in modern-day society. Women, in particular, face a higher risk of being targeted by online harassment, with examples widely reported in recent literature across blogging [1], politics [2], journalism [3], and multiple other fields. This study, conducted through the School of Information at the University of California, Berkeley, will involve remotely interviewing 5 self-identifying women with the goal of understanding their experiences of online abuse.