Is generative AI too new or unprecedented to regulate? In my latest article, published in Tech Policy Press, I argue 'no'. While some elements of generative AI are genuinely new, and the inner workings of these models are extremely complex, the harms it enables are evolutions of harms we have seen before.
Hosted our Authenticity & Provenance in the Age of Generative AI (APAI) workshop at ICCV 2025 in Honolulu this year. We had a fantastic range of papers and talks, from watermarking NeRFs to semantically aligned VLM-based deepfake detection, as well as many great entries to our SAFE Synthetic Video Detection Challenge 2025.
Hany Farid, Emily Cooper, and Rebecca Wexler - who happen to be three of my all-time favorite academics - authored an article about our work on AI voice clones and what it means in a court of law.
Out today in Scientific Reports (Nature Portfolio): our work on detecting AI-powered voice clones. TL;DR: AI-generated voices are nearly indistinguishable from human voices. Listeners could spot a 'fake' only 60% of the time, barely above the 50% accuracy of random guessing.
Tired of using the usual poor-quality, non-consensual, and limited deepfake datasets found online, we decided to make our own. Now in its second iteration, the dataset comprises over 50 hours of footage of 500 diverse individuals (recorded with their consent), plus 50 hours of deepfake video footage.
My PI, Hany Farid, and I were delighted to join the discussion about AI voice cloning technologies at The White House in January.
How do you detect a cloned voice? The simple answer… deep learning. I hugely enjoyed presenting our different detection algorithms and the relative benefits of each at the 2023 IEEE International Workshop on Information Forensics and Security (WIFS) in Nuremberg, Germany.
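To give a flavor of what such a detector does, here is a minimal, purely illustrative sketch of the generic pipeline: slice an audio waveform into frames, extract simple per-frame features, and map their averages through a logistic scoring function. The features (log-energy, zero-crossing rate) and the fixed weights are stand-ins of my own invention for illustration only; the actual systems presented at WIFS use learned spectral representations and trained deep networks, not these placeholders.

```python
import math
import random

def frame_features(waveform, frame_len=512, hop=256):
    """Per-frame log-energy and zero-crossing rate -- simple stand-ins
    for the richer spectral features a real detector would learn."""
    feats = []
    for start in range(0, len(waveform) - frame_len + 1, hop):
        frame = waveform[start:start + frame_len]
        energy = sum(x * x for x in frame) / frame_len
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / frame_len
        feats.append((math.log1p(energy), zcr))
    return feats

def synthetic_score(feats, w=(0.8, -1.5), b=0.1):
    """Logistic score in (0, 1), read as P(synthetic). The weights here
    are arbitrary placeholders, not trained parameters from our work."""
    mean_energy = sum(f[0] for f in feats) / len(feats)
    mean_zcr = sum(f[1] for f in feats) / len(feats)
    logit = w[0] * mean_energy + w[1] * mean_zcr + b
    return 1 / (1 + math.exp(-logit))

# Usage: score one second of 16 kHz "audio" (white noise as a stand-in).
rng = random.Random(0)
audio = [rng.gauss(0, 1) for _ in range(16000)]
p = synthetic_score(frame_features(audio))
```

In a real system the hand-picked features and fixed weights would both be replaced by a network trained end-to-end on labeled real and cloned speech; the pipeline shape (frame, featurize, pool, score) is the part that carries over.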
Had a fantastic day leading the AI & cybersecurity session for the U.S. Department of State #TechWomen23 initiative. I presented an overview of the current AI landscape, ranging from deepfakes and misinformation to the potential threat of cyber-fueled nuclear warfare.
Selected as a student facilitator for the Berkeley Risk and Security Lab & OpenAI workshop on Confidence-Building Measures for Artificial Intelligence (held January 2023; paper released August 2023).
Recently, I was fortunate to work with Romit Barua and Gautham Koorma on a project exploring the relative performance of computational approaches to cloned-voice detection. Synthetic-voice cloning technologies have seen significant advances in recent years, giving rise to a range of potential harms.
Hany and I have spent over a year devising a study to examine how well height and weight can be estimated from a single image. We compare state-of-the-art AI, computer vision, and 3D modeling techniques against estimates from experts and non-experts. The results are surprising.