In our latest study, listeners judged an AI-generated voice to have the same identity as its real counterpart 80% of the time. So what do we do about it? I was glad to talk to WKBW-TV ABC News about the common scams people are experiencing every day through these technologies and how we can look out for them.
My Interview With NBC
Our audio deepfakes study on NBC News! It was an absolute pleasure to sit down with Senior Investigative Reporter Bigad Shaban (who, by the way, is awesome to work with) and discuss all things deepfakes, AI-powered voice clones, and the post-truth era. Bigad even took the same 'real vs. fake' voice quiz that we gave to our study participants and did no better than guessing!
Writing for Tech Policy Press: AI is Not Too Unprecedented to Regulate
Is generative AI too new or unprecedented to regulate? In my latest article published in Tech Policy Press, I argue ‘no’. While some elements of generative AI are genuinely new, and the inner workings of these models are extremely complex, the harms it enables are evolutions of those we have seen before.
Our Work in Lawfare - Why Courts Need Updated Rules of Evidence to Handle AI Voice Clones
Hany Farid, Emily Cooper and Rebecca Wexler - who happen to be three of my favorite academics of all time EVER - authored an article about our work on AI voice clones and what this means in a court of law.
Featured in the Royal Academy of Engineering Ingenia Magazine
The Royal Academy of Engineering did a very kind profile of me for Ingenia magazine.
The article can be found here: https://www.ingenia.org.uk/articles/qa-sarah-barrington-phd-student-studying-ai-harms-and-deepfakes/. Thank you to Jasmine Wragg for coordinating, and Florence Downs for being so lovely to work with!
In the Washington Post Today
Delighted to open today’s Washington Post newsletter to see a prominent headline about audio deepfakes, with quotes from both Hany and me in it!
Writing about Bad Bunny in the Berkeley Science Review
Our piece about the AI-deepfake of Bad Bunny is out in the Berkeley Science Review now.
Talking to Vox About FKA twigs, Scarlett Johansson and Audio Deepfakes
I was invited to share my thoughts with Vox about recent developments in the entertainment industry revolving around FKA twigs, Scarlett Johansson and the looming dawn of audio deepfakes.
Our Work on NPR
It was great to be a part of this recent NPR piece on voice cloning. I got to talk to Huo Jingnan about all things voice-clone detection, and how there’s no ‘silver bullet’ from machine learning to fix this problem.