Sarah Barrington

Discussing AI-Powered Scams on WKBW ABC

In our latest study, humans perceived the identity of an AI-generated voice to be the same as its real counterpart 80% of the time. So what do we do about it? I was glad to talk to WKBW-TV ABC News about the common scams people are experiencing every day through these technologies and how we can look out for them.

Read more

tags: news
Friday 01.02.26
Posted by Sarah Barrington
 

My Interview With NBC

Our audio deepfakes study is on NBC News! It was an absolute pleasure to sit down with Senior Investigative Reporter Bigad Shaban (who, by the way, is awesome to work with) and discuss all things deepfakes, AI-powered voice clones, and the post-truth era. Bigad even tried the same 'real vs. fake' voice quiz that we gave to our study participants and did no better than guessing!

Read more

tags: news
Friday 11.28.25
Posted by Sarah Barrington
 

Writing for Tech Policy Press: AI is Not Too Unprecedented to Regulate

Is generative AI too new or unprecedented to regulate? In my latest article, published in Tech Policy Press, I argue ‘no’. While some elements of generative AI are genuinely new, and the inner workings of these models are extremely complex, the harms it enables are evolutions of those we have seen before.

Read more

tags: news, writing
Friday 11.28.25
Posted by Sarah Barrington
 

Hosting Our First APAI Workshop at ICCV25

We hosted our Authenticity & Provenance in the Age of Generative AI (APAI) workshop at ICCV25 in Honolulu this year. We had a fantastic range of papers and talks, from watermarking NeRFs to semantically aligned, VLM-based deepfake detection, and many great entries to our SAFE Synthetic Video Detection Challenge 2025.

Read more

tags: academic
Friday 10.31.25
Posted by Sarah Barrington
 

Hany's TED Talk with 1 Million Views

My PhD advisor, Professor Hany Farid, gave a fantastic TED Talk about what our lab does. A highly recommended watch.

Read more

Thursday 09.04.25
Posted by Sarah Barrington
 

AI Policy-Academia Advisory in Washington DC

I joined a group of researchers from Berkeley AI Research (BAIR) at the Defense Innovation Unit HQ in Crystal City, where we met with policymakers from the DoD, DoE, DHS, DoS, the IC, Congress, and more to talk AI policy and help bridge the infamous technology-policy gap.

Read more

tags: academic
Thursday 09.04.25
Posted by Sarah Barrington
 

Our Work in Lawfare - Why Courts Need Updated Rules of Evidence to Handle AI Voice Clones

Hany Farid, Emily Cooper and Rebecca Wexler - who happen to be three of my favorite academics of all time EVER - authored an article about our work on AI voice clones and what this means in a court of law.

Read more

tags: news, writing
Thursday 04.10.25
Posted by Sarah Barrington
 

Featured in the Royal Academy of Engineering Ingenia Magazine

The Royal Academy of Engineering did a very kind profile on me for the Ingenia magazine.

The article can be found here: https://www.ingenia.org.uk/articles/qa-sarah-barrington-phd-student-studying-ai-harms-and-deepfakes/. Thank you to Jasmine Wragg for coordinating, and Florence Downs for being so lovely to work with!

Read more

tags: news
Thursday 04.10.25
Posted by Sarah Barrington
 

How AI-Voice Clones Fool Humans: Our Latest Work in Nature Scientific Reports

Out today in Nature Scientific Reports: our work on detecting AI-powered voice clones. TL;DR: AI-generated voices are nearly indistinguishable from human voices. Listeners could only spot a 'fake' 60% of the time; by comparison, randomly guessing gets 50% accuracy.

Read more

tags: academic
Thursday 04.10.25
Posted by Sarah Barrington
 

Releasing The DeepSpeak Dataset

Tired of using the usual poor-quality, non-consensual, and limited deepfake datasets found online, we decided to make our own. Now in its second iteration, the dataset comprises over 50 hours of footage of 500 diverse individuals (recorded with their consent), and 50 hours of deepfake video footage.

Read more

tags: academic
Thursday 04.10.25
Posted by Sarah Barrington
 

In the Washington Post Today

Delighted to open today’s Washington Post newsletter to see a prominent headline about audio deepfakes, with quotes from Hany and me in it!

Read more

tags: news
Wednesday 10.16.24
Posted by Sarah Barrington
 

Writing about Bad Bunny in the Berkeley Science Review

Our piece about the AI-deepfake of Bad Bunny is out in the Berkeley Science Review now.

Read more

tags: news
Monday 10.07.24
Posted by Sarah Barrington
 

Keynote at the Alan Turing Institute Women in AI Security Workshop

On 17th July 2024, I was delighted to be able to speak as a keynote technical presenter at the Alan Turing Institute Women in AI Security Workshop.

Read more

Monday 10.07.24
Posted by Sarah Barrington
 

Talking to Vox About FKA twigs, Scarlett Johansson and Audio Deepfakes

I was invited to share my thoughts with Vox about recent developments in the entertainment industry revolving around FKA twigs, Scarlett Johansson and the looming dawn of audio deepfakes.

Read more

tags: news
Sunday 06.16.24
Posted by Sarah Barrington
 

Our Work on NPR

It was great to be a part of this recent NPR piece on voice cloning. I got to talk to Huo Jingnan about all things voice-clone detection, and how there’s no ‘silver bullet’ from machine learning to fix this problem.

Read more

tags: news
Tuesday 06.04.24
Posted by Sarah Barrington
 

Invited to the White House to talk AI Voice Cloning

My PI, Hany Farid, and I were delighted to join the discussion about AI voice cloning technologies at The White House in January.

Read more

tags: academic
Tuesday 06.04.24
Posted by Sarah Barrington
 

Conference: Presenting our Voice Cloning Work at the IEEE International Workshop on Information Forensics and Security

How do you detect a cloned voice? The simple answer… deep learning. Hugely enjoyed presenting our different detection algorithms and the relative benefits of each at the 2023 IEEE International Workshop on Information Forensics and Security (WIFS) in Nuremberg, Germany.

Read more

tags: academic
Thursday 01.04.24
Posted by Sarah Barrington
 

Talk & workshop: AI & Cybersecurity for the US Department of State TechWomen Initiative

Had a fantastic day leading the AI & cybersecurity session for the U.S. Department of State #TechWomen23 initiative. I presented an overview of the current AI landscape, ranging from deepfakes and misinformation to the potential threat of cyber-fuelled nuclear warfare.

Read more

tags: speaking
Monday 09.25.23
Posted by Sarah Barrington
 

Working for OpenAI at the Confidence-Building Measures for Artificial Intelligence Workshop, 2023

I was selected as a student facilitator for the Berkeley Risk and Security Lab & OpenAI workshop on Confidence-Building Measures for Artificial Intelligence (held January 2023; paper released August 2023).

Read more

tags: academic
Tuesday 08.22.23
Posted by Sarah Barrington
 

Paper: Detecting Real vs. Deep-fake Cloned Voices

Recently, I was fortunate to work with Romit Barua and Gautham Koorma on a project exploring the relative performance of computational approaches to cloned-voice detection. Synthetic-voice cloning technologies have seen significant advances in recent years, giving rise to a range of potential harms.

Read more

tags: academic
Monday 08.21.23
Posted by Sarah Barrington
 
