Sarah Barrington
Featured
Writing for Tech Policy Press: AI is Not Too Unprecedented to Regulate
Nov 28, 2025

Is generative AI too new or unprecedented to regulate? In my latest article, published in Tech Policy Press, I argue ‘no’. While some elements of generative AI are genuinely new, and the inner workings of these models are extremely complex, the harms it enables are evolutions of those we have seen before.

Read More →
Hosting Our First APAI Workshop at ICCV25
Oct 31, 2025

We hosted our Authenticity & Provenance in the Age of generative AI (APAI) workshop at ICCV25 in Honolulu this year. We had a fantastic range of papers and talks, from watermarking NeRFs to semantically aligned VLM-based deepfake detection, as well as many great entries to our SAFE Synthetic Video Detection Challenge 2025.

Read More →
AI Policy-Academia Advisory in Washington DC
Sep 4, 2025

I joined a group of researchers from Berkeley AI Research (BAIR) at the Defense Innovation Unit HQ in Crystal City, where we met with an array of policymakers from DoD, DoE, DHS, DoS, the IC, Congress and more to talk AI policy and help bridge the infamous technology-policy gap.

Read More →
Our Work in Lawfare - Why Courts Need Updated Rules of Evidence to Handle AI Voice Clones
Apr 10, 2025

Hany Farid, Emily Cooper and Rebecca Wexler - who happen to be three of my favorite academics of all time EVER - authored an article about our work on AI voice clones and what this means in a court of law.

Read More →
How AI-Voice Clones Fool Humans: Our Latest Work in Nature Scientific Reports
Apr 10, 2025

Out today in Nature Portfolio's Scientific Reports: our work on detecting AI-powered voice clones. TL;DR: AI-generated voices are nearly indistinguishable from human voices. Listeners could spot a 'fake' only 60% of the time; by comparison, randomly guessing gets 50% accuracy.

Read More →
Releasing The DeepSpeak Dataset
Apr 10, 2025

Tired of using the usual poor-quality, non-consensual and limited deepfake datasets found online, we decided to make our own. Now in its second iteration, the dataset comprises over 50 hours of footage of 500 diverse individuals (recorded with their consent), along with 50 hours of deepfake video footage.

Read More →
Invited to the White House to talk AI Voice Cloning
Jun 4, 2024

My PI, Hany Farid, and I were delighted to join the discussion about AI voice cloning technologies at The White House in January.

Read More →
Conference: Presenting our Voice Cloning Work at the IEEE International Workshop on Information Forensics and Security
Jan 4, 2024

How do you detect a cloned voice? The simple answer… deep learning. I hugely enjoyed presenting our different detection algorithms, and the relative benefits of each, at the 2023 IEEE International Workshop on Information Forensics and Security (WIFS) in Nuremberg, Germany.

Read More →
Talk & workshop: AI & Cybersecurity for the US Department of State TechWomen Initiative
Sep 25, 2023

Had a fantastic day leading the AI & cybersecurity session for the U.S. Department of State #TechWomen23 initiative. I presented an overview of the current AI landscape, ranging from deepfakes and misinformation to the potential threat of cyber-fuelled nuclear warfare.

Read More →
Working for OpenAI at the Confidence-Building Measures for Artificial Intelligence Workshop, 2023
Aug 22, 2023

I was selected as a student facilitator for the Berkeley Risk and Security Lab & OpenAI workshop on Confidence-Building Measures for Artificial Intelligence (held January 2023; paper released August 2023).

Read More →
Paper: Detecting Real vs. Deep-fake Cloned Voices
Aug 21, 2023

Recently, I was fortunate to work with Romit Barua and Gautham Koorma on a project exploring the relative performance of computational approaches to cloned-voice detection. Synthetic-voice cloning technologies have seen significant advances in recent years, giving rise to a range of potential harms.

Read More →
Conference: Presenting at the 2023 Nobel Prize Summit
May 26, 2023
Read More →
Paper: Published in Nature Scientific Reports!
Mar 23, 2023

Hany & I have been working for over a year devising a study to examine how well we can estimate height and weight from a single image. We compare state-of-the-art AI, computer vision, and 3D modeling techniques with estimations from experts and non-experts. The results are surprising.

Read More →
Female Experiences of Online Abuse, Sexism and Trolling: Call for Participants
technical
Sep 15, 2021

Online abuse, including trolling, is an increasingly prevalent issue in modern-day society. Women, in particular, face a higher risk of online harassment, with examples widely reported in recent literature across blogging [1], politics [2], journalism [3] and multiple other fields. This study, conducted through the School of Information at the University of California, Berkeley, will involve remotely interviewing five self-identifying women with the goal of understanding their experiences of online abuse.

Read More →
Talking start-ups at the Cambridge Institute for Manufacturing
Jun 28, 2021

It is always a pleasure to reconnect with the Institute for Manufacturing (IfM), University of Cambridge, of which I have been an alumna since 2016. The Head of Department & my former course lead, Professor Tim Minshall, invited a handful of alumni back to talk about their experiences across multiple industries, and I was excited to talk about my experience of creating, building and operating within start-up companies.

Read More →
