• About
  • Research / Projects
  • Publications
  • CV
  • GITHUB
  • Contact
Sarah Barrington
Featured
Our Work in Lawfare - Why Courts Need Updated Rules of Evidence to Handle AI Voice Clones
Apr 10, 2025

Hany Farid, Emily Cooper and Rebecca Wexler - who happen to be three of my favorite academics of all time EVER - authored an article about our work on AI voice clones and what this means in a court of law.

Featured in the Royal Academy of Engineering Ingenia Magazine
Apr 10, 2025

The Royal Academy of Engineering did a very kind profile on me for the Ingenia magazine.

The article can be found here: https://www.ingenia.org.uk/articles/qa-sarah-barrington-phd-student-studying-ai-harms-and-deepfakes/. Thank you to Jasmine Wragg for coordinating, and Florence Downs for being so lovely to work with!

How AI-Voice Clones Fool Humans: Our Latest Work in Nature Scientific Reports
Apr 10, 2025

Out in Nature Portfolio SciReports today, our work on detecting AI-powered voice clones. TL;DR: AI-generated voices are nearly indistinguishable from human voices. Listeners could only spot a 'fake' 60% of the time. By comparison, randomly guessing gets 50% accuracy.

Releasing The DeepSpeak Dataset
Apr 10, 2025

Tired of using the usual poor-quality, non-consensual and limited deepfake datasets found online, we decided to make our own. Now in its second iteration, the dataset comprises over 50 hours of footage of 500 diverse individuals (recorded with their consent), and 50 hours of deepfake video footage.

In the Washington Post Today
Oct 16, 2024
Writing about Bad Bunny in the Berkeley Science Review
Oct 7, 2024

Our piece about the AI-deepfake of Bad Bunny is out in the Berkeley Science Review now.

Keynote at the Alan Turing Institute Women in AI Security Workshop
Oct 7, 2024

On 17th July 2024, I was delighted to be able to speak as a keynote technical presenter at the Alan Turing Institute Women in AI Security Workshop.

Talking to Vox About FKA twigs, Scarlett Johansson and Audio Deepfakes
Jun 16, 2024

I was invited to share my thoughts with Vox about recent developments in the entertainment industry revolving around FKA twigs, Scarlett Johansson and the looming dawn of audio deepfakes.

Our Work on NPR
Jun 4, 2024

It was great to be a part of this recent NPR piece on voice cloning. I got to talk to Huo Jingnan about all things voice-clone detection, and how there’s no ‘silver bullet’ from machine learning to fix this problem.

Invited to the White House to talk AI Voice Cloning
Jun 4, 2024

My PI, Hany Farid, and I were delighted to join the discussion about AI voice cloning technologies at The White House in January.

Conference: Presenting our Voice Cloning Work at the IEEE International Workshop on Information Forensics and Security
Jan 4, 2024

How do you detect a cloned voice? The simple answer… deep learning. Hugely enjoyed presenting our different detection algorithms and the relative benefits of each at the 2023 IEEE International Workshop on Information Forensics and Security (WIFS) in Nuremberg, Germany.

Talk & workshop: AI & Cybersecurity for the US Department of State TechWomen Initiative
Sep 25, 2023

Had a fantastic day leading the AI & cybersecurity session for the U.S. Department of State #TechWomen23 initiative. I presented an overview of the current AI landscape, ranging from deepfakes and misinformation to the potential threat of cyber-fuelled nuclear warfare.

Working for OpenAI at the Confidence-Building Measures for Artificial Intelligence Workshop, 2023
Aug 22, 2023

Selected student facilitator for the Berkeley Risk and Security Lab & OpenAI workshop on Confidence-Building Measures for Artificial Intelligence (Jan 2023, paper released August 2023).

Paper: Detecting Real vs. Deep-fake Cloned Voices
Aug 21, 2023

Recently, I was fortunate to work with Romit Barua and Gautham Koorma on a project exploring the relative performance of computational approaches to cloned-voice detection. Synthetic-voice cloning technologies have seen significant advances in recent years, giving rise to a range of potential harms.

Conference: Presenting at the 2023 Nobel Prize Summit
May 26, 2023
Paper: Published in Nature Scientific Reports!
Mar 23, 2023

Hany & I have been working for over a year devising a study to examine how well we can estimate height and weight from a single image. We compare state-of-the-art AI, computer vision, and 3D modeling techniques with estimations from experts and non-experts. The results are surprising.

Code: Detecting Deep-Fakes Through Corneal Reflections
Dec 4, 2022
Paper: The ‘Fungibility’ of Non-Fungible Tokens: A Quantitative Analysis of ERC-721 Metadata
Sep 29, 2022

Non-Fungible Tokens (NFTs), digital certificates of ownership for virtual art, have until recently been traded on a highly lucrative and speculative market. Yet, an emergence of misconceptions, along with a sustained market downtime, are calling the value of NFTs into question.

Winning the UC Berkeley & Binance LIFT Big Ideas Contest
May 23, 2022

Since late last year, I have been collaborating on two very exciting projects regarding the feasibility of applications for Web3 technologies as part of the University of California Big Ideas contest.

Stanford University CodeX Conference: DAOs & Systems for Resilient Societies
May 2, 2022

In April, I attended the Stanford University CodeX Blockchain Group's DAOs and Systems for Resilient Societies conference as a collaborator with my colleagues from the Open Earth Foundation (OEF). OEF is a non-profit research organisation with which I have worked on several research projects examining the feasibility and security implications of decentralised technologies for climate applications.

Platforms of Oppression: Countercultures, the Matrix of Domination & Intersectionality in Social Media
Mar 9, 2022

Defined as a conceptual model for considering ‘how power, oppression, resistance, privilege, penalties, benefits and harms are systematically distributed’, the matrix of domination is a term coined by feminist scholar Patricia Hill Collins in her book Black Feminist Thought. Sociotechnical, data-driven systems, in particular, can be considered to enforce and proliferate this structure, which constitutes the thesis explored in Costanza-Chock’s 2017 article. Arguably, the most notable example of such interlocking systems of oppression can be observed in social media moderation algorithms, which typically combine human and algorithmic approaches to curating and silencing online user-generated content.

Middleware Models for Algorithmic Moderation at Scale
technical
Dec 22, 2021

Over the past two decades, Digital Platforms (DPs), including Facebook, Google, Amazon and Apple, have risen from insignificant start-ups to a dominating set of firms with a combined $4 trillion market capitalisation. These now-ubiquitous DPs have revolutionised many aspects of everyday life, from working and studying to communicating and dating. In recognition of this unprecedented rise of commercial power, legal and academic scholarship has begun to revisit the concepts of monopolistic market behaviours and the subsequent potential for both economic and political influence.

Metadata and Non-Fungible Token Architecture
technical
Dec 18, 2021

Cryptocurrency and blockchain technologies are fast becoming areas of public interest across a breadth of diverse domains, with novel applications ranging from financial services to state politics. In particular, promising future use cases of these decentralised technologies aim to redefine the concept of value, and potentially enable a new wave of accessible investment opportunities for the general public. One such example is Non-Fungible Tokens (NFTs), which act as digital certificates of ownership for a given piece of digital content. These certificates are stored as tokens on a public blockchain (such as Ethereum), which uses cryptographic hashing algorithms to ensure that each token is unique, tamper-proof and publicly visible. This adds value to a range of digital content & artworks by enabling a single source of truth for ownership and historic activity.
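As an illustrative sketch (my own simplified example, not the ERC-721 standard or any real blockchain implementation, and all field names are hypothetical), the tamper-evidence described above comes from hashing a token's metadata: any change to the record produces a completely different digest.

```python
import hashlib
import json

def token_fingerprint(metadata: dict) -> str:
    """Hash a token's metadata into a fixed-length, tamper-evident digest."""
    # Canonicalise the record (sorted keys) so identical content always hashes identically
    canonical = json.dumps(metadata, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

original = {"token_id": 721, "owner": "0xabc123", "image_uri": "ipfs://Qmexample"}
tampered = {**original, "owner": "0xdef456"}  # a forged ownership claim

# The digest is deterministic for identical content...
assert token_fingerprint(original) == token_fingerprint(original)
# ...but any edit to the metadata is immediately visible as a mismatch
assert token_fingerprint(original) != token_fingerprint(tampered)
```

On a real chain the analogous digests are computed and stored by consensus, which is what makes the record publicly verifiable rather than just locally checkable.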

The Perception of Threat in Female-Targeted Online Abuse
technical
Nov 7, 2021

As a component of the INFO-272 Research course at the School of Information, I led an independent qualitative research study to explore the perception of threat in a range of online abuse situations through the contexts, themes and experiences of five self-identifying women. Online abuse is becoming an increasingly prevalent issue in modern-day society, with 41% of Americans having experienced online harassment in some capacity in 2021. People who identify as women, in particular, can be subjected to a wide range of abusive behaviour online, with gender-specific experiences cited broadly in recent literature across fields such as blogging, politics and journalism.

Female Experiences of Online Abuse, Sexism and Trolling: Call for Participants
technical
Sep 15, 2021

Online abuse, including trolling, is becoming an increasingly prevalent issue in modern-day society. Women, in particular, face a higher threat of becoming the recipients of online harassment, with examples widely reported in recent literature across blogging [1], politics [2], journalism [3] and multiple other fields. This study, conducted through the School of Information at the University of California, Berkeley, will involve remotely interviewing 5 self-identifying women with the goal of understanding their experiences of online abuse.

Talking start-ups at the Cambridge Institute for Manufacturing
Jun 28, 2021

It is always a pleasure to re-connect with the Institute for Manufacturing (IfM), University of Cambridge, of which I have been an alumna since 2016. Head of Department & my former course lead, Professor Tim Minshall, invited a handful of alumni back to talk about their experiences across multiple industries, and I was excited to talk about my experience of creating, building and operating within start-up companies.

Winning the US-UK Fulbright Scholarship for Data Analytics
technical
Jun 28, 2021

From being told I only got my first job in engineering because I was a woman, to starting a Postgraduate degree … as a Fulbright Scholar!

I'm hugely grateful & honoured to announce that I've received this year's Elsevier US-UK Fulbright Commission Award for Data Analytics, ahead of attending the UC Berkeley School of Information this fall.

After starting my journey as an Engineer almost a decade ago, studying at the Cambridge Engineering Department & Institute for Manufacturing (IfM), University of Cambridge, before working at McLaren Applied, starting two businesses and joining Anmut, I'm excited for my next chapter exploring how Artificial Intelligence can revolutionise our approach to global engineering challenges and bring social good.

'Big Data' Machine Learning using Spark & Azure with the Women in Data Initiative
technical
Dec 4, 2020

As a Data Scientist, I fall directly into the category of programmer who, at least once a week, uses the phrase ‘sure, but… I’m not a Data Engineer’. Back in the days when CSVs, Jupyter notebooks and ‘Titanic’-style datasets from Kaggle were enough to satisfy Data Science needs, this response was pretty valid. Sure enough, this basic stack proved totally sufficient for doing some pretty exciting Machine Learning and analysis.

Data Protection in Social Media: the TikTok Data Request
technical
Dec 3, 2020

Recently, I received a notification in my TikTok app saying that my data request had been successful and that I could now download my personal file. I’m a regular user, and as a Data Scientist, I’m always eager to grasp any opportunity to better understand my own data footprint. Even knowing what I know about online data collection and how powerful this can be in empowering better products, services and opportunities for users, I still tend to feel a little uncomfortable at the sheer volume of what’s returned.

