Recently, I was fortunate to work with Romit Barua and Gautham Koorma on a project exploring the relative performance of computational approaches to cloned-voice detection.
Synthetic-voice cloning technologies have seen significant advances in recent years, giving rise to a range of potential harms. With threats ranging from small- and large-scale financial fraud to disinformation campaigns, reliable methods for differentiating real from synthesized voices are imperative. In our recent paper, we describe three techniques for differentiating a real voice from a cloned voice designed to impersonate a specific person.
The three approaches differ in their feature-extraction stage: low-dimensional perceptual features offer high interpretability but lower accuracy; generic spectral features sit in between; and end-to-end learned features offer less interpretability but higher accuracy. We show the efficacy of these approaches when trained on a single speaker's voice and when trained on multiple voices. The learned features consistently yield an equal error rate between 0% and 4%, and are reasonably robust to adversarial laundering.
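For readers unfamiliar with the metric: the equal error rate (EER) is the operating point at which the false-accept rate (a cloned voice accepted as real) equals the false-reject rate (a real voice flagged as cloned). Below is a minimal, illustrative sketch of how an EER can be computed from classifier scores; the score distributions are synthetic toy data, not from our paper, and the threshold-sweep approach shown is one common way to estimate the crossing point.

```python
import numpy as np

def equal_error_rate(real_scores, clone_scores):
    """Estimate the EER given scores where higher means "more likely real".

    Sweeps a decision threshold over all observed scores and returns the
    error rate at the point where FAR and FRR are closest.
    """
    real_scores = np.asarray(real_scores, dtype=float)
    clone_scores = np.asarray(clone_scores, dtype=float)
    thresholds = np.sort(np.concatenate([real_scores, clone_scores]))
    # FAR: fraction of cloned voices scoring at or above the threshold.
    far = np.array([(clone_scores >= t).mean() for t in thresholds])
    # FRR: fraction of real voices scoring below the threshold.
    frr = np.array([(real_scores < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0

# Toy example: well-separated score distributions yield a low EER.
rng = np.random.default_rng(0)
real = rng.normal(1.0, 0.5, 500)    # hypothetical scores for real voices
clone = rng.normal(-1.0, 0.5, 500)  # hypothetical scores for cloned voices
print(f"EER: {equal_error_rate(real, clone):.3f}")
```

An EER of 0% corresponds to perfectly separable score distributions, which is why the 0–4% range reported for the learned features indicates strong discrimination.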
The full paper is currently under review, but the pre-print can be found here: https://arxiv.org/abs/2307.07683