Meta found a way to secretly watermark deepfake audio

Researchers have found a way to imperceptibly watermark fake audio

aimodels-fyi
Feb 03, 2024
Meta can now secretly watermark deepfake audio
"Proactive detection of AI-generated speech. We embed an imperceptible watermark in the audio, which can be used to detect if a speech is AI-generated and identify the model that generated it."

The rapid advancement of AI voice-synthesis technology has made it possible to generate extremely realistic fake human speech. The same capability opens up concerning possibilities: voice cloning, deepfakes, and other forms of audio manipulation (the recent fake Biden robocall being the first example that comes to mind).

Robust new detection methods are needed to separate audio deepfakes from real recordings. In this post, we'll take a look at AudioSeal (github, paper), a new technique from Facebook Research that tackles the problem by imperceptibly watermarking AI-generated speech. We'll see how it works and then cover some applications and limitations. Let's go!
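To make the idea concrete, here is a minimal sketch of how AudioSeal's generator/detector pair is used. It follows the interface documented in the facebookresearch/audioseal README; treat the model-card names, tensor shapes, and return values below as assumptions to verify against the repo rather than a definitive reference.

```python
# pip install audioseal  (requires torch)
import torch
from audioseal import AudioSeal

# Load the watermark generator and detector. The card names
# ("audioseal_wm_16bits", "audioseal_detector_16bits") are taken from the
# audioseal README -- verify against the repo before relying on them.
generator = AudioSeal.load_generator("audioseal_wm_16bits")
detector = AudioSeal.load_detector("audioseal_detector_16bits")

# The models expect 16 kHz audio as a (batch, channels, samples) tensor.
# One second of random noise stands in for real speech here.
sample_rate = 16000
audio = torch.randn(1, 1, sample_rate)

# The generator predicts an additive watermark signal designed to be
# imperceptible; the watermarked clip is simply audio + watermark.
watermark = generator.get_watermark(audio, sample_rate)
watermarked = audio + watermark

# The detector returns the probability that the clip is watermarked, plus
# the decoded 16-bit message, which can encode which model made the audio.
prob, message = detector.detect_watermark(watermarked, sample_rate)
print("watermark probability:", prob)
print("decoded message bits:", message)
```

Note the design choice implied by the quote above: the watermark is a learned signal added to the waveform itself, not metadata attached to a file, so detection needs nothing but the audio and is intended to survive common edits and re-encoding.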

