This repository provides a pipeline for automatic download, preprocessing, and feature extraction of audio files for training Deep Neural Networks (DNNs) in the task of singing voice deepfake detection.
The preprocessing pipeline is composed of four scripts that must be executed in order:
1. download_script.py – Download audio files based on metadata provided in singfake.csv.
2. audio_separation.py – Use Demucs to separate vocals from background music and keep only the singing voice.
3. vad_segmentation.py – Use PyAnnote Voice Activity Detection (VAD) to segment the audio and remove silence.
4. generate_melspec.py – Generate mel-spectrogram features from the segmented audio and export them as .pth tensors, along with a meta.csv file.
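The four stages above can be chained with a small driver; a minimal sketch (the script names come from this repository, everything else — including any CLI flags the scripts may need — is an assumption):

```python
# Minimal driver sketch: run the four pipeline stages in order.
# Script names are from this repository; any flags they require
# are not shown here.
import subprocess
import sys

STAGES = [
    "download_script.py",    # 1. download audio listed in singfake.csv
    "audio_separation.py",   # 2. Demucs vocal separation
    "vad_segmentation.py",   # 3. PyAnnote VAD segmentation
    "generate_melspec.py",   # 4. mel-spectrogram extraction
]

def run_pipeline():
    for script in STAGES:
        print(f"Running {script} ...")
        # check=True aborts the pipeline if a stage fails
        subprocess.run([sys.executable, script], check=True)

# In a checkout of the repository:
#     run_pipeline()
```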
The script download_script.py requires the file singfake.csv, which can be downloaded from https://singfake.org/ or generated manually.
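As a rough sketch, downloading the audio for a single Url entry could use yt-dlp's Python API; the options shown are illustrative assumptions, not necessarily what download_script.py does:

```python
# Hedged sketch of the download step via yt-dlp's Python API.
# The option values (format, output template) are illustrative only.
def download_audio(url: str, out_dir: str = "audio") -> None:
    from yt_dlp import YoutubeDL  # third-party; deferred import

    opts = {
        "format": "bestaudio/best",              # prefer audio-only streams
        "outtmpl": f"{out_dir}/%(id)s.%(ext)s",  # one file per video id
    }
    with YoutubeDL(opts) as ydl:
        ydl.download([url])
```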
The CSV must contain the following columns:
- Set
- Bonafide Or Spoof
- Language
- Singer
- Title
- Model
- Url
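For a manually generated CSV, a quick header check can catch missing columns early (column names taken from the list above):

```python
# Validate that a singfake.csv-style file contains the required columns.
import csv
import io

REQUIRED_COLUMNS = ["Set", "Bonafide Or Spoof", "Language",
                    "Singer", "Title", "Model", "Url"]

def missing_columns(csv_file) -> list:
    """Return the required columns absent from the CSV header."""
    header = csv.DictReader(csv_file).fieldnames or []
    return [c for c in REQUIRED_COLUMNS if c not in header]

# Illustrative header-only example (no real metadata):
sample = io.StringIO("Set,Bonafide Or Spoof,Language,Singer,Title,Model,Url\n")
print(missing_columns(sample))  # → []
```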
The pipeline requires the following Python packages:
- torch
- torchaudio
- demucs
- pyannote.audio
- yt-dlp
To use the PyAnnote Voice Activity Detection pipeline, proceed as follows:
- Visit hf.co/pyannote/segmentation and accept the user conditions.
- Visit hf.co/settings/tokens to create an access token, and make it available to vad_segmentation.py.
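Once the token exists, loading a VAD pipeline typically looks like the sketch below; the exact model name used by vad_segmentation.py is an assumption here:

```python
# Sketch: load a PyAnnote VAD pipeline with a Hugging Face access token.
# VAD_MODEL is an assumption; check vad_segmentation.py for the real one.
VAD_MODEL = "pyannote/voice-activity-detection"

def load_vad_pipeline(hf_token: str):
    from pyannote.audio import Pipeline  # third-party; deferred import
    # Requires having accepted the model's user conditions on hf.co
    return Pipeline.from_pretrained(VAD_MODEL, use_auth_token=hf_token)

# vad = load_vad_pipeline("hf_...")
# speech = vad("vocals.wav")  # detected speech regions
```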
At the end of the pipeline, mel-spectrograms will be stored under the melspec/ directory, and a metadata file meta.csv will be created.
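Consuming that output for training might look like the following sketch; the meta.csv column name used for the tensor path is hypothetical, so inspect the generated meta.csv for the actual schema:

```python
# Sketch: iterate over meta.csv and load each mel-spectrogram tensor.
# The "file" column name is hypothetical; generate_melspec.py defines
# the real schema.
import csv
from pathlib import Path

def iter_melspecs(meta_file="meta.csv", melspec_dir="melspec"):
    import torch  # third-party; deferred import

    with open(meta_file, newline="") as f:
        for row in csv.DictReader(f):
            tensor = torch.load(Path(melspec_dir) / row["file"])
            yield row, tensor
```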
We will extend this repository with singing voice deepfake detection models, including:
- Baseline architectures
- Pretrained weights
- Evaluation scripts
Stay tuned for updates! ✨