DCASE 2024 Challenge Task 2 and DCASE 2023 Challenge Task 2 Baseline Auto Encoder: dcase2023_task2_baseline_ae
This is an autoencoder-based baseline for the DCASE2024 Challenge Task 2 (DCASE2024T2) and the DCASE2023 Challenge Task 2 (DCASE2023T2).
This source code is an example implementation of the baseline Auto Encoder of DCASE2024T2 and DCASE2023T2: First-Shot Unsupervised Anomalous Sound Detection for Machine Condition Monitoring. This baseline implementation is based on the previous baseline, dcase2022_baseline_ae. The model parameter settings of this baseline AE are almost equivalent to those of the dcase2022_task2_baseline_ae.
Differences between the previous dcase2022_baseline_ae and this version are as follows:
- The dcase2022_baseline_ae was implemented with Keras; however, this version is written in PyTorch.
- Data folder structure is updated to support DCASE2024T2 and DCASE2023T2 data sets.
- The system uses MSE as the loss function for training; for testing, the score function depends on the operating mode: MSE for the Simple Autoencoder mode and the Mahalanobis distance for the Selective Mahalanobis mode (see the sketch below).
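As a rough illustration of the two score functions (a minimal sketch, not the baseline's actual code; `model`, the feature `frames`, and the fitted `mean`/`cov_inv` statistics are placeholders):

```python
import numpy as np
import torch

def mse_score(model: torch.nn.Module, frames: torch.Tensor) -> float:
    """Simple Autoencoder mode: mean squared reconstruction error over all frames."""
    with torch.no_grad():
        recon = model(frames)                      # reconstruct the input feature frames
    return torch.mean((frames - recon) ** 2).item()

def mahalanobis_score(errors: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray) -> float:
    """Selective Mahalanobis mode: mean squared Mahalanobis distance of the
    per-frame reconstruction errors from statistics fitted on normal training data."""
    diff = errors - mean                           # (num_frames, num_dims) deviations
    return float(np.mean(np.einsum("nd,dk,nk->n", diff, cov_inv, diff)))
```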
This system consists of three main scripts (01_train.sh, 02a_test.sh, and 02b_test.sh) with some helper scripts for DCASE2024T2 (For DCASE2023T2, see README_legacy):
- Helper scripts for DCASE2024T2
  - data_download_2024dev.sh
    - "Development dataset":
      - This script downloads the development data files and puts them into "data/dcase2024t2/dev_data/raw/train/" and "data/dcase2024t2/dev_data/raw/test/".
  - data_download_2024add.sh (updated 2024/05/15)
    - "Additional train dataset for Evaluation":
      - This script downloads the additional training data files and puts them into "data/dcase2024t2/eval_data/raw/train/".
  - data_download_2024eval.sh (newly added)
    - "Additional test dataset for Evaluation":
      - This script downloads the evaluation data files and puts them into "data/dcase2024t2/eval_data/raw/test/".
- 01_train_2024t2.sh
  - "Development" mode:
    - This script trains a model for each machine type and each section ID using the directory `data/dcase2024t2/dev_data/raw/<machine_type>/train/<section_id>`.
  - "Evaluation" mode:
    - This script trains a model for each machine type and each section ID using the directory `data/dcase2024t2/eval_data/raw/<machine_type>/train/<section_id>`. (Updated 2024/05/15)
- 02a_test_2024t2.sh (uses MSE as the score function for the Simple Autoencoder mode)
  - "Development" mode:
    - This script makes a CSV file for each section, including the anomaly scores for each WAV file in the directories `data/dcase2024t2/dev_data/raw/<machine_type>/test/`.
    - The CSV files will be stored in the directory `results/`.
    - It also makes a CSV file including AUC, pAUC, precision, recall, and F1-score for each section.
  - "Evaluation" mode (newly added):
    - This script makes a CSV file for each section, including the anomaly scores for each WAV file in the directories `data/dcase2024t2/eval_data/raw/<machine_type>/test/`. (These directories will be made available with the "evaluation dataset".)
    - The CSV files are stored in the directory `results/`.
- 02b_test_2024t2.sh (uses the Mahalanobis distance as the score function for the Selective Mahalanobis mode)
  - "Development" mode:
    - This script makes a CSV file for each section, including the anomaly scores for each WAV file in the directories `data/dcase2024t2/dev_data/raw/<machine_type>/test/`.
    - The CSV files will be stored in the directory `results/`.
    - It also makes a CSV file including AUC, pAUC, precision, recall, and F1-score for each section.
  - "Evaluation" mode (newly added):
    - This script makes a CSV file for each section, including the anomaly scores for each WAV file in the directories `data/dcase2024t2/eval_data/raw/<machine_type>/test/`. (These directories will be made available with the "evaluation dataset".)
    - The CSV files are stored in the directory `results/`.
- 03_summarize_results.sh
  - This script summarizes the results into a CSV file.

For DCASE2023T2, see README_legacy.
Clone this repository from GitHub.
We will launch the datasets in three stages. Therefore, please download the datasets in each stage:
- DCASE 2024 Challenge Task 2
  - "Development Dataset" (new, 2024/04/01)
    - Download "dev_data_<machine_type>.zip" from https://zenodo.org/records/10902294.
  - "Additional Training Dataset", i.e., the evaluation dataset for training
    - Download "eval_data_<machine_type>_[train|train_r2].zip" from https://zenodo.org/records/11259435. (New, 2024/05/24; this replaces "eval_data_<machine_type>_train.zip" from https://zenodo.org/records/11183284, updated on 2024/05/15.)
  - "Evaluation Dataset", i.e., the evaluation dataset for test
    - Download "eval_data_<machine_type>_test.zip" from https://zenodo.org/records/11363076.
- DCASE 2023 Challenge Task 2 (for DCASE2023T2, see README_legacy)
  - "Development Dataset"
    - Download "dev_data_<machine_type>.zip" from https://zenodo.org/record/7882613.
  - "Additional Training Dataset", i.e., the evaluation dataset for training
    - Download "eval_data_<machine_type>_train.zip" from https://zenodo.org/record/7830345.
  - "Evaluation Dataset", i.e., the evaluation dataset for test
    - Download "eval_data_<machine_type>_test.zip" from https://zenodo.org/record/7860847.
After downloading and unzipping, the directory structure should look as follows:

- dcase2023_task2_baseline_ae
  - data/dcase2024t2/dev_data/raw/
    - fan/
      - train/ (only normal clips)
        - section_00_source_train_normal_0000_.wav
        - ...
        - section_00_source_train_normal_0989_.wav
        - section_00_target_train_normal_0000_.wav
        - ...
        - section_00_target_train_normal_0009_.wav
      - test/
        - section_00_source_test_normal_0000_.wav
        - ...
        - section_00_source_test_normal_0049_.wav
        - section_00_source_test_anomaly_0000_.wav
        - ...
        - section_00_source_test_anomaly_0049_.wav
        - section_00_target_test_normal_0000_.wav
        - ...
        - section_00_target_test_normal_0049_.wav
        - section_00_target_test_anomaly_0000_.wav
        - ...
        - section_00_target_test_anomaly_0049_.wav
      - attributes_00.csv (attributes CSV for section 00)
    - gearbox/ (The other machine types have the same directory structure as fan/.)
  - data/dcase2024t2/eval_data/raw/
    - <machine_type0_of_additional_dataset>/
      - train/ (after launch of the additional training dataset)
        - section_00_source_train_normal_0000_.wav
        - ...
        - section_00_source_train_normal_0989_.wav
        - section_00_target_train_normal_0000_.wav
        - ...
        - section_00_target_train_normal_0009_.wav
        - attributes_00.csv (attributes CSV for section 00)
      - test/ (after launch of the evaluation dataset)
        - section_00_test_0000.wav
        - ...
        - section_00_test_0199.wav
      - test_rename/ (converted from the test directory using tools/rename.py)
        - section_00_source_test_normal_<0000~0200>_<attribute>.wav
        - ...
        - section_00_source_test_anomaly_<0000~0200>_<attribute>.wav
        - ...
        - section_00_target_test_normal_<0000~0200>_<attribute>.wav
        - ...
        - section_00_target_test_anomaly_<0000~0200>_<attribute>.wav
        - ...
      - attributes_00.csv (attributes CSV for section 00)
    - <machine_type1_of_additional_dataset>/ (The other machine types have the same directory structure as <machine_type0_of_additional_dataset>/.)
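The file names above encode the section, domain, split, and label. A minimal parsing sketch (the helper `parse_filename` is our own illustration, not part of the baseline; unlabeled evaluation files use the shorter `section_00_test_0000.wav` form and are not covered here):

```python
import re

# Matches names such as "section_00_source_test_anomaly_0049_.wav" or
# "section_00_target_train_normal_0009_<attribute>.wav".
PATTERN = re.compile(
    r"section_(?P<section>\d+)_(?P<domain>source|target)_"
    r"(?P<split>train|test)_(?P<label>normal|anomaly)_(?P<index>\d+)_(?P<attribute>.*)\.wav"
)

def parse_filename(name: str) -> dict:
    match = PATTERN.match(name)
    if match is None:
        raise ValueError(f"unexpected file name: {name}")
    return match.groupdict()

print(parse_filename("section_00_source_test_anomaly_0049_.wav"))
# -> {'section': '00', 'domain': 'source', 'split': 'test',
#     'label': 'anomaly', 'index': '0049', 'attribute': ''}
```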
You can change the parameters for feature extraction and model definition by editing `baseline.yaml`. Note that values specified as command line options overwrite the corresponding parameter settings in `baseline.yaml`.

If you have not downloaded the dataset yourself and have not run a download script (e.g., `data_download_2024dev.sh`), you may want to use auto-download. To enable auto-downloading, set the parameter `--is_auto_download` (default: `False`) to `True` in `baseline.yaml`. If `--is_auto_download` is `True`, the dataset is downloaded automatically.
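Conceptually, the override behaviour works like this (a generic sketch for illustration only, not the baseline's actual argument parser; only the `--is_auto_download` option mentioned above is shown):

```python
import argparse
import yaml

# Load defaults from baseline.yaml, then let command line options overwrite them.
with open("baseline.yaml") as f:
    params = yaml.safe_load(f)

parser = argparse.ArgumentParser()
parser.add_argument("--is_auto_download",
                    type=lambda s: s.lower() == "true", default=None)
args = parser.parse_args()

# Overwrite a YAML value only when the option was actually given on the command line.
if args.is_auto_download is not None:
    params["is_auto_download"] = args.is_auto_download

print("auto download:", params.get("is_auto_download", False))
```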
Run the training script `01_train_2024t2.sh`. Use the option `-d` for the development dataset `data/dcase2024t2/dev_data/raw/<machine_type>/train/`.

`01_train_2024t2.sh` differs from `01_train_2023t2.sh` only in the dataset used:

```bash
# using DCASE2024 Task 2 Datasets
$ 01_train_2024t2.sh -d
```
The two operating modes of this baseline implementation, the Simple Autoencoder mode and the Selective Mahalanobis AE mode, share a common training process. Running the script `01_train_2024t2.sh` trains the model parameters for both the Simple Autoencoder and the Selective Mahalanobis AE at the same time.
After the parameter update of the Autoencoder at the last epoch, specified by either the YAML file or a command line option, the covariance matrices for the Mahalanobis distance calculation are set.
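Conceptually, those covariance statistics could be obtained as follows (a sketch under our own naming, not the baseline's exact routine):

```python
import numpy as np

def fit_mahalanobis_stats(train_errors: np.ndarray, eps: float = 1e-6):
    """Estimate the mean and regularized inverse covariance of the per-frame
    reconstruction errors collected on the normal training data.

    train_errors: array of shape (num_frames, num_dims)
    """
    mean = train_errors.mean(axis=0)
    cov = np.cov(train_errors, rowvar=False)
    cov += eps * np.eye(cov.shape[0])        # small ridge for numerical stability
    cov_inv = np.linalg.inv(cov)
    return mean, cov_inv
```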
Run the test script `02a_test_2024t2.sh`. Use the option `-d` for the development dataset `data/dcase2024t2/dev_data/raw/<machine_type>/test/`.

`02a_test_2024t2.sh` differs from `02a_test_2023t2.sh` only in the dataset used:

```bash
# using DCASE2024 Task 2 Datasets
$ 02a_test_2024t2.sh -d
```
The `02a_test_2024t2.sh` options are the same as those for `01_train_2024t2.sh`. `02a_test_2024t2.sh` calculates an anomaly score for each WAV file in the directories `data/dcase2024t2/dev_data/raw/<machine_type>/test/`, or `data/dcase2024t2/dev_data/raw/<machine_type>/source_test/` and `data/dcase2024t2/dev_data/raw/<machine_type>/target_test/`.

A CSV file for each section, including the anomaly scores, will be stored in the directory `results/`. If the mode is "development", the script also outputs another CSV file, including AUC, pAUC, precision, recall, and F1-score for each section.
Run the test script `02b_test_2024t2.sh`. Use the option `-d` for the development dataset `data/dcase2024t2/dev_data/raw/<machine_type>/test/`.

`02b_test_2024t2.sh` differs from `02b_test_2023t2.sh` only in the dataset used:

```bash
# using DCASE2024 Task 2 Datasets
$ 02b_test_2024t2.sh -d
```
The `02b_test_2024t2.sh` options are the same as those for `01_train_2024t2.sh`. `02b_test_2024t2.sh` calculates an anomaly score for each WAV file in the directories `data/dcase2024t2/dev_data/raw/<machine_type>/test/`, or `data/dcase2024t2/dev_data/raw/<machine_type>/source_test/` and `data/dcase2024t2/dev_data/raw/<machine_type>/target_test/`.

A CSV file for each section, including the anomaly scores, will be stored in the directory `results/`. If the mode is "development", the script also outputs another CSV file, including AUC, pAUC, precision, recall, and F1-score for each section.
You can check the anomaly scores in the CSV files `anomaly_score_<machine_type>_section_<section_index>_test.csv` in the directory `results/`.
Each anomaly score corresponds to a WAV file in the directories `data/dcase2024t2/dev_data/raw/<machine_type>/test/`.

`anomaly_score_<machine_type>_section_00_test.csv`

```
section_00_source_test_normal_0000_car_A2_spd_28V_mic_1_noise_1.wav,0.3084583878517151
section_00_source_test_normal_0001_car_A2_spd_28V_mic_1_noise_1.wav,0.31289517879486084
section_00_source_test_normal_0002_car_A2_spd_28V_mic_1_noise_1.wav,0.4160425364971161
section_00_source_test_normal_0003_car_A2_spd_28V_mic_1_noise_1.wav,0.25631701946258545
```
Also, anomaly detection results based on the corresponding threshold can be checked in the CSV files `decision_result_<machine_type>_section_<section_index>_test.csv`:

`decision_result_<machine_type>_section_<section_index>_test.csv`

```
section_00_source_test_normal_0000_car_A2_spd_28V_mic_1_noise_1.wav,0
section_00_source_test_normal_0001_car_A2_spd_28V_mic_1_noise_1.wav,0
section_00_source_test_normal_0002_car_A2_spd_28V_mic_1_noise_1.wav,0
section_00_source_test_normal_0003_car_A2_spd_28V_mic_1_noise_1.wav,0
...
```
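For reference, a decision file can be reproduced from an anomaly-score file by applying a threshold (pandas sketch; the file name and `THRESHOLD` value are placeholders, and the baseline determines its own threshold from the training scores):

```python
import pandas as pd

THRESHOLD = 0.5  # placeholder; the baseline derives its own decision threshold

scores = pd.read_csv(
    "results/anomaly_score_ToyCar_section_00_test.csv",
    header=None, names=["file", "score"],
)
scores["decision"] = (scores["score"] > THRESHOLD).astype(int)  # 1 = anomaly, 0 = normal
scores[["file", "decision"]].to_csv(
    "decision_result_ToyCar_section_00_test_recomputed.csv",
    header=False, index=False,
)
```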
In addition, you can check performance indicators such as AUC, pAUC, precision, recall, and F1-score:

`result.csv`

```
section, AUC (source), AUC (target), pAUC, pAUC (source), pAUC (target), precision (source), precision (target), recall (source), recall (target), F1 score (source), F1 score (target)
00,0.88,0.5078,0.5063157894736842,0.5536842105263158,0.4926315789473684,0.0,0.0,0.0,0.0,0.0,0.0
arithmetic mean,0.88,0.5078,0.5063157894736842,0.5536842105263158,0.4926315789473684,0.0,0.0,0.0,0.0,0.0,0.0
harmonic mean,0.88,0.5078,0.5063157894736842,0.5536842105263158,0.4926315789473684,0.0,0.0,0.0,0.0,0.0,0.0
```
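If you want to sanity-check such numbers yourself, AUC and a pAUC-like value (AUC restricted to a low false-positive-rate range, p = 0.1 in this task) can be computed with scikit-learn. Note that `roc_auc_score(..., max_fpr=0.1)` applies the McClish standardization, so its value may differ slightly from the official evaluator; use the provided evaluator for official scores. A sketch with made-up labels and scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# y_true: 1 for anomalous clips, 0 for normal clips; y_score: anomaly scores from results/*.csv
y_true = np.array([0, 0, 0, 1, 1, 1])
y_score = np.array([0.26, 0.31, 0.42, 0.55, 0.38, 0.61])

auc = roc_auc_score(y_true, y_score)
pauc = roc_auc_score(y_true, y_score, max_fpr=0.1)  # partial AUC over FPR in [0, 0.1]
print(f"AUC = {auc:.4f}, pAUC = {pauc:.4f}")
```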
After the additional training dataset is launched, download and unzip it. Move it to `data/dcase2024t2/eval_data/raw/<machine_type>/train/`. Run the training script `01_train_2024t2.sh` with the option `-e`:

```bash
$ 01_train_2024t2.sh -e
```

Models are trained using the additional training dataset `data/dcase2024t2/eval_data/raw/<machine_type>/train/`.
After the evaluation dataset for the test is launched, download and unzip it. Move it to `data/dcase2024t2/eval_data/raw/<machine_type>/test/`. Run the test script `02a_test_2024t2.sh` with the option `-e`:

```bash
$ 02a_test_2024t2.sh -e
```

Anomaly scores are calculated using the evaluation dataset, i.e., `data/dcase2024t2/eval_data/raw/<machine_type>/test/`. The anomaly scores are stored as CSV files in the directory `results/`. You can submit these CSV files for the challenge. From the submitted CSV files, we will calculate AUC, pAUC, and your ranking.
If you used the rename script to generate the `test_rename` directory, AUC and pAUC are also calculated.
After the evaluation dataset for the test is launched, download and unzip it. Move it to `data/dcase2024t2/eval_data/raw/<machine_type>/test/`. Run the test script `02b_test_2024t2.sh` with the option `-e`:

```bash
$ 02b_test_2024t2.sh -e
```

Anomaly scores are calculated using the evaluation dataset, i.e., `data/dcase2024t2/eval_data/raw/<machine_type>/test/`. The anomaly scores are stored as CSV files in the directory `results/`. You can submit these CSV files for the challenge. From the submitted CSV files, we will calculate AUC, pAUC, and your ranking.
If you used the rename script to generate the `test_rename` directory, AUC and pAUC are also calculated.
After running `02a_test_2024t2.sh`, `02b_test_2024t2.sh`, or both, run the summarization script `03_summarize_results.sh` with the option `DCASE2024T2 -d` or `DCASE2024T2 -e`:

```bash
# Summarize the development dataset 2024
$ 03_summarize_results.sh DCASE2024T2 -d

# Summarize the evaluation dataset 2024
$ 03_summarize_results.sh DCASE2024T2 -e
```

After summarization, the results are exported in CSV format to `results/dev_data/baseline/summarize/DCASE2024T2` or `results/eval_data/baseline/summarize/DCASE2024T2`.

If you want to change the results directory to summarize or the export directory, edit `03_summarize_results.sh`.
This version accepts the legacy datasets provided for DCASE2020 Task 2, DCASE2021 Task 2, DCASE2022 Task 2, and DCASE2023 Task 2, as well as the DCASE2024 Task 2 dataset, as inputs. The legacy support scripts are similar to the main scripts and are located in the `tools` directory.
We developed and tested the source code on Ubuntu 20.04.4 LTS.
- Python == 3.10.8
- cuda == 11.6
- libsndfile1
- PyTorch == 1.13.1
- torchvision == 0.14.1
- numpy == 1.22.3
- pyYAML == 6.0
- scipy == 1.10.1
- librosa == 0.9.2
- matplotlib == 3.7.0
- tqdm == 4.63
- seaborn == 0.12.2
- fasteners == 0.18
- Added a link to the DCASE2024 task2 evaluator that calculates the official score.
- Added DCASE2024 Task2 Ground Truth data.
- Added DCASE2024 Task2 Ground truth attributes.
- The legacy script has been updated to be compatible with DCASE2024 Task2.
- Updated the README and README legacy files to reflect the latest citations.
- Fixed issues in 02a_test_2024t2.sh and 02b_test_2024t2.sh.
- The URL for downloading the DCASE2024T2 evaluation dataset was incorrect; fixed in data_download_2024eval.sh.
- Provides support for the evaluation dataset to be used in DCASE2024T2.
- Corrected the additional training dataset files used in DCASE2024T2.
- eval_data_3DPrinter_train.zip and eval_data_RoboticArm_train.zip were replaced with eval_data_3DPrinter_train_r2.zip and eval_data_RoboticArm_train_r2.zip
- For the other machine types, data files are identical.
- This version reflects the update on the additional training dataset.
- Provides support for the additional training datasets to be used in DCASE2024T2.
- Added information about ground truth and citations for each year's task in README.md and README_legacy.md.
- Provides support for the legacy datasets used in DCASE2020, 2021, 2022, and 2023.
- Fixed a typo in README.md in the previous release, v3.0.0.
- Provides support for the development datasets used in DCASE2024.
- Fixed the anomaly score distribution.
- The decision threshold has changed, but AUC, pAUC, etc. have not changed.
- Provides support for the legacy datasets used in DCASE2020, 2021, 2022, and 2023.
The following code was used to calculate the official scores, and it includes the ground truth for the evaluation datasets.
This repository contains the ground-truth CSV files for the evaluation data. These CSV files are used to rename the evaluation dataset files; once the ground truth is added to the file names, you can calculate AUC and the other scores. (Usually, the rename function is executed together with the download scripts and the auto-download function.)
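As a rough idea of what the renaming does (a hypothetical sketch only; the actual logic lives in tools/rename.py, and the two-column layout of the ground-truth CSV assumed here is not guaranteed):

```python
import csv
import os
import shutil

def rename_eval_files(ground_truth_csv: str, test_dir: str, out_dir: str) -> None:
    """Copy evaluation clips to the labeled file names listed in a ground-truth CSV
    (assumed columns: original name, renamed name with labels/attributes)."""
    os.makedirs(out_dir, exist_ok=True)
    with open(ground_truth_csv, newline="") as f:
        for original, renamed in csv.reader(f):
            src = os.path.join(test_dir, original)
            if os.path.exists(src):
                shutil.copy2(src, os.path.join(out_dir, renamed))
```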
Attribute information is hidden by default for the following machine types:
- dev data
- gearbox
- slider
- ToyTrain
- eval data
- AirCompressor
- BrushlessMotor
- HoveringDrone
- ToothBrush
You can view the hidden attributes in the following directory:
If you use this system, please cite all the following four papers:
- Tomoya Nishida, Noboru Harada, Daisuke Niizumi, Davide Albertini, Roberto Sannino, Simone Pradolini, Filippo Augusti, Keisuke Imoto, Kota Dohi, Harsh Purohit, Takashi Endo, and Yohei Kawaguchi. Description and discussion on DCASE 2024 challenge task 2: first-shot unsupervised anomalous sound detection for machine condition monitoring. In arXiv e-prints: 2406.07250, 2024. URL
- Noboru Harada, Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Masahiro Yasuda, and Shoichiro Saito. ToyADMOS2: another dataset of miniature-machine operating sounds for anomalous sound detection under domain shift conditions. In Proceedings of the Detection and Classification of Acoustic Scenes and Events Workshop (DCASE), 1–5. Barcelona, Spain, November 2021. URL
- Kota Dohi, Tomoya Nishida, Harsh Purohit, Ryo Tanabe, Takashi Endo, Masaaki Yamamoto, Yuki Nikaido, and Yohei Kawaguchi. MIMII DG: sound dataset for malfunctioning industrial machine investigation and inspection for domain generalization task. In Proceedings of the 7th Detection and Classification of Acoustic Scenes and Events 2022 Workshop (DCASE2022). Nancy, France, November 2022. URL
- Noboru Harada, Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, and Masahiro Yasuda. First-shot anomaly detection for machine condition monitoring: a domain generalization baseline. Proceedings of 31st European Signal Processing Conference (EUSIPCO), pages 191–195, 2023. URL