Interpreting End-to-End Deep Learning Models for Acoustic Source Localization using Layer-wise Relevance Propagation

Code repository for the paper Interpreting End-to-End Deep Learning Models for Acoustic Source Localization using Layer-wise Relevance Propagation, EUSIPCO 2024 [1].

For any further information, feel free to contact me at luca.comanducci@polimi.it

Dependencies

  • Python (tested with version 3.9.18)
  • NumPy, tqdm, matplotlib
  • PyTorch 2.1.2+cu118
  • zennit
  • gpuRIR
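A possible installation sketch (package names and the CUDA wheel index are assumptions to be adapted to your environment; gpuRIR is not on PyPI and is usually installed from its source repository):

```shell
# Assumed install commands; adjust versions and CUDA index to your setup.
pip install numpy tqdm matplotlib zennit
pip install torch==2.1.2 --index-url https://download.pytorch.org/whl/cu118
# gpuRIR is typically installed from source, e.g.:
pip install git+https://github.com/DavidDiazGuerra/gpuRIR
```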

Data generation

The generate_data.py script generates the data using the room parameters contained in params.py.

The command-line arguments are the following:

  • T60: float, reverberation time (T60)
  • SNR: int, signal-to-noise ratio (SNR)
  • gpu: int, index of the GPU to use (if multiple are available)
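As a sketch, the arguments above might be parsed as follows (the exact flag names and defaults live in generate_data.py and are assumed here):

```python
import argparse

# Hypothetical reconstruction of the CLI described above; the real flag
# definitions are in generate_data.py and may differ.
parser = argparse.ArgumentParser(description="Generate simulated data")
parser.add_argument("--T60", type=float, default=0.4,
                    help="reverberation time (T60)")
parser.add_argument("--SNR", type=int, default=20,
                    help="signal-to-noise ratio (SNR)")
parser.add_argument("--gpu", type=int, default=0,
                    help="index of the GPU to use")

# Example invocation: python generate_data.py --T60 0.6 --SNR 10 --gpu 1
args = parser.parse_args(["--T60", "0.6", "--SNR", "10", "--gpu", "1"])
print(args.T60, args.SNR, args.gpu)  # 0.6 10 1
```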

Network training

The train.py script trains the network.

The command-line arguments are the following:

  • T60: float, reverberation time (T60)
  • SNR: int, signal-to-noise ratio (SNR)
  • gpu: int, index of the GPU to use (if multiple are available)
  • data_path: string, path to the directory where the dataset is saved
  • log_dir: string, path where the TensorBoard logs are stored

Results computation

To perform the XAI experiments:

The perturbation_experiment.py script performs the input feature manipulation experiment.

The command-line arguments are the following:

  • T60: float, reverberation time (T60)
  • SNR: int, signal-to-noise ratio (SNR)
  • gpu: int, index of the GPU to use (if multiple are available)

The tdoa_experiment.py script performs the time-delay estimation experiment.

The command-line arguments are the following:

  • T60: float, reverberation time (T60)
  • SNR: int, signal-to-noise ratio (SNR)
  • gpu: int, index of the GPU to use (if multiple are available)
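Putting the steps together, a full run might look like the following (flag syntax and paths are assumed from the argument lists above; check each script's own argument parser):

```shell
# Hypothetical end-to-end run; flag names assumed from the lists above.

# 1. Generate the dataset for one acoustic condition
python generate_data.py --T60 0.4 --SNR 20 --gpu 0

# 2. Train the localization network on it
python train.py --T60 0.4 --SNR 20 --gpu 0 \
    --data_path ./data --log_dir ./logs

# 3. Run the XAI experiments on the trained model
python perturbation_experiment.py --T60 0.4 --SNR 20 --gpu 0
python tdoa_experiment.py --T60 0.4 --SNR 20 --gpu 0
```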

The Jupyter notebooks Input_visualization_paper.ipynb and Plot_Perturbation.ipynb can be used to reproduce the figures presented in the paper.

N.B.: the pre-trained models used to compute the results shown in [1] can be found in the models folder.

References

[1] L. Comanducci, F. Antonacci, A. Sarti, "Interpreting End-to-End Deep Learning Models for Acoustic Source Localization using Layer-wise Relevance Propagation," accepted at EUSIPCO 2024.
