# Adversarial Attacks on Question Answering models

Code to reproduce the results in the paper:

Mudrakarta, Pramod Kaushik, Ankur Taly, Mukund Sundararajan, and Kedar Dhamdhere. "Did the Model Understand the Question?" ACL 2018.

```
@article{mudrakarta2018did,
  title={Did the Model Understand the Question?},
  author={Mudrakarta, Pramod Kaushik and Taly, Ankur and Sundararajan, Mukund and Dhamdhere, Kedar},
  journal={arXiv preprint arXiv:1805.05492},
  year={2018}
}
```

## Setup

Clone the repository using:

```
git clone https://github.com/pramodkaushik/acl18_results.git --recursive
```
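The `--recursive` flag also fetches the repository's git submodules. If you have already cloned without it, the submodules can be fetched afterwards with:

```
git submodule update --init --recursive
```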

## Code for experiments

Attacks on Neural Programmer are in the `np_analysis` folder, and attacks on visual question answering are in `visual_qa_analysis`. Code for computing attributions via Integrated Gradients and for reproducing the experiments is provided as Jupyter notebooks in both directories.
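For reference, Integrated Gradients attributes a model's prediction to its input features by accumulating the model's gradients along the straight-line path from a baseline input to the actual input. The sketch below is a minimal, framework-agnostic illustration of the method, not the code used in the notebooks; `grad_fn` is a hypothetical hook that returns the gradient of the model's output with respect to its input.

```python
import numpy as np

def integrated_gradients(inp, baseline, grad_fn, steps=50):
    """Riemann-sum approximation of Integrated Gradients.

    inp      : np.ndarray input to attribute (e.g. embedded question tokens)
    baseline : np.ndarray reference input of the same shape (often zeros)
    grad_fn  : callable x -> dF/dx, gradient of the model output F at x
               (hypothetical hook; not part of this repository's API)
    steps    : number of interpolation points along the path
    """
    # Points on the straight line from the baseline to the input (skip alpha = 0).
    alphas = np.linspace(0.0, 1.0, steps + 1)[1:]
    # Average the model's gradient over the interpolated inputs.
    avg_grads = np.mean(
        [grad_fn(baseline + a * (inp - baseline)) for a in alphas], axis=0
    )
    # Per-feature attributions: (input - baseline) * average gradient.
    return (inp - baseline) * avg_grads
```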

## Contact

Pramod Kaushik Mudrakarta

pramodkm@uchicago.edu