
adversarial

This repository contains PyTorch code to create and defend against adversarial attacks.

See this Medium article for a discussion on how to use and defend against the projected gradient attack.

[Figure: example PGD adversarial attack created using this repo]
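
For intuition, here is a minimal sketch of an L-infinity PGD attack in plain PyTorch. It is not the repo's exact API; the model, epsilon, step size and iteration count below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=0.3, step=0.01, n_steps=40):
    """Untargeted L-infinity PGD: maximise the loss within an eps-ball of x."""
    x_adv = x.clone().detach()
    # Random start inside the epsilon ball (a common PGD variant).
    x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0, 1)

    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Ascend the loss, then project back onto the eps-ball around x.
            x_adv = x_adv + step * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            x_adv = x_adv.clamp(0, 1)  # keep pixels in a valid range
    return x_adv.detach()
```

Adversarial training then amounts to generating such examples on the fly and training the model on them in place of (or alongside) the clean batch.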

Cool fact - adversarially trained discriminative (not generative!) models can be used to interpolate between classes by creating large-epsilon adversarial examples against them.

[Figure: MNIST class interpolation]
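
A hedged sketch of how such an interpolation could be produced: run a targeted, large-epsilon PGD attack against a robust model, pushing an input towards another class. The function name and hyperparameters below are illustrative assumptions, not the repo's API.

```python
import torch
import torch.nn.functional as F


def interpolate_towards_class(model, x, target_class, eps=1.0, step=0.05, n_steps=100):
    """Targeted, large-epsilon PGD: push x towards `target_class` under the model.

    With an adversarially trained (robust) discriminative model and a large eps,
    the iterates tend to look like interpolations between classes.
    """
    y_target = torch.full((x.shape[0],), target_class, dtype=torch.long, device=x.device)
    x_adv = x.clone().detach()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_target)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Descend the loss towards the target class (note the minus sign).
            x_adv = x_adv - step * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

Keeping the intermediate iterates, rather than only the final example, is one way to visualise the interpolation.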

Contents

  • A Jupyter notebook demonstrating how to use and defend against the projected gradient attack (see notebooks/)

  • adversarial.functional contains functional-style implementations of a few different types of adversarial attacks (see the sketch after this list):

    • Fast Gradient Sign Method - white box - batch implementation
    • Projected Gradient Descent - white box - batch implementation
    • Local-search attack - black box, score-based - single image
    • Boundary attack - black box, decision-based - single image
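
As a flavour of the functional style, here is a standalone sketch of the Fast Gradient Sign Method on a batch. The exact names and signatures in adversarial.functional may differ; this is plain PyTorch, not the repo's code.

```python
import torch
import torch.nn.functional as F


def fgsm(model, x, y, eps=0.25):
    """Fast Gradient Sign Method: a single signed-gradient step of size eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    # One step in the direction that increases the loss, clipped to valid pixels.
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```

Typical usage: x_adv = fgsm(model, images, labels), then compare the model's accuracy on images versus x_adv.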

Setup

Requirements

Listed in requirements.txt. Install with pip install -r requirements.txt, preferably in a virtualenv.

Tests (optional)

Run pytest in the root directory to run all tests.