
Understanding Noise-Augmented Training for Randomized Smoothing

This repository provides code and visualizations for our TMLR paper "Understanding Noise-Augmented Training for Randomized Smoothing", Ambar Pal, Jeremias Sulam, Transactions on Machine Learning Research, 2023. We demonstrate that noise-augmented training is not always beneficial for randomized smoothing, and identify a key theoretical property of the data distribution that determines when it is.
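For background, a randomized-smoothed classifier predicts the class that a base classifier outputs most often under Gaussian perturbations of the input. The snippet below is a minimal Monte Carlo sketch of this prediction rule; the function name, sigma, and sample counts are illustrative assumptions and do not correspond to code in this repository.

```python
import torch

def smoothed_predict(base_classifier, x, sigma=0.25, num_samples=1000, batch_size=100):
    """Monte Carlo estimate of the randomized-smoothed prediction for one input x.

    Returns the class the base classifier predicts most often under Gaussian
    input perturbations x + eps, with eps ~ N(0, sigma^2 I).
    """
    counts = None
    remaining = num_samples
    with torch.no_grad():
        while remaining > 0:
            n = min(batch_size, remaining)
            remaining -= n
            # Sample n noisy copies of x and classify them with the base model.
            noisy = x.unsqueeze(0).repeat(n, *([1] * x.dim())) + sigma * torch.randn(n, *x.shape)
            preds = base_classifier(noisy).argmax(dim=1)
            batch_counts = torch.bincount(preds, minlength=base_classifier(noisy).shape[1])
            counts = batch_counts if counts is None else counts + batch_counts
    return counts.argmax().item()
```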

Synthetic Experiments

The synthetic experiments in Section 5 can be reproduced with the code in Synthetic_Experiments.ipynb.

Real Data Experiments

The plots for the experiments on MNIST and CIFAR-10 can be reproduced with MNIST_CIFAR10_ExperimentsPlot.ipynb. Code for training the noise-augmented MNIST models is given in C1_train_noise_augmented_MNIST.py. These models can then be used with C2_evaluate_risk_after_noise_augmentation_MNIST.py to evaluate the risk of the resulting randomized-smoothed classifiers. A generic sketch of what a noise-augmented training step looks like is shown below.
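The following is a minimal, generic sketch of noise-augmented training, i.e. fitting the base classifier on inputs corrupted by Gaussian noise. The function name, sigma, and optimizer setup are illustrative assumptions and are not taken from C1_train_noise_augmented_MNIST.py.

```python
import torch
import torch.nn.functional as F

def train_epoch_noise_augmented(model, loader, optimizer, sigma=0.25, device="cpu"):
    """One epoch of noise-augmented training: the model is trained on inputs
    corrupted with isotropic Gaussian noise of standard deviation sigma."""
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        # Gaussian data augmentation: perturb each input before the forward pass.
        noisy_images = images + sigma * torch.randn_like(images)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(noisy_images), labels)
        loss.backward()
        optimizer.step()
```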
