Added Carlini Linfinity attack #70
base: master
Conversation
@samuelemarro Thanks for the contribution! Maybe it's already in your plan, but I want to mention that it would also be nice if you could follow the guidelines for attacks specified in https://github.com/BorealisAI/advertorch/blob/master/CONTRIBUTING.md and add the tests.
I marked it as work in progress because I need to understand whether you (as well as the other core maintainers) prefer to follow the original implementation (in this case, Carlini's implementation, which is very sensitive to the choice of max_const) or a version that returns the best adversarial found.

I followed the contribution guidelines, did I miss anything? Sorry, it's my first major pull request to this repo.
No worries :)

Re: the work-in-progress question: once these changes are in, I'll review and make the necessary changes to the copyright notices and the authors' list.
Added benchmark. Fixed formatting.

Ok, I've added the benchmark.
# during optimisation
prev_adversarials = x.clone()

while torch.any(active):
This line doesn't work with torch 1.1 (it works with torch 1.4): in torch 1.1, torch.any only accepts byte tensors, not bool tensors.
No need to fix it right now, but please mention this in a code comment. We'll implement testing under multiple torch versions later, and it can be fixed together with the other problems.
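For reference, one way to paper over the dtype difference is a small compatibility shim (hypothetical, not part of this PR) that casts the mask to uint8, which both old torch (<= 1.1, byte-only torch.any) and newer torch accept:

```python
import torch


def any_active(mask):
    # Hypothetical helper: torch.any in torch <= 1.1 only accepted byte
    # (uint8) tensors, while newer versions also accept bool tensors.
    # Casting to uint8 first works on both.
    return bool(torch.any(mask.to(torch.uint8)))
```

The loop condition would then read `while any_active(active):` instead of `while torch.any(active):`.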
replace_to = active.clone()
replace_to[active] = filter_
to[replace_to] = from_[filter_]
I'm a little confused here. What would happen if to[replace_to] and from_[filter_] have a different number of elements?
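For what it's worth, a toy walkthrough of the double-masking pattern (values are made up; names are taken from the diff) shows why the two sides always line up: replace_to has exactly as many True entries as filter_ does, so to[replace_to] and from_[filter_] select the same number of elements. If the masks were ever misaligned, PyTorch would raise a shape-mismatch error rather than silently misassign.

```python
import torch

# `active` marks the samples still being attacked; `filter_` has one
# entry per active sample and marks which of those to update.
active = torch.tensor([True, False, True, True])
filter_ = torch.tensor([True, False, True])
to = torch.zeros(4)
from_ = torch.tensor([10.0, 20.0, 30.0])

replace_to = active.clone()
replace_to[active] = filter_       # replace_to is now [True, False, False, True]
to[replace_to] = from_[filter_]    # both sides select exactly 2 elements
```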
@@ -0,0 +1,64 @@
# Copyright (c) 2018-present, Royal Bank of Canada and other authors.
I'm running this on a single-GPU machine and it seems to be pretty slow. Any estimate of how long this script would run for?
@samuelemarro Sorry for the super long delay in responding. I just found that I didn't "submit" my review, so my comments were "pending" and you couldn't see them... for 4 months.
# during optimisation
prev_adversarials = x.clone()

while torch.any(active):
Is this while loop needed here? There is already a loop in _run_attack. This loop may not stop, and it seems to cause an infinite loop in the benchmarking test.
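One defensive option (a sketch, not this PR's code; run_bounded, step, and max_iterations are made-up names) is to cap the loop with an explicit iteration budget so it terminates even if some samples never converge:

```python
import torch


def run_bounded(active, step, max_iterations=100):
    # Hypothetical safeguard: stop after max_iterations even if some
    # samples are still active, instead of looping forever.
    iterations = 0
    while bool(torch.any(active)) and iterations < max_iterations:
        active = step(active)
        iterations += 1
    return active
```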
This implementation is based on Carlini's original TensorFlow implementation.
The main differences between the original and this one are:
All the default parameter values are based on Carlini's implementation.

Unlike the Carlini L2 attack, this attack is quite sensitive to the choice of max_const: too low and the attack will fail, too high and the Linfinity distance will be needlessly large. See here for a comparison. The cause is clear: Carlini's original implementation returns the last adversarial found, which is not necessarily the best one. The last adversarial is the one found with the highest value of const.

I could modify the code so that it returns the best adversarial rather than the last one, but then this implementation would differ from Carlini's code. I'll leave the decision up to the maintainers.
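For the record, returning the best adversarial instead of the last one could look roughly like this (a sketch under assumed batched 2D shapes, not this PR's actual code; update_best and all variable names are illustrative):

```python
import torch


def update_best(best_adv, best_dist, candidate, x, is_adv):
    # Keep the candidate only for samples where it is adversarial AND
    # closer to the original x in Linf distance than the current best.
    cand_dist = (candidate - x).abs().flatten(1).max(dim=1).values
    improved = is_adv & (cand_dist < best_dist)
    best_adv = torch.where(improved.unsqueeze(1), candidate, best_adv)
    best_dist = torch.where(improved, cand_dist, best_dist)
    return best_adv, best_dist
```

Calling this after every optimization step (with best_dist initialized to infinity) would make the returned adversarial independent of how large max_const grows, at the cost of diverging from Carlini's reference behaviour.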