Evaluating CNN robustness against various adversarial attacks, including FGSM and PGD.
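To illustrate how the two attacks named above differ, here is a minimal numpy sketch of FGSM and PGD against a toy logistic-regression "model" (a hypothetical stand-in for the CNNs these projects evaluate; the weights and input are invented for illustration):

```python
import numpy as np

# Toy differentiable "model": logistic regression standing in for a CNN.
rng = np.random.default_rng(0)
W = rng.normal(size=(4,))
b = 0.1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))  # P(class 1)

def loss_grad_x(x, y):
    # Gradient of binary cross-entropy w.r.t. the input: (p - y) * W.
    return (predict(x) - y) * W

def fgsm(x, y, eps):
    # FGSM: a single step of size eps in the sign of the loss gradient.
    return x + eps * np.sign(loss_grad_x(x, y))

def pgd(x, y, eps, alpha, steps):
    # PGD: iterated FGSM steps of size alpha, projected back into the
    # eps-ball around the original input after every step.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad_x(x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

x = rng.normal(size=(4,))
y = 1.0
print("clean:", predict(x))
print("FGSM :", predict(fgsm(x, y, 0.1)))
print("PGD  :", predict(pgd(x, y, 0.1, 0.025, 10)))
```

Robustness evaluation then amounts to comparing accuracy on clean inputs against accuracy on the FGSM- and PGD-perturbed inputs.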
Updated Aug 16, 2024 - Jupyter Notebook
A comparative analysis of classical and quantum-classical (hybrid) neural networks, and of the effectiveness of a compound adversarial attack against each.
A quantum-classical (hybrid) neural network and an adversarial attack mechanism. The core libraries employed are Quantinuum pytket and pytket-qiskit; torchattacks is used for the white-box, targeted, compounded adversarial attacks.
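A white-box, targeted, compounded (iterated) attack of the kind torchattacks provides can be sketched in plain numpy. This is a hypothetical toy 3-class linear model, not the repo's hybrid network or its torchattacks configuration; it only shows the mechanics of stepping the input toward a chosen target label:

```python
import numpy as np

# Toy 3-class linear model (hypothetical stand-in for the hybrid network).
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))

def probs(x):
    z = x @ W
    e = np.exp(z - z.max())
    return e / e.sum()

def targeted_pgd(x, target, eps, alpha, steps):
    # Targeted, iterated (compounded) white-box attack: step *down* the
    # cross-entropy gradient toward the chosen target label, projecting
    # back into the eps-ball after every step.
    onehot = np.eye(W.shape[1])[target]
    x_adv = x.copy()
    for _ in range(steps):
        grad = W @ (probs(x_adv) - onehot)  # d(CE toward target)/dx
        x_adv = x_adv - alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

x = rng.normal(size=(4,))
target = 2
x_adv = targeted_pgd(x, target, eps=0.3, alpha=0.05, steps=10)
print(probs(x)[target], "->", probs(x_adv)[target])
```

The difference from the untargeted case is only the sign of the step: the attack minimizes the loss toward the attacker's chosen class instead of maximizing it away from the true class.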
Hybrid neural network is protected against adversarial attacks using various defense techniques, including input transformation, randomization, and adversarial training.
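The randomization / input-transformation idea can be illustrated with a toy numpy model (hypothetical, not the repo's defense): inject random noise into the input before inference and average the predictions, so a small worst-case perturbation is washed out:

```python
import numpy as np

# Toy logistic model (hypothetical stand-in for the hybrid network).
rng = np.random.default_rng(3)
W = rng.normal(size=(4,))

def predict(x):
    # Works for a single input of shape (4,) or a batch of shape (n, 4).
    return 1.0 / (1.0 + np.exp(-(x @ W)))

def randomized_predict(x, sigma=0.1, n=100):
    # Randomization defense: average the model's prediction over many
    # noisy copies of the input before returning a decision.
    noisy = x[None, :] + sigma * rng.normal(size=(n, x.shape[0]))
    return predict(noisy).mean()

x = rng.normal(size=(4,))
print(predict(x), randomized_predict(x))
```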
A hybrid neural network model protected against adversarial attacks using either adversarial training or randomization defense techniques.
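Adversarial training, the other defense mentioned here, trains the model on examples attacked against its own current weights. A minimal numpy sketch on a toy logistic-regression model (hypothetical data and sizes, standing in for the hybrid network):

```python
import numpy as np

# Invented, linearly separable toy data for illustration.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
true_w = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ true_w > 0).astype(float)

def predict(X, w):
    return 1.0 / (1.0 + np.exp(-(X @ w)))

def fgsm_batch(X, y, w, eps):
    # FGSM on a batch: the gradient of BCE w.r.t. each input row is
    # (p - y) * w; step eps in its sign direction.
    p = predict(X, w)
    return X + eps * np.sign((p - y)[:, None] * w[None, :])

w = np.zeros(4)
eps, lr = 0.1, 0.5
for _ in range(200):
    # Adversarial training: attack the *current* model each iteration,
    # then take the gradient step on the perturbed batch.
    X_adv = fgsm_batch(X, y, w, eps)
    p = predict(X_adv, w)
    w -= lr * X_adv.T @ (p - y) / len(y)

clean_acc = ((predict(X, w) > 0.5) == y).mean()
adv_acc = ((predict(fgsm_batch(X, y, w, eps), w) > 0.5) == y).mean()
print(clean_acc, adv_acc)
```

The key design point is that the adversarial examples are regenerated every iteration against the weights being trained, so the model learns a decision boundary with margin against the attack rather than memorizing fixed perturbations.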