Detection of network traffic anomalies using unsupervised machine learning
Adversarial attack on a 3D U-Net model for brain tumour segmentation.
Implementation of the FGSM (Fast Gradient Sign Method) attack on a fine-tuned MobileNet architecture trained for flood detection in images.
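For reference, FGSM perturbs an input by a single step along the sign of the loss gradient. A minimal PyTorch sketch, assuming a pretrained classifier `model`, inputs normalised to [0, 1], and an illustrative `epsilon` (none of these details come from the repository itself):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: x_adv = x + epsilon * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```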
Comparison of the impact the Fast Gradient Sign Attack has on a deep neural network and on a Bayesian neural network.
In this work, we extend the FGSM method by proposing multistep adversarial perturbation (MSAP) procedures to study recommenders' robustness under more powerful attacks. Holding the perturbation magnitude fixed, we show that MSAP is far more harmful than FGSM in degrading the recommendation performance of BPR-MF.
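As described, MSAP replaces FGSM's single step with several smaller ones under a fixed total perturbation budget. The sketch below illustrates that general iterative scheme on a generic classifier loss; the original work targets the BPR-MF recommender, and the step count, per-step size, and epsilon-ball projection here are assumptions, not the authors' exact procedure:

```python
import torch
import torch.nn.functional as F

def msap_attack(model, x, y, epsilon=0.03, steps=10):
    """Multistep variant of FGSM: several small steps under one fixed budget."""
    alpha = epsilon / steps          # per-step size keeps the total magnitude fixed
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Small signed step, then project back into the epsilon-ball around the input.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x_orig + (x_adv - x_orig).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```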
Adversarial Sample Generation
Adversarial attacks against CIFAR-10 and MNIST
Several tutorials and demos about malicious uses and abuses of artificial intelligence, used in the FIT3183 unit (Monash Malaysia, 2020).
This project explores the vulnerability of machine learning models to adversarial attacks and implements robust defense strategies.
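One common robust defense is adversarial training, i.e. optimising the model on adversarially perturbed inputs. A hypothetical sketch of a single training step, reusing the `fgsm_attack` helper from the FGSM sketch above (the repository may implement different defenses):

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One optimizer step on FGSM-perturbed inputs (adversarial training)."""
    x_adv = fgsm_attack(model, x, y, epsilon)   # FGSM sketch defined above
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```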
Assignments and projects from the interpretable artificial intelligence course offered at the University of Tehran.
Defending Neural Networks from Adversarial Attacks