diff --git a/htmls/Adversarial Attacks.html b/htmls/Adversarial Attacks.html
index bd8c4f8..a51a539 100644
--- a/htmls/Adversarial Attacks.html
+++ b/htmls/Adversarial Attacks.html
@@ -69,7 +69,7 @@
@@ -148,21 +149,17 @@

@@ -176,79 +173,9 @@

-          Introduction
White-box Attacks

- In white-box adversarial attacks, the attacker has complete knowledge of the target model, including its structure, weight parameters, and training data. With direct access to this internal information, the attacker can readily analyze the model's characteristics and vulnerabilities, and can craft adversarial examples in a targeted way by exploiting model gradients, loss values, and other internals, causing the model to produce misleading outputs. White-box attacks typically back-propagate gradient information to find input perturbations that push the model's output in the direction the attacker intends. With carefully designed adversarial examples, attackers can steer the model toward incorrect decisions, which can cause significant harm in practical applications.
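To make the gradient-based mechanism above concrete, here is a minimal, self-contained sketch (not taken from any surveyed attack): it perturbs a continuous input embedding along the sign of the back-propagated gradient, FGSM-style. The tiny nn.Sequential classifier, the 16-dimensional input, and epsilon are illustrative placeholders standing in for a real neural code model; in practice the perturbation would additionally have to be mapped back to valid, semantics-preserving code edits.

    # Hedged sketch of the white-box idea: use the gradient of the loss
    # w.r.t. the input to craft a perturbation that increases the loss.
    # The classifier and dimensions below are toy placeholders, not any
    # specific neural code model.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy stand-in for a code classifier operating on an input embedding.
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(1, 16, requires_grad=True)   # embedded input (placeholder)
    y = torch.tensor([0])                        # true label

    # Forward pass, then back-propagate to get the gradient w.r.t. the input.
    loss = loss_fn(model(x), y)
    loss.backward()

    # FGSM-style step: move the input in the direction that maximizes the
    # loss, bounded by a small budget epsilon.
    epsilon = 0.1
    x_adv = (x + epsilon * x.grad.sign()).detach()

    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())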
Black-box Attacks

- Black-box adversarial attacks on neural code models have been widely studied in recent years. In contrast to white-box attacks, where the attacker has detailed knowledge of the model's structure and weights, black-box attackers cannot access such information: they can only craft adversarial examples from the limited outputs returned by querying the model. The harm caused by black-box attacks manifests mainly as degraded model performance and threats to system security. Even without detailed model information, attackers can construct adversarial examples that mislead neural code models and reduce their accuracy on practical tasks. This not only poses a potential threat to downstream models in software engineering tasks but may also cause serious problems in security-critical systems.
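As an illustration of the query-only setting, the sketch below greedily searches semantics-preserving identifier renamings and keeps whichever rename most lowers the model's reported confidence on the correct label. The query_model function, the code snippet, and the candidate names are hypothetical stand-ins for a deployed model's prediction API, not part of any specific published attack.

    # Hedged sketch of a black-box, query-based attack: the attacker only
    # observes model outputs for queried inputs and mutates the code with
    # semantics-preserving identifier renamings.
    import random

    random.seed(0)

    def query_model(code: str) -> float:
        """Toy black-box: returns P(correct label | code). A real attack
        would call the deployed model's prediction API here."""
        # Pretend the model latches onto the identifier "total" as a feature.
        return 0.9 if "total" in code else 0.4

    snippet = "def add(a, b):\n    total = a + b\n    return total"
    identifiers = ["total"]               # renamable, semantics-preserving targets
    candidates = ["x1", "acc", "result"]  # candidate replacement names

    best_code, best_score = snippet, query_model(snippet)
    for name in identifiers:
        for new_name in random.sample(candidates, len(candidates)):
            mutated = best_code.replace(name, new_name)
            score = query_model(mutated)          # one query per candidate
            if score < best_score:                # keep the most damaging rename
                best_code, best_score = mutated, score

    print("confidence before attack:", query_model(snippet))
    print("confidence after attack: ", best_score)
    print(best_code)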

-          Datasets used in adversarial research on NCMs
+          Datasets used in adversarial research on LM4Code

@@ -294,7 +221,106 @@

Datasets used in adversarial research on NCMs

+<table>
+  <tr><th>Dataset</th><th>Year</th><th>Language</th><th>Source</th><th>Download</th><th>XXX</th></tr>
+  <tr><td>Code2Seq</td><td>2019</td><td>Java</td><td>GitHub</td><td>Download</td><td>XXX</td></tr>
+  <tr><td>Devign</td><td>2019</td><td>Java</td><td>GitHub</td><td>Download</td><td>XXX</td></tr>
+  <tr><td>Google Code Jam (GCJ)</td><td>2020</td><td>C++, Java</td><td>OJ Platform</td><td>Download</td><td>XXX</td></tr>
+  <tr><td>CodeXGLUE</td><td>2021</td><td>Go, Java, JavaScript, PHP, Python, Ruby</td><td>GitHub</td><td>Download</td><td>XXX</td></tr>
+  <tr><td>CodeQA</td><td>2021</td><td>Java, Python</td><td>GitHub</td><td>Download</td><td>XXX</td></tr>
+  <tr><td>APPS</td><td>2021</td><td>Python</td><td>OJ Platform</td><td>Download</td><td>XXX</td></tr>
+  <tr><td>Shellcode_IA32</td><td>2021</td><td>assembly language instruction</td><td>OJ Platform</td><td>Download</td><td>XXX</td></tr>
+  <tr><td>SecurityEval</td><td>2022</td><td>Python</td><td>GitHub</td><td>Download</td><td>XXX</td></tr>
+  <tr><td>LLMSecEval</td><td>2023</td><td>Python, C</td><td>GitHub</td><td>Download</td><td>XXX</td></tr>
+  <tr><td>PoisonPy</td><td>2023</td><td>Python</td><td>GitHub</td><td>not yet published</td><td>XXX</td></tr>
+</table>
@@ -303,7 +329,7 @@

Datasets used in adversarial research on NCMs

-          A summary of existing adversarial attacks in NCMs
+          A summary of target models of adversarial attacks in LM4Code

@@ -793,33 +819,7 @@