From e982b97ff70b7b0eeca78c9d52796cb18561d64c Mon Sep 17 00:00:00 2001
From: Adit Nugroho
Date: Tue, 13 Feb 2024 23:31:34 +1100
Subject: [PATCH 1/3] update Glossary

---
docs/glossary.md | 55 ++++++++++++++++++++++++++++++++++++++----------
tab_glossary.md | 13 +++++++++---
2 files changed, 54 insertions(+), 14 deletions(-)

diff --git a/docs/glossary.md b/docs/glossary.md
index 9e8e740..544398d 100644
--- a/docs/glossary.md
+++ b/docs/glossary.md
@@ -53,7 +53,9 @@ comments: false
## A {#a}
-[]()
+
+[Adversarial attack](#adversarial_attack)
+Type of attack which seeks to trick machine learning models into misclassifying inputs by maliciously tampering with input data
## B {#b}
@@ -61,15 +63,24 @@ comments: false
## C {#c}
-[]()
+[Classification](#classification)
+Process of arranging things in groups which are distinct from each other, and are separated by clearly determined lines of demarcation
## D {#d}
-[]()
+[Data labeling](#data_labeling)
+Process of assigning tags or categories to each data point in a dataset
+
+[Data poisoning](#data_poisoning)
+Type of attack that injects poisoning samples into the data
+
+[Deep learning](#deep_learning)
+Family of machine learning methods based on artificial neural networks with long chains of learnable causal links between actions and effects
## E {#e}
-[]()
+[Ensemble](#ensemble)
+See: [Model Ensemble](#model_ensemble)
## F {#f}
@@ -85,7 +96,14 @@ comments: false
## I {#i}
-[]()
+[Input Validation](#input_validation)
+Input validation is a technique for checking potentially dangerous inputs in order to ensure that the inputs are safe for processing within the code, or when communicating with other components
+
+[Intrusion Detection Systems (IDS)](#ids)
+Security service that monitors and analyzes network or system events for the purpose of finding, and providing real-time or near real-time warning of, attempts to access system resources in an unauthorized manner
+
+[Intrusion Prevention System (IPS)](#ips)
+System that can detect an intrusive activity and can also attempt to stop the activity, ideally before it reaches its targets
## J {#j}
@@ -101,7 +119,14 @@ comments: false
## M {#m}
-[]()
+[MLOps](#mlops)
+The selection, application, interpretation, deployment, and maintenance of machine learning models within an AI-enabled system
+
+[Model](#model)
+Detailed description or scaled representation of one component of a larger system that can be created, operated, and analyzed to predict actual operational characteristics of the final produced component
+
+[Model ensemble](#model_ensemble)
+Technique of combining a diverse set of learners (individual models) to improve the stability and predictive power of the model
## N {#n}
@@ -109,11 +134,16 @@ comments: false
## O {#o}
-[]()
+[Obfuscation](#obfuscation)
+Defense mechanism in which details of the model or training data are kept secret by adding a large amount of valid but useless information to a data store
+
+[Overfitting](#overfitting)
+Overfitting is when a statistical model begins to describe the random error in the data rather than the relationships between variables. &#13;This occurs when the model is too complex
## P {#p}
-[]()
+[Perturbation](#perturbation)
+Noise added to an input sample
## Q {#q}
@@ -121,11 +151,13 @@ comments: false
## R {#r}
-[]()
+[Regularisation](#regularisation)
+Controlling model complexity by adding information in order to solve ill-posed problems or to prevent overfitting
## S {#s}
-[]()
+[Spam](#spam)
+The abuse of electronic messaging systems to indiscriminately send unsolicited bulk messages
## T {#t}
@@ -133,7 +165,8 @@ comments: false
## U {#u}
-[]()
+[Underfitting](#underfitting)
+Underfitting is when a data model is unable to capture the relationship between the input and output variables accurately, generating a high error rate on both the training set and unseen data
## V {#v}
diff --git a/tab_glossary.md b/tab_glossary.md
index ddaf5f8..548497e 100644
--- a/tab_glossary.md
+++ b/tab_glossary.md
@@ -77,6 +77,9 @@ Process of assigning tags or categories to each data point in a dataset
[Data poisoning](#data_poisoning)
Type of attack that inject poisoning samples into the data
+[Deep learning](#deep_learning)
+Family of machine learning methods based on artificial neural networks with long chains of learnable causal links between actions and effects
+
## E {#e}
[Ensemble](#ensemble)
See: [Model Ensemble](#model_ensemble)
@@ -96,11 +99,14 @@ See: [Model Ensemble](#model_ensemble)
## I {#i}
+[Input Validation](#input_validation)
+Input validation is a technique for checking potentially dangerous inputs in order to ensure that the inputs are safe for processing within the code, or when communicating with other components
+
[Intrusion Detection Systems (IDS)](#ids)
-Security service that monitors and analyzes network or system events for the purpose of finding, and providing real-time or near real-time warning of, attempts to access system resources in an unauthorized manner.
+Security service that monitors and analyzes network or system events for the purpose of finding, and providing real-time or near real-time warning of, attempts to access system resources in an unauthorized manner
[Intrusion Prevention System (IPS)](#ips)
-System that can detect an intrusive activity and can also attempt to stop the activity, ideally before it reaches its targets.
+System that can detect an intrusive activity and can also attempt to stop the activity, ideally before it reaches its targets
## J {#j}
@@ -139,7 +145,8 @@ Overfitting is when a statistical model begins to describe the random error in t
## P {#p}
-[]()
+[Perturbation](#perturbation)
+Noise added to an input sample
## Q {#q}

From 05e151ca69726c20980121a05bcb4debe0c885db Mon Sep 17 00:00:00 2001
From: Yusuf Munir
Date: Wed, 14 Feb 2024 13:58:51 +0500
Subject: [PATCH 2/3] Fixed Typo

---
docs/ML04_2023-Membership_Inference_Attack.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/ML04_2023-Membership_Inference_Attack.md b/docs/ML04_2023-Membership_Inference_Attack.md
index 31a6ae9..9c9ee9a 100644
--- a/docs/ML04_2023-Membership_Inference_Attack.md
+++ b/docs/ML04_2023-Membership_Inference_Attack.md
@@ -58,7 +58,7 @@ information. &#13;
| Threat Agents/Attack Vectors | Security Weakness | Impact | | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | | Exploitability: 4 (Moderate)

_ML Application Specific: 5_
_ML Operations Specific: 3_ | Detectability: 3 (Moderate) | Technical: 4 (Moderate) | -| Threat Actors: Hackers or malicious actors who have access to the data and the model.
Insiders who have malicious intent or are bribed to interfere with the data.

Attact Vectors: Unsecured data transmission channels that allow unauthorized access to the data. | Lack of proper data access controls.

Lack of proper data validation and sanitization techniques.

Lack of proper data encryption.

Lack of proper data backup and recovery techniques. | Unreliable or incorrect model predictions.

Loss of confidentiality and privacy of sensitive data.

Legal and regulatory compliance violations.

Reputational damage. | +| Threat Actors: Hackers or malicious actors who have access to the data and the model.
Insiders who have malicious intent or are bribed to interfere with the data.

Attack Vectors: Unsecured data transmission channels that allow unauthorized access to the data. | Lack of proper data access controls.

Lack of proper data validation and sanitization techniques.

Lack of proper data encryption.

Lack of proper data backup and recovery techniques. | Unreliable or incorrect model predictions.

Loss of confidentiality and privacy of sensitive data.

Legal and regulatory compliance violations.

Reputational damage. |
It is important to note that this chart is only a sample based on [the scenario below](#scenario1) only. The actual risk assessment will depend on

From a8bb4fbba7a10f678454de3eb1a0942874e1ce0e Mon Sep 17 00:00:00 2001
From: Shain Singh
Date: Thu, 30 May 2024 10:50:22 +1000
Subject: [PATCH 3/3] fix: rename directory in presentations

---
...SP ML Security Top 10 - A Practical Approach.pdf | Bin
1 file changed, 0 insertions(+), 0 deletions(-)
rename presentations/{Null Bangalore Chapter - May 2024. => Null Bangalore Chapter - May 2024}/OWASP ML Security Top 10 - A Practical Approach.pdf (100%)

diff --git a/presentations/Null Bangalore Chapter - May 2024./OWASP ML Security Top 10 - A Practical Approach.pdf b/presentations/Null Bangalore Chapter - May 2024/OWASP ML Security Top 10 - A Practical Approach.pdf
similarity index 100%
rename from presentations/Null Bangalore Chapter - May 2024./OWASP ML Security Top 10 - A Practical Approach.pdf
rename to presentations/Null Bangalore Chapter - May 2024/OWASP ML Security Top 10 - A Practical Approach.pdf