Update terms.json #99

Merged: 1 commit, Jan 30, 2024
40 changes: 40 additions & 0 deletions data/terms.json
@@ -1354,6 +1354,46 @@
"mlops",
"deployment-platform"
]
},
{
"name": "Machine unlearning",
"description": "Approaches to efficiently remove the influence of a subset of the training data from the weights of a trained model, without retraining the model from scratch, and whilst retaining the model’s performance on downstream tasks. Machine unlearning could be used to remove the influence of personal data from a model if someone exercises their “Right to be Forgotten”.",
"termCode": "machine-unlearning",
"related": [
"machine-editing",
"memorization",
"training-data-leakage"
]
},
{
"name": "Machine editing",
"description": "Approaches to efficiently modify the behaviour of a machine learning model on certain inputs, whilst having little impact on unrelated inputs. Machine Editing can be used to inject or update knowledge in the model or modify undesired behaviours.",
"termCode": "machine-editing",
"related": [
"machine-unlearning",
"memorization",
"training-data-leakage"
]
},
{
"name": "Memorization",
"description": "Machine Learning Models have been shown to memorize aspects of their training data during the training process. This has been demonstrated to correlate with model size (number of parameters).",
"termCode": "memorization",
"related": [
"machine-editing",
"machine-unlearning",
"training-data-leakage"
]
},
{
"name": "Training data leakage",
"description": "Aspects of the training data can be memorized by a machine learning model during training and are consequently vulnerable to being inferred or extracted verbatim from the model alone. This is possible as the behaviour of the model on samples which were members of the training data is distinguishable from samples the model has not seen before. This leakage has been demonstrated on a range of machine learning models including Transformer-based Image and Language Models.",
"termCode": "training-data-leakage",
"related": [
"machine-editing",
"machine-unlearning",
"memorization"
]
}
]
}
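
The memorization and training-data-leakage entries both rest on the same observation: a model's loss on samples it was trained on tends to be lower than on samples it has never seen. A minimal membership-inference sketch in PyTorch, assuming a generic classifier `model` and a `threshold` calibrated on held-out data (both hypothetical names, not part of the glossary):

```python
import torch
import torch.nn.functional as F

def membership_scores(model, x, y):
    """Per-sample cross-entropy loss. A lower loss on a sample is weak
    evidence that the sample was a member of the training set."""
    model.eval()
    with torch.no_grad():
        logits = model(x)
        return F.cross_entropy(logits, y, reduction="none")

def likely_members(model, x, y, threshold):
    # `threshold` is assumed to be calibrated on data the model never
    # trained on, e.g. to hit a chosen false-positive rate there.
    return membership_scores(model, x, y) < threshold
```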
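
For the machine-unlearning entry, one crude baseline (an illustration only, not the method the entry prescribes) is gradient ascent on the forget set: increase the model's loss on the samples whose influence should be removed. A sketch under the same assumptions, with hypothetical `model` and `forget_loader`:

```python
import torch
import torch.nn.functional as F

def unlearn_by_gradient_ascent(model, forget_loader, lr=1e-5, steps=1):
    """Naive unlearning baseline: ascend the loss on the forget set.
    Practical methods add constraints (e.g. staying close to the
    original weights) so performance on retained data is preserved."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        for x, y in forget_loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            (-loss).backward()  # negate: maximise loss on forgotten samples
            opt.step()
```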