[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
Must-read Papers on Knowledge Editing for Large Language Models.
EMNLP'23 survey: a curation of awesome papers and resources on refreshing large language models (LLMs) without expensive retraining.
[NeurIPS 2024] Knowledge Circuits in Pretrained Transformers
Official code repo for "Editing Implicit Assumptions in Text-to-Image Diffusion Models"
[EMNLP 2024] To Forget or Not? Towards Practical Knowledge Unlearning for Large Language Models
[ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing
[ACL 2024] An Easy-to-use Hallucination Detection Framework for LLMs.
[ICLR 2024] Unveiling the Pitfalls of Knowledge Editing for Large Language Models
OneEdit: A Neural-Symbolic Collaborative Knowledge Editing System.
Code and dataset for the paper: "Can Editing LLMs Inject Harm?"
Knowledge Unlearning for Large Language Models
Official code for the COLING 2024 paper "Robust and Scalable Model Editing for Large Language Models": https://arxiv.org/abs/2403.17431v1
Can Knowledge Editing Really Correct Hallucinations? (ICLR 2025)
Stable Knowledge Editing in Large Language Models
Official implementation for Zhong & Le et al., GNNs Also Deserve Editing, and They Need It More Than Once. ICML 2024
Debiasing Stereotyped Language Models via Model Editing
How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training
COLING 2025: MLaKE: Multilingual Knowledge Editing Benchmark for Large Language Models
Circuit-Aware Editing Enables Generalizable Knowledge Learners