Official Implementation | Accepted at EMNLP 2025 Findings
This repository provides the implementation of iterative and neighbor-assisted model editing for large language models with a simple CLI to reproduce experiments and run your own edits.
This project uses uv for dependency management. If you don't have uv installed, please follow the installation instructions at:
- Official uv documentation: https://docs.astral.sh/uv/getting-started/installation/
- Clone the repository:

```bash
git clone https://github.com/bhimanbaghel/ResolveUnderOverEdit.git
cd ResolveUnderOverEdit
```

- Install dependencies using uv:

```bash
uv sync
```

This command will:

- Create a virtual environment (`.venv`)
- Install all dependencies with exact versions from `uv.lock`
The experiments require pre-computed model statistics (~13GB total) that are hosted on Hugging Face Hub.
The Hugging Face Hub library is required for downloading. It should already be available if you ran `uv sync`. If needed, you can add it:

```bash
uv add huggingface_hub
```

To download statistics for all models:

```bash
uv run python download_stats.py
```

This will download ~13GB of data to the `data/stats/` directory.
To download statistics for a single model:
```bash
# Download only Llama-3-8B statistics (3.7GB)
uv run python download_stats.py --model llama-3-8b
# Other options: gpt2-xl, gpt-j-6B, llama-2-7b
```

Per-model statistics sizes:

- gpt-j-6B: 5.8 GB
- llama-3-8b: 3.7 GB
- llama-2-7b: 2.2 GB
- gpt2-xl: 747 MB
Dataset Repository: bkb45/ResolveUnderOverEdit-stats
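If you prefer to fetch the statistics programmatically instead of via `download_stats.py`, the dataset can be pulled with `huggingface_hub.snapshot_download`. This is a sketch, not part of the repo's tooling: the per-model `"<model>/*"` file layout is an assumption, so verify it against the dataset repository before relying on the filter.

```python
# Sketch: fetch pre-computed statistics for one model straight from the
# Hub, as an alternative to download_stats.py. The per-model "<model>/*"
# layout is an assumption -- check bkb45/ResolveUnderOverEdit-stats.

def stats_patterns(model: str) -> list[str]:
    """Glob patterns selecting one model's files (assumed layout)."""
    return [f"{model}/*"]

def fetch_stats(model: str, out_dir: str = "data/stats") -> str:
    # Imported lazily so the pure helper above stays dependency-free.
    from huggingface_hub import snapshot_download
    return snapshot_download(
        repo_id="bkb45/ResolveUnderOverEdit-stats",
        repo_type="dataset",
        local_dir=out_dir,
        allow_patterns=stats_patterns(model),
    )
```

For example, `fetch_stats("llama-3-8b")` would download only that model's ~3.7 GB of statistics.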
```bash
uv run python checkZ.py \
    --alg_name={ALGORITHM}_RECURSIVE \
    --model_name={MODEL} \
    --hparams_fname={HPARAMS_PATH} \
    --ds_name={DATASET} \
    --num_edits={NUM_EDITS} \
    --ds_subset={SUBSET_INDEX} \
    --iterations={ITERATIONS}
```
- `ALGORITHM`: Base editing algorithm name
  - Supported: `MEMIT`, `AlphaEdit`, `PMET`, `ROME`
  - Note: the `_RECURSIVE` suffix is automatically added
- `MODEL`: Model identifier
  - Supported models: `gpt2-xl`, `gpt-j-6b`, `llama-2-7b`, `llama-3-8b`
  - Models should be downloaded from Hugging Face and placed in the `hugging_cache/` directory
- `HPARAMS_PATH`: Path to the hyperparameter configuration file (YAML)
  - Example: `./hparams/MEMIT_RECURSIVE/llama3-8b.yaml`
  - Ensure the YAML matches your chosen model and algorithm
- `DATASET`: Dataset for editing
  - Supported datasets: `mcf`, `zsre`
- `NUM_EDITS`: Number of edits to apply (e.g., `1000`)
- `SUBSET_INDEX`: Dataset subset index (integer, e.g., `1`)
  - Results reported in the paper use subsets `1`, `10`, `15` for the MCF dataset
  - Results reported in the paper use subsets `0`, `10`, `15` for the ZSRE dataset
- `ITERATIONS`: Number of iterative editing passes
  - The code runs for the specified number of iterations
  - Note: early stopping based on perplexity is disabled, so the full trend across iterations can be observed for experimentation
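When sweeping over models or iteration counts, the flags above can be assembled programmatically. A minimal sketch: the flag names come from the command template above, while the helper itself is purely illustrative.

```python
# Sketch: build the checkZ.py argument list for one iterative-editing
# run. Flag names follow the command template in this README.
def build_edit_cmd(alg, model, hparams, ds, num_edits, subset, iterations):
    """Return the uv invocation for one run as an argv list."""
    return [
        "uv", "run", "python", "checkZ.py",
        f"--alg_name={alg}_RECURSIVE",
        f"--model_name={model}",
        f"--hparams_fname={hparams}",
        f"--ds_name={ds}",
        f"--num_edits={num_edits}",
        f"--ds_subset={subset}",
        f"--iterations={iterations}",
    ]

cmd = build_edit_cmd("MEMIT", "llama-3-8b",
                     "./hparams/MEMIT_RECURSIVE/llama3-8b.yaml",
                     "mcf", 1000, 1, 5)
# Launch with: subprocess.run(cmd, check=True)
```

Passing an argv list (rather than a shell string) to `subprocess.run` avoids quoting issues with the YAML path.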
```bash
uv run python checkZ.py --alg_name=MEMIT_RECURSIVE \
    --model_name=llama-3-8b \
    --hparams_fname=./hparams/MEMIT_RECURSIVE/llama3-8b.yaml \
    --ds_name=mcf \
    --num_edits=1000 \
    --ds_subset=1 \
    --iterations=5
```

Neighbor-assisted model editing incorporates neighboring knowledge during the editing process to reduce OverEdit. Use the `_NEIGHBOR` suffix with your algorithm to enable this mode.
MEMIT with GPT-J-6B:

```bash
uv run python checkZ.py --alg_name=MEMIT_RECURSIVE_NEIGHBOR --model_name=gpt-j-6B --hparams_fname=./hparams/MEMIT_RECURSIVE_NEIGHBOR/gpt-j-6B.yaml --ds_name=mcf --num_edits=960 --ds_subset=960 --iterations=5
```

MEMIT with GPT2-XL:

```bash
uv run python checkZ.py --alg_name=MEMIT_RECURSIVE_NEIGHBOR --model_name=gpt2-xl --hparams_fname=./hparams/MEMIT_RECURSIVE_NEIGHBOR/gpt2-xl.yaml --ds_name=mcf --num_edits=739 --ds_subset=739 --iterations=5
```

PMET with Llama-2-7B:

```bash
uv run python checkZ.py --alg_name=PMET_RECURSIVE_NEIGHBOR --model_name=llama-2-7b --hparams_fname=./hparams/PMET_RECURSIVE_NEIGHBOR/llama-7b.yaml --ds_name=mcf --num_edits=1340 --ds_subset=1340 --iterations=5
```
- **Do not change `--num_edits` and `--ds_subset` values**: These parameters are tied to the specific model and represent precomputed eligible examples for that model. Each model has its own set of eligible examples.
- **Using a different model**: If you choose a model other than the ones in the examples above, the eligible examples need to be recomputed for that model. The algorithm can be changed (e.g., from MEMIT to PMET), but `num_edits` and `ds_subset` should match the model's precomputed values.
- **Iterative vs. Neighbor-Assisted**: The commands shown above run the iterative version of neighbor-assisted model editing. Our paper provides comparative results between:
  - Iterative Model Editing: reduces UnderEdit
  - Iterative + Neighbor-Assisted Model Editing: reduces both UnderEdit and OverEdit
- **To run only iterative editing (without neighbor assistance)**: Simply remove the `_NEIGHBOR` suffix from the algorithm name and keep all other parameters the same. For example:

```bash
# Iterative only (no neighbor assistance)
uv run python checkZ.py --alg_name=MEMIT_RECURSIVE --model_name=gpt-j-6B --hparams_fname=./hparams/MEMIT_RECURSIVE/gpt-j-6B.yaml --ds_name=mcf --num_edits=960 --ds_subset=960 --iterations=5
```
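Because `--num_edits` and `--ds_subset` must match each model's precomputed eligible examples, a small guard can catch mismatches before a long run starts. The values below are taken from the example commands above; the helper itself is illustrative, not part of the repo.

```python
# Sketch: precomputed eligible-example values per model (MCF, from the
# example commands in this README); reject runs with the wrong values.
PRECOMPUTED = {
    "gpt-j-6B":   {"num_edits": 960,  "ds_subset": 960},
    "gpt2-xl":    {"num_edits": 739,  "ds_subset": 739},
    "llama-2-7b": {"num_edits": 1340, "ds_subset": 1340},
}

def check_run(model, num_edits, ds_subset):
    """Raise ValueError unless the run matches the precomputed values."""
    expected = PRECOMPUTED.get(model)
    if expected is None:
        raise ValueError(f"{model}: eligible examples must be recomputed")
    if (num_edits, ds_subset) != (expected["num_edits"], expected["ds_subset"]):
        raise ValueError(f"{model}: expected {expected}")
    return True
```

For instance, `check_run("gpt-j-6B", 960, 960)` passes, while mismatched values raise an error before any GPU time is spent.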
Individual results for each edit at each iteration are stored in the results/ directory. These detailed results track the performance of every edit throughout the iterative process.
To generate a summary of the individual results, run:
```bash
uv run python summary_table.py
```

This command will create summary files in the `summaries/` directory, with one CSV file per experiment.
The summary CSV files contain:
- Initial perplexity: Corresponds to the SPREAD stage metrics (as reported in the paper)
- Final perplexity: Corresponds to the OPTIMIZATION stage metrics (as reported in the paper)
These metrics allow you to track the overall effectiveness of the editing process across iterations.
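A quick way to inspect one of these summaries is to compare the two perplexity columns per row. The column names `initial_perplexity` and `final_perplexity` below are assumptions based on the description above; adjust them to the actual headers in your CSV files.

```python
# Sketch: per-edit perplexity reduction from a summary CSV.
# Column names "initial_perplexity"/"final_perplexity" are assumed.
import csv

def perplexity_deltas(rows):
    """Yield (initial - final) perplexity for each summary row."""
    for row in rows:
        yield float(row["initial_perplexity"]) - float(row["final_perplexity"])

def load_deltas(path):
    """Read one summary CSV and return the list of reductions."""
    with open(path, newline="") as f:
        return list(perplexity_deltas(csv.DictReader(f)))
```

A positive delta means the edit's perplexity dropped between the SPREAD and OPTIMIZATION stages.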
For questions or issues, please contact:
Bhiman Kumar Baghel
Email: bkb45@pitt.edu
If you use this code in your research, please cite our paper:
@inproceedings{baghel-etal-2025-resolving,
title = "Resolving {U}nder{E}dit {\&} {O}ver{E}dit with Iterative {\&} Neighbor-Assisted Model Editing",
author = "Baghel, Bhiman Kumar and
Jordan, Emma and
Shi, Zheyuan Ryan and
Li, Xiang Lorraine",
editor = "Christodoulopoulos, Christos and
Chakraborty, Tanmoy and
Rose, Carolyn and
Peng, Violet",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2025",
month = nov,
year = "2025",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.findings-emnlp.798/",
doi = "10.18653/v1/2025.findings-emnlp.798",
pages = "14786--14808",
ISBN = "979-8-89176-335-7",
abstract = "Large Language Models (LLMs) are widely deployed in downstream tasks, but keeping their knowledge up-to-date via retraining or fine-tuning is often computationally expensive. Model editing provides a more efficient alternative by updating a targeted subset of parameters, which often follows the locate-and-edit paradigm. Despite this efficiency, existing methods are limited: edits may fail to inject knowledge (UnderEdit) or unintentionally disrupt unrelated neighboring knowledge (OverEdit). To address these challenges, we propose two complementary methods: **iterative model editing**, which applies successive edits to mitigate UnderEdit, and **neighbor-assisted model editing**, which incorporates neighboring knowledge during editing to reduce OverEdit. Our extensive experiments show that these techniques improve editing performance across multiple LLMs, algorithms, and benchmarks, reducing UnderEdit by up to 38 percentage points and OverEdit by up to 6, while remaining broadly applicable to any locate-and-edit method."
}

This work builds upon the EasyEdit framework. We extend our sincere gratitude to the EasyEdit authors for their excellent work and open-source contribution. If you use this code, please also consider citing their work:
@article{wang2023easyedit,
title={Easyedit: An easy-to-use knowledge editing framework for large language models},
author={Wang, Peng and Zhang, Ningyu and Xie, Xin and Yao, Yunzhi and Tian, Bozhong and Wang, Mengru and Xi, Zekun and Cheng, Siyuan and Liu, Kangwei and Zheng, Guozhou and others},
journal={arXiv preprint arXiv:2308.07269},
year={2023}
}