
IDER: Idempotent Experience Replay for Reliable Continual Learning


IDER is a novel framework for continual learning based on the idempotent property; it mitigates catastrophic forgetting and improves prediction reliability. It is a simple, robust method that can be easily integrated into other state-of-the-art approaches.


Zhanwang Liu1*, Yuting Li1*‡, Haoyuan Gao1, Yexin Li4, Linghe Kong1, Lichao Sun3, Weiran Huang1,2

1 School of Computer Science, Shanghai Jiao Tong University   2 Shanghai Innovation Institute
3 Lehigh University   4 State Key Laboratory of General Artificial Intelligence, BIGAI

* Equal contribution.   ‡ Corresponding author, project lead.


Method overview
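
At the core of IDER is the idempotent property: a map f is idempotent when applying it twice changes nothing, i.e. f(f(x)) = f(x). As a rough illustration, the sketch below shows what an idempotence regularizer of this flavor can look like; it assumes a module whose input and output live in the same space, and it is not the paper's exact training objective.

import torch.nn.functional as F

def idempotence_loss(f, x):
    # Encourage f(f(x)) == f(x): a second application of f should leave
    # its own output unchanged. `f` is assumed to map a space to itself
    # (e.g. a feature-space module); IDER's actual objective may differ.
    y = f(x)
    return F.mse_loss(f(y), y.detach())

# Hypothetical usage alongside an experience-replay loss (lambda_id and
# the module choice are illustrative, not taken from the paper):
# loss = replay_ce_loss + lambda_id * idempotence_loss(feature_module, buf_x)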

🎉 News

  • [2026.01.26] Our paper has been accepted to ICLR 2026!

Table of Contents

1. Quick Start
   1.1. Environment
   1.2. Training
2. Reproduced Results
3. Tools
4. Citation
5. Acknowledgement

1. Quick Start

1.1. Environment

Clone this repository and install the requirements. The model can be trained on a single NVIDIA RTX 4090 (24 GB) GPU.
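
For example, assuming the GitHub URL of this repository:

git clone https://github.com/YutingLi0606/Idempotent-Continual-Learning.git
cd Idempotent-Continual-Learning

Then create and activate the conda environment: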

conda env create -f environment.yaml
conda activate icl

The code was tested on Python 3.10 and PyTorch 1.13.0.
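
As a quick optional check that the environment works and PyTorch sees the GPU (not part of the authors' scripts):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"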

1.2. Training

Train and evaluate ResNet18 on the different datasets with ER and ER+ID under different buffer sizes by running the corresponding command:

CIFAR-10:
bash run_para_cifar10.sh

CIFAR-100:
bash run_para_cifar100.sh

TinyImageNet:
bash run_para_tinyimg.sh

2. Reproduced Results

The results below are for ResNet18 on each dataset, comparing ER and ER+ID across different buffer sizes, averaged over seeds 0-4. All results reported here were obtained on an NVIDIA GeForce RTX 4090.

| Dataset | Buffer | Method | Forgetting (⬇️) | TIL (⬆️) | CIL (⬆️) | Checkpoint |
|---|---|---|---|---|---|---|
| CIFAR-10 | 200 | ER | 59.71 ± 2.62 | 91.48 ± 0.93 | 48.89 ± 2.19 | - |
| CIFAR-10 | 200 | ER+ID | 16.89 ± 2.26 | 95.87 ± 0.36 | 70.68 ± 1.10 | pth |
| CIFAR-10 | 500 | ER | 44.75 ± 2.94 | 93.38 ± 0.36 | 60.62 ± 2.46 | - |
| CIFAR-10 | 500 | ER+ID | 11.59 ± 2.13 | 96.20 ± 0.40 | 75.52 ± 1.35 | pth |
| CIFAR-100 | 500 | ER | 73.81 ± 0.42 | 73.98 ± 1.15 | 21.28 ± 1.08 | - |
| CIFAR-100 | 500 | ER+ID | 32.27 ± 1.96 | 83.30 ± 0.41 | 45.21 ± 1.20 | pth |
| CIFAR-100 | 2000 | ER | 54.52 ± 0.62 | 81.62 ± 0.95 | 37.93 ± 0.76 | - |
| CIFAR-100 | 2000 | ER+ID | 18.76 ± 1.52 | 86.54 ± 0.34 | 56.30 ± 0.50 | pth |
| Tiny-ImageNet | 4000 | ER | 56.89 ± 0.74 | 66.68 ± 0.47 | 25.20 ± 0.70 | - |
| Tiny-ImageNet | 4000 | ER+ID | 21.62 ± 1.67 | 74.56 ± 0.55 | 43.25 ± 1.26 | pth |

The results below were obtained using an Ascend 910B.

| Dataset | Buffer | Method | Forgetting (⬇️) | TIL (⬆️) | CIL (⬆️) | Checkpoint |
|---|---|---|---|---|---|---|
| CIFAR-10 | 200 | ER+ID | 16.57 ± 3.29 | 95.73 ± 0.30 | 70.85 ± 0.81 | pth |
| CIFAR-10 | 500 | ER+ID | 12.02 ± 1.39 | 96.07 ± 0.19 | 75.06 ± 0.95 | pth |
| CIFAR-100 | 500 | ER+ID | 31.85 ± 3.50 | 83.45 ± 0.37 | 45.55 ± 0.66 | pth |
| CIFAR-100 | 2000 | ER+ID | 18.99 ± 1.09 | 86.79 ± 0.30 | 56.15 ± 0.31 | pth |
| Tiny-ImageNet | 4000 | ER+ID | 20.73 ± 0.72 | 74.30 ± 0.97 | 43.15 ± 1.20 | pth |

The checkpoints are saved under the experiments/ folder.
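
To inspect a downloaded checkpoint, something like the following should work (a minimal sketch; the file name is hypothetical and the saved format depends on the training scripts):

import torch

# Hypothetical file name -- substitute the .pth you actually downloaded
state = torch.load("experiments/er_id_cifar10_buffer200.pth", map_location="cpu")
print(list(state.keys())[:5] if isinstance(state, dict) else type(state))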

3. Tools

MLflow visualization
  1. Setup:
pip install mlflow
  2. All results are stored in MLflow under the repository's mlruns directory. You can run the MLflow UI server locally:
mlflow ui

Then go to http://127.0.0.1:5000/#/ in your browser to see the results of all the experiments we ran, along with the exact hyperparameters used in each run.
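
The logged runs can also be loaded programmatically (a minimal sketch, assuming MLflow >= 2.0 for the search_all_experiments argument):

import mlflow

# Point MLflow at the mlruns directory shipped with the repository
mlflow.set_tracking_uri("file:./mlruns")

# Load every logged run, with its params and metrics, into a pandas DataFrame
runs = mlflow.search_runs(search_all_experiments=True)
print(runs.filter(regex="run_id|params|metrics").head())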

4. Citation

If our project is helpful for your research, please consider citing:

@article{liu2026ider,
  title={IDER: IDempotent Experience Replay for Reliable Continual Learning},
  author={Liu, Zhanwang and Li, Yuting and Gao, Haoyuan and Li, Yexin and Kong, Linghe and Sun, Lichao and Huang, Weiran},
  journal={arXiv preprint arXiv:2603.00624},
  year={2026}
}

5. Acknowledgement

Supported by the Shanghai Municipal Special Program for Basic Research on General AI Foundation Models (Grant No. 2025SHZDZX025G03) and the SJTU Kunpeng & Ascend Center of Excellence.

This project is heavily based on Mammoth and weight-interpolation-cl. We sincerely thank the authors of these works for sharing such great libraries as open-source projects.

✨ Feel free to contribute and reach out if you have any questions! ✨
📧 Email: zhanwnagliu@gmail.com
