Enhancing Online Continual Learning with Plug-and-Play State Space Model and Class-Conditional Mixture of Discretization

The official repository of CVPR2025 paper "Enhancing Online Continual Learning with Plug-and-Play State Space Model and Class-Conditional Mixture of Discretization"

arXiv

S6MOD Framework

📒 Updates

  • 23 Mar: We released the code of our paper.

🔨 Installation

  • We use the following hardware and software for our experiments:

  • Hardware: NVIDIA Tesla A100 GPUs

  • Software: Please refer to requirements.txt for the detailed package versions. Conda is highly recommended.
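As a minimal setup sketch (the environment name s6mod and the Python version are assumptions on our part; requirements.txt is the authoritative source for package versions):

```shell
# Hypothetical conda setup -- the env name "s6mod" and Python 3.9 are
# illustrative assumptions, not specified by the repository.
conda create -n s6mod python=3.9 -y
conda activate s6mod
pip install -r requirements.txt
```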

➡️ Data Preparation

  • CIFAR-10/100

Torchvision should be able to handle the CIFAR-10/100 dataset automatically. If not, please download the dataset from here and put it in the data folder.

  • TinyImageNet

This codebase should be able to handle the TinyImageNet dataset automatically and save it in the data folder. If not, please refer to this GitHub gist.
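If the automatic download fails, a manual fallback along these lines may work (the CS231n mirror URL is a commonly used source but an assumption here; the gist linked above is the authoritative reference):

```shell
# Manual fallback sketch -- the mirror URL is an assumption; prefer the
# codebase's automatic download or the gist linked above.
wget http://cs231n.stanford.edu/tiny-imagenet-200.zip -P ./data
unzip -q ./data/tiny-imagenet-200.zip -d ./data
```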

🚀 Training

  • Execute the provided scripts to start training:
python main.py --data-root ./data --config ./config/CVPR25/cifar10/ER,c10,m500.yaml
(see cmd.txt for more examples)
  • Training with Weights & Biases sweep (recommended)

Weights & Biases (W&B) sweeps were originally designed for hyperparameter search, but they also make launching multiple runs much easier. Training can be done more elegantly with a W&B sweep, for example:

wandb sweep sweeps/CVPR/ER,cifar10.yaml

Note that you need to set the dataset path in the .yaml file by specifying --data-root-dir. Then run the sweep agent with:

wandb agent $sweepID

The hyperparameter configurations found by our search are located in ./sweeps/CVPR.
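For reference, a W&B sweep .yaml generally follows the shape below. This is an illustrative sketch: the parameter names are hypothetical, not the repository's actual keys; see the files in ./sweeps/CVPR for the real configurations.

```yaml
# Illustrative W&B sweep sketch -- parameter names are hypothetical.
program: main.py
method: grid
command:
  - ${env}
  - python
  - ${program}
  - --data-root-dir=./data
  - ${args}
parameters:
  learning_rate:
    values: [0.1, 0.01]
  mem_size:
    values: [500]
```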

✏️ Citation

If you find our work useful in your research, please consider citing:

@article{liu2024enhancing,
  title={Enhancing Online Continual Learning with Plug-and-Play State Space Model and Class-Conditional Mixture of Discretization},
  author={Liu, Sihao and Yang, Yibo and Li, Xiaojie and Clifton, David A and Ghanem, Bernard},
  journal={arXiv preprint arXiv:2412.18177},
  year={2024}
}

👍 Acknowledgments

This codebase builds on CCLDC. Thank you to all the contributors.
