Official repository for the (Oral) AAAI'23 paper *Sparse Coding in a Dual Memory System for Lifelong Learning*.

We extend the CLS-ER repo with our method.
- Use `python main.py` to run experiments.
- Use the argument `--load_best_args` to use the best hyperparameters for each evaluation setting from the paper.
- To reproduce the results in the paper, run:

      python main.py --dataset <dataset> --model <model> --experiment_id <experiment_id> --buffer_size <buffer_size> --load_best_args

  For example:

      python main.py --dataset seq-cifar10 --model scommer --buffer_size 200 --experiment_id scommer-c10-200 --load_best_args
      python main.py --dataset seq-cifar100 --model scommer --buffer_size 200 --experiment_id scommer-c100-200 --load_best_args
      python main.py --dataset gcil-cifar100 --weight_dist unif --model scommer --buffer_size 200 --experiment_id scommer-gcil-unif-200 --load_best_args
      python main.py --dataset gcil-cifar100 --weight_dist longtail --model scommer --buffer_size 200 --experiment_id scommer-gcil-longtail-200 --load_best_args
Requirements:

- `torch==1.7.0`
- `torchvision==0.9.0`
- `quadprog==0.1.7`
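
A minimal setup sketch for installing the pinned dependencies above, assuming a pip-based Python environment (the environment name and the use of `venv` are illustrative; this README does not specify an installation procedure):

```bash
# Create and activate an isolated environment (name is illustrative).
python -m venv scommer-env
source scommer-env/bin/activate

# Install the versions listed in the requirements above.
pip install torch==1.7.0 torchvision==0.9.0 quadprog==0.1.7
```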