Official PyTorch repository of the paper 'Essentials for Class Incremental Learning'.

This work presents a straightforward class-incremental learning system that focuses on the essential components and already exceeds the state of the art without integrating sophisticated modules.
To install requirements:

```shell
pip install -r requirements.txt
```
The following scripts contain both training and evaluation code. The model is evaluated after each phase of class-IL.
To train the base CCIL model:

```shell
bash ./scripts/run_cifar.sh
bash ./scripts/run_imagenet100.sh
bash ./scripts/run_imagenet1k.sh
```
To train CCIL + self-distillation:

```shell
bash ./scripts/run_cifar_w_sd.sh
bash ./scripts/run_imagenet100_w_sd.sh
bash ./scripts/run_imagenet1k_w_sd.sh
```
**CIFAR-100**

| Model name | Avg Acc (5 iTasks) | Avg Acc (10 iTasks) |
|---|---|---|
| CCIL | 66.44 | 64.86 |
| CCIL + SD | 67.17 | 65.86 |
**ImageNet-100**

| Model name | Avg Acc (5 iTasks) | Avg Acc (10 iTasks) |
|---|---|---|
| CCIL | 77.99 | 75.99 |
| CCIL + SD | 79.44 | 76.77 |
**ImageNet-1k**

| Model name | Avg Acc (5 iTasks) | Avg Acc (10 iTasks) |
|---|---|---|
| CCIL | 67.53 | 65.61 |
| CCIL + SD | 68.04 | 66.25 |
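Avg Acc above is the average incremental accuracy: the mean of the top-1 accuracies measured after each phase. A minimal sketch of that averaging (an assumption about the metric, not the repository's evaluation code):

```python
def average_incremental_accuracy(per_phase_acc):
    """Mean of the top-1 accuracies (in %) measured after each phase,
    including the initial phase on the base classes."""
    return sum(per_phase_acc) / len(per_phase_acc)

# Example with made-up numbers (not from the paper):
print(average_incremental_accuracy([80.0, 70.0, 60.0]))  # → 70.0
```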
**Distillation Methods**

- Knowledge Distillation (`--kd`, `--w-kd X`), X is the weight of the KD loss, default=1.0
- Representation Distillation (`--rd`, `--w-rd X`), X is the weight of the cos-RD loss, default=0.05
- Contrastive Representation Distillation (`--nce`, `--w-nce X`), only valid for CIFAR-100, X is the weight of the NCE loss
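For intuition, the knowledge-distillation term behind `--kd` is typically a temperature-softened KL divergence between the old model's and the new model's logits. A minimal numpy sketch, with the temperature value and reduction chosen here as assumptions rather than taken from the repository:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in Hinton et al.'s formulation."""
    p = softmax(teacher_logits, T)  # soft targets from the old (teacher) model
    q = softmax(student_logits, T)  # predictions of the new (student) model
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean() * T * T)
```

The loss is zero when student and teacher logits agree and grows as they diverge.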
**Regularization for the first task**

- Self-distillation (`--num-sd X`, `--epochs-sd Y`), X is the number of generations, Y is the number of self-distillation epochs
- Mixup (`--mixup`, `--mixup-alpha X`), X is the mixup alpha value, default=0.1
- Heavy Augmentation (`--aug`)
- Label Smoothing (`--label-smoothing`, `--smoothing-alpha X`), X is the smoothing alpha value, default=0.1
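As a reference point for the `--mixup` option, standard mixup forms a convex combination of two samples and their one-hot labels, with the mixing coefficient drawn from a Beta(alpha, alpha) distribution. A minimal sketch (the exact sampling inside this repository may differ):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.1):
    """Mix two samples and their one-hot labels (Zhang et al., 2018).

    alpha=0.1 matches the flag's default above; lam is drawn from
    Beta(alpha, alpha), so it lies in (0, 1).
    """
    lam = np.random.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2  # mixed input
    y = lam * y1 + (1 - lam) * y2  # mixed (soft) label
    return x, y
```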
**Incremental class setting**

- No. of base classes (`--start-classes 50`)
- 5 phases (`--new-classes 10`)
- 10 phases (`--new-classes 5`)
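On a 100-class dataset, `--start-classes 50` with `--new-classes 10` yields 5 incremental phases, and `--new-classes 5` yields 10. A small sketch of that split (the helper name is hypothetical, not from the repository):

```python
def class_splits(total_classes, start_classes, new_classes):
    """Return the list of class-index lists introduced at each phase:
    one base task followed by (total - start) / new incremental phases."""
    splits = [list(range(start_classes))]
    for first in range(start_classes, total_classes, new_classes):
        splits.append(list(range(first, first + new_classes)))
    return splits

# 50 base classes + 10 new classes per phase → 5 incremental phases
print(len(class_splits(100, 50, 10)) - 1)  # → 5
```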
- Cosine learning rate decay (`--cosine`)
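Cosine decay anneals the learning rate from its base value down to zero over training. A minimal sketch of the schedule (annealing to exactly zero is an assumption; the scripts may use a different floor):

```python
import math

def cosine_lr(base_lr, epoch, total_epochs):
    """Cosine annealing: base_lr at epoch 0, decaying smoothly to 0
    at total_epochs."""
    return 0.5 * base_lr * (1 + math.cos(math.pi * epoch / total_epochs))
```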
**Save and Load**
- Experiment name (`--exp-name X`)
- Save checkpoints (`--save`)
- Resume from checkpoints (`--resume`, `--resume-path X`); only resuming from the first snapshot is supported
**Citation**

```bibtex
@InProceedings{Mittal_2021_CVPR,
    author    = {Mittal, Sudhanshu and Galesso, Silvio and Brox, Thomas},
    title     = {Essentials for Class Incremental Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {3513-3522}
}
```