S-Prompts Learning with Pre-trained Transformers: An Occam’s Razor for Domain Incremental Learning
Yabin Wang, Zhiwu Huang, Xiaopeng Hong. 2022 Conference on Neural Information Processing Systems (NeurIPS 22).
[Paper]
Create the virtual environment for S-Prompts.
conda env create -f environment.yaml
After this, you will get a new environment named sp for running S-Prompts experiments. Activate it with:
conda activate sp
Thanks to laitifranz, a requirements.txt is also provided; please refer to that file for a pip-based setup.
Note that only NVIDIA GPUs are supported for now; we use an NVIDIA RTX 3090.
Please refer to the following links to download three standard domain incremental learning benchmark datasets.
Unzip the downloaded files, and you will get the following folders.
CDDB
├── biggan
│ ├── train
│ └── val
├── gaugan
│ ├── train
│ └── val
├── san
│ ├── train
│ └── val
├── whichfaceisreal
│ ├── train
│ └── val
├── wild
│ ├── train
│ └── val
... ...
core50
└── core50_128x128
├── labels.pkl
├── LUP.pkl
├── paths.pkl
├── s1
├── s2
├── s3
...
domainnet
├── clipart
│ ├── aircraft_carrier
│ ├── airplane
│ ... ...
├── clipart_test.txt
├── clipart_train.txt
├── infograph
│ ├── aircraft_carrier
│ ├── airplane
│ ... ...
├── infograph_test.txt
├── infograph_train.txt
├── painting
│ ├── aircraft_carrier
│ ├── airplane
│ ... ...
... ...
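Before launching training, it can help to verify that the unzipped datasets match the layout above. The following is a minimal sketch, assuming the top-level folder names shown in the listings (the `EXPECTED_LAYOUT` table and `check_dataset_root` helper are illustrative, not part of this repository):

```python
import os

# Expected top-level sub-folders for each benchmark, taken from the
# directory listings above (only the folders shown there; adjust if
# your local copies contain more domains).
EXPECTED_LAYOUT = {
    "CDDB": ["biggan", "gaugan", "san", "whichfaceisreal", "wild"],
    "core50": ["core50_128x128"],
    "domainnet": ["clipart", "infograph", "painting"],
}

def check_dataset_root(root, subdirs):
    """Return the expected sub-folders that are missing under `root`."""
    return [d for d in subdirs if not os.path.isdir(os.path.join(root, d))]

# Example usage (replace the paths with your own data_path locations):
# for name, subdirs in EXPECTED_LAYOUT.items():
#     missing = check_dataset_root(f"/data/{name}", subdirs)
#     print(name, "OK" if not missing else f"missing {missing}")
```

If `check_dataset_root` reports missing folders, re-check the unzip step before editing the config files.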
Please change the data_path in the config files to the locations of the datasets.
Currently, there are two options for net_type in the config files: slip and sip. slip denotes S-liPrompts (the CLIP-based language-image variant) and sip denotes S-iPrompts (the ViT-based image variant).
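For example, a config such as configs/cddb_slip.json would contain entries like the following (only the two fields discussed here are shown; the actual config files contain additional parameters, and the path below is a placeholder):

```json
{
  "data_path": "/path/to/CDDB",
  "net_type": "slip"
}
```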
Feel free to change the parameters in the config files. The following scripts will reproduce the main results in our paper:
python main.py --config configs/cddb_slip.json
python main.py --config configs/cddb_sip.json
python main.py --config configs/core50_slip.json
python main.py --config configs/domainnet_slip.json
Please refer to [Evaluation Code].
Please check the MIT license that is listed in this repository.
We thank the following repositories for providing helpful components/functions used in our work.
If you use any content of this repo for your work, please cite the following bib entry:
@inproceedings{wang2022sprompt,
title={S-Prompts Learning with Pre-trained Transformers: An Occam's Razor for Domain Incremental Learning},
author={Wang, Yabin and Huang, Zhiwu and Hong, Xiaopeng},
booktitle={Conference on Neural Information Processing Systems (NeurIPS)},
year={2022}
}