# Learn to Categorize or Categorize to Learn? Self-Coding for Generalized Category Discovery

**Learn to Categorize or Categorize to Learn? Self-Coding for Generalized Category Discovery** (NeurIPS 2023)
By Sarah Rastegar, Hazel Doughty, and Cees Snoek.


## Dependencies

```shell
pip install -r requirements.txt
```

## kmeans_pytorch Installation

Our work relies heavily on kmeans_pytorch for cluster assignments, so make sure it is correctly installed to reproduce the results from the paper. You can install kmeans_pytorch directly inside the repository by running:

```shell
cd InfoSieve
git clone https://github.com/subhadarship/kmeans_pytorch
cd kmeans_pytorch
pip install --editable .
```

**Note:** While using scikit-learn's KMeans yields improvements, the results in the paper were obtained with kmeans_pytorch.
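To make the role of the clustering step concrete, below is a minimal, pure-Python sketch of the semi-supervised k-means idea: labelled points stay fixed to their class, while unlabelled points are re-assigned to the nearest centroid each iteration. This is an illustration only, not the repo's implementation (which runs kmeans_pytorch on learned features); all names here are assumptions.

```python
# Illustrative semi-supervised k-means sketch (NOT the repo's code).

def mean(pts):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(pts)
    return tuple(sum(p[d] for p in pts) / n for d in range(len(pts[0])))

def sq_dist(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def semi_supervised_kmeans(points, labels, num_clusters, iters=20):
    """points: list of tuples; labels: list of int (known class) or None."""
    # Seed each centroid from the labelled points of that class when
    # available, otherwise from an arbitrary point.
    centroids = []
    for k in range(num_clusters):
        members = [p for p, l in zip(points, labels) if l == k]
        centroids.append(mean(members) if members else points[k % len(points)])
    assign = list(labels)
    for _ in range(iters):
        # Assignment step: labelled points are fixed to their class;
        # unlabelled points go to the nearest centroid.
        assign = []
        for p, l in zip(points, labels):
            if l is not None:
                assign.append(l)
            else:
                assign.append(min(range(num_clusters),
                                  key=lambda k: sq_dist(p, centroids[k])))
        # Update step: recompute each centroid from its current members.
        for k in range(num_clusters):
            members = [p for p, a in zip(points, assign) if a == k]
            if members:
                centroids[k] = mean(members)
    return assign, centroids
```

For example, with two labelled anchors at (0, 0) and (10, 10), the four unlabelled neighbours fall into the expected clusters.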

## Config

Set the paths to datasets, pre-trained models, and the desired log directories in `config.py`.

Set `SAVE_DIR` (log-file destination) and `PYTHON` (path to the Python interpreter) in the `bash_scripts` scripts.
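As a rough sketch, the entries in `config.py` would look something like the fragment below. The exact variable names are assumptions for illustration; check `config.py` itself for the names the code actually reads.

```python
# Hypothetical config.py fragment -- variable names are assumed, paths are
# placeholders to be replaced with your local directories.
cub_root = '/path/to/datasets/CUB_200_2011'      # dataset root (assumed name)
dino_pretrain_path = '/path/to/pretrained/dino_vitbase16_pretrain.pth'  # pre-trained model
exp_root = '/path/to/logs'                       # log / checkpoint directory
```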

## Datasets

We use fine-grained benchmarks in this paper, including CUB, Stanford Cars (Scars), Oxford Pets, and FGVC-Aircraft.

We also use generic object recognition datasets.

## Scripts

**Train representation:** To run the code with the hyperparameters used in the paper, execute:

```shell
python contrastive_training.py
```

This script trains the representations, extracts features, and fits the semi-supervised KMeans algorithm. It also reports final evaluations on both the best checkpoint and the last checkpoint.

**Dataset-specific hyperparameters:** Set the `unsupervised_smoothing` parameter to 1.0 for CUB, to 0.5 for the other fine-grained datasets (Scars, Pets, Aircraft), and to 0.1 for the generic datasets.
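The per-dataset values above can be captured in a small lookup table, sketched below. The parameter name `unsupervised_smoothing` comes from the text; the dataset-name keys (and how the value is actually passed to the script) are assumptions — check `contrastive_training.py` and `config.py` for the real mechanism.

```python
# Smoothing values as stated in the README; dataset-name keys are assumed.
UNSUPERVISED_SMOOTHING = {
    'cub': 1.0,                                    # CUB
    'scars': 0.5, 'pets': 0.5, 'aircraft': 0.5,    # other fine-grained
}

def smoothing_for(dataset_name):
    # Generic object recognition datasets use 0.1, so fall back to it
    # for any name not listed above.
    return UNSUPERVISED_SMOOTHING.get(dataset_name, 0.1)
```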

## Evaluation

In the Final Reports section printed at the end of a run, only the evaluations listed under:

- `Reports for the best checkpoint:`
- `Reports for the last checkpoint:`

are the test-time evaluations of the checkpoints. Additionally, note that `Train ACC Unlabelled_v2` is the metric reported by our work and prior studies.

## Citation

If you use this code in your research, please consider citing our paper:

```bibtex
@inproceedings{rastegar2023learn,
  title={Learn to Categorize or Categorize to Learn? Self-Coding for Generalized Category Discovery},
  author={Sarah Rastegar and Hazel Doughty and Cees Snoek},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023},
  url={https://openreview.net/forum?id=m0vfXMrLwF}
}
```

## Acknowledgements

The codebase is mainly built on https://github.com/sgvaze/generalized-category-discovery.

## Further Resources

If you found our code helpful and are interested in exploring more, also check out our ECCV 2024 paper, SelEx: Self-Expertise in Fine-Grained Generalized Category Discovery.