
Sill-Net: Feature Augmentation with Separated Illumination Representation

This repository is the official implementation of the Separating-Illumination Network (Sill-Net).


Usage

  1. Clone the repository. The default folder name is 'Sill-Net'.

    git clone https://github.com/lanfenghuanyu/Sill-Net.git
    
  2. Download the datasets used in our paper from here. These datasets are modified from existing datasets; please cite the original dataset papers if you use them in your research.

    • Organize the file structure as below.
    |__ Sill-Net
        |__ code
        |__ db
            |__ belga
            |__ flickr32
            |__ toplogo10
            |__ GTSRB
            |__ TT100K
            |__ exp_list
    
    • Training and test splits are defined as text files in 'Sill-Net/db/exp_list' folder.
  3. Set the global repository path in 'Sill-Net/code/config.json'.

  4. Run main.py to train and test the model.
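
A minimal sketch of step 3, assuming 'config.json' stores the global repository path under a "repo_path" key (the actual key name in the repository may differ):

```python
import json
from pathlib import Path

def load_repo_path(config_file="Sill-Net/code/config.json"):
    """Read the global repository path from the config file.

    The "repo_path" key is an assumption for illustration; check
    'Sill-Net/code/config.json' for the actual key names.
    """
    with open(config_file) as f:
        cfg = json.load(f)
    return Path(cfg["repo_path"])
```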

Generalized one/few-shot models

  1. How to train: Our training is based on PT-MAP, referring to the code here. Simply add a few lines of code during training to randomly sample illuminations and scale them to the same size as the support samples (the inputs, or the features in the hidden layers). Then mix up the support samples with the illumination features for training. The illumination repository can be produced by yourself using Sill-Net, or simply download ours, released here.
  2. How to reproduce: Start from the WRN models pretrained on miniImageNet, CUB and CIFAR-FS. Then train the models further as described in the 1st step. Our final trained models are released here. Please load the models using the file io_utils.py (do not untar the model files).
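
The mix-up in the 1st step can be sketched as below. This is a simplified illustration using NumPy in place of the repository's PyTorch tensors; the function name, the fixed mixing ratio, and the nearest-neighbour resize are assumptions, not the repository's exact code.

```python
import numpy as np

def mix_illumination(support, illum_bank, alpha=0.5, rng=None):
    """Mix support samples with randomly sampled illumination features.

    support:     (N, C, H, W) support samples (inputs or hidden-layer features)
    illum_bank:  (M, C, h, w) separated illumination representations
    Returns an augmented batch with the same shape as `support`.
    """
    rng = rng or np.random.default_rng()
    # Randomly sample one illumination per support sample.
    idx = rng.integers(0, len(illum_bank), size=len(support))
    illum = illum_bank[idx]
    # Scale each illumination map to the spatial size of the support
    # samples (nearest-neighbour here; the repository may interpolate).
    N, C, H, W = support.shape
    h, w = illum.shape[2:]
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    illum = illum[:, :, rows][:, :, :, cols]
    # Mix up content and illumination features.
    return alpha * support + (1 - alpha) * illum
```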

Training Tips

  1. For better results, increase the batch size (64 or 128). With limited GPU memory, set the batch size to 16.

  2. Adjust the number of support samples ('choose_sup' = 1 or more) per batch to balance training speed against memory usage.
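
The trade-off in these tips can be sketched as follows. The parameter names follow the text ('choose_sup', batch size), but the batch-assembly logic is an assumption for illustration, not the repository's actual sampler:

```python
import numpy as np

def build_batch(templates, batch_size=16, choose_sup=1, rng=None):
    """Assemble a training batch of `batch_size` classes, each with
    `choose_sup` support samples.

    templates: (num_classes, num_samples, C, H, W) pool of support samples.
    Memory use grows roughly with batch_size * choose_sup, so reduce
    either one when GPU memory is tight.
    """
    rng = rng or np.random.default_rng()
    classes = rng.choice(len(templates), size=batch_size, replace=False)
    batch = [templates[c][rng.integers(0, len(templates[c]), size=choose_sup)]
             for c in classes]
    return np.stack(batch)  # (batch_size, choose_sup, C, H, W)
```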
