
Incremental Structural Adaptation for Camouflaged Object Detection

News 📰

  • Nov 21, 2024: First release.

Overview

This repository provides a PyTorch implementation of SANet, a Structure-Adaptive Network for Camouflaged Object Detection (COD). SANet tackles the difficulty of detecting camouflaged objects with an incremental structural adaptation mechanism that refines segmentation and improves localization in complex scenes. Its key feature is the adaptive integration of high-resolution structural information, which enables fine-grained detection of objects that closely resemble their backgrounds.
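
The exact architecture is defined by the code in this repository. Purely as an illustration of the idea described above, the toy module below upsamples coarse semantic features to a higher structural resolution and fuses the two; every name, channel count, and shape here is hypothetical and is not taken from SANet itself.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyStructuralFusion(nn.Module):
        """Illustrative only: fuse coarse semantic features with
        high-resolution structural features, loosely following the
        idea sketched in the overview (not the actual SANet module)."""
        def __init__(self, coarse_ch, struct_ch, out_ch):
            super().__init__()
            self.fuse = nn.Sequential(
                nn.Conv2d(coarse_ch + struct_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )

        def forward(self, coarse_feat, struct_feat):
            # Bring coarse features up to the structural (high) resolution, then fuse.
            coarse_up = F.interpolate(coarse_feat, size=struct_feat.shape[-2:],
                                      mode="bilinear", align_corners=False)
            return self.fuse(torch.cat([coarse_up, struct_feat], dim=1))

    # Example: a 1/16-resolution semantic map refined with 1/4-resolution structure.
    coarse = torch.randn(1, 256, 22, 22)
    struct = torch.randn(1, 64, 88, 88)
    print(ToyStructuralFusion(256, 64, 64)(coarse, struct).shape)  # torch.Size([1, 64, 88, 88])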

Usage

Installation

To use this repository, follow the steps below to set up the environment:

  1. Clone the repository:

    git clone https://github.com/vstar37/SANet.git
    cd SANet
    
  2. Install the required dependencies. Creating a dedicated virtual environment first is recommended:

    # PyTorch==2.0.1 is used for faster training with compilation.
    conda create -n sanet python=3.10 -y && conda activate sanet
    pip install -r requirements.txt
    
  3. Download the datasets: After setting up the environment, download the training and test datasets from the provided links. Unzip them into the datasets folder under the project root so that the folder structure looks like this:

    SANet/
    ├── datasets/
    │   ├── CAMO_TestingDataset/
    │   ├── CHAMELEON_TestingDataset/
    │   ├── COD10K_TestingDataset/
    │   ├── NC4K_TestingDataset/
    │   └── COD10K_CAMO_CHAMELEON_TrainingDataset/
    ├── requirements.txt
    └── … (other project files)
  4. Download the weights: After downloading the datasets, download the pretrained weights for the model and the backbone. These weights are required to initialize the model for training and inference (a quick setup-check sketch follows this list).

    • Model Weights: The model weights should be placed in the ckpt/COD directory.
    • Backbone Weights: The backbone weights should be placed in the lib/weights/backbones directory.
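
To quickly confirm the setup, the sketch below checks the PyTorch installation, the dataset folders from step 3, and the weight directories from step 4. It is illustrative and not part of the repository; the exact weight file names depend on the release you downloaded, so only the directories are verified.

    # setup_check.py -- illustrative sanity check, not part of the official repository.
    # Paths follow the layout described above; adjust them if your checkout differs.
    import os
    import torch

    def check_dir(path, what):
        ok = os.path.isdir(path) and len(os.listdir(path)) > 0
        print(f"[{'OK' if ok else 'MISSING'}] {what}: {path}")
        return ok

    print(f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")

    # Dataset folders from the structure shown in step 3.
    for name in [
        "CAMO_TestingDataset",
        "CHAMELEON_TestingDataset",
        "COD10K_TestingDataset",
        "NC4K_TestingDataset",
        "COD10K_CAMO_CHAMELEON_TrainingDataset",
    ]:
        check_dir(os.path.join("datasets", name), "dataset")

    # Weight directories from step 4.
    check_dir(os.path.join("ckpt", "COD"), "model weights")
    check_dir(os.path.join("lib", "weights", "backbones"), "backbone weights")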

Run

# Train & Test & Evaluation
./sub.sh RUN_NAME GPU_NUMBERS_FOR_TRAINING GPU_NUMBERS_FOR_TEST
# Example: ./sub.sh RUN_NAME 0,1,2,3,4,5,6,7 0

# See train.sh / test.sh for only training / test-evaluation.
# After the evaluation, run `gen_best_ep.py` to select the best checkpoint according to a chosen metric (Sm or wFm).
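
For reference, the idea behind picking the best checkpoint by a metric can be sketched as below. This is a hypothetical stand-in, not the actual `gen_best_ep.py`: the real script may parse a different results format, and the scores shown are placeholders.

    # Hypothetical sketch of selecting the best epoch by a chosen metric (Sm or wFm).
    # Not the repository's gen_best_ep.py; the results format is assumed.
    import argparse

    def pick_best(results, metric="Sm"):
        """results: dict mapping epoch -> {metric_name: value}; higher is better for Sm/wFm."""
        return max(results.items(), key=lambda kv: kv[1][metric])

    if __name__ == "__main__":
        parser = argparse.ArgumentParser()
        parser.add_argument("--metric", default="Sm", choices=["Sm", "wFm"])
        args = parser.parse_args()

        # Example scores; in practice these would be parsed from the evaluation output.
        scores = {
            100: {"Sm": 0.842, "wFm": 0.771},
            110: {"Sm": 0.851, "wFm": 0.779},
            120: {"Sm": 0.848, "wFm": 0.783},
        }
        best_ep, best_scores = pick_best(scores, args.metric)
        print(f"Best epoch by {args.metric}: {best_ep} ({best_scores[args.metric]:.3f})")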
