# ReDFeat

This repository contains the official code for the multimodal feature ReDFeat, presented in "ReDFeat: Recoupling Detection and Description for Multimodal Feature Learning" (TIP 2023).

## Requirements

The code is built on PyTorch 1.10 and Kornia 0.6.2. Later versions should also be compatible.
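A minimal environment setup is sketched below, assuming a pip-based install; only the two packages named above are pinned by this README, so any other dependencies your environment needs are left to you.

```
# Install the dependencies listed above; later versions should also work.
pip install torch==1.10.0 kornia==0.6.2
```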

## Datasets

Please clone the Multimodal_Feature_Evaluation benchmark:

```
git clone https://github.com/ACuOoOoO/Multimodal_Feature_Evaluation
```

Then follow its README.md to build the training and test data.

## Training

Run

```
python train.py --image_type=VIS_IR --name=IR
```

The major parameters include:

- `image_type`: the modality pair (`VIS_NIR`, `VIS_IR`, or `VIS_SAR`)
- `name`: name of the checkpoint
- `datapath`: path to the training data
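For example, to train on the visible/near-infrared pairs with an explicit data location (the path below is a placeholder; point `--datapath` at the directory produced by the benchmark's data-preparation steps):

```
# Hypothetical invocation combining the three parameters above;
# ./data/VIS_NIR is a placeholder for your prepared training data.
python train.py --image_type=VIS_NIR --name=NIR --datapath=./data/VIS_NIR
```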

## Evaluation

Evaluation code for feature extraction, matching, and transform estimation is included in the Multimodal_Feature_Evaluation benchmark (see the sketch after this list):

- `extract_ReDFeat.py`: extracts ReDFeat for the three modality types.
- `match.py`: reproduces the feature matching experiments in the paper.
- `reproj.py`: reproduces the image registration experiments in the paper.
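A minimal end-to-end pass might look like the following. The script names come from the benchmark; whether they need additional command-line arguments (for example, to select a specific checkpoint) is not documented here, so the bare invocations are an assumption.

```
# Run inside the Multimodal_Feature_Evaluation checkout.
python extract_ReDFeat.py   # extract ReDFeat keypoints and descriptors for the three modality types
python match.py             # reproduce the feature matching experiments
python reproj.py            # reproduce the image registration experiments
```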
