This is a PyTorch implementation of our CDNet.
At the current stage, the test code of our CDNet is publicly available. To help readers understand the internal details of the network structure, we keep the variable names consistent with the symbols used in the main paper. The training code will be released publicly upon acceptance of this paper.
- PyTorch 1.4.0
- opencv-python 3.4.2
git clone https://github.com/blanclist/CDNet.git
We evaluate our CDNet on seven commonly used datasets: NJU2K, STERE, DES, NLPR, SSD, LFSD, and DUT. These datasets can be downloaded from the links provided at http://dpfan.net/d3netbenchmark/.
We provide two pre-trained CDNets:
- (CDNet.pth) CDNet trained on NJU2K+NLPR: GoogleDrive | BaiduYun (fetch code: j4gx). The evaluation results are listed in Table I of the main paper.
- (CDNet_2.pth) CDNet trained on NJU2K+NLPR+DUT: GoogleDrive | BaiduYun (fetch code: go86). The evaluation results are listed in Table II of the main paper.
To run the test code, you first need to set the following variables in "CDNet/code/config.py" (a sketch of an example config.py follows this list):
- "img_base": The directory path of RGB images.
- "depth_base": The directory path of depth maps.
- "checkpoint_path": The path of pre-trained model (i.e., the path of "CDNet.pth" or "CDNet_2.pth").
- "save_base": The directory path to save the saliency maps produced by the model.
cd CDNet/code/
python main.py
Notes on the Input Formats: The input RGB images and depth maps will be resized to 224×224 and properly normalized by the test code. The pre-trained CDNet requires the input depth maps to follow this criterion: the closer an object is to the sensor, the lower its depth values. For CDNet to generate saliency maps correctly, please ensure that the input depth maps follow this criterion before testing; a sketch for converting depth maps that follow the opposite convention is given below.
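If your depth maps follow the opposite convention (closer objects have higher values), you can invert them before testing. Below is a minimal sketch using OpenCV, assuming 8-bit single-channel depth maps stored as image files; the directory paths are hypothetical placeholders:

```python
import os
import cv2

depth_base = "/path/to/depth/"      # hypothetical directory of original depth maps
out_base = "/path/to/depth_inv/"    # hypothetical directory for inverted depth maps
os.makedirs(out_base, exist_ok=True)

for name in os.listdir(depth_base):
    # Read the depth map as an 8-bit single-channel image.
    depth = cv2.imread(os.path.join(depth_base, name), cv2.IMREAD_GRAYSCALE)
    # Invert so that closer objects get lower depth values,
    # matching the criterion expected by the pre-trained CDNet.
    inverted = 255 - depth
    cv2.imwrite(os.path.join(out_base, name), inverted)
```

After inversion, point "depth_base" in config.py to the directory of inverted depth maps.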