The Saliency Sampler is not used in the experiment #4

Open
kevinlemon opened this issue Dec 11, 2023 · 2 comments

Comments

@kevinlemon
Thanks for your code.
(1) In parse_args.py, the saliency sampler flag defaults to 0, and neither of the experiment commands (in "Training and Testing" or "Running Batch Experiments") passes an option to override that default:
# Saliency sampler module
ss_parser = parser.add_argument_group('Saliency Sampler')
ss_parser.add_argument('--ss', type=int, default=0, help='toggle saliency sampler [0,1]')
ss_parser.add_argument('--ss_pretrain', type=int, default=0, help='toggle saliency sampler ImageNet pretraining')
ss_parser.add_argument('--ss_dim', default=None, type=int, nargs='+', help='width and height of saliency network input in pixels')
ss_parser.add_argument('--ss_out_dim', default=None, type=int, nargs='+', help='width and height of saliency network output in pixels')
ss_parser.add_argument('--ss_layers', default=None, type=int, help='number of layers to include in saliency net')
ss_parser.add_argument('--ss_sparsity', default=0, type=float, help='weighting for sparsity loss')
ss_parser.add_argument('--ss_temporal', default=0, type=float, help='weighting for temporal consistency loss')
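For reference, a minimal, self-contained sketch of how these flags behave with argparse (reusing the two flags quoted above; the command-line values shown are hypothetical, not taken from the repo's experiment scripts). Passing --ss 1 on the command line is what would override the default of 0:

```python
import argparse

parser = argparse.ArgumentParser()
ss_parser = parser.add_argument_group('Saliency Sampler')
ss_parser.add_argument('--ss', type=int, default=0,
                       help='toggle saliency sampler [0,1]')
ss_parser.add_argument('--ss_dim', default=None, type=int, nargs='+',
                       help='width and height of saliency network input in pixels')

# Without --ss on the command line, the default of 0 is kept,
# so the saliency sampler would stay disabled.
args = parser.parse_args(['--ss', '1', '--ss_dim', '64', '64'])
print(args.ss)      # 1
print(args.ss_dim)  # [64, 64]
```

So unless the training command explicitly includes something like --ss 1, the module stays off, which is the point of question (1).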

(2) The results contain metrics prefixed with ppg_, our_, and pred_. What do these prefixes mean?
test_Last ppg_NegMCC: -0.5935
test_Last ppg_NegSNR: -14.5025
test_Last ppg_IPR: 0.4602
test_Last ppg_NegPC: 0.3113
test_Last ppg_MVTL: 0.0000
test_Last our_v_gt_sd: 5.2483
test_Last our_v_gt_rmse: 6.1034
test_Last our_v_gt_mae: 4.6451
test_Last our_v_gt_corr: 0.8769
test_Last pred_v_gt_sd: 6.2129
test_Last pred_v_gt_rmse: 6.9774
test_Last pred_v_gt_mae: 5.2871
test_Last pred_v_gt_corr: 0.8370
test_Last pred_v_our_sd: 1.1367
test_Last pred_v_our_rmse: 1.1382
test_Last pred_v_our_mae: 0.8260
test_Last pred_v_our_corr: 0.9955

@lbh111w commented Dec 13, 2023

I also have this question (2).

1 similar comment
@Mollyzml

I also have this question (2).
