
What causes the large performance improvement on the 3DPW dataset? #58

Closed
linjing7 opened this issue Aug 17, 2022 · 2 comments

Comments

@linjing7

Hi, thank you very much for your excellent work.

In the journal version, the performance of PyMAF is significantly better than in the conference version. Could you please tell me the reason? Is it caused by differences in the training settings?

@HongwenZhang
Owner

Hi, thanks for your questions.

Yes, it is caused by the differences between the training settings used in the conference version (which follows the setting of SPIN) and the journal version (which follows the setting of PARE). Basically, when trained with the same setting as PARE, the baseline (HMR architecture) already performs significantly better than in the conference version. Compared with SPIN, the training setting of PARE mainly differs in the following aspects (a rough sketch of these differences follows the list):

  1. Using the EFT pseudo ground-truth labels.
  2. Initializing the backbone network with pose_resnet weights pretrained on a 2D pose estimation task.
  3. Using a different dataset schedule during training (i.e., 100% COCO-EFT first, then a mixture of datasets).
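To make the comparison concrete, here is a minimal, purely illustrative sketch of the three differences. All keys, file names, and ratios below are placeholders I made up for illustration; they are not PyMAF's or PARE's actual configuration.

```python
# Hypothetical sketch of the SPIN-style vs. PARE-style training settings
# described above. Every key, path, and ratio here is a placeholder.

pare_style_setting = {
    # 1. EFT pseudo ground-truth labels for the in-the-wild 2D datasets.
    "pseudo_gt": "EFT",
    # 2. Backbone initialized from pose_resnet weights pretrained on 2D pose estimation.
    "backbone_init": "pose_resnet_2d_pretrained.pth",  # placeholder path
    # 3. Two-stage dataset schedule: 100% COCO-EFT first, then a dataset mixture.
    "stage_1_datasets": {"coco-eft": 1.0},
    "stage_2_datasets": {"coco-eft": 0.4, "other_datasets": 0.6},  # placeholder ratios
}

spin_style_setting = {
    "pseudo_gt": "SPIN fits",
    "backbone_init": "imagenet_pretrained",  # ImageNet-only initialization
    "datasets": "fixed mixture throughout training",
}

if __name__ == "__main__":
    for name, cfg in [("PARE-style", pare_style_setting), ("SPIN-style", spin_style_setting)]:
        print(name, cfg)
```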

More details about the training of PARE can be found in its paper.

The numerical evaluation and its fairness are also discussed in our survey paper.

@linjing7
Author

Okay, thank you very much for your reply!
