
Bug in the evaluation #3

Open
XiaoyuBIE1994 opened this issue Apr 28, 2023 · 0 comments
Hi Droliven,

If I'm not wrong, there might be a small issue in the evaluation function. When iterating over the data generator, a sample is skipped whenever its number of multi-modal GTs is 1 (#L280-L281), so in the final metric computation (#L369-L37) the summed ADE/FDE/... should be divided by the actual number of examples that were evaluated, which is smaller than i+1. A minimal sketch of this correction follows the result tables below. After applying it, the final results from the provided pretrained models should be:

HumanEva

| ADE | FDE | MMADE | MMFDE |
| --- | --- | --- | --- |
| 0.234 | 0.247 | 0.350 | 0.327 |

Human3.6M

| ADE | FDE | MMADE | MMFDE |
| --- | --- | --- | --- |
| 0.378 | 0.495 | 0.483 | 0.525 |
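For concreteness, here is a minimal sketch of the corrected averaging. The names `data_gen` and `compute_ade_fde` are assumptions for illustration, not the repository's actual API:

```python
# Minimal sketch of the corrected averaging (names such as `data_gen` and
# `compute_ade_fde` are assumptions, not the repository's actual API).
def evaluate(data_gen, compute_ade_fde):
    ade_sum, fde_sum = 0.0, 0.0
    num_evaluated = 0  # count only the samples that actually contribute

    for pred, gt_multi in data_gen:
        if len(gt_multi) == 1:
            # samples with a single multi-modal GT are skipped,
            # as in the original loop (#L280-L281)
            continue
        ade, fde = compute_ade_fde(pred, gt_multi)
        ade_sum += ade
        fde_sum += fde
        num_evaluated += 1

    # divide by the number of samples actually evaluated, not by i + 1
    return ade_sum / num_evaluated, fde_sum / num_evaluated
```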

Best,
