Hi there, the output is simply the trained weights for the 3 encoders, saved in the log_path. The training data are the same datasets used for the smirk pipeline, but only the predicted landmarks and the predicted MICA shape parameters are used as targets. The existing code fully supports this.
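For anyone looking for these weights later: since the pipeline is PyTorch-based, the saved encoder weights can most likely be inspected as ordinary checkpoints. The sketch below is only an illustration; the checkpoint filename and the state-dict key names are guesses, not something confirmed by the repo, so adjust them to whatever appears in your log_path.

```python
import torch

# Hypothetical path: replace with the checkpoint actually written to your log_path.
ckpt = torch.load("logs/my_run/checkpoint.pt", map_location="cpu")

# List the top-level keys to locate the per-encoder state dicts
# (the three encoders mentioned above).
print(list(ckpt.keys()))

# If the checkpoint stores one state dict per encoder, each can be loaded into the
# matching module, e.g. (key name "expression_encoder" is an assumption):
# expression_encoder.load_state_dict(ckpt["expression_encoder"])
```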
What file format does this output? Do you have an example image and data pair you could share?