The top-down models feature the `data_cfg['heatmap_size']` parameter. It seems that this parameter is only used during training, for target generation.

Is the parameter just a workaround to match the actual heatmap size, which is determined by the number of deconvolution layers in the top-down network head?

What is the proper way to change the size of the heatmap while keeping the input size constant? Should I adjust the number of deconvolution layers (`num_deconv_layers`) in the head and set `heatmap_size` to the head output dimensions?

Thanks!
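For concreteness, here is a minimal sketch of the change I have in mind, assuming an MMPose 0.x-style top-down config with a ResNet backbone (output stride 32) and a 256×192 input; the field names are taken from the typical top-down configs and may differ between versions. With a /32 backbone, each deconv layer upsamples by 2, so the head output is `input / 32 * 2**num_deconv_layers`, and `heatmap_size` would be set to match it so the MSE loss sees targets of the same shape:

```python
# Sketch only (assumed MMPose 0.x top-down config layout).
# Default head has 3 deconv layers: 192/32 * 2**3 = 48, 256/32 * 2**3 = 64.
# Going to 4 deconv layers doubles the heatmap to 96 x 128 while the
# input size stays 192 x 256.

data_cfg = dict(
    image_size=[192, 256],    # [w, h], kept constant
    heatmap_size=[96, 128],   # [w, h], matched to the head output below
)

model = dict(
    type='TopDown',
    # ... backbone, neck, etc. unchanged ...
    keypoint_head=dict(
        type='TopdownHeatmapSimpleHead',
        in_channels=2048,
        out_channels=17,
        num_deconv_layers=4,                       # was 3; one extra x2 upsample
        num_deconv_filters=(256, 256, 256, 256),   # one entry per deconv layer
        num_deconv_kernels=(4, 4, 4, 4),
        loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True),
    ),
)
```

My understanding (please correct me if wrong) is that the head itself never reads `heatmap_size`; only the target generation pipeline does, so the two values have to be kept consistent by hand or the predicted and target heatmaps will have mismatched shapes.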