Hi,
I am trying to build on your paper and code to include learnable hyperparameters in the interpolation matrix W. However, this results in a GPyTorch error: "Trying to backward through the graph a second time". I suspect this is a caching issue: the computation graph for the W matrix is freed, since W was not intended to depend on learnable hyperparameters. Could you give me any advice on how to fix this?
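A minimal sketch of what I believe is happening (plain PyTorch only, not your actual classes; `theta` and `W_cached` are stand-ins I made up for a learnable hyperparameter and the cached interpolation matrix):

```python
import torch

# Hypothetical minimal reproduction: a tensor W_cached built once from a
# learnable parameter and then reused, so its computation graph is shared
# across training steps.
theta = torch.tensor([1.0], requires_grad=True)
W_cached = theta * torch.eye(3)  # the graph from theta is baked in here

err = None
for step in range(2):
    loss = (W_cached @ torch.ones(3)).sum()
    try:
        loss.backward()  # frees the graph through W_cached on step 0
    except RuntimeError as e:
        err = str(e)  # step 1 hits the already-freed graph
print(err)  # mentions "backward through the graph a second time"

# Rebuilding W inside the loop gives each step a fresh graph:
for step in range(2):
    W = theta * torch.eye(3)
    loss = (W @ torch.ones(3)).sum()
    loss.backward()  # no error; gradients accumulate into theta.grad
```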
In general, that's going to break the online nature of the algorithm, which is most likely why it's throwing errors. I suppose one could use a meta-learning-like strategy if you're not going to follow our approach in the paper of a somewhat stale head; those are implemented as stems in our model classes here.
I understand. I was attempting to extend your work to non-stationary fields by warping the inputs with learnable parameters. This leads to a changing warp function, and hence a time-varying matrix W, since W is computed by interpolating from the grid to the warped inputs. If I understand correctly, this is not going to work, right?
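Concretely, this is the kind of setup I had in mind (a toy sketch, not your code: a made-up 1-D power warp `log_a` and hat-function interpolation weights), where W is rebuilt from the warped inputs on every step so each backward pass sees a fresh graph:

```python
import math
import torch

grid = torch.linspace(0.0, 1.0, 11)        # fixed 1-D inducing grid
x = torch.rand(5)                           # observed inputs in [0, 1]
log_a = torch.zeros(1, requires_grad=True)  # learnable warp parameter

def interp_weights(z, grid):
    # dense linear-interpolation ("hat function") weights from grid to z
    d = (z.unsqueeze(1) - grid.unsqueeze(0)).abs()
    h = grid[1] - grid[0]
    return torch.clamp(1 - d / h, min=0.0)

opt = torch.optim.Adam([log_a], lr=0.1)
for step in range(3):
    z = x.pow(log_a.exp()).clamp(0, 1)  # warp x -> z, differentiable in log_a
    W = interp_weights(z, grid)         # time-varying W: recomputed each step
    f_grid = torch.sin(2 * math.pi * grid)
    pred = W @ f_grid                   # interpolate grid values to inputs
    loss = (pred - torch.sin(2 * math.pi * x)).pow(2).mean()
    opt.zero_grad()
    loss.backward()                     # works every step: no stale graph
    opt.step()
```

Rebuilding W per step avoids the "backward a second time" error, but of course loses the cached-W efficiency that the online algorithm relies on.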