We're having quite a few problems with optimization in float32. In small batches, these go away if we .astype(np.float64) our data before calling glum.GeneralizedLinearRegressor.fit. This also makes the algorithm much faster for some reason. However, we cannot afford the float32 -> float64 conversion on the entire dataset due to memory constraints.
Is there an option to do the optimization in glum (i.e., probably, coef and the current hessian estimate) in float64 even if the data itself is float32?
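As a stopgap on the user side (not glum's internals), one can keep the raw data in float32 and cast only one block at a time to float64 when accumulating quantities like the Gram/Hessian matrix, so peak memory stays close to the float32 footprint. A minimal NumPy sketch of that idea (the function name and block size are illustrative, not glum API):

```python
import numpy as np

rng = np.random.default_rng(0)
X32 = rng.standard_normal((100_000, 5)).astype(np.float32)

def gram_float64(X, block=10_000):
    """Accumulate X.T @ X in float64, converting one block at a time.

    Only `block` rows are ever held in float64, so peak extra memory is
    a small fraction of a full .astype(np.float64) copy of X.
    """
    p = X.shape[1]
    G = np.zeros((p, p), dtype=np.float64)
    for start in range(0, X.shape[0], block):
        Xb = X[start:start + block].astype(np.float64)
        G += Xb.T @ Xb
    return G

# Block-wise float64 accumulation vs. an all-float32 product:
G64 = gram_float64(X32)
G32 = (X32.T @ X32).astype(np.float64)
```

Comparing both against a fully-float64 reference shows the block-wise version recovers essentially full double precision, while the all-float32 product carries the accumulated rounding error that likely underlies the optimization problems described above.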