Could you advise whether it's feasible to cache the base estimators' predictions from the initial fit and reuse them in subsequent Hyperparameter Optimization (HPO) trials? The idea is to tune only the meta-estimator's parameters, on the assumption that the base estimators are already optimized.

Currently, the HPO pipeline takes around 5 minutes per trial with 4-fold cross-validation, which is slower than desired. I've fixed the HPO seed, so the splits and the base estimators' predictions are identical across trials; only the meta-estimator's parameters vary.

My HPO pipeline is based on Optuna. Any insights or suggestions would be greatly appreciated. Thank you.
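One way to sketch this, assuming a scikit-learn-style stacking setup: compute the out-of-fold predictions of each (already tuned) base estimator once with `cross_val_predict`, then have each HPO trial fit only the meta-estimator on those cached meta-features. All estimator choices and the `trial_score` helper below are illustrative, not the stacking library's API; inside an Optuna objective, the parameter would come from `trial.suggest_float(...)` instead of the hand-written grid.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_predict, cross_val_score

# Toy data; substitute your own X, y.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Fixed seed -> identical folds across trials, as in the question.
cv = KFold(n_splits=4, shuffle=True, random_state=0)

# 1) Expensive step, done ONCE: out-of-fold predictions of each base
#    estimator become the meta-features (one column per estimator).
base_estimators = [
    RandomForestClassifier(n_estimators=100, random_state=0),
    # add further (already optimized) base estimators here
]
meta_features = np.column_stack([
    cross_val_predict(est, X, y, cv=cv, method="predict_proba")[:, 1]
    for est in base_estimators
])

# 2) Cheap step, repeated per HPO trial: only the meta-estimator is
#    refit on the cached meta-features.
def trial_score(C):
    meta = LogisticRegression(C=C)
    return cross_val_score(meta, meta_features, y, cv=cv).mean()

scores = {C: trial_score(C) for C in (0.01, 0.1, 1.0, 10.0)}
```

Note that reusing the same folds to both build the meta-features and score the meta-estimator introduces a mild optimistic bias; a nested or held-out split is cleaner if the scores feed model selection directly.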