Backward compatibility with v.1.7.6 #9624
Comments
I met a similar problem when training a model with …
Some defaults have changed in the 2.0 version.
See the release note for the full list of changes. In general, the developers of XGBoost do not guarantee that different versions of XGBoost behave identically. (Making such a guarantee would prevent us from making necessary improvements.) Instead, we make the following guarantees: …
However, if you observe significant degradation of model accuracy or noticeably longer training time, please file a new GitHub issue.
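Two of the changed defaults are directly relevant to this thread: tree_method now defaults to 'hist', and base_score is estimated from the labels instead of being fixed at 0.5. Below is a minimal sketch of pinning both back explicitly; the specific values are assumptions drawn from the comments in this thread, not an official migration recipe:

```python
import numpy as np
import xgboost as xgb

# Toy data, just to make the sketch self-contained.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = rng.standard_normal(100)

params = {
    'objective': 'reg:squarederror',
    'tree_method': 'exact',  # 2.0 defaults to 'hist'
    'base_score': 0.5,       # 2.0 estimates this from the labels when unset
    'eta': 0.05,
}
dtrain = xgb.DMatrix(X, label=y)
bst = xgb.train(params, dtrain, num_boost_round=10)
```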
Thank you for the explanation. In my case 1.7.6 uses 'tree_method': 'exact', and this was the only source of the difference. I explicitly specify base_score, so that point is irrelevant in my case. I do not think I use learning-to-rank, as I am running a regression ('objective': 'reg:squarederror').
Ah, and BTW, for my problem 'exact' works better than 'hist', hands down.
Is base_score := F0 (as in, e.g., the Friedman paper)?
I am explicitly using 0 in my problem. However, other settings may work better in other applications.
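For 'reg:squarederror', Friedman's F0 is the constant that minimizes the loss, i.e. the mean of the labels, and base_score plays that role as the global starting prediction. Here is a minimal sketch of checking which value a 2.x booster actually used, assuming the documented save_config() JSON layout:

```python
import json
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = 2.0 * X[:, 0] + 1.0
dtrain = xgb.DMatrix(X, label=y)
bst = xgb.train({'objective': 'reg:squarederror'}, dtrain, num_boost_round=1)

# With base_score unset, 2.x estimates it from the labels; for squared
# error the estimate should be close to y.mean().
config = json.loads(bst.save_config())
print(config['learner']['learner_model_param']['base_score'], y.mean())
```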
Adding to this: setting base_score to 0.5 in xgboost 2.x resolved the differences I saw between 2.x and 1.7.
Hi,
When training the model using v.2.0.0, I'm getting substantially different results than with v.1.7.6. Could you please clarify why this may be happening even though my code remains untouched?
I'm using:
```python
import xgboost as xgb

paramIn = {
    'disable_default_eval_metric': False,
    'objective': 'reg:squarederror',
    'eval_metric': 'rmse',
    'max_depth': 3,
    'base_score': 0.,
    'max_leaves': 0,
    'min_child_weight': 1,
    'max_delta_step': 0,
    'subsample': 1,
    'colsample_bytree': 1,
    'lambda': 0,
    'alpha': 0,
    'eta': 0.05,
    'gamma': 0
}

# X is a TxN numpy array, y is a Tx1 numpy array
dtrain = xgb.DMatrix(X, feature_names=feature_names, label=y, nthread=-1)

evals_result = {}
bst = xgb.train(paramIn,
                dtrain,
                num_boost_round=500,
                early_stopping_rounds=1000,   # i.e. no early stopping
                obj=custom_obj,               # user-defined objective (not shown)
                custom_metric=custom_eval,    # user-defined metric (not shown)
                evals=[(dtrain, 'train')],
                evals_result=evals_result,
                verbose_eval=False)
```
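custom_obj and custom_eval are not included in the report. As a minimal sketch of what compatible callables could look like for this squared-error setup (the bodies here are assumptions; only the signatures follow the xgboost.train API):

```python
import numpy as np
import xgboost as xgb

def custom_obj(predt: np.ndarray, dtrain: xgb.DMatrix):
    """Hypothetical squared-error objective returning (gradient, hessian)."""
    y = dtrain.get_label()
    grad = predt - y             # derivative of 0.5 * (predt - y)^2
    hess = np.ones_like(predt)   # second derivative is constant
    return grad, hess

def custom_eval(predt: np.ndarray, dtrain: xgb.DMatrix):
    """Hypothetical RMSE metric returning (name, value)."""
    y = dtrain.get_label()
    return 'custom_rmse', float(np.sqrt(np.mean((predt - y) ** 2)))
```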