For binary classification problems, Precision-Recall AUC (as opposed to ROC AUC) is a good metric for imbalanced data. Currently, I have to use a custom eval function even though I use logloss as the objective function. As a result, both logloss and PR-AUC are used to stop training. (I use early_stopping_rounds=25 to stop the iterations.)
I would like to stop the iterations with just PR-AUC as the metric. Using a custom eval function also slows down LightGBM. Additionally, XGBoost has PR-AUC as a built-in metric (they call it aucpr).
I propose that PR-AUC be added as a built-in metric.
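To illustrate why PR-AUC is the more informative metric here, a minimal sketch on hypothetical imbalanced toy data (assumes scikit-learn and NumPy; the data and seed are made up for illustration):

```python
import numpy as np
from sklearn.metrics import auc, precision_recall_curve, roc_auc_score

rng = np.random.default_rng(0)
n_neg, n_pos = 990, 10  # heavily imbalanced: 1% positives
y_true = np.concatenate([np.zeros(n_neg), np.ones(n_pos)])
# A mediocre scorer: positives score only slightly higher on average.
scores = np.concatenate([rng.normal(0.0, 1.0, n_neg),
                         rng.normal(1.0, 1.0, n_pos)])

roc = roc_auc_score(y_true, scores)
p, r, _ = precision_recall_curve(y_true, scores)
pr = auc(r, p)  # PR-AUC: area under the precision-recall curve
print(f"ROC AUC = {roc:.3f}, PR AUC = {pr:.3f}")
# With rare positives, PR-AUC comes out far lower than ROC AUC,
# exposing the poor precision on the minority class.
```

The large number of true negatives inflates ROC AUC, while PR-AUC stays sensitive to precision on the minority class, which is why early stopping on PR-AUC is preferable for this kind of data.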
My workaround is as follows for the time being:
from scipy.special import expit  # sigmoid
from sklearn.metrics import auc, precision_recall_curve

def pr_auc(y_true, probas_pred):
    # Area under the precision-recall curve.
    p, r, _ = precision_recall_curve(y_true, probas_pred)
    return auc(r, p)

def f_pr_auc(probas_pred, train_data):
    # Custom eval for lgb.train: feval(preds, dataset) must return
    # (name, value, is_higher_better).
    probas_pred = expit(probas_pred)  # raw scores -> probabilities
    labels = train_data.get_label()
    return "pr_auc", pr_auc(labels, probas_pred), True

model = lgb.train(params, lgb_train,
                  num_boost_round=2000,
                  valid_sets=[lgb_valid],
                  feval=f_pr_auc,
                  early_stopping_rounds=25,
                  verbose_eval=50)