Cross validation logic used by LightGBM
Usage

lgb.cv(
  params = list(),
  data,
  nrounds = 10L,
  nfold = 3L,
  label = NULL,
  weight = NULL,
  obj = NULL,
  eval = NULL,
  verbose = 1L,
  record = TRUE,
  eval_freq = 1L,
  showsd = TRUE,
  stratified = TRUE,
  folds = NULL,
  init_model = NULL,
  colnames = NULL,
  categorical_feature = NULL,
  early_stopping_rounds = NULL,
  callbacks = list(),
  reset_data = FALSE,
  ...
)
Arguments

| Argument | Description |
|---|---|
| params | List of parameters |
| data | a `lgb.Dataset` object, used for training |
| nrounds | number of training rounds |
| nfold | the original dataset is randomly partitioned into `nfold` equal-size subsamples |
| label | Vector of labels, used if `data` is not an `lgb.Dataset` |
| weight | vector of observation weights. If not NULL, it will be set on the dataset |
| obj | objective function, can be character or custom objective function. Examples include `"regression"`, `"regression_l1"`, `"huber"`, `"binary"`, `"lambdarank"`, `"multiclass"` |
| eval | evaluation function, can be a (list of) character name(s) of built-in metrics or (a list of) custom evaluation function(s); see the first sketch after this table |
| verbose | verbosity for output; if <= 0, printing of evaluation during training is also disabled |
| record | Boolean, TRUE will record iteration messages to `booster$record_evals` |
| eval_freq | evaluation output frequency; only has an effect when `verbose > 0` |
| showsd | boolean, whether to show the standard deviation of the cross-validation metric |
| stratified | a boolean indicating whether sampling of folds should be stratified by the values of outcome labels |
| folds | list of pre-defined CV folds; each element must be a vector of the test fold's indices. When folds are supplied, the `nfold` and `stratified` parameters are ignored; see the second sketch after this table |
| init_model | path of model file or `lgb.Booster` object; will continue training from this model |
| colnames | feature names; if not NULL, will be used to overwrite the names in the dataset |
| categorical_feature | categorical features. This can either be a character vector of feature names or an integer vector with the indices of the features (e.g. `c(1L, 10L)` to say "the first and tenth columns") |
| early_stopping_rounds | int. Activates early stopping. Requires at least one validation dataset and one metric. If there is more than one, all of them except the training data are checked. Returns the model trained for (best_iter + early_stopping_rounds) rounds. If early stopping occurs, the model will have a `best_iter` field; see the third sketch after this table |
| callbacks | List of callback functions that are applied at each iteration |
| reset_data | Boolean; setting it to TRUE (not the default value) will transform the booster model into a predictor model, which frees up memory and the original datasets |
| ... | other parameters, see Parameters.rst for more information. A few key parameters: `boosting` (boosting type: "gbdt", "rf", "dart" or "goss"), `num_leaves` (maximum number of leaves in one tree), `max_depth` (limit on the max depth of a tree, used to deal with over-fitting when #data is small), `num_threads` (number of threads for LightGBM; for best speed, set to the number of real CPU cores) |
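The `eval` argument also accepts user-written metrics. Below is a minimal sketch of a custom evaluation function, assuming the `function(preds, dtrain)` signature returning `list(name, value, higher_better)` that the R package uses for custom metrics, and `get_field()` (named `getinfo()` in older releases) for reading the labels; `rmse_eval` is a hypothetical name.

library(lightgbm)

# Hypothetical custom metric: root mean squared error.
# Assumes custom metrics take (preds, dtrain) and return
# list(name = ..., value = ..., higher_better = ...).
rmse_eval <- function(preds, dtrain) {
  labels <- get_field(dtrain, "label")  # getinfo() in older releases
  list(
    name = "rmse_custom"
    , value = sqrt(mean((preds - labels)^2))
    , higher_better = FALSE
  )
}

data(agaricus.train, package = "lightgbm")
dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)
cv_custom <- lgb.cv(
  params = list(objective = "regression")
  , data = dtrain
  , nrounds = 5L
  , nfold = 3L
  , eval = rmse_eval
)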
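Second, a sketch of pre-defined folds. Each list element holds the row indices held out as the test fold in one split, and `nfold` and `stratified` are ignored when `folds` is given, as noted in the table above. The fold construction here is ordinary base R, not part of the lgb.cv API.

library(lightgbm)

data(agaricus.train, package = "lightgbm")
dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)

# Hand-built 3-fold split: shuffle fold labels, then group row
# indices by label so each list element is one test fold.
n <- nrow(agaricus.train$data)
fold_id <- sample(rep(1:3, length.out = n))
my_folds <- split(seq_len(n), fold_id)

cv_folds <- lgb.cv(
  params = list(objective = "regression", metric = "l2")
  , data = dtrain
  , nrounds = 5L
  , folds = my_folds  # nfold and stratified are ignored here
)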
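Third, a sketch of early stopping: training halts once the validation metric has not improved for `early_stopping_rounds` consecutive rounds. Reading the best iteration back through a `best_iter` field on the returned object follows the description in the table above, but the exact access path is an assumption here.

library(lightgbm)

data(agaricus.train, package = "lightgbm")
dtrain <- lgb.Dataset(agaricus.train$data, label = agaricus.train$label)

cv_es <- lgb.cv(
  params = list(objective = "regression", metric = "l2")
  , data = dtrain
  , nrounds = 100L
  , nfold = 3L
  , early_stopping_rounds = 5L  # stop after 5 stagnant rounds
)
cv_es$best_iter  # best iteration (field name assumed from the table above)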
Value

a trained model `lgb.CVBooster`.
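Because `record = TRUE` by default, per-iteration results stay on the returned object. The sketch below pulls out the mean validation metric and its standard deviation for a fitted result such as the `model` object from the Examples below; the nested `record_evals` layout (`valid` -> metric -> `eval` / `eval_err`) is an assumption, not a documented contract.

# Mean validation l2 per iteration and its standard deviation
# (record_evals layout assumed)
means <- unlist(model$record_evals[["valid"]][["l2"]][["eval"]])
sds <- unlist(model$record_evals[["valid"]][["l2"]][["eval_err"]])
which.min(means)  # iteration with the lowest mean l2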
Examples

# \dontrun{
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
params <- list(objective = "regression", metric = "l2")
model <- lgb.cv(
  params = params
  , data = dtrain
  , nrounds = 5L
  , nfold = 3L
  , min_data = 1L
  , learning_rate = 1.0
)
#> [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001553 seconds.
#> You can set `force_row_wise=true` to remove the overhead.
#> And if memory is not enough, you can set `force_col_wise=true`.
#> [LightGBM] [Info] Total Bins 232
#> [LightGBM] [Info] Number of data points in the train set: 4342, number of used features: 116
#> [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001175 seconds.
#> You can set `force_row_wise=true` to remove the overhead.
#> And if memory is not enough, you can set `force_col_wise=true`.
#> [LightGBM] [Info] Total Bins 232
#> [LightGBM] [Info] Number of data points in the train set: 4342, number of used features: 116
#> [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001049 seconds.
#> You can set `force_row_wise=true` to remove the overhead.
#> And if memory is not enough, you can set `force_col_wise=true`.
#> [LightGBM] [Info] Total Bins 232
#> [LightGBM] [Info] Number of data points in the train set: 4342, number of used features: 116
#> [LightGBM] [Info] Start training from score 0.477199
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Info] Start training from score 0.488715
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Info] Start training from score 0.480424
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [1]: valid's l2:0.000460617+0.000651411
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [2]: valid's l2:0.000460617+0.000651411
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [3]: valid's l2:0.000460617+0.000651411
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [4]: valid's l2:0.000460617+0.000651411
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [5]: valid's l2:0.000460617+0.000651411
# }
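Beyond the aggregated log above, the individual fold models can be inspected as well. This sketch assumes the returned `lgb.CVBooster` keeps them in a `boosters` list whose elements each wrap a regular `lgb.Booster` in a `booster` field; that layout is an assumption, not documented behavior.

# Per-fold models (boosters field and its layout assumed)
length(model$boosters)
fold1 <- model$boosters[[1L]]$booster
preds <- predict(fold1, agaricus.train$data)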