Predicted values based on class lgb.Booster
```r
# S3 method for lgb.Booster
predict(
  object,
  data,
  num_iteration = NULL,
  rawscore = FALSE,
  predleaf = FALSE,
  predcontrib = FALSE,
  header = FALSE,
  reshape = FALSE,
  ...
)
```
| Argument | Description |
|---|---|
| object | Object of class `lgb.Booster` |
| data | a `matrix` object, a `dgCMatrix` object, or a character string giving the path of a file to predict from |
| num_iteration | number of iterations to predict with; `NULL` or <= 0 means use the best iteration |
| rawscore | whether the prediction should be returned in the form of the original untransformed sum of predictions from the boosting iterations' results. E.g., setting `rawscore = TRUE` for logistic regression would return log-odds instead of probabilities. |
| predleaf | whether to predict leaf indices instead. |
| predcontrib | whether to return per-feature contributions for each record. |
| header | only used for prediction from a text file. `TRUE` if the text file has a header |
| reshape | whether to reshape the vector of predictions to a matrix form when there are several prediction outputs per case. |
| ... | Additional named arguments passed to the underlying `predict()` method of the `lgb.Booster` object. |
For regression or binary classification, it returns a vector of length `nrows(data)`.

For multiclass classification, either a `num_class * nrows(data)` vector or a `(nrows(data), num_class)` dimension matrix is returned, depending on the `reshape` value.

When `predleaf = TRUE`, the output is a matrix object with the number of columns corresponding to the number of trees.
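The effect of `reshape` on multiclass output can be sketched as follows. This is a minimal sketch, assuming the lightgbm package is installed; the `iris` data and the parameter values are illustrative choices, not from the original page:

```r
library(lightgbm)

# Build a small multiclass model on iris (3 classes, labels must be 0-based).
data(iris)
X <- as.matrix(iris[, 1:4])
y <- as.integer(iris$Species) - 1L
dtrain <- lgb.Dataset(X, label = y)

model <- lgb.train(
    params = list(objective = "multiclass", num_class = 3L, metric = "multi_logloss")
  , data = dtrain
  , nrounds = 5L
)

# Without reshape: a flat vector of length num_class * nrows(data).
p_flat <- predict(model, X)

# With reshape: a (nrows(data), num_class) matrix, one column per class.
p_mat <- predict(model, X, reshape = TRUE)
```

Each row of `p_mat` holds the per-class probabilities for one observation, which is usually the more convenient form for downstream use.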
```r
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
data(agaricus.test, package = "lightgbm")
test <- agaricus.test
dtest <- lgb.Dataset.create.valid(dtrain, test$data, label = test$label)
params <- list(objective = "regression", metric = "l2")
valids <- list(test = dtest)
model <- lgb.train(
    params = params
  , data = dtrain
  , nrounds = 5L
  , valids = valids
  , min_data = 1L
  , learning_rate = 1.0
)
#> [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001301 seconds.
#> You can set `force_row_wise=true` to remove the overhead.
#> And if memory is not enough, you can set `force_col_wise=true`.
#> [LightGBM] [Info] Total Bins 232
#> [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
#> [LightGBM] [Info] Start training from score 0.482113
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [1]: test's l2:6.44165e-17
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [2]: test's l2:1.97215e-31
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [3]: test's l2:0
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
#> [4]: test's l2:0
#> [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
#> [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
#> [5]: test's l2:0
```
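The example above stops at training and never calls `predict`. A hedged continuation, reusing the same `model` and `test` objects and the parameters documented in the signature above (the variable names `preds`, `raw`, and `leaves` are illustrative):

```r
library(lightgbm)

# Prerequisite: `model` trained and `test` loaded as in the example above.
data(agaricus.train, package = "lightgbm")
train <- agaricus.train
dtrain <- lgb.Dataset(train$data, label = train$label)
data(agaricus.test, package = "lightgbm")
test <- agaricus.test
params <- list(objective = "regression", metric = "l2")
model <- lgb.train(
    params = params
  , data = dtrain
  , nrounds = 5L
  , min_data = 1L
  , learning_rate = 1.0
)

# Default: transformed predictions, one per row of the test matrix.
preds <- predict(model, test$data)

# rawscore = TRUE: the untransformed sum of the boosting iterations.
raw <- predict(model, test$data, rawscore = TRUE)

# predleaf = TRUE: leaf indices, one column per tree (5 trees here).
leaves <- predict(model, test$data, predleaf = TRUE)
```

Since the model was trained for 5 rounds, `leaves` has 5 columns, and `preds` has one entry per row of `test$data`.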