[RFC] Version 0.90 release candidate #4475
Merged
You can try the 0.90 release candidate by downloading the following:
(Don't worry about the file names here; they will be renamed.)
v0.90 (2019.05.18)
XGBoost Python package drops Python 2.x (#4379, #4381)
Python 2.x is reaching its end-of-life at the end of this year. Many scientific Python packages are now moving to drop Python 2.x.
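Since the package now assumes Python 3, downstream scripts can fail fast with a version guard. This is an illustrative sketch of such a guard, not code from XGBoost itself:

```python
import sys

# Fail fast on Python 2, mirroring the package's new minimum requirement.
# Illustrative sketch only; XGBoost's actual check may differ.
if sys.version_info < (3, 0):
    raise RuntimeError(
        "This package requires Python 3; Python 2 support has been dropped."
    )
```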
XGBoost4J-Spark now requires Spark 2.4.x (#4377)
Roadmap: better performance scaling for multi-core CPUs (#4310)
Performance scaling of the `hist` algorithm for multi-core CPUs has been under investigation (Call for contribution: improve multi-core CPU performance of 'hist', #3810). #4310 optimizes quantile sketches and other pre-processing tasks. Special thanks to @SmirnovEgorRu.
Roadmap: Harden distributed training (#4250)
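The quantile-sketch pre-processing mentioned above boils down to replacing raw feature values with histogram bin indices chosen at approximate quantiles. A tiny stdlib-only sketch of the idea (exact sort instead of a streaming sketch; names are illustrative, not XGBoost internals):

```python
def quantile_cuts(values, max_bins):
    """Approximate quantile-based bin boundaries: sort once, then take
    evenly spaced quantiles as cut points."""
    xs = sorted(values)
    n = len(xs)
    cuts = []
    for b in range(1, max_bins):
        q = xs[min(n - 1, (b * n) // max_bins)]
        if not cuts or q > cuts[-1]:
            cuts.append(q)  # skip duplicate cut points
    return cuts

def bin_index(x, cuts):
    """Map a raw feature value to its histogram bin."""
    for i, c in enumerate(cuts):
        if x < c:
            return i
    return len(cuts)
```

Real implementations use streaming quantile sketches so the data never has to be fully sorted in memory; the binning step afterwards is the same.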
New feature: Multi-class metric functions for GPUs (#4368)
Multi-class metrics `merror` and `mlogloss` are now available on GPUs and honor the `n_gpus` parameter. Special thanks to @trivialfis.
New feature: Scikit-learn-like random forest API (#4148, #4255, #4258)
New `XGBRFClassifier` and `XGBRFRegressor` API to train random forests. See the tutorial. Special thanks to @canonizer.
New feature: Use external memory in GPU predictor (#4284, #4396, #4438, #4457)
It is now possible to make predictions on GPU when the input is read from external memory. This is useful when you want to make predictions on a big dataset that does not fit into GPU memory. Special thanks to @rongou, @canonizer, @sriramch.
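Conceptually, external-memory prediction streams the data through the model in fixed-size chunks so the whole dataset never has to sit in (GPU) memory at once. A toy, library-free sketch of that batching pattern (`predict_fn` is a stand-in for any per-batch predictor, not XGBoost's API):

```python
def predict_in_batches(predict_fn, rows, batch_size=1024):
    """Stream rows through a predictor in fixed-size batches, so only
    one batch is resident in memory at a time."""
    out, batch = [], []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            out.extend(predict_fn(batch))
            batch = []
    if batch:  # flush the final partial batch
        out.extend(predict_fn(batch))
    return out
```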
Coming soon: GPU training (`gpu_hist`) with external memory
New feature: XGBoost can now handle comments in LIBSVM files (#4430)
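For reference, the LIBSVM text format is `label index:value index:value …`, and a `#` starts a comment that is now ignored. A minimal stand-alone parser sketch (illustrative only, not XGBoost's actual reader):

```python
def parse_libsvm_line(line):
    """Parse one LIBSVM line, ignoring a trailing '#' comment.

    Returns (label, {feature_index: value}) or None for blank/comment lines.
    """
    body = line.split('#', 1)[0].strip()  # drop everything after '#'
    if not body:
        return None
    label, *pairs = body.split()
    feats = {}
    for pair in pairs:
        idx, val = pair.split(':')
        feats[int(idx)] = float(val)
    return float(label), feats
```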
New feature: Embed XGBoost in your C/C++ applications using CMake (#4323, #4333, #4453)
It is now easier than ever to embed XGBoost in your C/C++ applications. In your CMakeLists.txt, add `xgboost::xgboost` as a linked library. XGBoost C API documentation is available. Special thanks to @trivialfis.
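A minimal CMakeLists.txt along these lines (project, target, and source-file names here are placeholders) might look like:

```cmake
cmake_minimum_required(VERSION 3.13)
project(api_demo LANGUAGES C)

# Locate an installed XGBoost; this provides the imported target xgboost::xgboost.
find_package(xgboost REQUIRED)

add_executable(api-demo c-api-demo.c)
target_link_libraries(api-demo xgboost::xgboost)
```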
Performance improvements
- Optimisations for `gpu_hist` (#4248, #4283)
- Combine thread launches into a single launch per tree for `gpu_hist` (#4343)
Bug-fixes
- Fix node reuse in `hist` (#4404)
- `gpu_hist`: log InitDataOnce() only when it is actually called (#4206)
- [xgboost4j-spark] Allow setting the `maxLeaves` parameter (#4226)
- [jvm-packages] Automatically set `maximize_evaluation_metrics` if not explicitly given in XGBoost4J-Spark (#4446)
API changes
- Deprecate `reg:linear` in favor of `reg:squarederror` (#4267, #4427)
Maintenance: Refactor C++ code for legibility and maintainability
- Use Monitor in quantile `hist` (#4273)
Maintenance: testing, continuous integration, build system
- [r-package] Cut CI-time dependency on `craigcitro/r-travis`, since it's deprecated (#4353, fixes #4348)
- Add files from local R build to `.gitignore` (#4346)
- Remove remaining `silent` and `debug_verbose` in Python tests (#4299)
Usability Improvements, Documentation
- Fix docs for `num_parallel_tree` (#4221)
- Fix documentation for the `colsample_by*` parameters (#4340)
Acknowledgement
Contributors: Nan Zhu (@CodingCat), Adam Pocock (@Craigacp), Daniel Hen (@Daniel8hen), Jiaxiang Li (@JiaxiangBU), Rory Mitchell (@RAMitchell), Egor Smirnov (@SmirnovEgorRu), Andy Adinets (@canonizer), Jonas (@elcombato), Harry Braviner (@harrybraviner), Philip Hyunsu Cho (@hcho3), Tong He (@hetong007), James Lamb (@jameslamb), Jean-Francois Zinque (@jeffzi), Yang Yang (@jokerkeny), Mayank Suman (@mayanksuman), jess (@monkeywithacupcake), Hajime Morrita (@omo), Ravi Kalia (@project-delphi), @ras44, Rong Ou (@rongou), Shaochen Shi (@shishaochen), Xu Xiao (@sperlingxx), @sriramch, Jiaming Yuan (@trivialfis), Christopher Suchanek (@wsuchy), Bozhao (@yubozhao)
Reviewers: Nan Zhu (@CodingCat), Adam Pocock (@Craigacp), Daniel Hen (@Daniel8hen), Jiaxiang Li (@JiaxiangBU), Laurae (@Laurae2), Rory Mitchell (@RAMitchell), Egor Smirnov (@SmirnovEgorRu), @alois-bissuel, Andy Adinets (@canonizer), Chen Qin (@chenqin), Harry Braviner (@harrybraviner), Philip Hyunsu Cho (@hcho3), Tong He (@hetong007), @jakirkham, James Lamb (@jameslamb), Julien Schueller (@jschueller), Mayank Suman (@mayanksuman), Hajime Morrita (@omo), Rong Ou (@rongou), Sara Robinson (@sararob), Shaochen Shi (@shishaochen), Xu Xiao (@sperlingxx), @sriramch, Sean Owen (@srowen), Sergei Lebedev (@superbobry), Yuan (Terry) Tang (@terrytangyuan), Theodore Vasiloudis (@thvasilo), Matthew Tovbin (@tovbinm), Jiaming Yuan (@trivialfis), Xin Yin (@xydrolase)
@dmlc/xgboost-committer