GPU RAM behavior information:

xgboost's GPU RAM requirement is unreliable to sample, because there is an initialization peak which is difficult to catch: what you see in nvidia-smi is not what xgboost actually needs in practice.

LightGBM's GPU RAM usage never grows after the initial allocation: what you see in nvidia-smi is really what you need.

Therefore, xgboost requires trial-and-error sampling (using different numbers of parallel workers) until it crashes, while LightGBM is easily predictable from a single sample.
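One way to observe this in practice is to poll nvidia-smi in a background thread while a model trains. The sketch below is an illustration, not part of this benchmark: with a coarse polling interval it can easily miss xgboost's short initialization peak, which is exactly why sampling it is unreliable.

```python
import subprocess
import threading
import time

def sample_gpu_memory(stop_event, samples, interval=0.1):
    """Poll nvidia-smi for used GPU memory (MiB) until stop_event is set."""
    while not stop_event.is_set():
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.used",
             "--format=csv,noheader,nounits"]
        )
        # One line per GPU; keep the first GPU only.
        samples.append(int(out.decode().splitlines()[0]))
        time.sleep(interval)

stop = threading.Event()
samples = []
poller = threading.Thread(target=sample_gpu_memory, args=(stop, samples))
poller.start()

# ... run xgboost or LightGBM GPU training here ...

stop.set()
poller.join()
# A short init peak can fall entirely between two polls and go unrecorded.
print(f"peak sampled GPU memory: {max(samples, default=0)} MiB")
```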
Notes:
- The GPU RAM requirements also depend on the hyperparameters you use for your model.
- For reference, the hyperparameters are the following: depth = 6, leaves = 63, bins = 255 (see the sketch after this list for how they map to each library's parameters).
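For concreteness, here is a minimal sketch of how those hyperparameters map onto the two libraries' Python APIs. The toy data, objective, and round count are assumptions for illustration; only the three hyperparameter values come from this benchmark.

```python
import numpy as np
import xgboost as xgb
import lightgbm as lgb

# Toy stand-in for the Airline dataset (assumption, for illustration only).
X = np.random.rand(100_000, 10)
y = np.random.randint(0, 2, size=100_000)

# xgboost on GPU: gpu_hist is the GPU histogram tree method.
xgb_params = {
    "tree_method": "gpu_hist",
    "max_depth": 6,     # depth = 6
    "max_leaves": 63,   # leaves = 63 (honored by the hist-based methods)
    "max_bin": 255,     # bins = 255
    "objective": "binary:logistic",
}
xgb.train(xgb_params, xgb.DMatrix(X, label=y), num_boost_round=100)

# LightGBM on GPU: device = "gpu" selects the OpenCL backend.
lgb_params = {
    "device": "gpu",
    "max_depth": 6,     # depth = 6
    "num_leaves": 63,   # leaves = 63
    "max_bin": 255,     # bins = 255
    "objective": "binary",
}
lgb.train(lgb_params, lgb.Dataset(X, label=y), num_boost_round=100)
```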
Table for GPU RAM requirements:

| Dataset | Observations | xgboost | LightGBM |
| --- | --- | --- | --- |
| Airline | 100,000 | "nearly" 1 GB (fits 4 on 4GB) | about 67 MB |
| Airline | 1,000,000 | "nearly" 1 GB (fits only 3 on 4GB) | about 93 MB |
| Airline | 10,000,000 | "nearly" 1.1 GB (fits only 2 on 4GB) | about 333 MB |
To be updated with more datasets later.
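Since LightGBM's usage is stable, the number of parallel workers that fit on a card can be computed directly from a single measurement. A small sketch, where the 4GB card size and the safety margin are assumptions:

```python
def max_parallel_workers(per_model_mib, gpu_total_mib=4096, reserve_mib=256):
    """How many identical GPU training jobs fit at once, keeping some headroom."""
    return (gpu_total_mib - reserve_mib) // per_model_mib

# LightGBM on Airline 10M needs about 333 MB, so on a 4GB card:
print(max_parallel_workers(333))  # -> 11
# xgboost has no stable single figure: its init peak forces trial and error.
```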
Edit 05/31/2019:
- Airline 1M at 4 parallel workers sometimes crashes on xgboost with a 4GB GPU. Reduced to 3 workers.
- Airline 10M at 3 parallel workers sometimes crashes on xgboost with a 4GB GPU. Reduced to 2 workers.