
[GPU][CUDA] Add support for linear tree with device=gpu, device=cuda #6555

Open
dragonbra opened this issue Jul 18, 2024 · 0 comments
dragonbra commented Jul 18, 2024

Summary

Add support for linear tree with device=gpu, device=cuda

Motivation

Linear tree is a useful feature for improving model performance. However, it can currently only be used with the CPU version. I think it would be great if we could add support for the GPU and CUDA versions.

Description

By reading the code, I found that gpu_tree_learner currently computes the gradients and hessians on the GPU and passes them back to the CPU to call the Train() function in serial_tree_learner. The linear_tree_learner's training process is built on serial_tree_learner's and uses the Eigen library, so this feature should be easy to implement if we leave GPU acceleration of fitting the linear model leaves out of scope (see the sketch below).
I have already tried this successfully with device=gpu and will submit a PR once I have time to organize the code.
I have not yet experimented with device=cuda; it would be great if anyone could contribute to that.
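
For illustration, here is a rough sketch of the shape such a learner could take. The class name, the header path, the constructor signature, and the CopyGradientsFromDevice() helper are hypothetical placeholders rather than LightGBM's actual API; the point is only that a GPU learner could hand the host-side gradients and hessians to the existing CPU linear-tree implementation.

#include "linear_tree_learner.h"  // assumed header for LinearTreeLearner

namespace LightGBM {

// Hypothetical learner that reuses the CPU linear-tree code path.
class GPULinearTreeLearner : public LinearTreeLearner {
 public:
  // Assumed constructor signature, mirroring the CPU learners.
  explicit GPULinearTreeLearner(const Config* config) : LinearTreeLearner(config) {}

  Tree* Train(const score_t* gradients, const score_t* hessians,
              bool is_first_tree) override {
    // Make sure the gradients/hessians computed on the GPU are readable
    // from host memory, as gpu_tree_learner already does today.
    CopyGradientsFromDevice(gradients, hessians);
    // Delegate to the existing Eigen-based CPU implementation that fits
    // the linear models at the leaves.
    return LinearTreeLearner::Train(gradients, hessians, is_first_tree);
  }

 private:
  // Placeholder: a real implementation would reuse the existing OpenCL/CUDA
  // copy paths to bring the device buffers back to the host.
  void CopyGradientsFromDevice(const score_t* /*gradients*/,
                               const score_t* /*hessians*/) {}
};

}  // namespace LightGBM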

References

Trees with linear models at leaves #3299

Tree* LinearTreeLearner::Train(const score_t* gradients, const score_t *hessians, bool is_first_tree) {

@dragonbra dragonbra changed the title [GPU][CUDA] Add support to linear tree with device=gpu, device=cuda [GPU][CUDA] Add support for linear tree with device=gpu, device=cuda Jul 18, 2024