[Lang] Cuda caching allocator for ndarray 1/n #3581
Conversation
Thanks! I have a few nits
Co-authored-by: Ye Kuang <k-ye@users.noreply.github.com>
LGTM!
Nice work!
Related issue = #3634
This is the first step towards ndarray memory recycling. We add a simple caching mechanism to reuse CUDA buffers of the same size.
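
To illustrate the idea (not the code in this PR), here is a minimal sketch of a size-keyed caching allocator: released buffers are parked in a per-size free list, and a later allocation of the same size reuses a cached buffer instead of calling `cudaMalloc` again. The class and method names (`CudaCachingAllocator`, `allocate`, `release`) are hypothetical; only `cudaMalloc`/`cudaFree` are real CUDA runtime calls.

```cpp
// Sketch of a size-keyed caching allocator for CUDA device buffers.
// Names are illustrative and do not reflect the actual Taichi implementation.
#include <cuda_runtime.h>
#include <cstddef>
#include <unordered_map>
#include <vector>

class CudaCachingAllocator {
 public:
  // Reuse a cached buffer of exactly `size` bytes if one is available,
  // otherwise fall back to cudaMalloc.
  void *allocate(std::size_t size) {
    auto it = free_buffers_.find(size);
    if (it != free_buffers_.end() && !it->second.empty()) {
      void *ptr = it->second.back();
      it->second.pop_back();
      return ptr;
    }
    void *ptr = nullptr;
    cudaMalloc(&ptr, size);
    return ptr;
  }

  // Instead of freeing immediately, keep the buffer around for a future
  // allocation of the same size.
  void release(std::size_t size, void *ptr) {
    free_buffers_[size].push_back(ptr);
  }

  // Return all cached buffers to the driver when the allocator goes away.
  ~CudaCachingAllocator() {
    for (auto &[size, buffers] : free_buffers_) {
      for (void *ptr : buffers) {
        cudaFree(ptr);
      }
    }
  }

 private:
  std::unordered_map<std::size_t, std::vector<void *>> free_buffers_;
};
```

The key design point is that caching is exact-size only: a buffer released for an ndarray of N bytes is handed back only to a later request for exactly N bytes, which keeps the bookkeeping trivial for this first step.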