Fix regression of temporary_allocator with non-CUDA back ends #1103
Conversation
A recent cleanup of functions that have different code for host and device accidentally broke temporary_allocator for non-CUDA back ends. An #if condition that protects a piece of code that only works with the CUDA back end was incorrectly removed. This fix adds that condition back in.
Sounds reasonable, LGTM. I'll land this sometime tomorrow 👍
Now that I've seen this change, it seems a bit unfortunate that we are using the word "device" to mean different things in different contexts (in the context of Thrust types, we use it to mean "in the domain of the device backend", while in the context of the macros, we use it to mean "on the GPU" instead). It's probably not something we should be concerned about right now, but we should think about this. @hwinkler can you please confirm that this patch solves the issue?
Shelve 28262454.
@griwes the patch checks out here. It compiles our codebase for CPP and OMP. Update: and CUDA. Update 2: Sorry, not working after all.
wait... cancel that approval... sorry, I spoke too quickly.
Sorry for the scare. All is well; the patch does check out fine, and testing our release build works great. The cause of the scare: I ran the tests under our debug switches, and certain unit tests have always failed under that config.
Thanks for confirming, @hwinkler!
Fix issue #1101