GPU Support via OpenCL #1377
Conversation
That is interesting, thanks for the neat proof of concept. Right now I don't have the bandwidth to look at this in depth, unfortunately — if you look at the repo, we're working on version 0.16, which is only three years late or so. :) So this is brief and off the cuff for that reason.

I'll ask some questions; that doesn't mean I'm making requirements or demands, they're just things to think about.

Why not have a separate owned type for arrays that are on a non-CPU device? Users probably don't want a situation where they can't tell whether adding two arrays together is going to cause a crash or not.

How do array views work in this model? Most interesting functionality in ndarray happens through array views.

I do agree — if you think so — that too much type information is not good either; it just gets in the way. The hard part is finding a balance.
```rust
let typename = match crate::opencl::rust_type_to_c_name::<A>() {
    Some(x) => x,
    None => panic!("The Rust type {} is not supported by the \
                    OpenCL backend", std::any::type_name::<A>()),
};
```
Enforce at type level?
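One way to enforce this at the type level, as a rough sketch: a marker trait restricting the element type to those with a known OpenCL C equivalent, so unsupported types become compile errors instead of runtime panics. `ClType` and the impls below are illustrative names, not part of ndarray or this PR.

```rust
/// Hypothetical marker trait: types with a known OpenCL C equivalent.
pub trait ClType {
    /// The matching OpenCL C type name, e.g. "float" for f32.
    const C_NAME: &'static str;
}

impl ClType for f32 {
    const C_NAME: &'static str = "float";
}

impl ClType for f64 {
    const C_NAME: &'static str = "double";
}

impl ClType for i32 {
    const C_NAME: &'static str = "int";
}

// A kernel-building function can then bound its element type by `ClType`;
// calling it with an unsupported type no longer compiles, so the
// `panic!` branch disappears entirely.
fn c_name_of<A: ClType>() -> &'static str {
    A::C_NAME
}

fn main() {
    assert_eq!(c_name_of::<f32>(), "float");
    assert_eq!(c_name_of::<f64>(), "double");
    assert_eq!(c_name_of::<i32>(), "int");
}
```

(The trait could additionally be sealed so downstream crates can't add impls for types the backend can't actually handle.)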
```rust
unsafe {
    let elements = self.len();
    let self_ptr = self.as_ptr() as *mut std::ffi::c_void;
    let other_ptr = rhs.as_ptr() as *mut std::ffi::c_void;
```
I'm assuming this operation can be invalid here because they are only known to be on the same device by a debug assertion, not a hard assertion?
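To illustrate the concern: a `debug_assert!` vanishes in release builds, so the raw-pointer operation below it would be reachable with mismatched devices. A minimal sketch, using an illustrative `Device` enum and array stand-in (not the PR's actual types), of why an unconditional check is needed before the unsafe call:

```rust
/// Illustrative device tag, not the PR's actual type.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Device {
    Host,
    Gpu(usize), // device index
}

/// Stand-in for an array that records where its buffer lives.
struct DeviceArray {
    device: Device,
    data: Vec<f32>,
}

fn add_assign(lhs: &mut DeviceArray, rhs: &DeviceArray) {
    // A debug_assert! here would be compiled out in release builds,
    // leaving the "kernel launch" below undefined when the operands
    // live on different devices. A hard assertion holds in all builds.
    assert_eq!(
        lhs.device, rhs.device,
        "binary ops require both operands on the same device"
    );
    for (a, b) in lhs.data.iter_mut().zip(&rhs.data) {
        *a += b; // stand-in for the real OpenCL kernel launch
    }
}

fn main() {
    let mut a = DeviceArray { device: Device::Host, data: vec![1.0, 2.0] };
    let b = DeviceArray { device: Device::Host, data: vec![3.0, 4.0] };
    add_assign(&mut a, &b);
    assert_eq!(a.data, vec![4.0, 6.0]);
}
```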
```rust
let dim = self.dim;
let strides = self.strides;
let data = self.data.move_to_device(device)?;
let ptr = std::ptr::NonNull::new(data.as_ptr() as *mut A).unwrap();
```
The new ptr is invalid if move is Host to Host and the ptr before move was not to the start of the allocation. This is probably just a feature not yet implemented here.
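A small self-contained sketch of the failure mode (all names illustrative): if the array's pointer was offset into its allocation — e.g. for a sliced view — a correct move must re-apply that offset to the new allocation, whereas rebuilding the pointer from `data.as_ptr()` silently resets it to the start of the buffer.

```rust
fn main() {
    let data: Vec<f32> = vec![0.0, 1.0, 2.0, 3.0];
    let base = data.as_ptr();

    // Simulate a view whose data pointer is offset into the allocation.
    let view_ptr = unsafe { base.add(2) };
    let offset = unsafe { view_ptr.offset_from(base) } as usize;

    // A host-to-host "move" (modelled here as a copy of the allocation)
    // must re-apply the offset to stay pointed at the same element...
    let moved = data.clone();
    let new_ptr = unsafe { moved.as_ptr().add(offset) };
    assert_eq!(unsafe { *new_ptr }, 2.0);

    // ...whereas `NonNull::new(moved.as_ptr() as *mut f32)` would point
    // at element 0.0, silently shifting the view's contents.
}
```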
Thanks for the review @bluss. I didn't check the validity of everything particularly rigorously, so I'm sure there are a few potential issues.

I've had a bit of a think, and I reckon it'd be possible to kill a few birds with one (admittedly quite large) stone. I've read through most of the open issues, and it seems like a lot of them relate to the maintainability of the library (I'd be keen to help out with this if you're still looking for someone to maintain things) and to cleaning up or refactoring the code. If there were a backend trait, the backends could share reasonably generic code that uses templates, cutting down on the heavy macro expansions and making the code easier to maintain. The Array and View types would need to take the backend as a generic type parameter, but that should allow any backend to have full View functionality, since it'd all be handled by the backend provider. Obviously, things like custom mapping functions and hand-written iterators won't work on anything other than the CPU, since the data isn't available in host memory, but everything else should work fine. That has the added benefit of allowing anyone to write their own backend for ndarray.

Anyway, maybe I'm getting a bit ahead of myself, since that'd be a fairly substantial refactor of the codebase. If that's something you think would be beneficial, then I'd be happy to start looking into it. As I mentioned earlier, I'd also be happy to help maintain the project if you need another maintainer :)

Thanks again for the review!
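To make the backend-trait idea concrete, here's a minimal sketch under stated assumptions: a `Backend` trait owning the storage type and the primitive operations, with the array type generic over it. All names (`Backend`, `Cpu`, `BArray`) are illustrative, not ndarray's actual design.

```rust
/// Hypothetical trait: primitive operations a storage backend provides.
trait Backend {
    type Storage;

    fn from_vec(data: Vec<f32>) -> Self::Storage;
    fn to_vec(storage: &Self::Storage) -> Vec<f32>;
    /// Element-wise addition, implemented however the device requires
    /// (a plain loop on CPU, a kernel launch on GPU, ...).
    fn add(a: &Self::Storage, b: &Self::Storage) -> Self::Storage;
}

/// The plain in-host-memory backend.
struct Cpu;

impl Backend for Cpu {
    type Storage = Vec<f32>;

    fn from_vec(data: Vec<f32>) -> Vec<f32> {
        data
    }
    fn to_vec(storage: &Vec<f32>) -> Vec<f32> {
        storage.clone()
    }
    fn add(a: &Vec<f32>, b: &Vec<f32>) -> Vec<f32> {
        a.iter().zip(b).map(|(x, y)| x + y).collect()
    }
}

/// An array generic over its backend; an OpenCL backend would slot in
/// by implementing `Backend` with device buffers as its `Storage`.
struct BArray<B: Backend> {
    storage: B::Storage,
}

impl<B: Backend> BArray<B> {
    fn from_vec(data: Vec<f32>) -> Self {
        BArray { storage: B::from_vec(data) }
    }
    fn add(&self, other: &Self) -> Self {
        BArray { storage: B::add(&self.storage, &other.storage) }
    }
}

fn main() {
    let a = BArray::<Cpu>::from_vec(vec![1.0, 2.0]);
    let b = BArray::<Cpu>::from_vec(vec![3.0, 4.0]);
    let c = a.add(&b);
    assert_eq!(Cpu::to_vec(&c.storage), vec![4.0, 6.0]);
}
```

The real design would of course also need dimensions, strides, views, and fallible device transfers, but the shape of the abstraction is the same.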
About maintainership, that's welcome — feel free to join the new discussion board #1272 (comment)
This is very much a work-in-progress, but I wanted to know how this approach fits with the rest of the codebase. I realise it's not the most "rusty" implementation, but I think it could be abstracted away quite nicely.

There is a fair amount of unnecessary/unclean code, but that can be cleaned up pretty quickly. The changes made here only allow for binary operations (`+ - * /`) on contiguous `ArrayBase` references (see the example below). With more work, it could be expanded to support almost everything the CPU "backend" supports.

To enable OpenCL support, you need to enable the `opencl` feature. Here is an example:

A very similar approach can be taken to get CUDA support working. It might even be possible to merge them into a single `GPU` backend trait, for example, which would simplify the implementations quite a bit. It'd require a few substantial changes internally, though (I think. Maybe not?).

Anyway, let me know what you think!