Support M1 GPU in FARMReader #2826
It is actually already there :D
Reopening this, as the device is not used by the Inferencer. See `haystack/modeling/infer.py`, line 229 at commit 632cd1c.
Additionally, transformers does not currently support PyTorch 1.12 (see huggingface/transformers#17971). The code in the Inferencer would also need to be changed to pass on the device.
Also see the current state of covered ops for the `mps` backend.
Hey, thanks for sharing this information! I am new to Haystack and wondering how to enable the GPU on a Mac with an M1 chip. I have PyTorch set up already with `torch.backends.mps.is_available() == True`. However, I still don't know how to activate it. Can you provide a bit more information? Best
Hey @yli223, we do not currently support the M1 GPU. We would need to implement the changes explained by @mathislucka above in Haystack. In addition, we also need to wait for Hugging Face transformers to support PyTorch 1.12, which is required for the M1 GPU to work (more info here: huggingface/transformers#17925).
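For readers landing here: the backend-selection order under discussion can be sketched as a small helper. `pick_device` is a hypothetical illustration, not a Haystack function; the availability flags would normally come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`.

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the best available torch device string.

    Mirrors the preference order discussed in this thread:
    CUDA first, then Apple's MPS backend, then CPU.
    """
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"
```

On an M1 machine without CUDA, this would select `"mps"` once the rest of the stack supports it.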
Update: the HF PR has been merged to main. Therefore, we can use this feature as soon as we support the HF v4.21.2 release (as soon as it gets released). Do we need to add the device anywhere else?
That's great! I would say anywhere the user passes an option to device initialization (see `haystack/modeling/infer.py`, lines 175 to 176 at commit be127e5), where the device is then used (see `haystack/modeling/infer.py`, line 128 at commit be127e5). So what is inconsistent at the moment is how the device is specified across these call sites.
@sjrl, so what you are saying is that every function, including the component constructor where we currently pass `use_gpu`, should also accept a `devices` argument?
Yes, I think this makes sense; it would help standardize how devices are specified in Haystack.
I'm not entirely sure what you mean here. Do you mean we should always use this statement everywhere we have added the `devices` argument?

```python
if devices is None:
    devices, n_gpu = initialize_device_settings(use_cuda=gpu, multi_gpu=False)
```
Yes, it seems to be already used everywhere, but we should make sure that it does get used, in addition to making sure we provide the `devices` argument.
Yes, I agree.
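A minimal sketch of the standardization being agreed on here, assuming an explicit `devices` list should always win over the boolean flag. `initialize_device_settings_sketch` is illustrative only; the real `initialize_device_settings` works with `torch.device` objects and also returns the GPU count.

```python
from typing import List, Optional


def initialize_device_settings_sketch(
    use_cuda: bool = False,
    devices: Optional[List[str]] = None,
) -> List[str]:
    """Hypothetical device resolution: an explicit `devices` list
    overrides the use_cuda/use_gpu flag; otherwise fall back to the
    old boolean behavior. Non-breaking because `devices` is optional.
    """
    if devices:
        return devices
    return ["cuda"] if use_cuda else ["cpu"]
```

With this shape, existing callers passing only `use_cuda` keep their behavior, while M1 users could pass `devices=["mps"]` explicitly.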
Update: although HF has recently added support for devices in pipelines, the main blocker for Haystack deployment on Apple Silicon M1/M2 remains the MPS implementation of the `torch.cumsum` operator, which is used extensively in all HF models.
However, seq2seq generative models still don't work (whenever GenerationMixin is used); the error traces back to the missing `cumsum` support on MPS. So now we have to wait for pytorch/pytorch#86806.
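A hedged probe for this blocker, assuming only that `torch.cumsum` and the `"mps"` device string exist in PyTorch ≥ 1.12. `cumsum_backend` is a hypothetical helper, not part of Haystack or PyTorch, and degrades gracefully when torch or MPS is absent.

```python
def cumsum_backend() -> str:
    """Probe whether torch.cumsum runs on the MPS backend.

    This is the op whose missing MPS kernel (pytorch/pytorch#86806)
    blocked generative models in this thread. Returns a label for the
    backend that actually works in the current environment.
    """
    try:
        import torch
    except ImportError:
        return "torch-not-installed"
    mps = getattr(torch.backends, "mps", None)  # absent on torch < 1.12
    if mps is not None and mps.is_available():
        try:
            torch.cumsum(torch.ones(3, device="mps"), dim=0)
            return "mps"
        except RuntimeError:
            # Kernel missing: the failure mode described above.
            return "cpu-fallback"
    return "cpu"
```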
Hi @vblagoje, the blocking issue has been fixed. May I ask what the current status of M1 GPU support is? At least the documentation doesn't mention Apple Silicon support, so I suppose it's still not supported.
@laike9m haven't tried it in a while tbh. Having looked at pytorch/pytorch#86806 it seems like it should work now. Please try it out and let us know. If not, I'll get to this task next week or so |
Thanks, I can give it a try. Where can I find the instructions to enable it? (Sorry, I'm pretty new to Haystack.)
Still getting the error. Running macOS Sonoma 14.2.1 (23C71) with PyTorch 2.1.2.
Is your feature request related to a problem? Please describe.
Since Haystack v1.6 we have support for PyTorch 1.12, which also means support for the M1 GPU. However, we currently initialize the device to be either `cpu` or `cuda`, depending on availability and on whether the user passes in the `use_gpu=True` parameter. For GPU use on the M1, PyTorch actually uses the `mps` backend; see https://pytorch.org/docs/stable/notes/mps.html. If we could allow users to pass the actual device into the FARMReader, this could make GPU training and inference on the M1 possible.
Describe the solution you'd like
Allow the user to pass `devices=[<device>]` into `FARMReader.__init__` and use these devices in `initialize_device_settings`. We could make this non-breaking by making it an optional argument to the reader init and the device initialization.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
Additional context
Add any other context or screenshots about the feature request here.
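The proposal in this issue can be sketched as follows. `FARMReaderSketch` is a stand-in class, not the real `FARMReader`; it only shows how an optional `devices` argument could override `use_gpu` without breaking existing callers.

```python
from typing import List, Optional


class FARMReaderSketch:
    """Stand-in for FARMReader illustrating the proposed optional
    `devices` argument. When `devices` is given (e.g. ["mps"]), it takes
    precedence; otherwise the old use_gpu behavior is preserved."""

    def __init__(
        self,
        model_name_or_path: str,
        use_gpu: bool = True,
        devices: Optional[List[str]] = None,
    ):
        self.model_name_or_path = model_name_or_path
        if devices is not None:
            # Explicit devices win: this is what would enable "mps" on M1.
            self.devices = devices
        else:
            # Legacy behavior: boolean flag maps to cuda/cpu only.
            self.devices = ["cuda"] if use_gpu else ["cpu"]


# Hypothetical usage on an M1 machine:
reader = FARMReaderSketch("deepset/roberta-base-squad2", devices=["mps"])
```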