Add support for Cambricon MLU devices #535
base: main
Conversation
@Narsil @ArthurZucker
I don't see that the data is transferred onto the device itself with this PR; the tensors stay as numpy arrays?
@kalvdans I ran the test code on MLUs. |
I ran the code without "import torch_mlu" and got an error.
Where can I get the torch_mlu module from to test it, and does it have side-effects on module load?
@kalvdans The test code is run as below.
Thanks @huismiling for explaining. I'll leave it to others to decide whether they want proprietary, untestable code in a public library. Off-topic, but I recommend Cambricon make the registration with torch explicit, since tools such as "uv check --fix" will remove seemingly unused imports.
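A minimal, hypothetical sketch of the pattern being discussed: a backend package (torch_mlu here) that registers its devices with torch as an import side effect has to be imported even though nothing references it, so the import needs a noqa marker to keep lint autofixers from stripping it. The helper name `mlu_available` is illustrative, not part of any library.

```python
# Hypothetical sketch: guarding a side-effect import so linters keep it.
import importlib.util


def mlu_available() -> bool:
    """Return True if the torch_mlu backend package can be imported."""
    if importlib.util.find_spec("torch_mlu") is None:
        return False
    # Side-effect import: torch_mlu registers "mlu" devices with torch at
    # import time. The noqa marker prevents autofixers from removing it.
    import torch_mlu  # noqa: F401
    return True


print("MLU backend importable:", mlu_available())
```

Making the registration an explicit call (e.g. something like a `register()` function) instead of an import side effect would avoid the noqa workaround entirely.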
@kalvdans Thanks. |
LGTM, not sure if the docs need an update as well @Narsil
What does this PR do?
Add support for Cambricon MLU devices.
Transformers and Accelerate already support Cambricon MLU (huggingface/transformers#29627, huggingface/accelerate#2552).
This PR enables users to leverage Cambricon MLU devices for training and inference with safetensors.
Test code: