[Feature] Support engine with NPU backend. #2262
Conversation
fix autocast bugs on npu (#2273)
add npu extension and focal loss adapter
```diff
@@ -280,7 +280,7 @@ def get_extensions():
     if is_rocm_pytorch or torch.cuda.is_available() or os.getenv(
             'FORCE_CUDA', '0') == '1':
         if is_rocm_pytorch:
-            define_macros += [('HIP_DIFF', None)]
+            define_macros += [('MMCV_WITH_HIP', None)]
```
Why do we remove `HIP_DIFF`?
https://github.com/open-mmlab/mmcv/blob/master/setup.py#L283
This might be a comparison error from the merge: master already uses `MMCV_WITH_HIP` at the linked line. Please update.
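For context, a minimal sketch of why the macro name matters, assuming the variable names from the diff above; this is illustrative, not a verbatim excerpt of setup.py:

```python
# Each (name, value) pair in define_macros becomes a -D<name> compiler
# flag, so the name must match the #ifdef guards in mmcv's C++/HIP
# sources. Master already guards with MMCV_WITH_HIP, so keeping the
# stale HIP_DIFF name would silently disable those code paths.
is_rocm_pytorch = False  # in setup.py this is detected from the torch build
define_macros = []
if is_rocm_pytorch:
    define_macros += [('MMCV_WITH_HIP', None)]  # emitted as -DMMCV_WITH_HIP
```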
* add npu test case
done.
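For reference, a hedged sketch of what such an NPU test case might look like; the class name comes from the `mmcv/device/` changes described below, but the exact test added in this PR may differ:

```python
import pytest
import torch

# NPUDataParallel is the wrapper this PR adds under mmcv/device/;
# importing it (and running the test) requires torch_npu.
from mmcv.device.npu import NPUDataParallel


@pytest.mark.skipif(
    not hasattr(torch, 'npu') or not torch.npu.is_available(),
    reason='requires torch_npu and an Ascend NPU')
def test_npu_data_parallel():
    model = torch.nn.Linear(2, 2)
    npu_model = NPUDataParallel(model.npu(), device_ids=[0])
    out = npu_model(torch.randn(4, 2).npu())
    assert out.shape == (4, 2)
```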
LGTM
Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help it get feedback more easily. If you do not understand some items, don't worry: just make the pull request and seek help from the maintainers.
Motivation
Added Ascend (NPU) device support in mmcv.
Modification
* mmcv/device/: we've added our NPU devices. Since the NPU does not support the DataParallel mode, DP and DDP have been customized, the Scatter-related methods have been rewritten, and the MLU use case was referenced, thanks. (See the sketch under Use cases (Optional) below.)
* mmcv/runner/dist_utils.py: we have added NPU-related distributed initialization methods.
* mmcv/runner/hooks/optimizer.py: we've added the amp method on NPU.
* mmcv/utils/: we've added the NPU devices.
BC-breaking (Optional)
None
Use cases (Optional)
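The pieces listed under Modification can be combined roughly as follows. This is a hedged sketch rather than the verbatim merged API: the class names follow the MLU pattern this PR references, and the HCCL backend and launcher wiring are assumptions based on the PR text.

```python
# Requires torch_npu and an Ascend device.
import torch

from mmcv.device.npu import NPUDataParallel, NPUDistributedDataParallel
from mmcv.runner import Fp16OptimizerHook, init_dist

model = torch.nn.Linear(2, 2)

# The NPU has no true DataParallel mode, so the DP wrapper only handles
# device placement/scatter for a single device.
dp_model = NPUDataParallel(model.npu(), device_ids=[0])

# Distributed path: dist_utils gains NPU initialization; on Ascend the
# collective backend is HCCL (assumed wiring, shown commented out):
# init_dist('pytorch', backend='hccl')
# ddp_model = NPUDistributedDataParallel(model.npu(), device_ids=[0])

# Mixed precision: the amp support added to mmcv/runner/hooks/optimizer.py
# is driven through the existing Fp16OptimizerHook.
optimizer_hook = Fp16OptimizerHook(loss_scale='dynamic')
```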
Checklist
Before PR:
After PR: