
[Fix] Skip fused_lrelu op when gcc is less than 6.0 or cuda is less than 10.2 #2671

Merged
4 commits merged into open-mmlab:2.x on Mar 17, 2023

Conversation

grimoire (Member)

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and easier to review. If you do not understand some items, don't worry — just open the pull request and ask the maintainers for help.

Motivation

Skip the fused_lrelu op when gcc is older than 6.0 or CUDA is older than 10.2, since the op does not build correctly on those toolchains.
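The version gate this PR describes can be sketched as a small build-time check. This is a hypothetical illustration, not the actual mmcv build code: the function names and the way the versions are obtained are assumptions, and the warning mirrors the PR's last commit ("warning if disable").

```python
import warnings


def parse_version(version_str):
    """Turn a dotted version string like '5.4.0' into a comparable tuple."""
    return tuple(int(p) for p in version_str.split('.'))


def should_build_fused_lrelu(gcc_version, cuda_version):
    """Hypothetical build-time gate (not mmcv's real code): skip the
    fused_lrelu extension when gcc < 6.0 or CUDA < 10.2, warning the
    user instead of failing the build."""
    if parse_version(gcc_version) < (6, 0):
        warnings.warn('fused_lrelu is skipped: gcc < 6.0')
        return False
    if parse_version(cuda_version) < (10, 2):
        warnings.warn('fused_lrelu is skipped: CUDA < 10.2')
        return False
    return True
```

In a real setup script the two versions would come from the detected compiler and CUDA toolkit; here they are passed in explicitly so the gate is easy to test in isolation.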

Modification

Adds compiler and CUDA version checks to the build so the fused_lrelu (filtered_lrelu) op is skipped, with a warning, when gcc is older than 6.0 or CUDA is older than 10.2 (see the commit list below).

BC-breaking (Optional)

Does the modification introduce changes that break the backward-compatibility of the downstream repositories?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.

Checklist

Before PR:

  • I have read and followed the workflow indicated in the CONTRIBUTING.md to create this PR.
  • Pre-commit or linting tools indicated in CONTRIBUTING.md are used to fix the potential lint issues.
  • Bug fixes are covered by unit tests; the case that caused the bug should be added to the unit tests.
  • New functionalities are covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • The documentation has been modified accordingly, including docstring or example tutorials.

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with some of those projects, like MMDet or MMCls.
  • CLA has been signed and all committers have signed the CLA in this PR.

codecov bot commented Mar 16, 2023

Codecov Report

Patch and project coverage are unchanged.

Comparison is base (869dbf1) 65.28% compared to head (260a85f) 65.28%.

❗ Current head 260a85f differs from the pull request's most recent head b9d1d2d. Consider uploading reports for commit b9d1d2d to get more accurate results.

Additional details and impacted files
@@           Coverage Diff           @@
##              2.x    #2671   +/-   ##
=======================================
  Coverage   65.28%   65.28%           
=======================================
  Files         124      124           
  Lines        8337     8337           
  Branches     1168     1168           
=======================================
  Hits         5443     5443           
  Misses       2700     2700           
  Partials      194      194           
Flag Coverage Δ
unittests 65.28% <ø> (ø)

Flags with carried forward coverage won't be shown.



@zhouzaida zhouzaida changed the title [Fix] Disable ops gcc < 6 [Fix] Disable fused_lrelu op when gcc is less than 6.0 or cuda is less than 10.2 Mar 17, 2023
@zhouzaida zhouzaida changed the title [Fix] Disable fused_lrelu op when gcc is less than 6.0 or cuda is less than 10.2 [Fix] Skip fused_lrelu op when gcc is less than 6.0 or cuda is less than 10.2 Mar 17, 2023
@zhouzaida zhouzaida merged commit 03ea1c9 into open-mmlab:2.x Mar 17, 2023
tyomj pushed a commit to tyomj/mmcv that referenced this pull request May 8, 2023
…han 10.2 (open-mmlab#2671)

* disable filtered_lrelu_op

* fix lint

* add cuda version check

* warning if disable