
[FIX] Fix cublas batch matmul #6715

Merged: 3 commits into apache:main, Oct 22, 2020

Conversation

sxjscience
Member

Thanks for contributing to TVM! Please refer to guideline https://tvm.apache.org/docs/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from Reviewers by @ them in the pull request thread.

@comaniac The issue is the same as the cblas_batch_matmul issue in #6699.
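
For context, the failure is the same shape-argument mismatch as in #6699: Relay's generic batch_matmul strategy always forwards an output-shape value to the TOPI compute, while the cuBLAS compute did not accept one. A minimal sketch of the kind of change involved, not the exact diff (the out_shape parameter name follows the discussion below):

```python
# Sketch only: let the cuBLAS batch_matmul compute tolerate the extra
# output-shape argument that the generic strategy always passes.
from tvm.contrib import cublas


def batch_matmul_cublas(x, y, out_shape=None):
    """Batch matmul via cuBLAS; y is expected in (batch, N, K) layout."""
    # out_shape is accepted only for interface compatibility with the
    # strategy wrapper; cuBLAS infers the output shape from its inputs.
    return cublas.batch_matmul(x, y, False, True)
```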

Update batch_matmul.py
Contributor

@comaniac left a comment


@merrymercy the CI failed at the Ansor tutorial. Would you mind taking a look?

Review thread on python/tvm/topi/cuda/batch_matmul.py (outdated, resolved)
@icemelon
Member

Why do we need this attribute in batch_matmul?

@sxjscience
Member Author

The reason is that the code here always assumes that there is an out_shape argument. https://github.com/apache/incubator-tvm/blob/461e75bd5ffaf45a0f270998514d444463d11261/python/tvm/relay/op/strategy/generic.py#L685-L686
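
Roughly, the wrapper at the linked lines looks like the following (a reconstruction for illustration, not a verbatim copy): every TOPI compute it dispatches to is called with out_type.shape as an extra argument, so a compute without that parameter fails.

```python
# Rough reconstruction of the Relay strategy wrapper referenced above.
def wrap_compute_batch_matmul(topi_compute):
    """Wrap a TOPI batch_matmul compute for use in an op strategy."""

    def _compute_batch_matmul(attrs, inputs, out_type):
        # out_type.shape is always forwarded, so every dispatched compute
        # must accept an output-shape argument.
        return [topi_compute(inputs[0], inputs[1], out_type.shape)]

    return _compute_batch_matmul
```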

@icemelon
Member

icemelon commented Oct 20, 2020

I checked the implementation. I think we can remove the oshape arg from the topi implementation and enable broadcasting at the batch axis by default. cc @jwfromm

https://github.com/apache/incubator-tvm/blob/main/python/tvm/topi/nn/batch_matmul.py#L23
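
For illustration, broadcasting at the batch axis inside the TOPI compute could look roughly like this; it is a sketch that assumes static shapes and the (batch, N, K) layout for the second operand, not necessarily what the follow-up change will do:

```python
# Sketch: batch_matmul that broadcasts a batch dimension of 1, which would
# remove the need for a separate oshape argument. Assumes static shapes.
from tvm import te


def batch_matmul_bcast(x, y):
    """x: (XB, M, K), y: (YB, N, K); XB or YB may be 1 and is broadcast."""
    XB, M, K = (int(d) for d in x.shape)
    YB, N, _ = (int(d) for d in y.shape)
    assert XB == YB or XB == 1 or YB == 1, "batch dimensions must match or be 1"
    batch = max(XB, YB)
    k = te.reduce_axis((0, K), name="k")
    return te.compute(
        (batch, M, N),
        lambda b, i, j: te.sum(
            x[b if XB != 1 else 0, i, k] * y[b if YB != 1 else 0, j, k],
            axis=k,
        ),
        name="batch_matmul_bcast",
        tag="batch_matmul",
    )
```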

@sxjscience changed the title from [TinyFix] Fix cublas batch matmul to [FIX] Fix cublas batch matmul on Oct 20, 2020
@sxjscience
Member Author

Per offline discussion with @icemelon9, I will open a separate PR to remove the oshape flag directly.

Member

@icemelon left a comment


LGTM

@icemelon merged commit ea9194a into apache:main Oct 22, 2020
@icemelon
Member

Thanks @sxjscience @comaniac

masahi pushed a commit to masahi/tvm that referenced this pull request Oct 23, 2020
* Update batch_matmul.py

Update batch_matmul.py

* fix
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Oct 29, 2020
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Dec 2, 2020
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Dec 4, 2020
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request Dec 4, 2020