[FIX] Fix cublas batch matmul #6715
Conversation
Update batch_matmul.py (eac559b to e1116a3)
@merrymercy the CI failed at the Ansor tutorial. Would you mind taking a look?
Why do we need this attribute in
The reason is that the code here always assumes that there is an
I checked the implementation. I think we can remove the oshape arg in the topi implementation and enable broadcasting on the batch axis by default. cc @jwfromm https://github.com/apache/incubator-tvm/blob/main/python/tvm/topi/nn/batch_matmul.py#L23
Per offline discussion with @icemelon9, I will have a separate PR to remove the oshape flag directly.
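For context, a minimal sketch of what broadcasting on the batch axis in a batch_matmul compute could look like at the te level. This is illustrative only, not the actual topi implementation; `batch_matmul_bcast` and all shapes below are made up for the example.

```python
# A minimal sketch, assuming TVM's te API, of a batch_matmul that broadcasts
# a batch dimension of 1 instead of relying on an explicit oshape argument.
# Illustrative only; not the actual tvm.topi.nn.batch_matmul code.
import tvm
from tvm import te

def batch_matmul_bcast(x, y):
    """x: (B, M, K), y: (B or 1, N, K); a batch dim of 1 in y is broadcast."""
    B, M, K = x.shape
    B_y, N, _ = y.shape
    k = te.reduce_axis((0, K), name="k")
    return te.compute(
        (B, M, N),
        lambda b, i, j: te.sum(
            x[b, i, k] * y[tvm.tir.if_then_else(B_y == 1, 0, b), j, k],
            axis=k,
        ),
        name="batch_matmul_bcast",
    )

# Example: y's batch of 1 is broadcast across x's batch of 8.
A = te.placeholder((8, 32, 64), name="A")
W = te.placeholder((1, 16, 64), name="W")
C = batch_matmul_bcast(A, W)  # output shape (8, 32, 16)
```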
LGTM
Thanks @sxjscience @comaniac
* Update batch_matmul.py
* Update batch_matmul.py
* fix
Thanks for contributing to TVM! Please refer to the guideline https://tvm.apache.org/docs/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from Reviewers by @-mentioning them in the pull request thread.
@comaniac The issue is the same as the cblas_batch_matmul issue in #6699.
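For reference, a hedged sketch of how the cuBLAS extern path is typically invoked at the te level; the shapes are illustrative and this is not the code changed in this PR.

```python
# Illustrative only: tvm.contrib.cublas.batch_matmul emits an extern call to
# cuBLAS batched GEMM, which expects both operands to carry the same batch
# dimension, so the output shape has to be derived consistently with the
# relay-level batch_matmul (hence the oshape discussion above).
import tvm
from tvm import te
from tvm.contrib import cublas

A = te.placeholder((8, 32, 64), name="A", dtype="float32")  # (batch, M, K)
B = te.placeholder((8, 16, 64), name="B", dtype="float32")  # (batch, N, K)

# transb=True because TVM's batch_matmul convention stores the second
# operand as (batch, N, K).
C = cublas.batch_matmul(A, B, transa=False, transb=True)  # (batch, M, N)
```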