
[FIX,AUTOTVM] Add flop counts to cublas #7297

Merged: 5 commits into apache:main, Jan 23, 2021

Conversation

tkonolige (Contributor):
Fixes a bug when trying to autotune with cublas enabled. I've also added the op name to the error messages.

@comaniac @merrymercy

Also print which op was missing flops.

python/tvm/autotvm/task/task.py (review thread: outdated, resolved)
@@ -44,6 +44,8 @@ def dense_cublas(cfg, data, weight, bias=None, out_dtype=None):
matmul = cublas.matmul(data, weight, False, True)
if isinstance(batch, int):
cfg.add_flop(batch * in_dim * out_dim * 2)
else:
cfg.add_flop(batch.value * in_dim * out_dim * 2)
Contributor:

Should have an assertion to make sure batch is a TVM container type.

Contributor Author:

I've added a check that it is int or IntImm. Would any other types show up here?

Contributor:

I don't think so. Let's keep the current version to make sure we can catch any exceptions.

Contributor:

Ah, it turns out that you may get a Var in the case of dynamic shapes. Many of the other TOPI templates simply skip the flop count when the shape is dynamic. You could do the same.
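The three cases discussed in this thread can be sketched as follows. This is a hedged illustration, not the actual TVM code: `IntImm` and `Var` below are minimal stand-ins for `tvm.tir.IntImm` and `tvm.tir.Var`, and `dense_flops` is a hypothetical helper mirroring the logic in the diff above.

```python
class IntImm:
    """Stand-in for tvm.tir.IntImm: a compile-time constant dimension."""
    def __init__(self, value):
        self.value = value

class Var:
    """Stand-in for tvm.tir.Var: a symbolic (dynamic-shape) dimension."""
    def __init__(self, name):
        self.name = name

def dense_flops(batch, in_dim, out_dim):
    """Return the FLOP count for a dense (matmul) op, or None when the
    batch dimension is dynamic and the count cannot be known statically."""
    if isinstance(batch, int):
        b = batch
    elif isinstance(batch, IntImm):
        b = batch.value
    else:
        # Dynamic shape (Var): follow the other TOPI templates and
        # skip the flop count rather than raise.
        return None
    # Each output element takes in_dim multiply-adds, i.e. 2 * in_dim flops.
    return 2 * b * in_dim * out_dim
```

For example, `dense_flops(4, 128, 256)` and `dense_flops(IntImm(4), 128, 256)` both yield `262144`, while `dense_flops(Var("n"), 128, 256)` returns `None`, signalling that no flop count should be recorded.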

tkonolige and others added 2 commits January 15, 2021 16:55

@comaniac comaniac merged commit 218048e into apache:main Jan 23, 2021
@comaniac (Contributor):
Thanks @tkonolige

alexwong pushed a commit to alexwong/tvm that referenced this pull request Feb 11, 2021
electriclilies pushed a commit to electriclilies/tvm that referenced this pull request Feb 18, 2021
Lokiiiiii pushed a commit to Lokiiiiii/tvm that referenced this pull request Mar 2, 2021
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request Mar 2, 2021
3 participants