Improve quantize_comm error message (#2018)
Summary:
Pull Request resolved: #2018

As titled

Reviewed By: jianyuh, henrylhtsang, edqwerty10

Differential Revision: D49295738

fbshipit-source-id: 45524d8e220ba6b686a99d201e24c6a3d839aed7
sryap authored and facebook-github-bot committed Sep 15, 2023
1 parent aa48aaa commit be1d5ca
1 changed file: fbgemm_gpu/fbgemm_gpu/quantize_comm.py (4 additions, 3 deletions)
@@ -193,9 +193,10 @@ def calc_quantized_size(
             self._comm_precision == SparseType.FP8 and self._row_dim > 0
         ):
             ctx = none_throws(ctx)
-            assert (
-                input_len % ctx.row_dim == 0
-            ), f"input_len {input_len} is not a multiple of row dim {ctx.row_dim}"
+            assert input_len % ctx.row_dim == 0, (
+                f"input_len {input_len} is not a multiple of row dim {ctx.row_dim} "
+                "Please check your batch size (power of 2 batch size is recommended)"
+            )
             nrows = input_len // ctx.row_dim
             ncols = (ctx.row_dim + 3) // 4 * 4 + 2 * 4
             return nrows * ncols
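For context, a minimal standalone sketch of the size calculation that the improved assert guards, with hypothetical input values; the fp8_quantized_size helper below is illustrative only and is not part of fbgemm_gpu:

# Illustrative sketch (not part of the diff) mirroring the arithmetic in
# calc_quantized_size for the FP8 path, showing when the new message fires.

def fp8_quantized_size(input_len: int, row_dim: int) -> int:
    # The flattened input must split evenly into rows of row_dim elements.
    assert input_len % row_dim == 0, (
        f"input_len {input_len} is not a multiple of row dim {row_dim} "
        "Please check your batch size (power of 2 batch size is recommended)"
    )
    nrows = input_len // row_dim
    # Round each row up to a multiple of 4 elements and add 2 * 4 extra
    # elements per row, matching the expression in the diff.
    ncols = (row_dim + 3) // 4 * 4 + 2 * 4
    return nrows * ncols

print(fp8_quantized_size(1024, 128))  # 8 rows * 136 cols = 1088
# fp8_quantized_size(1000, 128) raises AssertionError with the new message.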
