
Fix edge cases in (de)serialize_torch_tensor #591

Merged: 7 commits from fix-requires-grad into master on Sep 5, 2023

Conversation

justheuristic (Member) commented:

An earlier patch dropped the requires_grad property in serialize_torch_tensor; this PR adds it back.
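Hivemind's real implementation serializes into a protobuf runtime_pb2.Tensor; the following is only a minimal self-contained sketch of the idea, with hypothetical helper names, showing requires_grad carried alongside the payload and restored on deserialization:

```python
import numpy as np
import torch

def serialize_tensor(tensor: torch.Tensor) -> dict:
    """Pack a tensor into a plain dict, keeping requires_grad alongside the data."""
    return {
        "data": tensor.detach().cpu().numpy().tobytes(),
        "shape": tuple(tensor.shape),
        "dtype": str(tensor.dtype).removeprefix("torch."),  # e.g. "float32"
        "requires_grad": tensor.requires_grad,  # the property this PR restores
    }

def deserialize_tensor(msg: dict) -> torch.Tensor:
    array = np.frombuffer(msg["data"], dtype=getattr(np, msg["dtype"])).reshape(msg["shape"])
    tensor = torch.from_numpy(array.copy())  # copy: frombuffer yields a read-only view
    return tensor.requires_grad_(msg["requires_grad"])

# Round trip preserves both values and the requires_grad flag:
x = torch.randn(3, 4, requires_grad=True)
y = deserialize_tensor(serialize_tensor(x))
assert y.requires_grad and torch.allclose(x, y)
```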

@mryab mryab changed the title serialize with requires_grad Serialize the requires_grad tensor property Sep 5, 2023
codecov bot commented Sep 5, 2023

Codecov Report

Merging #591 (12e8fc2) into master (64f1f1e) will increase coverage by 0.17%.
The diff coverage is 88.88%.

@@            Coverage Diff             @@
##           master     #591      +/-   ##
==========================================
+ Coverage   85.20%   85.37%   +0.17%     
==========================================
  Files          81       81              
  Lines        8009     8022      +13     
==========================================
+ Hits         6824     6849      +25     
+ Misses       1185     1173      -12     
Files Changed                           Coverage Δ
hivemind/compression/floating.py        89.23% <84.61%> (-2.15%) ⬇️
hivemind/compression/quantization.py    94.21% <92.30%> (-0.62%) ⬇️
hivemind/compression/base.py            94.36% <100.00%> (+0.08%) ⬆️

... and 4 files with indirect coverage changes

@justheuristic justheuristic changed the title Serialize the requires_grad tensor property Fix edge cases in (de)serialize_torch_tensor Sep 5, 2023
```diff
@@ -12,22 +12,28 @@ class Float16Compression(CompressionBase):
     FP16_MIN, FP16_MAX = torch.finfo(torch.float16).min, torch.finfo(torch.float16).max
 
     def compress(self, tensor: torch.Tensor, info: CompressionInfo, allow_inplace: bool = False) -> runtime_pb2.Tensor:
+        assert torch.is_floating_point(tensor) and tensor.dtype != torch.bfloat16
```
mryab (Member) commented:

Is there a reason why we should fail with an error in case of bf16 inputs? It is indeed not sensible, but if the user wants to do so, it's probably better to issue a warning instead of flat-out refusing to pass it through quantization.

justheuristic (Member Author) replied:

Added a ValueError with a more user-legible reason.
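For illustration, such a check might look roughly like this; the helper name and message are hypothetical, not hivemind's actual code:

```python
import torch

def check_compression_input(tensor: torch.Tensor) -> None:
    # Hypothetical helper: reject unsupported dtypes with a legible ValueError
    # instead of a bare assert, as discussed above.
    if not torch.is_floating_point(tensor):
        raise ValueError(f"Float16Compression does not support {tensor.dtype} tensors")
    if tensor.dtype == torch.bfloat16:
        raise ValueError(
            "Float16Compression does not support bfloat16 tensors: bfloat16 has a "
            "float32-like range, so casting to float16 can overflow; cast to float32 first"
        )
```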

```diff
@@ -135,14 +138,15 @@ def quantize(
         except ImportError:
             raise ImportError(BNB_MISSING_MESSAGE)
 
-        quantized, (absmax, codebook) = quantize_blockwise(tensor)
+        quantized, (absmax, codebook, *extra_params) = quantize_blockwise(tensor, blocksize=4096, nested=False)
+        assert tuple(extra_params) == (4096, False, tensor.dtype, None, None)  # blocksize, nested, dtype, offset, s2
```
mryab (Member) commented:

Maybe we can make that tuple on the right a module-level constant? It's used twice in the code, so it's better to make it clear that we're comparing against predefined values.

justheuristic (Member Author) replied on Sep 5, 2023:

Done, thanks for the suggestion.
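A sketch of the suggestion; the names are hypothetical, since the constant actually added in the PR isn't shown here:

```python
import torch

# Hypothetical module-level names for the predefined blockwise-quantization
# settings, so both call sites compare against one clearly named value
# instead of repeating the literal tuple.
BLOCKSIZE, NESTED = 4096, False

def expected_extra_params(dtype: torch.dtype) -> tuple:
    # Order matches the comment in the diff: blocksize, nested, dtype, offset, s2
    return (BLOCKSIZE, NESTED, dtype, None, None)

# Usage at a call site:
# assert tuple(extra_params) == expected_extra_params(tensor.dtype)
```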

@justheuristic justheuristic merged commit 2873252 into master Sep 5, 2023
15 checks passed
@justheuristic justheuristic deleted the fix-requires-grad branch September 5, 2023 22:18
jmikedupont2 pushed a commit to meta-introspector/hivemind that referenced this pull request Mar 29, 2024
* serialize with requires_grad
* ensure that all compression methods return tensor of the original dtype
* test that all compression methods preserve dtype and requires_grad


---------

Co-authored-by: Your Name <you@example.com>
Co-authored-by: Max Ryabinin <mryabinin0@gmail.com>
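The round-trip test described in the commit list might look roughly like the sketch below; it is written against the hypothetical serialize_tensor/deserialize_tensor helpers from earlier, not hivemind's real test suite or compression classes:

```python
import pytest
import torch

# Assumes the serialize_tensor/deserialize_tensor sketch defined above.
@pytest.mark.parametrize("dtype", [torch.float16, torch.float32, torch.float64])
@pytest.mark.parametrize("requires_grad", [False, True])
def test_roundtrip_preserves_dtype_and_requires_grad(dtype, requires_grad):
    tensor = torch.randn(8, 8, dtype=dtype, requires_grad=requires_grad)
    restored = deserialize_tensor(serialize_tensor(tensor))
    assert restored.dtype == tensor.dtype
    assert restored.requires_grad == tensor.requires_grad
    assert torch.allclose(restored, tensor.detach())
```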