
Initial FP8 support #75

Merged
madamczykhabana merged 14 commits into habana_next from vllm-hqt-fork on Jun 28, 2024
Conversation

@madamczykhabana commented Jun 28, 2024

Includes:

@madamczykhabana (Author)

@nirda7 Please check that all the necessary changes are included.
@kzawora-intel Please approve.
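
For context, FP8 inference generally relies on scaling tensors into the 8-bit E4M3 range before casting. The snippet below is a minimal PyTorch sketch of maxabs-scaled quantize/dequantize to illustrate that arithmetic; it is not the code added by this PR, and the function names and per-tensor scale choice here are assumptions for illustration only.

```python
import torch

# Largest representable magnitude in FP8 E4M3 (~448).
FP8_E4M3_MAX = torch.finfo(torch.float8_e4m3fn).max

def quantize_fp8(x: torch.Tensor):
    """Per-tensor maxabs scaling: map the largest |x| to the FP8 E4M3 max."""
    scale = x.abs().max().clamp(min=1e-12) / FP8_E4M3_MAX
    x_fp8 = (x / scale).to(torch.float8_e4m3fn)
    return x_fp8, scale

def dequantize_fp8(x_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Undo the scaling; compute in float32 for the comparison."""
    return x_fp8.to(torch.float32) * scale

if __name__ == "__main__":
    x = torch.randn(4, 8)
    x_fp8, scale = quantize_fp8(x)
    err = (dequantize_fp8(x_fp8, scale) - x).abs().max()
    print(f"max abs round-trip error: {err.item():.4f}")
```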

madamczykhabana self-assigned this on Jun 28, 2024
madamczykhabana merged commit 33a2620 into habana_next on Jun 28, 2024
madamczykhabana deleted the vllm-hqt-fork branch on Jun 28, 2024 at 13:17
kzawora-intel added the habana label (Issues or PRs submitted by Habana Labs) on Nov 8, 2024
michalkuligowski added a commit that referenced this pull request Jan 15, 2025
remove expert_max hard code (#47)
vLLM-Ext: Full enabling of ALiBi (#34)
Add version inference via setuptools-scm (#58)
Revert "vLLM-Ext: Full enabling of ALiBi (#34)" (#59)
Remove punica_hpu.py from vllm_hpu_extension (#66)
Removed previous (not-pipelined) pa implementation (#72)
Add flag to enable running softmax in fp32 (#71)
Update calibration readme link (#73)
allow lm_head quantization in calibration process (#65)
Pad to bmin if value is less (#67)
Update pyproject.toml (#75)

---------

Co-authored-by: Michał Kuligowski <mkuligowski@habana.ai>
mfylcek added a commit that referenced this pull request Jan 21, 2025
remove expert_max hard code (#47)
vLLM-Ext: Full enabling of ALiBi (#34)
Add version inference via setuptools-scm (#58)
Revert "vLLM-Ext: Full enabling of ALiBi (#34)" (#59)
Remove punica_hpu.py from vllm_hpu_extension (#66)
Removed previous (not-pipelined) pa implementation (#72)
Add flag to enable running softmax in fp32 (#71)
Update calibration readme link (#73)
allow lm_head quantization in calibration process (#65)
Pad to bmin if value is less (#67)
Update pyproject.toml (#75)

---------

Co-authored-by: Michał Kuligowski <mkuligowski@habana.ai>