
Fix: modelloader handling of model_kwargs load_in*bit #1999

Merged
merged 10 commits into main from fix/model-loader-cast-issues
Oct 30, 2024

Conversation

NanoCode012
Collaborator

Description

I missed one important case: model_kwargs["load_in_8bit"] and model_kwargs["load_in_4bit"] are deleted at the end of self.set_quantization_config(), which breaks everything downstream that still depends on those kwargs. This PR fixes that and refactors the handling.
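A minimal sketch of the failure mode and the fix. The class and method names mirror the PR description, but everything below is illustrative, not the actual axolotl implementation: the assumption is that set_quantization_config() pops the raw flags from model_kwargs (so they are not forwarded to the model constructor twice), so any later read of those keys must come from state captured beforehand.

```python
class ModelLoader:
    """Illustrative loader; not the real axolotl ModelLoader."""

    def __init__(self):
        self.model_kwargs = {"load_in_8bit": True, "load_in_4bit": False}

    def set_quantization_config(self):
        # Capture the flags BEFORE deleting them from model_kwargs.
        self.load_in_8bit = self.model_kwargs.get("load_in_8bit", False)
        self.load_in_4bit = self.model_kwargs.get("load_in_4bit", False)
        # ... build the quantization config from the flags here ...
        # The raw flags are removed so they are not passed through twice.
        del self.model_kwargs["load_in_8bit"]
        del self.model_kwargs["load_in_4bit"]

    def wants_8bit(self):
        # Before the fix, code like the commented line below raised
        # KeyError once set_quantization_config() had run:
        #   return self.model_kwargs["load_in_8bit"]
        return self.load_in_8bit


loader = ModelLoader()
loader.set_quantization_config()
print(loader.wants_8bit())                        # True
print("load_in_8bit" in loader.model_kwargs)      # False
```

The design point: once a key is deleted from model_kwargs, every subsequent `model_kwargs["load_in_*bit"]` lookup fails, so the flags are read once and stored on the loader instead of being re-read from the dict.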


@NanoCode012 NanoCode012 changed the title Fix/model loader cast issues Fix: modelloader handling of model_kwargs load_in*bit Oct 28, 2024
@MengqingCao
Contributor

This looks much better than checking whether the key-value pair exists before each check. :-)

@NanoCode012
Collaborator Author

To validate the new e2e test, I ran it on current main which threw an error.

main: [screenshot of the error]

this branch: [screenshot]

@winglian winglian merged commit 5c7e891 into main Oct 30, 2024
14 of 15 checks passed
@winglian winglian deleted the fix/model-loader-cast-issues branch October 30, 2024 18:41
bursteratom pushed a commit that referenced this pull request Nov 18, 2024
* fix: load_in_*bit not properly read

* fix: load_*bit check

* fix: typo

* refactor: load_*bit handling

* feat: add test dpo lora multi-gpu

* fix: turn off sample packing for dpo

* fix: missing warmup_steps

* fix: test to load in 8bit for lora

* skip 8bit lora on h100, add 4bit lora on h100 to multi gpu tests

* chore: reduce max_steps

---------

Co-authored-by: Wing Lian <wing.lian@gmail.com>