
Load model in the target export precision by default in PTQ #10267

Merged (2 commits) on Aug 27, 2024

Conversation

@janekl (Collaborator) commented on Aug 27, 2024

What does this PR do?

This PR configures PTQ to load the model in the same precision as export.dtype by default. This avoids possible OOM errors when casting weights during the model export step.

This targets use cases with limited device memory, for example running PTQ for Llama3-8b on an L4 GPU with 24 GB of memory.
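Back-of-the-envelope arithmetic illustrates why the load precision matters here (a sketch; the ~8e9 parameter count for Llama3-8b is an assumption, and activation/calibration overhead is ignored):

```python
# Rough weight-memory estimate for an ~8B-parameter model.
GIB = 1024**3
n_params = 8_000_000_000  # assumed parameter count for Llama3-8b

bytes_per_dtype = {"fp32": 4, "bf16": 2, "fp16": 2}

for dtype, nbytes in bytes_per_dtype.items():
    size_gib = n_params * nbytes / GIB
    print(f"{dtype}: {size_gib:.1f} GiB")

# fp32 weights alone (~29.8 GiB) exceed a 24 GiB L4 GPU, while
# bf16 (~14.9 GiB) leaves headroom for the PTQ calibration pass.
```

This is why loading directly in the export precision, rather than loading in fp32 and casting at export time, can be the difference between fitting on the device and an OOM.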

Collection: NLP

Changelog

  • Load the model in the target export precision (export.dtype) by default.
  • Enable model.megatron_amp_O2=true so that half precision is actually used.

Usage

  • To use FP16 / BF16 weight precision, enabling model.megatron_amp_O2 is necessary.
  • Disabling model.dist_ckpt_load_on_device avoids GPU memory spikes on model loading.
  • Using a lower inference.batch_size (compared to the default of 64) might also be needed.

An example PTQ command:

python megatron_gpt_ptq.py \
    model.restore_from_path=Llama3-8b \
    +model.dist_ckpt_load_on_device=false \
    export.dtype=bf16 \
    inference.batch_size=16
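The defaulting behavior described above can be sketched as follows (a minimal sketch with a hypothetical helper name and plain-dict config; the actual change lives in the NeMo PTQ script and its Hydra/OmegaConf config):

```python
# Sketch of the default-precision logic (hypothetical function name).
# If the user does not pin a load precision explicitly, fall back to
# export.dtype so weights are never materialized in a wider type than
# the export step needs.

def resolve_load_precision(cfg: dict) -> str:
    """Return the precision the model should be loaded in."""
    model_cfg = cfg.get("model", {})
    export_cfg = cfg.get("export", {})
    # An explicit user choice wins; otherwise default to export.dtype.
    return model_cfg.get("precision") or export_cfg.get("dtype", "fp32")

cfg = {"model": {}, "export": {"dtype": "bf16"}}
print(resolve_load_precision(cfg))  # → bf16
```

With this defaulting, the command above never holds an fp32 copy of the weights: the checkpoint is loaded straight into bf16.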

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove the label and add it again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex etc)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.

Additional Information

Signed-off-by: Jan Lasek <janek.lasek@gmail.com>
Signed-off-by: Jan Lasek <jlasek@nvidia.com>
@oyilmaz-nvidia oyilmaz-nvidia merged commit 2f422dd into main Aug 27, 2024
130 of 131 checks passed
@oyilmaz-nvidia oyilmaz-nvidia deleted the jlasek/ptq_model_precision branch August 27, 2024 14:31
shanmugamr1992 pushed a commit that referenced this pull request Aug 27, 2024
* Load model in the target export precision by default

Signed-off-by: Jan Lasek <janek.lasek@gmail.com>

* Enable megatron_amp_O2=true to actually use half-precision

Signed-off-by: Jan Lasek <jlasek@nvidia.com>

---------

Signed-off-by: Jan Lasek <janek.lasek@gmail.com>
Signed-off-by: Jan Lasek <jlasek@nvidia.com>
hemildesai pushed a commit that referenced this pull request Aug 28, 2024
adityavavre pushed a commit to adityavavre/NeMo that referenced this pull request Sep 15, 2024