
adding TP llama example #2623

Merged: 35 commits into master, Oct 5, 2023
Conversation

@HamidShojanazeri (Collaborator) commented Sep 27, 2023

Description

Adding a PyTorch TP example for llama2. The idea here is to get the "meta/original" weights from the HF model hub, do a checkpoint conversion, and run distributed inference with TP. The "meta/original" llama2 model uses Fairscale TP; here we use PyTorch TP instead.
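For context, here is a minimal sketch of the PyTorch TP API this example builds on. It is illustrative only, not the PR's handler code: the toy FeedForward module and its dimensions are assumptions, and the import paths assume a recent PyTorch 2.x launched with torchrun.

```python
# Illustrative sketch: shard a toy feed-forward block across GPUs with
# PyTorch Tensor Parallel. Run with: torchrun --nproc-per-node=<ngpus> tp_sketch.py
import os

import torch
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    parallelize_module,
)


class FeedForward(nn.Module):  # hypothetical module, not the PR's Llama code
    def __init__(self, dim=4096, hidden=11008):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden, bias=False)
        self.w2 = nn.Linear(hidden, dim, bias=False)

    def forward(self, x):
        return self.w2(torch.nn.functional.silu(self.w1(x)))


# torchrun sets WORLD_SIZE and initializes one process per GPU.
world_size = int(os.environ["WORLD_SIZE"])
mesh = init_device_mesh("cuda", (world_size,))

model = FeedForward().cuda()
# Column-shard w1 and row-shard w2, so the block needs only a single
# all-reduce on w2's output (Megatron-style tensor parallelism).
model = parallelize_module(
    model, mesh, {"w1": ColwiseParallel(), "w2": RowwiseParallel()}
)
out = model(torch.randn(2, 4096, device="cuda"))
```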

Fixes #(issue)

Type of change

Please delete options that are not relevant.

  • [ ] Bug fix (non-breaking change which fixes an issue)
  • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • [x] New feature (non-breaking change which adds functionality)
  • [x] This change requires a documentation update

Feature/Issue validation/testing

Please describe the Unit or Integration tests that you ran to verify your changes and relevant result summary. Provide instructions so it can be reproduced.
Please also list any relevant details for your test configuration.

Checklist:

  • Did you have fun?
  • Have you added tests that prove your fix is effective or that this feature works?
  • Has code been commented, particularly in hard-to-understand areas?
  • Have you made corresponding changes to the documentation?

codecov bot commented Sep 27, 2023

Codecov Report

Merging #2623 (8ea479a) into master (e346a93) will not change coverage.
The diff coverage is n/a.

❗ Current head 8ea479a differs from pull request most recent head c507e11. Consider uploading reports for the commit c507e11 to get more accurate results

@@           Coverage Diff           @@
##           master    #2623   +/-   ##
=======================================
  Coverage   72.39%   72.39%           
=======================================
  Files          85       85           
  Lines        3956     3956           
  Branches       58       58           
=======================================
  Hits         2864     2864           
  Misses       1088     1088           
  Partials        4        4           


Review threads (resolved) on:
examples/large_models/tp_llama/REAME.md
examples/large_models/tp_llama/model-config.yaml
examples/large_models/tp_llama/llama-handler.py
@mreso (Collaborator) left a comment:
LGTM overall, please see comments. A unit test for the handler would also be great. You can mock out the model etc. but test the logic.
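A minimal pytest sketch along those lines. The handler class and method names (LlamaHandler, preprocess), the import path, and the request shape are assumptions, since the handler's final interface isn't shown here; the model and tokenizer are replaced with MagicMock so only the handler logic runs.

```python
# Hypothetical unit-test sketch; names are assumptions, not the PR's API.
from unittest.mock import MagicMock

import pytest

from llama_handler import LlamaHandler  # hypothetical import path


@pytest.fixture
def handler():
    h = LlamaHandler()
    h.model = MagicMock()      # mock out the TP model
    h.tokenizer = MagicMock()  # and the tokenizer
    h.initialized = True
    return h


def test_preprocess_decodes_bytes(handler):
    # The handler should decode raw bytes payloads to utf-8 text.
    requests = [{"data": b"Hey, are you conscious?"}]
    result = handler.preprocess(requests)
    assert "conscious" in str(result)
```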

On examples/large_models/tp_llama/REAME.md:
### How to use it?


1- Make sure you have access to llama weights on [HF model hub](https://huggingface.co/meta-llama), there is form you need to fill up and within few mins you will get access. ANy model name on the hub **without -hf** is Meta/FAIR weight.
Collaborator:

Typos:
  • llama -> Llama
  • there is form -> there is a form
  • fill up -> fill out
  • nit: Any model name -> Any Llama model name

More resolved threads on examples/large_models/tp_llama/REAME.md.

On examples/large_models/tp_llama/llama-handler.py:
"""
if isinstance(input_text, (bytes, bytearray)):
input_text = input_text.decode("utf-8")
logger.info("Received text: '%s'", input_text)
Collaborator:

Should this be debug?

@@ -0,0 +1 @@
Hey, are you conscious? Can you talk to me?
Collaborator:
❤️ that

@HamidShojanazeri changed the title from "[WIP] adding TP llama example" to "adding TP llama example" on Oct 2, 2023
@lxning (Collaborator) left a comment:

LGTM. I added some comments so that the model artifacts with the converted checkpoint can be uploaded to S3 and used directly by customers.

Comment on lines 69 to 71
converted_ckpt_dir=ctx.model_yaml_config["handler"]["converted_ckpt_dir"],
tokenizer_path= ctx.model_yaml_config["handler"]["tokenizer_path"],
)
Collaborator:

could you replace them with the following:
converted_ckpt_dir=f'{model_dir}/{ctx.model_yaml_config["handler"]["converted_ckpt_dir"]}',
tokenizer_path= f'{model_dir}/{ctx.model_yaml_config["handler"]["tokenizer_path"]}',
)

Author:

This and the one below error out, because we are not bundling these files into the mar file; the absolute path from the yaml gets appended to the model directory:

FileNotFoundError: [Errno 2] No such file or directory
'/tmp/models/38d7adf021da4f278f6711ff1584fac0/llama//data/home/hamidnazeri/fresh_ts/serve/examples/large_models/tp_llama/model_args.json'

logger.info("Instantiating Llama model")
model_load_start = time.perf_counter()
llama_model_and_tok= Llama.build(
model_args=ctx.model_yaml_config["handler"]["model_args_path"],
Collaborator:

could you change:

model_args=f'{model_dir}/{ctx.model_yaml_config["handler"]["model_args_path"]}'

Comment on lines 87 to 89
converted_ckpt_dir: "PATH/TO/converted_checkpoints"
tokenizer_path: "/PATH/TO/MODEL/CHECKPOINTS/tokenizer.model"
model_args_path: "PATH/TO/model_args.json"
Collaborator:

please remove "PATH/"

Author:

what should I replace it with?

Collaborator:

just remove "PATH/"
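To make those relative paths work at runtime, the handler can join them onto the directory TorchServe extracts the archive into. A minimal sketch follows; ctx.system_properties and ctx.model_yaml_config are TorchServe's standard context fields, while the class name and the surrounding structure are assumptions mirroring the snippets above.

```python
import os


class LlamaHandler:  # hypothetical handler class, abridged
    def initialize(self, ctx):
        # TorchServe extracts the (no-archive) mar contents into model_dir,
        # so model-config.yaml can hold paths relative to it,
        # e.g. converted_ckpt_dir: "converted_checkpoints".
        model_dir = ctx.system_properties.get("model_dir")
        handler_cfg = ctx.model_yaml_config["handler"]
        converted_ckpt_dir = os.path.join(model_dir, handler_cfg["converted_ckpt_dir"])
        tokenizer_path = os.path.join(model_dir, handler_cfg["tokenizer_path"])
        model_args_path = os.path.join(model_dir, handler_cfg["model_args_path"])
        # Note: os.path.join discards model_dir if the yaml value is absolute,
        # avoiding the concatenated path seen in the traceback above.
```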

Create the mar file using the following command:

```
torch-model-archiver --model-name llama --version 1.0 --handler llama-handler.py --config-file model-config.yaml --archive-format tgz --extra-files "llama2.py,llama2_tokenizer.py,generate.py,checkpoint_converter.py"
```
Collaborator:

could you change to:

torch-model-archiver --model-name llama --version 1.0 --handler llama-handler.py --config-file model-config.yaml --archive-format no-archive --extra-files "llama2.py,llama2_tokenizer.py,generate.py,checkpoint_converter.py"

mv TO llama/

@HamidShojanazeri (Author) commented Oct 3, 2023:

@lxning I changed the packaging step as suggested, just cp-ing files instead of mv.

Collaborator:

don't cp, you can just copy my comments exactly

@lxning enabled auto-merge October 4, 2023 20:29
@lxning added this pull request to the merge queue Oct 5, 2023
Merged via the queue into master with commit f10a071 Oct 5, 2023
10 of 12 checks passed