
Add support for transformers-neuronx continuous batching #488

Merged: 13 commits merged into main on Feb 19, 2024

Conversation

@dacorvo (Collaborator) commented Feb 15, 2024

What does this PR do?

This adds support for transformers-neuronx continuous batching.

This also automatically activates the continuous batching feature for any model that supports it as soon as the batch size is > 1.

This works around the fact that static batching is broken in transformers-neuronx for these models: aws-neuron/transformers-neuronx#79.

But this is also a game changer for NeuronX TGI, as it unlocks:

  • higher batch sizes for Llama and Mistral models, thanks to transformers-neuronx "continuous batching" graphs that share the KV cache between nodes (see the sketch below),
  • unlimited concurrent requests, thanks to the TGI 1.4.1 max-batch-size parameter.
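
For illustration, here is a minimal sketch of how continuous batching is enabled directly in transformers-neuronx (the checkpoint path, batch size and tp_degree are placeholder values, and the import paths are assumptions based on the transformers-neuronx API of that period; optimum-neuron wraps this behind its own modeling API):

from transformers_neuronx.config import ContinuousBatchingConfig, NeuronConfig
from transformers_neuronx.llama.model import LlamaForSampling

# Compile "continuous batching" graphs that share the KV cache between sequences.
neuron_config = NeuronConfig(
    continuous_batching=ContinuousBatchingConfig(batch_size_for_shared_caches=4)
)
model = LlamaForSampling.from_pretrained(
    "path/to/llama",  # placeholder checkpoint path
    batch_size=4,
    tp_degree=2,
    amp="f16",
    neuron_config=neuron_config,
)
model.to_neuron()  # compile the graphs and load them on the Neuron cores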

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@dacorvo dacorvo force-pushed the continuous_batching branch 6 times, most recently from 7c11422 to 3862e28 Compare February 16, 2024 15:02
@dacorvo dacorvo marked this pull request as ready for review February 16, 2024 16:46
@@ -379,7 +379,7 @@ class NeuronDecoderConfig(NeuronConfig):
be passed to export the model,
- NEURONX_CLASS (`str`) -- the name of the transformers-neuronx class to instantiate for the model.
It is a full class name defined relatively to the transformers-neuronx module, e.g. `gpt2.model.GPT2ForSampling`
[`~optimum.utils.DummyInputGenerator`] specifying how to create dummy inputs.
- CONTINUOUS_BATCHING (`bool`) -- Whether the model supports continuous batching or not.
Member
Suggested change
- CONTINUOUS_BATCHING (`bool`) -- Whether the model supports continuous batching or not.
- CONTINUOUS_BATCHING (`bool`, defaults to `False`) -- Whether the model supports continuous batching or not.
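
For context, a model opts into these features by subclassing NeuronDecoderConfig and setting the class attributes described in the docstring. The snippet below is a hypothetical registration, not code from the PR (the class name and attribute values are illustrative):

class LlamaNeuronConfig(NeuronDecoderConfig):
    # Class name relative to the transformers-neuronx module, as documented above.
    NEURONX_CLASS = "llama.model.LlamaForSampling"
    # Declare that this architecture supports continuous batching.
    CONTINUOUS_BATCHING = True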

if seq_ids is None:
seq_ids = torch.arange(input_ids.shape[0])
else:
assert seq_ids.shape[0] == input_ids.shape[0]
Member

nit: maybe raise an explicit error with a message saying that the shapes should match in this case

Collaborator Author

I am using an assert because these methods are always called internally, so it is rather meant to catch internal programming errors.
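
For reference, the explicit-error variant suggested by the reviewer would look roughly like this (a sketch, not the merged code):

if seq_ids is None:
    # Default to one sequence id per row of the batch.
    seq_ids = torch.arange(input_ids.shape[0])
elif seq_ids.shape[0] != input_ids.shape[0]:
    raise ValueError(
        f"seq_ids has batch dimension {seq_ids.shape[0]}, "
        f"but input_ids has batch dimension {input_ids.shape[0]}."
    )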

input_ids: torch.Tensor,
attention_mask: torch.Tensor,
seq_ids: Optional[List[int]] = None,
**kwargs,
Member

Is this necessary? I tend to avoid it whenever possible.

Collaborator Author

It is a leftover from the GenerationMixin, but I have moved away from this pattern and I 100% agree with you on this, at least for us, since we are supposed to be a bit more certain about what we accept or not. I will remove it.
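
With the catch-all **kwargs removed, the signature becomes fully explicit, so unsupported arguments fail loudly instead of being silently swallowed. A sketch of the resulting shape (the class and method names are stubs for illustration, not the actual optimum-neuron code):

from typing import List, Optional

import torch

class NeuronDecoderModel:  # stub standing in for the actual model class
    def forward(
        self,
        input_ids: torch.Tensor,
        attention_mask: torch.Tensor,
        seq_ids: Optional[List[int]] = None,  # no **kwargs: unknown arguments now raise a TypeError
    ):
        ...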

auto_cast_type = auto_cast_type.replace("p", "")
tnx_kwargs = {
"batch_size": batch_size,
"tp_degree": num_cores,
Member

The TP degree is always exactly the number of neuron cores used?

Collaborator Author

Yes. I use the num_cores name in optimum-neuron because I use it for two things: to set the TP degree (here) and also to restrict the number of cores I reserve at initialization (otherwise the TNX runtime takes them all).
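
In code, that dual use of num_cores looks roughly like this (a sketch; NEURON_RT_NUM_CORES is the standard Neuron runtime variable for core reservation, assumed here to be how the restriction is applied):

import os

num_cores = 2   # placeholder: the value requested by the user
batch_size = 4  # placeholder

# 1. Reserve only num_cores Neuron cores, otherwise the TNX runtime takes them all.
os.environ["NEURON_RT_NUM_CORES"] = str(num_cores)

# 2. Reuse the same value as the tensor-parallel degree for the compiled graphs.
tnx_kwargs = {
    "batch_size": batch_size,
    "tp_degree": num_cores,
}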

Collaborator

@JingyaHuang JingyaHuang left a comment

LGTM, thanks David!

Resolved review threads: optimum/neuron/modeling.py, text-generation-inference/Dockerfile
@dacorvo dacorvo merged commit 8f3e96a into main Feb 19, 2024
13 checks passed
@dacorvo dacorvo deleted the continuous_batching branch February 19, 2024 13:13