Following is a small piece of code to extract embeddings from a given layer of an LLM:
```python
import torch


def process_row(prompt: str, model, tokenizer, layers_to_use: list, remove_period: bool):
    """Processes a row of data and returns the embeddings."""
    if remove_period:
        prompt = prompt.rstrip(". ")
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(
            inputs.input_ids,
            output_hidden_states=True,
            return_dict_in_generate=True,
            max_new_tokens=1,
            min_new_tokens=1,
        )
    embeddings = {}
    for layer in layers_to_use:
        # hidden_states[0]: states for the prompt step; [layer]: chosen layer;
        # [0][-1]: last token of the first (only) sequence in the batch
        last_hidden_state = outputs.hidden_states[0][layer][0][-1]
        embeddings[layer] = [last_hidden_state.numpy().tolist()]
    return embeddings
```
It's a pretty standard approach, but it's slow. Is there any way to use vLLM to make it faster without calling the generate function every time? I've tried batching, but that's slow too. Any help is appreciated!
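One observation, independent of vLLM: since only the prompt's hidden states are used, a plain forward pass avoids generate's sampling machinery entirely and batches naturally. The sketch below illustrates this with a tiny randomly initialized Llama config so it runs without downloads; the config sizes and the `extract_embeddings` helper are made up for illustration, not part of the original code.

```python
import torch
from transformers import LlamaConfig, LlamaForCausalLM

# Tiny randomly initialized model so the sketch runs standalone;
# in practice, substitute your real model and tokenizer.
config = LlamaConfig(hidden_size=64, intermediate_size=128,
                     num_hidden_layers=4, num_attention_heads=4,
                     vocab_size=100)
model = LlamaForCausalLM(config)
model.eval()


def extract_embeddings(input_ids: torch.Tensor, layers_to_use: list):
    """Return the last-token hidden state of each requested layer."""
    with torch.no_grad():
        outputs = model(input_ids, output_hidden_states=True)
    # outputs.hidden_states is a tuple with one (batch, seq_len, hidden)
    # tensor per layer (embeddings layer + each decoder layer).
    return {layer: outputs.hidden_states[layer][:, -1, :] for layer in layers_to_use}


batch = torch.randint(0, 100, (2, 8))  # two "prompts" of 8 tokens each
embs = extract_embeddings(batch, [-1, 2])
```

Because the whole batch goes through a single forward pass, this amortizes the per-call overhead that makes repeated `generate` calls slow.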
One way to get the last hidden state values using vLLM is as follows:
```python
from vllm import LLM, SamplingParams

llm = LLM(model=path_to_llama2)

# Enable top-k sampling to reflect the accurate memory usage.
vocab_size = llm.llm_engine.workers[0].model.config.vocab_size
sampling_params = SamplingParams(top_p=0.99, top_k=vocab_size - 1)

max_num_batched_tokens = llm.llm_engine.workers[0].scheduler_config.max_num_batched_tokens
max_num_seqs = llm.llm_engine.workers[0].scheduler_config.max_num_seqs
```
But this doesn't give me all of the hidden state embeddings (i.e., for every layer). Is there another way to get those values in a faster manner?
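One generic way to capture every layer's output, without depending on vLLM internals (whose attribute paths change between versions), is to register forward hooks on the decoder layers. The sketch below demonstrates the hook mechanism on a stand-in stack of plain `torch.nn` modules; with vLLM you would instead hook the decoder layers of the worker's underlying model, whose exact attribute path is an assumption to verify against your installed version.

```python
import torch
import torch.nn as nn

# Stand-in for a decoder stack, just to demonstrate the hook mechanism.
layers = nn.ModuleList([nn.Linear(16, 16) for _ in range(3)])

captured = {}


def make_hook(idx):
    # Each hook records the layer's output tensor under its index.
    def hook(module, inputs, output):
        captured[idx] = output.detach()
    return hook


handles = [layer.register_forward_hook(make_hook(i))
           for i, layer in enumerate(layers)]

x = torch.randn(2, 16)
for layer in layers:
    x = layer(x)

# Remove hooks once done so later forward passes aren't affected.
for h in handles:
    h.remove()
```

After the forward pass, `captured` holds one tensor per layer, which gives all hidden states from a single run rather than one `generate` call per layer.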