Is there a way to get GenerateResponse full json when calling sagemaker endpoints #1029
Asked by nth-attempt in Q&A
-
The schema at https://huggingface.github.io/text-generation-inference/#/Text%20Generation%20Inference/generate shows that the response contains both details and generated_text. Currently, the response from SageMaker endpoints using TGI only returns generated_text. Is there a way to get the full JSON output in the response to a SageMaker endpoint request?
Answered by philschmid on Oct 19, 2023
Replies: 3 comments 1 reply
-
Bumping this!
-
You need to provide "decoder_input_details": true and "details": true in the parameters to get the details.
Answer selected by OlivierDehaene
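For context, here is a minimal sketch of building such a request body. The prompt and the commented-out endpoint name are placeholders, not values from this thread; the actual invocation (via boto3's `invoke_endpoint`) is left commented out since it requires a deployed endpoint.

```python
import json

# Request body for a TGI-backed SageMaker endpoint. Setting "details": true
# (and "decoder_input_details": true for per-input-token details) makes TGI
# return the full GenerateResponse JSON rather than just generated_text.
payload = {
    "inputs": "What is Deep Learning?",  # placeholder prompt
    "parameters": {
        "details": True,
        "decoder_input_details": True,
        "max_new_tokens": 64,
    },
}
body = json.dumps(payload)
print(body)

# Invocation sketch (requires boto3 and a deployed endpoint):
# import boto3
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(
#     EndpointName="my-tgi-endpoint",    # hypothetical endpoint name
#     ContentType="application/json",
#     Body=body,
# )
# result = json.loads(response["Body"].read())
# With "details": true, the response includes fields such as
# finish_reason, generated_tokens, and per-token data alongside
# generated_text.
```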