This adds 5 PaLM-2 models and a `VertexAI` client:
- `text-bison@001`
- `text-bison-32k`
- `text-unicorn@001`
- `code-bison@001`
- `code-bison-32k`
Other models are available here but I only added those necessary and relevant to HELM (although Imagen could be added for HEIM).
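
For reference, a minimal sketch of how these text models are reached through the Vertex AI Python SDK (`google-cloud-aiplatform`); the project ID, prompt, and generation parameters are placeholders, and this is not the HELM client itself:

```python
# Minimal sketch: call one of the PaLM-2 text models listed above via the
# Vertex AI Python SDK. Project/location and parameter values are placeholders.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison@001")
response = model.predict(
    "Summarize the HELM benchmark in one sentence.",
    temperature=0.2,
    max_output_tokens=128,
    top_p=0.95,
    top_k=40,
)
print(response.text)
```

(The `code-bison` models are served through `CodeGenerationModel` in the same SDK.)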
There are several issues related to the lack of documentation, for which issues have been opened:
- The tokenizer is set to `google/t5-11b`, which is probably not correct.
- Some fields of `Request` are not supported because the Python SDK is incomplete (see the sketch after this list).
- Some fields of `RequestResponse` are not supported (such as logprobs) because the Python SDK is incomplete.
- I have not been able to verify the context length due to the very small quota I have, so I used what is provided in the API reference.
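
As a hedged illustration of the two SDK-related points above: only a subset of request parameters has a counterpart in `TextGenerationModel.predict()`, and the response object exposes no per-token log probabilities. The HELM-side field names used here (`max_tokens`, `top_k_per_token`, etc.) are assumptions for the sketch, not the exact mapping in this PR:

```python
from typing import Any, Dict


def to_vertex_kwargs(request) -> Dict[str, Any]:
    """Forward only the parameters that TextGenerationModel.predict() accepts.

    Fields like echo_prompt, presence/frequency penalties, and per-token
    logprobs in the response have no counterpart in the current SDK.
    """
    return {
        "temperature": request.temperature,
        "max_output_tokens": request.max_tokens,
        "top_p": request.top_p,
        "top_k": request.top_k_per_token,
    }
```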
We should add a `max_output_tokens` to `WindowService`, as some models have a very low value for this, which would probably break some scenarios (`code-gecko@001` had a limit of 64 output tokens). (See #2086)
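
A minimal sketch of what that could look like, assuming a `max_output_tokens` property with a permissive default that limited models override; the class names and the context-length value are illustrative, not the actual HELM code:

```python
from abc import ABC, abstractmethod


class WindowService(ABC):
    """Illustrative stand-in for HELM's window service abstraction."""

    @property
    @abstractmethod
    def max_sequence_length(self) -> int:
        """Maximum number of tokens in the model's context window."""

    @property
    def max_output_tokens(self) -> int:
        """Maximum tokens the model may generate per request.

        Defaults to the full context window; models with a stricter
        server-side cap override this.
        """
        return self.max_sequence_length


class CodeGeckoWindowService(WindowService):
    @property
    def max_sequence_length(self) -> int:
        # Placeholder value; the real context length should come from the API reference.
        return 2048

    @property
    def max_output_tokens(self) -> int:
        # code-gecko@001 was observed to cap generation at 64 output tokens.
        return 64
```

An adapter could then clamp the requested `max_tokens` to `window_service.max_output_tokens` before sending the request, instead of failing at the API.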