api_key_name isn't used in extra-openai-models.yaml if OPENAI_API_KEY env var is set #158
I believe this happens if the env var `OPENAI_API_KEY` is set. Looking at the code, `api_key_name` from extra-openai-models.yaml doesn't appear to be used in that case.
Relevant code: lines 97 to 105 in a4f55e9.

I don't like those variable names, and that function needs proper documentation. It's confusing to look at.
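As background, here is a minimal sketch of the shape of a key-resolution helper like this, using the (explicit_key, key_alias, env_var) parameters visible at the call site in the diff further down. The keys.json path and the exact precedence order are assumptions for illustration, not the library's actual implementation:

```python
import json
import os
from pathlib import Path

# Assumed location of llm's stored keys; the real path comes from
# the library's own user-directory logic, so treat this as a placeholder.
KEYS_PATH = Path.home() / ".config" / "io.datasette.llm" / "keys.json"


def get_key(explicit_key, key_alias, env_var=None):
    """Resolve an API key: explicit value first, then the alias in
    keys.json, then the environment variable (assumed order)."""
    stored = {}
    if KEYS_PATH.exists():
        stored = json.loads(KEYS_PATH.read_text())
    if explicit_key:
        # An explicit key wins; it may itself name an alias in keys.json
        return stored.get(explicit_key, explicit_key)
    if key_alias and key_alias in stored:
        # A key saved earlier with `llm keys set <alias>`
        return stored[key_alias]
    if env_var and os.environ.get(env_var):
        # Fall back to the environment variable
        return os.environ[env_var]
    return None
```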
Oh that's weird... the main place that keys come from doesn't even call that utility method! Lines 209 to 234 in a4f55e9.

The main place the key ends up being used: lines 211 to 213 in a4f55e9.
The actual logic is unchanged, but it is a lot easier to understand what it does now. Refs #158
I'll try to do a PR for this next week. Thanks for the pointers.
I wrote this:

```diff
diff --git a/llm/models.py b/llm/models.py
index 10006bf..4cfd1e9 100644
--- a/llm/models.py
+++ b/llm/models.py
@@ -217,15 +217,24 @@ class Model(ABC):
         pass
 
     def get_key(self):
+        from llm import get_key
+
         if self.needs_key is None:
+            # This model doesn't use an API key
             return None
+
         if self.key is not None:
+            # Someone already set model.key='...'
             return self.key
-        if self.key_env_var is not None:
-            key = os.environ.get(self.key_env_var)
-            if key:
-                return key
+        # Attempt to load a key using llm.get_key()
+        key = get_key(
+            explicit_key=None, key_alias=self.needs_key, env_var=self.key_env_var
+        )
+        if key:
+            return key
+
+        # Show a useful error message
         message = "No key found - add one using 'llm keys set {}'".format(
             self.needs_key
         )
```

But I need to come up with a robust test plan, both manual and automated, for ensuring it does what it needs to do.
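As a starting point for the automated side, here is a sketch of one such test, assuming pytest. The EchoModel subclass and the LLM_USER_PATH override are illustrative assumptions rather than the project's real fixtures:

```python
import json

import llm


class EchoModel(llm.Model):
    # Hypothetical minimal model, defined only to exercise get_key()
    model_id = "echo"
    needs_key = "opstower"
    key_env_var = "OPENAI_API_KEY"

    def execute(self, prompt, stream, response, conversation):
        return [""]


def test_stored_key_beats_env_var(monkeypatch, tmp_path):
    # Simulate `llm keys set opstower` by writing keys.json directly,
    # and set the env var too; the stored alias should win.
    (tmp_path / "keys.json").write_text(json.dumps({"opstower": "from-keys-json"}))
    # Assumed: LLM_USER_PATH points llm at a custom directory for keys.json
    monkeypatch.setenv("LLM_USER_PATH", str(tmp_path))
    monkeypatch.setenv("OPENAI_API_KEY", "from-env")
    assert EchoModel().get_key() == "from-keys-json"
```

The manual side of the plan would mirror this: set both a stored key and the environment variable, then confirm which one the server actually receives.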
I can test this more easily using the OpenRouter support I added in:
Managed to replicate this bug:

```
llm -m claude 'say hi and your name'
```

But with the environment variable set:

```
OPENAI_API_KEY=x llm -m claude 'say hi and your name'
```
This was a deliberate design decision: lines 114 to 118 in 341dbce.

I'm trying to remember why.
I'm going to change that decision. If you want to use the key from your environment variable instead, you can do this:

```
llm "my prompt" --key $OPENAI_API_KEY
```
Updated documentation: https://llm.datasette.io/en/latest/setup.html#keys-in-environment-variables
Thank you @simonw!
This is more of a note for myself if I end up searching the docs again. When using:

```yaml
- model_name: lbl/llama-3
  model_id: lbl/llama-3
  api_base: "https://api.cborg.lbl.gov"
  api_key_name: CBORG_API_KEY
```

it will pick up the key either from a key stored with `llm keys set CBORG_API_KEY` or from the CBORG_API_KEY environment variable. I prefer the former, so I usually give a more conventional key name:

```yaml
- model_name: lbl/llama-3
  model_id: lbl/llama-3
  api_base: "https://api.cborg.lbl.gov"
  api_key_name: cborg
```

and manage this using `llm keys set cborg`.
If I add an openai-compatible model to extra-openai-models.yaml and specify an `api_key_name`, the CLI does not appear to use the key value. Instead, it appears to use the `openai` key.

Example key file:

Example CLI command (inspecting server logs shows that the `openai` key value was passed):

```
llm -m opstower 'how many ec2 instances'
```

This works:

```
llm -m opstower 'how many ec2 instances' --key opstower
```

Originally mentioned by @klauern in #139 (comment)

This is on version 0.7.