Third-Party APIs #44

Closed
ahmedosman2001 opened this issue Sep 29, 2023 · 5 comments

@ahmedosman2001

I have an LLM API hosted on a remote server, and this is how it functions: I send the query "Hello" in a fetch request to the API, and in response I receive the message "Hello! How can I assist you today?". Is it possible to use AutoGen with APIs of this nature, where you submit a question and receive a response? This is an example of a response from the API:

[
  {"role": "system", "content": "Knowledge cutoff: 2021-09-01 Current date: 2023-09-29"},
  {"role": "user", "content": "Hello", "token": "7284194572548942422"},
  {"role": "assistant", "content": "Hello! How can I assist you today?\n", "token": "7284194572548942422"}
]

@AaronWard
Collaborator

Check out this blog post in the AutoGen documentation: https://microsoft.github.io/autogen/blog/2023/07/14/Local-LLMs.
You may also want to take a look at FastChat: https://github.com/lm-sys/FastChat

@ahmedosman2001
Author

Thanks for getting back to me. The problem with the FastChat approach is that you have to install the model on your own machine and point fastchat.serve.model_worker at it when you start it, using a command like python -m fastchat.serve.model_worker --model-path chatglm2-6b. The issue is that I have the LLM installed on a cloud machine that I can't access directly; the only way I can use the model is through an API. For example, this is how I access the model in JavaScript:

const response = await fetch("https://remote_IP:port/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ user_input: userInput }),
});
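
For reference, a minimal Python sketch of the same call, assuming the request and response shapes shown in this thread: the host/port placeholder and the user_input field come from the JavaScript snippet, and pulling the assistant's reply out of the returned list is an assumption based on the sample JSON above.

# Sketch: call the remote LLM endpoint from Python with the same payload
# the fetch example sends, then parse the reply from the returned message list.
import requests

def ask_remote_llm(user_input: str) -> str:
    resp = requests.post(
        "https://remote_IP:port/chat",  # placeholder endpoint from the comment
        json={"user_input": user_input},  # requests sets the JSON Content-Type header
        timeout=60,
    )
    resp.raise_for_status()
    messages = resp.json()  # list of {"role": ..., "content": ...} dicts per the sample
    # Return the content of the last assistant message in the list.
    return next(m["content"] for m in reversed(messages) if m["role"] == "assistant")

print(ask_remote_llm("Hello"))  # -> "Hello! How can I assist you today?"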

@victordibia
Collaborator

victordibia commented Oct 6, 2023

Quick question @ahmedosman2001: is your model endpoint an OpenAI-compatible endpoint?
If so, it should be possible to simply add it to config_list_json, for example.

See the related discussion here, where a FastChat chatglm model is configured to work with AutoGen:

from autogen import AssistantAgent, UserProxyAgent, oai
config_list = [
    {
        "model": "chatglm2-6b",
        "api_base": "http://localhost:8000/v1",
        "api_type": "open_ai",
        "api_key": "NULL", # just a placeholder
    }
]

response = oai.Completion.create(config_list=config_list, prompt="Hi")
print(response) # works fine

assistant = AssistantAgent("assistant")
user_proxy = UserProxyAgent("user_proxy")
user_proxy.initiate_chat(assistant, message="Plot a chart of META and TESLA stock price change YTD.", config_list=config_list)
# fails with the error: openai.error.AuthenticationError: No API key provided.
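
The failure in the last line is consistent with the agents never receiving the config list. A minimal sketch of the usual AutoGen fix, assuming the same config_list defined above, is to pass it through each agent's llm_config:

# Sketch: hand the config_list to the agents via llm_config so they call the
# local endpoint instead of looking for an OpenAI API key.
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent("user_proxy")
user_proxy.initiate_chat(
    assistant,
    message="Plot a chart of META and TESLA stock price change YTD.",
)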

@ahmedosman2001
Author

No, it is not OpenAI-compatible, but PR #95 solves my issue. Thank you all.

@DinghaoXi

> No, it is not OpenAI-compatible, but PR #95 solves my issue. Thank you all.

Hello, could you please share an easy demo of your solution? I am facing the same problem. Thanks!

randombet pushed a commit to randombet/autogen that referenced this issue Oct 29, 2024
* Updated to expand acceptable OpenAI API key format

* Update to ChromaDB version required for RetrieveChatTest