
[Documentation] Topic Category for AutoGen + Non-OpenAI Models #1994

Closed
ekzhu opened this issue Mar 13, 2024 · 10 comments
Labels
documentation — Improvements or additions to documentation
models — Pertains to using alternate, non-GPT, models (e.g., local models, llama, etc.)

Comments

@ekzhu
Collaborator

ekzhu commented Mar 13, 2024

Is your feature request related to a problem? Please describe.

Many LLMs are catching up with GPT-4. We want to have documentation on how to use them with AutoGen.

Describe the solution you'd like

A topic category (see the topics page) with a collection of pages about how to run some AutoGen conversation pattern or application scenario using non-OpenAI models. We can organize the pages by having one page for each model, or we can organize by having one page for each proxy stack.

Additional context

We aim to close the following issues by providing example solutions in the documentation.

  1. ollam isn't working with autogen #767
  2. [Feature Request]: Support for Mistral AI API (and Mixtral2) #991
  3. [Issue]: Unable to enable tool calling when using a custom model #1738
  4. [Issue]: CodaLlama:70b "I cannot provide code or assistance" #1476
  5. How to connect to LLM ,not gpt4 #763

You can find examples of notebooks in /website/docs/topics/.

To build documentation locally, see: https://microsoft.github.io/autogen/docs/Contribute#documentation

@ekzhu added the documentation and models labels and removed the enhancement label on Mar 13, 2024
@ekzhu
Collaborator Author

ekzhu commented Mar 13, 2024

@lestan @marklysze would you like to take a stab at this?

@marklysze
Collaborator

> @lestan @marklysze would you like to take a stab at this?

Yep, I'm happy to give it a go.

A category under topics, with documentation for each proxy stack under that, sounds good. I feel people think more about how they're going to run it than about the specific model (and models may be mentioned across the different pages).

If we go by proxy stack, do you see something like these under the new topic:

  • together.ai (cloud)
  • Anthropic (cloud)
  • Hugging Face (cloud)
  • LiteLLM+Ollama (local)
  • Ollama (local)
  • LM Studio (local)
  • vLLM (local)
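Since these stacks generally expose OpenAI-compatible endpoints, the documentation could show a single `config_list` covering both kinds. A minimal sketch, assuming hypothetical model names and placeholder API keys (the together.ai and vLLM entries below are illustrative, not values from this thread):

```python
# Sketch of an AutoGen-style config_list mixing a cloud and a local
# OpenAI-compatible endpoint. Model names and keys are assumptions.
config_list = [
    {   # together.ai (cloud) via its OpenAI-compatible API
        "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",
        "base_url": "https://api.together.xyz/v1",
        "api_key": "YOUR_TOGETHER_API_KEY",
    },
    {   # vLLM (local) serving an OpenAI-compatible endpoint
        "model": "meta-llama/Llama-2-7b-chat-hf",
        "base_url": "http://localhost:8000/v1",
        "api_key": "NULL",  # local servers generally ignore the key
    },
]

# Every entry follows the same OpenAI-client schema, so the agent code
# does not change between providers; only the config entries do.
for entry in config_list:
    assert {"model", "base_url", "api_key"} <= entry.keys()
```

One page per proxy stack could then reduce to "here is the `config_list` entry for this stack" plus any stack-specific setup steps.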

In terms of full examples (the Notebooks section), it would be good to add some complete notebooks for non-OpenAI models and the proxy stacks, which I can do in parallel with the documentation. I'm not sure whether a sub-section for non-OpenAI models would be useful.

I can start tackling together.ai, LiteLLM+Ollama, LM Studio, and vLLM. @lestan, if you're interested, you could do Ollama, since I noticed you already have code for that in #767; I can take it on if you're not able to at this stage.

I'm sure I'll have some questions but I'll think them through and wait for your reply.

@lestan

lestan commented Mar 13, 2024

@ekzhu @marklysze

Would be glad to!

Let me review what's already there and also look at Mark's suggestions.

@marklysze will be in touch

@descention

descention commented Mar 13, 2024

> I can start tackling together.ai, LiteLLM+Ollama, LM Studio, vLLM. @lestan, if you are interested then you could do Ollama as I noticed you already have code for that with #767 - I can tackle this if you're not able to at this stage.

Regarding Ollama, feel free to use this OllamaModelClient I made, or suggest that users use the OpenAI-compatible endpoint at http://localhost:11434/v1/.
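For the second option, a minimal sketch of wiring an `llm_config` to that endpoint (the model name "llama2" and the dummy key are assumptions; only the base URL comes from this thread):

```python
# Sketch: pointing AutoGen at Ollama's OpenAI-compatible endpoint,
# so no custom model client is needed.
ollama_config = {
    "model": "llama2",                         # must match a model pulled in Ollama (assumption)
    "base_url": "http://localhost:11434/v1/",  # Ollama's OpenAI-compatible endpoint
    "api_key": "ollama",                       # placeholder; the local server ignores it
}

llm_config = {"config_list": [ollama_config]}
```

This keeps the standard OpenAI client path, whereas the OllamaModelClient route uses AutoGen's custom-model-client mechanism instead.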

@marklysze
Collaborator

> I can start tackling together.ai, LiteLLM+Ollama, LM Studio, vLLM. @lestan, if you are interested then you could do Ollama as I noticed you already have code for that with #767 - I can tackle this if you're not able to at this stage.
>
> Per Ollama, feel free to use this OllamaModelClient I made or suggest the user use the OpenAI endpoint like http://localhost:11434/v1/.

That's awesome work, thanks @descention! I can't wait to try it out as well...

@ekzhu
Collaborator Author

ekzhu commented Mar 17, 2024

> A category under topics and then documentation for each proxy stack under that sounds good. I feel people think more of how they're going to run it rather than the specific model (and models may be mentioned throughout the different pages).

Yes, this is good. I can start with an LM Studio example for now and lay out the parent folder structure.

@marklysze
Collaborator

@ekzhu, can you take a look at the repo and branch I created? I've put in a folder structure:
https://github.com/marklysze/autogenlocalllm/tree/NonOpenAILLMDocs

@ekzhu
Collaborator Author

ekzhu commented Mar 17, 2024

Yes. Let's use that structure to get started.

@ekzhu
Collaborator Author

ekzhu commented Mar 20, 2024

Closing this issue thanks to the hard work of @marklysze.

@ekzhu ekzhu closed this as completed Mar 20, 2024
@marklysze
Collaborator

> @ekzhu @marklysze
>
> Would be glad to!
>
> Let me review what's already there and also look at Mark's suggestions.
>
> @marklysze will be in touch

Hey @lestan, just a note that the first round of non-OpenAI documentation has been merged. If you are still up for doing the Ollama example, please see the LiteLLM+Ollama, LM Studio and vLLM pages to see what has been done already.

If you're short on time and need a hand to do it, I can help. Thanks!
