Add LM Studio Example in Topics (#2044)
* add lm studio example
* format
* newline
* Update lm-studio.ipynb
* Update lm-studio.ipynb
* update
* update
Showing 3 changed files with 170 additions and 0 deletions.
@@ -0,0 +1,5 @@
{
"position": 5,
"label": "Using Non-OpenAI Models",
"collapsible": true
}
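This _category_.json file follows the Docusaurus sidebar convention: "label" names the new "Using Non-OpenAI Models" category, "position" orders it among its siblings, and "collapsible" lets readers fold it in the sidebar. The second file is the notebook itself, lm-studio.ipynb: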
@@ -0,0 +1,164 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# LM Studio\n",
"\n",
"This notebook shows how to use AutoGen with multiple local models using\n",
"[LM Studio](https://lmstudio.ai/)'s multi-model serving feature, which is available since\n",
"version 0.2.17 of LM Studio.\n",
"\n",
"To use the multi-model serving feature in LM Studio, start a\n",
"\"Multi Model Session\" in the \"Playground\" tab, select the models\n",
"you want to load, and once they are loaded, click \"Start Server\".\n",
"The models will then be available at a locally hosted OpenAI-compatible endpoint."
]
},
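{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before going further, you can optionally sanity-check the endpoint. The\n",
"snippet below is a minimal sketch that assumes LM Studio is serving at the\n",
"default `http://localhost:1234/v1` address and that the `openai` Python\n",
"package (v1+) is installed; it simply lists the models the server exposes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from openai import OpenAI\n",
"\n",
"# LM Studio's local server speaks the OpenAI API; the key is a placeholder.\n",
"client = OpenAI(base_url=\"http://localhost:1234/v1\", api_key=\"lm-studio\")\n",
"for model in client.models.list():\n",
"    print(model.id)"
]
},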
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Two Agent Chats\n",
"\n",
"In this example, we create a comedy chat between two agents\n",
"using two different local models, Phi-2 and Gemma-2B-IT.\n",
"\n",
"We first create configurations for the models."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"gemma = {\n",
"    \"config_list\": [\n",
"        {\n",
"            # The model is referenced by its file path; the \":0\" suffix\n",
"            # distinguishes loaded instances of the same model.\n",
"            \"model\": \"lmstudio-ai/gemma-2b-it-GGUF/gemma-2b-it-q8_0.gguf:0\",\n",
"            \"base_url\": \"http://localhost:1234/v1\",\n",
"            # The local server does not check the key; any placeholder works.\n",
"            \"api_key\": \"lm-studio\",\n",
"        },\n",
"    ],\n",
"    \"cache_seed\": None,  # Disable caching.\n",
"}\n",
"\n",
"phi2 = {\n",
"    \"config_list\": [\n",
"        {\n",
"            \"model\": \"TheBloke/phi-2-GGUF/phi-2.Q4_K_S.gguf:0\",\n",
"            \"base_url\": \"http://localhost:1234/v1\",\n",
"            \"api_key\": \"lm-studio\",\n",
"        },\n",
"    ],\n",
"    \"cache_seed\": None,  # Disable caching.\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we create two agents, one for each model."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"from autogen import ConversableAgent\n",
"\n",
"jack = ConversableAgent(\n",
"    \"Jack (Phi-2)\",\n",
"    llm_config=phi2,\n",
"    system_message=\"Your name is Jack and you are a comedian in a two-person comedy show.\",\n",
")\n",
"emma = ConversableAgent(\n",
"    \"Emma (Gemma)\",\n",
"    llm_config=gemma,\n",
"    system_message=\"Your name is Emma and you are a comedian in a two-person comedy show.\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we start the conversation."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mJack (Phi-2)\u001b[0m (to Emma (Gemma)):\n",
"\n",
"Emma, tell me a joke.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mEmma (Gemma)\u001b[0m (to Jack (Phi-2)):\n",
"\n",
"Sure! Here's a joke for you:\n",
"\n",
"What do you call a comedian who's too emotional?\n",
"\n",
"An emotional wreck!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mJack (Phi-2)\u001b[0m (to Emma (Gemma)):\n",
"\n",
"LOL, that's a good one, Jack! You're hilarious. 😂👏👏\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mEmma (Gemma)\u001b[0m (to Jack (Phi-2)):\n",
"\n",
"Thank you! I'm just trying to make people laugh, you know? And to help them forget about the troubles of the world for a while.\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"chat_result = jack.initiate_chat(emma, message=\"Emma, tell me a joke.\", max_turns=2)"
]
},
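{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we can inspect the recorded conversation. The attribute names below\n",
"assume the `ChatResult` object returned by recent versions of `pyautogen`;\n",
"check your installed version if they differ."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Print each recorded message with its speaker, falling back to the role\n",
"# when no name was captured.\n",
"for message in chat_result.chat_history:\n",
"    print(f\"{message.get('name', message['role'])}: {message['content']}\")"
]
}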
],
"metadata": {
"kernelspec": {
"display_name": "autogen",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}