Add LM Studio Example in Topics (#2044)
* add lm studio example

* format

* newline

* Update lm-studio.ipynb

* Update lm-studio.ipynb

* update

* update
ekzhu authored Mar 19, 2024
1 parent 9d33dc6 commit 6745731
Showing 3 changed files with 170 additions and 0 deletions.
1 change: 1 addition & 0 deletions website/.gitignore
@@ -18,6 +18,7 @@ docs/topics/llm_configuration.mdx
docs/topics/code-execution/*.mdx
docs/topics/task_decomposition.mdx
docs/topics/prompting-and-reasoning/*.mdx
docs/topics/non-openai-models/*.mdx

# Misc
.DS_Store
5 changes: 5 additions & 0 deletions website/docs/topics/non-openai-models/_category_.json
@@ -0,0 +1,5 @@
{
"position": 5,
"label": "Using Non-OpenAI Models",
"collapsible": true
}
164 changes: 164 additions & 0 deletions website/docs/topics/non-openai-models/lm-studio.ipynb
@@ -0,0 +1,164 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# LM Studio\n",
"\n",
"This notebook shows how to use AutoGen with multiple local models using \n",
"[LM Studio](https://lmstudio.ai/)'s multi-model serving feature, which is available since\n",
"version 0.2.17 of LM Studio.\n",
"\n",
"To use the multi-model serving feature in LM Studio, you can start a\n",
"\"Multi Model Session\" in the \"Playground\" tab. Then you select relevant\n",
"models to load. Once the models are loaded, you can click \"Start Server\"\n",
"to start the multi-model serving.\n",
"The models will be available at a locally hosted OpenAI-compatible endpoint."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Two Agent Chats\n",
"\n",
"In this example, we create a comedy chat between two agents\n",
"using two different local models, Phi-2 and Gemma it.\n",
"\n",
"We first create configurations for the models."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"gemma = {\n",
" \"config_list\": [\n",
" {\n",
" \"model\": \"lmstudio-ai/gemma-2b-it-GGUF/gemma-2b-it-q8_0.gguf:0\",\n",
" \"base_url\": \"http://localhost:1234/v1\",\n",
" \"api_key\": \"lm-studio\",\n",
" },\n",
" ],\n",
" \"cache_seed\": None, # Disable caching.\n",
"}\n",
"\n",
"phi2 = {\n",
" \"config_list\": [\n",
" {\n",
" \"model\": \"TheBloke/phi-2-GGUF/phi-2.Q4_K_S.gguf:0\",\n",
" \"base_url\": \"http://localhost:1234/v1\",\n",
" \"api_key\": \"lm-studio\",\n",
" },\n",
" ],\n",
" \"cache_seed\": None, # Disable caching.\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we create two agents, one for each model."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"from autogen import ConversableAgent\n",
"\n",
"jack = ConversableAgent(\n",
" \"Jack (Phi-2)\",\n",
" llm_config=phi2,\n",
" system_message=\"Your name is Jack and you are a comedian in a two-person comedy show.\",\n",
")\n",
"emma = ConversableAgent(\n",
" \"Emma (Gemma)\",\n",
" llm_config=gemma,\n",
" system_message=\"Your name is Emma and you are a comedian in two-person comedy show.\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we start the conversation."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mJack (Phi-2)\u001b[0m (to Emma (Gemma)):\n",
"\n",
"Emma, tell me a joke.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mEmma (Gemma)\u001b[0m (to Jack (Phi-2)):\n",
"\n",
"Sure! Here's a joke for you:\n",
"\n",
"What do you call a comedian who's too emotional?\n",
"\n",
"An emotional wreck!\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mJack (Phi-2)\u001b[0m (to Emma (Gemma)):\n",
"\n",
"LOL, that's a good one, Jack! You're hilarious. 😂👏👏\n",
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mEmma (Gemma)\u001b[0m (to Jack (Phi-2)):\n",
"\n",
"Thank you! I'm just trying to make people laugh, you know? And to help them forget about the troubles of the world for a while.\n",
"\n",
"--------------------------------------------------------------------------------\n"
]
}
],
"source": [
"chat_result = jack.initiate_chat(emma, message=\"Emma, tell me a joke.\", max_turns=2)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "autogen",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
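
Both agents above talk to the same locally hosted, OpenAI-compatible endpoint (http://localhost:1234/v1). A quick way to confirm that LM Studio is actually serving both models before running the notebook is to list them with the standard openai Python client. This is a minimal sketch, not part of the commit, assuming the openai v1 package is installed and the server from the notebook is running:

from openai import OpenAI

# Point the client at LM Studio's local OpenAI-compatible server.
# The api_key is a placeholder; LM Studio's local server does not validate it.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# List the models currently loaded in the Multi Model Session.
for model in client.models.list():
    print(model.id)

Each printed id should match a "model" value from the configs above, e.g. TheBloke/phi-2-GGUF/phi-2.Q4_K_S.gguf:0.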

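On the AutoGen side, initiate_chat returns a ChatResult object that records the full exchange. A minimal sketch for replaying the conversation afterwards, assuming the ChatResult interface of AutoGen v0.2, where chat_history is a list of message dicts with "content" and "role" keys:

# Replay the conversation recorded in the ChatResult returned above.
for message in chat_result.chat_history:
    print(f"{message['role']}: {message['content']}")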