diff --git a/examples/conversation_basic/README.md b/examples/conversation_basic/README.md index eb89720a0..7cf6f5cf0 100644 --- a/examples/conversation_basic/README.md +++ b/examples/conversation_basic/README.md @@ -18,4 +18,4 @@ These models are tested in this example. For other models, some modifications ma ## Prerequisites To set up model serving with open-source LLMs, follow the guidance in -[scripts/REAMDE.md](../../scripts/README.md). +[scripts/README.md](https://github.com/modelscope/agentscope/blob/main/scripts/README.md). diff --git a/notebook/conversation.ipynb b/notebook/conversation.ipynb deleted file mode 100644 index ea4836f03..000000000 --- a/notebook/conversation.ipynb +++ /dev/null @@ -1,199 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "b75c96fc-e399-4f81-bcf8-8a5d15bcb79b", - "metadata": {}, - "source": [ - "# Conversation with Agent" - ] - }, - { - "cell_type": "markdown", - "id": "25d30c19de76e93f", - "metadata": { - "collapsed": false - }, - "source": [ - "In this notebook, we will show a demo of how to program a multi-agent conversation in AgentScope. The complete code can be found in `examples/conversation/conversation.py`, which sets up a user agent and an assistant agent to have a conversation. When the user inputs \"exit\", the conversation ends. You can modify the `sys_prompt` to change the role of the assistant agent." - ] - }, - { - "cell_type": "markdown", - "id": "5f8b1a15-cf1a-46ff-8a62-88d7476c8608", - "metadata": {}, - "source": [ - "To install AgentScope, please follow the steps in [README.md](../README.md#installation)." - ] - }, - { - "cell_type": "markdown", - "id": "4ccebee3-9a27-47d9-9133-40126b888593", - "metadata": {}, - "source": [ - "First, set the configs of the models you use."
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "c60227f7-7380-4f72-b653-bc7394cfaace", - "metadata": {}, - "outputs": [], - "source": [ - "import agentscope\n", - "from agentscope.agents import DialogAgent\n", - "from agentscope.agents import UserAgent\n", - "from agentscope.pipelines.functional import sequentialpipeline\n", - "\n", - "agentscope.init(\n", - " model_configs=[\n", - " {\n", - " \"model_type\": \"openai_chat\",\n", - " \"config_name\": \"gpt-3.5-turbo\",\n", - " \"model_name\": \"gpt-3.5-turbo\",\n", - " \"api_key\": \"xxx\", # Load from env if not provided\n", - " \"organization\": \"xxx\", # Load from env if not provided\n", - " \"generate_args\": {\n", - " \"temperature\": 0.5,\n", - " },\n", - " },\n", - " {\n", - " \"model_type\": \"post_api_chat\",\n", - " \"config_name\": \"my_post_api\",\n", - " \"api_url\": \"https://xxx\",\n", - " \"headers\": {},\n", - " },\n", - " ],\n", - ")" - ] - }, - { - "cell_type": "markdown", - "id": "503f7e7a-7a93-468d-b88b-37540f607083", - "metadata": {}, - "source": [ - "Then, initialize two agents, one is used as an assistant agent, and the other one is a user agent." - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "id": "848270c3-b73b-482d-a426-49e0ac67dc39", - "metadata": {}, - "outputs": [], - "source": [ - "dialog_agent = DialogAgent(\n", - " name=\"Assistant\",\n", - " sys_prompt=\"You're a helpful assistant.\",\n", - " model_config_name=\"gpt-3.5-turbo\", # replace by your model config name\n", - ")\n", - "user_agent = UserAgent()" - ] - }, - { - "cell_type": "markdown", - "id": "128356259909067b", - "metadata": { - "collapsed": false - }, - "source": [ - "Start the conversation between user and assistant. Input \"exit\" to quit." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "e46f7a9d-355b-46cd-86b5-201c91eabffb", - "metadata": {}, - "outputs": [], - "source": [ - "x = None\n", - "while x is None or x.content != \"exit\":\n", - "    x = dialog_agent(x)\n", - "    x = user_agent(x)" - ] - }, - { - "cell_type": "markdown", - "id": "adc601a8-144f-4e42-bc86-f0e14adb39b5", - "metadata": {}, - "source": [ - "To simplify the code, you can use the pipelines in [`agentscope.pipelines`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/pipelines/pipeline.py) and [`agentscope.pipelines.functional`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/pipelines/functional.py)." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "6057d6c5-a332-4949-84a7-6fbeda021561", - "metadata": {}, - "outputs": [], - "source": [ - "x = None\n", - "while x is None or x.content != \"exit\":\n", - "    x = sequentialpipeline([dialog_agent, user_agent], x)" - ] - }, - { - "cell_type": "markdown", - "id": "aed5ff6a-5167-4450-be57-b42ee8ae5e92", - "metadata": {}, - "source": [ - "We show the following dialog as an example." - ] - }, - { - "cell_type": "markdown", - "id": "9cb11109-4229-4faa-a859-9e806d9c2a6f", - "metadata": {}, - "source": [ - "Assistant: Thank you! I'm here to assist you with any questions or tasks you have. How can I help you today?\n", - "\n", - "User: Please help me arrange a day trip to Hangzhou.\n", - "\n", - "Assistant: Certainly! I can help you with that. When are you planning to visit Hangzhou?\n", - "\n", - "User: Tomorrow\n", - "\n", - "Assistant: Great! Tomorrow sounds like a good day for a day trip to Hangzhou. Here's a suggested itinerary for your day trip:\n", - "\n", - "1. Depart from your current location: It's important to plan your departure time to arrive in Hangzhou early in the morning. You can consider taking a train or bus, or hiring a private driver for the trip.\n", - "\n", - "2. 
West Lake: Start your day by visiting West Lake, which is the heart of Hangzhou. Take a leisurely stroll along the lake and enjoy the beautiful scenery. You can also rent a boat or take a cruise to explore the lake.\n", - "\n", - "3. Lingyin Temple: After visiting West Lake, head to Lingyin Temple, one of the most famous Buddhist temples in China. Marvel at the intricate architecture and explore the peaceful surroundings.\n", - "\n", - "4. Hefang Street: Next, make your way to Hefang Street, a vibrant pedestrian street filled with shops, street vendors, and traditional food stalls. Take some time to shop for souvenirs or try some local snacks.\n", - "\n", - "5. Longjing Tea Plantations: As Hangzhou is renowned for its tea, a visit to the Longjing Tea Plantations is a must. Take a short trip to the outskirts of Hangzhou and immerse yourself in the serene tea fields. You can also participate in a tea ceremony and learn about the tea-making process.\n", - "\n", - "6. Return to your starting point: After a full day of exploring Hangzhou, it's time to head back to your starting point. Plan your return journey accordingly, whether it's by train, bus, or with the help of a hired driver.\n", - "\n", - "Remember to check the opening hours of the attractions and plan your time accordingly. Also, be prepared for the weather and make sure to bring comfortable shoes, a hat, and sunscreen. 
Enjoy your day trip to Hangzhou!\n", - "\n", - "User: exit" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.9.18" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} diff --git a/notebook/distributed_debate.ipynb b/notebook/distributed_debate.ipynb deleted file mode 100644 index bec6c22f9..000000000 --- a/notebook/distributed_debate.ipynb +++ /dev/null @@ -1,195 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "478efc38-ac99-40a7-9e13-b72840f14e19", - "metadata": {}, - "source": [ - "# Distributed debate" - ] - }, - { - "cell_type": "markdown", - "id": "2f0c5593-c810-4c93-90de-b2c389b878ab", - "metadata": { - "collapsed": true - }, - "source": [ - "This example simulates a debate competition with three participant agents, including the affirmative side (Pro), the negative side (Con), and the adjudicator (Judge). \n", - "\n", - "Pro believes that AGI can be achieved using the GPT model framework, while Con contests it. Judge listens to both sides' arguments and provides an analytical judgment on which side presented a more compelling and reasonable case.\n", - "\n", - "A fully distributed version can be found in `examples/distributed/distributed_debate.py`.\n", - "Here we provide a standalone multi-process version." - ] - }, - { - "cell_type": "markdown", - "id": "321e5966-752c-4a28-b63e-3239008d6b3a", - "metadata": {}, - "source": [ - "To install AgentScope, please follow the steps in [README.md](../README.md#installation)." - ] - }, - { - "cell_type": "markdown", - "id": "fc97a3fc-6bed-4a0f-bf61-e977630a159c", - "metadata": {}, - "source": [ - "First, we need to set the model configs of AgentScope." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "7924b86d", - "metadata": {}, - "outputs": [], - "source": [ - "model_configs = [\n", - "    {\n", - "        \"model_type\": \"openai_chat\",\n", - "        \"config_name\": \"gpt-3.5-turbo\",\n", - "        \"model_name\": \"gpt-3.5-turbo\",\n", - "        \"api_key\": \"xxx\",\n", - "        \"organization\": \"xxx\",\n", - "        \"generate_args\": {\n", - "            \"temperature\": 0.0,\n", - "        },\n", - "    },\n", - "    {\n", - "        \"model_type\": \"openai_chat\",\n", - "        \"config_name\": \"gpt-4\",\n", - "        \"model_name\": \"gpt-4\",\n", - "        \"api_key\": \"xxx\",\n", - "        \"organization\": \"xxx\",\n", - "        \"generate_args\": {\n", - "            \"temperature\": 0.0,\n", - "        },\n", - "    }\n", - "]" - ] - }, - { - "cell_type": "markdown", - "id": "0072fc64", - "metadata": {}, - "source": [ - "Second, let's start the three agents in the debate. Note that each agent here will automatically start a sub-process, and the `reply` method is executed within the sub-process." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "260aab10", - "metadata": {}, - "outputs": [], - "source": [ - "import agentscope\n", - "from agentscope.agents.dialog_agent import DialogAgent\n", - "\n", - "agentscope.init(model_configs=model_configs)\n", - "\n", - "pro_agent = DialogAgent(\n", - "    name=\"Pro\",\n", - "    model_config_name=\"gpt-3.5-turbo\",\n", - "    use_memory=True,\n", - "    sys_prompt=\"Assume the role of a debater who is arguing in favor of the proposition that AGI (Artificial General Intelligence) can be achieved using the GPT model framework. Construct a coherent and persuasive argument, including scientific, technological, and theoretical evidence, to support the statement that GPT models are a viable path to AGI. 
Highlight the advancements in language understanding, adaptability, and scalability of GPT models as key factors in progressing towards AGI.\",\n", - ").to_dist()\n", - "con_agent = DialogAgent(\n", - "    name=\"Con\",\n", - "    model_config_name=\"gpt-3.5-turbo\",\n", - "    use_memory=True,\n", - "    sys_prompt=\"Assume the role of a debater who is arguing against the proposition that AGI can be achieved using the GPT model framework. Construct a coherent and persuasive argument, including scientific, technological, and theoretical evidence, to support the statement that GPT models, while impressive, are insufficient for reaching AGI. Discuss the limitations of GPT models such as lack of understanding, consciousness, ethical reasoning, and general problem-solving abilities that are essential for true AGI.\",\n", - ").to_dist()\n", - "judge_agent = DialogAgent(\n", - "    name=\"Judge\",\n", - "    model_config_name=\"gpt-3.5-turbo\",\n", - "    use_memory=True,\n", - "    sys_prompt=\"Assume the role of an impartial judge in a debate where the affirmative side argues that AGI can be achieved using the GPT model framework, and the negative side contests this. Listen to both sides' arguments and provide an analytical judgment on which side presented a more compelling and reasonable case. Consider the strength of the evidence, the persuasiveness of the reasoning, and the overall coherence of the arguments presented by each side.\"\n", - ").to_dist()" - ] - }, - { - "cell_type": "markdown", - "id": "01ca8024-fa7e-4d7f-bf35-a78511a47ab3", - "metadata": {}, - "source": [ - "Next, write the main debate competition process.\n", - "Note that we need to use `msghub` to ensure each agent in the debate knows the speeches of all other agents."
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "6391fb00-f74c-42c5-b742-56b7a773f875", - "metadata": {}, - "outputs": [], - "source": [ - "from agentscope.msghub import msghub\n", - "from agentscope.message import Msg\n", - "from agentscope.utils.logging_utils import logger\n", - "\n", - "# Rules explained before the debate begins \n", - "ANNOUNCEMENT = \"\"\"\n", - "Welcome to the debate on whether Artificial General Intelligence (AGI) can be achieved using the GPT model framework. This debate will consist of three rounds. In each round, the affirmative side will present their argument first, followed by the negative side. After both sides have presented, the adjudicator will summarize the key points and analyze the strengths of the arguments.\n", - "\n", - "The rules are as follows:\n", - "\n", - "Each side must present clear, concise arguments backed by evidence and logical reasoning.\n", - "No side may interrupt the other while they are presenting their case.\n", - "After both sides have presented, the adjudicator will have time to deliberate and will then provide a summary, highlighting the most persuasive points from both sides.\n", - "The adjudicator's summary will not declare a winner for the individual rounds but will focus on the quality and persuasiveness of the arguments.\n", - "At the conclusion of the three rounds, the adjudicator will declare the overall winner based on which side won two out of the three rounds, considering the consistency and strength of the arguments throughout the debate.\n", - "Let us begin the first round. 
The affirmative side: please present your argument for why AGI can be achieved using the GPT model framework.\n", - "\"\"\"\n", - "\n", - "\n", - "\"\"\"Setup the main debate competition process\"\"\"\n", - "if __name__ == \"__main__\":\n", - " participants = [pro_agent, con_agent, judge_agent]\n", - " hint = Msg(name=\"System\", content=ANNOUNCEMENT)\n", - " x = None\n", - " with msghub(participants=participants, announcement=hint):\n", - " for _ in range(3):\n", - " pro_resp = pro_agent(x)\n", - " logger.chat(pro_resp)\n", - " con_resp = con_agent(pro_resp)\n", - " logger.chat(con_resp)\n", - " x = judge_agent(con_resp)\n", - " logger.chat(x)\n", - " x = judge_agent(x)\n", - " logger.chat(x)\n" - ] - }, - { - "cell_type": "markdown", - "id": "dbfc5033", - "metadata": {}, - "source": [ - "Finally, just wait for the above code to run and watch the debate proceed." - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.9" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -} diff --git a/notebook/distributed_dialog.ipynb b/notebook/distributed_dialog.ipynb deleted file mode 100644 index ab01224b0..000000000 --- a/notebook/distributed_dialog.ipynb +++ /dev/null @@ -1,149 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "id": "d8bb3d3e-eec5-4a14-bb36-9fdf6b7d00b2", - "metadata": {}, - "source": [ - "# Distributed dialogue" - ] - }, - { - "cell_type": "markdown", - "id": "8626bd94-3a0b-4c61-85d6-b157ffc5ac25", - "metadata": {}, - "source": [ - "This example initializes an assistant agent and a user agent as separate processes and uses RPC to communicate between them. 
The full code can be found in `examples/distributed/distributed_dialog.py`." - ] - }, - { - "cell_type": "markdown", - "id": "605ebd1c-3222-4dce-b974-6377da37d555", - "metadata": {}, - "source": [ - "To install AgentScope, please follow the steps in [README.md](../README.md#installation)." - ] - }, - { - "cell_type": "markdown", - "id": "2417b9fc", - "metadata": {}, - "source": [ - "First, we need to set the model configs properly." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "8d61bef5", - "metadata": {}, - "outputs": [], - "source": [ - "model_configs = [\n", - "    {\n", - "        \"model_type\": \"openai_chat\",\n", - "        \"config_name\": \"gpt-3.5-turbo\",\n", - "        \"model_name\": \"gpt-3.5-turbo\",\n", - "        \"api_key\": \"xxx\",\n", - "        \"organization\": \"xxx\",\n", - "        \"generate_args\": {\n", - "            \"temperature\": 0.0,\n", - "        },\n", - "    },\n", - "    {\n", - "        \"model_type\": \"openai_chat\",\n", - "        \"config_name\": \"gpt-4\",\n", - "        \"model_name\": \"gpt-4\",\n", - "        \"api_key\": \"xxx\",\n", - "        \"organization\": \"xxx\",\n", - "        \"generate_args\": {\n", - "            \"temperature\": 0.0,\n", - "        },\n", - "    }\n", - "]" - ] - }, - { - "cell_type": "markdown", - "id": "710f835a-ecc8-481f-a4ab-7f0db33e68f4", - "metadata": {}, - "source": [ - "Then, we need to initialize two agents: an assistant agent and a user agent.\n", - "\n", - "To facilitate display on Jupyter, the agents will be started in a standalone multi-process mode. For a fully distributed version, please refer to `examples/distributed/distributed_dialog.py`."
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "bf3226dc", - "metadata": {}, - "outputs": [], - "source": [ - "import agentscope\n", - "from agentscope.agents.user_agent import UserAgent\n", - "from agentscope.agents.dialog_agent import DialogAgent\n", - "\n", - "agentscope.init(\n", - " model_configs=model_configs\n", - ")\n", - "\n", - "assistant_agent = DialogAgent(\n", - " name=\"Assistant\",\n", - " sys_prompt=\"You are a helpful assistant.\",\n", - " model_config_name=\"gpt-3.5-turbo\",\n", - " use_memory=True,\n", - ").to_dist()\n", - "user_agent = UserAgent(\n", - " name=\"User\",\n", - ")" - ] - }, - { - "cell_type": "markdown", - "id": "dd70c37d", - "metadata": {}, - "source": [ - "Finally, let's write the main process of the dialogue and chat with the assistant." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "b0f3c851", - "metadata": {}, - "outputs": [], - "source": [ - "import time\n", - "from loguru import logger\n", - "\n", - "msg = user_agent()\n", - "while not msg.content.endswith(\"exit\"):\n", - " msg = assistant_agent(msg)\n", - " logger.chat(msg)\n", - " msg = user_agent(msg)" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3 (ipykernel)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.9" - } - }, - "nbformat": 4, - "nbformat_minor": 5 -}