From 00e097d4a54b27afd878687bf541978d4bddc6cc Mon Sep 17 00:00:00 2001 From: Jack Gerrits Date: Sat, 9 Mar 2024 21:14:45 -0500 Subject: [PATCH] Update more notebooks to be available on the website (#1890) * Update more notebooks to be available on the website * fix notebook * update link --- ...hat_capability_long_context_handling.ipynb | 40 +++++++--- notebook/agentchat_chess.ipynb | 79 ++++++------------- notebook/agentchat_compression.ipynb | 73 ++++++----------- notebook/agentchat_custom_model.ipynb | 41 ++++------ notebook/agentchat_groupchat_research.ipynb | 66 +++++----------- notebook/agentchat_teachability.ipynb | 78 ++++++------------ notebook/agentchat_teaching.ipynb | 75 +++++------------- 7 files changed, 147 insertions(+), 305 deletions(-) diff --git a/notebook/agentchat_capability_long_context_handling.ipynb b/notebook/agentchat_capability_long_context_handling.ipynb index edeb8216be3..7d2ef12a561 100644 --- a/notebook/agentchat_capability_long_context_handling.ipynb +++ b/notebook/agentchat_capability_long_context_handling.ipynb @@ -4,19 +4,20 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Handling A Long Context via `TransformChatHistory`\n", + "# Handling A Long Context via `TransformChatHistory`\n", "\n", - "This notebook illustrates how you can use the `TransformChatHistory` capability to give any `Conversable` agent an ability to handle a long context. " - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [], - "source": [ - "## Uncomment to install pyautogen if you don't have it already\n", - "#! pip install pyautogen" + "This notebook illustrates how you can use the `TransformChatHistory` capability to give any `Conversable` agent an ability to handle a long context. 
\n", + "\n", + "````{=mdx}\n", + ":::info Requirements\n", + "Install `pyautogen`:\n", + "```bash\n", + "pip install pyautogen\n", + "```\n", + "\n", + "For more information, please refer to the [installation guide](/docs/installation/).\n", + ":::\n", + "````" ] }, { @@ -45,6 +46,12 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "````{=mdx}\n", + ":::tip\n", + "Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n", + ":::\n", + "````\n", + "\n", "To add this ability to any agent, define the capability and then use `add_to_agent`." ] }, @@ -652,6 +659,13 @@ } ], "metadata": { + "front_matter": { + "description": "Use the TransformChatHistory capability to handle long contexts", + "tags": [ + "long context handling", + "capability" + ] + }, "kernelspec": { "display_name": "Python 3", "language": "python", @@ -667,7 +681,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.13" + "version": "3.11.7" } }, "nbformat": 4, diff --git a/notebook/agentchat_chess.ipynb b/notebook/agentchat_chess.ipynb index 21a3c29fb35..8ff713587c0 100644 --- a/notebook/agentchat_chess.ipynb +++ b/notebook/agentchat_chess.ipynb @@ -5,15 +5,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\"Open" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Auto Generated Agent Chat: Chess Game Playing While Chitchatting by GPT-4 Agents\n", + "# Chess Game Playing While Chitchatting by GPT-4 Agents\n", "\n", "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n", "Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n", @@ -22,10 +14,17 @@ "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install:\n", + "````{=mdx}\n", + ":::info Requirements\n", + "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", + "\n", "```bash\n", - "pip install pyautogen\n", - "```" + "pip install pyautogen chess\n", + "```\n", + "\n", + "For more information, please refer to the [installation guide](/docs/installation/).\n", + ":::\n", + "````" ] }, { @@ -35,16 +34,13 @@ "outputs": [], "source": [ "%%capture --no-stderr\n", - "# %pip install \"pyautogen>=0.2.3\"\n", "from collections import defaultdict\n", "from typing import Any, Dict, List, Optional, Union\n", "\n", "import chess\n", "import chess.svg\n", "\n", - "import autogen\n", - "\n", - "%pip install chess -U" + "import autogen" ] }, { @@ -68,20 +64,7 @@ " filter_dict={\n", " \"model\": [\"gpt-4\", \"gpt4\", \"gpt-4-32k\", \"gpt-4-32k-0314\", \"gpt-4-32k-v0314\"],\n", " },\n", - ")\n", - "# config_list_gpt35 = autogen.config_list_from_json(\n", - "# \"OAI_CONFIG_LIST\",\n", - "# filter_dict={\n", - "# \"model\": {\n", - "# \"gpt-3.5-turbo\",\n", - "# \"gpt-3.5-turbo-16k\",\n", - "# \"gpt-3.5-turbo-16k-0613\",\n", - "# \"gpt-3.5-turbo-0301\",\n", - "# \"chatgpt-35-turbo-0301\",\n", - "# \"gpt-35-turbo-v0301\",\n", - "# },\n", - "# },\n", - "# )" + ")" ] }, { @@ -89,33 +72,11 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well). 
Only the gpt-4 models are kept in the list based on the filter condition.\n", - "\n", - "The config list looks like the following:\n", - "```python\n", - "config_list = [\n", - " {\n", - " 'model': 'gpt-4',\n", - " 'api_key': '',\n", - " },\n", - " {\n", - " 'model': 'gpt-4',\n", - " 'api_key': '',\n", - " 'base_url': '',\n", - " 'api_type': 'azure',\n", - " 'api_version': '2024-02-15-preview',\n", - " },\n", - " {\n", - " 'model': 'gpt-4-32k',\n", - " 'api_key': '',\n", - " 'base_url': '',\n", - " 'api_type': 'azure',\n", - " 'api_version': '2024-02-15-preview',\n", - " },\n", - "]\n", - "```\n", - "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods." + "````{=mdx}\n", + ":::tip\n", + "Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n", + ":::\n", + "````" ] }, { @@ -996,6 +957,10 @@ } ], "metadata": { + "front_matter": { + "tags": ["chess"], + "description": "Use AutoGen to create two agents that are able to play chess" + }, "kernelspec": { "display_name": "flaml", "language": "python", diff --git a/notebook/agentchat_compression.ipynb b/notebook/agentchat_compression.ipynb index a6f2605f9d3..afdd20d356f 100644 --- a/notebook/agentchat_compression.ipynb +++ b/notebook/agentchat_compression.ipynb @@ -4,14 +4,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\"Open" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Auto Generated Agent Chat: Conversations with Chat History Compression Enabled\n", + "# Conversations with Chat History Compression Enabled\n", "\n", "**CompressibleAgent will be deprecated.** \n", "\n", @@ -22,9 +15,11 @@ "In this notebook, we demonstrate how to enable compression of history messages using the `CompressibleAgent`. 
While this agent retains all the default functionalities of the `AssistantAgent`, it also provides the added feature of compression when activated through the `compress_config` setting.\n", "\n", "Different compression modes are supported:\n", "\n", "1. `compress_config=False` (Default): `CompressibleAgent` is equivalent to `AssistantAgent`.\n", "2. `compress_config=True` or `compress_config={\"mode\": \"TERMINATE\"}`: no compression will be performed. However, we will count token usage before sending requests to the OpenAI model. The conversation will be terminated directly if the total token usage exceeds the maximum token usage allowed by the model (to avoid the token limit error from OpenAI API).\n", "3. `compress_config={\"mode\": \"COMPRESS\", \"trigger_count\": , \"leave_last_n\": }`: compression is enabled.\n", "\n", " ```python\n", " # default compress_config\n", " compress_config = {\n", @@ -38,12 +33,13 @@ " \"verbose\": False, # if True, print out the content to be compressed and the compressed content\n", " }\n", " ```\n", "\n", " Currently, our compression logic is as follows:\n", " 1. We will always leave the first user message (as well as system prompts) and compress the rest of the history messages.\n", " 2. You can choose to not compress the last n messages in the history with \"leave_last_n\".\n", " 3. The summary is performed on a per-message basis, with the role of the messages (See compressed content in the example below).\n", "\n", "4. 
`compress_config={\"mode\": \"CUSTOMIZED\", \"compress_function\": }`: the `compress_function` function will be called on trigger count. The function should accept a list of messages as input and return a tuple of (is_success: bool, compressed_messages: List[Dict]). The whole message history (except system prompt) will be passed.\n", "\n", "\n", "By adjusting `trigger_count`, you can decide when to compress the history messages based on existing tokens. If this is a float number between 0 and 1, it is interpreted as a ratio of max tokens allowed by the model. For example, the AssistantAgent uses gpt-4 with max tokens 8192, the trigger_count = 0.7 * 8192 = 5734.4 -> 5734. Do not set `trigger_count` to the max tokens allowed by the model, since the same LLM is employed for compression and it needs tokens to generate the compressed content. \n", @@ -56,19 +52,16 @@ "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n", + "````{=mdx}\n", + ":::info Requirements\n", + "Install `pyautogen`:\n", "```bash\n", "pip install pyautogen\n", - "```" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [], - "source": [ - "# %pip install pyautogen~=0.1.0" + "```\n", + "\n", + "For more information, please refer to the [installation guide](/docs/installation/).\n", + ":::\n", + "````" ] }, { @@ -105,35 +98,11 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". 
It filters the configs by models (you can filter by other keys as well).\n", - "\n", - "The config list looks like the following:\n", - "```python\n", - "config_list = [\n", - " {\n", - " 'model': 'gpt-4',\n", - " 'api_key': '',\n", - " },\n", - " {\n", - " 'model': 'gpt-4',\n", - " 'api_key': '',\n", - " 'base_url': '',\n", - " 'api_type': 'azure',\n", - " 'api_version': '2024-02-15-preview',\n", - " },\n", - " {\n", - " 'model': 'gpt-4-32k',\n", - " 'api_key': '',\n", - " 'base_url': '',\n", - " 'api_type': 'azure',\n", - " 'api_version': '2024-02-15-preview',\n", - " },\n", - "]\n", - "```\n", - "\n", - "If you open this notebook in colab, you can upload your files by clicking the file icon on the left panel and then choose \"upload file\" icon.\n", - "\n", - "You can set the value of config_list in other ways you prefer, e.g., loading from a YAML file." + "````{=mdx}\n", + ":::tip\n", + "Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n", + ":::\n", + "````" ] }, { @@ -884,6 +853,10 @@ } ], "metadata": { + "front_matter": { + "description": "Learn about the CompressibleAgent", + "tags": [] + }, "kernelspec": { "display_name": "msft", "language": "python", diff --git a/notebook/agentchat_custom_model.ipynb b/notebook/agentchat_custom_model.ipynb index 6a42906743f..d4cf14e57a0 100644 --- a/notebook/agentchat_custom_model.ipynb +++ b/notebook/agentchat_custom_model.ipynb @@ -1,13 +1,5 @@ { "cells": [ - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\"Open" - ] - }, { "attachments": {}, "cell_type": "markdown", @@ -25,26 +17,17 @@ "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install:\n", + "````{=mdx}\n", + ":::info Requirements\n", + "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", + "\n", "```bash\n", "pip install pyautogen torch transformers sentencepiece\n", - "```" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "execution": { - "iopub.execute_input": "2023-02-13T23:40:52.317406Z", - "iopub.status.busy": "2023-02-13T23:40:52.316561Z", - "iopub.status.idle": "2023-02-13T23:40:52.321193Z", - "shell.execute_reply": "2023-02-13T23:40:52.320628Z" - } - }, - "outputs": [], - "source": [ - "# %pip install pyautogen~=0.2.0b4 torch git+https://github.com/huggingface/transformers sentencepiece" + "```\n", + "\n", + "For more information, please refer to the [installation guide](/docs/installation/).\n", + ":::\n", + "````" ] }, { @@ -455,6 +438,12 @@ } ], "metadata": { + "front_matter": { + "description": "Define and load a custom model", + "tags": [ + "custom model" + ] + }, "kernelspec": { "display_name": "Python 3", "language": "python", diff --git a/notebook/agentchat_groupchat_research.ipynb b/notebook/agentchat_groupchat_research.ipynb index 30ad696abb3..c448ed8cb7a 100644 --- a/notebook/agentchat_groupchat_research.ipynb +++ b/notebook/agentchat_groupchat_research.ipynb @@ -5,35 +5,23 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\"Open" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Auto Generated Agent Chat: Performs Research with Multi-Agent Group Chat\n", + "# Perform Research with Multi-Agent Group Chat\n", "\n", "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. 
This framework allows tool use and human participation through multi-agent conversation.\n", "Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n", "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n", + "````{=mdx}\n", + ":::info Requirements\n", + "Install `pyautogen`:\n", "```bash\n", "pip install pyautogen\n", - "```" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [], - "source": [ - "%%capture --no-stderr\n", - "# %pip install \"pyautogen>=0.2.3\"" + "```\n", + "\n", + "For more information, please refer to the [installation guide](/docs/installation/).\n", + ":::\n", + "````" ] }, { @@ -67,33 +55,11 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well).\n", - "\n", - "The config list looks like the following:\n", - "```python\n", - "config_list = [\n", - " {\n", - " 'model': 'gpt-4-32k',\n", - " 'api_key': '',\n", - " },\n", - " {\n", - " 'model': 'gpt-4-32k',\n", - " 'api_key': '',\n", - " 'base_url': '',\n", - " 'api_type': 'azure',\n", - " 'api_version': '2024-02-15-preview',\n", - " },\n", - " {\n", - " 'model': 'gpt-4-32k-0314',\n", - " 'api_key': '',\n", - " 'base_url': '',\n", - " 'api_type': 'azure',\n", - " 'api_version': '2024-02-15-preview',\n", - " },\n", - "]\n", - "```\n", - "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." 
+ "````{=mdx}\n", + ":::tip\n", + "Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n", + ":::\n", + "````" ] }, { @@ -548,6 +514,10 @@ } ], "metadata": { + "front_matter": { + "tags": ["group chat"], + "description": "Perform research using a group chat with a number of specialized agents" + }, "kernelspec": { "display_name": "flaml", "language": "python", diff --git a/notebook/agentchat_teachability.ipynb b/notebook/agentchat_teachability.ipynb index 252ff92c63d..ac239f793dc 100644 --- a/notebook/agentchat_teachability.ipynb +++ b/notebook/agentchat_teachability.ipynb @@ -1,13 +1,5 @@ { "cells": [ - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\"Open" - ] - }, { "attachments": {}, "cell_type": "markdown", @@ -21,24 +13,21 @@ "\n", "In making decisions about memo storage and retrieval, `Teachability` calls an instance of `TextAnalyzerAgent` to analyze pieces of text in several different ways. This adds extra LLM calls involving a relatively small number of tokens. These calls can add a few seconds to the time a user waits for a response.\n", "\n", - "This notebook demonstrates how `Teachability` can be added to an agent so that it can learn facts, preferences, and skills from users. To chat with a teachable agent yourself, run [chat_with_teachable_agent.py](../test/agentchat/contrib/capabilities/chat_with_teachable_agent.py).\n", + "This notebook demonstrates how `Teachability` can be added to an agent so that it can learn facts, preferences, and skills from users. To chat with a teachable agent yourself, run [chat_with_teachable_agent.py](https://github.com/microsoft/autogen/blob/main/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py).\n", "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install the [teachable] option.\n", + "````{=mdx}\n", + ":::info Requirements\n", + "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", + "\n", "```bash\n", - "pip install \"pyautogen[teachable]\"\n", - "```" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%%capture --no-stderr\n", - "# %pip install \"pyautogen[teachable]\"" + "pip install pyautogen[teachable]\n", + "```\n", + "\n", + "For more information, please refer to the [installation guide](/docs/installation/).\n", + ":::\n", + "````" ] }, { @@ -85,39 +74,11 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well). After application of the filter shown above, only the gpt-4 models are considered.\n", - "\n", - "The config list may look like the following:\n", - "```python\n", - "config_list = [\n", - " {\n", - " 'model': 'gpt-4-1106-preview',\n", - " 'api_key': '',\n", - " },\n", - " {\n", - " 'model': 'gpt-4',\n", - " 'api_key': '',\n", - " },\n", - " {\n", - " 'model': 'gpt-4',\n", - " 'api_key': '',\n", - " 'base_url': '',\n", - " 'api_type': 'azure',\n", - " 'api_version': '2024-02-15-preview',\n", - " },\n", - " {\n", - " 'model': 'gpt-4-32k',\n", - " 'api_key': '',\n", - " 'base_url': '',\n", - " 'api_type': 'azure',\n", - " 'api_version': '2024-02-15-preview',\n", - " },\n", - "]\n", - "```\n", - "\n", - "If you open this notebook in colab, you can upload your files by clicking the file icon on the left panel and then choose \"upload file\" icon.\n", - "\n", - "You can set the value of config_list in other ways if you prefer, e.g., loading from a YAML file." 
+ "````{=mdx}\n", + ":::tip\n", + "Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n", + ":::\n", + "````" ] }, { @@ -833,6 +794,13 @@ } ], "metadata": { + "front_matter": { + "description": "Learn how to persist memories across chat sessions using the Teachability capability", + "tags": [ + "teachability", + "capability" + ] + }, "kernelspec": { "display_name": "flaml", "language": "python", diff --git a/notebook/agentchat_teaching.ipynb b/notebook/agentchat_teaching.ipynb index 45eab8b5cb3..a61f3c7e08e 100644 --- a/notebook/agentchat_teaching.ipynb +++ b/notebook/agentchat_teaching.ipynb @@ -1,13 +1,5 @@ { "cells": [ - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\"Open" - ] - }, { "attachments": {}, "cell_type": "markdown", @@ -22,57 +14,16 @@ "\n", "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n", + "````{=mdx}\n", + ":::info Requirements\n", + "Install `pyautogen`:\n", "```bash\n", - "pip install \"pyautogen>=0.2.3\"\n", - "```" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# %pip install --quiet \"pyautogen>=0.2.3\"" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Set your API Endpoint\n", - "\n", - "The [`config_list_from_json`](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n", - "\n", - "It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". 
It filters the configs by models (you can filter by other keys as well).\n", "\n", "The json looks like the following:\n", "```json\n", "[\n", " {\n", " \"model\": \"gpt-4\",\n", " \"api_key\": \"\"\n", " },\n", " {\n", " \"model\": \"gpt-4\",\n", " \"api_key\": \"\",\n", " \"base_url\": \"\",\n", " \"api_type\": \"azure\",\n", " \"api_version\": \"2024-02-15-preview\"\n", " },\n", " {\n", " \"model\": \"gpt-4-32k\",\n", " \"api_key\": \"\",\n", " \"base_url\": \"\",\n", " \"api_type\": \"azure\",\n", " \"api_version\": \"2024-02-15-preview\"\n", " }\n", "]\n", + "pip install pyautogen\n", "```\n", "\n", - "If you open this notebook in colab, you can upload your files by clicking the file icon on the left panel and then choose \"upload file\" icon.\n" + "For more information, please refer to the [installation guide](/docs/installation/).\n", + ":::\n", + "````\n" ] }, { @@ -99,6 +50,12 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "````{=mdx}\n", + ":::tip\n", + "Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).\n", + ":::\n", + "````\n", + "\n", "## Example Task: Literature Survey\n", "\n", "We consider a scenario where one needs to find research papers of a certain topic, categorize the application domains, and plot a bar chart of the number of papers in each domain." ] }, @@ -942,6 +899,12 @@ } ], "metadata": { + "front_matter": { + "description": "Teach the agent new skills using natural language", + "tags": [ + "teaching" + ] + }, "kernelspec": { "display_name": "flaml-eval", "language": "python",
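For reference, the `CUSTOMIZED` compression mode documented in `agentchat_compression.ipynb` expects a `compress_function` that accepts the message history (minus the system prompt) and returns a tuple of `(is_success: bool, compressed_messages: List[Dict])`. A minimal sketch of such a function is below; the keep-first-and-last truncation policy and the `keep_ends` name are illustrative assumptions, not part of the notebook.

```python
# Sketch of a custom compress_function for CompressibleAgent's
# CUSTOMIZED mode. Per the notebook's contract, it takes the message
# history and returns (is_success, compressed_messages); the policy
# below (keep the first and last two messages) is hypothetical.
from typing import Dict, List, Tuple


def keep_ends(messages: List[Dict]) -> Tuple[bool, List[Dict]]:
    """Keep the first message and the last two; elide the middle."""
    if len(messages) <= 3:
        # Nothing worth compressing; signal failure so the history
        # is left untouched.
        return False, messages
    dropped = len(messages) - 3
    placeholder = {"role": "user", "content": f"[{dropped} earlier messages elided]"}
    return True, [messages[0], placeholder] + messages[-2:]


# It would be wired up via:
# compress_config = {"mode": "CUSTOMIZED", "compress_function": keep_ends}
history = [{"role": "user", "content": f"msg {i}"} for i in range(6)]
ok, compressed = keep_ends(history)
```

Returning `False` leaves the history unchanged, which matches the notebook's description that the whole history (except the system prompt) is passed in and only replaced on success.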