From 049184edc167b30107d16f5a30280b2fb7578079 Mon Sep 17 00:00:00 2001
From: Burhanuddin Mustafa Lakdawala
Date: Thu, 11 Apr 2024 09:33:01 -0700
Subject: [PATCH] fix markdown for long context user guide (#2351)

https://microsoft.github.io/autogen/docs/topics/long_contexts/
---
 website/docs/topics/long_contexts.md | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/website/docs/topics/long_contexts.md b/website/docs/topics/long_contexts.md
index bba36b570c7..0d867619104 100644
--- a/website/docs/topics/long_contexts.md
+++ b/website/docs/topics/long_contexts.md
@@ -12,7 +12,6 @@ Why do we need to handle long contexts? The problem arises from several constrai
 
 The `TransformMessages` capability is designed to modify incoming messages before they are processed by the LLM agent. This can include limiting the number of messages, truncating messages to meet token limits, and more.
 
-````{=mdx}
 :::info Requirements
 Install `pyautogen`:
 ```bash
@@ -21,7 +20,6 @@ pip install pyautogen
 ```
 
 For more information, please refer to the [installation guide](/docs/installation/).
 :::
-````
 
 ### Exploring and Understanding Transformations
@@ -114,11 +112,9 @@ user_proxy = autogen.UserProxyAgent(
 )
 ```
 
-```{=mdx}
 :::tip
 Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).
 :::
-```
 
 We first need to write the `test` function that creates a very long chat history by exchanging messages between an assistant and a user proxy agent, and then attempts to initiate a new chat without clearing the history, potentially triggering an error due to token limits.
 
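Note: the guide touched by this patch describes `TransformMessages` as "limiting the number of messages, truncating messages to meet token limits, and more." The sketch below illustrates that idea in plain Python; the function name `limit_history`, its parameters, and the whitespace-token heuristic are illustrative assumptions, not the pyautogen API.

```python
# Hypothetical sketch of the message-limiting idea behind TransformMessages:
# keep only the most recent messages and truncate each one to a token budget.
# (limit_history and its parameters are illustrative, not the library API.)

def limit_history(messages, max_messages=10, max_tokens_per_message=100):
    """Keep the last `max_messages` messages, truncating each message's
    content to a crude whitespace-token budget."""
    recent = messages[-max_messages:]
    out = []
    for msg in recent:
        tokens = msg["content"].split()  # naive tokenizer for illustration
        truncated = " ".join(tokens[:max_tokens_per_message])
        out.append({**msg, "content": truncated})
    return out

# Example: a 50-message history is reduced to the 10 most recent messages.
history = [{"role": "user", "content": f"message {i}"} for i in range(50)]
trimmed = limit_history(history)
print(len(trimmed))            # 10
print(trimmed[-1]["content"])  # message 49
```

A real transform would use the model's tokenizer rather than whitespace splitting, but the shape is the same: rewrite the message list before it is sent to the LLM.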