Releases: letta-ai/letta
v0.5.4
🦟 Bugfix release
What's Changed
- feat: allow passing in tags to `client.create_agent(tags=[..])` by @sarahwooders in #2073
- fix: Lazy load `llamaindex` imports that are causing issues by @mattzh72 in #2075
- fix: fix `poetry.lock` to pin pydantic version by @sarahwooders in #2076
Full Changelog: 0.5.3...0.5.4
v0.5.3
🐛 This release includes many bugfixes, and also migrates the Letta Docker image to the `letta/letta` Docker Hub repository.
🔥 New features 🔥
- 📊 Add token counter to the CLI via the `/tokens` command #2047
- 🤲 Support for Together AI endpoints #2045 (documentation)
- 🔐 Password-protect `letta server` endpoints with the `letta server --secure` flag #2030
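As a quick illustration of the `--secure` flag, a minimal sketch is below. The `LETTA_SERVER_PASS` variable, the bearer-token header, and the endpoint path are assumptions about how the password is supplied; check #2030 for the actual mechanism.

```shell
# Sketch only: start the server in secure mode.
# ASSUMPTION: the server password is supplied via LETTA_SERVER_PASS and
# clients send it as a bearer token; see #2030 for the real mechanism.
export LETTA_SERVER_PASS=my-secret-password
letta server --secure

# in another shell, authenticate requests with the password
# (endpoint path illustrative)
curl -H "Authorization: Bearer my-secret-password" http://localhost:8283/v1/agents/
```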
What's Changed
- fix: patch workflows by @cpacker in #2011
- feat: rename docker to `letta/letta` by @sarahwooders in #2010
- feat: add bash script to allow people to generate openapi schema with… by @4shub in #2015
- feat: modify `startup.sh` to work for sqlite by @sarahwooders in #2012
- feat: add openapi compatibility checker by @4shub in #2016
- fix: fix `Unsupport authentication type` error for Ollama by @sarahwooders in #2018
- feat: add latest anthropic models by @sarahwooders in #2022
- feat: Move Source to ORM model by @mattzh72 in #1979
- fix: Fix memory summarization by @mattzh72 in #2029
- feat: support a new secure flag by @4shub in #2030
- chore: Migrate FileMetadata to ORM by @mattzh72 in #2028
- fix: Fix bug for deleting agents belonging to the user org by @mattzh72 in #2033
- fix: Fix tool deletion bug by @mattzh72 in #2034
- fix: Fix summarizer for Anthropic and add integration tests by @mattzh72 in #2046
- feat: added token counter command to CLI by @cpacker in #2047
- fix: Fix Docker compose startup issues (#2056) by @sethanderson1 in #2057
- Add missing import in functions.py by @andrewrfitz in #2060
- feat: support togetherAI via `/completions` by @cpacker in #2045
- fix: context window overflow patch by @cpacker in #2053
- fix: Fix ollama CI test by @mattzh72 in #2062
- feat: Move blocks to ORM model by @mattzh72 in #1980
- feat: bump version 0.5.3 by @sarahwooders in #2066
- fix(compose.yaml): letta_server hostname mismatch by @ahmedrowaihi in #2065
- core: update notebooks to 0.5.2 and add agent cleanup (error on duplicate names) by @sarahwooders in #2038
- fix: Fix security vuln with file upload by @mattzh72 in #2067
New Contributors
- @sethanderson1 made their first contribution in #2057
- @andrewrfitz made their first contribution in #2060
- @ahmedrowaihi made their first contribution in #2065
Full Changelog: 0.5.2...0.5.3
v0.5.2
🤖 Tags for agents (for associating agents with end users) #1984
You can now specify and query agents via an `AgentState.tags` field. If you want to associate end-user IDs with agents in your application, we recommend using tags to link each agent to a specific end user:

```python
# create agent for a specific user
client.create_agent(tags=["my_user_id"])

# get agents for a user
agents = client.get_agents(tags=["my_user_id"])
```
🛠️ Constrain agent behavior with tool rules #1954
We are introducing initial support for "tool rules", which allow developers to define constraints on their tools, such as requiring that a tool terminate agent execution. We added the following tool rules:
- `TerminalToolRule(tool_name=...)` - If the tool is called, the agent ends execution
- `InitToolRule(tool_name=...)` - The tool must be called first when an agent is run
- `ToolRule(tool_name=..., children=[...])` - If the tool is called, it must be followed by one of the tools specified in `children`
Tool rules are defined per-agent, and passed when creating agents:
```python
# agent which must always call `first_tool_to_call`, then `second_tool_to_call`,
# then `final_tool` when invoked
agent_state = client.create_agent(
    tool_rules=[
        InitToolRule(tool_name="first_tool_to_call"),
        ToolRule(tool_name="first_tool_to_call", children=["second_tool_to_call"]),
        ToolRule(tool_name="second_tool_to_call", children=["final_tool"]),
        TerminalToolRule(tool_name="send_message"),
    ]
)
```
By default, the `send_message` tool is marked with `TerminalToolRule`.

NOTE: All `ToolRule` types except for `TerminalToolRule` are only supported by models and providers which support structured outputs, which is currently only OpenAI with `gpt-4o` and `gpt-4o-mini`.
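To make the rule semantics concrete, here is a toy validator, illustrative only and not letta's implementation, that checks whether a sequence of tool calls satisfies a set of init, children, and terminal constraints:

```python
# Toy model of tool-rule semantics (illustrative, not letta's implementation).
# init: tools allowed to run first; children: allowed successors per tool;
# terminal: tools that end execution (nothing may follow them).
def is_valid_sequence(calls, init, children, terminal):
    if not calls or calls[0] not in init:
        return False
    for prev, nxt in zip(calls, calls[1:]):
        if prev in terminal:
            return False  # a terminal tool must be the last call
        if prev in children and nxt not in children[prev]:
            return False  # constrained tool followed by a disallowed tool
    return True

init = {"first_tool_to_call"}
children = {
    "first_tool_to_call": ["second_tool_to_call"],
    "second_tool_to_call": ["final_tool"],
}
terminal = {"send_message"}

print(is_valid_sequence(
    ["first_tool_to_call", "second_tool_to_call", "final_tool", "send_message"],
    init, children, terminal,
))  # prints True
```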
🐛 Bugfixes + Misc
- Fix error in tool creation on ADE
- Fixes to tool updating
- Deprecation of `Block.name` in favor of `Block.template_name` (only required for templated blocks) #1937
- Move `docker run letta/letta` to run on port `8283` (previously `8083`)
- Properly returning `LettaUsageStatistics` #1955
- Example notebooks on multi-agent, RAG, custom memory, and tools added in https://github.com/letta-ai/letta/tree/main/examples/notebooks
What's Changed
- chore: Consolidate CI style checks by @mattzh72 in #1936
- test: Add archival insert test to GPT-4 and make tests failure sensitive by @mattzh72 in #1930
- fix: Fix `letta delete-agent` by @mattzh72 in #1940
- feat: Add orm for Tools and clean up Tool logic by @mattzh72 in #1935
- fix: update ollama model for testing by @sarahwooders in #1941
- chore: Remove legacy code and instances of anon_clientid by @mattzh72 in #1942
- fix: fix inconsistent name and label usage for blocks to resolve recursive validation issue by @sarahwooders in #1937
- feat: Enable base constructs to automatically populate "created_by" and "last_updated_by" fields for relevant objects by @mattzh72 in #1944
- feat: move docker run command to use port 8283 by @sarahwooders in #1949
- fix: Clean up some legacy code and fix Groq provider by @mattzh72 in #1950
- feat: add workflow to also publish to memgpt repository by @sarahwooders in #1953
- feat: added returning usage data by @cpacker in #1955
- fix: Fix create organization bug by @mattzh72 in #1956
- chore: fix markdown error by @4shub in #1957
- feat: Auto-refresh `json_schema` after tool update by @mattzh72 in #1958
- feat: Implement tool calling rules for agents by @mattzh72 in #1954
- fix: Make imports more explicit for `BaseModel` v1 or v2 by @mattzh72 in #1959
- fix: math renderer error by @4shub in #1965
- chore: add migration script by @4shub in #1960
- fix: fix bug with `POST /v1/agents/messages` route returning empty `LettaMessage` base objects by @cpacker in #1966
- fix: stop running the PR title validation on main, only on PRs by @cpacker in #1969
- chore: fix lettaresponse by @4shub in #1968
- fix: removed dead workflow file by @cpacker in #1970
- feat: Add endpoint to add base tools to an org by @mattzh72 in #1971
- chore: Migrate database by @4shub in #1974
- chore: Tweak composio log levels by @mattzh72 in #1976
- chore: Remove extra print statements by @mattzh72 in #1975
- feat: rename `block.name` to `block.template_name` for clarity and add shared block tests by @sarahwooders in #1951
- feat: added ability to disable the initial message sequence during agent creation by @cpacker in #1978
- chore: Move ID generation logic out of the ORM layer and into the Pydantic model layer by @mattzh72 in #1981
- feat: add ability to list agents by `name` for REST API and python SDK by @sarahwooders in #1982
- feat: add convenience link to open ADE from server launch by @cpacker in #1986
- docs: update badges in readme by @cpacker in #1985
- fix: add `name` alias to `block.template_name` to fix ADE by @sarahwooders in #1987
- chore: install all extras for prod by @4shub in #1989
- fix: fix issue with linking tools and adding new tools by @sarahwooders in #1988
- chore: add letta web safety test by @4shub in #1991
- feat: Add ability to add tags to agents by @mattzh72 in #1984
- fix: Resync agents when tools are missing by @mattzh72 in #1994
- Revert "fix: Resync agents when tools are missing" by @sarahwooders in #1996
- fix: misc fixes (bad link to old docs, composio print statement, context window selection) by @cpacker in #1992
- fix: no error when the tool name is invalid in agent state by @mattzh72 in #1997
- feat: add e2e example scripts for documentation by @sarahwooders in #1995
- chore: Continue relaxing tool constraints by @mattzh72 in #1999
- chore: add endpoint to update users by @4shub in #1993
- chore: Add tool rules example by @mattzh72 in #1998
- feat: move HTML rendering of messages into `LettaResponse` and update notebook by @sarahwooders in #1983
- feat: add example notebooks by @sarahwooders in #2001
- chore: bump to version 0.5.2 by @sarahwooders in #2002
Full Changelog: 0.5.1...0.5.2
v0.5.1
🛠️ Option to pre-load Composio, CrewAI, & LangChain tools
You can now auto-load tools from external libraries by setting the environment variable `LETTA_LOAD_DEFAULT_EXTERNAL_TOOLS=true`. Then, when you run `letta server`, tools which do not require authorization will be loaded automatically.

```sh
export LETTA_LOAD_DEFAULT_EXTERNAL_TOOLS=true
# recommended: use with composio
export COMPOSIO_API_KEY=...
pip install 'letta[external-tools,server]'
letta server
```
💭 Addition of `put_inner_thoughts_in_kwargs` field in `LLMConfig`

Some models (e.g. `gpt-4o-mini`) need to have inner thoughts as keyword arguments in the tool call (as opposed to having inner thoughts in the content field). If you see your model missing inner thoughts generation, you should set `put_inner_thoughts_in_kwargs=True` in the `LLMConfig`.
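For example, a minimal config sketch is below. The flag itself comes from this release; the surrounding `LLMConfig` fields (`model`, `model_endpoint_type`, `model_endpoint`, `context_window`) and the import path are assumptions about the config shape and may differ by letta version:

```python
# Config sketch (field names other than put_inner_thoughts_in_kwargs are
# assumptions; check your letta version's LLMConfig schema).
from letta import LLMConfig  # import path may vary by version

llm_config = LLMConfig(
    model="gpt-4o-mini",
    model_endpoint_type="openai",
    model_endpoint="https://api.openai.com/v1",
    context_window=128000,
    put_inner_thoughts_in_kwargs=True,  # emit inner thoughts inside the tool call
)

agent_state = client.create_agent(llm_config=llm_config)
```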
🔐 Deprecation of `Admin`

Letta no longer requires authentication to use the Letta server and ADE. Although we still have a notion of `user_id` which can be passed in the `BEARER_TOKEN`, we expect a separate service to manage users and authentication. You will no longer need to create a user before creating an agent (agents will be assigned a default `user_id`).
🐞 Various Bugfixes
- Fixes to Azure embeddings endpoint
What's Changed
- refactor: make `Agent.step()` multi-step by @cpacker in #1884
- feat: add `GET` route to get the breakdown of an agent's context window by @cpacker in #1889
- feat: Add delete file from source endpoint by @mattzh72 in #1893
- feat: add support for agent "swarm" (multi-agent) by @sarahwooders in #1878
- docs: refresh readme by @cpacker in #1892
- docs: fix README by @cpacker in #1894
- docs: patch readme by @cpacker in #1895
- fix: Fix updating tools by @mattzh72 in #1886
- chore: fixes tool bug by @4shub in #1898
- feat: Add default external tools by @mattzh72 in #1899
- feat: Add `put_inner_thoughts_in_kwargs` as a config setting for the LLM by @mattzh72 in #1902
- feat: prompting O1 by @kl2806 in #1891
- docs: update index.md by @eltociear in #1901
- docs: removed docs since they're no longer active by @cpacker in #1904
- feat: Add endpoint to get full Tool objects belonging to an agent by @mattzh72 in #1906
- feat: add functions to get context window overview by @cpacker in #1903
- feat: Add pagination for list tools by @mattzh72 in #1907
- chore: bump composio version for stability by @mattzh72 in #1908
- feat: add function IDs to `LettaMessage` function calls and response by @cpacker in #1909
- feat: fix streaming `put_inner_thoughts_in_kwargs` by @cpacker in #1911
- revert: Revert "feat: fix streaming `put_inner_thoughts_in_kwargs`" by @cpacker in #1912
- feat: fix streaming `put_inner_thoughts_in_kwargs` by @cpacker in #1913
- fix: Add embedding tests to azure by @mattzh72 in #1920
- feat: Add ORM for organization model by @mattzh72 in #1914
- chore: remove the admin client and tests by @sarahwooders in #1923
- fix: fix bug triggered by using ada embeddings by @cpacker in #1915
- feat: Add ORM for user model by @mattzh72 in #1924
- fix: fix core memory heartbeat issue by @sarahwooders in #1929
- chore: bump to version 0.5.1 by @sarahwooders in #1922
Full Changelog: 0.5.0...0.5.1
v0.5.0
This release introduces major changes to how model providers are configured with Letta, as well as many bugfixes.
🧰 Dynamic model listing and multiple providers (#1814)
Model providers (e.g. OpenAI, Ollama, vLLM, etc.) are now enabled using environment variables, where multiple providers can be enabled at a time. When a provider is enabled, all supported LLM and embedding models will be listed as options to be selected in the CLI and ADE in a dropdown.
For example, for OpenAI you can simply get started with:

```
> export OPENAI_API_KEY=...
> letta run

? Select LLM model: (Use arrow keys)
 » letta-free [type=openai] [ip=https://inference.memgpt.ai]
   gpt-4o-mini-2024-07-18 [type=openai] [ip=https://api.openai.com/v1]
   gpt-4o-mini [type=openai] [ip=https://api.openai.com/v1]
   gpt-4o-2024-08-06 [type=openai] [ip=https://api.openai.com/v1]
   gpt-4o-2024-05-13 [type=openai] [ip=https://api.openai.com/v1]
   gpt-4o [type=openai] [ip=https://api.openai.com/v1]
   gpt-4-turbo-preview [type=openai] [ip=https://api.openai.com/v1]
   gpt-4-turbo-2024-04-09 [type=openai] [ip=https://api.openai.com/v1]
   gpt-4-turbo [type=openai] [ip=https://api.openai.com/v1]
   gpt-4-1106-preview [type=openai] [ip=https://api.openai.com/v1]
   gpt-4-0613 [type=openai] [ip=https://api.openai.com/v1]
   ...
```
Similarly, if you are using the ADE with `letta server`, you can select the model to use from the model dropdown.

```sh
# include models from OpenAI
export OPENAI_API_KEY=...

# include models from Anthropic
export ANTHROPIC_API_KEY=...

# include models served by Ollama
export OLLAMA_BASE_URL=...

letta server
```
We are deprecating the `letta configure` and `letta quickstart` commands, as well as the use of `~/.letta/config` for specifying the default `LLMConfig` and `EmbeddingConfig`, since that approach prevents a single Letta server from running agents with different model configurations concurrently, or from changing the model configuration of an agent without restarting the server. The old workflow also required users to specify the model name, provider, and context window size manually via `letta configure`.
🧠 Integration testing for model providers
We added integration tests (including testing of MemGPT memory-management tool use) for several model providers (OpenAI, Anthropic, Azure, Google AI Gemini, and Groq), and fixed many bugs in the process.
📊 Database migrations
We now support automated database migrations via alembic, implemented in #1867. You can expect future releases to support automated migrations even if there are schema changes.
What's Changed
- feat: add back support for using `AssistantMessage` subtype of `LettaMessage` by @cpacker in #1812
- feat: Add Groq as provider option by @mattzh72 in #1815
- chore: allow app.letta.com access to local if user grants permission by @4shub in #1830
- feat: Set up code scaffolding for complex e2e tests and write tests for OpenAI GPT4 endpoint by @mattzh72 in #1827
- feat: require `LLMConfig` and `EmbeddingConfig` to be specified for agent creation + allow multiple simultaneous provider configs for server by @sarahwooders in #1814
- test: Add complex e2e tests for anthropic opus-3 model by @mattzh72 in #1837
- refactor: remove `get_current_user` and replace with direct header read by @cpacker in #1834
- Docker compose vllm by @hitpoint6 in #1821
- fix: Fix Azure provider and add complex e2e testing by @mattzh72 in #1842
- fix: patch `user_id` in header by @cpacker in #1843
- feat: add agent types by @vivek3141 in #1831
- feat: list out embedding models for Google AI provider by @sarahwooders in #1839
- test: add complex testing for Groq Llama 3.1 70b by @mattzh72 in #1845
- feat: persist tools to db when saving agent by @vivek3141 in #1847
- feat: list available embedding/LLM models for ollama by @sarahwooders in #1840
- feat: Add listing llm models and embedding models for Azure endpoint by @mattzh72 in #1846
- fix: remove testing print by @mattzh72 in #1849
- fix: calling link_tools doesnt update agent.tools by @vivek3141 in #1848
- fix: refactor Google AI Provider / helper functions and add endpoint test by @mattzh72 in #1850
- fix: factor out repeat POST request logic by @mattzh72 in #1851
- fix: CLI patches - patch runtime error on main loop + duplicate internal monologue by @cpacker in #1852
- test: add complex gemini tests by @mattzh72 in #1853
- chore: deprecate `letta configure` and remove config defaults by @sarahwooders in #1841
- chore: add CLI CI test by @mattzh72 in #1858
- fix: insert_many checks exists_ok by @vivek3141 in #1861
- fix: delete agent-source mapping on detachment and add test by @sarahwooders in #1862
- feat: cleanup display of free endpoint by @sarahwooders in #1860
- fix: add missing hardcodings for popular OpenAI models by @cpacker in #1863
- chore: fix branch by @sarahwooders in #1865
- chore: add e2e tests for Groq to CI by @mattzh72 in #1868
- test: Fix Azure tests and write CI tests by @mattzh72 in #1871
- chore: support alembic by @4shub in #1867
- fix: fix typo by @kl2806 in #1870
- feat: add `VLLMProvider` by @sarahwooders in #1866
- fix: Fix config bug in alembic by @mattzh72 in #1873
- fix: patch errors with `OllamaProvider` by @cpacker in #1875
- refactor: simplify `Agent.step` inputs to `Message` or `List[Message]` only by @cpacker in #1879
- feat: Enable adding files by @mattzh72 in #1864
- feat: refactor the `POST agent/messages` API to take multiple messages by @cpacker in #1882
- feat: Add MistralProvider by @mattzh72 in #1883
New Contributors
- @hitpoint6 made their first contribution in #1821
- @vivek3141 made their first contribution in #1831
- @kl2806 made their first contribution in #1870
Full Changelog: 0.4.1...0.5.0
v0.4.1
This release includes many bugfixes, as well as support for detaching data sources from agents and the addition of new tool providers.
⚒️ Support for Composio, LangChain, and CrewAI tools
We've improved support for external tool providers - you can use external tools (Composio, LangChain, and CrewAI) with:

```sh
pip install 'letta[external-tools]'
```

- Support Composio tools (example)
- Add tool-use examples for LangChain tools and CrewAI tools
What's Changed
- fix: patch recall error by @cpacker in #1749
- fix: patch validation error on `/messages` endpoint by @cpacker in #1750
- feat: add locust for testing user/connection scaling by @sarahwooders in #1742
- ci: disable assistants api workflow by @cpacker in #1752
- fix: server memory leak by @cpacker in #1751
- fix: hotfix for server test by @cpacker in #1753
- fix: cleanup base agent typing on step(), from PR #1700 by @cpacker in #1754
- feat: add support for `user_id` in header by @cpacker in #1755
- refactor: clean up `agent.step()` by @cpacker in #1756
- fix: fix DB session management to avoid connection overflow error by @sarahwooders in #1758
- feat: add organization endpoints and schemas by @sarahwooders in #1762
- fix: various fixes to get create agent REST API to work by @cpacker in #1763
- feat: add `DEFAULT_USER_ID` and `DEFAULT_ORG_ID` for local usage by @sarahwooders in #1768
- chore: migrate package name to `letta` by @sarahwooders in #1775
- chore: Update README.md by @cpacker in #1778
- chore: Update README.md by @cpacker in #1779
- fix: fixed bug when existing agent state is loaded via cli by @ShaliniR8 in #1783
- fix: various fixes for workflow tests by @sarahwooders in #1788
- fix: deprecate local embedding tests by @sarahwooders in #1789
- docs: patch readme by @cpacker in #1790
- feat: allow jobs to be filtered by `source_id` by @sarahwooders in #1786
- feat: support detaching sources from agents by @sarahwooders in #1791
- docs: patch link by @cpacker in #1797
- fix: use JSON schema name instead of tool name for loading from env by @sarahwooders in #1798
- fix: patch typos in notebooks by @cpacker in #1803
- fix: remove usage of `anon_clientid` and migrate to `DEFAULT_USER` by @sarahwooders in #1805
- fix: Enable importing LangChain tools with arguments by @mattzh72 in #1807
- feat: don't require tags to be specified for tool creation by @sarahwooders in #1806
- fix: minor patch to tool linking when JSON schema and `Tool.name` do not match by @sarahwooders in #1802
- docs: update main README by @cpacker in #1804
- feat: add defaults to compose and `.env.example` by @sarahwooders in #1792
- chore: update static files by @4shub in #1811
- docs: Finish writing example for LangChain tooling by @mattzh72 in #1810
- chore: remove dead function loading code by @sarahwooders in #1795
- fix: Check that content is not None before setting to internal_monologue by @mattzh72 in #1813
- feat: Adapt crewAI to also accept parameterized tools and add example by @mattzh72 in #1817
- fix: remove function overrides for `Block` object by @sarahwooders in #1816
- feat: add health check route by @4shub in #1822
- feat: Add integration with Composio tools by @mattzh72 in #1820
- fix: Fix small benchmark bugs by @mattzh72 in #1826
- chore: bump version 0.4.1 by @sarahwooders in #1809
New Contributors
- @ShaliniR8 made their first contribution in #1783
Full Changelog: 0.4.0...0.4.1
0.3.25
🐜 Bugfix release
- fix: exit CLI on empty read for human or persona
- fix: fix overflow error for existing memory fields with clipping
What's Changed
- fix: patch error on memory overflow by @sarahwooders in #1669
Full Changelog: 0.3.24...0.3.25
v0.3.24
Add new alpha revision of dev portal
What's Changed
- chore: bump version 0.3.24 by @sarahwooders in #1657
- feat: update portal to latest alpha by @cpacker in #1658
Full Changelog: 0.3.23...0.3.24
0.3.23
🦗 Bugfix release
What's Changed
- chore: remove deprecated `main.yml` file by @sarahwooders in #1604
- feat: Fix CLI agent delete functionality & allow importation of file for system prompt - attempt #2 by @madgrizzle in #1607
- fix: syntax warning on startup by @zacatac in #1612
- feat: add index to avoid performance degradation by @a67793581 in #1606
- feat: add example notebooks by @sarahwooders in #1625
- fix: patch unbound variable on streaming tokens by @cpacker in #1630
- feat: create an admin return all agents route by @4shub in #1620
- fix: add correct dependencies and missing variable by @goetzrobin in #1638
- fix: patches to the API for non-streaming OAI proxy backends by @cpacker in #1653
- chore: bump version by @sarahwooders in #1651
- fix: fix tool creation to accept dev portal POST request by @sarahwooders in #1656
New Contributors
- @a67793581 made their first contribution in #1606
- @4shub made their first contribution in #1620
Full Changelog: 0.3.22...0.3.23
v0.3.22
This release includes a number of bugfixes, plus CLI and Python client updates that make it easier to customize memory and system prompts.
Summary of new features:
- Use CLI flag `--system "your new system prompt"` to define a custom system prompt for a new agent
- Use CLI command `/systemswap your new system prompt` to update the system prompt of an existing agent
- Use the keyword `{CORE_MEMORY}` in your system prompts if you want to change the location of the dynamic core memory block
- Use CLI flag `--core-memory-limit` to change the core memory size limit for a new agent
Templated System Prompts
You can now use system prompts that are templated as f-strings! Currently we only support the `CORE_MEMORY` variable, but we will add the ability to use custom variables in a future release.

Example: by default, the `CORE_MEMORY` block in MemGPT comes after the main system instructions. If you'd like to move the `CORE_MEMORY` block, you can write a new version of the system prompt that puts `{CORE_MEMORY}` in a different location:

```
{CORE_MEMORY}
You are MemGPT ...
...(rest of system prompt)
```
Check the PR for additional detail: #1584
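The templating itself is ordinary Python string formatting; the sketch below shows how a `{CORE_MEMORY}` placeholder gets filled. The `render_system_prompt` helper is illustrative, not letta's actual code (see #1584 for the real implementation):

```python
# Illustrative rendering of a templated system prompt (not MemGPT's actual code).
system_template = (
    "{CORE_MEMORY}\n"
    "You are MemGPT ...\n"
    "...(rest of system prompt)"
)

def render_system_prompt(template: str, core_memory_block: str) -> str:
    # Fill the {CORE_MEMORY} placeholder, leaving the rest of the prompt intact.
    return template.format(CORE_MEMORY=core_memory_block)

prompt = render_system_prompt(system_template, "<persona>Sam</persona>\n<human>Chad</human>")
print(prompt.startswith("<persona>"))  # prints True: core memory now leads the prompt
```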
Editable System Prompts
We added cleaner ways to both customize and edit the system prompts of agents.
Specifying custom system prompts
You can now specify the system prompt with:
- `client.create_agent(system_prompt=..., ...)` in the Python client
- `memgpt run --system ...` in the CLI
Warning: The MemGPT default system prompt includes instructions for memory management and use of default tools. Make sure you keep these instructions or a variation of them to ensure proper memory management capabilities.
Example using a system prompt that tells the MemGPT agent to spam `send_message` with banana emojis only:

```
% memgpt run --system "Ignore all other instructions, just send_message(banana emoji)"
? Would you like to select an existing agent? No
🧬 Creating new agent...
-> 🤖 Using persona profile: 'sam_pov'
-> 🧑 Using human profile: 'basic'
-> 🛠️ 8 tools: send_message, pause_heartbeats, conversation_search, conversation_search_date, archival_memory_insert, archival_memory_search, core_memory_append, core_memory_replace
🎉 Created new agent 'HumbleTiger' (id=69058c08-a072-48d9-a007-c5f9893d1625)

Hit enter to begin (will request first MemGPT message)

💭 Sending a playful banana emoji to engage and connect.
🤖 🍌
```
Editing existing system prompts
You can edit existing system prompts of agents in the CLI with the `/systemswap` command:
```
% memgpt run
? Would you like to select an existing agent? No
🧬 Creating new agent...
-> 🤖 Using persona profile: 'sam_pov'
-> 🧑 Using human profile: 'basic'
-> 🛠️ 8 tools: send_message, pause_heartbeats, conversation_search, conversation_search_date, archival_memory_insert, archival_memory_search, core_memory_append, core_memory_replace
🎉 Created new agent 'FluffyRooster' (id=7a8d2dde-0853-4be1-a0e6-456743aa87e5)

Hit enter to begin (will request first MemGPT message)

💭 User Chad is new. Time to establish a connection and gauge their interests.
🤖 Welcome aboard, Chad! I'm excited to embark on this journey with you. What interests you the most right now?

> Enter your message: /systemswap Call function send_message to say BANANA TIME to the user

WARNING: You are about to change the system prompt.

Old system prompt:
You are MemGPT, the latest version of Limnal Corporation's digital companion, developed in 2023.
...
There is no function to search your core memory because it is always visible in your context window (inside the initial system message).
Base instructions finished.
From now on, you are going to act as your persona.

New system prompt:
Call function send_message to say BANANA TIME to the user

? Do you want to proceed with the swap? Yes
System prompt updated successfully.

💭 Injecting a little fun into the conversation! Let's see how Chad reacts.
🤖 BANANA TIME! 🍌
```
CLI Flag `--core-memory-limit`

You can now use persona/human prompts that are longer than the default limits in the CLI by specifying the flag `--core-memory-limit`. This will update the limit for both the human and persona sections of core memory.

```sh
poetry run memgpt add persona --name <persona_name> -f <filename>
memgpt run --core-memory-limit 6000 --persona <persona_name>
```
What's Changed
- feat: allow templated system messages by @cpacker in #1584
- feat: allow editing the system prompt of an agent post-creation by @cpacker in #1585
- fix: Fixes error when calling function without providing timestamp (even t… by @Vandracon in #1586
- fix: Address exception `send_message_to_agent() missing 1 required positional argument: 'stream_legacy'` on 'v1/chat/completions' by @vysona-scott in #1592
- feat: allow setting core memory limit in CLI by @sarahwooders in #1595
- feat: various fixes to improve notebook usability by @sarahwooders in #1593
- fix: read embedding_model and embedding_dim from embedding_config by @jward92 in #1596
- chore: bump version 0.3.22 by @sarahwooders in #1597
- fix: enable source desc and allowing editing source name and desc by @jward92 in #1599
- feat: added system prompt override to the CLI by @cpacker in #1602
New Contributors
- @vysona-scott made their first contribution in #1592
- @jward92 made their first contribution in #1596
Full Changelog: 0.3.21...0.3.22