Throw more informative error when local model envs are/are not set #418

Merged

merged 1 commit into main on Nov 10, 2023

Conversation

sarahwooders
Collaborator

No description provided.
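The PR has no description, but its title ("Throw more informative error when local model envs are/are not set") suggests a fail-fast check before running against a local backend. A minimal sketch of that idea — the function name and backend list here are illustrative assumptions, though `OPENAI_API_BASE` and backends such as webui, lmstudio, ollama, and koboldcpp all appear later in this log:

```python
import os

# Illustrative sketch only: names below are assumptions, not MemGPT's API.
LOCAL_BACKENDS = {"webui", "lmstudio", "ollama", "koboldcpp"}

def check_local_model_env(backend: str) -> str:
    """Fail fast with an actionable message if the local-LLM env var is unset."""
    if backend not in LOCAL_BACKENDS:
        raise ValueError(f"Unknown local backend '{backend}'")
    value = os.getenv("OPENAI_API_BASE")
    if not value:
        raise RuntimeError(
            f"Backend '{backend}' requires the environment variable "
            f"OPENAI_API_BASE to be set, e.g. "
            f"'export OPENAI_API_BASE=http://localhost:5000'"
        )
    return value
```

The point is the error message: it names the missing variable and shows a concrete fix, instead of failing later with an opaque connection error.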

@cpacker cpacker (Collaborator) left a comment:

lgtm

@sarahwooders sarahwooders merged commit 3b3e8b4 into main Nov 10, 2023
2 checks passed
@cpacker cpacker deleted the warn_env_error branch November 10, 2023 21:17
maociao added a commit to maociao/MemGPT that referenced this pull request Dec 10, 2023
* Fix letta-ai#261 (letta-ai#300)

* should fix issue 261 - pickle fail on DotDict class

* black patch

---------

Co-authored-by: cpacker <packercharles@gmail.com>

* Add grammar-based sampling (for webui, llamacpp, and koboldcpp) (letta-ai#293)

* add llamacpp server support

* use gbnf loader

* cleanup and warning about grammar when not using llama.cpp

* added memgpt-specific grammar file

* add grammar support to webui api calls

* black

* typo

* add koboldcpp support

* no more defaulting to webui, should error out instead

* fix grammar

* patch kobold (testing, now working) + cleanup log messages

Co-Authored-By: Drake-AI <drake-ai@users.noreply.github.com>

* Bump version to 0.1.18-alpha.1

* fix: import PostgresStorageConnector only if postgres is selected as storage type (letta-ai#310)

* Don't import postgres storage if not specified in config (letta-ai#318)

* Aligned code with README that environment variable for Azure embeddings should be AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT (letta-ai#308)

* Fix: imported wrong storage connector  (letta-ai#320)

* Fix formatting in README.md

* Remove embeddings as argument in archival_memory.insert (letta-ai#284)

* Create docs pages (letta-ai#328)

* Create docs  (letta-ai#323)

* Create .readthedocs.yaml

* Update mkdocs.yml

* update

* revise

* syntax

* syntax

* syntax

* syntax

* revise

* revise

* spacing

* Docs (letta-ai#327)

* add stuff

* patch homepage

* more docs

* updated

* updated

* refresh

* refresh

* refresh

* update

* refresh

* refresh

* refresh

* refresh

* missing file

* refresh

* refresh

* refresh

* refresh

* fix black

* refresh

* refresh

* refresh

* refresh

* add readme for just the docs

* Update README.md

* add more data loading docs

* cleanup data sources

* refresh

* revised

* add search

* make prettier

* revised

* updated

* refresh

* favi

* updated

---------

Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>

* patch in-chat command info (letta-ai#332)

* Update chat_completion_proxy.py (letta-ai#326)

grammar_name has to be defined; if not, there's an issue with line 92

* cleanup letta-ai#326 (letta-ai#333)

* Stop the app from repeating the user message in normal use. (letta-ai#304)

- Removed repeating every user message as if in debug mode
- Re-added the "dump" flag for the user message, to make it look nicer.
  I may "reformat" other messages too when dumping, but that was what
  stuck out to me as unpleasant.

* Remove redundant docs from README (letta-ai#334)

* Fix README local LLM link

* Add autogen+localllm docs (letta-ai#335)

Co-authored-by: Jirito0 <jirito0@users.noreply.github.com>

* Update quickstart.md to show flag list properly

* Add `memgpt version` command and package version (letta-ai#336)

* add ollama support (letta-ai#314)

* untested

* patch

* updated

* clarified using tags in docs

* tested ollama, working

* fixed template issue by creating dummy template, also added missing context length indicator

* moved count_tokens to utils.py

* clean

* Better interface output for function calls (letta-ai#296)

Co-authored-by: Charles Packer <packercharles@gmail.com>

* Better error message printing for function call failing (letta-ai#291)

* Better error message printing for function call failing

* only one import traceback

* don't forward entire stack trace to memgpt

* Fixing some dict value checking for function_call (letta-ai#249)

* Specify model inference and embedding endpoint separately  (letta-ai#286)

* Fix config tests (letta-ai#343)

Co-authored-by: Vivian Fang <hi@vivi.sh>

* Avoid throwing error for older `~/.memgpt/config` files due to missing section `archival_storage` (letta-ai#344)

* avoid error if has old config type

* Dependency management  (letta-ai#337)

* Divides dependencies into `pip install pymemgpt[legacy,local,postgres,dev]`. 
* Update docs

* Relax verify_first_message_correctness to accept any function call (letta-ai#340)

* Relax verify_first_message_correctness to accept any function call

* Also allow missing internal monologue if request_heartbeat

* Cleanup

* get instead of raw dict access

* Update `poetry.lock` (letta-ai#346)

* mark deprecated API section

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* add readme

* CLI bug fixes for azure

* check azure before running

* Update README.md

* Update README.md

* bug fix with persona loading

* remove print

* make errors for cli flags more clear

* format

* fix imports

* fix imports

* add prints

* update lock

* Add autogen example that lets you chat with docs (letta-ai#342)

* Relax verify_first_message_correctness to accept any function call

* Also allow missing internal monologue if request_heartbeat

* Cleanup

* get instead of raw dict access

* Support attach in memgpt autogen agent

* Add docs example

* Add documentation, cleanup

* add gpt-4-turbo (letta-ai#349)

* add gpt-4-turbo

* add in another place

* change to 3.5 16k

* Revert relaxing verify_first_message_correctness, still add archival_memory_search as an exception (letta-ai#350)

* Revert "Relax verify_first_message_correctness to accept any function call (letta-ai#340)"

This reverts commit 30e9110.

* add archival_memory_search as an exception for verify

* Bump version to 0.1.18 (letta-ai#351)

* Remove `requirements.txt` and `requirements_local.txt` (letta-ai#358)

* update requirements to match poetry

* update with extras

* remove requirements

* disable pretty exceptions (letta-ai#367)

* Updated documentation for users (letta-ai#365)


---------

Co-authored-by: Vivian Fang <hi@vivi.sh>

* Create pull_request_template.md (letta-ai#368)

* Create pull_request_template.md

* Add pymemgpt-nightly workflow (letta-ai#373)

* Add pymemgpt-nightly workflow

* change token name

* Update lmstudio.md (letta-ai#382)

* Update lmstudio.md

* Update lmstudio.md

* Update lmstudio.md to show the Prompt Formatting Option (letta-ai#384)

* Update lmstudio.md to show the Prompt Formatting Option

* Update lmstudio.md Update the screenshot

* Swap asset location from letta-ai#384 (letta-ai#385)

* Update poetry with `pg8000` and include `pgvector` in docs  (letta-ai#390)

* Allow overriding config location with `MEMGPT_CONFIG_PATH` (letta-ai#383)

* Always default to local embeddings if not OpenAI or Azure  (letta-ai#387)

* Add support for larger archival memory stores (letta-ai#359)

* Replace `memgpt run` flags error with warning + remove custom embedding endpoint option + add agent create time (letta-ai#364)

* Update webui.md (letta-ai#397)

turn emoji warning into markdown warning

* Update webui.md (letta-ai#398)

* softpass test when keys are missing (letta-ai#369)

* softpass test when keys are missing

* update to use local model

* both openai and local

* typo

* fix

* don't hard-code embeddings

* formatting

* black

* add full deps

* remove changes

* update poetry

---------

Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
Co-authored-by: Vivian Fang <hi@vivi.sh>
Co-authored-by: MSZ-MGS <65172063+MSZ-MGS@users.noreply.github.com>

* Use `~/.memgpt/config` to set questionary defaults in `memgpt configure` + update tests to use specific config path (letta-ai#389)

* Dockerfile for running postgres locally (letta-ai#393)

* Return empty list if archival memory search over empty local index  (letta-ai#402)

* Remove AsyncAgent and async from cli (letta-ai#400)

* Remove AsyncAgent and async from cli

Refactor agent.py memory.py

Refactor interface.py

Refactor main.py

Refactor openai_tools.py

Refactor cli/cli.py

stray asyncs

save

make legacy embeddings not use async

Refactor presets

Remove deleted function from import

* remove stray prints

* typo

* another stray print

* patch test

---------

Co-authored-by: cpacker <packercharles@gmail.com>

* I added some json repairs that helped me with malformed messages (letta-ai#341)

* I added some json repairs that helped me with malformed messages

There are two of them: the first removes hard line feeds that appear
in the message part because the model added those instead of escaped
line feeds. This happens a lot in my experiments, and it actually fixes
them.

The second one is less tested and should handle the case where the model
answers with multiple blocks of strings in quotes, or even uses unescaped
quotes. It grabs everything between the `message: "` and the ending
curly braces, escapes it, and makes it proper JSON that way.

Disclaimer: both functions were written with the help of ChatGPT-4 (I
can't write much Python). I think the first one is quite solid but doubt
that the second one is fully working. Maybe somebody with more Python
skills than me (or with more time) has a better idea for this type of
malformed reply.
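The first repair described above — stripping hard line feeds that the model emitted inside a JSON string value — can be sketched roughly like this. This is an illustrative reconstruction, not the actual patch:

```python
# Sketch of the "hard line feed" repair: walk the raw text, and whenever a
# literal newline appears inside a quoted string, replace it with the escaped
# sequence \n so the result parses as JSON.
def repair_hard_linefeeds(raw: str) -> str:
    out = []
    in_string = False   # are we currently inside a quoted JSON string?
    escaped = False     # was the previous character an unescaped backslash?
    for ch in raw:
        if in_string and ch == "\n":
            out.append("\\n")  # escape the raw newline
            escaped = False
            continue
        if ch == '"' and not escaped:
            in_string = not in_string
        escaped = (ch == "\\") and not escaped
        out.append(ch)
    return "".join(out)
```

A repaired payload such as `{"message": "hello<newline>world"}` then round-trips through `json.loads` without error.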

* Moved the repair output behind the debug flag and removed the "clean" one

* Added even more fixes (out of what I just encountered while testing)

It seems that cut-off JSON can be corrected, and sometimes the model is too
lazy to add not just one curly brace but two. I think it does not "cost"
a lot to try them all out. But the exceptions get massive that way :)

* black

* for the final hail mary with extract_first_json, might as well add a double end bracket instead of single

---------

Co-authored-by: cpacker <packercharles@gmail.com>

* Fix max tokens constant (letta-ai#374)

* stripped LLM_MAX_TOKENS constant, instead it's a dictionary, and context_window is set via the config (defaults to 8k)

* pass context window in the calls to local llm APIs

* safety check

* remove dead imports

* context_length -> context_window

* add default for agent.load

* in configure, ask for the model context window if not specified via dictionary

* fix default, also make message about OPENAI_API_BASE missing more informative

* make openai default embedding if openai is default llm

* make openai on top of list

* typo

* also make local the default for embeddings if you're using localllm instead of the locallm endpoint

* provide --context_window flag to memgpt run

* fix runtime error

* stray comments

* stray comment
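The refactor in letta-ai#374 above — `LLM_MAX_TOKENS` becoming a per-model dictionary with an 8k default, overridable via config or the `--context_window` flag — could be sketched like this. The specific model entries are illustrative assumptions; the log only states that the constant became a dictionary defaulting to 8k:

```python
from typing import Optional

# Illustrative per-model table; only the 8k default is stated in the log.
LLM_MAX_TOKENS = {
    "DEFAULT": 8192,
    "gpt-4": 8192,
    "gpt-3.5-turbo": 4096,
    "gpt-3.5-turbo-16k": 16384,
}

def get_context_window(model: Optional[str], override: Optional[int] = None) -> int:
    """Resolve the context window: explicit override wins, then the table, then 8k."""
    if override is not None:  # e.g. a --context_window CLI flag
        return int(override)
    if model in LLM_MAX_TOKENS:
        return LLM_MAX_TOKENS[model]
    return LLM_MAX_TOKENS["DEFAULT"]
```

This also covers the safety checks mentioned above: an unknown or `None` model falls back to the default instead of raising.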

* [version] bump version to 0.2.0 (letta-ai#410)

* Fix main.yml to not rely on requirements.txt (letta-ai#411)

* Hotfix openai create all with context_window kwarg (letta-ai#413)

* fix agent load (letta-ai#412)

* Patch local LLMs with context_window (letta-ai#416)

* patch

* patch ollama

* patch lmstudio

* patch kobold

* Fix model configuration for when `config.model == "local"` previously  (letta-ai#415)

* fix agent load

* fix model config

* add errors to make sure envs set correctly (letta-ai#418)

* [version] bump version to 0.2.1 (letta-ai#417)

* fix memgptagent attach docs error (letta-ai#427)

Co-authored-by: Anjalee Sudasinghe <anjalee@codegen.net>

* [fix] remove asserts for `OPENAI_API_BASE` (letta-ai#432)

* remove asserts

* patch (letta-ai#435)

* patch letta-ai#428 (letta-ai#433)

* [version] bump release to 0.2.2 (letta-ai#436)

* fix config (letta-ai#438)

* Configurable presets to support easy extension of MemGPT's function set (letta-ai#420)

* partial

* working schema builder, tested that it matches the hand-written schemas

* correct another schema diff

* refactor

* basic working test

* refactored preset creation to use yaml files

* added docstring-parser

* add code for dynamic function linking in agent loading

* pretty schema diff printer

* support pulling from ~/.memgpt/functions/*.py

* clean

* allow looking for system prompts in ~/.memgpt/system_prompts

* create ~/.memgpt/system_prompts if it doesn't exist

* pull presets from ~/.memgpt/presets in addition to examples folder

* add support for loading agent configs that have additional keys
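The schema-builder idea above — deriving function schemas automatically so presets can pull user functions from `~/.memgpt/functions/*.py` — can be sketched as follows. This is a simplified illustration, not the actual implementation: it types every parameter as a string and takes only the first docstring line, whereas the real builder evidently uses `docstring-parser` and matches the hand-written schemas:

```python
import inspect

def build_schema(func) -> dict:
    """Derive an OpenAI-style function schema from a Python function.

    Simplified sketch: every parameter is typed as a string, and the
    description is just the first line of the docstring.
    """
    sig = inspect.signature(func)
    params = {
        name: {"type": "string"}
        for name in sig.parameters
        if name not in ("self", "agent")  # runtime-injected args, not user-facing
    }
    return {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip().split("\n")[0],
        "parameters": {
            "type": "object",
            "properties": params,
            "required": list(params),
        },
    }
```

With this shape, a user-supplied function (like the d20 example added to the docs later in this log) can be linked into a preset without hand-writing its JSON schema.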

---------

Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>

* WebSocket interface and basic `server.py` process (letta-ai#399)

* patch getargspec error (letta-ai#440)

* always cast `config.context_window` to `int` before use (letta-ai#444)

* always cast config.context_window to int before use

* extra code to be super safe if self.config.context_window is somehow None

* Refactor config + determine LLM via `config.model_endpoint_type` (letta-ai#422)

* update config fields

* cleanup config loading

* commit

* remove asserts

* refactor configure

* put into different functions

* add embedding default

* pass in config

* fixes

* allow overriding openai embedding endpoint

* black

* trying to patch tests (some circular import errors)

* update flags and docs

* patched support for local llms using endpoint and endpoint type passed via configs, not env vars

* missing files

* fix naming

* fix import

* fix two runtime errors

* patch ollama typo, move ollama model question pre-wrapper, modify question phrasing to include link to readthedocs, also have a default ollama model that has a tag included

* disable debug messages

* made error message for failed load more informative

* don't print dynamic linking function warning unless --debug

* updated tests to work with new cli workflow (disabled openai config test for now)

* added skips for tests when vars are missing

* update bad arg

* revise test to soft pass on empty string too

* don't run configure twice

* extend timeout (try to pass against nltk download)

* update defaults

* typo with endpoint type default

* patch runtime errors for when model is None

* catching another case of 'x in model' when model is None (preemptively)

* allow overrides to local llm related config params

* made model wrapper selection from a list vs raw input

* update test for select instead of input

* Fixed bug in endpoint when using local->openai selection, also added validation loop to manual endpoint entry

* updated error messages to be more informative with links to readthedocs

* add back gpt3.5-turbo

---------

Co-authored-by: cpacker <packercharles@gmail.com>

* patch bad merge

* patch websocket server after presets refactor

* Update config to include `memgpt_version` and re-run configuration for old versions on `memgpt run` (letta-ai#450)

* remove asserts

* store config versions and force update in some cases

* Add load and load_and_attach functions to memgpt autogen agent. (letta-ai#430)

* Add load and load_and_attach functions to memgpt autogen agent.

* Only recompute files if dataset does not exist.

* Update documentation [local LLMs, presets] (letta-ai#453)

* updated local llm documentation

* updated cli flags to be consistent with documentation

* added preset documentation

* update test to use new arg

* update test to use new arg

* missing .md file

* When default_mode_endpoint has a value, it needs to become model_endpoint. (letta-ai#452)

Co-authored-by: Oliver Smith <oliver.smith@superevilmegacorp.com>

* Upgrade workflows to Python 3.11 (letta-ai#441)

* use python 3.11

* change format

* [version] bump version to 0.2.3 (letta-ai#457)

* Set service context for llama index in `local.py`  (letta-ai#462)

* bump version

* set global context for llama index

* Update functions.md (letta-ai#461)

* bugfix for linking functions from ~/.memgpt/functions (letta-ai#463)

* Add d20 function example to readthedocs (letta-ai#464)

* Update functions.md

* Update functions.md

* move webui to new openai completions endpoint, but also provide existing functionality via webui-legacy backend (letta-ai#468)

* updated websocket protocol and server (letta-ai#473)

* Lancedb storage integration (letta-ai#455)

* Docs: Fix typos (letta-ai#477)

* Remove .DS_Store from agents list (letta-ai#485)

* Fix letta-ai#487 (summarize call uses OpenAI even with local LLM config) (letta-ai#488)

* use new chatcompletion function that takes agent config inside of summarize

* patch issue with model now missing

* patch web UI (letta-ai#484)

* patch web UI

* set truncation_length

* ANNA, an acronym for Adaptive Neural Network Assistant, which acts as your personal research assistant and is really good with archival documents and research. (letta-ai#494)

* vLLM support (letta-ai#492)

* init vllm (not tested), uses POST API not openai wrapper

* add to cli config list

* working vllm endpoint

* add model configuration for vllm

---------

Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>

* Add error handling during linking imports (letta-ai#495)

* Add error handling during linking imports

* correct typo + make error message even more explicit

* deadcode

* Fixes bugs with AutoGen implementation and examples (letta-ai#498)

* patched bugs in autogen agent example, updated autogen agent creation to follow agentconfig paradigm

* more fixes

* black

* fix bug in autoreply

* black

* pass default autoreply through to the memgpt autogen ConversableAgent subclass so that it doesn't leave empty messages, which can trigger errors in local llm backends like lmstudio

* update version (letta-ai#497)

* add new manual json parser meant to catch send_message calls with trailing bad extra chars (letta-ai#509)

* add new manual json parser meant to catch send_message calls with stray trailing chars, patch json error passing

* typo

* add a longer prefix to the default wrapper (letta-ai#510)

* add a longer prefix to the default wrapper (not just the opening brace, but up to the 'function: ' part since that is always present)

* drop print

* add core memory char limits to text shown in core memory (letta-ai#508)

* add core memory char limits to text shown in core memory

* include char limit in xml tag

* add flag to allow reverting to old version

* extra arg being passed causing a runtime error (letta-ai#517)

* Add warning if no data sources loaded on `/attach` command  (letta-ai#513)

* minor fix

* add warn instead of error for no data sources

* fix autogem to autogen (letta-ai#512)

* Update contributing guidelines  (letta-ai#516)

* update contributing

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

* Update CONTRIBUTING.md

* Update contributing.md (letta-ai#518)

* Update contributing.md (letta-ai#520)

* Add support for HuggingFace Text Embedding Inference endpoint for embeddings  (letta-ai#524)

* Update mkdocs theme, small fixes for `mkdocs.yml` (letta-ai#522)

* Update mkdocs.yml (letta-ai#525)

* Clean memory error messages (letta-ai#523)

* Raise a custom KeyError instead of a basic KeyError to clarify the issue to the LLM processor

* remove self value from error message passed to LLM processor

* simplify error message propagated to llm processor

* Fix class names used in persistence manager logging (letta-ai#503)

* Fix class names used in persistence manager logging

Signed-off-by: Claudio Cambra <developer@claudiocambra.com>

* Use self.__class__.__name__ for logging in different persistence managers

Signed-off-by: Claudio Cambra <developer@claudiocambra.com>

---------

Signed-off-by: Claudio Cambra <developer@claudiocambra.com>

* add autogen extra (letta-ai#530)

* Add `user` field for vLLM endpoint  (letta-ai#531)

* patched a bug where outputs of a regex extraction weren't getting cast back to string, causing an issue when the dict was then passed to json.dumps() (letta-ai#533)

* Update bug_report.md (letta-ai#532)

* Update bug_report.md

* LanceDB integration bug fixes and improvements (letta-ai#528)

* fixes

* update

* lint

* Remove `openai` package and migrate to requests (letta-ai#534)

* Update contributing.md (typo) (letta-ai#538)

* Run formatting checks with poetry (letta-ai#537)

* update black version

* add workflow dispatch

* Removing dead code + legacy commands  (letta-ai#536)

* Remove usage of `BACKEND_TYPE` (letta-ai#539)

* Update AutoGen documentation and notebook example (letta-ai#540)

* Update AutoGen documentation

* Update webui.md

* Update webui.md

* Update lmstudio.md

* Update lmstudio.md

* Update mkdocs.yml

* Update README.md

* Update README.md

* Update README.md

* Update autogen.md

* Update local_llm.md

* Update local_llm.md

* Update autogen.md

* Update autogen.md

* Update autogen.md

* refreshed the autogen examples + notebook (notebook is untested)

* unrelated patch of typo I noticed

* poetry remove pyautogen, then manually removed autogen extra in .toml

* add pdf dependency

---------

Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>

* Update local_llm.md (letta-ai#542)

* Documentation update (letta-ai#541)

* Update autogen.md

* Update autogen.md

* clean docs (letta-ai#543)

* Update autogen.md (letta-ai#544)

* update docs (letta-ai#547)

* update admonitions

* Update local_llm.md

* Update webui.md

* Update autogen.md

* Update storage.md

* Update example_chat.md

* Update example_data.md

* Update example_chat.md

* Update example_data.md

* added vLLM doc page since we support it (letta-ai#545)

* added vLLM doc page since we support it

* capitalization

* updated documentation

* Update vllm.md

* Update ollama.md

* Update ollama.md

* Update ollama.md

* Update autogen.md

* Fix vLLM endpoint to have correct suffix (letta-ai#548)

* minor fix

* fix vllm endpoint

* fix docs

* Add documentation for using Hugging Face models for embeddings  (letta-ai#549)

* Update README.md

* bump version (letta-ai#551)

* Add docs file for customizing embedding mode  (letta-ai#554)

* minor fix

* forgot to add embedding file

* Upgrade to `llama_index=0.9.10` (letta-ai#556)

* minor fix

* forgot to add embedding file

* upgrade llama index

* fix cannot import name 'EmptyIndex' from 'llama_index' (letta-ai#558)

* Update README.md

* Update storage.md (letta-ai#564)

fix typo

* use a consistent warning prefix across codebase (letta-ai#569)

* Update autogen.md to include Azure config example + patch for `pyautogen>=0.2.0` (letta-ai#555)

* Update autogen.md

* in groupchat example add an azure elif

* fixed missing azure mappings + corrected the gpt-4-turbo one

* Updated MemGPT AutoGen agent to take credentials and store them in the config (allows users to use memgpt+autogen without running memgpt configure), also patched api_base kwarg for autogen >=v0.2

* add note about 0.2 testing

* added overview to autogen integration page

* default examples to openai, sync config header between the two main examples, change speaker mode to round-robin in 2-way chat to suppress warning

* sync config header on last example (not used in docs)

* refactor to make sure we use existing config when writing out extra credentials

* fixed bug in local LLM where we need to comment out api_type (for pyautogen>=0.2.0)

* Update autogen.md

* Update autogen.md (letta-ai#571)

Update example config to match `pyautogen==0.2.0`

* Fix crash from bad key access into response_message without function_call (letta-ai#437)

Signed-off-by: Claudio Cambra <developer@claudiocambra.com>

* sort agents by directory-last-modified time (letta-ai#574)

* sort agents by directory-last-modified time

* only save agent config when agent is saved

---------

Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>

* Add safety check to pop (letta-ai#575)

* Add safety check to pop

* typo

* Add `pyyaml` package to `pyproject.toml` (letta-ai#557)

* add back dotdict for backcompat (letta-ai#572)

* Bump version to 0.2.6 (letta-ai#573)

* Update cli_faq.md

* Update cli_faq.md

* Update cli_faq.md

* allow passing `skip_verify` to autogen constructors (letta-ai#581)

* allow passing skip_verify to autogen constructors

* added flag to examples with a NOTE, also added to docs

* Chroma storage integration  (letta-ai#285)

* Fix `pyproject.toml` chroma version  (letta-ai#582)

* add initial postgres implementation

* working chroma loading

* add postgres tests

* working initial load into postgres and chroma

* add load index command

* semi working load index

* disgusting import code thanks to llama index's nasty APIs

* add postgres connector

* working postgres integration

* working local storage (changed saving)

* implement /attach

* remove old code

* split up storage connectors into multiple files

* remove unused code

* cleanup

* implement vector db loading

* cleanup state saving

* add chroma

* minor fix

* fix up chroma integration

* fix list error

* update dependencies

* update docs

* format

* cleanup

* forgot to add embedding file

* upgrade llama index

* fix data source naming bug

* remove legacy

* os import

* upgrade chroma version

* fix chroma package

* Remove broken tests from chroma merge (letta-ai#584)

* fix runtime error (letta-ai#586)

* Patch azure embeddings + handle azure deployments properly (letta-ai#594)

* Fix bug where embeddings endpoint was getting set to deployment, upgraded pinned llama-index to use new version that has azure endpoint

* updated documentation

* added memgpt example for openai

* change wording to match configure

---------

Signed-off-by: Claudio Cambra <developer@claudiocambra.com>
Co-authored-by: danx0r <danbmil99@gmail.com>
Co-authored-by: cpacker <packercharles@gmail.com>
Co-authored-by: Drake-AI <drake-ai@users.noreply.github.com>
Co-authored-by: Vivian Fang <hi@vivi.sh>
Co-authored-by: Robin Goetz <35136007+goetzrobin@users.noreply.github.com>
Co-authored-by: Sarah Wooders <sarahwooders@gmail.com>
Co-authored-by: Dividor <matthew@regolith.org>
Co-authored-by: borewik <borewik@gmail.com>
Co-authored-by: Hans Raaf <hara@oderwat.de>
Co-authored-by: Jirito0 <jirito0@users.noreply.github.com>
Co-authored-by: Mo Nuaimat <nuaimat2002@yahoo.com>
Co-authored-by: MSZ-MGS <65172063+MSZ-MGS@users.noreply.github.com>
Co-authored-by: Bob Kerns <1154903+BobKerns@users.noreply.github.com>
Co-authored-by: Anjalee Sudasinghe <42403668+anjaleeps@users.noreply.github.com>
Co-authored-by: Anjalee Sudasinghe <anjalee@codegen.net>
Co-authored-by: Wes <wryanmedford@gmail.com>
Co-authored-by: Oliver Smith <oliver@kfs.org>
Co-authored-by: Oliver Smith <oliver.smith@superevilmegacorp.com>
Co-authored-by: Prashant Dixit <54981696+PrashantDixit0@users.noreply.github.com>
Co-authored-by: sahusiddharth <112792547+sahusiddharth@users.noreply.github.com>
Co-authored-by: Max Blackmer, CSM <max@agiletechnologist.us>
Co-authored-by: Paul Asquin <paul.asquin@gmail.com>
Co-authored-by: Claudio Cambra <developer@claudiocambra.com>
Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
Co-authored-by: Alex Perez <alexperezdev@gmail.com>
mattzh72 pushed a commit that referenced this pull request Oct 9, 2024