using embeddings #181
Replies: 2 comments 2 replies
-
Here's my current thinking: this could all be handled in a JSON config per role, with a list of pointers to the embeddings to use. At least in my case, the reason I'm using shell-gpt vs. langchain etc. is the simplicity, and CSV embeddings would be fine for the foreseeable future. That's assuming my latest understanding of the #92 functionality. A rough sketch of what such a config could look like is below.
Anything I'm glossing over? I'd be happy to take a crack at this once user-defined roles are available in a stable branch, if not released.
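Purely as a sketch of that idea (none of these keys or helpers exist in shell_gpt today; the field names, file paths, and the pandas-based loader are placeholders), a per-role config with CSV pointers might look roughly like this:

```python
# Purely hypothetical sketch: none of these keys, paths, or helpers exist in
# shell_gpt; they only illustrate "a JSON config per role with a list of
# pointers to CSV embeddings".
import os

import pandas as pd

EXAMPLE_ROLE = {
    "name": "docs_assistant",
    "role": "You answer questions about my project's documentation.",
    "embeddings": [
        # Pointers to pre-computed embeddings, one CSV per source.
        "~/.config/shell_gpt/embeddings/docs.csv",
        "~/.config/shell_gpt/embeddings/faq.csv",
    ],
    "top_k": 5,  # how many matched chunks to inject into the prompt
}


def load_role_embeddings(role: dict) -> pd.DataFrame:
    """Concatenate every CSV the role points at into one in-memory dataframe."""
    frames = [pd.read_csv(os.path.expanduser(p)) for p in role["embeddings"]]
    return pd.concat(frames, ignore_index=True)
```

Keeping the pointers to plain CSVs means nothing heavier than pandas is needed to get the vectors into memory, which fits the simplicity argument above.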
-
I have been playing with the extended context capabilities of […]. Unfortunately, in order to use context / a vectorstore properly, shell_gpt would need to adopt an agent architecture. That is necessary for the AI to have 'thoughts' about what it should query its database for and to collect the relevant bits of information into a memory. Shell GPT's REPL mode is close to this, but a lot of the backend would have to be replaced with code that either replicates or wraps langchain, and shell_gpt would basically become an Auto-GPT clone (which would be nice, because Auto-GPT's terminal interface is terrible). I think the best thing @TheR1D has done is create an awesome UI.
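To make the 'thoughts → query → memory' loop concrete, here is a toy sketch of a retrieve-then-answer step with the backends stubbed out as callables. This is not shell_gpt's architecture and it doesn't use langchain; `complete`, `embed_query`, and `search_store` are hypothetical stand-ins for whatever chat model, embedding model, and vector store would actually be wired in:

```python
# Toy sketch of the retrieve-then-answer loop described above, not shell_gpt's
# actual architecture. `complete`, `embed_query`, and `search_store` are
# hypothetical stand-ins for the chat backend, embedding model, and vector store.
from typing import Callable, List


def answer_with_retrieval(
    question: str,
    complete: Callable[[str], str],
    embed_query: Callable[[str], List[float]],
    search_store: Callable[[List[float], int], List[str]],
    top_k: int = 5,
) -> str:
    # 1. Let the model "think" about what it should look up (the agent-ish step).
    search_query = complete(
        "Rewrite this question as a short search query for a documentation "
        f"database:\n{question}"
    )
    # 2. Retrieve the most relevant chunks from the vector store.
    chunks = search_store(embed_query(search_query), top_k)
    # 3. Answer with the retrieved chunks injected as context ("memory").
    context = "\n---\n".join(chunks)
    return complete(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
```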
-
In relation to #110 and #92, adding simple embedding search & context injection would help maximize use of the context window. OpenAI released some minimal example notebooks (1, 2) today that pull embeddings from a .csv and store them in memory as a dataframe.
Any thoughts on supporting basic embedding search in shell-gpt?
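For reference, the cookbook-style approach is small enough to sketch directly: load the CSV of precomputed embeddings into a dataframe, embed the query, and rank rows by cosine similarity. The column names, the `text-embedding-ada-002` model, and the openai>=1.0 client are assumptions here rather than anything shell-gpt provides:

```python
# Minimal sketch of the cookbook-style search: embeddings live in a CSV, get
# loaded into a dataframe, and a query is ranked by cosine similarity.
# Column names ("text", "embedding") and the model choice are assumptions.
import ast

import numpy as np
import pandas as pd
from openai import OpenAI  # openai>=1.0 client

client = OpenAI()


def load_embeddings(csv_path: str) -> pd.DataFrame:
    df = pd.read_csv(csv_path)
    # Embeddings are stored as stringified lists in the CSV.
    df["embedding"] = df["embedding"].apply(ast.literal_eval).apply(np.array)
    return df


def search(df: pd.DataFrame, query: str, top_k: int = 3) -> pd.DataFrame:
    response = client.embeddings.create(model="text-embedding-ada-002", input=query)
    q = np.array(response.data[0].embedding)
    df = df.copy()
    df["similarity"] = df["embedding"].apply(
        lambda e: float(np.dot(e, q) / (np.linalg.norm(e) * np.linalg.norm(q)))
    )
    return df.nlargest(top_k, "similarity")[["text", "similarity"]]
```

The text of the top-ranked rows would then simply be prepended to the prompt as extra context.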