Replies: 6 comments 1 reply
-
Hey @Drahokma, what are you trying to do exactly? I see that you are using a RetrievalQAChain instead of a ConversationalRetrievalQAChain; the ConversationalRetrievalQAChain is the one designed to take the chat history into account.
You are storing the chat_history in VectorStoreRetrieverMemory, which looks correct. You are using the same redisVectorStore for the history and for the documents; is that on purpose?
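The error reported further down the thread (`message._getType is not a function`) is consistent with this mix-up: a memory that hands history back as plain text is being fed into a chat-model code path that expects message objects implementing `_getType()`. A minimal illustration of that mismatch, using hypothetical stand-in classes rather than langchain's real ones:

```javascript
// Stand-in for langchain's chat message classes: the chat model calls
// message._getType() on every history entry while building the prompt.
class HumanMessage {
  constructor(text) { this.text = text; }
  _getType() { return "human"; }
}

// What a BufferMemory with returnMessages: true hands to a chat model:
const messageHistory = [new HumanMessage("hi, i am bob")];

// What a retriever-backed memory effectively hands back: plain strings.
const textHistory = ["Human: hi, i am bob"];

// Rough analogue of the failing call site in chat_models/openai.js.
function buildPrompt(history) {
  return history.map((m) => `${m._getType()}: ${m.text}`);
}

console.log(buildPrompt(messageHistory)); // [ 'human: hi, i am bob' ]

try {
  buildPrompt(textHistory); // plain strings have no _getType method
} catch (e) {
  console.log(e instanceof TypeError); // true: same TypeError as in the thread
}
```

So the symptom itself points at the shape of what the memory returns, not at Redis.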
-
```javascript
const model = new ChatOpenAI({ temperature: 0 });
const tools = [new Calculator()];
const bufferMemory = new BufferMemory({
  returnMessages: true,
  memoryKey: "chat_history",
  chatHistory: new RedisChatMessageHistory({
    sessionId: "test_session_id",
    sessionTTL: 30000,
    config: {
      url: process.env.REDIS_URL,
    },
  }),
});
const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "chat-conversational-react-description",
  verbose: false,
  memory: bufferMemory,
});
```

Pretty similar to your code @Drahokma; it looks to be OK on my side, I got the history stored in Redis.
-
The full code:

```javascript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { Calculator } from "langchain/tools/calculator";
import { BufferMemory } from "langchain/memory";
import { RedisChatMessageHistory } from "langchain/stores/message/redis";

export const run = async () => {
  const model = new ChatOpenAI({ temperature: 0 });
  const tools = [new Calculator()];
  const bufferMemory = new BufferMemory({
    returnMessages: true,
    memoryKey: "chat_history",
    chatHistory: new RedisChatMessageHistory({
      sessionId: "test_session_id",
      sessionTTL: 30000,
      config: {
        url: process.env.REDIS_URL,
      },
    }),
  });
  const executor = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: "chat-conversational-react-description",
    verbose: false,
    memory: bufferMemory,
  });
  console.log("************************************");
  console.log("Loaded agent.");
  console.log("************************************");
  const input0 = "hi, i am bob";
  const result0 = await executor.call({ input: input0 });
  console.log(`Got input ${input0}`);
  console.log(`Got output ${result0.output}`);
  console.log("************************************");
  const input1 = "whats my name?";
  const result1 = await executor.call({ input: input1 });
  console.log(`Got input ${input1}`);
  console.log(`Got output ${result1.output}`);
};

await run();
```
-
Ok, thanks @borel. I will try to find where the problem is; according to your code it should be working.
-
@borel I used your code and it still only shows this:

```
Loaded agent.
Got input hi, i am bob
Got input whats my name?
```

I can see the messages saved in Redis via redis-commander, but somehow the chain is not able to work with them. It does work when I use the standard BufferMemory. What version of langchain do you have?
-
The problem is with the version of the docker image. Locally the chat history works. It seems the docker images currently ship an older version of langchain.js.
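A stale Docker layer cache can keep serving an old `npm install` result even after the lockfile changes. One way to make the installed version explicit is a sketch like the following (the base image, entrypoint, and file names are assumptions, not the poster's actual setup):

```dockerfile
FROM node:18-slim
WORKDIR /app
# Copy the manifests first: the install layer below is rebuilt whenever
# package-lock.json changes, instead of being reused from an old cache.
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
CMD ["node", "index.js"]
```

To rule out caching entirely, rebuild with `docker build --no-cache .`, and confirm the resolved version inside the container with `npm ls langchain`.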
-
Hello,
I'm trying to use Redis memory or vector-store-backed memory in the conversational agent, as described here:
https://js.langchain.com/docs/modules/agents/agents/action/conversational_agent
It should be possible under one condition: the memory key must be "chat_history".
When used, it then shows this error:

```
TypeError: message._getType is not a function
    at file:///app/node_modules/langchain/dist/chat_models/openai.js:267:51
```

This is my implementation:
```javascript
import { VectorStoreRetrieverMemory } from "langchain/memory";
import { RedisVectorStore } from "langchain/vectorstores/redis";
import { RetrievalQAChain } from "langchain/chains";
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { OpenAI } from "langchain/llms/openai";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ChainTool } from "langchain/tools";
import { OpenAIEmbeddings } from "langchain/embeddings";
import { RedisClientType } from "redis";
import { BufferMemory } from "langchain/memory";
import { RedisChatMessageHistory } from "langchain/stores/message/redis";

export const convAgent = async (client) => {
  const redisVectorStore = new RedisVectorStore(new OpenAIEmbeddings(), {
    indexName: "conversations",
    redisClient: client,
  });
  const memoryRedis = new BufferMemory({
    chatHistory: new RedisChatMessageHistory({
      sessionId: new Date().toISOString(),
      sessionTTL: 300,
      client,
    }),
    memoryKey: "chat_history",
  });
  const memory = new VectorStoreRetrieverMemory({
    // 1 is how many documents to return; you might want to return more, e.g. 4
    vectorStoreRetriever: redisVectorStore.asRetriever(1),
    memoryKey: "chat_history",
  });
  const retrievalChain = RetrievalQAChain.fromLLM(
    new ChatOpenAI({ temperature: 0, azureOpenAIApiDeploymentName: "gpt3-hci" }),
    redisVectorStore.asRetriever(5),
    memory
  );
  const qaTool = new ChainTool({
    name: "group-documents-store",
    description:
      "Vector Store with group documents - useful for when you need to ask questions about group documents.",
    chain: retrievalChain,
  });
  const tools = [qaTool];
  return await initializeAgentExecutorWithOptions(
    tools,
    new ChatOpenAI({ temperature: 0, azureOpenAIApiDeploymentName: "gpt3-hci" }),
    {
      agentType: "chat-conversational-react-description",
      memory,
    }
  );
};
```
Does anybody know what the problem could be? Or is it an issue with langchain?