🐛 Describe the bug

When using mem0's chat completion feature with the Ollama config shown below, the retrieved memories in relevant_memories that get sent to _format_query_with_memories are nested one layer further down, under a key called "results". When I change the code on line 181 to iterate over relevant_memories['results'] instead, everything works as expected:
- memories_text = "\n".join(memory["memory"] for memory in relevant_memories)
+ memories_text = "\n".join(memory["memory"] for memory in relevant_memories['results'])
I'm not sure how this should be handled properly so that it also works with all the other providers, as I'm fairly new to Mem0, but I thought I should at least file an issue since I couldn't find anything about this so far.
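For completeness, here is a sketch of how the surrounding function could tolerate both return shapes. Only the "results" handling reflects the change I actually made; the function's signature and the returned prompt string are assumptions on my part, not copied from mem0/proxy/main.py:

# Provider-tolerant sketch of the function around line 181 of mem0/proxy/main.py.
# Only the dict/list handling is the real change; everything else is assumed.
def _format_query_with_memories(messages, relevant_memories):
    # v1.1-style search responses wrap the hits in {"results": [...]};
    # older list-style responses are passed through unchanged.
    if isinstance(relevant_memories, dict):
        hits = relevant_memories.get("results", [])
    else:
        hits = relevant_memories
    memories_text = "\n".join(memory["memory"] for memory in hits)
    # Assumed prompt format: prepend the memories to the latest user message.
    return f"- Relevant Memories/Facts: {memories_text}\n\n- User Question: {messages[-1]['content']}"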
My code for reference:
import logging

from mem0.proxy.main import Mem0

handler = logging.FileHandler(filename="ai-assistant.log", mode='a')
logging.basicConfig(
    handlers=[handler],
    format='%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s',
    datefmt='%H:%M:%S',
    level=logging.INFO,
)
config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1",
            "temperature": 0.1,
            "max_tokens": 128000,
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j://localhost:7687",
            "username": "neo4j",
            "password": "***",
        },
        "llm": {
            "provider": "ollama",
            "config": {
                "model": "llama3.1",
                "temperature": 0.0,
                "max_tokens": 128000,
                "ollama_base_url": "http://localhost:11434",
            },
        },
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "mem0-test",
            "host": "localhost",
            "port": 6333,
            "embedding_model_dims": 768,
            "on_disk": True,
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text",
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "version": "v1.1",
}
client = Mem0(config=config)
user_id = "aiquen"

# Ask the user for an input
message = input("Welcome to Mem0 AI-Assistant, how can I help you? > ")
while True:
    # Use the input to re-create the messages list each time
    messages = [
        {
            "role": "user",
            "content": f"{message}",
        }
    ]
    # Create a chat completion
    chat_completion = client.chat.completions.create(
        messages=messages, user_id=user_id, model="ollama/llama3.1"
    )
    # Print the answer from the chat completion
    print(f"Assistant: {chat_completion.choices[0].message.content}")
    message = input("> ")
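In case it helps with triage, the extra nesting can be reproduced by querying the Memory layer directly with the same config. A minimal sketch (assuming the v1.1 response shape I observed with this Qdrant/Ollama setup; I have not verified other providers):

# Inspect the search return shape, reusing the same `config` dict as above.
from mem0 import Memory

m = Memory.from_config(config)
m.add("I prefer dark roast coffee.", user_id="aiquen")

hits = m.search("What coffee does the user like?", user_id="aiquen")
print(type(hits))          # a dict, not a plain list
print(hits["results"][0])  # each hit includes a "memory" field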