Facing issues getting the correct response even after generating proper context for the user query, passing it in the system prompt, and using the simple chat engine to get the answers #17467
Replies: 4 comments 21 replies
-
Also, could someone guide me on how to write proper and accurate prompts for my use case?
-
If I directly use llm.achat, passing in the system prompt (the system prompt has the context in it, which can be a very large text, but not more than around 4k words), will it cause an issue? @dosu
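Whether ~4k words in a system prompt causes trouble depends on the model's context window. A minimal sketch of the arithmetic, assuming an 8k-token window and a rough 1.3 tokens-per-word ratio (a heuristic, not an exact tokenizer count):

```python
# Rough check of whether a large system prompt fits a model's context window.
# The 1.3 tokens-per-word ratio is a heuristic, not an exact tokenizer count.

def estimate_tokens(text: str, tokens_per_word: float = 1.3) -> int:
    """Approximate the token count of a prompt from its word count."""
    return int(len(text.split()) * tokens_per_word)

def fits_context(system_prompt: str, context_window: int = 8192,
                 reserved_for_output: int = 1024) -> bool:
    """Return True if the prompt still leaves room for the model's reply."""
    return estimate_tokens(system_prompt) + reserved_for_output <= context_window

# A ~4000-word system prompt is roughly 5200 tokens, so it fits an 8k window
# only because space is reserved for the answer.
prompt = "word " * 4000
print(fits_context(prompt))  # → True
```

So a 4k-word prompt is usually fine on larger-window models, but it eats most of a small window, leaving little room for chat history and the reply.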
-
If the token size is limited, how can we pass large contexts in system prompts, which will actually increase the token count? How can we handle this and still get accurate responses from the chat_engine? @dosu
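One way to keep a large retrieved context inside the token budget is to trim it before it goes into the system prompt. A sketch using word-based splitting as an approximation (a real tokenizer such as tiktoken would be more precise):

```python
# Sketch: trim retrieved context to a token budget before it goes into the
# system prompt, so the total stays inside the model's context window.
# Word-based splitting is an approximation of real tokenization.

def truncate_to_budget(context: str, max_tokens: int,
                       tokens_per_word: float = 1.3) -> str:
    """Keep only as many leading words as fit within max_tokens."""
    max_words = int(max_tokens / tokens_per_word)
    words = context.split()
    return " ".join(words[:max_words])

context = "chunk " * 10_000            # far too large for one prompt
trimmed = truncate_to_budget(context, max_tokens=3000)
print(len(trimmed.split()))            # → 2307
```

In practice it is usually better to retrieve less in the first place (e.g. a smaller `similarity_top_k` on the retriever) than to blindly truncate, since truncation can cut off the passage that actually answers the question.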
-
Getting this error: "2025-01-10 11:26:28 - app.api.services.chat_service - ERROR - Error in chat engine processing", while using this (snippet truncated):

```python
chat_message = ChatMessage(
```
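The snippet and traceback above are truncated, but one common cause of such errors is constructing the message without the role/content shape the engine expects. A minimal stand-in illustrating that shape (the real class is llama_index's `ChatMessage`; this dataclass only mirrors its two main fields and is not the actual implementation):

```python
from dataclasses import dataclass

# Stand-in mirroring the shape a chat message is expected to have:
# a role ("system" / "user" / "assistant") plus string content.

@dataclass
class ChatMessageSketch:
    role: str
    content: str

system_msg = ChatMessageSketch(role="system", content="You are a helpful assistant.")
user_msg = ChatMessageSketch(role="user", content="What does the context say?")
print(system_msg.role, user_msg.role)  # → system user
```

Logging the full exception (e.g. `logger.exception(...)` instead of `logger.error(...)`) would also surface the underlying traceback rather than just the one-line message.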
-
I initialised a context chat engine to get the context relevant to the question, and then used a simple chat engine to get the final response to the query.
Code:

```python
retriever = index.as_retriever()
context_engine = ContextChatEngine.from_defaults(
    retriever=retriever,
    llm=llm,
    memory=memory,
    context_template=get_context_prompt_template(),
)
context_str, _ = await context_engine._agenerate_context(last_message_content)
```
When I logged the system prompt, it had the full info inside, with the context, the user question, and every other detail and instruction I wanted to be followed in the response, but even so I failed to get the response.
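The two-stage flow described above boils down to formatting the retrieved context into a system prompt template and handing that single string to the second (simple) chat engine. A minimal sketch of that assembly step; the template text here is illustrative, not the one returned by `get_context_prompt_template()`:

```python
# Sketch of stage two: the context retrieved in stage one is formatted
# into a system prompt template before the simple chat engine sees it.
# CONTEXT_TEMPLATE is an illustrative example, not the user's actual template.

CONTEXT_TEMPLATE = (
    "You are an assistant. Answer using ONLY the context below.\n"
    "If the context does not contain the answer, say so.\n\n"
    "Context:\n{context_str}\n"
)

def build_system_prompt(context_str: str) -> str:
    """Fill the template with the context retrieved in stage one."""
    return CONTEXT_TEMPLATE.format(context_str=context_str)

prompt = build_system_prompt("The warranty lasts 12 months.")
print("12 months" in prompt)  # → True
```

Even with everything present in the prompt, instructions buried before a very long context can be ignored by the model; placing the instructions after the context, or repeating the key instruction in the user message, sometimes helps.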