RunnableParallel invoke function causes AttributeError: 'dict' object has no attribute 'replace' #17773
-
🤖 Hey @raghuldeva, great to see you back here! Hope all's been well on your end since our last chat.

Based on the error message you provided, the problem starts in the invoke method of RunnableParallel:

def invoke(
    self, input: Input, config: Optional[RunnableConfig] = None
) -> Dict[str, Any]:

Here, the same input is passed to every step of the mapping, so your retriever receives the entire input dictionary rather than the query string. The error you're encountering, AttributeError: 'dict' object has no attribute 'replace', is raised inside embed_query, which expects a plain string. To fix this issue, you need to modify the chain so that only the question string reaches the retriever, for example by extracting it with itemgetter("question") or a small lambda. There was a similar issue reported in the LangChain repository: AttributeError: 'dict' object has no attribute 'replace'.

I hope this helps! If you have any more questions or need further clarification, feel free to ask.
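A minimal sketch of that fix (retriever stands for the retriever from your chain; the question text is made up). itemgetter pulls the bare string out of the input dict before it reaches the retriever, so embed_query receives str:

from operator import itemgetter

from langchain_core.runnables import RunnableParallel

rag_chain_with_source = RunnableParallel(
    {
        # Extract the plain question string instead of forwarding the dict
        "context": itemgetter("question") | retriever,
        "question": itemgetter("question"),
    }
)

result = rag_chain_with_source.invoke({"question": "What is in the index?"})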
-
The input passed to "retriever1" is a dictionary object, and you need to convert it to a string.
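For instance, a sketch assuming the chain input is a dict with a "question" key, and with retriever1 as the retriever from the question:

from langchain_core.runnables import RunnableParallel

chain = RunnableParallel(
    {
        # The lambda is coerced to a runnable and hands retriever1 a plain string
        "context": (lambda x: x["question"]) | retriever1,
        "question": (lambda x: x["question"]),
    }
)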
-
I am getting the same error. This is definitely a bug in huggingface.py. I couldn't get it to work with @pechaut78's solution or @maximeperrindev's solution. I am using:

Here is my solution, which I found from #7818. It eliminates the need for from langchain.embeddings.huggingface import HuggingFaceEmbeddings:

from typing import List

import chromadb
from langchain.embeddings.base import Embeddings
from langchain_community.vectorstores import Chroma
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from sentence_transformers import SentenceTransformer


class CustomEmbeddings(Embeddings):
    def __init__(self, model_name: str):
        self.model = SentenceTransformer(model_name)

    def embed_documents(self, documents: List[str]) -> List[List[float]]:
        return [self.model.encode(d).tolist() for d in documents]

    def embed_query(self, query: str) -> List[float]:
        return self.model.encode([query])[0].tolist()


def get_chain():
    # embeddings_model_name, embeddings_server_url, embeddings_collection,
    # embeddings_top_k, prompt, and llm are defined elsewhere in my app
    embeddings = CustomEmbeddings(model_name=embeddings_model_name)
    # embeddings = HuggingFaceEmbeddings(model_name=embeddings_model_name)
    chromaClient = chromadb.HttpClient(host=embeddings_server_url)
    db = Chroma(
        client=chromaClient,
        collection_name=embeddings_collection,
        embedding_function=embeddings,
    )
    retriever = db.as_retriever(k=embeddings_top_k)
    rag_chain_from_docs = (
        RunnablePassthrough.assign(context=(lambda x: format_docs(x["context"])))
        | prompt
        | llm
        | StrOutputParser()
    )
    rag_chain_with_source = RunnableParallel(
        {"context": retriever, "question": RunnablePassthrough()}
    ).assign(answer=rag_chain_from_docs)
    return rag_chain_with_source


def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)


def generate_tokens(question):
    for tokens in get_chain().stream({"question": question}):
        yield tokens

I am new to this and still learning, so please let me know if you have any feedback on this solution.
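One note for anyone consuming generate_tokens: because the chain is a RunnableParallel with .assign, stream() yields dict patches rather than plain strings, so a caller would pick out the "answer" chunks. A hypothetical driver (the question text is made up):

for chunk in generate_tokens("What does the collection contain?"):
    # Each chunk is a partial dict, e.g. {"answer": "..."} or {"context": [...]}
    if "answer" in chunk:
        print(chunk["answer"], end="", flush=True)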
-
So in my environment, I had Ollama in a container, Chroma in a container, my Python code in a third container, and a fourth NextJS container, all running in Kubernetes. Here is a slightly more refined version of what I posted previously, using an ONNX-based embeddings/encoder model instead of SentenceTransformer (which I found was very bloated):

# Create an instance of the MyONNXMiniLM_L6_v2 embeddings model
embeddings = MyONNXMiniLM_L6_v2(model_path=models_directory)

# Create a Chroma client to connect to the embeddings database
chromaClient = chromadb.HttpClient(host=embeddings_server_url)

# Create a Chroma instance with the client, collection name, and the embeddings function
db = Chroma(
    client=chromaClient,
    collection_name=embeddings_collection,
    embedding_function=embeddings,
)

# Create a retriever over the embeddings database, with k as the number of top results to retrieve
retriever = db.as_retriever(k=embeddings_top_k)

# Create a RAG chain that consists of multiple runnables
rag_chain_from_docs = (
    RunnablePassthrough.assign(context=(lambda x: format_docs(x["context"])))  # Assign the formatted context to the 'context' key
    | prompt  # Use the prompt template to generate a prompt
    | llm  # Use the Ollama language model to generate an answer
    | StrOutputParser()  # Parse the output of the language model as a string
)

# Run the retriever and the question in parallel, then add the generated answer
rag_chain_with_source = RunnableParallel(
    {"context": retriever, "question": RunnablePassthrough()}  # Assign the retriever and the question to the respective keys
).assign(answer=rag_chain_from_docs)

But to answer your question, I used this pip package:

I was creating a simple RAG application with a NextJS frontend using LangChain, Chroma, and Ollama, running in Kubernetes. You could do this without Kubernetes, and with or without containers.
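MyONNXMiniLM_L6_v2 isn't shown in the post; below is a hypothetical sketch of what such a wrapper could look like (the class name comes from the post, but the model.onnx filename, tokenizer layout, and mean pooling are assumptions), exposing an onnxruntime model through the LangChain Embeddings interface:

from typing import List

import numpy as np
import onnxruntime as ort
from langchain.embeddings.base import Embeddings
from transformers import AutoTokenizer


class MyONNXMiniLM_L6_v2(Embeddings):
    def __init__(self, model_path: str):
        # Assumed layout: model.onnx plus tokenizer files inside model_path
        self.tokenizer = AutoTokenizer.from_pretrained(model_path)
        self.session = ort.InferenceSession(f"{model_path}/model.onnx")

    def _encode(self, texts: List[str]) -> np.ndarray:
        enc = self.tokenizer(texts, padding=True, truncation=True, return_tensors="np")
        # Feed only the inputs the ONNX graph actually declares
        input_names = {i.name for i in self.session.get_inputs()}
        feed = {k: v for k, v in enc.items() if k in input_names}
        token_embeddings = self.session.run(None, feed)[0]  # (batch, seq, hidden)
        mask = enc["attention_mask"][..., None].astype(np.float32)
        # Mean-pool over non-padding tokens
        return (token_embeddings * mask).sum(axis=1) / np.clip(mask.sum(axis=1), 1e-9, None)

    def embed_documents(self, documents: List[str]) -> List[List[float]]:
        return self._encode(documents).tolist()

    def embed_query(self, query: str) -> List[float]:
        return self._encode([query])[0].tolist()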
-
Was an issue ever opened for this? It is still happening in the current version of langchain-huggingface.
-
Checked other resources
Commit to Help
Example Code
Description
I'm trying to use ElasticsearchStore to retrieve documents, pass them to the chain, and get an answer from the LLM along with the source documents.
When I try to use RunnableParallel and invoke it, the error I see is:
File "/EC_chainlit/inference.py", line 117, in
print("RAG chin with source : ", rag_chain_with_source.invoke(query))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2053, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "lib/python3.11/site-packages/langchain_core/runnables/passthrough.py", line 415, in invoke
return self._call_with_config(self._invoke, input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1246, in _call_with_config
context.run(
File "//lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "//lib/python3.11/site-packages/langchain_core/runnables/passthrough.py", line 402, in _invoke
**self.mapper.invoke(
^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2692, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2692, in
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^
File "/lib/python3.11/concurrent/futures/_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2053, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2692, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2692, in
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^
File "/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/langchain_core/retrievers.py", line 121, in invoke
return self.get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/langchain_core/retrievers.py", line 224, in get_relevant_documents
raise e
File "/lib/python3.11/site-packages/langchain_core/retrievers.py", line 217, in get_relevant_documents
result = self._get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 654, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/langchain_community/vectorstores/elasticsearch.py", line 632, in similarity_search
results = self._search(
^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/langchain_community/vectorstores/elasticsearch.py", line 796, in _search
query_vector = self.embedding.embed_query(query)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.11/site-packages/langchain_community/embeddings/huggingface.py", line 269, in embed_query
text = text.replace("\n", " ")
^^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'replace'
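The last two frames show the root cause: the whole input dict reaches embed_query, which expects a string. For reference, a minimal sketch of the usual workaround (rag_chain_from_docs and retriever as in the replies above; the question text is made up): invoke the parallel map with the bare query string and let RunnablePassthrough carry it:

from langchain_core.runnables import RunnableParallel, RunnablePassthrough

rag_chain_with_source = RunnableParallel(
    {"context": retriever, "question": RunnablePassthrough()}
).assign(answer=rag_chain_from_docs)

# A plain string means the retriever (and therefore embed_query) sees str,
# not the input dict
print(rag_chain_with_source.invoke("What does the document say?"))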
System Info
python == 3.11.4
langchain == 0.1.7
langchain-community == 0.0.20
elasticsearch == 8.10.0