When I use the "qwen2.5" LLM model in the example file using_ollama_as_llm_and_embedding.py, I am unable to extract any entities. After running the command:

I waited for 4 hours and received the following logs:

INFO:httpx:HTTP Request: POST http://127.0.0.1:11434/api/chat "HTTP/1.1 200 OK"
⠹ Processed 42 (100%) chunks, 0 entities (duplicated), 0 relations (duplicated)
WARNING:nano-graphrag: Didn't extract any entities; maybe your LLM is not working.
WARNING:nano-graphrag: No new entities found.

It appears that no entities were extracted, and the warnings indicate a potential issue with the LLM model.
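For reference, the relevant part of using_ollama_as_llm_and_embedding.py is a custom async model function that calls the local Ollama server and is passed into GraphRAG. The sketch below is a simplified reconstruction, not the exact file: the parameter names best_model_func and cheap_model_func, the model-function signature, and the working directory are assumptions based on the version of the example I ran, and the Ollama embedding function is omitted for brevity.

```python
# Simplified reconstruction of the Ollama wiring (assumptions noted above);
# the real example also swaps in an Ollama embedding function.
import ollama
from nano_graphrag import GraphRAG

MODEL = "qwen2.5"  # the model tag pulled into the local Ollama server

async def ollama_model(prompt, system_prompt=None, history_messages=[], **kwargs) -> str:
    # Assemble the chat history: optional system prompt, prior turns, current prompt.
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.extend(history_messages)
    messages.append({"role": "user", "content": prompt})

    # ollama.AsyncClient().chat talks to the same /api/chat endpoint seen in the logs.
    response = await ollama.AsyncClient().chat(model=MODEL, messages=messages)
    return response["message"]["content"]

rag = GraphRAG(
    working_dir="./nano_graphrag_cache",  # hypothetical path, for illustration only
    best_model_func=ollama_model,
    cheap_model_func=ollama_model,
)
rag.insert("Some document text to index.")  # entity extraction happens during insert
```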
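As a quick sanity check that the endpoint in the INFO log line actually returns completions for qwen2.5, a single request can be sent straight to the Ollama chat API. This is a minimal sketch: the URL and model name are taken from the logs above, and the prompt is an arbitrary test string.

```python
import httpx

# URL and model name come from the log line above; the request itself is a
# throwaway prompt just to confirm the server produces an assistant reply.
payload = {
    "model": "qwen2.5",
    "messages": [{"role": "user", "content": "Reply with the single word: ok"}],
    "stream": False,
}
resp = httpx.post("http://127.0.0.1:11434/api/chat", json=payload, timeout=120.0)
resp.raise_for_status()
print(resp.json()["message"]["content"])  # an empty or error reply would explain the warning
```

If this returns a normal reply, the Ollama server itself is responding, and the "maybe your LLM is not working" warning points to something in how the extraction step handles this model's output rather than to the server.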