Replies: 4 comments 8 replies
- You can run the scraper locally, but the structured output is designed for OpenAI from what I can see. I think the problem will often be context size; this project uses a 32k context window.
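If context size is the blocker: Ollama's default context window is small, but it can be raised per request with the `num_ctx` option. A minimal sketch using the `ollama` Python package (the model name and prompt are just placeholders):

```python
# Minimal sketch: raise Ollama's context window to 32k for one request
# via the `num_ctx` option. Model name and prompt are placeholders.
import ollama

response = ollama.chat(
    model="mistral",
    messages=[{"role": "user", "content": "Summarize this long page: ..."}],
    options={"num_ctx": 32768},  # default is much smaller, varies by model
)
print(response["message"]["content"])
```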
- You could try https://ollama.com/library/nexusraven with function calling. There might be other models with bigger context windows, but I would need to research further. And Ollama is fully OpenAI-compatible, to my knowledge.
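For reference, Ollama serves an OpenAI-compatible endpoint at `/v1`, so the standard `openai` client can be pointed at a local server. A minimal sketch, assuming NexusRaven has already been pulled locally (e.g. `ollama pull nexusraven`):

```python
# Minimal sketch: use the OpenAI Python client against Ollama's local
# OpenAI-compatible endpoint. Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local Ollama server
    api_key="ollama",  # required by the client, ignored by Ollama
)

response = client.chat.completions.create(
    model="nexusraven",  # any model available locally
    messages=[{"role": "user", "content": "Extract the title from this page: ..."}],
)
print(response.choices[0].message.content)
```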
- Mistral 0.3 was released recently, and it includes structured output. I'm going to have a go at adding Ollama support and testing it.
- Just use the new Llama 3.1; I tested it and it works fine.
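For anyone who wants to reproduce this, Ollama's chat API takes a `format` parameter that constrains the model to emit valid JSON, which is what structured output needs here. A rough sketch with the `ollama` Python package and Llama 3.1 (the prompt and JSON keys are illustrative):

```python
# Rough sketch: ask a local Llama 3.1 for JSON-only output via
# Ollama's `format` option. The prompt and keys are illustrative.
import json
import ollama

response = ollama.chat(
    model="llama3.1",
    messages=[{
        "role": "user",
        "content": 'Return JSON with keys "title" and "links" for: ...',
    }],
    format="json",  # constrains the response to valid JSON
)

data = json.loads(response["message"]["content"])
print(data)
```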
- Hello from another project! Can this repo be run with a local scraper, relying on Ollama instead of paywalled systems? langgenius/dify#4565