cause: HeadersTimeoutError: Headers Timeout Error #72
Comments
Facing a similar issue.

Similar issue here. After a month of usage I hadn't encountered this issue until recently. Any known solutions?
This happens to me when […]
Happens to me as well! I am using llama3, and it only happens when I send a long message, and only when I use the npm package. I have llama3 installed on my machine, and when I run the same message through ollama directly it works. How can this be solved?
Googling the problem got me here. Same issue using llama3 on an M2 Ultra Mac Studio.

Same with llama3.
Same issue. I have investigated a bit, and it seems like this may be an issue with Ollama itself. I checked the server logs and am able to see the pull request.

It stops at exactly 5 minutes, which cannot be a coincidence: either the client times out, or the server does. After digging some more, I found that the session duration in Ollama is 5 minutes. So I don't believe this is an issue with this library per se. Either this library handles a retry, or we ask Ollama to increase this session time, whichever is easier.
I've opened this issue on their side. Let's see what they say. |
It looks like Ollama will look up the environment variable OLLAMA_KEEP_ALIVE and convert it to the keep-alive duration, so changing OLLAMA_KEEP_ALIVE may work.
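If the 5-minute keep-alive were the server-side culprit, it could be raised when starting the server. A minimal sketch, assuming a value of 1h for illustration (OLLAMA_KEEP_ALIVE accepts Go-style durations such as "5m" or "24h", or -1 to keep the model loaded indefinitely):

```shell
# Illustrative: keep loaded models alive for 1 hour instead of the default 5 minutes.
# OLLAMA_KEEP_ALIVE=-1 would keep the model loaded indefinitely.
export OLLAMA_KEEP_ALIVE=1h
ollama serve
```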
This error is client-side; it has nothing to do with any session timeout on the Ollama server.
In my case, this issue only appeared when not streaming, or when other requests had to be processed first, since without streaming the server doesn't send anything until the entire response is ready. I only tested it on Node.js; browsers probably behave differently. An easy solution is to just use another fetch implementation. I tested node-fetch:

```typescript
import { Ollama } from "ollama"
import fetch from "node-fetch"

const ollama = new Ollama({
  fetch: fetch as any
})
```

To test, I started a big request on an underpowered machine and logged start and end time: about 18 minutes request time on Node without streaming and without a headers timeout error. Be advised, in this case, changing to […]
Has there been any update on this?
I keep getting `cause: HeadersTimeoutError: Headers Timeout Error` when trying to request the homellm model. I was sometimes getting responses before, but now I always get a Headers Timeout error or "Expected a completed response".