-
Hi, I've hosted a local API gateway. It works fine and can be tested with the code below.
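For example, something like this (a minimal sketch; the model name and API key are placeholders for my actual setup):

```python
# Sketch of the gateway smoke test; model name and key are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://172.16.32.20:8089/api/pass/v4",  # the local gateway
    api_key="dummy",  # the gateway does not validate the key
)

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp.choices[0].message.content)
```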
I've set up Tabby's config accordingly.
Now the problem is that chat and generation stop working. Is there anything I missed? I'd really appreciate your help. Tabby version: 0.15.0
-
OpenAI SDK requires setting the base URL in a format similar to https://api.openai.com/v1. Therefore, in your case, it should be http://172.16.32.20:8089/api/pass/v4. Besides, a chat completion model endpoint is not really usable with model.completion.http. See https://tabby.tabbyml.com/docs/references/models-http-api/openai/ for details.
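For reference, a sketch of the chat section in ~/.tabby/config.toml under those assumptions (model_name and api_key are placeholders, not values from this thread; check the doc above for the exact schema):

```toml
# Hedged sketch: a chat model served through an OpenAI-compatible gateway.
[model.chat.http]
kind = "openai/chat"
model_name = "gpt-4o"
api_endpoint = "http://172.16.32.20:8089/api/pass/v4"
api_key = "dummy"
```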
-
Hi, thanks for the reply. I've tried that but still got no luck; there is no output. Can I turn on some debug logging to find out what's going on?
-
Hi @wsxiaoys, could you point me in a direction for how to debug this issue? Thanks a lot!
-
I also tested it with the OpenAI API, which works just fine.
Output:
~/.tabby/config.toml:
But I cannot get any response from the Tabby chat box.
-
I switched to Twinny and used its Ollama mode with this kind of URL. I think it would be better if Tabby provided some sort of debugging method, say curl-ing some HTTP address, to test whether the service is available.
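For example (a sketch; I'm assuming Tabby's default port 8080 and a /v1/health endpoint, both unverified assumptions on my part):

```python
# Hypothetical health probe against a locally running Tabby server.
# The /v1/health path and port 8080 are assumptions, not confirmed here.
import requests

resp = requests.get("http://localhost:8080/v1/health", timeout=5)
print(resp.status_code)
print(resp.json())  # expect version/build info if the server is up
```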