I have installed ollama on a system with an Intel Arc A770 and loaded llama3.2:3b.
The initial loading of the model takes a long time, but it works.
Initial requests are answered successfully at ~1000 t/s. As the chat continues, things get weird: in the middle of a story, the text turned into JavaScript and then into pure garbage.
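For reference, a minimal reproduction sketch (my assumptions: ollama is serving on the default http://localhost:11434 and the model was already pulled with `ollama pull llama3.2:3b`). It keeps appending turns to the same chat via the `/api/chat` endpoint and prints the eval rate for each turn, so the point where the output degrades can be spotted:

```python
# Reproduction sketch, not the exact prompts I used.
# Assumes: ollama serving on the default port, llama3.2:3b already pulled.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # default ollama endpoint
MODEL = "llama3.2:3b"

messages = [{"role": "user", "content": "Tell me a long story about a lighthouse keeper."}]

for turn in range(10):
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "messages": messages, "stream": False},
        timeout=600,
    )
    resp.raise_for_status()
    data = resp.json()

    # eval_duration is reported in nanoseconds by the ollama API
    rate = data["eval_count"] / data["eval_duration"] * 1e9
    print(f"turn {turn}: {rate:.0f} t/s")
    print(data["message"]["content"][:200], "\n")

    # carry the assistant reply forward and ask for a continuation
    messages.append(data["message"])
    messages.append({"role": "user", "content": "Please continue the story."})
```

After a few turns the continuation stops matching the story and turns into the garbage shown below.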
(screenshot of the garbled output)
That's the deployment I used.
Logs:
Linking: