-
Hi @Brenzle I moved this to the Discussions section because it isn't an issue with Smart Connections itself that needs to be solved. My understanding is that you're trying to use a local model. However, the screenshot indicates that Smart Chat is configured to use Open Router, a cloud service that runs open-source and other models. Running a local model does require some background knowledge about Ollama or similar software, and how to do that is a bit outside the scope of Smart Connections. While I can't help much further than this, here is a screenshot of what configuring Smart Chat to use a local model might look like: Hopefully, someone else from the community, someone with more experience implementing Ollama, will be able to help you further 😊 🌴
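If it helps, a quick way to sanity-check that a local Ollama server is reachable at all, independent of Smart Connections, is a small script like the sketch below. It assumes Ollama's default address (http://localhost:11434) and that a model named "llama3" has already been pulled; both are assumptions you may need to adjust.

```python
# Minimal sketch (not part of Smart Connections): confirm a local Ollama
# server answers before pointing Smart Chat at it.
# Assumes the default address http://localhost:11434 and a pulled model
# named "llama3" -- both are assumptions, adjust to your setup.
import json
import urllib.request

payload = {
    "model": "llama3",  # assumption: replace with the model you pulled
    "messages": [{"role": "user", "content": "Say hello in one word."}],
    "stream": False,    # request a single JSON response instead of a stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/chat",  # Ollama's native chat endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["message"]["content"])  # the model's reply text
```

If that prints a reply, the server side is working and any remaining problem is in the plugin configuration.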
-
I'd love some instructions on how to set up Ollama as well. I can download/chat/use it from the terminal; I'm just not certain how to connect it to Smart Chat.
-
I tried /api/chat (Ollama's API) and /v1/api.chat (Ollama supports OpenAI's API), and neither works in either Custom Local or Custom API.
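For reference, here is a sketch for probing both of Ollama's endpoints outside Obsidian, to rule out the server itself. Note that Ollama's OpenAI-compatible path is /v1/chat/completions rather than /v1/api.chat, which may be part of the problem. The port and model name are assumptions.

```python
# Sketch: probe both Ollama endpoints outside Obsidian to rule out the
# server itself. Assumes the default port 11434 and a pulled "llama3".
import json
import urllib.request

def post_json(url, payload):
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

messages = [{"role": "user", "content": "ping"}]

# Native Ollama API
native = post_json(
    "http://localhost:11434/api/chat",
    {"model": "llama3", "messages": messages, "stream": False},
)
print("native:", native["message"]["content"])

# OpenAI-compatible API (note the path: /v1/chat/completions)
openai_style = post_json(
    "http://localhost:11434/v1/chat/completions",
    {"model": "llama3", "messages": messages},
)
print("openai-compatible:", openai_style["choices"][0]["message"]["content"])
```

If both calls succeed, the endpoints are fine and the issue is in how the plugin's Custom Local / Custom API settings are filled in.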
-
Greetings all! New to the party, but I think I can help here. See the screenshot:
I'm on the latest version of your plugin.
-
I installed Llama 3 8B with Ollama, and Ollama runs when my PC starts (small icon in the taskbar). I can access it through CMD (I'm on Windows 10) and it works as expected (answers regular questions).
In Obsidian, I have the SC plugin version 2.1.66 installed, and in the SC conversation I set the model name as in the picture and an API key that I got from Open Router. Once I save, there's a popup saying "No Smart Connections found" and another saying "Success! API key is valid." Regular questions work, but if I ask for a note summary, the reply looks like the one in the picture. If I ask a "Based on my notes" question, it returns "context: <note_name1>, <note_name2>..." and keeps loading forever.
I'm not very familiar with this, and it seems to me like the readme file assumes some prior knowledge.
What am I doing wrong? I'd appreciate it if someone could point it out to me, and/or give me some steps to get Llama to work with my notes in the SC conversation.
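One thing I did try, in case it's useful: listing the model names my local Ollama server actually knows, to make sure the name typed into SC matches exactly. This sketch assumes Ollama's default address; that's an assumption on my part.

```python
# Sketch: list the model names the local Ollama server actually exposes,
# so the name entered in Smart Chat can be matched exactly.
# Assumes Ollama's default address http://localhost:11434 (an assumption).
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    tags = json.load(resp)

for model in tags.get("models", []):
    print(model["name"])  # e.g. "llama3:8b" -- use this exact string
```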