Releases: pfrankov/obsidian-local-gpt
1.14.4
1.14.3
1.14.2
1.14.1
1.14.0
1.13.1
1.13.0
🎉 PDF support for Enhanced Actions
Works only with text-based PDFs. No OCR.
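For reference, here is a minimal sketch of what text-layer extraction from a PDF can look like, assuming the pdfjs-dist library; this is an illustration, not the plugin's actual extraction code:

```ts
import { getDocument } from "pdfjs-dist";

// Pull the text layer out of a PDF, page by page. Scanned (image-only)
// PDFs have no text items, which is why OCR-less extraction only works
// for text-based PDFs.
async function extractPdfText(data: ArrayBuffer): Promise<string> {
  const pdf = await getDocument({ data }).promise;
  const pages: string[] = [];
  for (let i = 1; i <= pdf.numPages; i++) {
    const page = await pdf.getPage(i);
    const content = await page.getTextContent();
    pages.push(content.items.map((item: any) => item.str ?? "").join(" "));
  }
  return pages.join("\n\n");
}
```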
Persistent storage for Enhanced Actions cache
So it persists even after restarting Obsidian.
This significantly speeds up work with documents that have already been used for Enhanced Actions and have not changed.
Check out what the first and second calls look like for the same 8 nested documents (39 chunks).
Note: after changing the embedding model, the caches are reset.
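A minimal sketch of such a persistent cache, assuming localforage (IndexedDB) as the backing store; the key layout and names are illustrative, not the plugin's actual code:

```ts
import localforage from "localforage";

const store = localforage.createInstance({ name: "enhanced-actions-cache" });

// Key by embedding model, file path, and modification time, so a change to
// any of them produces a cache miss instead of stale vectors.
function cacheKey(model: string, path: string, mtime: number): string {
  return `${model}:${path}:${mtime}`;
}

async function getOrEmbed(
  model: string,
  path: string,
  mtime: number,
  embed: () => Promise<number[][]>
): Promise<number[][]> {
  const key = cacheKey(model, path, mtime);
  const cached = await store.getItem<number[][]>(key);
  if (cached) return cached; // second call: no re-embedding needed
  const vectors = await embed();
  await store.setItem(key, vectors); // survives an Obsidian restart
  return vectors;
}
```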
1.12.0
Migrated providers from fetch to remote.net.request. Closes #26
This avoids CORS issues and improves performance.
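For illustration, wrapping remote.net.request into a fetch-like helper could look roughly like this (the helper name and error handling are assumptions; Electron's net module issues requests from the main process's network stack, so the renderer's CORS checks do not apply):

```ts
import { remote } from "electron";

// Hypothetical promise-based wrapper around Electron's net.request.
function netFetch(url: string, body?: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const request = remote.net.request({ method: body ? "POST" : "GET", url });
    request.on("response", (response) => {
      const chunks: Buffer[] = [];
      response.on("data", (chunk) => chunks.push(chunk));
      response.on("end", () =>
        resolve(Buffer.concat(chunks).toString("utf8"))
      );
    });
    request.on("error", reject);
    if (body) {
      request.setHeader("Content-Type", "application/json");
      request.write(body);
    }
    request.end();
  });
}
```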
Refactored the AI provider and embedding functionality, and optimized model reloading
By default, the Ollama API has a 2048-token context limit, even for the largest models. So there are heuristics to provide the full context window when needed, as well as to optimize VRAM consumption.
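One such heuristic could look like this: raise Ollama's num_ctx option only when the prompt would overflow the 2048-token default. The 4-characters-per-token estimate and the helper names are assumptions, not the plugin's exact logic:

```ts
const DEFAULT_NUM_CTX = 2048;

// Crude token estimate; real tokenizers vary, but ~4 chars/token is a
// common rule of thumb for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

async function generate(model: string, prompt: string): Promise<string> {
  const needed = estimateTokens(prompt) + 512; // headroom for the answer
  const body: Record<string, unknown> = { model, prompt, stream: false };
  if (needed > DEFAULT_NUM_CTX) {
    // A larger num_ctx means a larger KV cache and more VRAM, and changing
    // it forces a model reload, so only request it when actually required.
    body.options = { num_ctx: needed };
  }
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    body: JSON.stringify(body),
  });
  return (await res.json()).response;
}
```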
Added cache invalidation after changing an embedding model
Before this change, the cache was not invalidated even if the embedding model was changed. That is critical because embeddings are not interchangeable between models.
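A compact sketch of the invalidation itself, reusing the hypothetical localforage store from the sketch above:

```ts
import localforage from "localforage";

const store = localforage.createInstance({ name: "enhanced-actions-cache" });

// Vectors produced by different embedding models live in different vector
// spaces, so comparing them is meaningless. The only safe option on a model
// change is to drop everything and re-embed lazily on the next request.
async function onEmbeddingModelChanged(oldModel: string, newModel: string): Promise<void> {
  if (newModel !== oldModel) {
    await store.clear();
  }
}
```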
Added prompt templating for context and selection
Context information is below.
{{=CONTEXT_START=}}
---------------------
{{=CONTEXT=}}
{{=CONTEXT_END=}}
---------------------
Given the context information and not prior knowledge, answer the query.
Query: {{=SELECTION=}}
Answer:
More about prompt templating can be found in prompt-templating.md.
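As an illustration, placeholder substitution could work like this; the handling of {{=CONTEXT_START=}}/{{=CONTEXT_END=}} (keeping that section only when some context was retrieved) is an assumption about their semantics, not the plugin's confirmed behavior:

```ts
// Render the prompt template for a given retrieved context and user selection.
function renderPrompt(template: string, context: string, selection: string): string {
  return template
    // Assumption: keep the START..END section only when context is non-empty.
    .replace(
      /\{\{=CONTEXT_START=\}\}([\s\S]*?)\{\{=CONTEXT_END=\}\}/,
      context ? "$1" : ""
    )
    .replace("{{=CONTEXT=}}", context)
    .replace("{{=SELECTION=}}", selection);
}
```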
1.11.0
1.10.0
🎉 Implemented Enhanced Actions
In other words, the ability to use context from links and backlinks: RAG (Retrieval-Augmented Generation).
The idea is to enhance your actions with relevant context, taken not from your entire vault but only from the related docs. It fits perfectly with Obsidian's philosophy of linked documents.
Now you can create richer articles while writing, produce more in-depth summaries of a whole topic, ask questions of your documents, translate texts without losing context, recap work meetings, or run brainstorming sessions on a given topic.
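Conceptually, the context gathering can be sketched with Obsidian's metadata cache as below; the function names are illustrative, and ranking chunks by cosine similarity is an assumption about the retrieval step:

```ts
import { App, TFile } from "obsidian";

// Collect files linked from the current note plus files that link back to it.
function linkedFiles(app: App, file: TFile): TFile[] {
  const resolved = app.metadataCache.resolvedLinks;
  const paths = new Set<string>(Object.keys(resolved[file.path] ?? {}));
  for (const [source, targets] of Object.entries(resolved)) {
    if (targets[file.path]) paths.add(source); // backlink
  }
  return [...paths]
    .map((p) => app.vault.getAbstractFileByPath(p))
    .filter((f): f is TFile => f instanceof TFile);
}

// Cosine similarity for ranking chunk embeddings against the selection.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}
```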
Share your applications of the Enhanced Actions in the Discussion.
Setup
1. You need to install an embedding model for Ollama:
   - For English: ollama pull nomic-embed-text (fastest)
   - For other languages: ollama pull bge-m3 (slower, but more accurate)

   Or just use text-embedding-3-large for OpenAI.
2. Select the embedding model in the plugin's settings
   Also try to use the largest Default model with the largest context window.
3. Select some text and run any action on it
   No additional steps are required. There is no progress indication for now, but you can check the quality of the results.