Releases: longy2k/obsidian-bmo-chatbot
1.8.9
Important Changes
- REST API URL uses `/chat/completions` endpoints. Make sure you add the correct URLs that lead to `/chat/completions`. For example, inserting `https://openrouter.ai/api/v1` will fetch `https://openrouter.ai/api/v1/chat/completions`. If you are using the default LM Studio URL, you can insert the REST API URL as `http://localhost:1234/v1`.
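The base-URL-to-endpoint mapping described above can be sketched as a small helper. This is a hypothetical illustration, not the plugin's actual code; the function name `buildChatEndpoint` is an assumption:

```typescript
// Hypothetical helper: normalize a user-supplied base URL into the full
// /chat/completions endpoint that OpenAI-compatible providers expose.
function buildChatEndpoint(baseUrl: string): string {
  // Strip any trailing slashes so we never produce "...//chat/completions".
  const trimmed = baseUrl.replace(/\/+$/, "");
  return `${trimmed}/chat/completions`;
}

// Examples matching the release notes:
// buildChatEndpoint("https://openrouter.ai/api/v1")
//   -> "https://openrouter.ai/api/v1/chat/completions"
// buildChatEndpoint("http://localhost:1234/v1")
//   -> "http://localhost:1234/v1/chat/completions"
```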
Features
- Replaced `marked` with Obsidian's Markdown rendering. This allows BMO Chatbot to render links, images, and other rich text elements (e.g. Obsidian Dataview, iframes, ...).
- Added Anthropic model: `claude-3-sonnet-20240229`.
Refactor
- The model list is refreshed every time the user opens BMO Settings.
- Anthropic API's Text Completions endpoint -> Anthropic API's Messages endpoint
Fixes
- 'Prompt Select Generate' and Title Rename are updated for Anthropic Models.
- Anthropic's user response no longer removes the first word.
1.8.8
Features
- Customizable chatbox
Improvements
- Changed temperature slider to a textfield.
- More precise values for temperature.
- Better error handling (e.g. API connection errors will display as a bot message).
- Better command responses (e.g. `/prompt` will display a bot message if the prompt path is not set).
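The switch from a temperature slider to a textfield implies parsing and clamping free-form input. A minimal sketch, assuming a helper named `parseTemperature` (not the plugin's actual code) and the API range of 0 to 2 noted in the 1.8.7 fixes:

```typescript
// Hypothetical sketch: parse a temperature textfield value and clamp it
// into the [0, 2] range accepted by the chat APIs.
function parseTemperature(input: string, fallback = 1): number {
  const value = Number.parseFloat(input);
  if (Number.isNaN(value)) return fallback; // invalid text falls back to the default
  return Math.min(2, Math.max(0, value));   // clamp into [0, 2]
}
```

A textfield accepts more precise values than a slider (e.g. `0.75`), which matches the "more precise values" improvement above.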
1.8.7
Add
- Bot Message now contains an Edit button.
- Google Gemini Pro's API Key.
- Mistral AI's API Key.
- Anthropic's API Key.
Changes
- The 'Prompt Select Generate' hotkey default is now `CMD+Shift+=`.
Fixes
- The referenced current note is cleared each time before getting the new reference note.
- Temperature max range is now set to 2.
- `/save` and `/append` now respond with a bot message when the commands are executed. This also fixes the issue where users cannot continue a conversation after `/save` or `/append`.
- Set max_tokens default to 4096 for REST API URLs, Mistral AI, and Google Gemini Pro.
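Applying a default only when the user has not set `max_tokens` can be sketched as follows. The interface and function names are assumptions for illustration; only the `max_tokens` field and the 4096 default come from the release notes:

```typescript
// Hypothetical request options following the OpenAI-style request body.
interface RequestOptions {
  max_tokens?: number;
}

// Fill in the 4096 default unless the caller supplied a value:
// the spread overrides the default when max_tokens is present.
function withMaxTokensDefault(options: RequestOptions): RequestOptions {
  return { max_tokens: 4096, ...options };
}
```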
1.8.6
1.8.5
1.8.4
1.8.3
1.8.2
1.8.1
Added
- Allow header option to display chatbot name and model name.
Refactor
- Added a new notice `generating...` for the 'Prompt Select Generate' command.
- LOCALAI REST API URL is now OPENAI REST API URL. This should support other LLM providers with OpenAI-compatible endpoints, such as LM Studio.
Fixes
- Ollama default URL is set to empty to prevent repeating connection errors.
- `/prompt` will send a new notice if the folder path is not set.
- The referenced current note is cleared before each response.
1.8.0
Features
- Append model's response button
- Regenerate model's response button
- Generate new title command
- 'Prompt Select Generate' command
  - Generates a response in the editor by creating a prompt, selecting it, and running the command.
- Prompt option
Fixes
- OpenAI-based URLs will list the proper models in the dropdown.
Deprecated
- `/list`
  - Use `/model` or `/models` to display the model list.
  - Use `/prompt` or `/prompts` to display the prompt list.
- Use