Releases: longy2k/obsidian-bmo-chatbot

1.8.9

04 Mar 17:56

Important Changes

  • REST API URL uses /chat/completions endpoints. Make sure you add the correct URL that leads to /chat/completions. For example, inserting https://openrouter.ai/api/v1 will fetch https://openrouter.ai/api/v1/chat/completions. If you are using the default LM Studio URL, you can insert the REST API URL as http://localhost:1234/v1 (see the sketch below).
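
For illustration, here is a minimal sketch of how a base URL such as https://openrouter.ai/api/v1 or http://localhost:1234/v1 could be resolved into a /chat/completions request. The chatCompletions helper, its parameters, and the model id are assumptions for this example, not the plugin's actual code.

```typescript
// Minimal sketch: resolve the configured REST API URL to its /chat/completions
// endpoint and send an OpenAI-style chat request. Illustrative only.
async function chatCompletions(baseUrl: string, apiKey: string, prompt: string): Promise<string> {
    // "https://openrouter.ai/api/v1" -> "https://openrouter.ai/api/v1/chat/completions"
    const endpoint = baseUrl.replace(/\/+$/, "") + "/chat/completions";

    const response = await fetch(endpoint, {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "Authorization": `Bearer ${apiKey}`, // LM Studio ignores the key; hosted providers need it
        },
        body: JSON.stringify({
            model: "gpt-3.5-turbo", // placeholder: any model id the provider accepts
            messages: [{ role: "user", content: prompt }],
        }),
    });

    const data = await response.json();
    return data.choices[0].message.content;
}
```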

Features

  • Replaced the `marked` library with Obsidian's Markdown renderer. This allows BMO Chatbot to render links, images, and other rich text elements (e.g. Obsidian Dataview, iframe, ...). See the sketch after this list.
  • Added Anthropic models: claude-3-opus-20240229 and claude-3-sonnet-20240229.
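
As a rough sketch of the rendering change above, assuming Obsidian's MarkdownRenderer API and an illustrative renderBotMessage helper (not the plugin's actual code):

```typescript
import { Component, MarkdownRenderer } from "obsidian";

// Sketch: render a bot message with Obsidian's Markdown renderer instead of the
// `marked` library, so Dataview blocks, iframes, links, and images render as
// they would inside a normal note. Names are illustrative.
async function renderBotMessage(
    messageEl: HTMLElement,
    markdown: string,
    sourcePath: string,
    owner: Component
): Promise<void> {
    messageEl.empty(); // clear any previously rendered content
    await MarkdownRenderer.renderMarkdown(markdown, messageEl, sourcePath, owner);
}
```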

Refactor

  • The model list is fetched again every time the user opens BMO Settings.
  • Anthropic API's Text Completions -> Anthropic API's Messages (see the sketch below).
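
The sketch below shows the general shape of an Anthropic Messages request, assuming a hypothetical anthropicMessages helper; header and parameter values are placeholders rather than the plugin's actual code.

```typescript
// Sketch of a request against Anthropic's Messages API (the endpoint targeted
// instead of the legacy Text Completions API). Values are placeholders.
async function anthropicMessages(apiKey: string, prompt: string): Promise<string> {
    const response = await fetch("https://api.anthropic.com/v1/messages", {
        method: "POST",
        headers: {
            "x-api-key": apiKey,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        body: JSON.stringify({
            model: "claude-3-sonnet-20240229",
            max_tokens: 4096,
            messages: [{ role: "user", content: prompt }],
        }),
    });

    const data = await response.json();
    // Messages responses return a content array rather than a single completion string.
    return data.content[0].text;
}
```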

Fixes

  • 'Prompt Select Generate' and Title Rename are updated for Anthropic models.
  • Responses from Anthropic models no longer have the first word removed.

1.8.8

26 Feb 00:11

Feat

  • Customizable chatbox

Improvements

  • Changed the temperature slider to a text field.
    • Allows more precise temperature values.
  • Better error handling (e.g. API connection errors are displayed as a bot message).
  • Better command responses (e.g. /prompt displays a bot message if the prompt path is not set).

Fixes

  • The Append button no longer goes to the first file that was opened when restarting Obsidian.
  • "handle responses.json when it is an array" by @keriati in #51

1.8.7

11 Feb 14:33

Add

  • Bot messages now contain an Edit button.
  • Google Gemini Pro's API Key.
  • Mistral AI's API Key.
  • Anthropic's API Key.

Changes

  • The 'Prompt Select Generate' hotkey default is now CMD+Shift+=

Fixes

  • The referenced current note is now cleared each time before the new reference note is fetched.
  • Temperature max range is now set to 2.
  • /save and /append now respond with a bot message when the commands are executed. This also fixes the issue where users could not continue a conversation after /save or /append.
  • Set max_tokens default to 4096 for REST API URLs, Mistral AI, and Google Gemini Pro.

1.8.6

05 Feb 18:09

Add

  • Ollama: keep_alive parameter (see the sketch after this list)
  • Editor Settings section
    • Prompt Select Generate System text field
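
As an illustration of where the keep_alive parameter fits, here is a minimal sketch of an Ollama /api/chat request; the ollamaChat helper, model name, and default value are assumptions, not the plugin's actual code.

```typescript
// Sketch of an Ollama chat request showing where keep_alive fits. keep_alive
// controls how long the model stays loaded in memory after the request
// (e.g. "5m", "0", or -1 to keep it loaded indefinitely). Illustrative only.
async function ollamaChat(prompt: string, keepAlive: string = "5m"): Promise<string> {
    const response = await fetch("http://localhost:11434/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            model: "llama2", // placeholder model name
            messages: [{ role: "user", content: prompt }],
            keep_alive: keepAlive,
            stream: false,
        }),
    });

    const data = await response.json();
    return data.message.content;
}
```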

Changes

  • New loading animation.
  • Refactored 'Reference Current Note' to better detect the active file.

1.8.5

04 Feb 05:08

Fixes

  • Ollama: Cleared the default seed parameter to avoid repetitive responses (see the sketch below)
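
A hedged sketch of the idea behind this fix: a fixed seed makes Ollama's sampling deterministic, so repeated prompts produce near-identical responses, while omitting the seed restores normal variation. The names below are illustrative, not the plugin's actual code.

```typescript
// Sketch of the options payload involved in this fix: only forward a seed to
// Ollama when the user explicitly sets one, so responses are not pinned to a
// single deterministic sample.
interface OllamaOptions {
    temperature?: number;
    seed?: number; // leaving this unset restores normal response variation
}

function buildOllamaOptions(temperature: number, userSeed?: number): OllamaOptions {
    const options: OllamaOptions = { temperature };
    if (userSeed !== undefined) {
        options.seed = userSeed;
    }
    return options;
}
```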

1.8.4

03 Feb 23:13

Added

  • Ollama parameters
  • Better light theme support
  • Settings tabs now have toggle features

Changes

  • Removed gpt-4-1106-preview and replaced it with gpt-4-turbo-preview

Fixes

  • Persistent 'undefined' response via prompt function

1.8.3

18 Jan 01:54

Added

  • Edit and regenerate buttons for all user messages (excluding commands).

1.8.2

16 Jan 01:09

Fix

  • Pulling models

1.8.1

16 Jan 00:50

Added

  • Allow header option to display chatbot name and model name.

Refactor

  • Added a new 'generating...' notice for the 'Prompt Select Generate' command.
  • LOCALAI REST API URL is now OPENAI REST API URL. This should support other LLM providers with OpenAI-compatible endpoints, such as LM Studio.

Fixes

  • The Ollama default URL is now empty to prevent a repeating connection error.
  • /prompt will send a new notice if the folder path is not set.
  • The referenced current note is cleared before each response.

1.8.0

30 Dec 16:40

Features

  • Append model's response button
  • Regenerate model's response button
  • Generate new title command
  • 'Prompt Select Generate' command
    • Ability to generate a response in the editor by creating a prompt, selecting it, and running the command (see the sketch after this list).
  • Prompt option
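
As a rough sketch of how a 'Prompt Select Generate'-style editor command can be wired up with Obsidian's command and editor APIs; registerPromptSelectGenerate and generateResponse are hypothetical names, not the plugin's actual code.

```typescript
import { Editor, Plugin } from "obsidian";

// generateResponse stands in for whatever model call the plugin makes; it is a
// hypothetical helper used only to keep this sketch self-contained.
declare function generateResponse(prompt: string): Promise<string>;

// Sketch: use the current editor selection as the prompt and append the
// generated text after it, in the spirit of 'Prompt Select Generate'.
export function registerPromptSelectGenerate(plugin: Plugin): void {
    plugin.addCommand({
        id: "prompt-select-generate",
        name: "Prompt Select Generate",
        editorCallback: async (editor: Editor) => {
            const prompt = editor.getSelection();
            if (!prompt) return;
            const answer = await generateResponse(prompt);
            editor.replaceSelection(`${prompt}\n\n${answer}`);
        },
    });
}
```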

Fixes

  • OpenAI-based URLs will list the proper models in the dropdown.

Deprecated

  • /list
    • Use /model or /models to display model list.
    • Use /prompt or /prompts to display prompt list.