Releases: chrisrude/oobabot-plugin
v0.2.3
Release v.0.2.3
What's Changed
This update is a small bug-fix release to support changes
that were made in the most recent version of the oobabooga server.
It also updates to the latest version of the oobabot backend, which includes a minor feature to disable unsolicited replies
entirely. This is useful if you want to use the bot in a high-volume channel.
UI Bug Fixes
- Downgrade gradio to version 3.33.1
Gradio is the UI library which oobabooga uses, and it specifically pins version 3.33.1 of Gradio. Version 3.33.1, however, has known security vulnerabilities, described here, which affect anyone using a "shared" gradio app.
Because of this, oobabot-plugin would force an upgrade of gradio to 3.34.0, which fixed the security issues. For a long
time, oobabooga would still work fine with this slightly-updated
version.
However, a somewhat recent change to oobabooga broke compatibility with the newer version of Gradio. This means that with any security-patched version, model loading simply doesn't work, whether or not oobabot is installed.
Because this is a total breakage of the oobabooga server, we
have no choice but to revert to the older version of Gradio
until the issue is fixed.
New Backend Features (from the oobabot 0.2.3 backend)
- Option to disable unsolicited replies entirely
Unsolicited replies are still enabled by default, but you can now disable them entirely by changing this setting in your config.yml:
```yaml
# If set, the bot will not reply to any messages that do not @-mention it or include a
# wakeword. If unsolicited replies are disabled, the unsolicited_channel_cap setting will
# have no effect.
# default: False
disable_unsolicited_replies: true
```
The objective of this change is to support cases where
unsolicited replies are not desired, such as when the bot is used in a
channel with a high volume of messages.
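As a sketch of the decision this setting controls, the gate might look something like the following. The function and argument names here are illustrative, not oobabot's actual internals:

```python
# Hypothetical sketch of the unsolicited-reply gate; names are illustrative.
def should_consider_reply(
    message_text: str,
    bot_name: str,
    wakewords: list[str],
    disable_unsolicited_replies: bool,
) -> bool:
    """Return True if the bot may reply to this message at all."""
    text = message_text.lower()
    solicited = f"@{bot_name.lower()}" in text or any(
        w.lower() in text for w in wakewords
    )
    if solicited:
        return True  # @-mentions and wakewords are always eligible
    # Unsolicited replies are only eligible when the feature is enabled.
    return not disable_unsolicited_replies
```

Note that @-mentions and wakewords are unaffected: only replies the bot volunteers on its own are suppressed.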
Bug Fixes / Tech Improvements
- Preserve newlines when prompting the bot
In some cases the whitespace in user messages is important. One such case is described in issue 76, reported by @xydreen.
When sending a prompt to the bot, we will now preserve any newlines that the bot itself had generated in the past.
We will still strip newlines from user-generated messages, as otherwise users would have the ability to imitate our prompt format. That would let users so inclined fool the bot into thinking a message was sent by another user, or even by itself.
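A minimal sketch of that rule (illustrative, not the actual oobabot code): keep the bot's own newlines, flatten everyone else's.

```python
# Illustrative sketch: preserve formatting the bot generated itself,
# but collapse newlines in user text so users can't imitate the
# prompt's transcript format.
def sanitize_history_line(text: str, from_bot: bool) -> str:
    if from_bot:
        return text
    return " ".join(text.splitlines())
```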
Full Changelog
All changes from 0.2.2 to 0.2.3: UI
All changes from 0.2.1 to 0.2.3: Backend
v0.2.2
Release v.0.2.2
What's Changed
Quick bug-fix release to accommodate a breaking change that was added to transformers after v4.27.0, in this commit. That change would prevent models from being loaded when the oobabot-plugin is installed.
New UI Features
- None
UI Bug Fixes
- None
New Backend Features
- None (this still uses the oobabot 0.2.1 backend)
Bug Fixes / Tech Improvements
- Issue 24: error when loading model with newer version of transformers
Full Changelog
v0.2.1
Release v.0.2.1
What's Changed
True to the version, this is a small bug-fix release. I had been wanting to get a few more small features in, but a few urgent bugs came up, and I wanted to get them out to users as soon as possible. My priority is to make sure things keep working well first, before adding new features.
New UI Features
- performance optimization for log viewing
We'll now update the log view in the UI in a more efficient way, in case people like leaving the log view open.
- show "running" and "stopped" status for the bot
UI Bug Fixes
- Fix a bug where tokens belonging to older Discord bot accounts couldn't be added in the UI.
Older bots have shorter tokens for an overly involved reason. This fix should make it so that you can add these tokens in the UI, and should support any other tokens generated for the next 135 years.
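A validator with that intent might accept dot-separated segments of varying lengths instead of one fixed length. The pattern below is purely hypothetical, not the plugin's actual check:

```python
import re

# Hypothetical, permissive shape check: three dot-separated base64url-style
# segments of varying lengths, rather than a single fixed-length format.
TOKEN_PATTERN = re.compile(r"^[\w-]{20,}\.[\w-]{5,}\.[\w-]{20,}$")

def looks_like_bot_token(token: str) -> bool:
    return TOKEN_PATTERN.fullmatch(token) is not None
```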
New Backend Features (from the oobabot 0.2.1 backend)
- Stable Diffusion Parameters in Prompt by @clintkittiesmeow
A discord user, with your permission, can now customize pretty much any aspect of Stable Diffusion generation within the prompt. For example:
Make me a picture of a cat wearing a hat width=512 height=512 steps=10 seed=10
The syntax is just <param>=<value>
and you can include as many as you want.
A parameter must pass two checks before it is passed to Stable Diffusion:
- It must be included in the new user_override_params setting
- It must have a default in the request_params dictionary. We use this to know the type of the parameter, and to provide a default value if the user doesn't specify one.
The new yaml setting for user_override_params looks like this, and will enable these settings by default:
```yaml
# These parameters can be overridden by the Discord user by including them in their image
# generation request. The format for this is: param_name=value This is a whitelist of
# parameters that can be overridden. They must be simple parameters (strings, numbers,
# booleans), and they must be in the request_params dictionary. The value the user inputs
# will be checked against the type from the request_params dictionary, and if it doesn't
# match, the default value will be used instead. Otherwise, this value will be passed
# through to Stable Diffusion without any changes, so be mindful of what you allow here.
# It could potentially be used to inject malicious values into your SD server. For
# example, steps=1000000 could be bad for your server.
user_override_params:
  - cfg_scale
  - enable_hr
  - model
  - negative_prompt
  - sampler_name
  - seed
  - height
  - width
```
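The two checks described above could be sketched as follows. This is illustrative, not oobabot's actual implementation; the defaults in REQUEST_PARAMS are example values:

```python
# Hypothetical sketch of the whitelist + type checks described above.
REQUEST_PARAMS = {"steps": 30, "width": 512, "height": 512, "seed": -1,
                  "cfg_scale": 7.0, "negative_prompt": ""}
USER_OVERRIDE_PARAMS = {"cfg_scale", "seed", "height", "width", "negative_prompt"}

def extract_overrides(prompt: str) -> tuple[str, dict]:
    """Pull param=value tokens out of a prompt, keeping only whitelisted
    params whose values coerce to the type of the default in REQUEST_PARAMS."""
    overrides, kept_words = {}, []
    for word in prompt.split():
        name, sep, raw = word.partition("=")
        if not sep or name not in USER_OVERRIDE_PARAMS or name not in REQUEST_PARAMS:
            kept_words.append(word)  # not an override: leave it in the prompt
            continue
        default = REQUEST_PARAMS[name]
        try:
            overrides[name] = type(default)(raw)  # coerce to the default's type
        except ValueError:
            overrides[name] = default  # type mismatch: fall back to the default
    return " ".join(kept_words), overrides
```

So "width=512" is both whitelisted and typed as an int, while a non-whitelisted token like "steps=10" would simply stay in the prompt text.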
Thanks to @jmoney7823956789378 and @clintkittiesmeow for the idea and the initial implementation! It would not have happened without you. :)
Bug Fixes / Tech Improvements
- Fixed the Unicode logging issue in ooba_client.py
A Windows 11 update has reportedly caused an issue with the --log-all-the-things
parameter. This fix should resolve that issue. Thanks @harlyh for the quick fix.
- Fixed an urgent bug with streaming responses
When using streaming responses, the bot would not be able to see its own messages in the history. This was due to a mistake in how the messages were edited when updates came in. This is now fixed.
- Removed the ai-generated keywords feature, which was never finished and didn't really work right. It would cause issues with the new Stable Diffusion keyword parsing, so it's better to just remove it for now.
Full Changelog
All changes from 0.2.0 to 0.2.1: UI
All changes from 0.2.0 to 0.2.1: Backend
Version 0.2.0!
Release v.0.2.0
Long time since the last release, but tons of work!
New UI Features
option to automatically start Oobabot when Oobabooga launches
This was a great suggestion by @Wrexthor. There's now a checkbox to start the bot when the UI starts. This is a great way to make sure that the bot is always running, even if you restart your computer. Note that you do have to hit the "Save Settings" button, or start the bot after checking the box for it to be saved.
Backend changes for AUDIO SUPPORT 🥳 (coming soon)
This release includes a lot of work to support audio
channels. This still needs to be documented and packaged,
but it is a thing that works! Look for full support in
version 0.2.1, but here's a preview of what's coming:
- oobabot will be able to join audio channels using the /join_voice command
- it will transcribe audio from the channel, recording which user said what
- it will listen for wake-words, and respond using voice synthesis
- in the oobabot-plugin, you'll see a pretty transcript of the conversation
This has been a ton of work, and I'm eager to put on the finishing touches and get it out. In the meantime, I wanted to release the now-unified backend so I can make sure it is stable before I focus on polishing the audio work. Also, a few important bugs have been reported in the meantime, and I don't want to hold those back.
New .yaml settings (in the oobabot 0.2.0 backend)
stream_responses_speed_limit
When in "streaming" mode (i.e. when stream_responses is set to True), this limits the rate at which we update the streaming message in Discord. We need this setting because Discord has rate-limiting logic, and if we send updates "too fast" it will slow our updates down drastically, which appears as jerky streaming.
This value is the minimum delay in seconds between updates. That is, we will update Discord no more than once every this many seconds. Updates may come slower than this, for instance on systems that take a long time to generate tokens; it's only guaranteed that they won't be any faster.
Previously, this value was hard-coded to 0.5. Now the default is 0.7, which was determined by user testing. Thanks to @jmoney7823956789378 for helping make this happen!
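The throttling described above can be sketched like this. The class and method names are hypothetical, not oobabot's actual code:

```python
import time

class StreamUpdater:
    """Illustrative sketch: never push edits to Discord more often than
    once every `speed_limit` seconds; extra updates are held back."""

    def __init__(self, speed_limit: float = 0.7):
        self.speed_limit = speed_limit
        self._last_update = 0.0
        self._pending = ""

    def on_tokens(self, text_so_far: str):
        """Return text to push to Discord now, or None if we must wait."""
        self._pending = text_so_far
        now = time.monotonic()
        if now - self._last_update < self.speed_limit:
            return None  # too soon: hold the update back
        self._last_update = now
        return self._pending
```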
discrivener_location and discrivener_model_location
These are new settings to add voice support to oobabot. Voice support means that the bot
can join voice chat channels, transcribe what is said, hear wakewords, and generate voice
responses in those channels. All of the audio processing -- text to speech, and speech to
text -- is handled in a binary called "discrivener", whose source lives at github.com/chrisrude/discrivener.
I've tested this to work on Linux and OSX, but there is still more work to do in documenting and packaging the software. So for now, these settings are blank by default, which will leave oobabot in text-only mode, as it has been.
command_lobotomize_response
A user noticed that there was no setting to customize the text that gets shown when you use the /lobotomize command. Whoops! Now here it is. This is of particular interest because the bot will see this text after a lobotomize occurs, so if you have specific character styling you want to keep it from getting confused, you might want to put in custom text of your choosing here.
You can also use the variables {AI_NAME} and {USER_NAME} to represent the name of the AI and the name of the user who ran the /lobotomize command.
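Expanding those template variables amounts to a simple substitution; a sketch (the function itself is illustrative):

```python
# Illustrative sketch of expanding the {AI_NAME} / {USER_NAME} variables.
def render_template(template: str, ai_name: str, user_name: str) -> str:
    return (template
            .replace("{AI_NAME}", ai_name)
            .replace("{USER_NAME}", user_name))
```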
Show an error if a custom .yaml file could not be loaded
Previously, we would ignore any errors that occurred when loading a custom .yaml file, and just proceed with defaults if we could. Now, we will show an error message to the user displaying the full path to the yaml file we could not load, and the bot will not start.
This should help users self-diagnose a number of configuration issues, such as accidentally having a syntax error in their .yaml file.
Bug Fixes / Tech Improvements
- Fix for issue #19: buttons and inputs enabled while oobabot running.
When loading the oobabot page after the bot was running, the buttons and inputs were enabled, even though they shouldn't be. This could lead to the user starting the bot twice, which would lead to terribleness.
This should now work as expected no matter when you load the page.
- fix for issue #17: wakewords lost when using .json persona
When using a json persona, user-supplied wakewords weren't saved. This was pretty broken!
The mess was with the logic to automatically add the character's name as a wakeword, which was dropping user-supplied wakewords in some cases. We now don't do that.
- another fix for issue #17: streaming setting reverting on second run
The streaming setting would be lost on the second run of the bot. This was because
we didn't have a UI option for it, so I just enabled it. As a bonus, you can now
use streaming without having to edit the .yaml file!
This change was made in conjunction with backend changes to make streaming more stable, so I'm now OK exposing it to users.
- Fix bug 38: the bot will now only mark messages as replies if it was directly mentioned (by an @-mention or keyword). Also, if it is configured to reply across several messages, it will only mark the first message in the series as a reply. This reduces notification noise for users on mobile clients.
- Increase default token space back to 2048. Users who have not set a custom token space value (aka truncation_length) will just have this updated automatically.
- Add new oobabooga request params: "epsilon_cutoff", "eta_cutoff", "tfs", "top_a", "mirostat_mode", "mirostat_tau", and "mirostat_eta"
- If the user forgets to enable either SERVER MEMBERS INTENT or MESSAGE CONTENT INTENT for their bot's Discord account, show a specific error message letting them know.
- security fix: update Gradio to 3.34.0. This fixes a serious security vulnerability in the Gradio library which could affect users exposing their Oobabooga UIs to the public internet. It is not specific to this plugin, and affects the version of Gradio that is used by Oobabooga. Oobabooga seems to work with this minor version bump, so I at least want users of this plugin to be better protected.
Note that 3.34.0 is not the latest version of Gradio, but it is the latest version that works with Oobabooga builds up until yesterday.
- move to Gradio's own code editor from ace.js
After doing lots of work to integrate ace.js for .yaml editing, it looks like Gradio did the same thing, and it's better than what I did. So I'm switching to that.
Full Changelog
v0.1.9
Lots of changes and fixes! See release notes.
Biggest deal is the built-in yaml editor, so you can get access to all current and future settings without waiting on a gui update.
Full Changelog: v.0.1.8...v0.1.9