Notes
Initial release, featuring LiteLLM integration with support for 40+ model providers. OpenAI-compatible API support also enables connections to a range of open-source inference engines and wrappers.
The LiteLLM proxy is bundled as part of the docker-compose.yaml.
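As a rough sketch, any OpenAI-compatible client can talk to the bundled proxy. The port, API key handling, and model alias below are assumptions; match them to your own docker-compose.yaml and LiteLLM config:

```ts
import OpenAI from "openai";

// The LiteLLM proxy listens on port 4000 by default; adjust the
// base URL, key, and model alias to your own configuration.
const client = new OpenAI({
  baseURL: "http://localhost:4000/v1",
  apiKey: process.env.LITELLM_API_KEY ?? "sk-local", // many local setups accept any key
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "gpt-4o", // any alias the proxy routes to one of the 40+ providers
    messages: [{ role: "user", content: "Hello from the bot!" }],
  });
  console.log(completion.choices[0].message.content);
}

main().catch(console.error);
```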
Reorganized the code to make the project structure simpler (a sketch of the registry pattern follows the tree below):
commands/
├── base.ts            # Command interfaces and base classes
├── handlers.ts        # Unified implementation of all commands and services
└── registry.ts        # Centralized command registration and routing

managers/generation/
├── core.ts            # Message orchestration and routing
├── formatter.ts       # Output formatting and pagination
└── generator.ts       # Response generation and streaming

utilities/
├── error-handler.ts   # Centralized error handling
├── index.ts           # Combined types and exports
└── settings.ts        # Runtime settings and configuration
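The sketch below illustrates the centralized registration-and-routing pattern that registry.ts now owns; the interface and class names are illustrative, not the project's actual exports:

```ts
// Hypothetical shapes standing in for what base.ts and registry.ts provide.
interface Command {
  name: string;
  execute(args: string[]): Promise<void>;
}

class CommandRegistry {
  private commands = new Map<string, Command>();

  // Each command registers itself once, in one place.
  register(command: Command): void {
    this.commands.set(command.name, command);
  }

  // All incoming invocations route through a single lookup.
  async route(name: string, args: string[]): Promise<void> {
    const command = this.commands.get(name);
    if (!command) throw new Error(`Unknown command: ${name}`);
    await command.execute(args);
  }
}
```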
This cleanup keeps related functionality in one place and should make future updates simpler to land and review.
Fixes
Added handling for an error that caused the bot to repeatedly re-check the cache for a message that could not be found:
Error updating message: Error [ChannelNotCached]: Could not find the channel where this message came from in the cache!
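A minimal sketch of the kind of guard this implies, assuming a discord.js v14 bot; the function name and return-based retry semantics are hypothetical, not the bot's actual code:

```ts
import { Message } from "discord.js";

// Attempt an edit, but treat a missing channel as permanent instead of
// re-queuing the update and polling the cache forever.
async function safeEdit(message: Message, content: string): Promise<boolean> {
  if (!message.client.channels.cache.has(message.channelId)) {
    console.warn(`Channel ${message.channelId} not cached; skipping update.`);
    return false;
  }
  try {
    await message.edit(content);
    return true;
  } catch (error) {
    // The ChannelNotCached error surfaces as a plain Error in discord.js;
    // give up on this update rather than retrying.
    if (error instanceof Error && error.message.includes("cache")) {
      return false;
    }
    throw error; // anything else is unexpected, so let it propagate
  }
}
```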
Fixed an issue where messages would be copied into different conversation histories when + was used to switch models mid-conversation.
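For illustration only, here is how this class of bug can arise and one way to avoid it: keying history by the conversation rather than the active model means a model switch never moves messages between histories. Every name below is hypothetical, not the bot's actual code:

```ts
// One shared history per conversation (e.g. per channel), regardless of model.
const histories = new Map<string, { role: string; content: string }[]>();

function appendMessage(channelId: string, role: string, content: string): void {
  const history = histories.get(channelId) ?? [];
  history.push({ role, content });
  histories.set(channelId, history);
}

// Switching models only changes which model the next request targets;
// the channel's history is left untouched, so nothing is copied across.
let activeModel = "gpt-4o";
function switchModel(model: string): void {
  activeModel = model;
}
```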