
The first release built on the Vercel AI dependencies

@adolphnov adolphnov released this 19 Nov 18:16
· 65 commits to refactor since this release

Full Changelog: 1.9.4...2.0.0

Latest version changelog:

feat: support reading media groups: mention the bot, or reply to a media group and mention the bot. Enable via STORE_MEDIA_MESSAGE = true.

feat: support chatting with the AI even when Telegram splits a message because it exceeds 4096 characters. Enable via STORE_TEXT_CHUNK_MESSAGE = true.

  • The two features above may not work in polling mode because of an upstream dependency (it cannot handle asynchronous scenarios); webhook mode works normally. The upstream dependency repository will be updated later.
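A minimal sketch of how the two flags above might be read in a worker environment. The flag names come from the changelog; the `Env` shape, the `isEnabled` helper, and the string value `"true"` are assumptions, not the project's actual code:

```typescript
// Hypothetical environment shape; only the variable names are from the changelog.
interface Env {
  STORE_MEDIA_MESSAGE?: string;
  STORE_TEXT_CHUNK_MESSAGE?: string;
}

// Treat the flag as enabled only when it is set to the string "true" (assumption).
function isEnabled(flag?: string): boolean {
  return flag === "true";
}

// Example: enable media-group storage but leave text-chunk storage unset.
const env: Env = { STORE_MEDIA_MESSAGE: "true" };
const storeMediaGroups = isEnabled(env.STORE_MEDIA_MESSAGE);
const storeTextChunks = isEnabled(env.STORE_TEXT_CHUNK_MESSAGE);
```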

feat: support independent function and chat models. When the configured tool list is empty or the tool option is none, the bot automatically switches to the chat model.
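The switching rule above can be sketched as a small selector. The names `pickModel`, `ModelKind`, and the `"chat"`/`"function"` labels are illustrative; only the rule itself (empty tools or option none falls back to the chat model) comes from the changelog:

```typescript
type ModelKind = "chat" | "function";

// Choose which model handles the request, per the rule in the changelog:
// no configured tools, or a tool option of "none", means the chat model.
function pickModel(tools: string[], toolOption: string): ModelKind {
  if (tools.length === 0 || toolOption === "none") {
    return "chat";
  }
  return "function";
}
```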

perf: optimize the md2node conversion logic to prevent rendering errors with nested code blocks.

perf: optimize final message sending so that, under high concurrency, bots are not triggered so frequently that Telegram returns 429 errors and the final message fails to send.
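One common way to handle 429 responses is to retry the send with a growing delay. This is a hedged sketch of that pattern, not the project's actual implementation: `sendOnce`, `maxRetries`, and the backoff schedule are all assumptions.

```typescript
// Hypothetical retry-with-backoff wrapper around a single send attempt.
// Returns true if the message was eventually accepted (2xx), false otherwise.
async function sendFinalMessage(
  sendOnce: () => Promise<{ status: number }>,
  maxRetries = 3,
  retryAfterMs = 1000,
): Promise<boolean> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await sendOnce();
    if (res.status !== 429) return res.status < 300;
    // Back off before retrying when Telegram signals rate limiting.
    await new Promise((r) => setTimeout(r, retryAfterMs * (attempt + 1)));
  }
  return false;
}
```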

fix: multiple bots could not process messages simultaneously in local polling mode because of the adjusted polling interval.

chore: remove trimming based on message character length (the MAX_TOKEN_LENGTH environment variable).

chore: hide the bot id on the initialization page when using the webhook method.

chore: improve handling of commands that carry the bot name when replying to a bot.

chore: the TRANSFORM variable is no longer effective, because the Vercel AI SDK now supports o1 stream mode.