Commit ffebe22
feat: support mediaGroup reading

Mention the bot with a media group, or reply to a media group and then mention the bot; enable via STORE_MEDIA_MESSAGE = true.

feat: support chatting with the AI even when a message exceeds 4096 characters and is split by Telegram; enable via STORE_TEXT_CHUNK_MESSAGE = true.

- The two features above may not work in polling mode because of an upstream dependency (it cannot handle these asynchronous scenarios); webhook mode works normally. The upstream dependency repository will be updated later.
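
For background on the mediaGroup feature: Telegram delivers each item of a media group (album) as a separate update sharing a media_group_id, with no end-of-album marker, which is why reading one requires buffering messages asynchronously. A minimal sketch of the idea; the names (handleMediaGroup, onMessage) and the 1-second window are illustrative, not taken from this commit:

interface TelegramMessage {
  message_id: number;
  media_group_id?: string;
}

// Buffer updates that share a media_group_id, then handle the album once.
const pendingGroups = new Map<string, TelegramMessage[]>();

function onMessage(msg: TelegramMessage, handleMediaGroup: (msgs: TelegramMessage[]) => void): void {
  if (!msg.media_group_id) {
    handleMediaGroup([msg]);
    return;
  }
  const group = pendingGroups.get(msg.media_group_id) ?? [];
  group.push(msg);
  pendingGroups.set(msg.media_group_id, group);
  // Albums have no terminator, so wait briefly for the remaining items;
  // the first timer to expire processes whatever has arrived by then.
  setTimeout(() => {
    if (pendingGroups.delete(msg.media_group_id!)) {
      handleMediaGroup(group);
    }
  }, 1000);
}

The need for this kind of deferred processing is also why the feature struggles in polling mode, as noted above.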

feat: support independent function-call and chat models. When the configured tool list is empty or the tool choice is none, the bot automatically switches to the chat model.
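
The switching logic lands in warpModel in src/agent/model_middleware.ts (diff below); its core is a one-line selection, paraphrased here:

// Use the tool model only when tools are active and the tool choice is not 'none';
// otherwise fall back to the chat model (and to the chat model if TOOL_MODEL is unset).
const effectiveModel = (activeTools.length > 0 && toolChoice?.type !== 'none')
  ? (config.TOOL_MODEL || chatModel)
  : chatModel;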

perf: optimize the md2node conversion logic to prevent rendering errors with nested code blocks.
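
The classic failure here is a short ``` fence inside a longer fenced block being misread as the closing fence. A hedged sketch of that fix idea (not the literal md2node code): only close a block on a fence at least as long as the one that opened it.

// Track the opening fence length; shorter backtick runs inside the block stay literal.
function markCodeLines(lines: string[]): boolean[] {
  let openFence = 0; // length of the currently open fence, 0 = not inside a block
  return lines.map((line) => {
    const m = line.match(/^(`{3,})/);
    if (m) {
      if (openFence === 0) {
        openFence = m[1].length; // opening fence
      } else if (m[1].length >= openFence) {
        openFence = 0; // closing fence
      }
      return true; // fence lines belong to the block
    }
    return openFence > 0;
  });
}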

perf: optimize final-message sending so that frequent bot triggers under high concurrency no longer cause 429 errors that prevent the final message from being sent.
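
For reference, the Telegram Bot API answers a flooded bot with HTTP 429 and a parameters.retry_after hint. A minimal retry sketch of the pattern (sendOnce and the retry limits are illustrative, not this commit's exact code):

// Retry a Telegram API call on 429, honoring the server's retry_after hint.
async function sendWithRetry(sendOnce: () => Promise<Response>, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const resp = await sendOnce();
    if (resp.status !== 429 || attempt >= maxRetries) {
      return resp;
    }
    const body: any = await resp.clone().json().catch(() => ({}));
    const waitSeconds = body?.parameters?.retry_after ?? 2 ** attempt;
    await new Promise(resolve => setTimeout(resolve, waitSeconds * 1000));
  }
}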

fix: resolve an issue where, due to the polling-interval adjustments, multiple bots could not process messages simultaneously in local polling mode.

chore: remove trimming of history based on message character length (the MAX_TOKEN_LENGTH environment variable).

chore: hide the bot id on the initialization page when using the webhook method.

chore: improve handling of commands that carry the bot name when replying to a bot.
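
In groups, Telegram appends the bot's username to commands (e.g. /redo@my_bot). A sketch of normalizing that before matching handlers (an assumed helper, not the commit's exact code):

// Strip an "@botname" suffix so "/redo@my_bot" matches the "/redo" handler.
function normalizeCommand(text: string, botName: string): string {
  const [cmd, ...rest] = text.split(' ');
  const bare = cmd.endsWith(`@${botName}`)
    ? cmd.slice(0, cmd.length - botName.length - 1)
    : cmd;
  return [bare, ...rest].join(' ');
}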

chore: since the Vercel AI SDK now supports o1 in stream mode, the TRANSFORM variable no longer has any effect.
adolphnov committed Nov 19, 2024
1 parent 107357e commit ffebe22
Showing 28 changed files with 23,278 additions and 971 deletions.
1 change: 1 addition & 0 deletions dist/buildinfo.json

Some generated files are not rendered by default.

21,761 changes: 21,761 additions & 0 deletions dist/index.js

Large diffs are not rendered by default.

1 change: 1 addition & 0 deletions dist/timestamp

Some generated files are not rendered by default.

1 change: 0 additions & 1 deletion doc/cn/CONFIG.md
@@ -60,7 +60,6 @@ OPENAI_API_BASE,GOOGLE_API_BASE,MISTRAL_API_BASE,COHERE_API_BASE,ANTHROPIC_API_B
|--------------------|----------|---------|--------------------|
| AUTO_TRIM_HISTORY | Automatically trim history | `true` | Automatically trims messages to avoid the 4096-character limit |
| MAX_HISTORY_LENGTH | Maximum history length | `20` | Maximum number of history entries to keep |
| MAX_TOKEN_LENGTH | Maximum token length | `20480` | Maximum token length of the history |

### Feature toggles

2 changes: 1 addition & 1 deletion doc/en/CONFIG.md
@@ -60,7 +60,7 @@ OPENAI_API_BASE,GOOGLE_API_BASE,MISTRAL_API_BASE,COHERE_API_BASE,ANTHROPIC_API_B
|--------------------|---------------------------------------|---------|---------------------------------------------------------------|
| AUTO_TRIM_HISTORY | Automatic trimming of message history | `true` | Automatically trim messages to avoid the 4096 character limit |
| MAX_HISTORY_LENGTH | Maximum length of message history | `20` | Maximum number of message history entries to keep |
| MAX_TOKEN_LENGTH | Maximum token length | `20480` | Maximum token length for message history |

### Feature configuration

55 changes: 28 additions & 27 deletions package.json
@@ -1,7 +1,7 @@
{
"name": "chatgpt-telegram-workers",
"type": "module",
"version": "1.9.4",
"version": "2.0.1",
"description": "The easiest and quickest way to deploy your own ChatGPT Telegram bot is to use a single file and simply copy and paste it. There is no need for any dependencies, local development environment configuration, domain names, or servers.",
"author": "tbxark <tbxark@outlook.com>",
"license": "MIT",
@@ -40,51 +40,52 @@
"clean": "rm -rf dist"
},
"dependencies": {
"@ai-sdk/anthropic": "^0.0.56",
"@ai-sdk/azure": "^0.0.52",
"@ai-sdk/cohere": "^0.0.28",
"@ai-sdk/google": "^0.0.55",
"@ai-sdk/google-vertex": "^0.0.43",
"@ai-sdk/mistral": "^0.0.46",
"@ai-sdk/openai": "^0.0.72",
"ai": "^3.4.33",
"cloudflare-worker-adapter": "^1.3.3",
"@ai-sdk/anthropic": "^1.0.0",
"@ai-sdk/azure": "^1.0.2",
"@ai-sdk/cohere": "^1.0.0",
"@ai-sdk/google": "^1.0.0",
"@ai-sdk/google-vertex": "^1.0.0",
"@ai-sdk/mistral": "^1.0.1",
"@ai-sdk/openai": "^1.0.1",
"@cloudflare/workers-types": "^4.20241112.0",
"ai": "^4.0.1",
"cloudflare-worker-adapter": "^1.3.4",
"node-cron": "^3.0.3",
"ws": "^8.18.0"
},
"devDependencies": {
"@ai-sdk/anthropic": "^0.0.56",
"@ai-sdk/azure": "^0.0.52",
"@ai-sdk/cohere": "^0.0.28",
"@ai-sdk/google": "^0.0.55",
"@ai-sdk/google-vertex": "^0.0.43",
"@ai-sdk/mistral": "^0.0.46",
"@ai-sdk/openai": "^0.0.72",
"@antfu/eslint-config": "^3.8.0",
"@ai-sdk/anthropic": "^1.0.0",
"@ai-sdk/azure": "^1.0.2",
"@ai-sdk/cohere": "^1.0.0",
"@ai-sdk/google": "^1.0.0",
"@ai-sdk/google-vertex": "^1.0.0",
"@ai-sdk/mistral": "^1.0.1",
"@ai-sdk/openai": "^1.0.1",
"@antfu/eslint-config": "^3.9.2",
"@google-cloud/vertexai": "^1.9.0",
"@navetacandra/ddg": "^0.0.5",
"@navetacandra/ddg": "^0.0.6",
"@rollup/plugin-node-resolve": "^15.3.0",
"@types/node": "^22.9.0",
"@types/node": "^22.9.1",
"@types/node-cron": "^3.0.11",
"@types/react": "^18.3.12",
"@types/react-dom": "^18.3.1",
"@types/ws": "^8.5.13",
"@vercel/node": "^3.2.24",
"ai": "^3.4.33",
"eslint": "^9.14.0",
"@vercel/node": "^3.2.25",
"ai": "^4.0.1",
"eslint": "^9.15.0",
"eslint-plugin-format": "^0.1.2",
"gts": "^6.0.2",
"openai": "^4.71.1",
"openai": "^4.72.0",
"react-dom": "^18.3.1",
"rollup-plugin-cleanup": "^3.2.1",
"rollup-plugin-node-externals": "^7.1.3",
"telegram-bot-api-types": "^7.9.12",
"telegram-bot-api-types": "^7.11.0",
"tsx": "^4.19.2",
"typescript": "^5.6.3",
"vite": "^5.4.10",
"vite": "^5.4.11",
"vite-plugin-checker": "^0.8.0",
"vite-plugin-dts": "^4.3.0",
"wrangler": "^3.85.0",
"wrangler": "^3.88.0",
"ws": "^8.18.0"
}
}
6 changes: 3 additions & 3 deletions src/adapter/local/index.ts
@@ -60,8 +60,8 @@ async function runPolling() {
console.log(`@${name.result.username} Webhook deleted. If you want to use webhook, please set it up again.`);
}

while (true) {
for (const token of ENV.TELEGRAM_AVAILABLE_TOKENS) {
ENV.TELEGRAM_AVAILABLE_TOKENS.forEach(async (token) => {
while (true) {
try {
const resp = await clients[token].getUpdates({
offset: offset[token],
@@ -88,7 +88,7 @@ async function runPolling() {
console.error(e);
}
}
}
});
}

try {
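
The net effect of the hunk above: the old code polled tokens sequentially inside one loop, so one bot's getUpdates call blocked the rest; the new code gives every token its own concurrent long-poll loop. A simplified reconstruction (handleUpdate stands in for the original dispatch body):

// One independent long-poll loop per bot token; the loops run concurrently.
ENV.TELEGRAM_AVAILABLE_TOKENS.forEach(async (token: string) => {
  while (true) {
    try {
      const resp = await clients[token].getUpdates({ offset: offset[token] });
      for (const update of resp.result) {
        offset[token] = update.update_id + 1;
        await handleUpdate(token, update); // dispatch as in the original loop body
      }
    } catch (e) {
      console.error(e);
    }
  }
});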
33 changes: 2 additions & 31 deletions src/agent/chat.ts
@@ -23,46 +23,17 @@ export async function loadHistory(key: string): Promise<HistoryItem[]> {
history = [];
}

const counter = tokensCounter();

const trimHistory = (list: HistoryItem[], initLength: number, maxLength: number, maxToken: number) => {
const trimHistory = (list: HistoryItem[], initLength: number, maxLength: number) => {
// Trim the history when it exceeds the maximum length; negative values disable trimming
if (maxLength >= 0 && list.length > maxLength) {
list = list.splice(list.length - maxLength);
}
// Handle the token-length limit; negative values disable trimming
if (maxToken > 0) {
let tokenLength = initLength;
for (let i = list.length - 1; i >= 0; i--) {
const historyItem = list[i];
let length = 0;
if (historyItem.content) {
if (typeof historyItem.content === 'string') {
length = counter(historyItem.content);
} else if (Array.isArray(historyItem.content)) {
for (const content of historyItem.content) {
if (Object.prototype.hasOwnProperty.call(content, 'text')) {
length += counter((content as any).text as string);
}
}
}
} else {
historyItem.content = '';
}
// If the accumulated length exceeds maxToken, trim the history
tokenLength += length;
if (tokenLength > maxToken) {
list = list.splice(i + 1);
break;
}
}
}
return list;
};

// Trim the history
if (ENV.AUTO_TRIM_HISTORY && ENV.MAX_HISTORY_LENGTH > 0) {
history = trimHistory(history, 0, ENV.MAX_HISTORY_LENGTH, ENV.MAX_TOKEN_LENGTH);
history = trimHistory(history, 0, ENV.MAX_HISTORY_LENGTH);
// Trim leading tool calls to avoid errors
let validStart = 0;
for (const h of history) {
15 changes: 10 additions & 5 deletions src/agent/index.ts
@@ -152,7 +152,7 @@ export async function warpLLMParams(params: { messages: CoreMessage[]; model: La
messages: params.messages,
tools: tool?.tools,
activeTools,
toolChoice: toolChoice as CoreToolChoice<any>[],
toolChoice,
context,
};
}
@@ -286,19 +286,24 @@ export async function createLlmModel(model: string, context: AgentUserConfig) {
// return createProviderRegistry(providers);
// }

export type ToolChoice = { type: 'auto' | 'none' | 'required' } | { type: 'tool'; toolName: string };

function wrapToolChoice(activeToolAlias: string[], message: string): {
message: string;
toolChoices: ({ type: string } | { type: 'tool'; toolName: string })[];
toolChoices: ToolChoice[] | [];
} {
const tool_perfix = '/t-';
let text = message.trim();
const choices = ['auto', 'none', 'required', ...activeToolAlias];
const toolChoices: ({ type: string } | { type: 'tool'; toolName: string })[] = [];
const toolChoices = [];
while (true) {
const toolAlias = choices.find(t => text.startsWith(`${tool_perfix}${t}`)) || '';
if (toolAlias) {
text = text.substring(tool_perfix.length + toolAlias.length).trim();
toolChoices.push(['auto', 'none', 'required'].includes(toolAlias) ? { type: toolAlias } : { type: 'tool', toolName: tools[toolAlias].schema.name });
const choice = ['auto', 'none', 'required'].includes(toolAlias)
? { type: toolAlias as 'auto' | 'none' | 'required' }
: { type: 'tool', toolName: tools[toolAlias].schema.name };
toolChoices.push(choice);
} else {
break;
}
@@ -308,6 +313,6 @@ function wrapToolChoice(activeToolAlias: string[], message: string): {

return {
message: text,
toolChoices,
toolChoices: toolChoices as ToolChoice[],
};
}
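
Reading wrapToolChoice above: a message may start with one or more /t-<choice> prefixes, where <choice> is auto, none, required, or a configured tool alias; each prefix is stripped from the text and converted into a ToolChoice. For example (the search alias is illustrative):

// '/t-none summarize this chat'
//   -> { message: 'summarize this chat', toolChoices: [{ type: 'none' }] }
// '/t-search latest ai sdk release'
//   -> { message: 'latest ai sdk release',
//        toolChoices: [{ type: 'tool', toolName: tools.search.schema.name }] }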
54 changes: 27 additions & 27 deletions src/agent/model_middleware.ts
@@ -6,6 +6,7 @@ import type {
Experimental_LanguageModelV1Middleware as LanguageModelV1Middleware,
StepResult,
} from 'ai';
import type { ToolChoice } from '.';
import type { AgentUserConfig } from '../config/env';
import type { ChatStreamTextHandler } from './types';
import { createLlmModel } from '.';
@@ -15,36 +16,25 @@ import { OpenAI } from './openai';

type Writeable<T> = { -readonly [P in keyof T]: T[P] };

export function AIMiddleware({ config, tools, activeTools, onStream, toolChoice, messageReferencer }: { config: AgentUserConfig; tools: Record<string, any>; activeTools: string[]; onStream: ChatStreamTextHandler | null; toolChoice: CoreToolChoice<any>[] | []; messageReferencer: string[] }): LanguageModelV1Middleware & { onChunk: (data: any) => boolean; onStepFinish: (data: StepResult<any>, context: AgentUserConfig) => void } {
export function AIMiddleware({ config, tools, activeTools, onStream, toolChoice, messageReferencer, chatModel }: { config: AgentUserConfig; tools: Record<string, any>; activeTools: string[]; onStream: ChatStreamTextHandler | null; toolChoice: ToolChoice[] | []; messageReferencer: string[]; chatModel: string }): LanguageModelV1Middleware & { onChunk: (data: any) => boolean; onStepFinish: (data: StepResult<any>, context: AgentUserConfig) => void } {
let startTime: number | undefined;
let sendToolCall = false;
let step = 0;
let rawSystemPrompt: string | undefined;
const openaiTransformModelRegex = new RegExp(`^${OpenAI.transformModelPerfix}`);
return {
wrapGenerate: async ({ doGenerate, params, model }) => {
log.info('doGenerate called');
await warpModel(model, activeTools, config);
log.info(`provider: ${model.provider}, modelId: ${model.modelId} `);
const logs = getLogSingleton(config);
const modelId = model.provider === 'openai' ? model.modelId.replace(openaiTransformModelRegex, '') : model.modelId;
activeTools.length > 0 ? logs.tool.model = modelId : logs.chat.model.push(modelId);
await warpModel(model, config, activeTools, (params.mode as any).toolChoice, chatModel);
recordModelLog(config, model, activeTools, (params.mode as any).toolChoice);
const result = await doGenerate();
log.info(`generated text: ${result.text}`);
return result;
},

wrapStream: async ({ doStream, model }) => {
wrapStream: async ({ doStream, params, model }) => {
log.info('doStream called');
await warpModel(model, activeTools, config);
log.info(`provider: ${model.provider}, modelId: ${model.modelId} `);
const logs = getLogSingleton(config);
const modelId = model.provider === 'openai' ? model.modelId.replace(openaiTransformModelRegex, '') : model.modelId;
if (activeTools.length > 0) {
logs.tool.model = modelId;
} else {
logs.chat.model.push(modelId);
}
await warpModel(model, config, activeTools, (params.mode as any).toolChoice, chatModel);
recordModelLog(config, model, activeTools, (params.mode as any).toolChoice);
return doStream();
},

@@ -64,7 +54,7 @@ export function AIMiddleware({ config, tools, activeTools, onStream, toolChoice,
params.mode.tools = params.mode.tools?.filter(i => activeTools.includes(i.name));
}
warpMessages(params, tools, activeTools, rawSystemPrompt);
log.info(`request params: ${JSON.stringify(params, null, 2)}`);
// log.info(`request params: ${JSON.stringify(params, null, 2)}`);
return params;
},

@@ -83,8 +73,8 @@ export function AIMiddleware({ config, tools, activeTools, onStream, toolChoice,
const logs = getLogSingleton(config);
log.info('llm request end');
log.info('step text:', text);
// log.debug('step raw request:', request);
log.debug('step raw response:', response);
log.info('step raw request:', request);
// log.debug('step raw response:', response);

const time = ((Date.now() - startTime!) / 1e3).toFixed(1);
if (toolResults.length > 0) {
@@ -100,15 +90,16 @@ export function AIMiddleware({ config, tools, activeTools, onStream, toolChoice,
arguments: Object.values(i.args),
};
});
log.info(func_logs);
log.info(`func logs: ${JSON.stringify(func_logs, null, 2)}`);
log.info(`func result: ${JSON.stringify(toolResults, null, 2)}`);
logs.functions.push(...func_logs);
logs.tool.time.push((+time - maxFuncTime).toFixed(1));
const toolNames = [...new Set(toolResults.map(i => i.toolName))];
activeTools = trimActiveTools(activeTools, toolNames);
log.info(`finish ${toolNames}`);
onStream?.send(`${messageReferencer.join('')}...\n` + `finish ${toolNames}`);
} else {
activeTools.length > 0 ? logs.tool.time.push(time) : logs.chat.time.push(time);
activeTools.length > 0 && toolChoice[step]?.type !== 'none' ? logs.tool.time.push(time) : logs.chat.time.push(time);
}

if (usage && !Number.isNaN(usage.promptTokens) && !Number.isNaN(usage.completionTokens)) {
@@ -148,12 +139,9 @@ function warpMessages(params: LanguageModelV1CallOptions, tools: Record<string,
}
}

async function warpModel(model: LanguageModelV1, activeTools: string[], config: AgentUserConfig) {
async function warpModel(model: LanguageModelV1, config: AgentUserConfig, activeTools: string[], toolChoice: ToolChoice, chatModel: string) {
const mutableModel = model as Writeable<LanguageModelV1>;
// if (model.provider === 'openai' && model.modelId.startsWith(OpenAI.transformModelPerfix)) {
// mutableModel.modelId = mutableModel.modelId.slice(OpenAI.transformModelPerfix.length);
// }
const effectiveModel = activeTools.length > 0 ? (config.TOOL_MODEL || model.modelId) : model.modelId;
const effectiveModel = (activeTools.length > 0 && toolChoice?.type !== 'none') ? (config.TOOL_MODEL || chatModel) : chatModel;
if (effectiveModel !== mutableModel.modelId) {
let newModel: LanguageModelV1 | undefined;
if (effectiveModel.includes(':')) {
Expand All @@ -170,3 +158,15 @@ async function warpModel(model: LanguageModelV1, activeTools: string[], config:
function trimActiveTools(activeTools: string[], toolNames: string[]) {
return activeTools.length > 0 ? activeTools.filter(name => !toolNames.includes(name)) : [];
}

function recordModelLog(config: AgentUserConfig, model: LanguageModelV1, activeTools: string[], toolChoice: ToolChoice) {
const logs = getLogSingleton(config);
// const openaiTransformModelRegex = new RegExp(`^${OpenAI.transformModelPerfix}`);
// const modelId = model.provider.includes('openai') ? model.modelId.replace(openaiTransformModelRegex, '') : model.modelId;
log.info(`provider: ${model.provider}, modelId: ${model.modelId} `);
if (activeTools.length > 0 && toolChoice?.type !== 'none') {
logs.tool.model = model.modelId;
} else {
logs.chat.model.push(model.modelId);
}
}
18 changes: 9 additions & 9 deletions src/agent/openai.ts
@@ -33,12 +33,12 @@ export class OpenAI extends OpenAIBase implements ChatAgent {
return Array.isArray(params?.content) ? ctx.OPENAI_VISION_MODEL : ctx.OPENAI_CHAT_MODEL;
};

readonly transformModel = (model: string, context: AgentUserConfig): string => {
if (context.OPENAI_NEED_TRANSFORM_MODEL.includes(model)) {
return `${OpenAI.transformModelPerfix}${model}`;
}
return model;
};
// readonly transformModel = (model: string, context: AgentUserConfig): string => {
// if (context.OPENAI_NEED_TRANSFORM_MODEL.includes(model)) {
// return `${OpenAI.transformModelPerfix}${model}`;
// }
// return model;
// };

// Only text conversations use this base URL
readonly base_url = (context: AgentUserConfig): string => {
@@ -51,15 +51,15 @@ export class OpenAI extends OpenAIBase implements ChatAgent {
readonly request = async (params: LLMChatParams, context: AgentUserConfig, onStream: ChatStreamTextHandler | null): Promise<{ messages: ResponseMessage[]; content: string }> => {
const userMessage = params.messages.at(-1) as CoreUserMessage;
const originalModel = this.model(context, userMessage);
const transformedModel = this.transformModel(originalModel, context);
// const transformedModel = this.transformModel(originalModel, context);
const provider = createOpenAI({
baseURL: context.OPENAI_API_BASE,
apiKey: this.apikey(context),
compatibility: 'strict',
fetch: originalModel === transformedModel ? undefined : this.fetch,
// fetch: originalModel === transformedModel ? undefined : this.fetch,
});

const languageModelV1 = provider.languageModel(transformedModel, undefined);
const languageModelV1 = provider.languageModel(originalModel, undefined);
const { messages, onStream: newOnStream } = this.extraHandle(originalModel, params.messages, context, onStream);

return requestChatCompletionsV2(await warpLLMParams({