feat: support multiple API keys and scheduling strategy (#21)
lvqq authored Apr 15, 2023
1 parent 7f1014c commit e2f20e0
Showing 9 changed files with 68 additions and 29 deletions.
4 changes: 3 additions & 1 deletion .env.example
@@ -1,4 +1,6 @@
OPENAI_API_KEY=
LANGUAGE=
LANGUAGE=en
OPENAI_API_BASE_URL=api.openai.com
API_KEY_STRATEGY=random
LOCAL_PROXY=
DISABLE_LOCAL_PROXY=
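
For reference, a filled-in `.env` using the new options might look like the following sketch; the key values are placeholders, and `API_KEY_STRATEGY` falls back to `random` when omitted:

```env
# Multiple keys are separated by commas; the server picks one per request
OPENAI_API_KEY=sk-key-one,sk-key-two
LANGUAGE=en
OPENAI_API_BASE_URL=api.openai.com
# Scheduling strategy for multiple keys: polling or random
API_KEY_STRATEGY=polling
LOCAL_PROXY=
DISABLE_LOCAL_PROXY=
```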
33 changes: 17 additions & 16 deletions README.md
@@ -77,27 +77,28 @@ Run `pnpm build` and `pnpm server`. Refer: [astro-node](https://docs.astro.build
### Deployment Configurations
All deployment configurations could be configured in the `.env` file or in **Environment Variables** of Vercel

| Configuration | Default Value | Description |
| ------------------- | -------------- | ------------------------------------------------------------------------------------------ |
| OPENAI_API_KEY | - | Key for API request, [how to generate](https://platform.openai.com/account/api-keys) |
| LANGUAGE | en | The default language of the website, including prompts. Supported languages: **zh**/**en** |
| OPENAI_API_BASE_URL | api.openai.com | The default address of the requested api |
| Configuration | Default Value | Description |
| ------------------- | -------------- | ------------------------------------------------------------------------------------------------------------------------------------- |
| OPENAI_API_KEY      | -              | Key for API requests; multiple keys are supported, separated by commas. [How to generate](https://platform.openai.com/account/api-keys) |
| LANGUAGE | en | The default language of the website, including prompts. Supported languages: **zh**/**en** |
| API_KEY_STRATEGY    | random         | The scheduling strategy for multiple keys: **polling**/**random**                                                                       |
| OPENAI_API_BASE_URL | api.openai.com | The default address of the requested api |


### Global Configurations
All global configurations will be stored locally

| Configuration | Default Value | Description |
| ------------------------------------- | ------------- | ------------------------------------------------------------------------------------------------------------------- |
| OpenAI Api Key | - | The same with the deployment configuration |
| Language | en | The language of the website, including prompts. Supported languages: **zh**/**en** |
| Save all conversations | false | The conversation won't be lost after the page is refreshed |
| Temperature | 1 | The larger the value, the more random the answer, with a range of 0-2 |
| Model | gpt-3.5-turbo | Model used in api request, [supported models](https://platform.openai.com/docs/models/model-endpoint-compatibility) |
| Continuous conversations | true | Carry the context for the conversations |
| Number of historical messages carried | 4 | For continuous conversations, the number of historical messages carried |
| Number of generated images | 1 | The number of images generated in a single image generation conversation |
| Size of generated images | 256x256 | The size of a single image in image generation conversation |
| Configuration | Default Value | Description |
| ------------------------------------- | ------------- | --------------------------------------------------------------------------------------------------------------------- |
| OpenAI Api Key | - | Only a single key is supported. If it is configured on the page, the key in the environment variable will not be used |
| Language | en | The language of the website, including prompts. Supported languages: **zh**/**en** |
| Save all conversations | false | The conversation won't be lost after the page is refreshed |
| Temperature | 1 | The larger the value, the more random the answer, with a range of 0-2 |
| Model | gpt-3.5-turbo | Model used in api request, [supported models](https://platform.openai.com/docs/models/model-endpoint-compatibility) |
| Continuous conversations | true | Carry the context for the conversations |
| Number of historical messages carried | 4 | For continuous conversations, the number of historical messages carried |
| Number of generated images | 1 | The number of images generated in a single image generation conversation |
| Size of generated images | 256x256 | The size of a single image in image generation conversation |

## Planned Features
- [ ] Export functionality to export as markdown and images
13 changes: 7 additions & 6 deletions README.zh-CN.md
@@ -88,19 +88,20 @@
### 部署配置
所有部署配置都可以在 `.env` 文件或者 Vercel 的环境变量中配置

| 配置项 | 默认值 | 描述 |
| ------------------- | -------------- | ------------------------------------------------------------------------------------ |
| OPENAI_API_KEY | - | Api 请求使用的 key, [如何生成](https://platform.openai.com/account/api-keys) |
| LANGUAGE | en | 站点的默认语言,包含预设提示,支持的语言: **zh**/**en** |
| OPENAI_API_BASE_URL | api.openai.com | 请求 api 的默认地址 |
| 配置项 | 默认值 | 描述 |
| ------------------- | -------------- | -------------------------------------------------------------------------------------------------- |
| OPENAI_API_KEY | - | Api 请求使用的 key, 支持多个 key,以逗号分隔,[如何生成](https://platform.openai.com/account/api-keys) |
| LANGUAGE | en | 站点的默认语言,包含预设提示,支持的语言: **zh**/**en** |
| API_KEY_STRATEGY    | random         | 多个 key 时的调度策略模式:轮询(**polling**)、随机(**random**)|
| OPENAI_API_BASE_URL | api.openai.com | 请求 api 的默认地址 |


### 全局配置
所有页面中的全局配置都会被缓存到本地

| 配置项 | 默认值 | 描述 |
| --------------- | ------------- | --------------------------------------------------------------------------------------------------------- |
| OpenAI Api Key | - | 和部署配置中的含义一样 |
| OpenAI Api Key | - | 仅支持单个 key,页面里填写后不会使用环境变量中配置的 key |
| 语言 | en | 站点的语言,包含预设提示,支持的语言: **zh**/**en** |
| 保留所有会话 | false | 页面刷新会话不会丢失 |
| 发散程度 | 1 | 数值越大,回答越随机,范围是 0-2 |
4 changes: 4 additions & 0 deletions src/configs/server.ts
@@ -0,0 +1,4 @@
export const serverGlobalConfigs: { polling: number } = {
// load balancer polling step
polling: 0,
};
2 changes: 2 additions & 0 deletions src/interfaces/index.ts
@@ -9,6 +9,8 @@ export interface Message {

export type ConversationMode = 'text' | 'image';

export type StrategyMode = 'polling' | 'random';

export type Lang = 'zh' | 'en';

export interface Conversation {
9 changes: 7 additions & 2 deletions src/pages/api/completions.ts
@@ -4,7 +4,8 @@ import type { ParsedEvent, ReconnectInterval } from 'eventsource-parser';
import { createParser } from 'eventsource-parser';
import { defaultModel, supportedModels } from '@configs';
import { Message } from '@interfaces';
import { apiKey, baseURL, config } from '.';
import { loadBalancer } from '@utils/server';
import { apiKeyStrategy, apiKeys, baseURL, config } from '.';

export { config };

@@ -19,7 +20,11 @@ export const post: APIRoute = async ({ request }) => {
const { messages, temperature = 1 } = body;
let { key, model } = body;

key = key || apiKey;
if (!key) {
const next = loadBalancer(apiKeys, apiKeyStrategy);
key = next();
}

model = model || defaultModel;

if (!key) {
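As a rough sketch of how this route could be called from the client (the endpoint path assumes Astro's default `src/pages/api` routing, and the message shape assumes OpenAI's role/content format), omitting `key` from the body makes the server fall back to the load-balanced keys from the environment:

```ts
// Hypothetical caller: no `key` is sent, so the server selects one
// from OPENAI_API_KEY via loadBalancer(apiKeys, apiKeyStrategy).
const response = await fetch('/api/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    messages: [{ role: 'user', content: 'Hello!' }],
    temperature: 1,
    model: 'gpt-3.5-turbo',
  }),
});
```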
8 changes: 6 additions & 2 deletions src/pages/api/images.ts
@@ -1,6 +1,7 @@
/* eslint-disable no-console */
import type { APIRoute } from 'astro';
import { apiKey, baseURL, config } from '.';
import { loadBalancer } from '@utils/server';
import { apiKeyStrategy, apiKeys, baseURL, config } from '.';

export { config };

@@ -15,7 +16,10 @@ export const post: APIRoute = async ({ request }) => {
const { prompt, size = '256x256', n = 1 } = body;
let { key } = body;

key = key || apiKey;
if (!key) {
const next = loadBalancer(apiKeys, apiKeyStrategy);
key = next();
}

if (!key) {
return new Response(JSON.stringify({ msg: 'No API key provided' }), {
10 changes: 8 additions & 2 deletions src/pages/api/index.ts
@@ -1,6 +1,9 @@
import { StrategyMode } from '@interfaces';

// read apiKey from env/process.env
export const apiKey =
import.meta.env.OPENAI_API_KEY || process.env.OPENAI_API_KEY;
export const apiKeys =
(import.meta.env.OPENAI_API_KEY || process.env.OPENAI_API_KEY)?.split(',') ??
[];

// read disableProxy from env
export const disableProxy = import.meta.env.DISABLE_LOCAL_PROXY === 'true';
@@ -20,6 +23,9 @@ export const baseURL = (
: apiBaseUrl
)?.replace(/^https?:\/\//i, '');

export const apiKeyStrategy: StrategyMode =
import.meta.env.API_KEY_STRATEGY || process.env.API_KEY_STRATEGY || 'random';

/**
* https://vercel.com/docs/concepts/edge-network/regions#region-list
* disable hkg1 HongKong
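To illustrate the parsing above (the `parseKeys` helper is hypothetical and simply mirrors the `split` expression in the diff), a comma-separated `OPENAI_API_KEY` becomes an array of keys; note that whitespace around the commas is not trimmed:

```ts
const parseKeys = (raw?: string): string[] => raw?.split(',') ?? [];

parseKeys('sk-aaa,sk-bbb');  // ['sk-aaa', 'sk-bbb']
parseKeys('sk-aaa, sk-bbb'); // ['sk-aaa', ' sk-bbb'] (leading space is kept)
parseKeys(undefined);        // [] (no key configured)
```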
14 changes: 14 additions & 0 deletions src/utils/server.ts
@@ -0,0 +1,14 @@
import { serverGlobalConfigs } from '@configs/server';
import { StrategyMode } from '@interfaces';

export function loadBalancer<T>(arr: T[], strategy: StrategyMode = 'random') {
if (!Array.isArray(arr) || arr.length === 0) return () => null;
if (arr.length === 1) return () => arr[0];

if (strategy === 'polling') {
// eslint-disable-next-line no-plusplus
return () => arr[serverGlobalConfigs.polling++ % arr.length];
}

return () => arr[Math.floor(Math.random() * arr.length)];
}
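
A short sketch of how the two strategies behave when the returned picker is called repeatedly; the keys below are placeholders, and the polling sequence assumes the shared counter starts at 0 with no other requests advancing it:

```ts
import { loadBalancer } from '@utils/server';

const keys = ['sk-aaa', 'sk-bbb', 'sk-ccc'];

// Polling: rotates through the array using the shared counter
const nextPolling = loadBalancer(keys, 'polling');
nextPolling(); // 'sk-aaa'
nextPolling(); // 'sk-bbb'
nextPolling(); // 'sk-ccc'
nextPolling(); // 'sk-aaa' again

// Random: independent uniform pick on every call
const nextRandom = loadBalancer(keys, 'random');
nextRandom(); // any of the three keys
```

Because the polling counter lives in module-level state (`serverGlobalConfigs.polling`), each server instance keeps its own rotation, which resets whenever the instance is recycled.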
