Avoid the API being blocked in China by updating apiBaseUrl #249

Closed · pengmaochang opened this issue Mar 3, 2023 · 16 comments
@pengmaochang (Author)

Since 8 p.m. last night the GPT-3.5 API has been blocked in China, which has broken all sorts of projects here. Changing the API options in service/src/chatgpt/index.ts works around the block. Specifically:

const options: ChatGPTAPIOptions = {
      apiKey: process.env.OPENAI_API_KEY,
      apiBaseUrl: 'https://openapi.ssiic.com',
      debug: false,
    }

Could the author write this into the source, or expose the parameter as a configurable option of the Docker image? That would help those of us deploying with docker compose.
Thanks a lot!

@JeazW commented Mar 3, 2023

I don't think that's wise: the moment that URL goes down, everyone is affected.

@pengmaochang (Author)

That's exactly why it should be a configurable parameter. Surely that beats the current situation, where so many users in China are blocked.

@Chanzhaoyu added the "status: discussion 来,喝一杯" label Mar 3, 2023
@LuckyZY commented Mar 3, 2023

I think it's better to keep the native API endpoint. Working around an unreachable API address isn't something this project should focus on; you can set up a proxy yourself to solve it.

@pengmaochang (Author)

Make it an optional parameter: if it isn't set, the default native API address is used; if it is set, you supply your own. That way existing deployments are unaffected, and the block is still effectively worked around.
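
For illustration, a minimal sketch of that fallback behavior, assuming the variable is named OPENAI_API_BASE_URL (the name the commit referenced later in this thread ends up using); when it is unset, apiBaseUrl is simply omitted so the chatgpt package falls back to the official endpoint:

import { ChatGPTAPI } from 'chatgpt'
import type { ChatGPTAPIOptions } from '../types'

// OPENAI_API_BASE_URL is optional; leaving it unset keeps the package
// default (the official api.openai.com endpoint), so existing deployments
// are unaffected. docker compose users could pass it under `environment:`.
const options: ChatGPTAPIOptions = {
  apiKey: process.env.OPENAI_API_KEY,
  debug: false,
}

if (process.env.OPENAI_API_BASE_URL)
  options.apiBaseUrl = process.env.OPENAI_API_BASE_URL

const api = new ChatGPTAPI({ ...options })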

@goodwisdom

Agreed, I have the same need.

@font-size

Is the interface timing out because it's blocked?

@pengmaochang (Author)

Replace the whole TS file with:

import * as dotenv from 'dotenv'
import 'isomorphic-fetch'
import type { ChatMessage, SendMessageOptions } from 'chatgpt'
import { ChatGPTAPI, ChatGPTUnofficialProxyAPI } from 'chatgpt'
import { SocksProxyAgent } from 'socks-proxy-agent'
import fetch from 'node-fetch'
import { sendResponse } from '../utils'
import type { ApiModel, ChatContext, ChatGPTAPIOptions, ChatGPTUnofficialProxyAPIOptions, ModelConfig } from '../types'

dotenv.config()

const timeoutMs: number = !isNaN(+process.env.TIMEOUT_MS) ? +process.env.TIMEOUT_MS : 30 * 1000

let apiModel: ApiModel

if (!process.env.OPENAI_API_KEY && !process.env.OPENAI_ACCESS_TOKEN)
  throw new Error('Missing OPENAI_API_KEY or OPENAI_ACCESS_TOKEN environment variable')

let api: ChatGPTAPI | ChatGPTUnofficialProxyAPI

// To use ESM in CommonJS, you can use a dynamic import
(async () => {
  // More Info: https://github.com/transitive-bullshit/chatgpt-api

  if (process.env.OPENAI_API_KEY) {
    const options: ChatGPTAPIOptions = {
      apiKey: process.env.OPENAI_API_KEY,
      apiBaseUrl: 'https://openapi.ssiic.com',
      debug: false,
    }

    api = new ChatGPTAPI({ ...options })
    apiModel = 'ChatGPTAPI'
  }
  else {
    const options: ChatGPTUnofficialProxyAPIOptions = {
      accessToken: process.env.OPENAI_ACCESS_TOKEN,
      debug: false,
    }

    if (process.env.SOCKS_PROXY_HOST && process.env.SOCKS_PROXY_PORT) {
      const agent = new SocksProxyAgent({
        hostname: process.env.SOCKS_PROXY_HOST,
        port: process.env.SOCKS_PROXY_PORT,
      })
      options.fetch = (url, options) => {
        return fetch(url, { agent, ...options })
      }
    }

    if (process.env.API_REVERSE_PROXY)
      options.apiReverseProxyUrl = process.env.API_REVERSE_PROXY

    api = new ChatGPTUnofficialProxyAPI({ ...options }) // accessToken is already part of options
    apiModel = 'ChatGPTUnofficialProxyAPI'
  }
})()

async function chatReply(
  message: string,
  lastContext?: { conversationId?: string; parentMessageId?: string },
) {
  if (!message)
    return sendResponse({ type: 'Fail', message: 'Message is empty' })

  try {
    let options: SendMessageOptions = { timeoutMs }

    if (lastContext)
      options = { ...options, ...lastContext } // merge so timeoutMs is not dropped

    const response = await api.sendMessage(message, { ...options })

    return sendResponse({ type: 'Success', data: response })
  }
  catch (error: any) {
    return sendResponse({ type: 'Fail', message: error.message })
  }
}

async function chatReplyProcess(
  message: string,
  lastContext?: { conversationId?: string; parentMessageId?: string },
  process?: (chat: ChatMessage) => void,
) {
  if (!message)
    return sendResponse({ type: 'Fail', message: 'Message is empty' })

  try {
    let options: SendMessageOptions = { timeoutMs }

    if (lastContext)
      options = { ...options, ...lastContext } // merge so timeoutMs is not dropped

    const response = await api.sendMessage(message, {
      ...options,
      onProgress: (partialResponse) => {
        process?.(partialResponse)
      },
    })

    return sendResponse({ type: 'Success', data: response })
  }
  catch (error: any) {
    return sendResponse({ type: 'Fail', message: error.message })
  }
}

async function chatConfig() {
  return sendResponse({
    type: 'Success',
    data: {
      apiModel,
      reverseProxy: process.env.API_REVERSE_PROXY,
      timeoutMs,
      socksProxy: (process.env.SOCKS_PROXY_HOST && process.env.SOCKS_PROXY_PORT) ? (`${process.env.SOCKS_PROXY_HOST}:${process.env.SOCKS_PROXY_PORT}`) : '-',
    } as ModelConfig,
  })
}

export type { ChatContext, ChatMessage }

export { chatReply, chatReplyProcess, chatConfig }

@pengmaochang (Author)

ChatGPTAPIOptions, ChatGPTUnofficialProxyAPI, and chatConfig have all been changed.

@DevinLin01

> Replace the whole TS file with: [quotes the full listing from the comment above]

Even after swapping it in, I still frequently get "fetch failed". How do I deal with that?
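
One thing worth checking: in the listing above, the SOCKS proxy is only wired into the ChatGPTUnofficialProxyAPI branch, so the OPENAI_API_KEY path still connects directly. A sketch of routing that branch through the same agent, assuming the chatgpt package accepts a custom fetch in ChatGPTAPIOptions:

import { SocksProxyAgent } from 'socks-proxy-agent'
import fetch from 'node-fetch'

// Reuse the SOCKS agent for the official-API branch as well, so the
// apiBaseUrl host is reached through the proxy instead of directly.
if (process.env.SOCKS_PROXY_HOST && process.env.SOCKS_PROXY_PORT) {
  const agent = new SocksProxyAgent({
    hostname: process.env.SOCKS_PROXY_HOST,
    port: process.env.SOCKS_PROXY_PORT,
  })
  options.fetch = (url, opts) => fetch(url, { agent, ...opts })
}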

Chanzhaoyu added a commit that referenced this issue Mar 4, 2023

* feat: add OPENAI_API_BASE_URL optional parameter [#249]
* fix: generated code blocks cannot be copied [#251][#260]
* perf: limit width on high-resolution screens [#257]
* perf: wrap text by word [#215][#225]
* perf: highlight.js new-syntax warning
* fix: input box on mobile not pushed up by the keyboard [#256]
* chore: update docs
* chore: version 2.9.2
@Chanzhaoyu (Owner)

Added.

@kuxingdu commented Mar 4, 2023

localhost keeps erroring. I switched to the code above and changed the URL in the environment too. Is there any way to fix this? The error says: ChatGPTUnofficialProxyAPI.sendMessage: conversationId and parentMessageId must both be set or both be undefined [screenshot: mmexport1677942501020.png]

@kuxingdu commented Mar 4, 2023

@Chanzhaoyu @pengmaochang Do either of you have a solution?

@Smeleo commented Mar 4, 2023

I'm running into the same thing; also looking for a fix.

@pengmaochang (Author)

@kuxingdu @Smeleo Just start a new chat. The problem is most likely that old conversations aren't supported after switching proxies, so delete the old conversation and open a fresh chat window.
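
In the same spirit, a hypothetical server-side guard (reusing the lastContext shape from the listing above) that drops a stale, half-usable context instead of passing it through, so ChatGPTUnofficialProxyAPI never sees only one of the two ids:

const options: SendMessageOptions = { timeoutMs }

// Only restore the previous conversation when both ids survived the
// proxy switch; otherwise start a fresh conversation.
if (lastContext?.conversationId && lastContext?.parentMessageId) {
  options.conversationId = lastContext.conversationId
  options.parentMessageId = lastContext.parentMessageId
}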

@kuxingdu commented Mar 6, 2023 via email

@RenDaniu

@pengmaochang Bro, the endpoint can't be reached anymore.

suikodev referenced this issue in AstraSurge/gpteams Mar 27, 2023 (same commit as above)