feat: Upgrade go-openai to the latest version to support gpt-3.5-turbo-0613 #268

Merged (1 commit) on Jul 1, 2023
README.md (8 changes: 4 additions & 4 deletions)
@@ -187,7 +187,7 @@ $ docker run -itd --name chatgpt -p 8090:8090 \
-v ./data:/app/data --add-host="host.docker.internal:host-gateway" \
-e LOG_LEVEL="info" -e APIKEY=<your-api-key> -e BASE_URL="" \
-e MODEL="gpt-3.5-turbo" -e SESSION_TIMEOUT=600 \
-e MAX_QUESTION_LENL=4096 -e MAX_ANSWER_LEN=4096 -e MAX_TEXT=4096 \
-e MAX_QUESTION_LENL=2048 -e MAX_ANSWER_LEN=2048 -e MAX_TEXT=4096 \
-e HTTP_PROXY="http://host.docker.internal:15732" \
-e DEFAULT_MODE="单聊" -e MAX_REQUEST=0 -e PORT=8090 \
-e SERVICE_URL="<publicly accessible URL of this service>" -e CHAT_TYPE="0" \
@@ -400,14 +400,14 @@ run_mode: "stream"
api_key: "xxxxxxxxx"
# If you use the official endpoint https://api.openai.com, leave this empty; if you want requests to go to a different URL, set it here (include the http protocol). If you are using Azure, this option can be left empty or simply ignored
base_url: ""
# Model to use, default gpt-3.5-turbo. Options: "gpt-4-0314", "gpt-4", "gpt-3.5-turbo-0301", "gpt-3.5-turbo". If you use gpt-4, confirm that your account is whitelisted for the API. If you are using Azure, this option can be left empty or simply ignored
# Model to use, default gpt-3.5-turbo. Options: "gpt-4-32k-0613", "gpt-4-32k-0314", "gpt-4-32k", "gpt-4-0613", "gpt-4-0314", "gpt-4", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-16k", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-0301", "gpt-3.5-turbo". If you use gpt-4, confirm that your account is whitelisted for the API. If you are using Azure, this option can be left empty or simply ignored
model: "gpt-3.5-turbo"
# Session timeout, default 600 seconds; every message sent to the bot within this window is used as context
session_timeout: 600
# Maximum question length
max_question_len: 4096
max_question_len: 2048
# Maximum answer length
max_answer_len: 4096
max_answer_len: 2048
# Maximum context length; this can usually be set to the model's token limit
max_text: 4096
# Proxy for outgoing requests; if empty, no proxy is used (include the http or socks5 protocol). If you are using Azure, this option can be left empty or simply ignored
config.example.yml (6 changes: 3 additions & 3 deletions)
@@ -6,14 +6,14 @@ run_mode: "stream"
api_key: "xxxxxxxxx"
# If you use the official endpoint https://api.openai.com, leave this empty; if you want requests to go to a different URL, set it here (include the http protocol). If you are using Azure, this option can be left empty or simply ignored
base_url: ""
# Model to use, default gpt-3.5-turbo. Options: "gpt-4-0314", "gpt-4", "gpt-3.5-turbo-0301", "gpt-3.5-turbo". If you use gpt-4, confirm that your account is whitelisted for the API. If you are using Azure, this option can be left empty or simply ignored
# Model to use, default gpt-3.5-turbo. Options: "gpt-4-32k-0613", "gpt-4-32k-0314", "gpt-4-32k", "gpt-4-0613", "gpt-4-0314", "gpt-4", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-16k", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-0301", "gpt-3.5-turbo". If you use gpt-4, confirm that your account is whitelisted for the API. If you are using Azure, this option can be left empty or simply ignored
model: "gpt-3.5-turbo"
# Session timeout, default 600 seconds; every message sent to the bot within this window is used as context
session_timeout: 600
# Maximum question length
max_question_len: 4096
max_question_len: 2048
# Maximum answer length
max_answer_len: 4096
max_answer_len: 2048
# Maximum context length; this can usually be set to the model's token limit
max_text: 4096
# Proxy for outgoing requests; if empty, no proxy is used (include the http or socks5 protocol). If you are using Azure, this option can be left empty or simply ignored
docker-compose.yml (6 changes: 3 additions & 3 deletions)
@@ -10,10 +10,10 @@ services:
APIKEY: xxxxxx # your api_key
RUN_MODE: "stream" # Run mode, http or stream; stream mode is strongly recommended, see: https://open.dingtalk.com/document/isvapp/stream
BASE_URL: "" # If you use the official endpoint https://api.openai.com, leave this empty; if you want requests to go to a different URL, set it here (include the http protocol)
MODEL: "gpt-3.5-turbo" # Model to use, default gpt-3.5-turbo. Options: "gpt-4-0314", "gpt-4", "gpt-3.5-turbo-0301", "gpt-3.5-turbo". If you use gpt-4, confirm that your account is whitelisted for the API
MODEL: "gpt-3.5-turbo" # Model to use, default gpt-3.5-turbo. Options: "gpt-4-32k-0613", "gpt-4-32k-0314", "gpt-4-32k", "gpt-4-0613", "gpt-4-0314", "gpt-4", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-16k", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-0301", "gpt-3.5-turbo". If you use gpt-4, confirm that your account is whitelisted for the API. If you are using Azure, this option can be left empty or simply ignored
SESSION_TIMEOUT: 600 # Session timeout, default 600 seconds; every message sent to the bot within this window is used as context
MAX_QUESTION_LEN: 4096 # Maximum question length, default 4096 tokens; the default is fine in most cases, but can be adjusted to the model's token limit when using gpt4-8k or gpt4-32k.
MAX_ANSWER_LEN: 4096 # Maximum answer length, default 4096 tokens; the default is fine in most cases, but can be adjusted to the model's token limit when using gpt4-8k or gpt4-32k.
MAX_QUESTION_LEN: 2048 # Maximum question length, default 4096 tokens; the default is fine in most cases, but can be adjusted to the model's token limit when using gpt4-8k or gpt4-32k.
MAX_ANSWER_LEN: 2048 # Maximum answer length, default 4096 tokens; the default is fine in most cases, but can be adjusted to the model's token limit when using gpt4-8k or gpt4-32k.
MAX_TEXT: 4096 # Maximum text = question + answer; API limit, default 4096 tokens; the default is fine in most cases, but can be adjusted to the model's token limit when using gpt4-8k or gpt4-32k.
HTTP_PROXY: http://host.docker.internal:15777 # Proxy for outgoing requests; if empty, no proxy is used (include the http or socks5 protocol)
DEFAULT_MODE: "单聊" # Default conversation mode, customizable as needed; if unset it defaults to 单聊 (single chat), i.e. conversations without shared context
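
The compose file lowers MAX_QUESTION_LEN and MAX_ANSWER_LEN from 4096 to 2048 while keeping MAX_TEXT at 4096, which keeps question plus answer inside the 4096-token MAX_TEXT budget. As a rough illustration of how limits like these are typically read from the environment and applied (this is not the project's actual config loader; the envInt helper is hypothetical, and real token counting would go through a tokenizer rather than byte length):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// envInt is a hypothetical helper: read an integer environment variable,
// falling back to a default when it is unset or malformed.
func envInt(key string, def int) int {
	if v, err := strconv.Atoi(os.Getenv(key)); err == nil {
		return v
	}
	return def
}

func main() {
	maxQuestion := envInt("MAX_QUESTION_LEN", 2048) // defaults mirror the compose file above
	maxAnswer := envInt("MAX_ANSWER_LEN", 2048)
	maxText := envInt("MAX_TEXT", 4096)

	question := "hello"
	// Byte length stands in for token length here; the project itself counts tokens.
	if len(question) > maxQuestion || len(question) > maxText-maxAnswer {
		fmt.Println("question too long, reject before calling the API")
		return
	}
	fmt.Println("question accepted; answer budget:", maxAnswer)
}
```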
go.mod (2 changes: 1 addition & 1 deletion)
@@ -9,7 +9,7 @@ require (
github.com/go-resty/resty/v2 v2.7.0
github.com/open-dingtalk/dingtalk-stream-sdk-go v0.0.4
github.com/patrickmn/go-cache v2.1.0+incompatible
github.com/sashabaranov/go-openai v1.6.1
github.com/sashabaranov/go-openai v1.12.0
github.com/solywsh/chatgpt v0.0.14
gopkg.in/yaml.v2 v2.4.0
gorm.io/gorm v1.24.6
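
Bumping go-openai from v1.6.1 to v1.12.0 is what brings in the constants for the June 2023 snapshots (gpt-3.5-turbo-0613, gpt-3.5-turbo-16k, gpt-4-0613, gpt-4-32k-0613) that the rest of this diff references; the github.com prefix in go.mod appears to be a mirror of the canonical module path github.com/sashabaranov/go-openai. A minimal sketch of calling the chat endpoint with one of the newly added constants, assuming the standard go-openai v1.12.0 client API:

```go
package main

import (
	"context"
	"fmt"
	"os"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	// NewClient and CreateChatCompletion are go-openai's standard API;
	// GPT3Dot5Turbo0613 is one of the constants gained by the v1.12.0 upgrade.
	client := openai.NewClient(os.Getenv("OPENAI_API_KEY"))

	resp, err := client.CreateChatCompletion(context.Background(), openai.ChatCompletionRequest{
		Model: openai.GPT3Dot5Turbo0613,
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: "ping"},
		},
	})
	if err != nil {
		fmt.Println("chat completion failed:", err)
		return
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```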
go.sum (4 changes: 2 additions & 2 deletions)
@@ -119,8 +119,8 @@ github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/rogpeppe/go-internal v1.8.0 h1:FCbCCtXNOY3UtUuHUYaghJg4y7Fd14rXifAYUAtL9R8=
github.com/sashabaranov/go-openai v1.6.1 h1:cALA9G00gPapNqun8vVBFGsDssywpU6wys4BpQ0bWqY=
github.com/sashabaranov/go-openai v1.6.1/go.mod h1:lj5b/K+zjTSFxVLijLSTDZuP7adOgerWeFyZLUhAKRg=
github.com/sashabaranov/go-openai v1.12.0 h1:aRNHH0gtVfrpIaEolD0sWrLLRnYQNK4cH/bIAHwL8Rk=
github.com/sashabaranov/go-openai v1.12.0/go.mod h1:lj5b/K+zjTSFxVLijLSTDZuP7adOgerWeFyZLUhAKRg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
pkg/chatgpt/chatgpt.go (5 changes: 2 additions & 3 deletions)
@@ -41,7 +41,6 @@ func New(userId string) *ChatGPT {
public.Config.AzureOpenAIToken,
"https://"+public.Config.AzureResourceName+".openai."+
"azure.com/",
public.Config.AzureDeploymentName,
)
} else {
if public.Config.HttpProxy != "" {
@@ -61,8 +60,8 @@
ctx: ctx,
userId: userId,
maxQuestionLen: public.Config.MaxQuestionLen, // maximum question length
maxAnswerLen: public.Config.MaxAnswerLen, // maximum answer length
maxText: public.Config.MaxText, // maximum text = question + answer (API limit)
maxAnswerLen: public.Config.MaxAnswerLen, // maximum answer length
maxText: public.Config.MaxText, // maximum text = question + answer (API limit)
timeOut: public.Config.SessionTimeout,
doneChan: timeOutChan,
cancel: func() {
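
The change in the Azure branch above is the removal of the public.Config.AzureDeploymentName argument: newer go-openai releases drop the deployment-name parameter from the Azure config constructor. The constructor call itself sits just above the visible hunk, so the sketch below assumes it is go-openai's DefaultAzureConfig, which in v1.12.0 takes only the API key and the resource endpoint; newAzureClient and the model-mapper body are illustrative, not the project's code.

```go
package chatclient

import (
	"strings"

	openai "github.com/sashabaranov/go-openai"
)

// newAzureClient mirrors the post-upgrade call shape: DefaultAzureConfig now
// takes only the API key and the resource endpoint, so the deployment-name
// argument removed in this diff is no longer passed.
func newAzureClient(apiKey, resourceName string) *openai.Client {
	config := openai.DefaultAzureConfig(apiKey, "https://"+resourceName+".openai.azure.com/")

	// Optional: map OpenAI model names to Azure deployment names explicitly
	// (assumption: ClientConfig exposes AzureModelMapperFunc in v1.12.0).
	config.AzureModelMapperFunc = func(model string) string {
		return strings.ReplaceAll(model, ".", "") // e.g. "gpt-3.5-turbo" -> "gpt-35-turbo"
	}

	return openai.NewClientWithConfig(config)
}
```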
pkg/chatgpt/context.go (31 changes: 16 additions & 15 deletions)
@@ -7,13 +7,14 @@ import (
"encoding/gob"
"errors"
"fmt"
"github.com/eryajf/chatgpt-dingtalk/pkg/dingbot"
"github.com/pandodao/tokenizer-go"
"image/png"
"os"
"strings"
"time"

"github.com/eryajf/chatgpt-dingtalk/pkg/dingbot"
"github.com/pandodao/tokenizer-go"

"github.com/eryajf/chatgpt-dingtalk/public"
openai "github.com/sashabaranov/go-openai"
)
@@ -166,24 +167,24 @@ func (c *ChatGPT) ChatWithContext(question string) (answer string, err error) {
if len(c.ChatContext.old) > 1 { // keep at least one record
c.ChatContext.PollConversation() // drop the oldest exchange
// rebuild the prompt and recompute its length
promptTable = promptTable[1:] // remove the corresponding entry from promptTable
promptTable = promptTable[1:] // remove the corresponding entry from promptTable
prompt = strings.Join(promptTable, "\n") + c.ChatContext.startSeq
} else {
break // only one record left, so exit the loop
}
}
}
// if tokenizer.MustCalToken(prompt) > c.maxText-c.maxAnswerLen {
// return "", OverMaxTextLength
// }
// if tokenizer.MustCalToken(prompt) > c.maxText-c.maxAnswerLen {
// return "", OverMaxTextLength
// }
model := public.Config.Model
userId := c.userId
if public.Config.AzureOn {
userId = ""
}
if model == openai.GPT3Dot5Turbo0301 ||
model == openai.GPT3Dot5Turbo ||
model == openai.GPT4 || model == openai.GPT40314 ||
model == openai.GPT432K || model == openai.GPT432K0314 {
if model == openai.GPT3Dot5Turbo || model == openai.GPT3Dot5Turbo0301 || model == openai.GPT3Dot5Turbo0613 ||
model == openai.GPT3Dot5Turbo16K || model == openai.GPT3Dot5Turbo16K0613 ||
model == openai.GPT4 || model == openai.GPT40314 || model == openai.GPT40613 ||
model == openai.GPT432K || model == openai.GPT432K0314 || model == openai.GPT432K0613 {
req := openai.ChatCompletionRequest{
Model: model,
Messages: []openai.ChatCompletionMessage{
@@ -239,10 +240,10 @@ }
}
func (c *ChatGPT) GenreateImage(ctx context.Context, prompt string) (string, error) {
model := public.Config.Model
if model == openai.GPT3Dot5Turbo0301 ||
model == openai.GPT3Dot5Turbo ||
model == openai.GPT4 || model == openai.GPT40314 ||
model == openai.GPT432K || model == openai.GPT432K0314 {
if model == openai.GPT3Dot5Turbo || model == openai.GPT3Dot5Turbo0301 || model == openai.GPT3Dot5Turbo0613 ||
model == openai.GPT3Dot5Turbo16K || model == openai.GPT3Dot5Turbo16K0613 ||
model == openai.GPT4 || model == openai.GPT40314 || model == openai.GPT40613 ||
model == openai.GPT432K || model == openai.GPT432K0314 || model == openai.GPT432K0613 {
req := openai.ImageRequest{
Prompt: prompt,
Size: openai.CreateImageSize1024x1024,
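
With the upgrade, the model allowlists in ChatWithContext and GenreateImage grow from six to eleven equality checks, and the same list is now repeated in both functions. If the list keeps growing, collecting it into a single set keyed by the go-openai constants is a common alternative; the sketch below is a suggestion, not what the PR implements.

```go
package chatmodels

import openai "github.com/sashabaranov/go-openai"

// chatModels collects the models accepted after this upgrade; the constants
// are the ones referenced in the diff above.
var chatModels = map[string]struct{}{
	openai.GPT3Dot5Turbo:        {},
	openai.GPT3Dot5Turbo0301:    {},
	openai.GPT3Dot5Turbo0613:    {},
	openai.GPT3Dot5Turbo16K:     {},
	openai.GPT3Dot5Turbo16K0613: {},
	openai.GPT4:                 {},
	openai.GPT40314:             {},
	openai.GPT40613:             {},
	openai.GPT432K:              {},
	openai.GPT432K0314:          {},
	openai.GPT432K0613:          {},
}

// isChatModel replaces the long "model == ... || model == ..." chains.
func isChatModel(model string) bool {
	_, ok := chatModels[model]
	return ok
}
```

Both call sites would then reduce to a single `if isChatModel(model) { ... }` check.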
prompt.yml (3 changes: 3 additions & 0 deletions)
@@ -65,3 +65,6 @@
- title: "#编程问题"
prefix: "I want you to act as a Stack Overflow post. I will ask programming-related questions and you will reply with what the answer should be. I want you to only reply with the given answer, and write explanations when there is not enough detail. Do not write explanations. When I need to tell you something in English, I will put the text inside curly braces {like this}. My first question is:"
suffix: ""
- title: "#画图"
prefix: "system: The assistant is good at judging user intent. When it decides an image should be provided, it becomes terse and outputs only a markdown image in the following format: ![description](https://image.pollinations.ai/prompt/description), because this syntax automatically generates and renders an image from the prompt. The descriptions users give are usually short and lack detail, so the assistant expands them on its own into the long, detailed English prompts commonly used for AI image generation (adding camera aperture, concrete scene descriptions and so on) to greatly improve the quality and richness of the generated image. The assistant avoids wrapping the markdown in code blocks or raw blocks, because that would only render a code block or raw block instead of an image.\nuser: Can you draw some pictures for me?\nassistant: Sure, what do you want me to draw?\nuser: "
suffix: ""
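
The new "#画图" ("draw") entry is a prompt template: its prefix is a system-style instruction that makes the model reply with a pollinations.ai markdown image link, and the user's message is appended after the trailing "user: ". How the title/prefix/suffix fields are expanded is not shown in this diff, so the sketch below only illustrates the usual shape of such templates; PromptEntry and buildPrompt are hypothetical names, not the project's types.

```go
package prompts

// PromptEntry mirrors the title/prefix/suffix fields in prompt.yml;
// the real project may model this differently.
type PromptEntry struct {
	Title  string `yaml:"title"`
	Prefix string `yaml:"prefix"`
	Suffix string `yaml:"suffix"`
}

// buildPrompt wraps the user's question with the entry's prefix and suffix,
// which is the usual way such templates are expanded before calling the API.
func buildPrompt(e PromptEntry, question string) string {
	return e.Prefix + question + e.Suffix
}
```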
public/public.go (2 changes: 1 addition & 1 deletion)
@@ -27,7 +27,7 @@ func InitSvc() {
// initialize the database
db.InitDB()
// do not fetch the account balance during initialization for now
if Config.Model == openai.GPT3Dot5Turbo0301 || Config.Model == openai.GPT3Dot5Turbo {
if Config.Model == openai.GPT3Dot5Turbo0613 || Config.Model == openai.GPT3Dot5Turbo0301 || Config.Model == openai.GPT3Dot5Turbo {
_, _ = GetBalance()
}
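
InitSvc only queries the account balance for the GPT-3.5 chat models, and this change simply adds the new 0613 snapshot to that condition. As more dated snapshots appear, a prefix check is one way to cover every gpt-3.5-turbo variant at once; again a suggestion rather than what the PR does:

```go
package public

import (
	"strings"

	openai "github.com/sashabaranov/go-openai"
)

// isGPT35 matches gpt-3.5-turbo and all of its dated/16k snapshots by prefix,
// so new releases such as gpt-3.5-turbo-0613 are covered without touching the condition.
func isGPT35(model string) bool {
	return strings.HasPrefix(model, openai.GPT3Dot5Turbo)
}
```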
}