feat: add openai o1 & update pricing and max_token of other models (langgenius#11780)

Signed-off-by: -LAN- <laipz8200@outlook.com>
laipz8200 authored and 刘江波 committed Dec 20, 2024
1 parent a64c2c7 commit 39ce222
Showing 11 changed files with 91 additions and 18 deletions.
api/core/model_runtime/model_providers/openai/llm/_position.yaml (12 changes: 7 additions & 5 deletions)
@@ -1,4 +1,7 @@
-- gpt-4o-audio-preview
+- o1
+- o1-2024-12-17
+- o1-mini
+- o1-mini-2024-09-12
 - gpt-4
 - gpt-4o
 - gpt-4o-2024-05-13
@@ -7,10 +10,6 @@
 - chatgpt-4o-latest
 - gpt-4o-mini
 - gpt-4o-mini-2024-07-18
-- o1-preview
-- o1-preview-2024-09-12
-- o1-mini
-- o1-mini-2024-09-12
 - gpt-4-turbo
 - gpt-4-turbo-2024-04-09
 - gpt-4-turbo-preview
@@ -25,4 +24,7 @@
 - gpt-3.5-turbo-1106
 - gpt-3.5-turbo-0613
 - gpt-3.5-turbo-instruct
+- gpt-4o-audio-preview
+- o1-preview
+- o1-preview-2024-09-12
 - text-davinci-003
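
_position.yaml is a flat YAML list that fixes the display order of the provider's models; this change moves the o1 family to the top and relocates the o1-preview snapshots next to gpt-4o-audio-preview near the bottom. As a minimal, illustrative sketch (the helper below is an assumption for illustration, not Dify's actual loader), such a file can be read with a plain YAML parse:

```python
import yaml  # PyYAML


def load_model_order(path: str) -> list[str]:
    """Read a _position.yaml file: it is just a YAML list of model names in display order."""
    with open(path, encoding="utf-8") as f:
        return [str(name) for name in yaml.safe_load(f)]


order = load_model_order("api/core/model_runtime/model_providers/openai/llm/_position.yaml")
print(order[:4])  # ['o1', 'o1-2024-12-17', 'o1-mini', 'o1-mini-2024-09-12'] after this commit
```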

@@ -22,7 +22,7 @@ parameter_rules:
     use_template: frequency_penalty
   - name: max_tokens
     use_template: max_tokens
-    default: 512
+    default: 16384
     min: 1
     max: 16384
   - name: response_format
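
This hunk, and the near-identical ones in the files below, raises the max_tokens rule for the gpt-4o family: the default goes from 512 to 16384, and where the ceiling was still 4096 it is lifted to 16384 as well. A minimal sketch of how such a rule could be applied to a caller-supplied value (the function and the rule dict are illustrative assumptions, not the runtime's actual parameter handling):

```python
def resolve_max_tokens(requested: int | None, rule: dict) -> int:
    """Clamp a requested max_tokens to the rule's bounds; fall back to the default if unset."""
    if requested is None:
        return rule["default"]  # 16384 after this commit (previously 512)
    return max(rule["min"], min(requested, rule["max"]))


rule = {"default": 16384, "min": 1, "max": 16384}  # values from the updated YAML
print(resolve_max_tokens(None, rule))    # 16384
print(resolve_max_tokens(50_000, rule))  # clamped to 16384
```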

@@ -22,9 +22,9 @@ parameter_rules:
     use_template: frequency_penalty
   - name: max_tokens
     use_template: max_tokens
-    default: 512
+    default: 16384
     min: 1
-    max: 4096
+    max: 16384
   - name: response_format
     label:
       zh_Hans: 回复格式

@@ -22,7 +22,7 @@ parameter_rules:
     use_template: frequency_penalty
   - name: max_tokens
     use_template: max_tokens
-    default: 512
+    default: 16384
     min: 1
     max: 16384
   - name: response_format

@@ -22,7 +22,7 @@ parameter_rules:
     use_template: frequency_penalty
   - name: max_tokens
     use_template: max_tokens
-    default: 512
+    default: 16384
     min: 1
     max: 16384
   - name: response_format

@@ -22,9 +22,9 @@ parameter_rules:
     use_template: frequency_penalty
   - name: max_tokens
     use_template: max_tokens
-    default: 512
+    default: 16384
     min: 1
-    max: 4096
+    max: 16384
   - name: response_format
     label:
       zh_Hans: 回复格式

@@ -22,7 +22,7 @@ parameter_rules:
     use_template: frequency_penalty
   - name: max_tokens
     use_template: max_tokens
-    default: 512
+    default: 16384
     min: 1
     max: 16384
   - name: response_format

@@ -22,7 +22,7 @@ parameter_rules:
     use_template: frequency_penalty
   - name: max_tokens
     use_template: max_tokens
-    default: 512
+    default: 16384
     min: 1
     max: 16384
   - name: response_format

api/core/model_runtime/model_providers/openai/llm/gpt-4o.yaml (8 changes: 4 additions & 4 deletions)
@@ -22,9 +22,9 @@ parameter_rules:
     use_template: frequency_penalty
   - name: max_tokens
     use_template: max_tokens
-    default: 512
+    default: 16384
     min: 1
-    max: 4096
+    max: 16384
   - name: response_format
     label:
       zh_Hans: 回复格式
@@ -38,7 +38,7 @@ parameter_rules:
       - text
       - json_object
 pricing:
-  input: '5.00'
-  output: '15.00'
+  input: '2.50'
+  output: '10.00'
   unit: '0.000001'
   currency: USD
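
With unit: '0.000001', the per-token price is price times unit, so the new figures work out to $2.50 per million input tokens and $10.00 per million output tokens for gpt-4o, down from $5.00/$15.00. A small worked sketch of how a cost could be derived from these fields (the helper is an assumption for illustration, not Dify's billing code):

```python
from decimal import Decimal


def estimate_cost(prompt_tokens: int, completion_tokens: int, pricing: dict) -> Decimal:
    """Cost = tokens * unit * price, using the pricing block from the model YAML."""
    unit = Decimal(pricing["unit"])
    input_cost = prompt_tokens * unit * Decimal(pricing["input"])
    output_cost = completion_tokens * unit * Decimal(pricing["output"])
    return input_cost + output_cost


gpt_4o = {"input": "2.50", "output": "10.00", "unit": "0.000001", "currency": "USD"}
print(estimate_cost(1_000_000, 0, gpt_4o))  # $2.50 for one million input tokens
```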

@@ -0,0 +1,35 @@
model: o1-2024-12-17
label:
  en_US: o1-2024-12-17
model_type: llm
features:
  - multi-tool-call
  - agent-thought
  - stream-tool-call
  - vision
model_properties:
  mode: chat
  context_size: 200000
parameter_rules:
  - name: max_tokens
    use_template: max_tokens
    default: 50000
    min: 1
    max: 50000
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: response_format
    type: string
    help:
      zh_Hans: 指定模型必须输出的格式
      en_US: specifying the format that the model must output
    required: false
    options:
      - text
      - json_object
pricing:
  input: '15.00'
  output: '60.00'
  unit: '0.000001'
  currency: USD

api/core/model_runtime/model_providers/openai/llm/o1.yaml (36 changes: 36 additions & 0 deletions)
@@ -0,0 +1,36 @@
model: o1
label:
  zh_Hans: o1
  en_US: o1
model_type: llm
features:
  - multi-tool-call
  - agent-thought
  - stream-tool-call
  - vision
model_properties:
  mode: chat
  context_size: 200000
parameter_rules:
  - name: max_tokens
    use_template: max_tokens
    default: 50000
    min: 1
    max: 50000
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: response_format
    type: string
    help:
      zh_Hans: 指定模型必须输出的格式
      en_US: specifying the format that the model must output
    required: false
    options:
      - text
      - json_object
pricing:
  input: '15.00'
  output: '60.00'
  unit: '0.000001'
  currency: USD
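
The two new entries, o1 and o1-2024-12-17, share the same schema: chat mode, a 200,000-token context size, vision and tool-call features, a max_tokens rule capped at 50,000, and pricing of $15.00 per 1M input and $60.00 per 1M output tokens. A short illustrative sketch tying those fields together (the loading code and the cost formula are assumptions, not the provider runtime, and the estimate ignores that prompt and completion must share the context window):

```python
import yaml
from decimal import Decimal

with open("api/core/model_runtime/model_providers/openai/llm/o1.yaml", encoding="utf-8") as f:
    schema = yaml.safe_load(f)

max_rule = next(r for r in schema["parameter_rules"] if r["name"] == "max_tokens")
print(schema["model_properties"]["context_size"], max_rule["max"])  # 200000 50000

# Rough upper bound on spend for one maxed-out call, assuming cost = tokens * unit * price.
unit = Decimal(schema["pricing"]["unit"])
worst_case = (schema["model_properties"]["context_size"] * unit * Decimal(schema["pricing"]["input"])
              + max_rule["max"] * unit * Decimal(schema["pricing"]["output"]))
print(worst_case)  # 3.00 (input) + 3.00 (output) = 6.00 USD
```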
