Adding support for GPT-4o and GPT-4-turbo. #32
Conversation
Merging GPT-4 turbo updates.
…s a little bit buggy.
Starting the review; so far it looks good.
src/gpt3.js
Outdated
@@ -1,8 +1,16 @@
import GPT3Tokenizer from "gpt3-tokenizer";

const tokenizer = new GPT3Tokenizer({ type: "gpt3" });
const CHAT_API_MODELS = {
This needs to be `export const CHAT_API_MODELS` to work correctly, and the imports should become `import { CHAT_API_MODELS } from` .
None of the imports were actually working, so all the `model in CHAT_API_MODELS` conditions (apart from the one in gpt3.js) were false by default.
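A minimal sketch of the fix described above. The model names listed here are illustrative assumptions, not the project's actual list; in the real `src/gpt3.js` the declaration would carry `export const` so that other modules can import it.

```javascript
// Sketch of the membership check the review describes. In the real
// module this would be `export const CHAT_API_MODELS = { ... }`;
// without a working export/import, other modules never see this object
// and the chat-model branch never runs.
const CHAT_API_MODELS = {
  "gpt-3.5-turbo": true,
  "gpt-4-turbo": true,
  "gpt-4o": true,
};

// `in` checks whether the model name is a key of the object.
function isChatModel(model) {
  return model in CHAT_API_MODELS;
}
```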
Note also that all prompts should now be formatted as:
{ "role": "user", "content": text }
including the custom prompts in background.js.
Given that all models are now chat-type, we may add a switch to choose between the fast prompt and the ChatGPT-style prompt.
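The message shape above, plus the suggested switch, could be sketched roughly as follows. The function name, the `style` flag, and the system-message text are hypothetical, introduced only for illustration:

```javascript
// Hypothetical helper: wraps a plain prompt string (including the
// custom prompts in background.js) in the chat-completions message
// shape. The `style` flag stands in for the suggested switch between
// the fast one-shot prompt and a ChatGPT-style prompt.
function buildMessages(text, style = "fast") {
  const messages = [];
  if (style === "chat") {
    // Illustrative system message for the ChatGPT-style variant.
    messages.push({ role: "system", content: "You are a helpful assistant." });
  }
  messages.push({ role: "user", content: text });
  return messages;
}
```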
I pushed a fix for the above. This should be enough unless I missed something else.
Changes: