Adding support for GPT-4o and GPT-4-turbo. #32

Merged 14 commits on May 25, 2024
36 changes: 21 additions & 15 deletions README.md
@@ -1,19 +1,25 @@
# GPT-Prompter
Open-source browser extension to get fast, customizable prompts (e.g. explanations) for the selected text, or to chat, using OpenAI's GPT-3 model. Available in the Chrome/Firefox web store.

To use it, follow these steps:

1- Add the extension from the Chrome Web Store at https://chrome.google.com/webstore/detail/gpt-prompter/lcgpdbficokndjodlcgflbhaibicompp

2- If you don't have one already, make a profile at OpenAI https://beta.openai.com/

3- From https://beta.openai.com/account/api-keys you can copy your API key and paste it into the Chrome extension (we never have access to your API key; it is stored in your Chrome profile).

4- (Optional) Select text on a website.

5- From the context menu (right-click), select the GPT-Prompter command to send the custom prompt and open the popup.

6- The default prompt is "Tell me more about <selected text>", but you can create any prompt you want; the only constraint is that it must contain a field where the selected text is inserted.

### Update (05/10/2024) - Differences from [giosilvi's GPT-Prompter](https://github.com/giosilvi/GPT-Prompter):
- Added support for GPT-4-Turbo.
- Updated prompt list with Anki prompts.
- No support for Instruct models at present.
- Live response streaming is back.

### Installation Instructions:
1. `git clone https://github.com/wz-ml/GPT-Prompter.git`
2. `cd` into the repository and type `npm run build`.
3. A new .zip file should appear in /zips. Unzip it!
4. Go to `chrome://extensions`. If you use Brave or another Chromium-based browser, replace the prefix with the name of your browser (e.g. `brave://extensions`).
5. Click "Load unpacked" and select your newly unzipped folder:

![image](https://github.com/wz-ml/GPT-Prompter/assets/38673735/2e780b92-c77a-45bd-b508-6342de52850a)

### Usage:
1. Make a profile at [OpenAI](https://beta.openai.com/), if you haven't already.
2. From https://beta.openai.com/account/api-keys you can copy your API key and paste it into the Chrome extension (we never have access to your API key; it is stored in your Chrome profile).
3. (Optional) Select text on a website.
4. From the context menu (right-click), select the GPT-Prompter command to send the custom prompt and open the popup.
5. The default prompt is "Tell me more about <selected text>", but you can create any prompt you want; the only constraint is that it must contain a field where the selected text is inserted.

![Example of a chat conversation](ChatExample.png)
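The `#TEXT#` placeholder mechanics described in step 5 can be sketched as follows. This is a minimal illustration assuming the chat-style prompt format used elsewhere in this PR; `fillPrompt` is a hypothetical helper, not a function in the extension:

```javascript
// Hypothetical helper: a chat prompt is stored as a JSON string of
// messages, and every "#TEXT#" placeholder is replaced with the text
// currently selected on the page.
function fillPrompt(promptJson, selectedText) {
  const messages = JSON.parse(promptJson);
  for (const msg of messages) {
    msg.content = msg.content.replaceAll("#TEXT#", selectedText);
  }
  return messages;
}

const template = JSON.stringify([
  { role: "user", content: "Tell me more about #TEXT#:" },
]);
const filled = fillPrompt(template, "transformers");
// filled[0].content is now "Tell me more about transformers:"
```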
848 changes: 444 additions & 404 deletions package-lock.json

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions package.json
@@ -20,8 +20,8 @@
"css-loader": "^6.7.3",
"html-webpack-plugin": "^5.5.0",
"style-loader": "^3.3.2",
"webpack": "^5.76.2",
"webpack-cli": "^5.0.1",
"webpack": "^5.91.0",
"webpack-cli": "^5.1.4",
"zip-webpack-plugin": "^4.0.1"
}
}
45 changes: 32 additions & 13 deletions src/background.js
@@ -1,6 +1,9 @@
import promptGPT3Prompting from "./gpt3.js";
import symbolFromModel from "./sharedfunctions.js";
const std_model = "gpt-3.5-turbo";
import {CHAT_API_MODELS} from "./gpt3.js";

const std_model = "gpt-4-turbo";


// FUNCTIONS DECLARATION
async function checkGPT(apikey) {
@@ -105,7 +108,7 @@ function passTitleOrPrompt(customprompt, symbol) {
return `${symbol} ${customprompt.title.replaceAll("#TEXT#", "%s")}`;
} else {
// if customprompt does not contain a title return the prompt
if (customprompt.model === "gpt-3.5-turbo" || customprompt.model === "gpt-4") {
if (customprompt.model in CHAT_API_MODELS) {
// if it is, json parse the prompt
const prompt = JSON.parse(customprompt.prompt);
// get the last element of the prompt
@@ -137,7 +140,7 @@ chrome.runtime.onInstalled.addListener(function (details) {
for (var i = 0; i < items.customprompt.length; i++) {
// modify each one of them to become a dictionary
items.customprompt[i] = {
model: "gpt-3.5-turbo-instruct",
model: "gpt-4-turbo",
temperature: 0.1,
max_tokens: 1024,
prompt: items.customprompt[i],
@@ -149,21 +152,35 @@ chrome.runtime.onInstalled.addListener(function (details) {
// if the prompt does not exist, create the default one
items.customprompt = [
{
model: "gpt-3.5-turbo-instruct",
model: "gpt-4-turbo",
temperature: 0.1,
max_tokens: 1024,
prompt: "Tell me more about #TEXT# :",
max_tokens: 4096,
prompt: JSON.stringify([{"role":"user", "content":"Try not to use headings. Tell me more about #TEXT#:"}]),
twoStage: false,
},
{
model: "gpt-4-turbo",
temperature: 0.1,
max_tokens: 4096,
prompt: JSON.stringify([{"role":"user", "content":"Please create an Anki card for: #TEXT#:"}]),
twoStage: false,
},
{
model: "gpt-3.5-turbo-instruct",
model: "gpt-4-turbo",
temperature: 0.1,
max_tokens: 4096,
prompt: JSON.stringify([{"role":"user", "content":"Please create an Anki card for the concept below. Explain any intuitions and be sure to include formulas if necessary: #TEXT#"}]),
twoStage: false,
},
{
model: "gpt-4-turbo",
temperature: 0.1,
max_tokens: 1024,
prompt:
'Answer the question as truthfully as possible using the provided text, and if the answer is not contained within the text below, say "I don\'t know" \nContext:\n#TEXT# \n\nQ:',
title: "Two-stage Q&&A",
JSON.stringify([{"role":"user", "content":"Answer the question as truthfully as possible using the provided text, and if the answer is not contained within the text below, say 'I don\'t know' \nContext:\n#TEXT# \n\nQ:"}]),
title: "Two-stage Q&A",
twoStage: true,
},
}
];
}
// save the newPromptList
@@ -198,6 +215,7 @@ chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
else if (message.text === "launchGPT") {
// Get the tab from the sender
const { tab } = sender; // this line is equivalent to const tab = sender.tab;
console.log("Received prompt object.");
console.log(typeof message.prompt.prompt, message.prompt);
// Launch GPT
chrome.storage.sync.get("APIKEY", function (items) {
@@ -379,12 +397,12 @@ chrome.contextMenus.onClicked.addListener(async (info, tabs) => {
const prompt = items.customprompt[promptNumber];
// Update the prompt text with the selected text, if there is any
var parsedPrompt = "";
if (prompt.model == "gpt-3.5-turbo" || prompt.model === "gpt-4") {
if (prompt.model in CHAT_API_MODELS) {
parsedPrompt = JSON.parse(prompt.prompt);
prompt.prompt = parsedPrompt;
}
if (info.selectionText) {
if (prompt.model == "gpt-3.5-turbo" || prompt.model === "gpt-4") {
if (prompt.model in CHAT_API_MODELS) {
// loop over the prompt and replace the placeholder
for (var i = 0; i < parsedPrompt.length; i++) {
if (parsedPrompt[i]["content"].includes("#TEXT#")) {
@@ -405,11 +423,12 @@ chrome.contextMenus.onClicked.addListener(async (info, tabs) => {
chrome.storage.sync.get("APIKEY", function (items) {
// Launch the prompt
(async () => {
console.log("PROMPT:",prompt, items, tabs);
await promptGPT3Prompting(prompt, items, tabs);
})();
});
} else {
if (prompt.model == "gpt-3.5-turbo" || prompt.model === "gpt-4") {
if (prompt.model in CHAT_API_MODELS) {
console.log("Chat GPT", prompt);
launchPopUpInPage(prompt.prompt, prompt, "showPopUpChatGPT");
} else {
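The recurring change in this file swaps hardcoded comparisons such as `prompt.model == "gpt-3.5-turbo" || prompt.model === "gpt-4"` for a membership test against `CHAT_API_MODELS`. A minimal sketch of the pattern (table contents copied from the diff; `isChatModel` is an illustrative wrapper, not a function in the codebase):

```javascript
// Chat-capable models live in one lookup table, so adding a new model
// (e.g. gpt-4o in this PR) is a one-line change instead of editing
// every comparison site.
const CHAT_API_MODELS = {
  "gpt-4": true,
  "gpt-3.5-turbo": true,
  "gpt-4-turbo": true,
  "gpt-4o": true,
};

// Illustrative wrapper around the `model in CHAT_API_MODELS` test.
function isChatModel(model) {
  return model in CHAT_API_MODELS;
}
```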
65 changes: 58 additions & 7 deletions src/content.js
@@ -132,28 +132,74 @@ function checkIdPopup(id) {
return id === undefined || id === -1 ? popUpShadow.ids : parseInt(id);
}

const buffers = {};

function handleDataChunk(uuid, dataChunk, request) {
// Initialize buffer for this uuid if not already present
buffers[uuid] = buffers[uuid] || "";

// Datachunk is a sequence of newline-separated JSON objects, but may be split across multiple chunks
var new_objects_added = 0;
for (const line of dataChunk.split("\n")) {
// Append the new chunk to the appropriate buffer
const parsed_line = line.replace(/^data: /, "");
if (parsed_line.length != 0) {
buffers[uuid] += parsed_line + "\n";
new_objects_added += 1;
// console.log("Newly added:", line.replace(/^data: /, ""));
}
}
// console.log("Current buffer:", buffers[uuid]);

for (let i = 0; i < new_objects_added; i++) {
// Attempt to find a complete JSON object in the buffer
const endOfObjectPos = buffers[uuid].indexOf('}\n{');
if (endOfObjectPos !== -1) {
// Extract the complete JSON object from the buffer
const completeJsonObjectStr = buffers[uuid].substring(0, endOfObjectPos + 1);

// Process the complete JSON object
processJsonObject(completeJsonObjectStr, uuid, request);

// Remove the processed data from the buffer
buffers[uuid] = buffers[uuid].substring(endOfObjectPos + 2);
}
}
if (buffers[uuid].includes("[DONE]")) {
processJsonObject("[DONE]", uuid, request);
}
}
function sendStopSignal(request,uuid) {
console.log(`Sending stop signal for ${uuid}`);
popUpShadow.updatepopup(request, uuid, false);
}

function processJsonObject(jsonStr, uuid, request) {
// console.log("jsonStr:", jsonStr, uuid, request);
try {
// Check for the [DONE] marker
if (jsonStr === "[DONE]") {
console.log("Received [DONE] marker for", uuid);
popUpShadow.updatepopup(request, uuid, false);
return;
}

// Otherwise, parse and process the JSON object
// console.log("About to process JSON.");
const jsonObject = JSON.parse(jsonStr);
// console.log(`Processing JSON object for ${uuid}:`, jsonObject);

// Check for an error property in the JSON object
if (jsonObject.error) {
console.log(`Error found for ${uuid}:`, jsonObject.error);
popUpShadow.updatepopup(jsonObject, uuid, true);
return;
}
// if (jsonObject.error) {
// console.log(`Error found for ${uuid}:`, jsonObject.error);
// popUpShadow.updatepopup(jsonObject, uuid, true);
// return;
// }

popUpShadow.updatepopup(jsonObject, uuid, true); // Assuming uuid maps to idPopup

// Once a valid JSON object has been processed, send a stop signal
sendStopSignal(request,uuid);
// sendStopSignal(request,uuid);

} catch (e) {
console.error("Failed to parse JSON object:", e);
@@ -162,6 +208,7 @@ function processJsonObject(jsonStr, uuid, request) {


chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
// console.log("Full request:", request);
if (request.greeting === "shouldReenableContextMenu") {
sendResponse({ farewell: "yes" });
return;
@@ -202,14 +249,18 @@ chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
break;
case "GPTStream_completion":
try {
// console.log("Request:", request);
// console.log(popUpShadow.stop_stream, popUpShadow.listOfUndesiredStreams);
if (popUpShadow.stop_stream && !popUpShadow.listOfUndesiredStreams.includes(request.uuid)) {
console.log("Stop stream with uuid", request.uuid);
popUpShadow.listOfUndesiredStreams.push(request.uuid);
delete buffers[request.uuid]; // Clear the buffer for this stream
popUpShadow.stop_stream = false;
popUpShadow.clearnewlines = true;
}
if (!popUpShadow.listOfUndesiredStreams.includes(request.uuid)) {
processJsonObject(request.text,idPopup, request);
handleDataChunk(request.uuid, request.text, request);
// processJsonObject(request.text,idPopup, request);
}
} catch (e) {
console.error(e);
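The buffering added in `handleDataChunk` handles the fact that a streamed network read can end in the middle of a JSON object, so objects are only parsed once a complete one is present. A condensed sketch of that idea (`feedChunk` and `onObject` are illustrative names; the real code also strips `data: ` prefixes and handles the `[DONE]` marker):

```javascript
// Per-stream buffers, keyed by the stream's uuid.
const buffers = {};

// Accumulate a raw chunk for one stream and invoke onObject for each
// complete newline-delimited JSON object found so far. The boundary
// "}\n{" marks the end of one object and the start of the next.
function feedChunk(uuid, chunk, onObject) {
  buffers[uuid] = (buffers[uuid] || "") + chunk;
  let pos;
  while ((pos = buffers[uuid].indexOf("}\n{")) !== -1) {
    // Everything up to and including the closing brace is one object.
    onObject(JSON.parse(buffers[uuid].slice(0, pos + 1)));
    // Drop the processed object and its trailing newline.
    buffers[uuid] = buffers[uuid].slice(pos + 2);
  }
}
```

Note that, as in the diff, the final object stays in the buffer until a later chunk (or a terminator) arrives, since no `}\n{` boundary follows it yet.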
31 changes: 24 additions & 7 deletions src/gpt3.js
@@ -1,8 +1,16 @@
import GPT3Tokenizer from "gpt3-tokenizer";

const tokenizer = new GPT3Tokenizer({ type: "gpt3" });
export const CHAT_API_MODELS = {
"gpt-4": true,
"gpt-3.5-turbo": true,
"gpt-4-turbo": true,
"gpt-4o": true
};

var MaxTokensPerModel = {
"gpt-4o": 4000,
"gpt-4-turbo": 4096,
"gpt-4": 8000,
"gpt-3.5-turbo": 4000,
"gpt-3.5-turbo-instruct": 4000,
@@ -13,9 +21,14 @@ var MaxTokensPerModel = {
"text-ada-001": 2000
};

const DECOUPLED_INPUT_OUTPUT_LENGTH_MODELS = {
"gpt-4-turbo": true,
"gpt-4o": true
};

function checkMaxTokens(content, model) {
var tokens = 0;
if (model == "gpt-4" || model == "gpt-3.5-turbo") {
if (model in CHAT_API_MODELS) {
// check the tokens in the text, for each "content" key
// var content = JSON.parse(text);
for (var i = 0; i < content.length; i++) {
@@ -29,6 +42,10 @@
tokens = countTokens(content, model);
}
var maxTokens = MaxTokensPerModel[model] - tokens;
if (model in DECOUPLED_INPUT_OUTPUT_LENGTH_MODELS) {
maxTokens = MaxTokensPerModel[model];
}
console.log("model", model, "maxTokens", maxTokens, "tokens", tokens);
return { maxTokens, tokens };
}

Expand Down Expand Up @@ -73,7 +90,7 @@ async function promptGPT3Prompting(prompt, items, tabs) {
var text = prompt["prompt"];
var model = prompt["model"];
// if the model is gpt-4 or gpt-3.5-turbo, we need to check that the text is a valid json
if (model == "gpt-4" || model == "gpt-3.5-turbo") {
if (model in CHAT_API_MODELS) {
console.log('Check',typeof text)
if (typeof text !== "object")
{text = [{"role": "user", "content": text}];}
@@ -89,7 +106,7 @@
var uuid = Math.random().toString(36).substring(2, 15) + Math.random().toString(36).substring(2, 15);
//send immediately text to the content script
var { url, str_bodyData, bodyData, tokens } = chooseCompletion(model, temperature, text);
console.log(url, str_bodyData, tokens);
console.log("Debug1", url, str_bodyData, tokens);
fetch(url, {
method: "POST",
headers: {
@@ -112,7 +129,7 @@
return;
}
// Enqueue the next data chunk into our target stream
// console.log(value);
console.log(value);
var stream = new TextDecoder().decode(value); //.substring(6);
// console.log(string, typeof string);
// if tabs.id == -1 then use querySelector to get the tab
@@ -133,14 +150,14 @@ function chooseCompletion(model, temperature, text) {
var { maxTokens, tokens } = checkMaxTokens(text, model);
var url = "";

if (model == "gpt-3.5-turbo" || model === "gpt-4") {
if (model in CHAT_API_MODELS) {
url = "https://api.openai.com/v1/chat/completions";
var bodyData = {
model: model,
temperature: temperature,
max_tokens: maxTokens,
messages: text,
stream: false,
stream: true,
};
} else {
url = "https://api.openai.com/v1/completions";
@@ -149,7 +166,7 @@
temperature: temperature,
max_tokens: maxTokens,
prompt: text,
stream: false,
stream: true,
};
}
var str_bodyData = JSON.stringify(bodyData);
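The `chooseCompletion` changes route chat models to the chat endpoint, legacy models to the completions endpoint, and set `stream: true` in both branches. A condensed sketch of that logic (`buildRequest` is an illustrative helper combining the diff's branches, not a function in the codebase):

```javascript
// Same lookup table as in src/gpt3.js (copied from the diff).
const CHAT_API_MODELS = {
  "gpt-4": true,
  "gpt-3.5-turbo": true,
  "gpt-4-turbo": true,
  "gpt-4o": true,
};

// Illustrative helper: pick the endpoint and request body for a model.
// Chat models take a messages array; legacy models take a plain prompt.
function buildRequest(model, temperature, text, maxTokens) {
  const isChat = model in CHAT_API_MODELS;
  const url = isChat
    ? "https://api.openai.com/v1/chat/completions"
    : "https://api.openai.com/v1/completions";
  const body = {
    model: model,
    temperature: temperature,
    max_tokens: maxTokens,
    stream: true, // live response streaming, re-enabled by this PR
  };
  if (isChat) {
    body.messages = text;
  } else {
    body.prompt = text;
  }
  return { url: url, body: body };
}
```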
5 changes: 4 additions & 1 deletion src/history.js
@@ -4,6 +4,9 @@
* @param {object} items - The list of items to create the list from.
* @return {string} - The HTML list.
*/

import {CHAT_API_MODELS} from "./gpt3.js";

function makeHistoryList(items) {
// create empty list and total cost variables
var list = "";
@@ -46,7 +49,7 @@ function createListItem(item, index) {
// add prompt key and value to prompt content string
// if prompt can be parsed as JSON, add it as a string
// otherwise, add it as a string
if (prompt["model"] == "gpt-3.5-turbo" || prompt["model"] === "gpt-4") {
if (prompt["model"] in CHAT_API_MODELS) {
console.log(prompt["messages"]);
promptContent += `<strong>prompt:</strong> ${JSON.stringify(prompt["messages"])}<br>`;
} else {
2 changes: 1 addition & 1 deletion src/manifest.json
@@ -1,6 +1,6 @@
{
"name": "GPT-Prompter",
"version": "0.0.4.0",
"version": "0.0.4.3",
"description": "Fast custom prompts to GPT-3, GPT-4 and ChatGPT API",
"manifest_version": 3,
"icons": {