Fix Typos and Grammatical Improvements (#209)
Co-authored-by: Vasek Mlejnsky <vasek.mlejnsky@gmail.com>
tmsagarofficial and mlejva authored Oct 31, 2023
1 parent f7aabdb commit 849e641
Showing 11 changed files with 40 additions and 40 deletions.
2 changes: 1 addition & 1 deletion apps/docs/src/app/api/cli/page.tsx
@@ -85,7 +85,7 @@ export default function Page() {
return (
<div>
{/* It's not easy to override RootLayout without grouping everything into `(root)` dir */}
{/* So I'm hacking custom layout with full modal overlay */}
{/* So I'm hacking a custom layout with full modal overlay */}
{/* https://github.com/vercel/next.js/issues/50591 */}
<DialogAnimated
open={true}
34 changes: 17 additions & 17 deletions apps/docs/src/app/guide/simple-gpt4-code-interpreter/page.mdx
@@ -26,18 +26,18 @@ export const sections = [

# Build Custom Code Interpreter with E2B and GPT-4

This is an example how to build a custom simple code interpreter that can execute JavaScript using [OpenAI GPT-4](https://platform.openai.com/docs/models) and [E2B](https://e2b.dev). {{ className: 'lead' }}
This is an example of how to build a custom simple code interpreter that can execute JavaScript using [OpenAI GPT-4](https://platform.openai.com/docs/models) and [E2B](https://e2b.dev). {{ className: 'lead' }}

## What is Code Interpreter?
## What is a Code Interpreter?

[Code Interpreter](https://openai.com/blog/chatgpt-plugins?ref=blog.roboflow.com#code-interpreter) is a [ChatGPT plugin](https://platform.openai.com/docs/plugins/introduction) released by OpenAI that gives ChatGPT capabilites to run code.
[Code Interpreter](https://openai.com/blog/chatgpt-plugins?ref=blog.roboflow.com#code-interpreter) is a [ChatGPT plugin](https://platform.openai.com/docs/plugins/introduction) released by OpenAI that gives ChatGPT capabilities to run code.

This guide will show you how to build your own custom code interpreter that can execute JavaScript code using E2B and GPT-4. {{ className: 'lead' }}

You can find the final code for both Python and JavaScript in [this GitHub repository](https://github.com/e2b-dev/e2b-cookbook/tree/main). {{ className: 'lead' }}

## Why use E2B?
E2B allows you to execute the GPT-generated code in a sandboxed cloud environment. This means you can run the code without worrying about security issues and potential harm the generated code can make to your machine. {{ className: 'lead' }}
E2B allows you to execute the GPT-generated code in a sandboxed cloud environment. This means you can run the code without worrying about security issues and the potential harm the generated code can cause to your machine. {{ className: 'lead' }}

## Install E2B and OpenAI

@@ -94,7 +94,7 @@ const chatCompletion = await openai.chat.completions.create({
messages: [
{
role: 'system',
content: 'You are a senior developer that can code in JavaScript. Always produce valid JSON.',
content: 'You are a senior developer who can code in JavaScript. Always produce valid JSON.',
}
],
})
@@ -106,7 +106,7 @@ response = openai.ChatCompletion.create(
messages=[
{
"role": "system",
"content": "You are a senior developer that can code in JavaScript. Always produce valid JSON."
"content": "You are a senior developer who can code in JavaScript. Always produce valid JSON."
},
]
)
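
For context, the collapsed Python snippet above boils down to a call like this. A self-contained sketch, assuming the 2023-era openai-python 0.x API and an `OPENAI_API_KEY` environment variable:

```python
# A self-contained sketch of the call, assuming the 2023-era
# openai-python 0.x API; the API key is read from OPENAI_API_KEY.
import openai

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You are a senior developer who can code in JavaScript. Always produce valid JSON.",
        },
    ],
)
print(response["choices"][0]["message"])
```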
@@ -141,7 +141,7 @@ If you print the response, you should see something like this:

## Prepare OpenAI functions

We're going to take an advantage of the [OpenAI functions](https://openai.com/blog/function-calling-and-other-api-updates). Defining these functions will make it easier instructing the model that it can write JavaScript code to complete request from the user. We'll execute this code later using E2B.
We're going to take advantage of the [OpenAI functions](https://openai.com/blog/function-calling-and-other-api-updates). Defining these functions will make it easier to instruct the model that it can write JavaScript code to complete requests from the user. We'll execute this code later using E2B.

<CodeGroup isRunnable={false}>
```js
@@ -185,7 +185,7 @@ functions = [

We created an OpenAI function `exec_code` that expects a single parameter `code`. The `code` parameter will be the JavaScript code generated by GPT that we'll execute.
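
The `functions` list itself is collapsed in this diff. Based on the description above, a minimal version might look like the following sketch; the description strings are illustrative, not the guide's exact wording:

```python
# A minimal sketch of the exec_code function schema described above.
# The description strings are illustrative, not the guide's exact wording.
functions = [
    {
        "name": "exec_code",
        "description": "Execute JavaScript code and return the output.",
        "parameters": {
            "type": "object",
            "properties": {
                "code": {
                    "type": "string",
                    "description": "The JavaScript code to execute.",
                },
            },
            "required": ["code"],
        },
    },
]
```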

Now we pass the `functions` variable to the GPT call we made earlier and also add a few messages to show model how to use the `exec_code` function.
Now we pass the `functions` variable to the GPT call we made earlier and also add a few messages to show the model how to use the `exec_code` function.
The new code is marked by the highlighted lines.

<CodeGroup isRunnable={false}>
@@ -195,7 +195,7 @@ const chatCompletion = await openai.chat.completions.create({
messages: [
{
role: 'system',
content: 'You are a senior developer that can code in JavaScript. Always produce valid JSON.',
content: 'You are a senior developer who can code in JavaScript. Always produce valid JSON.',
},
{ // $HighlightLine
role: 'user', // $HighlightLine
@@ -222,14 +222,14 @@ response = openai.ChatCompletion.create(
{"role": "system", "content": "You are a senior developer that can code in JavaScript. Always produce valid JSON."},
{"role": "user", "content": "Write hello world"}, # $HighlightLine
{"role": "assistant", "content": "print(\"hello world\")", "name":"exec_code"}, # $HighlightLine
{"role": "user", "content": "Generate first 100 fibonacci numbers"}, # $HighlightLine
{"role": "user", "content": "Generate first 100 Fibonacci numbers"}, # $HighlightLine
],
functions=functions, # $HighlightLine
)
```
</CodeGroup>

If you print the GPT response now, you'll most likely see the model is calling the `exec_code` function we defined earlier and is passing code for generating fibonacci numbers in the first element of the `choices` JSON array.
If you print the GPT response now, you'll most likely see the model is calling the `exec_code` function we defined earlier and is passing code for generating Fibonacci numbers in the first element of the `choices` JSON array.

```json
{
@@ -278,7 +278,7 @@ if (func) {
// TODO: Execute the code using E2B.
}
} else {
// The model didn't call a function, so we just print the message.
// The model didn't call a function, so we just printed the message.
const content = message["content"];
console.log(content);
}
@@ -302,7 +302,7 @@ if (func := message.get("function_call")):
code = func_args["code"]
# TODO: Execute the code using E2B.
else:
# The model didn't call a function, so we just print the message.
# The model didn't call a function, so we just printed the message.
content = message["content"]
print(content)
```
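
For reference, the collapsed hunks above amount to roughly this self-contained parsing step, assuming the 2023-era ChatCompletion response shape in which `function_call.arguments` arrives as a JSON string:

```python
import json

# `response` is the ChatCompletion result from the call made earlier.
message = response["choices"][0]["message"]

if (func := message.get("function_call")):
    func_name = func["name"]
    # The arguments arrive as a JSON string, so parse them first.
    func_args = json.loads(func["arguments"])
    code = func_args["code"]
    # TODO: Execute the code using E2B.
else:
    # The model didn't call a function, so we just print the message.
    content = message["content"]
    print(content)
```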
@@ -313,7 +313,7 @@ else:
## Run GPT-generated code with E2B

It's time to actually run the code generated by GPT. We'll be using the `e2b.runCode`/`e2b.run_code` to execute the code in E2B's sandboxed playground.
All we need to add is just a single line of code. I hgihtlighted it in the code snippets below.
All we need to add is just a single line of code. I highlighted it in the code snippets below.



@@ -347,7 +347,7 @@ if func_name == "exec_code":
```
</CodeGroup>
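
Continuing the parsing sketch from earlier, the added line amounts to something like this. The `run_code` name comes from the guide; the `("Node16", code)` signature, the runtime id, and the `(stdout, stderr)` return tuple are assumptions about the 2023-era e2b Python SDK:

```python
import e2b

if func_name == "exec_code":
    # Run the LLM-generated JavaScript in E2B's sandboxed cloud VM.
    # "Node16" as the runtime id and the (stdout, stderr) return
    # tuple are assumptions about the 2023-era SDK.
    stdout, stderr = e2b.run_code("Node16", code)
    print(stdout)
    if stderr:
        print(stderr)
```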

And this what the executed code for "Generate first 100 fibonacci numbers" prompt prints:
And this is what the executed code for the "Generate first 100 Fibonacci numbers" prompt prints:
```
[
0, 1, 1,
@@ -389,10 +389,10 @@ And this what the executed code for "Generate first 100 fibonacci numbers" promp


{/* ## Upgrade: Let GPT install missing libraries
The current version of our code interpreter has one big flaw. It doesn't allow to install missing libraries. If GPT decided to generate code that uses a library that is not installed, the code will fail to execute.
The current version of our code interpreter has one big flaw. It doesn't allow to install missing libraries. If GPT decides to generate code that uses a library that is not installed, the code will fail to execute.
We're going to solve this problem by letting GPT install missing libraries by adding a new OpenAI function called `install_package`.
This is how our new `functions` variable looks like. The highlighted lines are the new code.
This is what our new `functions` variable looks like. The highlighted lines are the new code.
<CodeGroup isRunnable={false}>
```js
6 changes: 3 additions & 3 deletions apps/docs/src/app/page.mdx
@@ -10,17 +10,17 @@ import { useSignIn } from '@/utils/useSignIn'

# Runtime Sandbox for Large Language Models

Sandboxes allow AI apps to have long running cloud environments where large language models can use the same tools as human do. {{ className: 'lead' }}
Sandboxes allow AI apps to have long running cloud environments where large language models can use the same tools as humans do. {{ className: 'lead' }}

## Use cases
Our Sandboxes are general purpose machines powered by Ubuntu. Most of the things your AI app or agent can do locally can also be done in our sandboxes.
Here's a few use cases:
Here are a few use cases:

- **AI data analysis**
- **Code interpreter**
- **Run LLM-generated code**
- **Coding copilots and agents**
- **AI powered browsers in cloud**
- **AI powered browsers in the cloud**

## Quick start

2 changes: 1 addition & 1 deletion apps/docs/src/app/pricing/page.mdx
@@ -3,4 +3,4 @@

We're going to offer both free and premium tiers in the future. Pricing will be based on usage of our [LLM sandboxes](/sandbox/overview) through the SDK.

Our goal is to offer a free tier for hobbyists and personal projects and a premium tier for companies or projects with high usage. Similarly to for example [Supabase](https://supabase.com/pricing) or [Vercel](https://vercel.com/pricing).
Our goal is to offer a free tier for hobbyists and personal projects and a premium tier for companies or projects with high usage. Similarly for example [Supabase](https://supabase.com/pricing) or [Vercel](https://vercel.com/pricing).
6 changes: 3 additions & 3 deletions apps/docs/src/app/sandbox/cwd/page.mdx
@@ -1,10 +1,10 @@
# Current Working Directory

You can set working directory either for the whole session, a filesystem operation, or a new process.
You can set a working directory either for the whole session, a filesystem operation, or a new process.

## Session

If the current working directory for the session is not set, it will default to home directory - `/home/user`.
If the current working directory for the session is not set, it will default to the home directory - `/home/user`.

<CodeGroupAutoload path="cwd/session" />

@@ -16,6 +16,6 @@ All filesystem operations with relative paths are relative to the current workin

## Process

If you set working directory for session, all processes will inherit it. You can override it for each process.
If you set a working directory for the session, all processes will inherit it. You can override it for each process.

<CodeGroupAutoload path="cwd/process"/>
4 changes: 2 additions & 2 deletions apps/docs/src/app/sandbox/download/page.mdx
@@ -1,9 +1,9 @@
# Download files from sandbox
# Download files from the sandbox
Any file inside the sandbox can be downloaded using the `downloadFile`/`download_file` method.

## Use case for downloading files
For example, the download file method is useful for downloading any files produced by the LLM. You can let the LLM generate and execute code inside the sandbox.
This LLM-generated code might produce some files like charts, CSV files, or PDF file that you want to download to your machine.
This LLM-generated code might produce some files like charts, CSV files, or PDF files that you want to download to your machine.

<CodeGroupAutoload path="basics/download_file" isRunnable={false}/>

6 changes: 3 additions & 3 deletions apps/docs/src/app/sandbox/execute/page.mdx
@@ -1,11 +1,11 @@
# Code Execution

You can use E2B SDK to safely execute code generated byt LLM inside the sandbox. {{ className: 'lead' }}
You can use E2B SDK to safely execute code generated by LLM inside the sandbox. {{ className: 'lead' }}
There are 2 ways to run code with E2B SDK. {{ className: 'lead' }}

### 1. Use the `runCode`/`run_code` methods in JS and Python SDK to run the LLM-generated code {{ className: 'lead' }}

This approach is very stragithforward and you don't need to worry about managing the sandbox lifecycle or the process lifecycle.
This approach is very straightforward and you don't need to worry about managing the sandbox lifecycle or the process lifecycle.

<CodeGroupAutoload
path="code_exec/run_code"
@@ -14,7 +14,7 @@ This approach is very stragithforward and you don't need to worry about managing

### 2. Run the LLM-generated code using the low-level [session.process](/sandbox/process) module {{ className: 'lead' }}

This approach gives you more control about the underlying session and the running process. You can for example stream stdout and stderr, or kill the process that's running the code.
This approach gives you more control over the underlying session and the running process. You can for example stream stdout and stderr, or kill the process that's running the code.
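
The autoloaded example is collapsed here. A sketch of this lower-level approach, assuming the 2023-era SDK's `session.process.start` with `on_stdout`/`on_stderr` callbacks whose messages expose a `.line` attribute (the callback shapes are assumptions):

```python
from e2b import Session

session = Session.create(id="Nodejs")

# Put the LLM-generated code into the sandbox filesystem,
# then run it as a process and stream its output as it executes.
session.filesystem.write("/home/user/index.js", "console.log('Hello!')")
proc = session.process.start(
    "node /home/user/index.js",
    on_stdout=lambda output: print(output.line),
    on_stderr=lambda output: print(output.line),
)
proc.wait()  # block until the process exits; you could also kill it
session.close()
```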

<CodeGroupAutoload
path="code_exec/process"
8 changes: 4 additions & 4 deletions apps/docs/src/app/sandbox/overview/page.mdx
@@ -1,16 +1,16 @@
# Overview

The E2B's LLM sandboxes are isolated cloud environments for your AI apps. These LLM sandboxes are an ideal fit for building AI assistants like coding copilots, code interpreters, AI data analysts, AI browser assistant, and similar.
The E2B's LLM sandboxes are isolated cloud environments for your AI apps. These LLM sandboxes are an ideal fit for building AI assistants like coding copilots, code interpreters, AI data analysts, AI browser assistants, and similar.

Using the sandboxes allows for a secure way to run the unpredictable LLM-generated code and safe LLM tool usage without the potential harm to you and your users.

Additionally, each running instance of your AI app can have its own separate sandbox without you worrying about any infrastructure, networking, or security.

## How sandboxes work under the hood

When you create a new sandbox session, we start a small VM in our cloud. This VM is running a Ubuntu OS and itt takes about 400-600ms to start it.
When you create a new sandbox session, we start a small VM in our cloud. This VM is running a Ubuntu OS and it takes about 400-600ms to start it.

Inside this sanbdox, your AI app can run any code, start any program program, access the internet to download or upload data, use the filesystem, start a web server, and more.
Inside this sandbox, your AI app can run any code, start any program, access the internet to download or upload data, use the filesystem, start a web server, and more.

To start and control the LLM sandbox, use the [E2B SDK](/getting-started/installation) for Python or JavaScript.

@@ -22,4 +22,4 @@ To start and control the LLM sandbox, use the [E2B SDK](/getting-started/install
{/*
## Next steps
TODO */}
TODO */}
8 changes: 4 additions & 4 deletions apps/docs/src/app/sandbox/process/page.mdx
@@ -1,4 +1,4 @@
# Starting process inside sandbox
# Starting process inside a sandbox

Here are the basic operations you can do with the process inside an LLM sandbox:

@@ -25,7 +25,7 @@ Here are the basic operations you can do with the process inside a LLM sandbox:

## Stream stdout

Set either stdout handler for whole session level or per process.
Set either stdout handler for the whole session level or per process.

<CodeGroupAutoload
path="basics/process_stream_stdout"
@@ -34,7 +34,7 @@ Set either stdout handler for whole session level or per process.

## Stream stderr

Set either stderr handler for whole session level or per process.
Set either stderr handler for the whole session level or per process.

<CodeGroupAutoload
path="basics/process_stream_stderr"
@@ -43,7 +43,7 @@ Set either stderr handler for whole session level or per process.

## On process exit

Set either on exit handler for whole session level or per process.
Set either on exit handler for the whole session level or per process.
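
Taken together, the three sections above describe session-level handlers that every process inherits and per-process handlers that override them. A sketch, assuming the 2023-era SDK's keyword names (`on_stdout`, `on_exit`) and callback shapes:

```python
from e2b import Session

# Session-level handlers apply to every process started in the session.
session = Session.create(
    id="Nodejs",
    on_stdout=lambda output: print("session:", output.line),
    on_exit=lambda: print("a process exited"),
)

# A per-process handler overrides the session-level one for that process.
proc = session.process.start(
    "echo hello",
    on_stdout=lambda output: print("process:", output.line),
)
proc.wait()
session.close()
```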

<CodeGroupAutoload
path="basics/process_exit"
2 changes: 1 addition & 1 deletion apps/docs/src/app/sandbox/upload/page.mdx
@@ -5,7 +5,7 @@ You can upload any file to the sandbox. This is useful if you want to let the LL
A popular workflow is for example to upload a CSV data file and then let the LLM generate and execute Python code that operates with the uploaded CSV file inside the sandbox. This way, you can essentially create your own AI data analyst or code interpreter.

<Note>
We support uploading files up to the 100MB at the moment. If you need to upload larger files, please [reach out to us](/getting-help).
We support uploading files up to 100MB at the moment. If you need to upload larger files, please [reach out to us](/getting-help).
</Note>

<CodeGroupAutoload path="basics/upload_file" isRunnable={false}/>
2 changes: 1 addition & 1 deletion apps/docs/src/app/sandbox/url/page.mdx
@@ -6,5 +6,5 @@ For example, you can start a server inside the sandbox and connect to it from yo
<CodeGroupAutoload path="basics/get_url" isRunnable={true} />


If you want to get an URL for a specific port inside the sanbdox, pass the port number to the `getHostname()`/`get_hostname()` method.
If you want to get an URL for a specific port inside the sandbox, pass the port number to the `getHostname()`/`get_hostname()` method.
<CodeGroupAutoload path="basics/get_url_port" isRunnable={true} />

1 comment on commit 849e641

@vercel bot commented on 849e641 (Oct 31, 2023)


Successfully deployed to the following URLs:

e2b-docs – ./apps/docs

e2b-docs.vercel.app
e2b-docs-e2b.vercel.app
e2b-docs-git-main-e2b.vercel.app
