release: 4.28.5 #697

Merged
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "4.28.4"
".": "4.28.5"
}
27 changes: 27 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,32 @@
# Changelog

## 4.28.5 (2024-03-13)

Full Changelog: [v4.28.4...v4.28.5](https://github.com/openai/openai-node/compare/v4.28.4...v4.28.5)

### Bug Fixes

* **ChatCompletionStream:** abort on async iterator break and handle errors ([#699](https://github.com/openai/openai-node/issues/699)) ([ac417a2](https://github.com/openai/openai-node/commit/ac417a2db31919d2b52f2eb2e38f9c67a8f73254))
* **streaming:** correctly handle trailing new lines in byte chunks ([#708](https://github.com/openai/openai-node/issues/708)) ([4753be2](https://github.com/openai/openai-node/commit/4753be272b1d1dade7a769cf350b829fc639f36e))


### Chores

* **api:** update docs ([#703](https://github.com/openai/openai-node/issues/703)) ([e1db98b](https://github.com/openai/openai-node/commit/e1db98bef29d200e2e401e3f5d7b2db6839c7836))
* **docs:** mention install from git repo ([#700](https://github.com/openai/openai-node/issues/700)) ([c081bdb](https://github.com/openai/openai-node/commit/c081bdbb55585e63370496d324dc6f94d86424d1))
* fix error handler in readme ([#704](https://github.com/openai/openai-node/issues/704)) ([4ff790a](https://github.com/openai/openai-node/commit/4ff790a67cf876191e04ad0e369e447e080b78a7))
* **internal:** add explicit type annotation to decoder ([#712](https://github.com/openai/openai-node/issues/712)) ([d728e99](https://github.com/openai/openai-node/commit/d728e9923554e4c72c9efa3bd528561400d50ad8))
* **types:** fix accidental exposure of Buffer type to cloudflare ([#709](https://github.com/openai/openai-node/issues/709)) ([0323ecb](https://github.com/openai/openai-node/commit/0323ecb98ddbd8910fc5719c8bab5175b945d2ab))


### Documentation

* **contributing:** improve wording ([#696](https://github.com/openai/openai-node/issues/696)) ([940d569](https://github.com/openai/openai-node/commit/940d5695f4cacddbb58e3bfc50fec28c468c7e63))
* **readme:** fix https proxy example ([#705](https://github.com/openai/openai-node/issues/705)) ([d144789](https://github.com/openai/openai-node/commit/d1447890a556d37928b628f6449bb80de224d207))
* **readme:** fix typo in custom fetch implementation ([#698](https://github.com/openai/openai-node/issues/698)) ([64041fd](https://github.com/openai/openai-node/commit/64041fd33da569eccae64afe4e50ee803017b20b))
* remove extraneous --save and yarn install instructions ([#710](https://github.com/openai/openai-node/issues/710)) ([8ec216d](https://github.com/openai/openai-node/commit/8ec216d6b72ee4d67e26786f06c93af18d042117))
* use [@deprecated](https://github.com/deprecated) decorator for deprecated params ([#711](https://github.com/openai/openai-node/issues/711)) ([4688ef4](https://github.com/openai/openai-node/commit/4688ef4b36e9f383a3abf6cdb31d498163a7bb9e))

## 4.28.4 (2024-02-28)

Full Changelog: [v4.28.3...v4.28.4](https://github.com/openai/openai-node/compare/v4.28.3...v4.28.4)
8 changes: 4 additions & 4 deletions CONTRIBUTING.md
@@ -3,7 +3,7 @@
This repository uses [`yarn@v1`](https://classic.yarnpkg.com/lang/en/docs/install/#mac-stable).
Other package managers may work but are not officially supported for development.

To setup the repository, run:
To set up the repository, run:

```bash
yarn
@@ -42,7 +42,7 @@ If you’d like to use the repository from source, you can either install from g
To install via git:

```bash
npm install --save git+ssh://git@github.com:openai/openai-node.git
npm install git+ssh://git@github.com:openai/openai-node.git
```

Alternatively, to link a local copy of the repo:
@@ -65,7 +65,7 @@ pnpm link --global openai

## Running tests

Most tests will require you to [setup a mock server](https://github.com/stoplightio/prism) against the OpenAPI spec to run the tests.
Most tests require you to [set up a mock server](https://github.com/stoplightio/prism) against the OpenAPI spec to run the tests.

```bash
npx prism mock path/to/your/openapi.yml
@@ -99,7 +99,7 @@ the changes aren't made through the automated pipeline, you may want to make rel

### Publish with a GitHub workflow

You can release to package managers by using [the `Publish NPM` GitHub action](https://www.github.com/openai/openai-node/actions/workflows/publish-npm.yml). This will require a setup organization or repository secret to be set up.
You can release to package managers by using [the `Publish NPM` GitHub action](https://www.github.com/openai/openai-node/actions/workflows/publish-npm.yml). This requires an organization or repository secret to be set up.

### Publish manually

17 changes: 7 additions & 10 deletions README.md
@@ -11,17 +11,15 @@ To learn how to use the OpenAI API, check out our [API Reference](https://platfo
## Installation

```sh
npm install --save openai
# or
yarn add openai
npm install openai
```

You can import in Deno via:

<!-- x-release-please-start-version -->

```ts
import OpenAI from 'https://deno.land/x/openai@v4.28.4/mod.ts';
import OpenAI from 'https://deno.land/x/openai@v4.28.5/mod.ts';
```

<!-- x-release-please-end -->
@@ -274,7 +272,7 @@ a subclass of `APIError` will be thrown:
async function main() {
const job = await openai.fineTuning.jobs
.create({ model: 'gpt-3.5-turbo', training_file: 'file-abc123' })
.catch((err) => {
.catch(async (err) => {
if (err instanceof OpenAI.APIError) {
console.log(err.status); // 400
console.log(err.name); // BadRequestError
@@ -424,7 +422,7 @@ import OpenAI from 'openai';
```

To do the inverse, add `import "openai/shims/node"` (which does import polyfills).
This can also be useful if you are getting the wrong TypeScript types for `Response` more details [here](https://github.com/openai/openai-node/tree/master/src/_shims#readme).
This can also be useful if you are getting the wrong TypeScript types for `Response` ([more details](https://github.com/openai/openai-node/tree/master/src/_shims#readme)).

You may also provide a custom `fetch` function when instantiating the client,
which can be used to inspect or alter the `Request` or `Response` before/after each request:
@@ -434,7 +432,7 @@ import { fetch } from 'undici'; // as one example
import OpenAI from 'openai';

const client = new OpenAI({
fetch: async (url: RequestInfo, init?: RequestInfo): Promise<Response> => {
fetch: async (url: RequestInfo, init?: RequestInit): Promise<Response> => {
console.log('About to make a request', url, init);
const response = await fetch(url, init);
console.log('Got response', response);
@@ -455,7 +453,7 @@ If you would like to disable or customize this behavior, for example to use the
<!-- prettier-ignore -->
```ts
import http from 'http';
import HttpsProxyAgent from 'https-proxy-agent';
import { HttpsProxyAgent } from 'https-proxy-agent';

// Configure the default for all requests:
const openai = new OpenAI({
@@ -464,9 +462,8 @@ const openai = new OpenAI({

// Override per-request:
await openai.models.list({
baseURL: 'http://localhost:8080/test-api',
httpAgent: new http.Agent({ keepAlive: false }),
})
});
```

## Semantic Versioning
2 changes: 1 addition & 1 deletion build-deno
@@ -14,7 +14,7 @@ This is a build produced from https://github.com/openai/openai-node – please g
Usage:

\`\`\`ts
import OpenAI from "https://deno.land/x/openai@v4.28.4/mod.ts";
import OpenAI from "https://deno.land/x/openai@v4.28.5/mod.ts";

const client = new OpenAI();
\`\`\`
2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "openai",
"version": "4.28.4",
"version": "4.28.5",
"description": "The official TypeScript library for the OpenAI API",
"author": "OpenAI <support@openai.com>",
"types": "dist/index.d.ts",
53 changes: 52 additions & 1 deletion src/lib/ChatCompletionRunFunctions.test.ts
@@ -1,5 +1,5 @@
import OpenAI from 'openai';
import { OpenAIError } from 'openai/error';
import { OpenAIError, APIConnectionError } from 'openai/error';
import { PassThrough } from 'stream';
import {
ParsingToolFunction,
@@ -2207,6 +2207,7 @@ describe('resource completions', () => {
await listener.sanityCheck();
});
});

describe('stream', () => {
test('successful flow', async () => {
const { fetch, handleRequest } = mockStreamingChatCompletionFetch();
@@ -2273,5 +2274,55 @@ describe('resource completions', () => {
expect(listener.finalMessage).toEqual({ role: 'assistant', content: 'The weather is great today!' });
await listener.sanityCheck();
});
test('handles network errors', async () => {
const { fetch, handleRequest } = mockFetch();

const openai = new OpenAI({ apiKey: '...', fetch });

const stream = openai.beta.chat.completions.stream(
{
max_tokens: 1024,
model: 'gpt-3.5-turbo',
messages: [{ role: 'user', content: 'Say hello there!' }],
},
{ maxRetries: 0 },
);

handleRequest(async () => {
throw new Error('mock request error');
}).catch(() => {});

async function runStream() {
await stream.done();
}

await expect(runStream).rejects.toThrow(APIConnectionError);
});
test('handles network errors on async iterator', async () => {
const { fetch, handleRequest } = mockFetch();

const openai = new OpenAI({ apiKey: '...', fetch });

const stream = openai.beta.chat.completions.stream(
{
max_tokens: 1024,
model: 'gpt-3.5-turbo',
messages: [{ role: 'user', content: 'Say hello there!' }],
},
{ maxRetries: 0 },
);

handleRequest(async () => {
throw new Error('mock request error');
}).catch(() => {});

async function runStream() {
for await (const _event of stream) {
continue;
}
}

await expect(runStream).rejects.toThrow(APIConnectionError);
});
});
});
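The two new tests above assert that connection failures reject both `stream.done()` and the async iterator. For consumers, a minimal sketch of what this enables (the model, prompt, and error handling are illustrative, not part of this PR):

```ts
import OpenAI from 'openai';

const openai = new OpenAI();

async function main() {
  const stream = openai.beta.chat.completions.stream({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Say hello there!' }],
  });

  try {
    // With fix #699, breaking out of this loop aborts the request, and
    // network errors reject the pending read instead of hanging forever.
    for await (const chunk of stream) {
      process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
    }
  } catch (err) {
    if (err instanceof OpenAI.APIConnectionError) {
      console.error('Connection failed:', err.message);
    } else {
      throw err;
    }
  }
}

main();
```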
35 changes: 29 additions & 6 deletions src/lib/ChatCompletionStream.ts
@@ -210,13 +210,16 @@ export class ChatCompletionStream

[Symbol.asyncIterator](): AsyncIterator<ChatCompletionChunk> {
const pushQueue: ChatCompletionChunk[] = [];
const readQueue: ((chunk: ChatCompletionChunk | undefined) => void)[] = [];
const readQueue: {
resolve: (chunk: ChatCompletionChunk | undefined) => void;
reject: (err: unknown) => void;
}[] = [];
let done = false;

this.on('chunk', (chunk) => {
const reader = readQueue.shift();
if (reader) {
reader(chunk);
reader.resolve(chunk);
} else {
pushQueue.push(chunk);
}
@@ -225,7 +228,23 @@
this.on('end', () => {
done = true;
for (const reader of readQueue) {
reader(undefined);
reader.resolve(undefined);
}
readQueue.length = 0;
});

this.on('abort', (err) => {
done = true;
for (const reader of readQueue) {
reader.reject(err);
}
readQueue.length = 0;
});

this.on('error', (err) => {
done = true;
for (const reader of readQueue) {
reader.reject(err);
}
readQueue.length = 0;
});
@@ -236,13 +255,17 @@
if (done) {
return { value: undefined, done: true };
}
return new Promise<ChatCompletionChunk | undefined>((resolve) => readQueue.push(resolve)).then(
(chunk) => (chunk ? { value: chunk, done: false } : { value: undefined, done: true }),
);
return new Promise<ChatCompletionChunk | undefined>((resolve, reject) =>
readQueue.push({ resolve, reject }),
).then((chunk) => (chunk ? { value: chunk, done: false } : { value: undefined, done: true }));
}
const chunk = pushQueue.shift()!;
return { value: chunk, done: false };
},
return: async () => {
this.abort();
return { value: undefined, done: true };
},
};
}
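The diff above pairs each pending `next()` call with `{ resolve, reject }` callbacks so the `abort` and `error` events can fail in-flight reads, and adds a `return()` handler so that `break`-ing out of a `for await` loop aborts the underlying request. A self-contained sketch of the same push/pull queue pattern, assuming a plain Node `EventEmitter` as the event source (the names and the `cancel` event are illustrative; the SDK wires this to its own stream internals):

```ts
import { EventEmitter } from 'events';

type Reader<T> = {
  resolve: (chunk: T | undefined) => void;
  reject: (err: unknown) => void;
};

function toAsyncIterable<T>(emitter: EventEmitter): AsyncIterable<T> {
  const pushQueue: T[] = [];
  const readQueue: Reader<T>[] = [];
  let done = false;

  // Chunks either satisfy a waiting reader or are buffered for later.
  emitter.on('chunk', (chunk: T) => {
    const reader = readQueue.shift();
    if (reader) reader.resolve(chunk);
    else pushQueue.push(chunk);
  });

  // Normal completion resolves pending reads with `undefined`, ending iteration.
  emitter.on('end', () => {
    done = true;
    for (const reader of readQueue) reader.resolve(undefined);
    readQueue.length = 0;
  });

  // Errors reject pending reads, so `for await` loops throw instead of hanging.
  emitter.on('error', (err: unknown) => {
    done = true;
    for (const reader of readQueue) reader.reject(err);
    readQueue.length = 0;
  });

  return {
    [Symbol.asyncIterator](): AsyncIterator<T> {
      return {
        async next(): Promise<IteratorResult<T>> {
          if (pushQueue.length) return { value: pushQueue.shift()!, done: false };
          if (done) return { value: undefined, done: true };
          const chunk = await new Promise<T | undefined>((resolve, reject) =>
            readQueue.push({ resolve, reject }),
          );
          return chunk === undefined ?
            { value: undefined, done: true }
          : { value: chunk, done: false };
        },
        // `break`/`return` in a consumer loop lands here; signal cancellation
        // upstream (the SDK calls `this.abort()` at this point).
        async return(): Promise<IteratorResult<T>> {
          emitter.emit('cancel');
          return { value: undefined, done: true };
        },
      };
    },
  };
}
```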

9 changes: 3 additions & 6 deletions src/resources/audio/speech.ts
@@ -35,13 +35,10 @@ export interface SpeechCreateParams {
voice: 'alloy' | 'echo' | 'fable' | 'onyx' | 'nova' | 'shimmer';

/**
* The format to return audio in. Supported formats are `mp3`, `opus`, `aac`,
* `flac`, `pcm`, and `wav`.
*
* The `pcm` audio format, similar to `wav` but without a header, utilizes a 24kHz
* sample rate, mono channel, and 16-bit depth in signed little-endian format.
   * The format to return audio in. Supported formats are `mp3`, `opus`, `aac`,
   * `flac`, `wav`, and `pcm`.
*/
response_format?: 'mp3' | 'opus' | 'aac' | 'flac' | 'pcm' | 'wav';
response_format?: 'mp3' | 'opus' | 'aac' | 'flac' | 'wav' | 'pcm';

/**
* The speed of the generated audio. Select a value from `0.25` to `4.0`. `1.0` is
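A hedged usage sketch of the `pcm` response format documented above (the output path is illustrative; the 24kHz/16-bit details come from the doc text this diff removes):

```ts
import fs from 'fs';
import OpenAI from 'openai';

const openai = new OpenAI();

async function main() {
  const response = await openai.audio.speech.create({
    model: 'tts-1',
    voice: 'alloy',
    input: 'Hello there!',
    // Per the removed doc text: raw 24kHz, mono, 16-bit signed little-endian
    // samples with no header.
    response_format: 'pcm',
  });
  const buffer = Buffer.from(await response.arrayBuffer());
  await fs.promises.writeFile('speech.pcm', buffer); // illustrative path
}

main();
```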
18 changes: 14 additions & 4 deletions src/resources/audio/transcriptions.ts
@@ -14,7 +14,7 @@ export class Transcriptions extends APIResource {
}
}

/**
 * Represents a transcription response returned by the model, based on the
 * provided input.
*/
export interface Transcription {
/**
* The transcribed text.
*/
text: string;
}

@@ -26,7 +33,8 @@ export interface TranscriptionCreateParams {
file: Uploadable;

/**
* ID of the model to use. Only `whisper-1` is currently available.
* ID of the model to use. Only `whisper-1` (which is powered by our open source
* Whisper V2 model) is currently available.
*/
model: (string & {}) | 'whisper-1';

@@ -61,9 +69,11 @@
temperature?: number;

/**
* The timestamp granularities to populate for this transcription. Any of these
* options: `word`, or `segment`. Note: There is no additional latency for segment
* timestamps, but generating word timestamps incurs additional latency.
   * The timestamp granularities to populate for this transcription.
   * `response_format` must be set to `verbose_json` to use timestamp granularities.
   * Either or both of these options are supported: `word` or `segment`. Note: There
   * is no additional latency for segment timestamps, but generating word timestamps
   * incurs additional latency.
*/
timestamp_granularities?: Array<'word' | 'segment'>;
}
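The expanded `timestamp_granularities` docs imply a coupling with `response_format`; a minimal usage sketch under that constraint (the file path is illustrative):

```ts
import fs from 'fs';
import OpenAI from 'openai';

const openai = new OpenAI();

async function main() {
  const transcription = await openai.audio.transcriptions.create({
    file: fs.createReadStream('speech.mp3'), // illustrative path
    model: 'whisper-1',
    // Timestamp granularities require the verbose JSON response format.
    response_format: 'verbose_json',
    timestamp_granularities: ['word', 'segment'],
  });
  console.log(transcription.text);
}

main();
```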
3 changes: 2 additions & 1 deletion src/resources/audio/translations.ts
@@ -26,7 +26,8 @@ export interface TranslationCreateParams {
file: Uploadable;

/**
* ID of the model to use. Only `whisper-1` is currently available.
* ID of the model to use. Only `whisper-1` (which is powered by our open source
* Whisper V2 model) is currently available.
*/
model: (string & {}) | 'whisper-1';

4 changes: 2 additions & 2 deletions src/resources/beta/threads/runs/runs.ts
@@ -270,9 +270,9 @@ export namespace Run {
*/
export interface LastError {
/**
* One of `server_error` or `rate_limit_exceeded`.
* One of `server_error`, `rate_limit_exceeded`, or `invalid_prompt`.
*/
code: 'server_error' | 'rate_limit_exceeded';
code: 'server_error' | 'rate_limit_exceeded' | 'invalid_prompt';

/**
* A human-readable description of the error.
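A hedged sketch of branching on the new `invalid_prompt` code when a run fails (the function, IDs, and polling context are illustrative, not from this PR):

```ts
import OpenAI from 'openai';

const openai = new OpenAI();

async function reportRunFailure(threadId: string, runId: string) {
  const run = await openai.beta.threads.runs.retrieve(threadId, runId);
  if (run.status === 'failed' && run.last_error) {
    switch (run.last_error.code) {
      case 'invalid_prompt': // newly added in this release
        console.error('Prompt rejected:', run.last_error.message);
        break;
      case 'rate_limit_exceeded':
        console.error('Rate limited:', run.last_error.message);
        break;
      case 'server_error':
        console.error('Server error:', run.last_error.message);
        break;
    }
  }
  return run;
}
```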