feat: Add gpt-4o-2024-08-06 to model catalog in openai_dart (davidmig…
davidmigloz authored and KennethKnudsen97 committed Oct 1, 2024
1 parent 6b5afee commit 0815082
Showing 29 changed files with 6,234 additions and 2,541 deletions.
6 changes: 6 additions & 0 deletions packages/langchain_openai/lib/src/chat_models/types.dart
@@ -21,10 +21,16 @@ import 'package:meta/meta.dart';
/// - `gpt-4-vision-preview`
/// - `gpt-4o`
/// - `gpt-4o-2024-05-13`
/// - `gpt-4o-2024-08-06`
/// - `gpt-4o-mini`
/// - `gpt-4o-mini-2024-07-18`
/// - `gpt-3.5-turbo`
/// - `gpt-3.5-turbo-16k`
/// - `gpt-3.5-turbo-16k-0613`
/// - `gpt-3.5-turbo-0125`
/// - `gpt-3.5-turbo-0301`
/// - `gpt-3.5-turbo-0613`
/// - `gpt-3.5-turbo-1106`
///
/// Mind that the list may be outdated.
/// See https://platform.openai.com/docs/models for the latest list.
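For context, the snapshot added to the catalog can be selected from `langchain_openai` by passing its id through the chat model options. A minimal sketch assuming the package's usual `ChatOpenAI`/`ChatOpenAIOptions` API (the API key is a placeholder):

```dart
import 'package:langchain/langchain.dart';
import 'package:langchain_openai/langchain_openai.dart';

Future<void> main() async {
  // Pin the chat model to the snapshot added in this commit.
  final chatModel = ChatOpenAI(
    apiKey: 'OPENAI_API_KEY', // placeholder: supply a real key
    defaultOptions: const ChatOpenAIOptions(
      model: 'gpt-4o-2024-08-06',
      temperature: 0,
    ),
  );
  final result = await chatModel.invoke(
    PromptValue.string('Say hello in one word.'),
  );
  print(result.output.content);
}
```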
2 changes: 1 addition & 1 deletion packages/openai_dart/lib/src/generated/client.dart
@@ -58,7 +58,7 @@ class OpenAIClientException implements Exception {
// CLASS: OpenAIClient
// ==========================================

/// Client for OpenAI API (v.2.1.0)
/// Client for OpenAI API (v.2.3.0)
///
/// The OpenAI REST API. Please see https://platform.openai.com/docs/api-reference for more details.
class OpenAIClient {
48 changes: 38 additions & 10 deletions packages/openai_dart/lib/src/generated/schema/assistant_object.dart
@@ -36,29 +36,46 @@ class AssistantObject with _$AssistantObject {
/// The system instructions that the assistant uses. The maximum length is 256,000 characters.
required String? instructions,

/// A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `file_search`, or `function`.
/// A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of
/// types `code_interpreter`, `file_search`, or `function`.
required List<AssistantTools> tools,

/// A set of resources that are made available to the assistant's tools in this thread. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.
@JsonKey(name: 'tool_resources', includeIfNull: false)
ToolResources? toolResources,

/// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
/// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional
/// information about the object in a structured format. Keys can be a maximum of 64 characters long and values
/// can be a maximum of 512 characters long.
required Map<String, dynamic>? metadata,

/// What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
/// What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random,
/// while lower values like 0.2 will make it more focused and deterministic.
@JsonKey(includeIfNull: false) @Default(1.0) double? temperature,

/// An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
/// An alternative to sampling with temperature, called nucleus sampling, where the model considers the results
/// of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability
/// mass are considered.
///
/// We generally recommend altering this or temperature but not both.
@JsonKey(name: 'top_p', includeIfNull: false) @Default(1.0) double? topP,

/// Specifies the format that the model must output. Compatible with [GPT-4o](https://platform.openai.com/docs/models/gpt-4o), [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
/// Specifies the format that the model must output. Compatible with [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
/// [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models
/// since `gpt-3.5-turbo-1106`.
///
/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees
/// the model will match your supplied JSON schema. Learn more in the
/// [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
///
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates
/// is valid JSON.
///
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a
/// system or user message. Without this, the model may generate an unending stream of whitespace until the
/// generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note
/// that the message content may be partially cut off if `finish_reason="length"`, which indicates the
/// generation exceeded `max_tokens` or the conversation exceeded the max context length.
@_AssistantObjectResponseFormatConverter()
@JsonKey(name: 'response_format', includeIfNull: false)
AssistantObjectResponseFormat? responseFormat,
@@ -170,11 +187,22 @@ enum AssistantResponseFormatMode {
// CLASS: AssistantObjectResponseFormat
// ==========================================

/// Specifies the format that the model must output. Compatible with [GPT-4o](https://platform.openai.com/docs/models/gpt-4o), [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
/// Specifies the format that the model must output. Compatible with [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
/// [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models
/// since `gpt-3.5-turbo-1106`.
///
/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees
/// the model will match your supplied JSON schema. Learn more in the
/// [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
///
/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates
/// is valid JSON.
///
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a
/// system or user message. Without this, the model may generate an unending stream of whitespace until the
/// generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note
/// that the message content may be partially cut off if `finish_reason="length"`, which indicates the
/// generation exceeded `max_tokens` or the conversation exceeded the max context length.
@freezed
sealed class AssistantObjectResponseFormat
with _$AssistantObjectResponseFormat {
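For reference, the two `response_format` payloads described in the doc comments above have the following wire shapes, shown here as Dart map literals (the schema contents are a made-up illustration, not part of the API surface):

```dart
// JSON mode: guarantees the generated message is valid JSON.
const jsonMode = {'type': 'json_object'};

// Structured Outputs: guarantees the output matches the supplied schema.
const structuredOutput = {
  'type': 'json_schema',
  'json_schema': {
    'name': 'weather_report', // hypothetical schema name
    'schema': {
      'type': 'object',
      'properties': {
        'temperature_c': {'type': 'number'},
        'summary': {'type': 'string'},
      },
      'required': ['temperature_c', 'summary'],
    },
  },
};
```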
@@ -80,7 +80,7 @@ class AssistantToolsFileSearchFileSearch

/// Factory constructor for AssistantToolsFileSearchFileSearch
const factory AssistantToolsFileSearchFileSearch({
/// The maximum number of results the file search tool should output. The default is 20 for gpt-4* models
/// The maximum number of results the file search tool should output. The default is 20 for `gpt-4*` models
/// and 5 for gpt-3.5-turbo. This number should be between 1 and 50 inclusive.
///
/// Note that the file search tool may output fewer than `max_num_results` results. See the [file search
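A sketch of how the `max_num_results` cap might be set from Dart. The `AssistantTools.fileSearch` union constructor and the camel-cased `maxNumResults` parameter are assumptions inferred from the generated class and its `max_num_results` JSON mapping:

```dart
import 'package:openai_dart/openai_dart.dart';

// Cap file search at 10 results (the documented range is 1 to 50 inclusive).
const tool = AssistantTools.fileSearch(
  fileSearch: AssistantToolsFileSearchFileSearch(
    maxNumResults: 10, // assumed camelCase name for max_num_results
  ),
);
```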
4 changes: 3 additions & 1 deletion packages/openai_dart/lib/src/generated/schema/batch.dart
@@ -74,7 +74,9 @@ class Batch with _$Batch {
@JsonKey(name: 'request_counts', includeIfNull: false)
BatchRequestCounts? requestCounts,

/// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
/// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional
/// information about the object in a structured format. Keys can be a maximum of 64 characters long and values
/// can be a maximum of 512 characters long.
@JsonKey(includeIfNull: false) dynamic metadata,
}) = _Batch;

@@ -27,29 +27,46 @@ class CreateAssistantRequest with _$CreateAssistantRequest {
/// The system instructions that the assistant uses. The maximum length is 256,000 characters.
@JsonKey(includeIfNull: false) String? instructions,

/// A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `file_search`, or `function`.
/// A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of
/// types `code_interpreter`, `file_search`, or `function`.
@Default([]) List<AssistantTools> tools,

/// A set of resources that are made available to the assistant's tools in this thread. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.
@JsonKey(name: 'tool_resources', includeIfNull: false)
ToolResources? toolResources,

/// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
/// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional
/// information about the object in a structured format. Keys can be a maximum of 64 characters long and values
/// can be a maximum of 512 characters long.
@JsonKey(includeIfNull: false) Map<String, dynamic>? metadata,

/// What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
/// What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random,
/// while lower values like 0.2 will make it more focused and deterministic.
@JsonKey(includeIfNull: false) @Default(1.0) double? temperature,

/// An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
/// An alternative to sampling with temperature, called nucleus sampling, where the model considers the results
/// of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability
/// mass are considered.
///
/// We generally recommend altering this or temperature but not both.
@JsonKey(name: 'top_p', includeIfNull: false) @Default(1.0) double? topP,

/// Specifies the format that the model must output. Compatible with [GPT-4o](https://platform.openai.com/docs/models/gpt-4o), [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
/// Specifies the format that the model must output. Compatible with [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
/// [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models
/// since `gpt-3.5-turbo-1106`.
///
/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees
/// the model will match your supplied JSON schema. Learn more in the
/// [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
///
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates
/// is valid JSON.
///
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a
/// system or user message. Without this, the model may generate an unending stream of whitespace until the
/// generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note
/// that the message content may be partially cut off if `finish_reason="length"`, which indicates the
/// generation exceeded `max_tokens` or the conversation exceeded the max context length.
@_CreateAssistantRequestResponseFormatConverter()
@JsonKey(name: 'response_format', includeIfNull: false)
CreateAssistantRequestResponseFormat? responseFormat,
@@ -163,6 +180,8 @@ enum AssistantModels {
gpt4o,
@JsonValue('gpt-4o-2024-05-13')
gpt4o20240513,
@JsonValue('gpt-4o-2024-08-06')
gpt4o20240806,
@JsonValue('gpt-4o-mini')
gpt4oMini,
@JsonValue('gpt-4o-mini-2024-07-18')
@@ -254,11 +273,22 @@ enum CreateAssistantResponseFormatMode {
// CLASS: CreateAssistantRequestResponseFormat
// ==========================================

/// Specifies the format that the model must output. Compatible with [GPT-4o](https://platform.openai.com/docs/models/gpt-4o), [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
/// Specifies the format that the model must output. Compatible with [GPT-4o](https://platform.openai.com/docs/models/gpt-4o),
/// [GPT-4 Turbo](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models
/// since `gpt-3.5-turbo-1106`.
///
/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees
/// the model will match your supplied JSON schema. Learn more in the
/// [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
///
/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
/// Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates
/// is valid JSON.
///
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a
/// system or user message. Without this, the model may generate an unending stream of whitespace until the
/// generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note
/// that the message content may be partially cut off if `finish_reason="length"`, which indicates the
/// generation exceeded `max_tokens` or the conversation exceeded the max context length.
@freezed
sealed class CreateAssistantRequestResponseFormat
with _$CreateAssistantRequestResponseFormat {
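A sketch of creating an assistant pinned to the snapshot this commit adds, using the new `AssistantModels.gpt4o20240806` enum value. The `AssistantModel.model` wrapper is an assumption about the generated union API; verify against the generated code:

```dart
import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  final client = OpenAIClient(apiKey: 'OPENAI_API_KEY'); // placeholder key
  final assistant = await client.createAssistant(
    request: CreateAssistantRequest(
      // Enum value added in this commit; AssistantModel.model is assumed to
      // be the generated union wrapper around the AssistantModels enum.
      model: AssistantModel.model(AssistantModels.gpt4o20240806),
      name: 'Math tutor',
      instructions: 'You are a personal math tutor. Answer concisely.',
    ),
  );
  print(assistant.id);
}
```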
@@ -302,6 +302,8 @@ enum ChatCompletionModels {
gpt4o,
@JsonValue('gpt-4o-2024-05-13')
gpt4o20240513,
@JsonValue('gpt-4o-2024-08-06')
gpt4o20240806,
@JsonValue('gpt-4o-mini')
gpt4oMini,
@JsonValue('gpt-4o-mini-2024-07-18')
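With `ChatCompletionModels` extended, the new snapshot can be requested type-safely. A minimal sketch following the package's documented client usage (the API key is a placeholder):

```dart
import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  final client = OpenAIClient(apiKey: 'OPENAI_API_KEY'); // placeholder key
  final res = await client.createChatCompletion(
    request: CreateChatCompletionRequest(
      // The enum value added in this commit.
      model: ChatCompletionModel.model(ChatCompletionModels.gpt4o20240806),
      messages: [
        ChatCompletionMessage.system(content: 'You are a helpful assistant.'),
        ChatCompletionMessage.user(
          content: ChatCompletionUserMessageContent.string('Hello!'),
        ),
      ],
      temperature: 0,
    ),
  );
  print(res.choices.first.message.content);
}
```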
@@ -39,7 +39,7 @@ class CreateFineTuningJobRequest with _$CreateFineTuningJobRequest {

/// A string of up to 18 characters that will be added to your fine-tuned model name.
///
/// For example, a `suffix` of "custom-model-name" would produce a model name like `ft:gpt-3.5-turbo:openai:custom-model-name:7p4lURel`.
/// For example, a `suffix` of "custom-model-name" would produce a model name like `ft:gpt-4o-mini:openai:custom-model-name:7p4lURel`.
@JsonKey(includeIfNull: false) String? suffix,

/// The ID of an uploaded file that contains validation data.
@@ -127,6 +127,8 @@ enum FineTuningModels {
davinci002,
@JsonValue('gpt-3.5-turbo')
gpt35Turbo,
@JsonValue('gpt-4o-mini')
gpt4oMini,
}

// ==========================================
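A sketch of launching a fine-tuning job against the newly supported `gpt-4o-mini` base model with a custom suffix. The `FineTuningModel.model` wrapper name and the training-file id are assumptions:

```dart
import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  final client = OpenAIClient(apiKey: 'OPENAI_API_KEY'); // placeholder key
  final job = await client.createFineTuningJob(
    request: CreateFineTuningJobRequest(
      // Enum value added in this commit; FineTuningModel.model is assumed to
      // be the generated union wrapper around the FineTuningModels enum.
      model: FineTuningModel.model(FineTuningModels.gpt4oMini),
      trainingFile: 'file-abc123', // hypothetical uploaded-file id
      // Yields a model name like ft:gpt-4o-mini:openai:custom-model-name:xxxx
      suffix: 'custom-model-name',
    ),
  );
  print(job.id);
}
```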
@@ -25,7 +25,9 @@ class CreateMessageRequest with _$CreateMessageRequest {
/// A list of files attached to the message, and the tools they were added to.
@JsonKey(includeIfNull: false) List<MessageAttachment>? attachments,

/// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
/// Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional
/// information about the object in a structured format. Keys can be a maximum of 64 characters long and values
/// can be a maximum of 512 characters long.
@JsonKey(includeIfNull: false) Map<String, dynamic>? metadata,
}) = _CreateMessageRequest;

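As an illustration of the metadata limits documented above, a map that stays within them (the keys and values are made up):

```dart
// At most 16 pairs; keys up to 64 characters; values up to 512 characters.
const metadata = {
  'source': 'mobile-app',
  'conversation_topic': 'billing',
};
```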
