
add tts #5459

Merged
merged 13 commits into from
Sep 18, 2024

Changes from 12 commits
24 changes: 19 additions & 5 deletions app/client/api.ts
@@ -25,6 +25,7 @@ export const ROLES = ["system", "user", "assistant"] as const;
export type MessageRole = (typeof ROLES)[number];

export const Models = ["gpt-3.5-turbo", "gpt-4"] as const;
export const TTSModels = ["tts-1", "tts-1-hd"] as const;
export type ChatModel = ModelType;

export interface MultimodalContent {
@@ -53,6 +54,15 @@ export interface LLMConfig {
style?: DalleRequestPayload["style"];
}

export interface SpeechOptions {
model: string;
input: string;
voice: string;
response_format?: string;
speed?: number;
onController?: (controller: AbortController) => void;
}

export interface ChatOptions {
messages: RequestMessage[];
config: LLMConfig;
@@ -87,6 +97,7 @@ export interface LLMModelProvider {

export abstract class LLMApi {
abstract chat(options: ChatOptions): Promise<void>;
abstract speech(options: SpeechOptions): Promise<ArrayBuffer>;
abstract usage(): Promise<LLMUsage>;
abstract models(): Promise<LLMModel[]>;
}
@@ -205,13 +216,16 @@ export function validString(x: string): boolean {
return x?.length > 0;
}

export function getHeaders() {
export function getHeaders(ignoreHeaders: boolean = false) {
const accessStore = useAccessStore.getState();
const chatStore = useChatStore.getState();
const headers: Record<string, string> = {
"Content-Type": "application/json",
Accept: "application/json",
};
let headers: Record<string, string> = {};
if (!ignoreHeaders) {
headers = {
"Content-Type": "application/json",
Accept: "application/json",
};
}

const clientConfig = getClientConfig();

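For reference, here is a minimal usage sketch of the new speech() contract (not part of this PR). It assumes a caller module placed next to app/client/api.ts, an arbitrary LLMApi implementation passed in as api, and example values for the model and voice fields:

// Illustrative only: drive any LLMApi implementation's speech() method.
import { LLMApi, SpeechOptions } from "./api";

async function speakWith(api: LLMApi, text: string): Promise<ArrayBuffer> {
  const options: SpeechOptions = {
    model: "tts-1",         // one of TTSModels ("tts-1" | "tts-1-hd")
    input: text,            // the text to synthesize
    voice: "alloy",         // example voice name; provider-specific
    response_format: "mp3", // optional, provider-specific
    speed: 1.0,             // optional playback speed
    onController: (controller) => {
      // keep a handle so the request can be aborted later
      console.log("speech request can be aborted via", controller);
    },
  };
  // Each provider decides how (or whether) to fulfil this; the OpenAI client
  // in this PR returns the raw audio bytes as an ArrayBuffer.
  return api.speech(options);
}
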
5 changes: 5 additions & 0 deletions app/client/platforms/alibaba.ts
@@ -12,6 +12,7 @@ import {
getHeaders,
LLMApi,
LLMModel,
SpeechOptions,
MultimodalContent,
} from "../api";
import Locale from "../../locales";
@@ -83,6 +84,10 @@ export class QwenApi implements LLMApi {
return res?.output?.choices?.at(0)?.message?.content ?? "";
}

speech(options: SpeechOptions): Promise<ArrayBuffer> {
throw new Error("Method not implemented.");
}

async chat(options: ChatOptions) {
const messages = options.messages.map((v) => ({
role: v.role,
6 changes: 5 additions & 1 deletion app/client/platforms/anthropic.ts
@@ -1,5 +1,5 @@
import { Anthropic, ApiPath } from "@/app/constant";
import { ChatOptions, getHeaders, LLMApi } from "../api";
import { ChatOptions, getHeaders, LLMApi, SpeechOptions } from "../api";
import {
useAccessStore,
useAppConfig,
@@ -73,6 +73,10 @@ const ClaudeMapper = {
const keys = ["claude-2, claude-instant-1"];

export class ClaudeApi implements LLMApi {
speech(options: SpeechOptions): Promise<ArrayBuffer> {
throw new Error("Method not implemented.");
}
Comment on lines +76 to +78
Contributor

Implement the speech method.

The speech method has been added to the ClaudeApi class, but it is not yet implemented and throws an error. To provide the intended speech functionality, please implement the method body to handle the SpeechOptions parameter and return a Promise<ArrayBuffer>.

Do you want me to generate a sample implementation for the speech method or open a GitHub issue to track this task?
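
For illustration, a minimal stub along these lines (not part of this PR, and assuming Anthropic exposes no TTS endpoint) would reject the returned promise with a descriptive error instead of throwing synchronously, so callers that await or .catch() the result handle the failure uniformly:

  // Assumption: Anthropic provides no text-to-speech API, so fail fast with a
  // rejected promise rather than a synchronous throw.
  speech(options: SpeechOptions): Promise<ArrayBuffer> {
    return Promise.reject(
      new Error("TTS is not supported by the Anthropic provider (model: " + options.model + ")"),
    );
  }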


extractMessage(res: any) {
console.log("[Response] claude response: ", res);

5 changes: 5 additions & 0 deletions app/client/platforms/baidu.ts
@@ -14,6 +14,7 @@ import {
LLMApi,
LLMModel,
MultimodalContent,
SpeechOptions,
} from "../api";
import Locale from "../../locales";
import {
@@ -75,6 +76,10 @@ export class ErnieApi implements LLMApi {
return [baseUrl, path].join("/");
}

speech(options: SpeechOptions): Promise<ArrayBuffer> {
throw new Error("Method not implemented.");
}

async chat(options: ChatOptions) {
const messages = options.messages.map((v) => ({
// "error_code": 336006, "error_msg": "the role of message with even index in the messages must be user or function",
5 changes: 5 additions & 0 deletions app/client/platforms/bytedance.ts
@@ -13,6 +13,7 @@ import {
LLMApi,
LLMModel,
MultimodalContent,
SpeechOptions,
} from "../api";
import Locale from "../../locales";
import {
@@ -77,6 +78,10 @@ export class DoubaoApi implements LLMApi {
return res.choices?.at(0)?.message?.content ?? "";
}

speech(options: SpeechOptions): Promise<ArrayBuffer> {
throw new Error("Method not implemented.");
}
Comment on lines +81 to +83
Contributor

Speech method added but not implemented.

The speech method has been added to the DoubaoApi class, suggesting an intention to support speech-related functionality in the future. However, the method is currently not implemented and throws an error.

Consider the following suggestions:

  • Provide a clear timeline for when the speech functionality will be implemented, and update the method accordingly.
  • If the speech functionality is not planned for the near future, consider removing the method until it is fully implemented to avoid confusion and potential errors if the method is called.


async chat(options: ChatOptions) {
const messages = options.messages.map((v) => ({
role: v.role,
13 changes: 12 additions & 1 deletion app/client/platforms/google.ts
@@ -1,5 +1,12 @@
import { ApiPath, Google, REQUEST_TIMEOUT_MS } from "@/app/constant";
import { ChatOptions, getHeaders, LLMApi, LLMModel, LLMUsage } from "../api";
import {
ChatOptions,
getHeaders,
LLMApi,
LLMModel,
LLMUsage,
SpeechOptions,
} from "../api";
import { useAccessStore, useAppConfig, useChatStore } from "@/app/store";
import { getClientConfig } from "@/app/config/client";
import { DEFAULT_API_HOST } from "@/app/constant";
@@ -56,6 +63,10 @@ export class GeminiProApi implements LLMApi {
""
);
}
speech(options: SpeechOptions): Promise<ArrayBuffer> {
throw new Error("Method not implemented.");
}
Comment on lines +66 to +68
Contributor

Complete the implementation of the speech method.

The speech method has been added to the GeminiProApi class, but the method body is currently throwing an error indicating that it is not implemented. To enable the speech functionality, please complete the implementation of the method body.

Consider the following:

  • Implement the necessary logic to handle the speech-related functionality based on the provided SpeechOptions.
  • Ensure that the method returns a Promise<ArrayBuffer> as per the method signature.
  • Test the implementation to verify that it works as expected.
  • Update the documentation to reflect the new speech functionality and provide usage instructions.


async chat(options: ChatOptions): Promise<void> {
const apiClient = this;
let multimodal = false;
12 changes: 11 additions & 1 deletion app/client/platforms/iflytek.ts
@@ -7,7 +7,13 @@ import {
} from "@/app/constant";
import { useAccessStore, useAppConfig, useChatStore } from "@/app/store";

import { ChatOptions, getHeaders, LLMApi, LLMModel } from "../api";
import {
ChatOptions,
getHeaders,
LLMApi,
LLMModel,
SpeechOptions,
} from "../api";
import Locale from "../../locales";
import {
EventStreamContentType,
@@ -53,6 +59,10 @@ export class SparkApi implements LLMApi {
return res.choices?.at(0)?.message?.content ?? "";
}

speech(options: SpeechOptions): Promise<ArrayBuffer> {
throw new Error("Method not implemented.");
}

async chat(options: ChatOptions) {
const messages: ChatOptions["messages"] = [];
for (const v of options.messages) {
12 changes: 11 additions & 1 deletion app/client/platforms/moonshot.ts
@@ -14,7 +14,13 @@ import {
usePluginStore,
} from "@/app/store";
import { stream } from "@/app/utils/chat";
import { ChatOptions, getHeaders, LLMApi, LLMModel } from "../api";
import {
ChatOptions,
getHeaders,
LLMApi,
LLMModel,
SpeechOptions,
} from "../api";
import { getClientConfig } from "@/app/config/client";
import { getMessageTextContent } from "@/app/utils";
import { RequestPayload } from "./openai";
@@ -53,6 +59,10 @@ export class MoonshotApi implements LLMApi {
return res.choices?.at(0)?.message?.content ?? "";
}

speech(options: SpeechOptions): Promise<ArrayBuffer> {
throw new Error("Method not implemented.");
}
Comment on lines +62 to +64
Contributor

Complete the implementation of the speech function.

The speech function is currently unimplemented and throws an error. This could lead to runtime exceptions if the function is called.

Consider adding a TODO comment to track the pending implementation:

+  // TODO: Implement speech function
   speech(options: SpeechOptions): Promise<ArrayBuffer> {
     throw new Error("Method not implemented.");
   }
Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-  speech(options: SpeechOptions): Promise<ArrayBuffer> {
-    throw new Error("Method not implemented.");
-  }
+  // TODO: Implement speech function
+  speech(options: SpeechOptions): Promise<ArrayBuffer> {
+    throw new Error("Method not implemented.");
+  }


async chat(options: ChatOptions) {
const messages: ChatOptions["messages"] = [];
for (const v of options.messages) {
41 changes: 40 additions & 1 deletion app/client/platforms/openai.ts
@@ -33,6 +33,7 @@ import {
LLMModel,
LLMUsage,
MultimodalContent,
SpeechOptions,
} from "../api";
import Locale from "../../locales";
import { getClientConfig } from "@/app/config/client";
@@ -78,7 +79,7 @@ export interface DalleRequestPayload {
export class ChatGPTApi implements LLMApi {
private disableListModels = true;

path(path: string): string {
path(path: string, model?: string): string {
Member

What does model mean here? It does not seem to be used?

Contributor Author

It is used in speech below.

Member
@Dogtiti Dogtiti Sep 18, 2024

It does not seem to be used inside the path function itself, though? I don't see it there.

const accessStore = useAccessStore.getState();

let baseUrl = "";
@@ -141,6 +142,44 @@ export class ChatGPTApi implements LLMApi {
return res.choices?.at(0)?.message?.content ?? res;
}

async speech(options: SpeechOptions): Promise<ArrayBuffer> {
const requestPayload = {
model: options.model,
input: options.input,
voice: options.voice,
response_format: options.response_format,
speed: options.speed,
};

console.log("[Request] openai speech payload: ", requestPayload);

const controller = new AbortController();
options.onController?.(controller);

try {
const speechPath = this.path(OpenaiPath.SpeechPath, options.model);
const speechPayload = {
method: "POST",
body: JSON.stringify(requestPayload),
signal: controller.signal,
headers: getHeaders(),
};

// make a fetch request
const requestTimeoutId = setTimeout(
() => controller.abort(),
REQUEST_TIMEOUT_MS,
);

const res = await fetch(speechPath, speechPayload);
clearTimeout(requestTimeoutId);
return await res.arrayBuffer();
} catch (e) {
console.log("[Request] failed to make a speech request", e);
throw e;
}
}

async chat(options: ChatOptions) {
const modelConfig = {
...useAppConfig.getState().modelConfig,
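As a hypothetical caller-side sketch (not part of this diff, assuming the caller lives under app/client), the ArrayBuffer returned by speech() could be decoded and played in the browser with the Web Audio API; the model and voice values below are examples only:

// Illustrative only: request audio from the new OpenAI speech() method and play it.
import { ChatGPTApi } from "./platforms/openai";

async function playTTS(text: string) {
  const api = new ChatGPTApi();

  // Raw audio bytes (e.g. mp3) from the TTS endpoint.
  const audio: ArrayBuffer = await api.speech({
    model: "tts-1", // example model from TTSModels
    input: text,
    voice: "alloy", // example voice
  });

  // Decode and play the bytes with the Web Audio API.
  const ctx = new AudioContext();
  const buffer = await ctx.decodeAudioData(audio);
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start();
}
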
5 changes: 5 additions & 0 deletions app/client/platforms/tencent.ts
@@ -8,6 +8,7 @@ import {
LLMApi,
LLMModel,
MultimodalContent,
SpeechOptions,
} from "../api";
import Locale from "../../locales";
import {
@@ -89,6 +90,10 @@ export class HunyuanApi implements LLMApi {
return res.Choices?.at(0)?.Message?.Content ?? "";
}

speech(options: SpeechOptions): Promise<ArrayBuffer> {
throw new Error("Method not implemented.");
}
Comment on lines +93 to +95
Contributor

Implement the speech function or remove the placeholder.

The speech function has been added to the HunyuanApi class, but it is not yet implemented. The function currently throws an error indicating that the method is not implemented.

Please consider the following:

  • Review the SpeechOptions type imported from ../api to understand the expected input for this function.
  • Implement the function logic to handle speech-related operations as intended.
  • If speech functionality is not needed, remove the placeholder function to avoid runtime exceptions.


async chat(options: ChatOptions) {
const visionModel = isVisionModel(options.config.model);
const messages = options.messages.map((v, index) => ({