I'm working with the Assistant API, and I want to implement streaming for the response content. Here's my current code:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	openai "github.com/sashabaranov/go-openai"
)

func testGPT_thread(messageData string, threadId string) string {
	client := openai.NewClient("your-api-key")
	ctx := context.Background()
	assistantId := "your-assistant-id"

	if threadId == "" {
		thread, err := client.CreateThread(ctx, openai.ThreadRequest{})
		if err != nil {
			fmt.Printf("Create thread error: %v\n", err)
			return ""
		}
		threadId = thread.ID
	}

	message, err := client.CreateMessage(ctx, threadId, openai.MessageRequest{
		Role:    openai.ChatMessageRoleUser,
		Content: messageData,
	})
	if err != nil {
		fmt.Printf("Create message error: %v\n", err)
		return threadId
	}
	_ = message // otherwise unused, which would not compile

	run, err := client.CreateRun(ctx, threadId, openai.RunRequest{
		AssistantID: assistantId,
	})
	if err != nil {
		fmt.Printf("Create run error: %v\n", err)
		return threadId
	}

	// Currently polling the status
	for run.Status == openai.RunStatusQueued || run.Status == openai.RunStatusInProgress {
		run, err = client.RetrieveRun(ctx, threadId, run.ID)
		if err != nil {
			return threadId
		}
		log.Printf("Run status: %s\n", run.Status)
		time.Sleep(100 * time.Millisecond)
	}

	if run.Status != openai.RunStatusCompleted {
		log.Fatalf("run failed with status %s", run.Status)
	}

	numMessages := 1
	messages, err := client.ListMessage(ctx, run.ThreadID, &numMessages, nil, nil, nil, &run.ID)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(messages.Messages[0].Content[0].Text.Value)
	return threadId
}
```
Pass the response `messages.Messages[0].Content[0].Text.Value` to a channel that is consumed by the text/event-stream handler (or a similar controller):
```go
for {
	resp, err := stream.Recv()
	if errors.Is(err, io.EOF) {
		break
	}
	if err != nil {
		fmt.Printf("\nStream error: %v\n", err)
		return err
	}
	newmsg := fmt.Sprintf("%v", resp.Choices[0].Delta.Content)
	controllerChannel <- newmsg
}
```
and inside the text/event-stream handler:
```go
// Assert http.Flusher once, before the loop, instead of on every iteration.
flusher, ok := w.(http.Flusher)
if !ok {
	logger.Error(prefixLog + ". streaming not ok.")
	return
}
for stream := range controllerChannel {
	fmt.Fprintf(w, "data: %v\n\n", stream)
	flusher.Flush()
}
```
Don't forget to close the channel when the producer is done, otherwise the `range` loop in the handler never exits.