
minor change #28

Merged
merged 1 commit into zurawiki:main on Aug 7, 2023

Conversation

kojix2 (Contributor) commented on Aug 6, 2023

Hi

This is a completely unimportant pull request.
It makes test_num_tokens_from_messages look the same as test_get_chat_completion_max_tokens.

I am working on a Crystal binding for this library. This is my second day studying Rust, and this is my first Rust pull request.

Feel free to close this one, as it does not contain any essential changes. Thanks.

For reference, the get_chat_completion_max_tokens usage that the other test is being aligned with looks like this:

// Build a single system message and compute the remaining completion budget for the model.
let model = "gpt-3.5-turbo";
let messages = &[async_openai::types::ChatCompletionRequestMessage {
    content: Some("You are a helpful assistant that only speaks French.".to_string()),
    role: async_openai::types::Role::System,
    name: None,
    function_call: None,
}];
let max_tokens = get_chat_completion_max_tokens(model, messages).unwrap();
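After the change, test_num_tokens_from_messages presumably builds its message the same way and only swaps the function under test. A minimal sketch of that shape, assuming num_tokens_from_messages accepts the same (model, messages) arguments as get_chat_completion_max_tokens:

// Same message construction as above.
let model = "gpt-3.5-turbo";
let messages = &[async_openai::types::ChatCompletionRequestMessage {
    content: Some("You are a helpful assistant that only speaks French.".to_string()),
    role: async_openai::types::Role::System,
    name: None,
    function_call: None,
}];
// Assumption: num_tokens_from_messages takes the same arguments and returns a Result.
let num_tokens = num_tokens_from_messages(model, messages).unwrap();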

zurawiki (Owner) commented on Aug 7, 2023

Thanks for the PR. Happy to make the test code more consistent.

zurawiki merged commit bcb3e71 into zurawiki:main on Aug 7, 2023