[Whisper] Add conversion script for the tokenizer #27338
Do we need to convert the fast tokenizer as well? Or is it all good with just the slow one?
The fast tokenizer can always be converted from the slow one when loading with `AutoTokenizer`, so I'd say there's no need, but I can add a comment.
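A minimal sketch of what this means in practice, assuming the conversion script has already written the slow tokenizer files to a local directory (the path here is hypothetical, not part of the PR):

```python
from transformers import AutoTokenizer

# With use_fast=True (the default), AutoTokenizer builds the fast tokenizer
# from the saved slow tokenizer files on load, so the conversion script only
# needs to produce the slow tokenizer.
tokenizer = AutoTokenizer.from_pretrained("./whisper-converted", use_fast=True)
print(type(tokenizer).__name__)  # expected: WhisperTokenizerFast
```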
To me it's more intuitive to always convert the tokenizer, since we can't use the model without it.
Yes, but it's not backward compatible, because this requires `tiktoken`.
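One common way to keep such an extra dependency from breaking existing users is to import it lazily and only fail when the tokenizer-conversion path is actually used. This is a hedged sketch, not the PR's implementation; the function name and message are assumptions:

```python
# Optional dependency guard: the rest of the conversion script keeps working
# even if tiktoken is not installed.
try:
    import tiktoken  # only needed for the tokenizer conversion path
except ImportError:
    tiktoken = None


def convert_tokenizer(*args, **kwargs):
    """Hypothetical entry point for the tokenizer conversion step."""
    if tiktoken is None:
        raise ImportError(
            "Converting the Whisper tokenizer requires `tiktoken`; "
            "install it with `pip install tiktoken` or skip the tokenizer conversion."
        )
    # ... actual conversion logic would go here ...
```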