Backport the performance improvement from llama.cpp #709
Labels: performance (CPU and memory usage - results and comparisons)
It would be very cool if the performance improvements from ggerganov/llama.cpp#613 could be backported to this repo.
I couldn't find an issue for this; if there is one, I'm happy to close this.
Comments
Related: #702. Since whisper.cpp shares the same llama.* and ggml.* files with llama.cpp, I think it is just one step away: convert the whisper model from ggml into the new ggjt format.
Just updated. No need to generate new models - everything should just work.
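For anyone who wants to measure the effect locally, here is a minimal timing sketch against whisper.cpp's C API. It assumes the whisper_init_from_file()/whisper_free() entry points from whisper.h of that period (older trees expose whisper_init() instead), and it only times the model load, which is the part llama.cpp#613 targeted:

```c
// load_timer.c - rough model-load timer for whisper.cpp.
// Assumes whisper.h circa 2023; on older trees use whisper_init()
// instead of whisper_init_from_file().
#include <stdio.h>
#include <time.h>
#include "whisper.h"

int main(int argc, char ** argv) {
    const char * model = argc > 1 ? argv[1] : "models/ggml-base.en.bin";

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    // the load step that the ggjt/mmap work is meant to speed up
    struct whisper_context * ctx = whisper_init_from_file(model);
    if (ctx == NULL) {
        fprintf(stderr, "failed to load %s\n", model);
        return 1;
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("model load: %.1f ms\n",
           (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6);

    whisper_free(ctx);
    return 0;
}
```

Link it against the whisper library built by the repo (the library itself is C++, so link with g++ or add -lstdc++). The repo's own `bench` example covers encoder throughput as well, if end-to-end numbers are what you're after.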
darth-vader-lg added a commit to darth-vader-lg/whisper.cpp that referenced this issue on Apr 16, 2023:
…estore the speed This reverts commit 69b8503.
jacobwu-b pushed a commit to jacobwu-b/Transcriptify-by-whisper.cpp that referenced this issue on Oct 24, 2023:
- About x2 overall performance improvement on Apple Silicon
- Results should now be the same for different number of threads (not tested)
landtanin pushed a commit to landtanin/whisper.cpp that referenced this issue on Dec 16, 2023:
- About x2 overall performance improvement on Apple Silicon
- Results should now be the same for different number of threads (not tested)
iThalay pushed a commit to iThalay/whisper.cpp that referenced this issue on Sep 23, 2024:
- About x2 overall performance improvement on Apple Silicon
- Results should now be the same for different number of threads (not tested)