
ci : use local ggml #2567

Merged
merged 1 commit into master on Nov 16, 2024

Conversation

ggerganov
Owner

No description provided.

ggerganov merged commit 01d3bd7 into master on Nov 16, 2024
79 of 87 checks passed
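
The PR itself carries no description, but its title and the later changelog entry ("ci : use local ggml in Android build") suggest the change points the CI build at the ggml sources bundled in the repository rather than an externally fetched copy. Below is a minimal CMake sketch of that general pattern; the option name, directory layout, and fallback are illustrative assumptions and are not taken from this PR's diff:

```cmake
# Hypothetical sketch: prefer the ggml sources vendored in the repository
# over an external copy. Option name and paths are assumptions, not the
# actual change made in this PR.
option(USE_LOCAL_GGML "Build against the in-tree ggml sources" ON)

if (USE_LOCAL_GGML AND EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/ggml/CMakeLists.txt")
    # Build ggml from the checkout itself, so CI does not depend on
    # fetching or installing an external ggml package.
    add_subdirectory(ggml)
else()
    # Fall back to an installed ggml package if the in-tree copy is absent.
    find_package(ggml REQUIRED)
endif()
```

Building against the in-tree ggml keeps CI reproducible: the ggml revision under test is whatever the checkout pins, rather than whatever an external fetch resolves to at build time.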
bygreencn added a commit to bygreencn/whisper.cpp that referenced this pull request on Dec 5, 2024
* tag 'v1.7.2': (285 commits)
  release : v1.7.2
  sycl: fix example build (ggerganov#2570)
  ci : use local ggml in Android build (ggerganov#2567)
  ggml : tmp workaround for whisper.cpp (skip) (ggerganov#2565)
  update : readme
  scripts : fix sync path
  whisper.swiftui : switch Mac dest to Mac (Designed for iPad) (ggerganov#2562)
  cmake : fix ppc64 check (#0)
  whisper : include ggml-cpu.h (#0)
  build : fixes
  talk-llama : sync llama.cpp
  whisper : fix build (#0)
  sync : ggml
  sycl : Fixes to broken builds and test-backend-ops (llama/10257)
  vulkan: Optimize contiguous copies (llama/10254)
  vulkan: Throttle the number of shader compiles during the build step. (llama/10222)
  metal : more precise Q*K in FA vec kernel (llama/10247)
  vulkan: Fix newly added tests for permuted mul_mat and 1D im2col (llama/10226)
  metal : reorder write loop in mul mat kernel + style (llama/10231)
  metal : fix build and some more comments (llama/10229)
  ...