NPU support in whisper.cpp #1557
Would be great if we can find a way to utilize the NPUs! Keep us in the loop!
I tried converting the Whisper encoder model to the RKNPU format (.rknn). The conversion succeeded, but the estimated runtime is quite slow, even slower than running on the CPU. I think the NPU doesn't fully support transformers; some operators still fall back to the CPU.
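For anyone wanting to reproduce this, here is a minimal sketch of the conversion flow with Rockchip's rknn-toolkit2, assuming the Whisper encoder has already been exported to ONNX (the file names are placeholders):

```python
from rknn.api import RKNN  # pip package: rknn-toolkit2 (runs on an x86 host)

rknn = RKNN()
# Target the RK3588's NPU; use 'rk3566' for the RK3566.
rknn.config(target_platform='rk3588')

# 'whisper_encoder.onnx' is a placeholder for your own ONNX export.
if rknn.load_onnx(model='whisper_encoder.onnx') != 0:
    raise RuntimeError('failed to load ONNX model')

# Build without quantization first, to separate operator-support problems
# from accuracy problems.
if rknn.build(do_quantization=False) != 0:
    raise RuntimeError('failed to build RKNN model')

rknn.export_rknn('whisper_encoder.rknn')
rknn.release()
```

The toolkit logs which operators it cannot map to the NPU, which is where the CPU fallbacks described above show up.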
Some interesting development was done here: https://github.com/usefulsensors/useful-transformers. However, not everything runs on the NPU, and I've personally had mixed success running non-English models.
Yes, I've seen that. But I'm looking to enhance the ggml tensor library by adding some operators. This way, not only will whisper.cpp be able to utilize the NPU, but other ggml-based projects like llama.cpp will as well. I've ordered an OrangePi 5 Plus with 32 GiB RAM from AliExpress, which is still in transit : )
Hopefully, we'll be able to run all models, regardless of their size and whether they are English-only or multilingual.
The most challenging aspect I've encountered thus far is finding an appropriate driver for the RK3588 & RK3566 NPU. Most Linux distributions don't include an NPU driver, with this one being the notable exception: https://github.com/unifreq/linux-5.10.y-rk35xx/tree/main/drivers/rknpu
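As a quick sanity check before going further, something like this can tell whether an NPU driver is present at all. Note that the module name and the debugfs path here are assumptions based on the rk35xx tree linked above:

```python
from pathlib import Path

def rknpu_driver_present() -> bool:
    # Heuristic: assumes the driver is a module named 'rknpu' and/or
    # exposes a debugfs node, as in the linux-5.10.y-rk35xx tree.
    try:
        if "rknpu" in Path("/proc/modules").read_text():
            return True
    except OSError:
        pass
    # Covers kernels where the driver is built in rather than a module.
    return Path("/sys/kernel/debug/rknpu/version").exists()

print("rknpu driver present:", rknpu_driver_present())
```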
You're right. From my experiments, it seems the NPU on the RK3588 is only effective for 3x3 convolutions. Unfortunately, its GEMM performance is quite poor.
I discovered that someone else did the exact same thing but didn't find success. @ggerganov The challenge with the Rockchip NPU stems from its peculiar input and output layouts. To attain maximum speed, it's necessary to transform a 2D matrix into a particular layout. If you don't do this, the driver will take over, but it operates much slower. After processing, you need to convert the result back to the original layout. This process is quite inefficient, and I'm sharing this to prevent others from spending unnecessary time trying to implement it. With the RK3588, when you're working with a matrix A of a given size, both inputs and the output must be reordered into the NPU's native formats (the original comment linked layout diagrams for Matrix A, Matrix B, and Matrix C, which are not reproduced here).
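To make that overhead concrete, here's a small illustrative sketch in Python/NumPy. The 8x8 tiling below is an invented placeholder, not the real RKNPU layout (which is only partially documented); the point is just that every matmul pays extra full passes over memory to reorder the inputs and the output:

```python
import numpy as np

TILE = 8  # invented tile size; the actual RKNPU layout differs

def to_npu_layout(m):
    # Pad to a multiple of TILE, then reorder into contiguous TILE x TILE blocks.
    r, c = m.shape
    m = np.pad(m, ((0, -r % TILE), (0, -c % TILE)))
    R, C = m.shape
    return m.reshape(R // TILE, TILE, C // TILE, TILE).swapaxes(1, 2).copy()

def from_npu_layout(t, r, c):
    # Undo the tiling and strip the padding.
    br, bc, _, _ = t.shape
    return t.swapaxes(1, 2).reshape(br * TILE, bc * TILE)[:r, :c]

A = np.random.rand(100, 200).astype(np.float32)
B = np.random.rand(200, 50).astype(np.float32)

A_t = to_npu_layout(A)  # extra pass over A's memory
B_t = to_npu_layout(B)  # extra pass over B's memory
# ... the NPU would multiply the tiled buffers A_t and B_t here ...
C = from_npu_layout(to_npu_layout(A @ B), 100, 50)  # extra passes on the output
assert np.allclose(C, A @ B)
```

For the large activations in Whisper's encoder, those extra passes can easily cancel out whatever the NPU gains on the multiply itself.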
@bobqianic this is a great idea. The question is: how can we implement whisper.cpp on an NPU/TPU on an embedded device? I have an OrangePi 5 and was hoping the NPU would provide benefits, but it looks like it won't be very useful. Thank you for looking into it. I have one idea that may be theoretically possible, but it would require a good amount of work and $$$. The idea is to use 4 Google Coral Edge TPUs in a pipeline (see the pipeline example here: https://coral.ai/examples/) and in essence jailbreak them (George Hotz is working on it in these videos: https://www.youtube.com/watch?v=rArv2NUXGU8) to run models other than TensorFlow ones (for example, Whisper models). The Coral Edge TPUs would take up all of the USB slots on a Raspberry Pi (maybe a USB hub could be used too), so there would be a bandwidth constraint. Each TPU has up to 8 MB of SRAM to store the models, but in reality it's more like 6.5 MB each, so probably a maximum model size of 26 MB for 4 of these units. The 4-bit quantized tiny model comes in under this. The entire setup may be possible and run quickly, but the accuracy of the tiny model isn't that great. Another idea would be to take TPUs or FPGAs and connect them to a Raspberry Pi via USB or as a Raspberry Pi HAT. That will be bandwidth-limited by the communication protocol (serial, I2C, etc...). Maybe one day when chips like this come out, things will be easier for embedded AI: https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u55
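A quick back-of-envelope check of that SRAM budget, using only the figures quoted in this thread (the ~24 MB size for the 4-bit tiny model comes from a later comment):

```python
# All numbers are the thread's own approximations.
coral_sram_mb = 8.0   # on-chip SRAM per Coral Edge TPU
overhead_mb = 1.5     # per-chip overhead, so ~6.5 MB usable each
n_tpus = 4            # one per Raspberry Pi USB port

budget_mb = (coral_sram_mb - overhead_mb) * n_tpus  # 26.0 MB across the pipeline
tiny_q4_mb = 24.0     # ~4-bit quantized Whisper tiny (figure quoted below)

print(f"pipeline budget: {budget_mb:.1f} MB, model: {tiny_q4_mb:.1f} MB, "
      f"fits: {tiny_q4_mb <= budget_mb}")
```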
@bobqianic Thank you for the updates! The work in marty1885/llama.cpp@rknpu2-backend is interesting and I will be following the progress.
For reference: people have worked around the matrix reordering specifically for Whisper by designing the entire implementation around that constraint. useful-transformers is a very successful example: https://github.com/usefulsensors/useful-transformers
Hey :) Since Raspberry Pi is launching a new AI accelerator HAT (https://www.raspberrypi.com/products/ai-kit/), I am reopening the topic. Do you by chance have any news or ideas on how to start improving performance with this HAT? I guess it would be easier than the Coral, as we don't need to jailbreak it.
@Lhemamou I actually talked to Hailo about this during Computex. Long story short: no, unless someone wants to form a company and sign an NDA to gain low-level access.
@marty1885 I have a company and I'd be open to signing an NDA as long as it looks reasonable, but before I go too far, my main concern is the hardware. Does anyone know what the Hailo hardware limit is with regard to model size? Feel free to send links. For example, the Google Coral TPU stick ASIC has 8 MB of SRAM built into the chip. Something like 1.5 MB of overhead is used, so a model can only be 6.5 MB max: https://coral.ai/docs/edgetpu/compiler/#parameter-data-caching For the Google Coral TPU, the Whisper tiny model is too big; even the 4-bit quantized version of the tiny model is around 24 MB (tiny: 75 MiB on disk, ~273 MB in memory). I'm assuming the Hailo chip does the matrix multiply internally and the results are stored in a pipeline in internal SRAM, but I could be wrong.
@solarsamuel I can't tell without knowing NDAed information. This is just what I gathered from their sales rep (at least I think he was in sales).
@marty1885 I can reach out. Who would be a good person to contact? I'm definitely not making any guarantees that any of this will work out.
@solarsamuel Sorry for the late reply. I got caught up in some personal issues. Let's not misuse the issue tracker and talk through email instead? You can find mine on my website via the link on my GitHub profile. Your GH profile links to a company, and I'm not sure if that's the one you want to use for discussion. I don't have an email in mind; I don't have a business card from them, since the NDA was a big showstopper for me.
@bobqianic - Would you benefit from having a driver package for the Mali GPU kernel drivers on RK3588 (specifically for Debian Bullseye)? Let me know if this is something that would improve inference performance!
Christmas is coming soon, and I want to take some time to research something interesting, such as low-power edge inference. Although the current whisper.cpp can run on a Raspberry Pi, the inference performance cannot achieve real-time transcription. Fortunately, there are now some development boards that use processors with NPUs, which can be used to achieve real-time transcription with larger models. My primary goal is to first support the RK3566 and RK3588.

Roadmap:
...
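Assuming an .rknn encoder built with the conversion flow sketched earlier in the thread, on-device inference would go through Rockchip's rknn-toolkit-lite2 runtime. A minimal sketch with placeholder file names:

```python
import numpy as np
from rknnlite.api import RKNNLite  # pip package: rknn-toolkit-lite2 (runs on the board)

rknn = RKNNLite()
# 'whisper_encoder.rknn' is a placeholder for a model built on the x86 host.
if rknn.load_rknn('whisper_encoder.rknn') != 0:
    raise RuntimeError('failed to load RKNN model')
if rknn.init_runtime() != 0:
    raise RuntimeError('failed to initialize the NPU runtime')

# Whisper's encoder expects 80 mel bins x 3000 frames (30 s of audio).
mel = np.zeros((1, 80, 3000), dtype=np.float32)
outputs = rknn.inference(inputs=[mel])
rknn.release()
```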
Reference:
https://github.com/rockchip-linux/rknpu2