- MacBook Air M1 (8 GB RAM / 256 GB SSD)
- macOS Sonoma 14.2.1
- Terminal: iTerm
- Python 3.10.9
- GNU Make 3.81
- CMake 3.28.0
- Homebrew 4.1.14
Applicants must demonstrate proficiency in building and executing backend frameworks by sharing screenshots and brief documentation of the build and execution process for examples from these frameworks. Any example may be chosen to demonstrate execution.
1.1 MLX
Follow the guide here.
1.1.1 Installation
- Python installation
pip install mlx
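As a quick sanity check of the Python install, the snippet below creates and evaluates a small array. This is a minimal sketch; it only assumes the basic mlx.core array API from the MLX docs.
import mlx.core as mx

a = mx.array([1.0, 2.0, 3.0])   # array allocated in unified memory
b = mx.square(a)                # operations are evaluated lazily
mx.eval(b)                      # force evaluation
print(b)                        # should print the squared values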
- Build in C++
MLX must be built and installed from source
git clone git@github.com:ml-explore/mlx.git mlx && cd mlx
mkdir -p build && cd build
cmake .. && make -j
make test
make install
1.1.2 MLX whisper example
Clone the examples repository and move into the whisper example
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/whisper
Set up the dependencies
pip install -r requirements.txt
brew install ffmpeg
Convert the model to MLX format
python convert.py --torch-name-or-path tiny --mlx-path mlx_models/tiny
Convert audio to text
import whisper

# transcribe a local audio file; word_timestamps=True adds per-word timing to each segment
output = whisper.transcribe("/Users/ryan/Downloads/audio.mp3", word_timestamps=True)
print(output["segments"][0]["words"])
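For a quick end-to-end check, the full transcript should also be available as a single string (assuming the output layout mirrors OpenAI's whisper):
print(output["text"])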
1.2 whisper.cpp
Clone the repo
git clone https://github.com/ggerganov/whisper.cpp.git
Download a model already converted to ggml format
bash ./models/download-ggml-model.sh base.en
Build and test
make
./main -f samples/jfk.wav
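For batch runs, the compiled binary can also be driven from Python. This is only a sketch: it assumes the model downloaded above sits at models/ggml-base.en.bin and that the -otxt flag writes the transcript next to the input file as samples/jfk.wav.txt.
import subprocess
from pathlib import Path

# run the whisper.cpp CLI built above and ask it to emit a .txt transcript
subprocess.run(
    ["./main", "-m", "models/ggml-base.en.bin", "-f", "samples/jfk.wav", "-otxt"],
    check=True,
)
print(Path("samples/jfk.wav.txt").read_text())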
1.3 WasmEdge (llama.cpp plugin)
Using the hydai/0.13.5_ggml_lts branch. Follow this guide to build the llama.cpp plugin and execute it with this chat example or this API server example.
git clone https://github.com/WasmEdge/WasmEdge.git -b hydai/0.13.5_ggml_lts
cd WasmEdge
brew install grpc
brew install llvm
brew install ninja   # needed for the -GNinja generator used below
brew install cmake
export LLVM_DIR=/opt/homebrew/opt/llvm/lib/cmake
# for Apple Silicon models
cmake -GNinja -Bbuild -DCMAKE_BUILD_TYPE=Release \
-DWASMEDGE_PLUGIN_WASI_NN_BACKEND="GGML" \
-DWASMEDGE_PLUGIN_WASI_NN_GGML_LLAMA_METAL=ON \
-DWASMEDGE_PLUGIN_WASI_NN_GGML_LLAMA_BLAS=OFF \
.
cmake --build build
cmake --install build
Download the quantized Llama 2 chat model in GGUF format
curl -LO https://huggingface.co/second-state/Llama-2-7B-Chat-GGUF/resolve/main/Llama-2-7b-chat-hf-Q5_K_M.gguf
Chat with the model on the CLI
curl -LO https://github.com/second-state/LlamaEdge/releases/latest/download/llama-chat.wasm
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-2-7b-chat-hf-Q5_K_M.gguf llama-chat.wasm -p llama-2-chat
Run the model behind an API server with the chatbot web UI
curl -LO https://github.com/second-state/LlamaEdge/releases/latest/download/llama-api-server.wasm
curl -LO https://github.com/second-state/chatbot-ui/releases/latest/download/chatbot-ui.tar.gz
tar xzf chatbot-ui.tar.gz
rm chatbot-ui.tar.gz
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-2-7b-chat-hf-Q5_K_M.gguf llama-api-server.wasm -p llama-2-chat
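With the server running, it can also be queried from Python. A sketch, assuming the LlamaEdge API server listens on localhost:8080 and exposes an OpenAI-compatible /v1/chat/completions route; the model name in the payload is illustrative.
import json
import urllib.request

payload = {
    "model": "llama-2-7b-chat",  # illustrative model name
    "messages": [{"role": "user", "content": "What is WasmEdge?"}],
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])  # assistant reply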