From 165373c7f18b93f7683a0c70212bbc38deabe45f Mon Sep 17 00:00:00 2001
From: Smoothieewastaken
Date: Sun, 22 Oct 2023 15:50:26 +0545
Subject: [PATCH] fixed typos

---
 examples/huggingface/pytorch/text-generation/README.md            | 4 ++--
 .../huggingface/pytorch/text-generation/deployment/README.md      | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/examples/huggingface/pytorch/text-generation/README.md b/examples/huggingface/pytorch/text-generation/README.md
index b3558e025a6..bc24cddbe56 100644
--- a/examples/huggingface/pytorch/text-generation/README.md
+++ b/examples/huggingface/pytorch/text-generation/README.md
@@ -60,7 +60,7 @@ Performance varies by use, configuration and other factors. See platform configu
 IRQ Balance
-Eabled
+Enabled

 CPU Model
@@ -103,7 +103,7 @@ Performance varies by use, configuration and other factors. See platform configu
 Enabled

-FrequencyGoverner
+FrequencyGovernor

 Performance
diff --git a/examples/huggingface/pytorch/text-generation/deployment/README.md b/examples/huggingface/pytorch/text-generation/deployment/README.md
index d8632602635..c14d19a1a2b 100644
--- a/examples/huggingface/pytorch/text-generation/deployment/README.md
+++ b/examples/huggingface/pytorch/text-generation/deployment/README.md
@@ -52,7 +52,7 @@ OMP_NUM_THREADS= numactl -m -C python ru
 ```
 ### Advanced Inference
-Neural Engine also supports weight compression to `fp8_4e3m`, `fp8_5e2m` and `int8` **only when runing bf16 graph**. If you want to try, please add arg `--weight_type`, like:
+Neural Engine also supports weight compression to `fp8_4e3m`, `fp8_5e2m` and `int8` **only when running bf16 graph**. If you want to try, please add arg `--weight_type`, like:
 ```bash
 OMP_NUM_THREADS= numactl -m -C python run_llm.py --max-new-tokens 32 --input-tokens 32 --batch-size 1 --model_path --model --weight_type=fp8_5e2m
 ```