From 10651eaa9eeea4b6ac6403ce2d65e2f8765b5f8d Mon Sep 17 00:00:00 2001
From: Hiwot Kassa
Date: Thu, 19 Sep 2024 10:11:23 -0700
Subject: [PATCH] fix llama2_70b_lora broken link for Accelerate config file
 in the readme

---
 llama2_70b_lora/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/llama2_70b_lora/README.md b/llama2_70b_lora/README.md
index c79ff15c1..2ba0493d6 100644
--- a/llama2_70b_lora/README.md
+++ b/llama2_70b_lora/README.md
@@ -84,7 +84,7 @@ accelerate launch --config_file configs/default_config.yaml scripts/train.py \
   --seed 1234 \
   --lora_target_modules "qkv_proj,o_proj"
 ```
-where the Accelerate config file is [this one](https://github.com/regisss/lora/blob/main/configs/default_config.yaml).
+where the Accelerate config file is [this one](https://github.com/mlcommons/training/blob/master/llama2_70b_lora/configs/default_config.yaml).
 
 > Using flash attention with `--use_flash_attn` is necessary for training on 8k-token sequences.
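
Note for reviewers: the corrected link now points at the Accelerate config shipped in the benchmark's own `llama2_70b_lora/configs/` directory rather than a personal fork. For context, a minimal sketch of what an Accelerate launch config of this kind can look like is below; the keys are standard Accelerate config fields, but the specific values (DeepSpeed, ZeRO-3, 8 processes) are illustrative assumptions, not the contents of the actual file at the linked path.

```yaml
# Sketch of an Accelerate launch config (illustrative values only;
# the file at the corrected link is authoritative).
compute_environment: LOCAL_MACHINE
distributed_type: DEEPSPEED        # assumption: multi-GPU training via DeepSpeed
deepspeed_config:
  zero_stage: 3                    # ZeRO-3 parameter sharding, typical for a 70B model
  gradient_accumulation_steps: 1
mixed_precision: bf16
num_machines: 1
num_processes: 8                   # one process per GPU
main_training_function: main
```

A config like this is consumed by the command shown in the hunk above, i.e. `accelerate launch --config_file configs/default_config.yaml scripts/train.py ...`.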