
Fix Lora config error for Llama3 #1659

Merged 1 commit into axolotl-ai-cloud:main on May 28, 2024
Conversation

@oaishi (Contributor) commented on May 26, 2024

Description

I modified the sample YAML file used to train Llama 3 8B with custom tokens via LoRA.

Motivation and Context

The current YAML config throws an error: `ValueError: Please set lora_modules_to_save to [embed_tokens, lm_head] when using an adapter and changing the special tokens.`

How has this been tested?

I successfully fine-tuned a model with the new changes without any error.



I added the required changes to resolve it.
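For reference, the change implied by the error message would look something like the fragment below (a minimal sketch in axolotl-style YAML; the surrounding keys are illustrative assumptions, not the exact contents of the sample file):

```yaml
# Sketch of the relevant LoRA config change; only lora_modules_to_save
# is taken from the error message, the other keys are assumed context.
adapter: lora
special_tokens:
  pad_token: "<|end_of_text|>"   # example: any change to special tokens triggers the check
# Required when using a LoRA adapter while changing special tokens, so the
# resized embedding and output-head weights are saved alongside the adapter:
lora_modules_to_save:
  - embed_tokens
  - lm_head
```

Without `lora_modules_to_save`, the adapter checkpoint would omit the modified `embed_tokens` and `lm_head` weights, which is why the config validation raises the `ValueError`.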
@winglian (Collaborator) commented:

great catch. thanks!

@winglian winglian merged commit 230e0ac into axolotl-ai-cloud:main May 28, 2024