examples : fix RoPE defaults to match PR #3240 #3315

Merged (1 commit) on Sep 23, 2023

Conversation

cebtenzzre (Collaborator)

Follow-up to #3240. I forgot to update the defaults for rope_freq_base and rope_freq_scale in the gpt_params struct.
Tested on the 'main' example with codellama-34b.

@shibe2 (Contributor) commented Sep 23, 2023

My codellama-34b.Q6_K.gguf works again after this change. Another model that I converted myself for testing, LlongOrca-7B-16k, previously required setting rope-freq-scale on the command line; it now works with the defaults.

@ggerganov ggerganov merged commit 51a7cf5 into ggerganov:master Sep 23, 2023
32 of 33 checks passed
pkrmf pushed a commit to morlockstudios-com/llama.cpp that referenced this pull request Sep 26, 2023