
LM Studio produces garbled replies when loading and running the qwen2.5-coder-14b-instruct-q4_0.gguf model file #364

Open
blateyang opened this issue Jan 19, 2025 · 0 comments

@blateyang

I downloaded the qwen2.5-coder-14b-instruct-q4_0.gguf model file from hf-mirror.com (the mirror site for LM Studio's model repository), loaded it, and ran it, but the model's replies are garbled. A screenshot of the problem is shown below. Does anyone know what causes this and how to fix it?

[Screenshot: the model's garbled reply]

The model's run configuration is as follows:
- Context Length: 4096 (model supports up to 131072 tokens)
- GPU Offload: 24 / 48
- CPU Thread Pool Size: 6
- Evaluation Batch Size: 512
- RoPE Frequency Base: Auto
- RoPE Frequency Scale: Auto
- Keep Model in Memory: on
- Try mmap(): on
- Seed: Random Seed
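For context, these settings roughly correspond to the llama.cpp load parameters exposed by llama-cpp-python. The sketch below assumes LM Studio's llama.cpp backend maps its UI options onto these knobs; the model path and prompt are hypothetical placeholders, not from the original report.

```python
# Minimal sketch, assuming the LM Studio settings above map onto llama.cpp
# load parameters as exposed by llama-cpp-python. Path and prompt are hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-coder-14b-instruct-q4_0.gguf",  # hypothetical local path
    n_ctx=4096,           # Context Length: 4096 (model supports up to 131072)
    n_gpu_layers=24,      # GPU Offload: 24 of 48 layers
    n_threads=6,          # CPU Thread Pool Size: 6
    n_batch=512,          # Evaluation Batch Size: 512
    rope_freq_base=0.0,   # 0 = take the value from the model (Auto)
    rope_freq_scale=0.0,  # 0 = take the value from the model (Auto)
    use_mlock=True,       # Keep Model in Memory: on
    use_mmap=True,        # Try mmap(): on
    seed=-1,              # -1 = random seed
)

# Quick sanity check: a garbled completion here would reproduce the issue.
out = llm("Write a hello-world program in Python.", max_tokens=64)
print(out["choices"][0]["text"])
```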
