After downloading the qwen2.5-coder-14b-instruct-q4_0.gguf model file from hf-mirror.com (a mirror of LM Studio's model repository) and loading it, the model's replies are garbled, as shown in the screenshot below. Does anyone know what causes this, and how can it be fixed?
The model's run configuration is as follows:
Context Length: 4096 (model supports up to 131072 tokens)
GPU Offload: 24/48
CPU Thread Pool Size: 6
Evaluation Batch Size: 512
RoPE Frequency Base: Auto
RoPE Frequency Scale: Auto
Keep Model in Memory: on
Try mmap(): on
Seed: Random Seed
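One common cause of garbled replies is a corrupted or incomplete download from the mirror rather than the run configuration itself. A quick way to rule this out is to compare the local file's SHA-256 against the checksum shown on the model's page. A minimal sketch (the model path is a placeholder for wherever LM Studio stored the file):

```python
import hashlib
from pathlib import Path


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so a multi-GB GGUF doesn't exhaust RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


if __name__ == "__main__":
    # Placeholder path: point this at your downloaded model file.
    model_path = "qwen2.5-coder-14b-instruct-q4_0.gguf"
    if Path(model_path).exists():
        print(sha256_of(model_path))
```

If the digest does not match the value listed on the hub page, re-download the file before investigating runtime settings.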