
Request: Add support for Qwen models #30

Closed
RyutaItabashi opened this issue Dec 21, 2023 · 1 comment

Comments

@RyutaItabashi

Hello,

As more people adopt and build on Qwen's open-source models, a number of variants have been appearing, and it would be great if this project could support them. One variant getting a lot of attention right now is Nekomata-7b/14b from the Japanese LLM developer Rinna, which is based on Qwen, and it would be nice if it ran easily on mobile devices.

I'm not sure how difficult this would be to add, but Qwen is already supported in the original llama.cpp repo here, so that might help.

(Also, English isn't my first language, so apologies for any odd phrasing 🙏)

Thank you!

@RyutaItabashi
Author

Sorry, please disregard this; I simply missed that Qwen is already supported...
