Replies: 1 comment
-
Hi, thank you for your question. We are always looking for ways to run LLM inference on CPUs. We plan to add both BLING and DRAGON to our model collection and are hard at work on a launch soon.
-
I've been taking a look at your BLING models on HF, and IMO this is pretty interesting. Are you planning to keep working on it?