LMFormatEnforcer support - JSON Mode for ExLlamaV2 #170
noamgat
started this conversation in
Show and tell
-
This looks very interesting. I've been experimenting on and off with various forms of grammar support, regular expressions and so on. It's basically why there is the filter interface that you seem to be hooking into (from a cursory glance at the example Colab notebook). The only filter I've actually finished is the token healing feature, which is super useful to have, but the plan was always to expand on it with JSON constraints, perhaps arbitrary grammars, maybe scripting features. It'll be very interesting to dig into LMFE a bit and see what's going on there.
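For readers unfamiliar with token healing: the idea is to strip the prompt's trailing token fragment and then constrain the first generated token to one that re-emits and extends that fragment, so tokenization boundaries don't distort the output. A toy sketch of that constraint (illustrative only, not ExLlamaV2's actual implementation; the vocabulary and function name here are made up):

```python
# Toy vocabulary; a real tokenizer has tens of thousands of tokens.
VOCAB = ["http", "https", "hello", " world", "://"]

def allowed_healed_tokens(vocab, fragment):
    """After stripping the prompt's trailing fragment, only tokens that
    start with the fragment and extend it may be generated first."""
    return [t for t in vocab if t.startswith(fragment) and len(t) > len(fragment)]

# Prompt ends in the fragment "http": only "https" re-emits and extends it.
allowed = allowed_healed_tokens(VOCAB, "http")  # → ["https"]
```

In a real filter the allowed set would be turned into a logit mask over the full vocabulary at the first sampling step.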
-
Hi,
I am the author of LMFormatEnforcer. It is a library similar to guidance / jsonformer / outlines that allows decoding outputs in specific formats (mostly JSON Schema / regex). It gives you a capability parallel to OpenAI's JSON mode, but for locally run models.
Its biggest advantage is that it was designed to be easily integrated into any Python-based inference library.
In the last version (v0.7.1), we added support for ExLlamaV2!
There is a Colab-friendly sample notebook that shows how you can use LM Format Enforcer to decode specific JSON / regex output from the models.
I think this can be very useful for ExLlamaV2 users.
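The core mechanism behind this kind of format enforcement is logit masking: at each decoding step, tokens that cannot legally appear next in the target format have their logits set to negative infinity, so sampling can only produce valid output. A minimal sketch of the idea (this is not LMFE's actual API; the vocabulary, logits, and the digits-only predicate are made-up illustrations):

```python
import math

def mask_logits(logits, vocab, is_allowed):
    """Set logits of disallowed tokens to -inf so they can never be sampled."""
    return [l if is_allowed(t) else -math.inf for l, t in zip(logits, vocab)]

# Example: a crude "integer mode" that only permits digit tokens.
vocab = ["12", "ab", "7", "hello", "03"]
logits = [1.0, 5.0, 2.0, 9.0, 0.5]
masked = mask_logits(logits, vocab, lambda t: t.isdigit())

# Greedy decoding now picks the best *allowed* token: "7", even though
# "hello" had the highest raw logit.
best = vocab[max(range(len(vocab)), key=lambda i: masked[i])]  # → "7"
```

LMFE generalizes this predicate to a character-level parser driven by a JSON Schema or regex, tracking which tokens can extend a still-valid prefix.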
Library maintainers - if you would like to add a similar integration to your examples repository, I will be happy to collaborate!