spike: small model presets experiments #976
I can take this. Here's an example from AWS on collecting HIL feedback for LM evals: https://github.com/aws-samples/human-in-the-loop-llm-eval-blog. Are the questions in that example appropriate for the intent of this experiment?
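For illustration, here is a minimal sketch of how per-response human ratings could be captured; the question set, rating scale, and CSV schema are placeholder assumptions rather than anything taken from the AWS example:

```python
import csv
from pathlib import Path

# Placeholder rating questions -- the real set would come from the experiment design.
QUESTIONS = [
    "Is the response factually grounded?",
    "Is the response complete and on-topic?",
    "Is the response clearly written?",
]

def record_feedback(model_name: str, prompt: str, response: str,
                    out_path: str = "hil_feedback.csv") -> None:
    """Show a model output to a human rater and append 1-5 scores to a CSV file."""
    print(f"\n--- {model_name} ---\nPrompt: {prompt}\nResponse: {response}\n")
    scores = [int(input(f"{q} (1-5): ")) for q in QUESTIONS]

    new_file = not Path(out_path).exists()
    with open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["model", "prompt", "response", *QUESTIONS])
        writer.writerow([model_name, prompt, response, *scores])
```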
Here's a list of additional models for consideration.
The above models would be used as-is, without any quantization, unless it is preferred that I also quantize the models and provide feedback on 8-bit and 4-bit versions. I can also provide feedback on the models listed in the docs.
I realize that not all of the models may be supported by llama-cpp-python or vLLM, and that I may need to add support for a custom model, especially for CPU-only use on macOS. Here are two GGUF conversion examples for reference:
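As a rough sketch of the typical conversion workflow, the following assumes a local llama.cpp checkout; the converter script and quantize binary names vary across llama.cpp releases, so treat the exact paths and file names as assumptions:

```python
import subprocess

# Assumptions: llama.cpp is cloned at ./llama.cpp and its Python requirements are installed.
# Script and binary names differ across llama.cpp releases (e.g. convert_hf_to_gguf.py vs.
# convert-hf-to-gguf.py, llama-quantize vs. quantize) -- adjust to your checkout.
HF_MODEL_DIR = "models/Phi-3-mini-4k-instruct"   # local snapshot of the HF model (placeholder)
F16_GGUF = "models/phi-3-mini-f16.gguf"
Q4_GGUF = "models/phi-3-mini-Q4_K_M.gguf"

# 1) Convert the Hugging Face checkpoint to a full-precision GGUF file.
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", HF_MODEL_DIR, "--outfile", F16_GGUF],
    check=True,
)

# 2) Quantize the GGUF file down to 4-bit (Q4_K_M).
subprocess.run(
    ["llama.cpp/llama-quantize", F16_GGUF, Q4_GGUF, "Q4_K_M"],
    check=True,
)
```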
As for the models that have proprietary licenses: if anything, it will be beneficial to run these subjective evals and provide feedback for the community. From a user perspective, I'd probably opt for these models on my own, foregoing the default selections, though I understand the need for the LFAI team to ship with a default model that has a very permissive license.
For reference, I've updated the issue description a little bit to help clarify a few things. For the AWS HIL example, that framework looks like a sensible fit for what we're asking for, so if you would like to use it as a basis, go for it! I added these to the description, but we have a few limitations I didn't outline originally:
If you're on macOS, we won't ask you to work outside of the deployment context available to you, so anything that can be run on llama-cpp-python is great. For simplicity, let's stick to model licenses that are at least as permissive as Apache-2.0. The vRAM requirements are ideally less than 12 GB, but anything that fits under 16 GB is worth checking for our purposes (i.e., single-GPU laptop deployment scenarios). That likely means quantizations, so if you can find quantizations for the models you want to test, great! We're also open to managing our own quantized models, so feel free to experiment with your own quantizations if you want to, but it's certainly not required. It would be very helpful to compare any models you test against the current defaults from the docs (as you already listed); that would act as a fantastic point of comparison.
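For anyone replicating the setup, a minimal sketch of loading a 4-bit GGUF quant with llama-cpp-python follows; the model path and context size are placeholders, and on a CUDA or Metal build `n_gpu_layers` is the main lever for staying inside the 12-16 GB vRAM budget:

```python
from llama_cpp import Llama

# Placeholder path to a 4-bit quantized GGUF file (e.g. a Q4_K_M quant of a candidate model).
# n_gpu_layers=-1 offloads every layer to the GPU; lower it (or use 0) if vRAM is tight
# or when running CPU-only on macOS.
llm = Llama(
    model_path="models/phi-3-mini-Q4_K_M.gguf",
    n_ctx=4096,        # context window; larger values increase memory use
    n_gpu_layers=-1,   # -1 = offload all layers
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the goals of this experiment."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```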
Important: RWKV is a recurrent model architecture (paper), i.e., a different architecture from transformer-based models. The model arch is not available in llama-cpp, but can be made available for use with llama-cpp-python by using the gguf-my-repo HF Space. With regard to the HF Space, a user must understand the quantization key provided here: https://huggingface.co/docs/hub/en/gguf#quantization-types. Note that quantized, llama-cpp-python-compatible models will be made available in this HF collection.
I've successfully installed and deployed LFAI on my personal machine and can proceed by interacting with (1) the UI and/or (2) the API.
Which would you prefer for me to do, @jalling97?
I intend to collect results for the following models in the first iteration of this experiment:
@jxtngx let's go with Option 2. There's a lot of value in working with the API directly as, like you mention, it'll allow you to iterate faster. The LeapfrogAI team would also greatly benefit from your feedback on using the API. As for the models, those look like great choices! I'm curious to see the impacts of instruct vs. base vs. Hermes 3. I would focus on the 4-bit quantizations, as that tends to be a slightly better fit for single-GPU laptop deployment scenarios.
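To illustrate what working against the API could look like, here's a minimal sketch that assumes the LeapfrogAI API is exposed as an OpenAI-compatible endpoint; the base URL, API key, and model name below are placeholders, not the actual deployment values:

```python
from openai import OpenAI

# Placeholder endpoint and credentials -- substitute the values from your LFAI deployment.
client = OpenAI(
    base_url="http://localhost:8080/openai/v1",  # assumed OpenAI-compatible route
    api_key="my-lfai-api-key",
)

response = client.chat.completions.create(
    model="vllm",  # placeholder name for the backend/model exposed by the deployment
    messages=[
        {"role": "user", "content": "Give a one-paragraph summary of the Gettysburg Address."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```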
Sounds good. I'll create the 4-bit versions and then share a new table with links.
4-bit models:
Please note that, for each of the 5 models, there are a few flavors of 4-bit quantized versions in the HF collection. Those flavors are:
The tl;dr on the types is that K-type quants are favored over the 0-type quants, as the latter is considered a legacy quantization method. Please see below for more on quantization types found in GGUF:
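As one quick way to see which flavors a given repo ships, here is a small sketch using huggingface_hub; the repo id is a placeholder for any entry in the collection:

```python
from huggingface_hub import list_repo_files

# Placeholder repo id -- substitute any model repo from the quantized collection.
repo_id = "jxtngx/Llama-3.2-1B-Instruct-Q4_K_M-GGUF"

# GGUF filenames encode the quantization type (e.g. Q4_K_M, Q4_K_S, Q4_0),
# so listing the repo's files is a quick way to see which flavors are available.
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
for name in gguf_files:
    print(name)
```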
Meta released Llama 3.2 on 25 Sep '24, and the new family of models includes 1B and 3B versions which ought to be evaluated against the Phi 3 mini and small versions.
Release notes: https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices
1B variants (base): https://huggingface.co/meta-llama/Llama-3.2-1B
3B variants (base): https://huggingface.co/meta-llama/Llama-3.2-3B
4-bit quantized models, 1B Instruct: jxtngx/Llama-3.2-1B-Instruct-Q4_K_M-GGUF
cc @jalling97
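For convenience, a minimal sketch of pulling that 4-bit quant directly from the Hub with llama-cpp-python; recent versions expose Llama.from_pretrained, and the filename glob is an assumption about how the GGUF file is named in the repo:

```python
from llama_cpp import Llama

# Downloads the GGUF file from the Hub (requires huggingface_hub) and loads it.
# The filename pattern is an assumption -- check the repo for the exact file name.
llm = Llama.from_pretrained(
    repo_id="jxtngx/Llama-3.2-1B-Instruct-Q4_K_M-GGUF",
    filename="*q4_k_m.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,   # set 0 for CPU-only macOS
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What tasks suit a 1B-parameter model?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```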
@jxtngx good callout! Including these new models in the comparison would be great. To avoid over-exploring in too many directions, feel free to take Llama 3.1 8B base out of the comparison list, as the instruct finetune is usually what we'd lean towards anyway.
Just wondering: how were the current default models selected?
The current default models were selected in the Fall of 2023 based on finding a balance between model performance and GPU requirements. The defaults needed to be small enough to run on GPU-enabled edge deployments while maximizing performance on the standard evaluations at the time. cc @justinthelaw @gphorvath if either of you want to add more context |
Model presets
LeapfrogAI currently has two primary models that are used on the backend, but more should be added and tested. By implementing certain small models and evaluating their efficacy from a human perspective, we can make better decisions about which models to use and evaluate against.
Goal
To determine a list of models and model configs that work well in LFAI from a human-in-the-loop perspective (no automated evals).
Methodology
Limitations
Delivery