
Use any small models, like LaMini, Flan-T5-783M etc. #32

Open
gitknu opened this issue Jul 2, 2023 · 0 comments

gitknu commented Jul 2, 2023

Hello, and sorry if this is a dumb question. I couldn't find a solution in the existing issues, and searching and experimenting on my own didn't get me anywhere.

Details:
The problem is that I have a low-end PC. It can run Alpaca and Vicuna (both 7B), but quite slowly. On the other hand, while trying different models I found that models under 1B parameters, mainly Flan-T5-based ones, run quite well. They give good results for my machine and are fast enough (about 3-5 tokens per second). They work even better with text as context: for example, when I ask "based on this text, answer ...", I get an almost perfect answer (roughly the setup sketched below). But pasting the text in every time is bad practice in my view, because of the time spent, etc.
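
For context, here is roughly how I run these models at the moment (a minimal sketch using the Hugging Face transformers pipeline; the model ID and generation settings are just the ones I happened to try):

```python
# Minimal sketch: running a small seq2seq model locally with
# Hugging Face transformers. Model ID and settings are examples only.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="MBZUAI/LaMini-Flan-T5-783M",  # any small Flan-T5-based model
)

context = "..."  # the source text I currently paste in by hand every time
question = "Based on this text, answer: what is the main point?"

result = generator(f"{context}\n\n{question}", max_new_tokens=128)
print(result[0]["generated_text"])
```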

Short question:
Is there any way to use this tool with any of these models?

- LaMini-Flan-T5-783M
- Flan-T5-Alpaca (770M or something)
- RWKV (under 1.5B)
- any other good small models under 1B parameters
If you could give a detailed manual, I would be very grateful! Solutions other than pautobot, privateGPT, etc. are also welcome! A rough sketch of what I imagine follows below.
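
To be clear about what I am imagining, something like the following (a rough sketch assuming a LangChain-based pipeline, which I believe privateGPT-style tools use; I do not know the actual internals, so the names here are assumptions):

```python
# Rough sketch only: wrapping a small local model so a LangChain-based
# document-QA tool could use it instead of a 7B LlamaCpp/GPT4All model.
# I do not know the real internals of pautobot/privateGPT; this is an assumption.
from langchain.llms import HuggingFacePipeline
from transformers import pipeline

# Any sub-1B instruction-tuned seq2seq model should drop in here.
pipe = pipeline(
    "text2text-generation",
    model="MBZUAI/LaMini-Flan-T5-783M",
    max_new_tokens=256,
)
llm = HuggingFacePipeline(pipeline=pipe)

# The tool's retrieval chain would then take this `llm` instead of its
# default model, e.g. (hypothetical):
# qa = RetrievalQA.from_chain_type(llm=llm, retriever=retriever)
```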

Thank you for your understanding and your answers, and sorry for any inconvenience!
