
feat: LOMO support #29649

Closed
KaiLv69 opened this issue Mar 14, 2024 · 2 comments · Fixed by #30178
Labels: Feature request

Comments

KaiLv69 commented Mar 14, 2024

Feature request

Support for the LOMO training strategy, which enables full-parameter training of 7B models on consumer 24 GB cards such as the RTX 4090.

ArXiv: https://arxiv.org/pdf/2306.09782.pdf (LOMO) and https://arxiv.org/pdf/2310.10195.pdf (AdaLomo)
Implementation: https://github.com/OpenLMLab/LOMO (an installable lomo-optim package is also available there).
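For readers unfamiliar with the technique, here is a minimal sketch of the core idea in PyTorch: fuse a plain SGD update into the backward pass with per-parameter gradient hooks, so that full gradients and optimizer states never need to be held in memory at the same time. This is an illustration only, not the lomo-optim API; the function name and the plain-SGD update rule are assumptions, and the actual implementation in the linked repository is more involved.

```python
# Minimal sketch of LOMO's core idea (NOT the lomo-optim API): apply each
# parameter's update as soon as its gradient is accumulated, then free the
# gradient immediately, so no full-gradient buffer or optimizer state persists.
import torch

def attach_lomo_style_hooks(model: torch.nn.Module, lr: float = 1e-3) -> None:
    def hook(p: torch.Tensor) -> None:
        # Fires once p.grad has been fully accumulated for this backward pass.
        with torch.no_grad():
            p.add_(p.grad, alpha=-lr)  # apply the SGD update in place...
        p.grad = None                  # ...then release the gradient memory

    for p in model.parameters():
        if p.requires_grad:
            # Requires PyTorch >= 2.1 for register_post_accumulate_grad_hook.
            p.register_post_accumulate_grad_hook(hook)
```

With these hooks attached, a plain `loss.backward()` performs the parameter updates itself; no `optimizer.step()` is needed, and at most one parameter's gradient is alive at a time.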

Motivation

This is another step toward democratizing LLMs, and the AdaLomo variant performs on par with AdamW.

Your contribution

I'm willing to submit a PR, and to modify the lomo-optim package if needed.
I would appreciate some guidance on how to approach the integration; a sketch of one possible user-facing shape is below.
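For reference, one plausible shape for the integration would expose the optimizer through the `optim` flag of `TrainingArguments`, the way other packaged optimizers are selected. The `"lomo"` value here is an assumption for illustration; no such flag existed in transformers when this issue was filed.

```python
# Hypothetical integration sketch: optim="lomo" is an assumed flag value,
# shown only to illustrate what a Trainer-level integration could look like.
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="out",
    optim="lomo",        # hypothetical: select the LOMO optimizer by name
    learning_rate=1e-3,
    per_device_train_batch_size=1,
)
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# trainer.train()
```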

@amyeroberts (Collaborator) commented:
cc @younesbelkada

@younesbelkada (Contributor) commented:
Nice and exciting work! Happy to get to it after the work on #29588!
