This repository provides the source code for the EMNLP 2024 Findings paper "LaMDA: Large Model Fine-Tuning via Spectrally Decomposed Low-Dimensional Adaptation"
LaMDA: A Memory-Efficient Fine-Tuning Approach for LLMs!
LaMDA: Large Model Fine-Tuning via Spectrally Decomposed Low-Dimensional Adaptation
Armin Azizi,
Souvik Kundu,
Massoud Pedram
University of Southern California, Intel Labs
To install all dependencies and run the trainer replacement script, use the following commands:
pip install -r requirements.txt
./replace_trainer.sh
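For readers new to the approach, the sketch below illustrates in PyTorch the general low-dimensional-adaptation idea behind LaMDA: the down- and up-projection matrices of the adapter path are kept frozen, and only a small r x r matrix between them is trained, which shrinks the number of trainable parameters and the optimizer state. This is a minimal, illustrative sketch; the class and variable names (LowDimAdapterLinear, proj_down, proj_up, core) and the initializations are assumptions, not the repository's actual implementation. See the source code and the paper for the exact method, including how the projections are initialized and frozen and how ranks are selected.

```python
# Minimal, illustrative sketch of low-dimensional adaptation -- NOT the
# repository's implementation. Class/variable names and initializations
# are assumptions made for clarity.
import torch
import torch.nn as nn


class LowDimAdapterLinear(nn.Module):
    """Wraps a frozen pretrained linear layer with a low-dimensional adapter
    in which only a small r x r matrix is trainable."""

    def __init__(self, base: nn.Linear, r: int = 32):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained layer
            p.requires_grad_(False)

        in_f, out_f = base.in_features, base.out_features
        # Frozen down-projection (in_f -> r) and up-projection (r -> out_f).
        # How the paper initializes and freezes these differs; here they are
        # simply random and kept fixed for illustration.
        self.register_buffer("proj_down", torch.randn(in_f, r) / in_f ** 0.5)
        self.register_buffer("proj_up", torch.randn(r, out_f) / r ** 0.5)
        # The only trainable tensor: an r x r core, zero-initialized so the
        # adapter contributes nothing at the start of fine-tuning.
        self.core = nn.Parameter(torch.zeros(r, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = (x @ self.proj_down) @ self.core @ self.proj_up
        return self.base(x) + delta
```

With this structure, each adapted layer adds only r x r trainable parameters (1,024 at r = 32), compared with r x (in_features + out_features) for a rank-r adapter whose projection matrices are both trained.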
This “research quality code” is for Non-Commercial purposes and provided by the contributors “As Is” without any express or implied warranty of any kind. The organizations (USC or Intel) involved do not own the rights to the data sets used and do not confer any rights to it. The organizations (USC or Intel) do not warrant or assume responsibility for the accuracy or completeness of any information, text, graphics, links or other items within the code. A thorough security review has not been performed on this code. Additionally, this repository may contain components that are out of date or contain known security vulnerabilities.
@inproceedings{azizi2024lamda,
  title={LaMDA: Large Model Fine-Tuning via Spectrally Decomposed Low-Dimensional Adaptation},
  author={Azizi, Seyedarmin and Kundu, Souvik and Pedram, Massoud},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2024},
  year={2024}
}