Lightweight fine-tuning is one of the most important techniques for adapting foundation models, because it lets you tailor them to your needs without requiring substantial computational resources.
In this project, I will apply parameter-efficient fine-tuning using the Hugging Face `peft` library.
This project brings together all of the essential components of a PyTorch + Hugging Face training and inference process. Specifically, I will:
- Load a pre-trained model and evaluate its performance
- Perform parameter-efficient fine-tuning using the pre-trained model
- Perform inference using the fine-tuned model and compare its performance to the original model
- PEFT technique: I used LoRA as my PEFT technique. LoRA is the only PEFT technique that is compatible with all models at this time.
- Model: I used `distilbert-base-uncased` as my model. It is a relatively small model that is compatible with sequence classification and LoRA.
- Evaluation approach: The evaluation approach covered in this project was the `evaluate` method with a Hugging Face `Trainer`.
- Fine-tuning dataset: I used the `stanfordnlp/imdb` dataset from Hugging Face's `datasets` library (see the sketch after this list).
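The sketch below shows roughly how these pieces fit together: load `distilbert-base-uncased` for sequence classification, wrap it with a LoRA adapter from `peft`, and train and evaluate it on `stanfordnlp/imdb` with a Hugging Face `Trainer`. It is a minimal illustration rather than the notebook's exact code; the LoRA hyperparameters, tokenization settings, subsampling, and training arguments shown here are assumptions.

```python
import numpy as np
import evaluate
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Load the pre-trained base model and tokenizer
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Wrap the base model with a LoRA adapter (hyperparameters here are assumptions)
lora_config = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16, lora_dropout=0.1)
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # confirms only a small fraction of weights are trainable

# Load and tokenize the IMDB dataset (subsampled here just to keep the sketch quick)
dataset = load_dataset("stanfordnlp/imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

train_ds = dataset["train"].shuffle(seed=42).select(range(2000)).map(tokenize, batched=True)
eval_ds = dataset["test"].shuffle(seed=42).select(range(1000)).map(tokenize, batched=True)

# Accuracy metric reported by Trainer.evaluate()
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return accuracy.compute(predictions=np.argmax(logits, axis=-1), references=labels)

trainer = Trainer(
    model=peft_model,
    args=TrainingArguments(output_dir="model/peft", per_device_train_batch_size=16, num_train_epochs=1),
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    compute_metrics=compute_metrics,
)

trainer.train()            # fine-tune only the LoRA parameters (plus the classification head)
print(trainer.evaluate())  # accuracy of the fine-tuned model
```

Running `trainer.evaluate()` once before `trainer.train()` gives the pre-fine-tuning baseline to compare against, and `peft_model.save_pretrained(...)` writes the adapter so it can be reloaded later.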
- Clone the repository

  (base)$: git clone git@github.com:mafda/lightweight_fine_tuning_project.git
  (base)$: cd lightweight_fine_tuning_project
- Create the conda environment

  (base)$: conda env create -f environment.yml

- Activate the environment

  (base)$: conda activate peft
- Download the base model. Unzip the folder and place it in the repo, at location path/to/lightweight_fine_tuning_project/model/.

- Download the peft model. Unzip the folder and place it in the repo, at location path/to/lightweight_fine_tuning_project/model/.

- Download the LoRA model for the dog dataset. Place it in the repo, at location path/to/lightweight_fine_tuning_project/model/.
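With the environment active and the downloads in place, the fine-tuned adapter can be sanity-checked directly from Python. The snippet below is a minimal sketch, assuming the PEFT adapter ends up in model/peft (adjust the path to wherever you unzipped it) and that label 1 corresponds to a positive review, as in IMDB:

```python
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

# Load the fine-tuned LoRA adapter on top of its base model
# ("model/peft" is an assumed path; point it at the unzipped folder)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoPeftModelForSequenceClassification.from_pretrained("model/peft")
model.eval()

inputs = tokenizer("A surprisingly touching and well-acted film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# IMDB convention: label 1 = positive, label 0 = negative
print("positive" if logits.argmax(dim=-1).item() == 1 else "negative")
```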
.
├── README.md
├── environment.yml
├── model
│   ├── base
│   ├── lora
│   └── peft
└── src
    └── lightweight_finetuning.ipynb
made with 💙 by mafda