NanoPeft

The simplest, neatest implementation of different LoRA methods for training/fine-tuning Transformer-based models (e.g., BERT, GPTs).

Why NanoPeft?

  • PEFT & LitGit are great libraries However, Hacking the Hugging Face PEFT (Parameter-Efficient Fine-Tuning) or LitGit packages seems like a lot of work to integrate a new LoRA method quickly and benchmark it.
  • By keeping the code so simple, it is very easy to hack to your needs, add new LoRA methods from papers in the layers/ directory, and fine-tune easily as per your needs.
  • This is mostly for experimental/research purposes, not for scalable solutions.
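
For a sense of what a "LoRA method" in layers/ looks like, here is a minimal PyTorch sketch of the standard LoRA update. It is a generic illustration, not NanoPeft's actual layer code; the class name LoRALinear and its arguments are hypothetical.

```python
# Minimal sketch of a LoRA linear layer (generic illustration, not
# NanoPeft's actual code; `LoRALinear` and its arguments are hypothetical).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad_(False)
        self.scaling = alpha / r
        # B starts at zero so the update is a no-op at initialization,
        # as in the original LoRA paper.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```

For example, LoRALinear(nn.Linear(768, 768), r=8) would wrap a single attention projection; a new method from a paper typically changes only the update term in forward, which is why each one can live as a small module in layers/.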

Installation

With pip

Install NanoPeft directly from GitHub with pip:

pip3 install git+https://github.com/monk1337/NanoPeft.git
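
To verify the install, try importing the package (assuming it is exposed under the module name nanopeft; check the repository if the import name differs):

python3 -c "import nanopeft"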
