Review and implementation of papers
=

LoRA (ICLR 2022)
-
LLM adapter for parameter-efficient fine-tuning: the pretrained weights stay frozen and a trainable low-rank update is learned instead
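
A minimal sketch of the idea, assuming PyTorch; the class name `LoRALinear` and the hyperparameters `r` and `alpha` are illustrative rather than taken from a reference implementation:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: y = W x + (alpha / r) * B A x."""

    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)                      # pretrained weight stays frozen
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)   # down-projection, small random init
        self.B = nn.Parameter(torch.zeros(out_features, r))         # up-projection, zero init
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```

Because `B` starts at zero, the adapted layer initially reproduces the frozen base exactly, so fine-tuning departs smoothly from the pretrained behavior.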

Transformer (NeurIPS 2017)
-
Sequence-to-sequence model with self-attention
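
A single-head scaled dot-product self-attention sketch in PyTorch; multi-head projections, masking, and the feed-forward sublayers of the full architecture are omitted, and the function name is illustrative:

```python
import torch
import torch.nn.functional as F

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention: softmax(Q K^T / sqrt(d_k)) V.
    x: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_k) projection matrices."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / d_k ** 0.5        # pairwise similarity between all positions
    weights = F.softmax(scores, dim=-1)  # each row: attention distribution over the sequence
    return weights @ V                   # every position mixes information from the whole sequence
```

Every position attends to every other position in a single step, which is what lets the architecture replace recurrence.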

PageRank (Communications of the ACM 2011)
-
Web page ranking algorithm that scores pages using the link topology of the web graph
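
A power-iteration sketch in NumPy under the random-surfer model; the dense adjacency matrix and the uniform treatment of dangling pages are simplifying assumptions:

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-9, max_iter=100):
    """Iterate rank = (1 - d)/n + d * M @ rank until convergence, where adj[i, j] = 1
    means page i links to page j and M is the column-stochastic transition matrix."""
    n = adj.shape[0]
    out_degree = adj.sum(axis=1)
    # Row-normalize outgoing links; dangling pages (no out-links) link everywhere uniformly.
    M = np.where(out_degree[:, None] > 0,
                 adj / np.maximum(out_degree, 1)[:, None],
                 1.0 / n).T
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * M @ rank
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank
    return rank
```

The damping factor models a surfer who follows a random out-link with probability 0.85 and jumps to a random page otherwise, which guarantees convergence to a unique ranking.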

AdaLoRA (ICLR 2023)
-
LLM adapter for efficient fine-tuning that parameterizes the update in SVD form and adaptively allocates rank across weight matrices
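
A sketch of the SVD-form parameterization, assuming PyTorch; AdaLoRA's importance scoring, which prunes entries of `lam` to reallocate rank during training, and its orthogonality regularizer on `P` and `Q` are omitted here:

```python
import torch
import torch.nn as nn

class SVDAdapter(nn.Module):
    """AdaLoRA-style update delta_W = P diag(lam) Q: zeroing an entry of lam
    removes one rank-1 component without retraining P or Q from scratch."""

    def __init__(self, in_features, out_features, r=12):
        super().__init__()
        self.P = nn.Parameter(torch.randn(out_features, r) * 0.01)  # left singular directions
        self.lam = nn.Parameter(torch.zeros(r))                     # singular values (prunable)
        self.Q = nn.Parameter(torch.randn(r, in_features) * 0.01)   # right singular directions

    def forward(self, x):
        return ((x @ self.Q.T) * self.lam) @ self.P.T
```

Pruning a triplet (column of `P`, entry of `lam`, row of `Q`) drops one rank-1 component, which is what lets the parameter budget shift toward the layers that matter most.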

DoRA (arXiv 2024)
-
LLM adapter for efficient fine-tuning that decomposes each pretrained weight into magnitude and direction and applies a low-rank update to the direction
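
A sketch of the magnitude/direction decomposition, assuming PyTorch; `W0` is random here for self-containment, whereas in practice it is the frozen pretrained weight:

```python
import torch
import torch.nn as nn

class DoRALinear(nn.Module):
    """DoRA-style layer: W' = m * (W0 + B A) / ||W0 + B A||_col, i.e. a LoRA update
    steers the direction while a learned vector m rescales each column's magnitude."""

    def __init__(self, in_features, out_features, r=8):
        super().__init__()
        self.W0 = nn.Parameter(torch.randn(out_features, in_features) * 0.01,
                               requires_grad=False)                 # stands in for the frozen pretrained weight
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)   # LoRA down-projection
        self.B = nn.Parameter(torch.zeros(out_features, r))         # LoRA up-projection, zero init
        self.m = nn.Parameter(self.W0.norm(dim=0, keepdim=True).clone())  # per-column magnitudes

    def forward(self, x):
        V = self.W0 + self.B @ self.A                  # low-rank update to the direction
        W = self.m * V / V.norm(dim=0, keepdim=True)   # unit-norm columns, rescaled by m
        return x @ W.T
```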

BERT (NAACL 2019)
-
Pre-trained model based on the Transformer encoder that learns bidirectional context via masked-token prediction
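
A quick demonstration of bidirectional masked-token prediction, assuming the Hugging Face `transformers` package and the `bert-base-uncased` checkpoint are available:

```python
from transformers import pipeline

# BERT predicts the hidden token from BOTH its left and right context,
# which is what "bidirectional" means for an encoder-only model.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for pred in fill_mask("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```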