EMNLP 2024: "Layer-wise Importance Matters: Less Memory for Better Performance in Parameter-efficient Fine-tuning of Large Language Models"
Please refer to https://github.com/antgroup/importance-aware-sparse-tuning-IST-paper