[llm]update peft docs #9655

Merged
merged 4 commits into from
Dec 19, 2024
Conversation

lugimzzz
Contributor

PR types

Others

PR changes

Docs

Description

Improve the PEFT-related documentation.


paddle-bot bot commented Dec 18, 2024

Thanks for your contribution!


codecov bot commented Dec 18, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 52.80%. Comparing base (90bc68e) to head (68219e3).
Report is 3 commits behind head on develop.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #9655      +/-   ##
===========================================
+ Coverage    52.21%   52.80%   +0.58%     
===========================================
  Files          721      718       -3     
  Lines       114885   112225    -2660     
===========================================
- Hits         59990    59259     -731     
+ Misses       54895    52966    -1929     


@@ -7,7 +7,7 @@
- Easy-to-use parallel strategies: pure data parallelism (Data Parallelism), data parallelism with grouped parameter sharding (Sharding Parallelism), tensor model parallelism (Tensor Parallelism), pipeline model parallelism (Pipeline Parallelism), and sequence parallelism (Sequence Parallelism).
- Multiple training precisions: 16/32-bit full-parameter fine-tuning, 4/8/16-bit LoRA fine-tuning, and mixed-quantization LoRA fine-tuning.
- Extreme performance optimization: FlashAttention-2, FlashMask, Greedy Zero Padding.
- Advanced fine-tuning strategies: LoRA+, PiSSA, rsLoRA, NEFTune, VeRA.
- Advanced fine-tuning strategies: LoRA+, PiSSA, rsLoRA, NEFTune, VeRA, MoRA, ReFT, MoSLoRA.
Collaborator

LoRA-GA could be added as well; the code has already been merged.

Contributor Author

As far as I can see, there is no open-source model adaptation example for this yet, so let @greycooker add it after he finishes the adaptation.

Contributor

Sure, I'll add this part.

@@ -150,7 +150,7 @@ python run_finetune.py ./config/llama/lora_argument.json
python run_finetune.py ./config/llama/pt_argument.json
```

For more documentation on distributed fine-tuning of large models, training details, and results, see the [Large Model Fine-Tuning Tutorial](./docs/finetune.md).
In addition to LoRA and Prefix Tuning, a variety of fine-tuning algorithms are supported, including LoKr, VeRA, MoRA, ReFT, rsLoRA, LoRA+, PiSSA, and MoSLoRA. For more documentation on large model fine-tuning, training details, and results, see the [Large Model Fine-Tuning Tutorial](./docs/finetune.md).
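The LoRA-style methods discussed in this PR share one core idea: freeze the pretrained weight matrix and train only a low-rank update. The sketch below illustrates that idea in plain numpy; it is not PaddleNLP's implementation, and all names and shapes here are illustrative assumptions.

```python
# Minimal sketch of the LoRA idea (illustrative only, not PaddleNLP's code):
# instead of updating the full weight W, train a low-rank pair (A, B) so the
# effective weight is W + (alpha / r) * (B @ A).
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 8, 8, 2, 16    # r << d keeps the trainable parameter count small

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialized
                                       # so the adapter starts as a no-op

def lora_forward(x):
    # base path plus scaled low-rank update
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d_in))
# With B = 0 the adapted layer matches the frozen base layer exactly.
assert np.allclose(lora_forward(x), x @ W.T)
```

Variants such as rsLoRA, PiSSA, and LoRA-GA change the scaling factor or the initialization of A and B, but keep this same frozen-plus-low-rank structure.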
Collaborator

Please add LoRA-GA.

Collaborator

@wawltor wawltor left a comment

LGTM

@wawltor wawltor merged commit e0ace16 into PaddlePaddle:develop Dec 19, 2024
9 of 12 checks passed
@lugimzzz lugimzzz deleted the origin branch December 19, 2024 06:02
3 participants