Clarify se_atten_v2 compression doc #3727
Conversation
Walkthrough: The recent changes enhance clarity in the DeePMD-kit documentation regarding attention layers. The updates streamline descriptions by removing specific conditions for model compression and introduce guidance for effectively using model compression with the `se_atten_v2` descriptor.
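As a rough sketch of how this guidance applies in practice (the parameter values below are illustrative assumptions, not taken from this PR), the `se_atten_v2` descriptor is selected via the `type` field of the descriptor section in a DeePMD-kit training input, and the number of attention layers is controlled by `attn_layer`:

```json
{
  "descriptor": {
    "type": "se_atten_v2",
    "rcut": 6.0,
    "rcut_smth": 0.5,
    "sel": 120,
    "neuron": [25, 50, 100],
    "attn_layer": 0
  }
}
```

A trained and frozen model would then typically be compressed with DeePMD-kit's `dp compress` command, e.g. `dp compress -i frozen_model.pb -o compressed_model.pb`; consult the project's compression documentation for the exact conditions under which `se_atten_v2` supports compression.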
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@ Coverage Diff @@
## devel #3727 +/- ##
==========================================
+ Coverage 82.15% 82.19% +0.04%
==========================================
Files 511 513 +2
Lines 47364 47642 +278
Branches 2953 2982 +29
==========================================
+ Hits 38910 39159 +249
- Misses 7561 7572 +11
- Partials 893 911 +18

☔ View full report in Codecov by Sentry.
deepmodeling#3643

Summary by CodeRabbit

- **Documentation**
  - Simplified the description for the number of attention layers in the code documentation.
  - Added a notice about model compression compatibility for the `se_atten_v2` descriptor in the documentation.

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
Co-authored-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
(cherry picked from commit 62832e8)