docs: add document equations for se_atten_v2
#3828
Conversation
Walkthrough: The changes update the documentation for the `se_atten_v2` descriptor in doc/model/train-se-atten.md.

Sequence Diagram(s) (Beta): Skipped, since the changes are documentation updates and do not involve modifications to the control flow or new features.

Recent review details: Configuration used: CodeRabbit UI. Files selected for processing (1): doc/model/train-se-atten.md

Additional context used: Markdownlint

Additional comments not posted (1)
[pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
Actionable comments posted: 0
Outside diff range and nitpick comments (3)
doc/model/train-se-atten.md (3)
Line range hint 238-238: Correct the spelling of "place holder" to "placeholder".

```diff
- Atom type indexes (place holder)
+ Atom type indexes (placeholder)
```

Line range hint 110-110: Consider adding commas for clarity in lists.

```diff
- From left to right the members denote the sizes of each hidden layer
+ From left to right, the members denote the sizes of each hidden layer
- to mask the diagonal in the attention weights and False is recommended
+ to mask the diagonal in the attention weights, and False is recommended
```

Also applies to: 116-116, 142-142, 147-147

Line range hint 91-136: Replace hard tabs with spaces to maintain consistency in formatting.
Actionable comments posted: 1
Outside diff range and nitpick comments (2)
doc/model/train-se-atten.md (2)
Line range hint 238-238: Correct the spelling in the documentation. The term "placeholder" is conventionally spelled as one word.

```diff
- Atom type indexes (place holder)
+ Atom type indexes (placeholder)
```

Line range hint 224-224: Specify the language for code blocks. For better syntax highlighting and readability, specify the language for the fenced code blocks. This change should be applied to all JSON code blocks in the document.

````diff
- ```
+ ```json
````
Actionable comments posted: 0
Outside diff range and nitpick comments (4)
doc/model/train-se-atten.md (4)
Line range hint 238-238: Correct the spelling error.

```diff
- place holder
+ placeholder
```

Line range hint 224-224: Specify the language for code blocks.

````diff
- ```
+ ```json
````

Line range hint 91-136: Replace hard tabs with spaces for consistency.

```diff
- <hard tabs>
+ <appropriate number of spaces>
```

Line range hint 110-110: Add missing commas for clarity.

```diff
- From left to right the members denote
+ From left to right, the members denote
```

Also applies to: 142-142
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##            devel    #3828      +/-   ##
==========================================
- Coverage   82.61%   75.18%   -7.43%
==========================================
  Files         515      515
  Lines       49566    49558       -8
  Branches     2985     2985
==========================================
- Hits        40949    37262    -3687
- Misses       7706    11463    +3757
+ Partials      911      833      -78
```

☔ View full report in Codecov by Sentry.
- Using `tebd_input_mode` instead of `"tebd_input_mode"` for model params.
- It's better to place the `"concat"` formulation in the description of the `se_atten` descriptor (e.g. "Note that we obtain G using the type embedding method by default in this descriptor. By default, we concat the type embeddings...") and keep the doc for `tebd_input_mode` and the `"strip"` formulation in `se_atten_v2`, by introducing `"strip"` as a default value for `tebd_input_mode` in `se_atten_v2` (a rough sketch follows below).
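If this suggestion were adopted, the user-facing split might look roughly like the sketch below. This is only an illustration under the assumption that `"concat"` remains the default `tebd_input_mode` for `se_atten`, while `se_atten_v2` introduces `"strip"` as its default:

```json
"descriptor": {
    "type": "se_atten",
    "tebd_input_mode": "concat"
}
```

With `se_atten_v2`, the `"strip"` behaviour would come from the default, so users would only need to write `"type": "se_atten_v2"` without an explicit `tebd_input_mode` key.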
Actionable comments posted: 0
Outside diff range and nitpick comments (2)
doc/model/train-se-atten.md (2)
Line range hint 233-233: Correct the spelling of "place holder" to "placeholder".

```diff
- Atom type indexes (place holder)
+ Atom type indexes (placeholder)
```

Line range hint 219-219: Specify the language for fenced code blocks to enable proper syntax highlighting.

````diff
- ```
+ ```json
````
Actionable comments posted: 0
Outside diff range and nitpick comments (3)
doc/model/train-se-atten.md (3)
Line range hint 234-234: Correct the spelling of "place holder" to "placeholder".

```diff
- Atom type indexes (place holder)
+ Atom type indexes (placeholder)
```

Line range hint 95-140: Replace hard tabs with spaces for consistent formatting in the `"descriptor"` block:

```json
"descriptor" :{
    "type": "se_atten",
    "rcut_smth": 0.50,
    "rcut": 6.00,
    "sel": 120,
    "neuron": [25, 50, 100],
    "axis_neuron": 16,
    "resnet_dt": false,
    "attn": 128,
    "attn_layer": 2,
    "attn_mask": false,
    "attn_dotr": true,
    "seed": 1
}
```

Line range hint 220-220: Specify the language for fenced code blocks to improve readability.

````diff
- ```
+ ```json
````
Actionable comments posted: 0
Outside diff range and nitpick comments (2)
doc/model/train-se-atten.md (2)
Line range hint 238-238: Correct the spelling of "place holder" to "placeholder".

```diff
- Atom type indexes (place holder)
+ Atom type indexes (placeholder)
```

Line range hint 224-224: Specify the language for fenced code blocks.

````diff
- ```
+ ```json
````
I think it is better to update the documentation in #3840.
Actionable comments posted: 0
Outside diff range and nitpick comments (1)
doc/model/train-se-atten.md (1)
Line range hint 91-104: Replace hard tabs with spaces for consistency in the `"descriptor"` block:

```json
"descriptor" :{
    "type": "se_atten",
    "rcut_smth": 0.50,
    "rcut": 6.00,
    "sel": 120,
    "neuron": [25, 50, 100],
    "axis_neuron": 16,
    "resnet_dt": false,
    "attn": 128,
    "attn_layer": 2,
    "attn_mask": false,
    "attn_dotr": true,
    "seed": 1
}
```
````diff
@@ -122,6 +126,16 @@ We highly recommend using the version 2.0 of the attention-based descriptor `"se
   "set_davg_zero": false
 ```
````

When using the PyTorch backend, you must continue to use the descriptor `"se_atten"` and specify `tebd_input_mode` as `"strip"` and `smooth_type_embedding` as `"true"`, which achieves the effect of `"se_atten_v2"`. The `tebd_input_mode` parameter can take the values `"concat"` and `"strip"`. When using the TensorFlow backend, you should use the descriptor `"se_atten_v2"` and do not need to set `tebd_input_mode` or `smooth_type_embedding`, because in the TensorFlow backend the default value of `tebd_input_mode` is `"strip"` and the default value of `smooth_type_embedding` is `"true"`. When `tebd_input_mode` is set to `"strip"`, the embedding matrix $\mathcal{G}^i$ is constructed as:
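For illustration, a minimal sketch of the two equivalent configurations described in the paragraph above. Only the parameters named there are shown; the remaining descriptor keys are assumed to follow the `se_atten` example earlier in the document. With the PyTorch backend:

```json
"descriptor": {
    "type": "se_atten",
    "tebd_input_mode": "strip",
    "smooth_type_embedding": true
}
```

and, equivalently, with the TensorFlow backend:

```json
"descriptor": {
    "type": "se_atten_v2"
}
```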
In TensorFlow, only `type_one_side=false` is supported when using `se_atten_v2` (see #3745), so the formulation below is only the former one. The other sentences about TF are correct.
Solve issue #3139

`"se_atten_v2"` is inherited from `"se_atten"` with the following parameter modifications:

```json
"tebd_input_mode": "strip",
"smooth_type_embedding": true,
"set_davg_zero": false
```

I add the equations for parameter `"tebd_input_mode"`.

## Summary by CodeRabbit

- **Documentation**
  - Detailed the default value and functionality of the `"tebd_input_mode"` parameter.
  - Highlighted the performance superiority of `"se_atten_v2"` over `"se_atten"`.
  - Specified a model compression requirement for `se_atten_v2`.
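For reference, a minimal sketch of a complete `se_atten_v2` descriptor section after these modifications. This is only an illustration: the base keys and values are copied from the `se_atten` example quoted in the review comments above, and `tebd_input_mode`, `smooth_type_embedding`, and `set_davg_zero` are omitted on the assumption, per this description, that `se_atten_v2` applies them by default:

```json
"descriptor": {
    "type": "se_atten_v2",
    "rcut_smth": 0.50,
    "rcut": 6.00,
    "sel": 120,
    "neuron": [25, 50, 100],
    "axis_neuron": 16,
    "resnet_dt": false,
    "attn": 128,
    "attn_layer": 2,
    "attn_mask": false,
    "attn_dotr": true,
    "seed": 1
}
```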