
chore: support preset bias of atomic model output #4116

Merged
wanghan-iapcm merged 8 commits into deepmodeling:devel from feat-preset-bias on Sep 11, 2024

Conversation


@wanghan-iapcm wanghan-iapcm commented Sep 10, 2024

Summary by CodeRabbit

  • New Features

    • Introduced a new preset_out_bias parameter for enhanced model configuration, allowing users to define biases.
    • Added documentation for the preset_out_bias parameter in the model arguments for improved clarity.
  • Bug Fixes

    • Implemented validation to ensure the preset_out_bias length matches the model's type map, preventing runtime errors.
  • Tests

    • Added unit tests for the get_model function to validate model attributes and ensure proper error handling for the new bias parameter.
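The length check described in the summary can be sketched as follows. This is a minimal illustration, not the actual deepmd-kit implementation; the names preset_out_bias and type_map follow the PR, while the helper name is hypothetical:

```python
def validate_preset_out_bias(preset_out_bias, type_map):
    """Ensure each bias list has exactly one entry per type in type_map."""
    if preset_out_bias is None:
        return
    for key, biases in preset_out_bias.items():
        if len(biases) != len(type_map):
            raise ValueError(
                f"length of preset_out_bias['{key}'] ({len(biases)}) "
                f"does not match type_map ({len(type_map)})"
            )

# One bias per type; a None entry means "do not preset this type's bias".
validate_preset_out_bias({"energy": [None, -3.0]}, ["O", "H"])
```

Failing this check at model construction time is what prevents the runtime errors mentioned above.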

@wanghan-iapcm wanghan-iapcm requested a review from njzjz September 10, 2024 04:08

coderabbitai bot commented Sep 10, 2024

Walkthrough

The pull request introduces modifications to the get_standard_model function in the deepmd/pt/model/model/__init__.py file by adding a new parameter, preset_out_bias, which allows for user-defined biases in model configurations. It includes validation to ensure that the length of each preset_out_bias entry matches that of type_map. Additionally, the BaseAtomicModel class in deepmd/pt/model/atomic_model/base_atomic_model.py is updated to change the type of preset_out_bias from PyTorch tensors to NumPy arrays. The model_args function in deepmd/utils/argcheck.py is updated to document this new parameter. A new test file, test_get_model.py, is added to validate the behavior of the get_model function, including tests for attribute settings and error handling.

Changes

File Change Summary
deepmd/pt/model/model/__init__.py Added preset_out_bias parameter to get_standard_model function with validation for length consistency with type_map. Introduced helper functions _convert_preset_out_bias_to_torch_tensor and _convert_preset_out_bias_to_array.
deepmd/pt/model/atomic_model/base_atomic_model.py Changed type of preset_out_bias from Dict[str, List[Optional[torch.Tensor]]] to Dict[str, List[Optional[np.ndarray]]] in BaseAtomicModel. Updated constructor to reflect this change and added import for NumPy.
deepmd/utils/argcheck.py Updated model_args function to include documentation for the new preset_out_bias parameter, defined as an optional dictionary with a default value of None.
deepmd/pt/utils/stat.py Modified type annotations for preset_bias in compute_output_stats and compute_output_stats_global functions from Optional[Dict[str, List[Optional[torch.Tensor]]]] to Optional[Dict[str, List[Optional[np.ndarray]]]]. Updated _make_preset_out_bias function to standardize ibias parameter type to Optional[np.ndarray].
source/tests/pt/model/test_get_model.py Introduced unit tests for the get_model function, including tests for attribute validation, default behavior when parameters are omitted, and error handling for incorrect preset_out_bias lengths. Added the TestGetModel class with three test methods: test_model_attr, test_notset_model_attr, and test_preset_wrong_len.
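Putting the changes above together, a user-facing model section might look like the following. This is illustrative only; field names follow the PR, but the values are made up and the exact schema is defined in deepmd/utils/argcheck.py:

```python
# Illustrative model configuration; values are hypothetical.
model = {
    "type_map": ["O", "H"],
    "preset_out_bias": {
        # One entry per type in type_map; None leaves that type's bias
        # to be determined from data statistics.
        "energy": [None, -187.6],
    },
    # descriptor / fitting_net sections omitted for brevity
}

# The validation added in this PR enforces this invariant:
assert len(model["preset_out_bias"]["energy"]) == len(model["type_map"])
```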

Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 836e6c2 and 1afca1c.

Files selected for processing (2)
  • deepmd/pt/model/atomic_model/base_atomic_model.py (3 hunks)
  • deepmd/pt/utils/stat.py (4 hunks)
Files skipped from review as they are similar to previous changes (1)
  • deepmd/pt/model/atomic_model/base_atomic_model.py
Additional comments not posted (2)
deepmd/pt/utils/stat.py (2)

190-191: LGTM!

The change in the type annotation of the ibias parameter from Optional[np.array] to Optional[np.ndarray] is approved.


Line range hint 245-267: Verify the impact of using NumPy arrays instead of PyTorch tensors for preset_bias.

The type annotation of preset_bias parameter has been updated to use NumPy arrays (np.ndarray) instead of PyTorch tensors (torch.Tensor).

Please ensure that this change is thoroughly tested to confirm that it behaves as expected and does not introduce any unintended side effects or performance issues.
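For context, a rough sketch of what standardizing preset bias entries to NumPy arrays might look like. This is a hypothetical helper, not the deepmd-kit code; it coerces scalar or list entries to np.ndarray while leaving None entries untouched:

```python
import numpy as np

def standardize_bias(preset_bias):
    """Coerce every non-None bias entry to a 1-D np.ndarray."""
    if preset_bias is None:
        return None
    return {
        key: [
            None if b is None
            else np.atleast_1d(np.asarray(b, dtype=np.float64))
            for b in biases
        ]
        for key, biases in preset_bias.items()
    }

out = standardize_bias({"energy": [None, -3.0, [1.0, 2.0]]})
```

Keeping the preset in NumPy form defers the device/dtype decision to the point where the model converts it to torch tensors.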

Run the following script to verify the usage of preset_bias:

Tools
Ruff

242-242: Do not use mutable data structures for argument defaults

Replace with None; initialize within function

(B006)
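The B006 pattern Ruff flags can be fixed as shown below. This is a generic illustration of the rule, not the actual stat.py code:

```python
# Bad: the default dict is created once at definition time and
# shared across all calls that omit the argument.
def accumulate_bad(value, store={}):
    store.setdefault("values", []).append(value)
    return store

# Good: default to None and initialize inside the function,
# so each call gets a fresh dict.
def accumulate_good(value, store=None):
    if store is None:
        store = {}
    store.setdefault("values", []).append(value)
    return store
```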



@wanghan-iapcm wanghan-iapcm linked an issue Sep 10, 2024 that may be closed by this pull request

codecov bot commented Sep 10, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 83.02%. Comparing base (d2b6243) to head (1afca1c).
Report is 202 commits behind head on devel.

Additional details and impacted files
@@            Coverage Diff             @@
##            devel    #4116      +/-   ##
==========================================
- Coverage   83.02%   83.02%   -0.01%     
==========================================
  Files         532      532              
  Lines       52174    52187      +13     
  Branches     3030     3031       +1     
==========================================
+ Hits        43319    43328       +9     
- Misses       7911     7913       +2     
- Partials      944      946       +2     


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Outside diff range, codebase verification and nitpick comments (1)
deepmd/pt/model/model/__init__.py (1)

Line range hint 171-224: LGTM, but remove the unused variable.

The changes to get_standard_model look good. The new preset_out_bias parameter is correctly processed through the new helper function _convert_preset_out_bias_to_torch_tensor, allowing for user-defined biases in model configurations.

However, the static analysis tools have correctly identified that the variable default_preset_out_bias at line 197 is unused.

Please remove the unused variable:

-default_preset_out_bias = [None] * len(model_params["type_map"])
 preset_out_bias = model_params.get("preset_out_bias")
Tools
Ruff

197-197: Local variable default_preset_out_bias is assigned to but never used

Remove assignment to unused variable default_preset_out_bias

(F841)

GitHub Check: CodeQL

[notice] 197-197: Unused local variable
Variable default_preset_out_bias is not used.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Review thread on deepmd/pt/model/model/__init__.py (outdated, resolved)
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Han Wang <92130845+wanghan-iapcm@users.noreply.github.com>
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
Review thread on deepmd/pt/model/model/__init__.py (resolved)
Review thread on deepmd/utils/argcheck.py (outdated, resolved)
Review thread on deepmd/utils/argcheck.py (outdated, resolved)
@njzjz
Copy link
Member

njzjz commented Sep 10, 2024

2024-09-10T20:18:26.6301739Z =========================== short test summary info ============================
2024-09-10T20:18:26.6302282Z FAILED source/tests/pt/model/test_property_fitting.py::TestInvariance::test_trans - AssertionError: 
2024-09-10T20:18:26.6302507Z Not equal to tolerance rtol=1e-07, atol=0
2024-09-10T20:18:26.6302517Z 
2024-09-10T20:18:26.6302717Z Mismatched elements: 55 / 55 (100%)
2024-09-10T20:18:26.6302927Z Max absolute difference: 0.00115797
2024-09-10T20:18:26.6303063Z Max relative difference: 0.02320881
2024-09-10T20:18:26.6303396Z  x: array([[[ 1.013011, -0.031964,  0.467955, -1.055318,  0.08362 ,
2024-09-10T20:18:26.6303659Z           0.343627,  0.329521, -1.231861, -0.326444, -1.820717,
2024-09-10T20:18:26.6303812Z           1.033206],...
2024-09-10T20:18:26.6304088Z  y: array([[[ 1.01299 , -0.03189 ,  0.467994, -1.055362,  0.08366 ,
2024-09-10T20:18:26.6304419Z           0.343748,  0.329551, -1.231867, -0.326422, -1.82073 ,
2024-09-10T20:18:26.6304542Z           1.033258],...
2024-09-10T20:18:26.6304892Z = 1 failed, 8331 passed, 3631 skipped, 59809 deselected, 145 warnings in 1976.94s (0:32:56) =
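The failure above is a tolerance comparison of the kind numpy.testing.assert_allclose performs. The sketch below is illustrative, not the actual test code; the values are taken from the first elements of the arrays in the log:

```python
import numpy as np

x = np.array([1.013011, -0.031964, 0.467955])
y = np.array([1.012990, -0.031890, 0.467994])

# rtol=1e-07, atol=0 requires |x - y| <= 1e-07 * |y| elementwise.
# Here the differences are on the order of 1e-5, so the check fails.
try:
    np.testing.assert_allclose(x, y, rtol=1e-07, atol=0)
    ok = True
except AssertionError:
    ok = False
```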

@wanghan-iapcm wanghan-iapcm requested a review from njzjz September 11, 2024 01:05
Review thread on deepmd/pt/model/atomic_model/base_atomic_model.py (outdated, resolved)
Review thread on deepmd/pt/utils/stat.py (outdated, resolved)
Review thread on deepmd/pt/utils/stat.py (outdated, resolved)
@wanghan-iapcm wanghan-iapcm requested a review from njzjz September 11, 2024 03:34
@wanghan-iapcm wanghan-iapcm added this pull request to the merge queue Sep 11, 2024
Merged via the queue into deepmodeling:devel with commit 85bd386 Sep 11, 2024
60 checks passed
@wanghan-iapcm wanghan-iapcm deleted the feat-preset-bias branch September 11, 2024 09:56
mtaillefumier pushed a commit to mtaillefumier/deepmd-kit that referenced this pull request Sep 18, 2024

Successfully merging this pull request may close these issues.

PT: pass atom_ener to preset_out_bias