
Feat/add dos finetune UT #3876

Merged: 10 commits into deepmodeling:devel on Jun 17, 2024

Conversation

@anyangml (Collaborator) commented Jun 13, 2024

Summary by CodeRabbit

  • Bug Fixes
    • Improved the compute_output_stats_global function to handle empty model predictions gracefully.
  • Tests
    • Enhanced finetuning tests with new model support and additional checks for the "dos" model setup.

@coderabbitai bot (Contributor) commented Jun 13, 2024

Walkthrough

The changes simplify the single_model_finetune function in training.py and drop an unneeded import, add new test cases to test_finetune.py to cover these changes, and introduce an early return in stat.py for handling empty model_pred dictionaries. Together, these updates improve code clarity, efficiency, and test coverage.

Changes

| File Path | Change Summary |
| --- | --- |
| deepmd/pt/train/training.py | Removed import of DOSModel; simplified single_model_finetune function. |
| source/tests/pt/test_finetune.py | Added model_dos, new DataRequirementItem, updated test_finetune_change_out_bias, new test class. |
| deepmd/pt/utils/stat.py | Added check for empty model_pred and early return in compute_output_stats_global function. |
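To make the stat.py change concrete, here is a minimal, self-contained sketch of the early-return guard; the real compute_output_stats_global takes more parameters and computes per-atom-type statistics, so the signature and body below are illustrative only.

```python
from typing import Dict, Tuple

import numpy as np


def compute_output_stats_global(
    model_pred: Dict[str, np.ndarray],
) -> Tuple[Dict[str, np.ndarray], Dict[str, np.ndarray]]:
    """Illustrative reduction of the guarded function."""
    # The fix described in this PR: return empty results before any
    # statistics are computed when there are no model predictions.
    if len(model_pred) == 0:
        return {}, {}
    # Otherwise proceed with the computation (heavily simplified here).
    bias = {kk: vv.mean(axis=0) for kk, vv in model_pred.items()}
    std = {kk: vv.std(axis=0) for kk, vv in model_pred.items()}
    return bias, std


# With an empty prediction dict the guard short-circuits, matching the
# "Return {}, {}" branch in the sequence diagram below.
assert compute_output_stats_global({}) == ({}, {})
```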

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant TestSuite
    participant TrainingModule
    participant StatisticsModule

    TestSuite->>+TrainingModule: call single_model_finetune()
    TrainingModule-->>-TrainingModule: Simplified _model assignment
    TrainingModule-->>TestSuite: Return result

    TestSuite->>+StatisticsModule: call compute_output_stats_global(model_pred)
    StatisticsModule-->>StatisticsModule: Check if model_pred is empty
    alt model_pred is empty
        StatisticsModule-->>TestSuite: Return {}, {}
    else model_pred is not empty
        StatisticsModule->>StatisticsModule: Proceed with computation
        StatisticsModule-->>TestSuite: Return computed stats
    end
```


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

Outside diff range and nitpick comments (5)
deepmd/pt/train/training.py (5)

Line range hint 368-368: Refactor the usage of config.get("learning_rate_dict") to remove unnecessary None default.

- config.get("learning_rate_dict", None)
+ config.get("learning_rate_dict")

Also applies to: 462-462
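For context, a standalone illustration of why the explicit default is redundant (the dict contents here are made up):

```python
cfg = {"learning_rate": {"start_lr": 1e-3}}  # hypothetical config contents

# dict.get() already returns None for a missing key, so passing None
# explicitly adds noise without changing behavior.
assert cfg.get("learning_rate_dict") is None
assert cfg.get("learning_rate_dict", None) is None
```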


Line range hint 560-560: Optimize dictionary key checks by removing .keys() for better performance.

- key in dict.keys()
+ key in dict

Also applies to: 1132-1132
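Both membership tests return the same result; `in dict` simply skips materializing the keys view (toy dict below, not from the PR):

```python
d = {"energy": 0.0, "dos": 0.0}  # toy stand-in for the real dict

assert "dos" in d          # idiomatic: hashes the key directly
assert "dos" in d.keys()   # same result, but builds a keys view first
```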


Line range hint 705-705: Use context handlers for file operations to ensure files are properly closed after operations, avoiding resource leaks.

- fout = open(self.disp_file, mode="w", buffering=1)
+ with open(self.disp_file, mode="w", buffering=1) as fout:
- fout1 = open(record_file, mode="w", buffering=1)
+ with open(record_file, mode="w", buffering=1) as fout1:

Also applies to: 709-709
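As a standalone sketch of the same pattern (file names are hypothetical; in the PR the handles hang off an instance, note the self.disp_file above), a single with statement can manage both files and guarantees they are closed even if a write raises:

```python
with open("lcurve.out", mode="w", buffering=1) as fout, \
        open("checkpoint.record", mode="w", buffering=1) as fout1:
    fout.write("# step  rmse_val  rmse_trn\n")  # line-buffered display file
    fout1.write("model.ckpt-0.pt\n")            # checkpoint record
# Both handles are closed here, whether or not an exception occurred.
```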


Line range hint 751-754: Use a ternary operator for cleaner and more concise logic when setting pref_lr.

- if _step_id < self.warmup_steps:
-     pref_lr = _lr.start_lr
- else:
-     pref_lr = cur_lr
+ pref_lr = _lr.start_lr if _step_id < self.warmup_steps else cur_lr

Line range hint 861-861: Remove unused loop control variable ii or replace it with _ to indicate it's intentionally unused.

- for ii in range(valid_numb_batch):
+ for _ in range(valid_numb_batch):

source/tests/pt/test_finetune.py (2 review comments, resolved)
codecov bot commented Jun 13, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 82.71%. Comparing base (d23f7ae) to head (9547f29).
Report is 115 commits behind head on devel.

Additional details and impacted files
@@           Coverage Diff           @@
##            devel    #3876   +/-   ##
=======================================
  Coverage   82.70%   82.71%           
=======================================
  Files         517      517           
  Lines       50137    50138    +1     
  Branches     2984     2987    +3     
=======================================
+ Hits        41467    41472    +5     
+ Misses       7760     7756    -4     
  Partials      910      910           


@anyangml marked this pull request as draft on June 13, 2024 at 09:33
@coderabbitai bot (Contributor) left a comment
Actionable comments posted: 1

Outside diff range and nitpick comments (3)
deepmd/pt/utils/stat.py (3)

Line range hint 88-88: Avoid using mutable default arguments.

- def compute_output_stats(merged: Union[Callable[[], List[dict]], List[dict]] = [], ntypes: int, keys: Union[str, List[str]] = ["energy"], ...
+ def compute_output_stats(merged: Union[Callable[[], List[dict]], List[dict]] = None, ntypes: int, keys: Union[str, List[str]] = None, ...
+    if merged is None:
+        merged = []
+    if keys is None:
+        keys = ["energy"]

Also applies to: 238-238
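As background on why mutable defaults get flagged, a small self-contained demonstration (generic names, unrelated to deepmd):

```python
def append_bad(item, bucket=[]):
    # The default list is created once, at function definition time, so
    # every call that omits bucket mutates the same shared object.
    bucket.append(item)
    return bucket


def append_good(item, bucket=None):
    # A None sentinel gives each call its own fresh list.
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket


print(append_bad(1), append_bad(2))    # [1, 2] [1, 2]  (shared state!)
print(append_good(1), append_good(2))  # [1] [2]
```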


Line range hint 154-154: Remove unused variable.

- natoms = system["natoms"]

Line range hint 166-166: Bind loop variables in function definition.

- for nframes in range(len(sampled)):
+ for nframes, system in enumerate(sampled):

Also applies to: 167-167
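The rewrite above is the bot's suggestion; the general pitfall behind warnings of this kind ("function definition does not bind loop variable") is that closures created inside a loop capture the loop variable by reference, not by value. A generic illustration, not the PR's code:

```python
# Lambdas created in a loop all read the loop variable's final value...
callbacks = [lambda: kk for kk in range(3)]
print([f() for f in callbacks])  # [2, 2, 2]

# ...unless the current value is bound as a default at definition time.
callbacks = [lambda kk=kk: kk for kk in range(3)]
print([f() for f in callbacks])  # [0, 1, 2]
```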

deepmd/pt/utils/stat.py (1 review comment, resolved)
anyangml and others added 3 commits June 13, 2024 19:09

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Anyang Peng <137014849+anyangml@users.noreply.github.com>
@coderabbitai bot (Contributor) left a comment
Actionable comments posted: 0

Outside diff range and nitpick comments (4)
deepmd/pt/utils/stat.py (4)

Line range hint 88-88: Avoid using mutable default arguments to prevent unexpected behaviors.

- def make_stat_input(datasets, dataloaders, nbatches=[]):
+ def make_stat_input(datasets, dataloaders, nbatches=None):
+    if nbatches is None:
+        nbatches = []

Line range hint 154-154: Remove the unused variable natoms to clean up the code.

- natoms = system["natoms"]

Line range hint 238-238: Avoid using mutable default arguments to prevent unexpected behaviors.

- def compute_output_stats(merged, ntypes, keys=[], ...):
+ def compute_output_stats(merged, ntypes, keys=None, ...):
+    if keys is None:
+        keys = []

Line range hint 291-291: Simplify dictionary key checks by removing .keys().

- if kk in preset_bias.keys():
+ if kk in preset_bias:

Also applies to: 334-334, 343-343, 450-450, 487-487, 493-493, 495-495, 500-500

@coderabbitai bot (Contributor) left a comment
Actionable comments posted: 0

Outside diff range and nitpick comments (4)
deepmd/pt/utils/stat.py (4)

Line range hint 238-238: Refactor to avoid using .keys() when checking for key existence in dictionaries.

- if kk in preset_bias.keys()
+ if kk in preset_bias

This change should be applied to all similar instances in the file.

Also applies to: 334-334, 343-343, 450-450, 487-487, 493-493, 495-495, 500-500


Line range hint 88-88: Avoid using mutable data structures as default arguments to prevent potential bugs related to mutable default values.

- def compute_output_stats_global(sampled: List[dict], ntypes: int, keys: List[str] = ["energy"], rcond: Optional[float] = None, preset_bias: Optional[Dict[str, List[Optional[torch.Tensor]]]] = None, model_pred: Optional[Dict[str, np.ndarray]] = None):
+ def compute_output_stats_global(sampled: List[dict], ntypes: int, keys: Optional[List[str]] = None, rcond: Optional[float] = None, preset_bias: Optional[Dict[str, List[Optional[torch.Tensor]]]] = None, model_pred: Optional[Dict[str, np.ndarray]] = None):
+     if keys is None:
+         keys = ["energy"]

Also applies to: 238-238


Line range hint 154-154: Remove the unused variable natoms as it is defined but never used.

- natoms = {
-     kk: [
-         system["atype"]
-         for system in sampled
-         if ("atom_" + kk) in system and system.get(f"find_atom_{kk}", 0) > 0
-     ]
-     for kk in keys
- }

Line range hint 166-166: Bind the loop variables nframes and system to their respective functions to avoid potential scoping issues.

- nf = {kk: merged_natoms[kk].shape[0] for kk in keys if kk in merged_natoms}
+ nf = {kk: merged_natoms[kk].shape[0] for kk, system in enumerate(sampled) if kk in merged_natoms}

Also applies to: 167-167

@anyangml marked this pull request as ready for review on June 14, 2024 at 03:04
@anyangml requested review from iProzd and wanghan-iapcm on June 14, 2024 at 03:05
@iProzd added this pull request to the merge queue on Jun 17, 2024
Merged via the queue into deepmodeling:devel with commit 160551f on Jun 17, 2024
60 checks passed
mtaillefumier pushed a commit to mtaillefumier/deepmd-kit that referenced this pull request on Sep 18, 2024. Its message repeats the CodeRabbit release notes quoted at the top of this conversation, with the following trailers:

Signed-off-by: Anyang Peng <137014849+anyangml@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>