
perf: optimize training loop #4426

Merged: 6 commits into deepmodeling:devel on Nov 27, 2024
Conversation

caic99
Member

@caic99 caic99 commented Nov 26, 2024

Improvements to the training process:

  • deepmd/pt/train/training.py: Added a check that skips setting the model to training mode when it is already in that mode. Profiling showed that the call recursively sets the mode on every submodule, which takes noticeable time.

  • deepmd/pt/train/training.py: Modified the gradient clipping function to include the error_if_nonfinite parameter, and removed the manual check for non-finite gradients and the associated exception raising.
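The two changes above can be sketched on a toy model. The `Trainer` wiring here is hypothetical (only the `module.training` check and the `error_if_nonfinite` flag come from this PR):

```python
import torch

model = torch.nn.Linear(4, 2)

# Change 1: skip the recursive .train() call when the model is already
# in training mode. torch.nn.Module.train() walks every submodule,
# which profiling showed takes measurable time on large models.
if not model.training:
    model.train()

# Forward/backward on dummy data to populate gradients.
loss = model(torch.randn(8, 4)).sum()
loss.backward()

# Change 2: with error_if_nonfinite=True, clip_grad_norm_ itself raises
# a RuntimeError when the total gradient norm is nan/inf, replacing the
# manual non-finite check and hand-raised exception the PR removed.
grad_norm = torch.nn.utils.clip_grad_norm_(
    model.parameters(), max_norm=1.0, error_if_nonfinite=True
)
print(torch.isfinite(grad_norm).item())
```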

Summary by CodeRabbit

  • New Features

    • Improved training loop with enhanced error handling and control flow.
    • Updated gradient clipping logic for better error detection.
    • Refined logging functionality for training and validation results.
  • Bug Fixes

    • Prevented redundant training calls by adding conditional checks.
  • Documentation

    • Clarified method logic in the Trainer class without changing method signatures.

Contributor

coderabbitai bot commented Nov 26, 2024

<details>
<summary>📝 Walkthrough</summary>

## Walkthrough
The pull request introduces modifications to the `Trainer` class in `deepmd/pt/train/training.py`, focusing on enhancing the training loop's control flow and error handling. Key changes include the introduction of an error handling mechanism in the gradient clipping logic, improved logging for training and validation metrics, and adjustments to ensure that the training state is managed effectively. Several methods within the class have had their logic updated without changing their signatures.

## Changes

| File Path                     | Change Summary                                                                                      |
|-------------------------------|----------------------------------------------------------------------------------------------------|
| deepmd/pt/train/training.py   | - Modified training loop to include a check for `self.wrapper.train()` after evaluation.          |
|                               | - Updated gradient clipping logic with `error_if_nonfinite=True`.                                 |
|                               | - Enhanced logging functionality for clearer training and validation results.                      |
|                               | - Multiple method logic updates in `Trainer` class without changing method signatures.            |
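The `error_if_nonfinite=True` behavior summarized above can be observed directly: when a gradient is non-finite, `clip_grad_norm_` raises instead of silently clipping. A standalone illustration (not code from the PR):

```python
import torch

p = torch.nn.Parameter(torch.ones(3))
# Inject a non-finite gradient by hand.
p.grad = torch.tensor([1.0, float("inf"), 1.0])

caught = False
try:
    # With the flag set, the non-finite total norm triggers a RuntimeError,
    # so no separate manual isfinite() check is needed in the training loop.
    torch.nn.utils.clip_grad_norm_([p], max_norm=1.0, error_if_nonfinite=True)
except RuntimeError:
    caught = True
print(caught)
```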

## Possibly related PRs
- #4212: This PR modifies the logging and management of training steps in `deepmd/pt/train/training.py`, which is directly related to the changes made in the `Trainer` class regarding training state management and logging enhancements.

## Suggested reviewers
- wanghan-iapcm
- iProzd

</details>

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 078e848 and 97c3fcf.

📒 Files selected for processing (1)
  • deepmd/pt/train/training.py (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • deepmd/pt/train/training.py


codecov bot commented Nov 26, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 84.64%. Comparing base (e7ad8dc) to head (97c3fcf).
Report is 5 commits behind head on devel.

Additional details and impacted files
@@           Coverage Diff           @@
##            devel    #4426   +/-   ##
=======================================
  Coverage   84.64%   84.64%           
=======================================
  Files         614      614           
  Lines       57135    57136    +1     
  Branches     3486     3488    +2     
=======================================
+ Hits        48364    48365    +1     
+ Misses       7645     7644    -1     
- Partials     1126     1127    +1     


A review thread on deepmd/pt/train/training.py was marked outdated and resolved.
@caic99 caic99 requested a review from njzjz November 27, 2024 04:49
@njzjz njzjz added this pull request to the merge queue Nov 27, 2024
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Nov 27, 2024
@njzjz njzjz added the Test CUDA Trigger test CUDA workflow label Nov 27, 2024
@github-actions github-actions bot removed the Test CUDA Trigger test CUDA workflow label Nov 27, 2024
@njzjz njzjz enabled auto-merge November 27, 2024 21:33
@njzjz njzjz added this pull request to the merge queue Nov 27, 2024
Merged via the queue into deepmodeling:devel with commit 037cf3f Nov 27, 2024
63 checks passed
@caic99 caic99 deleted the perf-training branch November 28, 2024 02:08
Member Author

caic99 commented Nov 28, 2024

Before: [profiling screenshot]

After: [profiling screenshot]

@njzjz njzjz added this to the v3.0.1 milestone Dec 6, 2024