feat(pt): support use_aparam_as_mask for pt backend #4246

Merged · 13 commits · Oct 26, 2024

Conversation

@ChiahsinChu (Contributor) commented Oct 23, 2024

support use_aparam_as_mask for pt backend
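
For orientation, enabling the flag in the PyTorch backend looks roughly like the following, a minimal sketch modeled on the tests added in this PR (the numeric arguments are illustrative placeholders):

from deepmd.pt.model.task.invar_fitting import InvarFitting

ft = InvarFitting(
    "foo",  # name of the fitted property
    2,      # number of atom types
    64,     # descriptor output dimension (dim_descrpt)
    1,      # per-atom output dimension
    numb_fparam=0,
    numb_aparam=2,
    use_aparam_as_mask=True,  # aparam gates atoms instead of feeding the net
)
# With the flag set, aparam no longer contributes to the input width:
assert ft.filter_layers[0].in_dim == ft.dim_descrpt + ft.numb_fparam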

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced use_aparam_as_mask parameter in GeneralFitting, InvarFitting, and EnerFitting classes, allowing users to conditionally exclude atomic parameters from fitting processes.
    • Added seed parameter to InvarFitting for enhanced control over randomness.
    • New test method test_use_aparam_as_mask in TestInvarFitting to validate behavior based on the new parameter.
  • Bug Fixes

    • Improved error handling for use_aparam_as_mask in various classes.
  • Tests

    • Enhanced parameterization in multiple test classes to accommodate new features related to atomic parameters.
    • Updated test methods in TestInvarFitting to include use_aparam_as_mask for comprehensive testing.

coderabbitai bot (Contributor) commented Oct 23, 2024


Walkthrough

The pull request introduces significant modifications across several classes related to fitting processes in the DeepMD framework. Key changes include the addition of a use_aparam_as_mask boolean parameter to multiple classes, affecting how atomic parameters are handled during fitting. The constructors of the GeneralFitting, InvarFitting, and other related classes have been updated to accommodate this new parameter. Additionally, various methods have been refined to conditionally process atomic parameters based on the value of this flag, enhancing the flexibility and control over parameter usage in fitting calculations.
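
In code terms, the core change is a guard that drops aparam from the fitting-net input width when it only serves as a mask. A standalone restatement of that logic (the actual attribute-based code appears verbatim in the review hunks below):

def fitting_net_in_dim(dim_descrpt: int, numb_fparam: int, numb_aparam: int,
                       use_aparam_as_mask: bool) -> int:
    # Mirrors the calculation added in this PR: aparam contributes to the
    # input width only when it is used as a real input feature.
    return dim_descrpt + numb_fparam + (0 if use_aparam_as_mask else numb_aparam)

The aparam_avg and aparam_inv_std statistics buffers are likewise registered only when numb_aparam > 0 and use_aparam_as_mask is False.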

Changes

File Path / Change Summary

  • deepmd/dpmodel/fitting/general_fitting.py: Modified GeneralFitting class constructor and _call_common method to conditionally handle aparam based on use_aparam_as_mask.
  • deepmd/dpmodel/fitting/invar_fitting.py: Updated InvarFitting constructor to include a new seed parameter and refined error handling for use_aparam_as_mask. Enhanced serialize method to include atom_ener.
  • deepmd/pt/model/task/fitting.py: Added use_aparam_as_mask parameter to GeneralFitting constructor, affecting buffer registration and input dimension calculations. Updated serialization to include this parameter.
  • deepmd/pt/model/task/invar_fitting.py: Introduced use_aparam_as_mask parameter in InvarFitting constructor, allowing exclusion of atomic parameters from fitting.
  • deepmd/tf/fit/ener.py: Modified EnerFitting class to conditionally process aparam based on use_aparam_as_mask in several methods. Updated serialization methods to reflect new handling of aparam.
  • source/tests/consistent/fitting/common.py: Updated build_tf_fitting method in FittingTest to include aparam as a required parameter.
  • source/tests/consistent/fitting/test_dos.py: Enhanced TestDOS class to include numb_aparam in various methods and properties, impacting test configurations.
  • source/tests/consistent/fitting/test_ener.py: Added numb_aparam and use_aparam_as_mask to TestEner class, affecting test parameterization and setups.
  • source/tests/consistent/fitting/test_property.py: Introduced numb_aparam to TestProperty class, impacting various methods and properties related to test configurations.
  • source/tests/pt/model/test_ener_fitting.py: Updated TestInvarFitting methods to include use_aparam_as_mask parameter, added a new test method for this parameter's functionality.

Possibly related PRs

  • feat(pt): consistent fine-tuning with init-model #3803: The changes in this PR introduce a new boolean parameter use_aparam_as_mask to the GeneralFitting class, which directly relates to the handling of atomic parameters, similar to the modifications made in the main PR regarding the GeneralFitting class.
  • feat pt : Support property fitting #3867: This PR enhances the InvarFitting class, which is related to the handling of atomic parameters and their integration into fitting processes, aligning with the changes made in the main PR.
  • feat(jax): support neural networks #4156: The modifications in the JAXBackend class include updates to serialization and deserialization methods, which may relate to how atomic parameters are managed in the context of the main PR's changes to the GeneralFitting class.
  • feat(jax): checkpoint I/O #4236: This PR introduces checkpoint I/O functionality in the JAXBackend, which may involve handling model parameters, including atomic parameters, in a way that is relevant to the changes made in the main PR.
  • docs: fix parameter links #4239: The updates to parameter links in the documentation may indirectly relate to the changes made in the main PR, as they both focus on improving the clarity and usability of the handling of parameters in the fitting processes.

Suggested labels

Docs

Suggested reviewers

  • njzjz
  • wanghan-iapcm
  • iProzd


coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (10)
source/tests/consistent/fitting/common.py (2)

33-38: LGTM! Consider adding docstring updates.

The aparam handling implementation follows the established pattern and correctly creates a 2D placeholder for atomic parameters. However, the method's docstring should be updated to document the new parameter.

Add parameter documentation:

def build_tf_fitting(self, obj, inputs, natoms, atype, fparam, aparam, suffix):
    """Build TF fitting graph.
    
    Args:
        ...
        aparam: Atomic parameters tensor of shape [natoms, numb_aparam]
        ...
    """

Line range hint 21-38: Consider documenting the multi-backend testing strategy.

While this PR focuses on adding use_aparam_as_mask support for the pt backend, these changes to the TF test utilities suggest a strategy to maintain consistent behavior across backends. Consider documenting this approach in the project's testing guidelines.

deepmd/pt/model/task/invar_fitting.py (1)

80-81: Enhance parameter documentation with usage examples.

While the documentation clearly states the parameter's purpose, it would be helpful to add:

  1. Example scenarios when users should set this to True
  2. The implications on the fitting process
  3. Any performance considerations
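
For example, the docstring could be expanded along these lines (suggested wording, not the PR's actual text):

use_aparam_as_mask : bool
    If True, the atomic parameters are treated as per-atom masks
    (e.g. 1 for real atoms, 0 for placeholder atoms) and are excluded
    from the fitting-net input; the input dimension then omits
    numb_aparam. Enable this when aparam should only select which
    atoms contribute to the fit, not act as an input feature.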
source/tests/pt/model/test_ener_fitting.py (1)

155-188: Consider adding test case for use_aparam_as_mask=False

While the test comprehensively verifies the input dimensions when use_aparam_as_mask is True, it would be beneficial to also verify the dimensions when use_aparam_as_mask is False to ensure complete coverage.

Consider adding a test case like this:

# Test when use_aparam_as_mask is False
ft0 = InvarFitting(
    "foo",
    self.nt,
    dd0.dim_out,
    od,
    numb_fparam=nfp,
    numb_aparam=nap,
    mixed_types=mixed_types,
    exclude_types=et,
    neuron=nn,
    seed=GLOBAL_SEED,
    use_aparam_as_mask=False,
).to(env.DEVICE)
in_dim = ft0.dim_descrpt + ft0.numb_fparam + ft0.numb_aparam
assert ft0.filter_layers[0].in_dim == in_dim
source/tests/consistent/fitting/test_dos.py (1)

122-124: Consider adding a comment to clarify the aparam shape.

While the initialization is correct, it would be helpful to add a comment explaining that aparam is initialized as a column vector with shape (natoms, 1) to match the atomic parameter requirements.

 self.aparam = np.zeros_like(
     self.atype, dtype=GLOBAL_NP_FLOAT_PRECISION
-).reshape(-1, 1)
+).reshape(-1, 1)  # Shape: (natoms, 1) for atomic parameters
deepmd/dpmodel/fitting/general_fitting.py (2)

158-166: LGTM! Consider adding docstring clarification.

The conditional initialization of aparam-related attributes and input dimension calculation is logically sound. When use_aparam_as_mask is True, atomic parameters are correctly excluded from the input dimension since they're used only as masks.

Consider adding a docstring comment above the in_dim calculation to explain the logic:

# Calculate input dimension: descriptor + frame params + (optionally) atomic params
# Atomic params are only included when not used as masks

Line range hint 158-408: Consider performance optimization for mask operations.

While the implementation is clean and maintains backward compatibility, using atomic parameters as masks might have performance implications, especially for large systems.

Consider these optimizations:

  1. When use_aparam_as_mask is True, consider using a more memory-efficient boolean mask type instead of the full atomic parameter array (see the sketch after this list)
  2. Consider caching the mask computation results if the atomic parameters don't change frequently during training
  3. Add a performance note in the class docstring about the memory and computational overhead when using atomic parameters as masks
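
A sketch of the first suggestion, assuming aparam carries 0/1 values in a trailing singleton dimension (the function name and shapes are illustrative, not part of the PR):

import torch

def apply_aparam_mask(atom_output: torch.Tensor, aparam: torch.Tensor) -> torch.Tensor:
    # aparam: (nframes, natoms, 1) float tensor holding 0/1 mask values
    # atom_output: (nframes, natoms, odim) per-atom fitting output
    mask = aparam.to(torch.bool).squeeze(-1)  # boolean (nframes, natoms), 1 byte/atom
    return atom_output * mask.unsqueeze(-1).to(atom_output.dtype)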
deepmd/pt/model/task/fitting.py (3)

129-130: Enhance parameter documentation

While the documentation explains the basic purpose, consider adding:

  • The default value (False)
  • The implications of setting this to True (e.g., how it affects the network's behavior and output)

213-215: Fix indentation for consistency

The indentation of these lines appears to be inconsistent with the surrounding code.

-       in_dim = self.dim_descrpt + self.numb_fparam
-       if not self.use_aparam_as_mask:
-           in_dim += self.numb_aparam
+        in_dim = self.dim_descrpt + self.numb_fparam
+        if not self.use_aparam_as_mask:
+            in_dim += self.numb_aparam

Line range hint 447-467: Use f-strings consistently in error messages

The error messages mix f-string and string concatenation styles. Consider using f-strings consistently:

-                    f"get an input aparam of dim {aparam.shape[-1]}, ",
-                    f"which is not consistent with {self.numb_aparam}.",
+                    f"get an input aparam of dim {aparam.shape[-1]}, which is not consistent with {self.numb_aparam}."
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 18026eb and 044023e.

📒 Files selected for processing (10)
  • deepmd/dpmodel/fitting/general_fitting.py (2 hunks)
  • deepmd/dpmodel/fitting/invar_fitting.py (0 hunks)
  • deepmd/pt/model/task/fitting.py (7 hunks)
  • deepmd/pt/model/task/invar_fitting.py (3 hunks)
  • deepmd/tf/fit/ener.py (9 hunks)
  • source/tests/consistent/fitting/common.py (2 hunks)
  • source/tests/consistent/fitting/test_dos.py (14 hunks)
  • source/tests/consistent/fitting/test_ener.py (13 hunks)
  • source/tests/consistent/fitting/test_property.py (13 hunks)
  • source/tests/pt/model/test_ener_fitting.py (6 hunks)
💤 Files with no reviewable changes (1)
  • deepmd/dpmodel/fitting/invar_fitting.py
🧰 Additional context used
🔇 Additional comments (28)
source/tests/consistent/fitting/common.py (1)

21-21: Verify existing test compatibility with the new parameter.

The addition of the required aparam parameter might affect existing test cases that use this utility method.

✅ Verification successful

Let me analyze the results and gather more information to verify the impact comprehensively.

The previous results show multiple test files using build_tf_fitting, but we need to see the complete method calls to verify if they're properly updated with the new aparam parameter. This will help us confirm if the changes are consistent across all test cases.


Let me check the actual implementation of these test classes to understand if they're properly handling the new aparam parameter.

This will help us verify if the test classes properly initialize the aparam attribute and include it in their parameters, which is necessary for the updated build_tf_fitting method.


Let me check one final thing to verify how the test methods are calling build_tf_fitting with the new aparam parameter.

This will help us see the complete test methods and verify if they're properly passing the aparam parameter to build_tf_fitting.


Based on the comprehensive analysis of the test files, I can now provide the final response:

All test cases are properly updated to handle the new aparam parameter

The verification shows that:

  • All test classes initialize self.aparam correctly in their setup
  • Test files (dos.py, property.py, ener.py) that use parameters properly handle the aparam with conditional logic: self.aparam if numb_aparam else None
  • Other test files (dipole.py, polar.py) explicitly pass None for the aparam parameter
  • The changes maintain consistency across all test implementations
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for existing test cases using build_tf_fitting to ensure they're updated
rg "build_tf_fitting\(" "source/tests" -A 3

Length of output: 2468


Script:

#!/bin/bash
# Get the full context of test files to see how they call build_tf_fitting
for file in test_dos.py test_dipole.py test_property.py test_polar.py test_ener.py; do
    echo "=== $file ==="
    rg "build_tf_fitting" "source/tests/consistent/fitting/$file" -B 2 -A 5
done

Length of output: 2201


Script:

#!/bin/bash
# Check the test class implementations to see how aparam is initialized and handled
for file in test_dos.py test_dipole.py test_property.py test_polar.py test_ener.py; do
    echo "=== $file ==="
    rg "def setUp|self\.aparam = |self\.param = " "source/tests/consistent/fitting/$file" -B 1 -A 3
done

# Also check the base class implementation
echo "=== common.py ==="
rg "class FittingTest|def setUp" "source/tests/consistent/fitting/common.py" -A 10

Length of output: 2800


Script:

#!/bin/bash
# Check the test methods to see how they call build_tf_fitting
for file in test_dos.py test_dipole.py test_property.py test_polar.py test_ener.py; do
    echo "=== $file ==="
    rg "def test_|build_tf_fitting" "source/tests/consistent/fitting/$file" -B 1 -A 10
done

Length of output: 2695

deepmd/pt/model/task/invar_fitting.py (2)

103-103: LGTM! Parameter addition is well-implemented.

The new parameter is correctly typed, has an appropriate default value, and maintains backward compatibility.


127-128: Verify superclass parameter handling.

The parameter is correctly passed to the superclass, but let's verify that GeneralFitting properly handles this parameter.

✅ Verification successful

Based on the shell script results, I can now generate the final response as the verification is complete. The results show that GeneralFitting properly handles the use_aparam_as_mask parameter:

Parameter is properly handled in the superclass

The GeneralFitting class correctly:

  • Accepts and initializes the use_aparam_as_mask parameter with proper type annotation
  • Stores it as an instance variable
  • Uses it consistently in multiple methods to control aparam-related behavior:
    • Affects the network input dimension calculation
    • Controls the initialization of aparam_avg and aparam_inv_std buffers
    • Guards aparam validation and processing logic
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify GeneralFitting class implementation for use_aparam_as_mask

# Test: Check if GeneralFitting properly initializes the parameter
ast-grep --pattern $'class GeneralFitting {
  $$$
  def __init__($$$, use_aparam_as_mask: bool = False, $$$) {
    $$$
  }
  $$$
}'

# Test: Check how the parameter affects the fitting net initialization
rg -A 10 "use_aparam_as_mask" "deepmd/pt/model/task/fitting.py"

Length of output: 3084

source/tests/pt/model/test_ener_fitting.py (2)

Line range hint 50-70: LGTM: Comprehensive test coverage for use_aparam_as_mask parameter

The test_consistency method has been properly updated to include use_aparam_as_mask in its parameter combinations, ensuring thorough testing of both True and False cases across different configurations.


Line range hint 111-130: LGTM: JIT compatibility verified for use_aparam_as_mask

The test_jit method properly verifies that the InvarFitting class remains compatible with PyTorch's JIT compilation after adding the use_aparam_as_mask parameter.

source/tests/consistent/fitting/test_property.py (4)

43-43: LGTM! Good test coverage for atomic parameters.

The parameterized test with numb_aparam values of 0 and 1 ensures proper testing of functionality both with and without atomic parameters.


55-55: LGTM! Proper integration of numb_aparam in test data.

The new parameter is correctly unpacked and included in the test data dictionary.

Also applies to: 64-64


102-104: Verify if zero initialization of aparam is sufficient for testing.

While the shape and type of self.aparam are correctly set up, consider if initializing with zeros provides adequate test coverage. Different values might help catch edge cases or numerical issues.

Would you like me to suggest alternative initialization patterns that could provide better test coverage?


139-139: LGTM! Consistent aparam handling across all backends.

The implementation correctly:

  • Handles atomic parameters conditionally based on numb_aparam
  • Converts data appropriately for each backend
  • Maintains consistent behavior across TF, PT, and DP implementations

Also applies to: 160-162, 183-183

source/tests/consistent/fitting/test_dos.py (4)

61-61: LGTM: Appropriate test parameterization for atomic parameters.

The addition of numb_aparam with values (0, 1) ensures comprehensive testing of both cases: with and without atomic parameters.


72-72: LGTM: Consistent property updates.

The numb_aparam parameter is correctly integrated into the data property and skip conditions, maintaining consistency with the test parameterization.

Also applies to: 80-80, 92-92


177-179: LGTM: Consistent aparam handling across all backends.

The atomic parameter handling is implemented consistently across all backends (PT, DP, JAX, Array API Strict), with proper conditional checks and tensor conversions where necessary.

Also applies to: 199-199, 216-216, 235-235


Line range hint 61-273: Verify test coverage for atomic parameter combinations.

The implementation looks good, but let's verify that all important combinations of atomic parameters are being tested.

✅ Verification successful

Let me analyze the test coverage further by examining the content of these test files.


Let me check one more aspect to verify the test coverage comprehensively.


Let me check one final aspect about the test parameters.


Test coverage for atomic parameters is comprehensive

The verification shows that the test coverage for atomic parameters is complete and well-structured:

  1. The test uses parameterized testing with numb_aparam values of both 0 and 1, covering cases with and without atomic parameters
  2. The test matrix includes combinations with:
    • Different precision values
    • Mixed and non-mixed types
    • Various numbers of DOS points (10, 20)
    • With and without frame parameters (numb_fparam: 0, 1)
  3. The implementation is tested across all backends (TF, PyTorch, JAX, Array API Strict)
  4. Each backend properly handles the conditional passing of atomic parameters
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check test coverage for atomic parameter combinations

# Test 1: Check if there are other test files for DOS fitting
echo "Checking for other DOS fitting test files..."
fd -e py "test.*dos" -x echo "Found test file: {}"

# Test 2: Check for any TODO comments related to testing
echo -e "\nChecking for TODO comments in DOS fitting tests..."
rg -i "todo.*dos|todo.*aparam" -g "test_*.py"

Length of output: 538


Script:

#!/bin/bash
# Check the test parameter combinations in DOS-related test files

echo "Checking test_dos.py parameters..."
rg "param.*=" -A 5 source/tests/consistent/fitting/test_dos.py

echo -e "\nChecking test_deepdos.py parameters..."
rg "param.*=" -A 5 source/tests/tf/test_deepdos.py

echo -e "\nChecking test_model_dos.py parameters..."
rg "param.*=" -A 5 source/tests/tf/test_model_dos.py

echo -e "\nChecking test_fitting_dos.py parameters..."
rg "param.*=" -A 5 source/tests/tf/test_fitting_dos.py

# Check for atomic parameter handling in DOS implementations
echo -e "\nChecking atomic parameter handling in DOS implementations..."
rg "aparam" -B 2 -A 2 "source/tests/tf/test_*dos.py"

Length of output: 2559


Script:

#!/bin/bash
# Check the test parameters and combinations in the test_dos.py file

echo "Checking test parameters in test_dos.py..."
rg "^@pytest.mark.parametrize|^class.*Test" -A 10 source/tests/consistent/fitting/test_dos.py

echo -e "\nChecking the test setup and parameter initialization..."
ast-grep --pattern 'class TestDOS {
  $$$
  setUp($$$) {
    $$$
  }
  $$$
}'

Length of output: 674


Script:

#!/bin/bash
# Check the test parameter combinations at the top of the file

echo "Checking test parameter combinations..."
rg -B 5 "class TestDOS" source/tests/consistent/fitting/test_dos.py

# Check if there are any test cases that specifically test atomic parameters
echo -e "\nChecking atomic parameter specific tests..."
rg "test.*aparam|test.*atomic" source/tests/consistent/fitting/test_dos.py

# Check the parameter tuples used in testing
echo -e "\nChecking parameter tuples..."
rg "\(.*numb_aparam" -B 2 source/tests/consistent/fitting/test_dos.py

Length of output: 1199

source/tests/consistent/fitting/test_ener.py (4)

63-65: LGTM: Well-structured parameter additions with proper conditional handling.

The new parameters numb_aparam and use_aparam_as_mask are correctly integrated into the test matrix. The conditional application of use_aparam_as_mask only when numb_aparam > 0 is a good safeguard.

Also applies to: 75-88


137-139: LGTM: Proper initialization of aparam with correct shape and dtype.

The initialization of self.aparam follows best practices by:

  • Using zeros_like to match the dtype of self.atype
  • Reshaping to (-1, 1) for proper dimensionality

192-201: LGTM: Consistent implementation across all backends.

The aparam handling is implemented uniformly across all backends with:

  • Proper tensor/array type conversion for each backend
  • Consistent conditional passing based on numb_aparam
  • Alignment with existing fparam pattern

Also applies to: 222-222, 240-240, 260-260


63-65: Verify test coverage for edge cases.

While the test matrix is comprehensive, please verify if the following scenarios are covered in other test files:

  1. Interaction between use_aparam_as_mask and different atom types
  2. Boundary conditions for aparam values when used as mask
✅ Verification successful

Let me gather more information about the test coverage by examining the additional test file found and the implementation details.


Let me search for the implementation details of the masking functionality to better understand how it interacts with atom types.


Let me search for the implementation of the fitting network to understand how aparam masking is used.


Based on the test files examined, I can now provide a final response about the test coverage:

Test coverage for aparam masking is comprehensive

The test coverage for use_aparam_as_mask is thorough and well-structured:

  • The test matrix in test_ener.py covers:

    • Both True/False cases for use_aparam_as_mask
    • Only applies masking when numb_aparam > 0 (correctly handles edge case)
    • Tests across multiple precision types (float64, float32, bfloat16)
    • Tests with different atom types through mixed_types parameter
    • Tests with different output dimensions (od: 1, 3)
    • Tests with and without exclude_types ([], [0], [1])
  • Additional dedicated test test_use_aparam_as_mask in test_ener_fitting.py verifies:

    • Input dimension consistency when masking is enabled
    • Serialization/deserialization behavior
    • Implementation consistency across different frameworks (TF, PyTorch, JAX)
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for additional test coverage of use_aparam_as_mask
echo "Searching for additional test files covering use_aparam_as_mask..."
rg -l "use_aparam_as_mask.*=.*" "source/tests"

echo "Searching for test assertions related to aparam masking..."
rg "assert.*aparam.*mask" "source/tests"

Length of output: 400


Script:

#!/bin/bash
# Check the content of the additional test file
echo "Content of source/tests/pt/model/test_ener_fitting.py:"
cat source/tests/pt/model/test_ener_fitting.py

# Search for implementation details of aparam masking
echo -e "\nSearching for aparam masking implementation:"
rg -A 5 "use_aparam_as_mask" source/

Length of output: 36153


Script:

#!/bin/bash
# Search for the implementation of aparam masking in the source code
echo "Searching for aparam masking implementation in source files:"
rg -A 10 "class.*Fitting.*" source/ | grep -A 15 "class.*Fitting"

# Search for any documentation or comments about aparam masking
echo -e "\nSearching for documentation about aparam masking:"
rg -i "aparam.*mask" source/ --type md --type txt

Length of output: 34798


Script:

#!/bin/bash
# Search for the implementation of fitting network and aparam masking
echo "Searching for fitting network implementation with aparam masking:"
ast-grep --pattern 'class $_ {
  $$$
  forward($$$) {
    $$$
    aparam $$$
    $$$
  }
  $$$
}'

# Search for test cases that verify aparam masking behavior
echo -e "\nSearching for test assertions related to aparam masking:"
rg -B 5 -A 5 "test.*aparam.*mask" source/tests/

Length of output: 1312

deepmd/dpmodel/fitting/general_fitting.py (1)

Line range hint 394-408: LGTM! Verify impact on existing models.

The conditional processing of atomic parameters in _call_common is consistent with the constructor changes and maintains proper error handling and normalization.

Let's verify that this change doesn't affect existing models that don't use the mask feature:

✅ Verification successful

Let me gather more information to verify the impact comprehensively.

This will help us understand:

  1. The context around aparam configurations
  2. How aparam is used in function implementations
  3. Any test configurations that might be affected

Based on the comprehensive analysis of the codebase, I can now provide the final response:

Changes to aparam handling are safe and backward compatible

The verification shows that:

  • Existing tests and implementations handle numb_aparam consistently with both masked and non-masked scenarios
  • The new use_aparam_as_mask flag is properly isolated and only affects code paths where it's explicitly enabled
  • Test configurations demonstrate both use cases:
    • Traditional aparam usage (water_se_a_aparam.json: numb_aparam=2)
    • Mask usage (zinc_se_a_mask.json: numb_aparam=1, use_aparam_as_mask=true)

The changes in _call_common maintain backward compatibility while safely adding the new masking feature.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for existing usages of aparam to ensure compatibility
rg -l "numb_aparam\s*=\s*[1-9]" | while read -r file; do
    echo "=== $file ==="
    rg "use_aparam_as_mask" "$file" || echo "No explicit mask usage found"
done

Length of output: 529


Script:

#!/bin/bash
# Let's check how aparam is used in test files and actual implementation
echo "=== Checking aparam usage patterns ==="
rg -A 5 "numb_aparam\s*=\s*[1-9]"

echo -e "\n=== Checking aparam-related function calls ==="
ast-grep --pattern 'self.numb_aparam'

echo -e "\n=== Checking test configurations ==="
fd -e json . | xargs rg "aparam"

Length of output: 7711

deepmd/pt/model/task/fitting.py (4)

152-152: LGTM!

The parameter addition follows best practices with appropriate default value.


Line range hint 201-211: LGTM!

The conditional buffer registration logic is well-implemented and handles all cases appropriately.


305-305: LGTM!

The serialization change properly preserves the new parameter state.


129-130: Verify usage of the new parameter across the codebase

Since this introduces a new parameter that affects the network's behavior, we should verify that all callers are updated appropriately.

Also applies to: 152-152

✅ Verification successful

Let me try a different search strategy to find the usages of GeneralFitting.


Let me search for more specific information about how this class is used in the codebase.


Based on the search results, I can now provide a final response:

The new parameter appears to be a safe addition

The use_aparam_as_mask parameter is being added to the GeneralFitting class definition, but our codebase analysis shows that:

  1. There are no direct instantiations of GeneralFitting found in the codebase
  2. The class appears to be primarily used as a base class for inheritance, as mentioned in the documentation (doc/development/create-a-model-pt.md)
  3. The parameter is optional (being a boolean flag) and likely has a default value, making it backward compatible

Since this is an optional parameter with no existing direct usage found in the codebase, no updates to callers are necessary.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for instantiations of GeneralFitting to ensure they handle the new parameter
ast-grep --pattern 'GeneralFitting($$$)'

# Search for any tests that might need updating
rg -l "GeneralFitting" --glob "test_*.py"

Length of output: 84


Script:

#!/bin/bash
# Search for GeneralFitting class definition and its usages
rg -A 5 "class GeneralFitting"

# Search for any imports of GeneralFitting
rg "from.*GeneralFitting|import.*GeneralFitting"

# Search for any instantiation patterns
rg "GeneralFitting\("

Length of output: 1521


Script:

#!/bin/bash
# Look for the actual instantiation of GeneralFitting in Python files
rg -B 5 -A 5 "GeneralFitting\s*\(" --type py

# Check the test files in the relevant directories
fd "test.*\.py" deepmd/pt/model/task/ deepmd/dpmodel/fitting/

# Check for any configuration or JSON files that might reference this class
rg -l "GeneralFitting" --glob "*.json"

Length of output: 1374

deepmd/tf/fit/ener.py (6)

343-344: LGTM: Conditional aparam stats computation.

The change correctly skips computing aparam statistics when use_aparam_as_mask is True, as the aparam values are used as binary masks rather than numerical parameters.


Line range hint 387-392: LGTM: Skip aparam concatenation when used as mask.

The change correctly prevents aparam from being concatenated to the input layer when it's used as a mask, maintaining consistency with the parameter's intended purpose.


605-609: LGTM: Skip aparam normalization when used as mask.

The change correctly skips the normalization of aparam values when they are used as masks, which is consistent with their binary nature in this case.
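
In other words, the guard has this shape (a standalone NumPy paraphrase, not the actual TF graph code from the diff):

import numpy as np

def maybe_normalize_aparam(aparam: np.ndarray, aparam_avg: np.ndarray,
                           aparam_inv_std: np.ndarray,
                           use_aparam_as_mask: bool) -> np.ndarray:
    # Standardization with the precomputed statistics is skipped entirely
    # when aparam acts as a binary mask; 0/1 values pass through untouched.
    if not use_aparam_as_mask:
        return (aparam - aparam_avg) * aparam_inv_std
    return aparam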


Line range hint 785-789: LGTM: Skip loading aparam statistics when used as mask.

The change correctly skips loading aparam statistics from the graph when use_aparam_as_mask is True, maintaining consistency with the parameter's usage as a binary mask.


885-887: LGTM: Skip deserializing aparam statistics when used as mask.

The change correctly skips deserializing aparam statistics when use_aparam_as_mask is True, maintaining consistency with the parameter's role as a binary mask.


Line range hint 1-1: Verify test coverage for the new feature.

The changes consistently implement the use_aparam_as_mask feature. Let's verify that appropriate test coverage exists.

✅ Verification successful

Let me examine the test files more closely to verify the test coverage for the use_aparam_as_mask feature.


Test coverage for use_aparam_as_mask feature is comprehensive

The feature has thorough test coverage across multiple test files:

  • test_ener_fitting.py: Contains dedicated test cases that:

    • Test the feature with different combinations of parameters through itertools.product
    • Has a specific test method test_use_aparam_as_mask
    • Verifies serialization/deserialization with the feature
    • Includes JIT compilation tests
  • test_ener.py: Implements comprehensive parametrized tests that:

    • Cover both True/False cases for use_aparam_as_mask
    • Tests across multiple frameworks (TF, PyTorch, JAX)
    • Includes precision testing for different data types
    • Properly handles the feature's dependency on numb_aparam
  • zinc_se_a_mask.json: Provides an integration test configuration with the feature enabled

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for test files that should cover the new feature
rg -l "use_aparam_as_mask" "source/tests/"

Length of output: 588


Script:

#!/bin/bash
# Check content of test files for actual test cases
echo "=== Checking test_ener_fitting.py ==="
rg -C 5 "use_aparam_as_mask" "source/tests/pt/model/test_ener_fitting.py"

echo -e "\n=== Checking test_ener.py ==="
rg -C 5 "use_aparam_as_mask" "source/tests/consistent/fitting/test_ener.py"

echo -e "\n=== Checking test configuration in zinc_se_a_mask.json ==="
cat "source/tests/tf/zinc_se_a_mask.json"

Length of output: 8506

deepmd/tf/fit/ener.py: review comment (outdated, resolved)
deepmd/tf/fit/ener.py: code-scanning alert (fixed)
deepmd/tf/fit/ener.py: code-scanning alert (fixed)
@@ -86,6 +89,7 @@
precision,
mixed_types,
numb_fparam,
numb_aparam,

Check notice (Code scanning / CodeQL, test): Unused local variable: numb_aparam is not used.
source/tests/consistent/fitting/test_ener.py: code-scanning alert (fixed)
source/tests/consistent/fitting/test_ener.py: code-scanning alert (fixed)
@@ -71,6 +74,7 @@
precision,
mixed_types,
numb_fparam,
numb_aparam,

Check notice (Code scanning / CodeQL, test): Unused local variable: numb_aparam is not used.
codecov bot commented Oct 23, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 84.28%. Comparing base (fa61d69) to head (8a5cc16).
Report is 1 commit behind head on devel.

Additional details and impacted files
@@            Coverage Diff             @@
##            devel    #4246      +/-   ##
==========================================
+ Coverage   84.23%   84.28%   +0.05%     
==========================================
  Files         549      549              
  Lines       51462    51460       -2     
  Branches     3051     3051              
==========================================
+ Hits        43348    43375      +27     
+ Misses       7153     7128      -25     
+ Partials      961      957       -4     


coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
deepmd/pt/model/task/fitting.py (2)

129-130: Documentation needs improvement

The docstring description for use_aparam_as_mask is too brief and doesn't explain the purpose or impact of this parameter.

Consider expanding the documentation to explain:

  • The purpose of using atomic parameters as masks
  • The impact on the fitting network's behavior
  • When users should enable this option

Line range hint 1-500: Consider adding unit tests for the new feature

The changes look good overall, but it would be beneficial to have comprehensive test coverage for the new use_aparam_as_mask functionality.

Would you like me to help generate unit tests that cover:

  1. Initialization with use_aparam_as_mask=True
  2. Buffer registration behavior
  3. Input dimension calculation
  4. Forward pass with masked aparams
  5. Serialization/deserialization
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 044023e and ef02e5c.

📒 Files selected for processing (3)
  • deepmd/dpmodel/fitting/general_fitting.py (2 hunks)
  • deepmd/pt/model/task/fitting.py (7 hunks)
  • deepmd/tf/fit/ener.py (8 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • deepmd/dpmodel/fitting/general_fitting.py
🧰 Additional context used
🪛 GitHub Check: CodeQL
deepmd/tf/fit/ener.py

[failure] 608-608: Potentially uninitialized local variable
Local variable 't_aparam_avg' may be used before it is initialized.


[failure] 608-608: Potentially uninitialized local variable
Local variable 't_aparam_istd' may be used before it is initialized.

🔇 Additional comments (13)
deepmd/pt/model/task/fitting.py (5)

152-152: LGTM: Parameter initialization is correct

The new parameter is properly added to the constructor with a sensible default value (False) and correctly initialized as an instance variable.

Also applies to: 170-170


Line range hint 201-212: LGTM: Buffer registration logic is sound

The conditional buffer registration for aparam_avg and aparam_inv_std is correctly implemented based on both numb_aparam > 0 and not use_aparam_as_mask conditions.


213-217: LGTM: Input dimension calculation is accurate

The input dimension calculation correctly excludes numb_aparam from the total when use_aparam_as_mask is True, which aligns with the feature's purpose.


307-307: LGTM: Serialization is properly updated

The use_aparam_as_mask parameter is correctly included in the serialized output, ensuring the state is preserved.


Line range hint 449-469: Verify error handling for aparam when used as mask

When use_aparam_as_mask is True but aparam is None, the code might need explicit handling.

Let's verify if there are any test cases covering this scenario:

Additionally, consider adding validation:

 if self.numb_aparam > 0 and not self.use_aparam_as_mask:
     assert aparam is not None, "aparam should not be None"
     assert self.aparam_avg is not None
     assert self.aparam_inv_std is not None
+elif self.numb_aparam > 0 and self.use_aparam_as_mask:
+    assert aparam is not None, "aparam should not be None when used as mask"
deepmd/tf/fit/ener.py (8)

Line range hint 343-356: Logic for Computing Atomic Parameter Statistics is Correct

The code correctly computes the mean and standard deviation of atomic parameters when use_aparam_as_mask is False. It accumulates the sums needed for calculating statistics across all systems, ensuring accurate normalization.


Line range hint 387-392: Appending Atomic Parameters to the Layer

The atomic parameters are appropriately sliced, reshaped, cast to the fitting precision, and concatenated to the input layer when use_aparam_as_mask is False, ensuring they are integrated correctly into the model.


Line range hint 508-512: Initialization of aparam_avg and aparam_inv_std

The code safely initializes self.aparam_avg and self.aparam_inv_std to default values if they are None, preventing potential errors due to uninitialized variables.


Line range hint 564-571: Initialization of TensorFlow Variables for Atomic Parameters

TensorFlow variables t_aparam_avg and t_aparam_istd are correctly initialized within the variable scope when use_aparam_as_mask is False, ensuring the variables are available for normalization during model building.


605-609: Proper Use of Atomic Parameter Variables

The atomic parameters are normalized using t_aparam_avg and t_aparam_istd before being reshaped and utilized in the model when use_aparam_as_mask is False. The conditional checks ensure that variables are used only when they are properly initialized.

🧰 Tools
🪛 GitHub Check: CodeQL

[failure] 608-608: Potentially uninitialized local variable
Local variable 't_aparam_avg' may be used before it is initialized.


[failure] 608-608: Potentially uninitialized local variable
Local variable 't_aparam_istd' may be used before it is initialized.


Line range hint 785-789: Retrieval of Atomic Parameter Statistics During Initialization

During variable initialization, self.aparam_avg and self.aparam_inv_std are retrieved from the graph when use_aparam_as_mask is False, ensuring consistency between serialized and deserialized models.


885-888: Deserialization of Atomic Parameter Statistics

The deserialize method correctly reconstructs aparam_avg and aparam_inv_std from the serialized data when use_aparam_as_mask is False, maintaining the integrity of the model's state.


924-928: Input Dimension Calculation is Clear and Maintains Correctness

The calculation of in_dim now conditionally includes self.numb_aparam based on the use_aparam_as_mask flag, improving readability and ensuring the input dimension is accurate.

@ChiahsinChu force-pushed the devel-use_aparam_as_mask branch from d51bcb7 to 5cdef87 (Oct 24, 2024 03:12)
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
deepmd/pt/model/task/fitting.py (2)

129-130: Enhance documentation for use_aparam_as_mask parameter

While the documentation explains the basic functionality, it would be helpful to add:

  • The implications of setting this parameter to True
  • Common use cases or scenarios where this should be used
  • Impact on the model's behavior
     use_aparam_as_mask: bool
-        If True, the aparam will not be used in fitting net for embedding.
+        If True, the aparam will not be used in fitting net for embedding. This affects:
+        - Input dimension calculation (excludes numb_aparam)
+        - Forward pass behavior (skips aparam processing)
+        Use this when atomic parameters should only influence the output masking
+        rather than being used as part of the embedding computation.

213-217: Improve formatting of input dimension calculation

While the logic is correct, the formatting could be improved for better readability.

-        in_dim = (
-            self.dim_descrpt
-            + self.numb_fparam
-            + (0 if self.use_aparam_as_mask else self.numb_aparam)
-        )
+        in_dim = self.dim_descrpt + self.numb_fparam + (
+            0 if self.use_aparam_as_mask else self.numb_aparam
+        )
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between ef02e5c and 68334ba.

📒 Files selected for processing (3)
  • deepmd/dpmodel/fitting/general_fitting.py (2 hunks)
  • deepmd/pt/model/task/fitting.py (6 hunks)
  • deepmd/tf/fit/ener.py (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • deepmd/dpmodel/fitting/general_fitting.py
  • deepmd/tf/fit/ener.py
🔇 Additional comments (4)
deepmd/pt/model/task/fitting.py (4)

152-152: LGTM! Parameter addition follows best practices

The parameter addition maintains backward compatibility with its default value and follows Python's parameter ordering conventions.


170-170: LGTM! Proper instance variable initialization

The initialization is correctly placed and maintains the class's state consistency.


307-307: LGTM! Proper serialization of new parameter

The parameter is correctly included in the serialization dictionary, ensuring proper model persistence.


449-449: LGTM! Verify aparam processing behavior

The condition correctly implements the masking behavior. However, let's verify that this change doesn't affect existing models.

@ChiahsinChu force-pushed the devel-use_aparam_as_mask branch from 2c0ec61 to 610767d (Oct 24, 2024 03:33)
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (3)
deepmd/pt/model/task/fitting.py (3)

215-219: Consider improving readability with a more explicit condition.

The logic is correct, but could be more readable with an explicit condition.

-in_dim = (
-    self.dim_descrpt
-    + self.numb_fparam
-    + (0 if self.use_aparam_as_mask else self.numb_aparam)
-)
+in_dim = self.dim_descrpt + self.numb_fparam
+if not self.use_aparam_as_mask:
+    in_dim += self.numb_aparam

309-309: Consider adding deserialization test.

The serialization change is correct, but it would be beneficial to add a test case ensuring proper deserialization of models with this parameter.

Would you like me to help generate a test case for the deserialization?
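
As a starting point, a round-trip check might look like this (a sketch; the constructor arguments are illustrative and serialize/deserialize are assumed to be the existing GeneralFitting methods):

from deepmd.pt.model.task.invar_fitting import InvarFitting

ft0 = InvarFitting(
    "foo", 2, 64, 1,  # name, ntypes, dim_descrpt, dim_out (illustrative)
    numb_aparam=2,
    use_aparam_as_mask=True,
)
ft1 = InvarFitting.deserialize(ft0.serialize())
assert ft1.use_aparam_as_mask
assert ft1.filter_layers[0].in_dim == ft0.filter_layers[0].in_dim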


Line range hint 129-471: Architecture looks solid with good separation of concerns.

The implementation of use_aparam_as_mask is well-integrated and maintains backward compatibility. The feature is consistently implemented across initialization, processing, and serialization layers.

A few architectural considerations:

  1. The feature cleanly separates the dual use of aparam (as input vs. mask).
  2. The implementation preserves backward compatibility for existing models.
  3. The changes follow the single responsibility principle.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 5381bc6 and 4e20535.

📒 Files selected for processing (2)
  • deepmd/dpmodel/fitting/general_fitting.py (2 hunks)
  • deepmd/pt/model/task/fitting.py (6 hunks)
🔇 Additional comments (4)
deepmd/dpmodel/fitting/general_fitting.py (2)

176-180: LGTM: Input dimension calculation correctly handles the use_aparam_as_mask flag.

The conditional inclusion of numb_aparam in the input dimension calculation is correct. When use_aparam_as_mask is True, atomic parameters are used as masks rather than as input features, so they should not contribute to the network's input dimension.


Line range hint 408-424: Consider adding validation for mask values when use_aparam_as_mask is True.

While the conditional processing of atomic parameters is correct, there's no validation of the mask values when use_aparam_as_mask is True. Consider adding validation to ensure aparam contains valid mask values (typically 0s and 1s) in this case.
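
For example (a sketch of the suggested validation, not part of the PR):

import numpy as np

def check_mask_values(aparam: np.ndarray) -> None:
    # Masks are expected to be binary: 1 keeps an atom, 0 excludes it.
    if not np.isin(aparam, (0, 1)).all():
        raise ValueError(
            "aparam must contain only 0/1 values when use_aparam_as_mask is True"
        )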

Checking the current usage and values of aparam across the codebase (for example with rg over the test configurations) would confirm which setups rely on mask semantics.

deepmd/pt/model/task/fitting.py (2)

129-130: LGTM: Documentation is clear and well-formatted.

The added parameter documentation follows the existing style and clearly explains its purpose.


Line range hint 451-471: Verify the impact on existing models.

The logic for processing aparam is correct, but we should verify that existing models using aparam continue to work as expected.

✅ Verification successful

These scripts will help us understand:

  1. How aparam is configured in existing test cases and examples
  2. If there are models already using aparam as mask
  3. Specific test coverage for aparam functionality

Based on the search results, I can now generate the final response since we have sufficient information about how aparam is used in the codebase:

The changes to aparam processing are safe and verified.

The codebase shows two distinct patterns of aparam usage:

  1. Regular aparam usage (numb_aparam > 0, use_aparam_as_mask = false) in examples like fparam_aparam.yaml
  2. Aparam as mask (numb_aparam = 1, use_aparam_as_mask = true) in examples like zinc_se_a_mask.json

The new condition if self.numb_aparam > 0 and not self.use_aparam_as_mask correctly separates these two use cases, ensuring that aparam processing only happens for regular aparam usage and not when it's being used as a mask.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for existing model configurations that use aparam
rg -l "numb_aparam.*[1-9]" .

Length of output: 974


Script:

#!/bin/bash
# Let's check the actual usage patterns of aparam in test cases and examples
rg -A 5 "numb_aparam.*[1-9]" ./examples/ ./source/tests/

Length of output: 23181


Script:

#!/bin/bash
# Check if there are any existing models using aparam as mask
rg -B 2 -A 2 "use_aparam_as_mask.*true" .

Length of output: 20364


Script:

#!/bin/bash
# Look for any test cases specifically testing aparam functionality
ast-grep --pattern 'def test_$_aparam$_($$$) { $$$ }'

Length of output: 55

@@ -88,6 +92,7 @@
precision,
mixed_types,
numb_fparam,
(numb_aparam, use_aparam_as_mask),

Check notices (Code scanning / CodeQL, test): Unused local variables: numb_aparam and use_aparam_as_mask are not used.
@@ -101,6 +106,7 @@
precision,
mixed_types,
numb_fparam,
(numb_aparam, use_aparam_as_mask),

Check notices (Code scanning / CodeQL, test): Unused local variables: numb_aparam and use_aparam_as_mask are not used.
@@ -131,6 +140,7 @@
precision,
mixed_types,
numb_fparam,
(numb_aparam, use_aparam_as_mask),

Check notices (Code scanning / CodeQL, test): Unused local variables: numb_aparam and use_aparam_as_mask are not used.
@@ -238,6 +264,7 @@
precision,
mixed_types,
numb_fparam,
(numb_aparam, use_aparam_as_mask),

Check notices (Code scanning / CodeQL, test): Unused local variables: numb_aparam and use_aparam_as_mask are not used.
@@ -257,6 +284,7 @@
precision,
mixed_types,
numb_fparam,
(numb_aparam, use_aparam_as_mask),

Check notices (Code scanning / CodeQL, test): Unused local variables: numb_aparam and use_aparam_as_mask are not used.
@wanghan-iapcm added this pull request to the merge queue (Oct 26, 2024)
github-merge-queue bot removed this pull request from the merge queue due to failed status checks (Oct 26, 2024)
@njzjz added this pull request to the merge queue (Oct 26, 2024)
@njzjz removed this pull request from the merge queue due to a manual request (Oct 26, 2024)
@njzjz (Member) commented Oct 26, 2024

It seems the CI failed due to a test added previously on a certain GPU machine...
