
feat pt: Support property fitting #3867

Merged: 86 commits, Sep 5, 2024
Changes from 76 commits
1e610cf
3866
Chengqian-Zhang Jun 12, 2024
e30dfa0
Support intensive property fitting
Chengqian-Zhang Jun 13, 2024
50c8940
Solve conflict
Chengqian-Zhang Jun 13, 2024
b9a3f9a
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jun 13, 2024
39d3d9c
Fix UT
Chengqian-Zhang Jun 13, 2024
6c24fb5
Merge branch '3866' of github.com:Chengqian-Zhang/deepmd-kit into 3866
Chengqian-Zhang Jun 13, 2024
b2ffadf
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jun 13, 2024
7ece19e
delete try except
Chengqian-Zhang Jun 13, 2024
5c44bba
Solve conflict
Chengqian-Zhang Jun 13, 2024
4295dd5
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jun 13, 2024
0739bf1
Merge branch 'devel' into 3866
Chengqian-Zhang Jun 14, 2024
8ea3269
Add argcheck
Chengqian-Zhang Jun 14, 2024
512f438
Delete input.json
Chengqian-Zhang Jun 14, 2024
a5e29e6
Merge branch 'devel' into 3866
Chengqian-Zhang Jun 14, 2024
9228959
fix UT
Chengqian-Zhang Jun 14, 2024
c8974cb
fix UT
Chengqian-Zhang Jun 14, 2024
2540d29
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jun 14, 2024
792e0c7
Add example
Chengqian-Zhang Jun 14, 2024
8d121bf
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jun 14, 2024
ce4e80f
Add dp model and consistent UT
Chengqian-Zhang Jun 14, 2024
6034cac
Merge branch '3866' of github.com:Chengqian-Zhang/deepmd-kit into 3866
Chengqian-Zhang Jun 14, 2024
7f66234
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jun 14, 2024
1679bfa
Add property deep_eval
Chengqian-Zhang Jun 17, 2024
e2e0454
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jun 17, 2024
beae3ca
fix fitting-->Fitting
Chengqian-Zhang Jun 17, 2024
1481417
add beta doc in loss
Chengqian-Zhang Jun 17, 2024
be6d431
solve pre-commit
Chengqian-Zhang Jun 17, 2024
5bc6d73
Solve conversation
Chengqian-Zhang Jun 17, 2024
3e33f6f
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jun 17, 2024
6e68bf8
Merge branch 'devel' into 3866
Chengqian-Zhang Jun 17, 2024
f239cce
Merge branch 'devel' into 3866
Chengqian-Zhang Jun 18, 2024
cff1ce0
Merge branch 'devel' into 3866
Chengqian-Zhang Jun 25, 2024
761a7dc
Merge branch 'devel' into 3866
Chengqian-Zhang Jun 26, 2024
c7c852b
change reduciable to reducible
Chengqian-Zhang Jun 26, 2024
8850742
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jun 26, 2024
ec25e31
Merge branch 'devel' into 3866
Chengqian-Zhang Jun 27, 2024
07e35a8
Add dp test
Chengqian-Zhang Jun 28, 2024
a1ab5ad
Merge branch '3866' of github.com:Chengqian-Zhang/deepmd-kit into 3866
Chengqian-Zhang Jun 28, 2024
8745744
Merge branch 'devel' into 3866
Chengqian-Zhang Jun 28, 2024
5d74917
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jun 28, 2024
4fb62e9
add get_intensive in atomic_model layer
Chengqian-Zhang Jun 28, 2024
6670877
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jun 28, 2024
286fa65
solv conversions
Chengqian-Zhang Jun 28, 2024
8430080
delete useless file
Chengqian-Zhang Jun 28, 2024
e367275
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jun 28, 2024
8094192
fix UT
Chengqian-Zhang Jun 28, 2024
fcaa14c
delete useless file
Chengqian-Zhang Jun 28, 2024
43dc706
Merge branch '3866' of github.com:Chengqian-Zhang/deepmd-kit into 3866
Chengqian-Zhang Jun 28, 2024
7915373
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jun 28, 2024
8701d10
Add property atomic_model and model in numpy
Chengqian-Zhang Jun 28, 2024
692ac97
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Jun 28, 2024
73ba974
merge devel
Chengqian-Zhang Aug 8, 2024
504a4f2
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Aug 8, 2024
f681f60
delete example
Chengqian-Zhang Aug 8, 2024
467dfb3
Merge branch 'devel' into 3866
Chengqian-Zhang Aug 27, 2024
65ec9d7
change intensive operation to transform_output
Chengqian-Zhang Aug 27, 2024
0dafdb7
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Aug 27, 2024
4a827a4
modify bias computing part
Chengqian-Zhang Aug 27, 2024
ac679a9
Merge branch '3866' of github.com:Chengqian-Zhang/deepmd-kit into 3866
Chengqian-Zhang Aug 27, 2024
2502b89
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Aug 27, 2024
8a2e228
Delete useless get_intensive() method
Chengqian-Zhang Aug 27, 2024
72dd1d7
Add bias_method parameter in fitting_net
Chengqian-Zhang Aug 27, 2024
d423b28
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Aug 27, 2024
e1d3558
add reducible and intensive dependency
Chengqian-Zhang Aug 27, 2024
a865f23
fix UT
Chengqian-Zhang Aug 27, 2024
3fd5697
solve serialize UT error
Chengqian-Zhang Aug 27, 2024
3d3700d
rerun UT
Chengqian-Zhang Aug 27, 2024
d580348
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Aug 27, 2024
4a3f84c
Merge branch 'devel' into 3866
Chengqian-Zhang Aug 28, 2024
11051ca
Add universal test
Chengqian-Zhang Aug 28, 2024
ae4b4a9
Merge branch '3866' of github.com:Chengqian-Zhang/deepmd-kit into 3866
Chengqian-Zhang Aug 28, 2024
633bf8e
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Aug 28, 2024
74b3533
rerun UT
Chengqian-Zhang Aug 28, 2024
b3031d0
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Aug 28, 2024
d5e03e9
Add doc and output_def UT
Chengqian-Zhang Aug 28, 2024
6870ecd
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Aug 28, 2024
5b237ac
change type_map of example
Chengqian-Zhang Sep 1, 2024
9830fd8
delete prop option
Chengqian-Zhang Sep 1, 2024
d0f8949
change doc_metric
Chengqian-Zhang Sep 1, 2024
abd507e
change property example to dpa1
Chengqian-Zhang Sep 1, 2024
a8ef765
Add se_a UT in universal/dpmodel/atomic_model/
Chengqian-Zhang Sep 1, 2024
b105e34
Add dptest property UT
Chengqian-Zhang Sep 3, 2024
d22f6b4
Merge branch 'devel' into 3866
Chengqian-Zhang Sep 3, 2024
a5da5a7
Merge branch '3866' of github.com:Chengqian-Zhang/deepmd-kit into 3866
Chengqian-Zhang Sep 3, 2024
dde48fd
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Sep 3, 2024
4d8934f
Merge branch 'devel' into 3866
Chengqian-Zhang Sep 4, 2024
14 changes: 14 additions & 0 deletions deepmd/dpmodel/atomic_model/property_atomic_model.py
@@ -0,0 +1,14 @@
# SPDX-License-Identifier: LGPL-3.0-or-later
from deepmd.dpmodel.fitting.property_fitting import (
PropertyFittingNet,
)

from .dp_atomic_model import (
DPAtomicModel,
)


class DPPropertyAtomicModel(DPAtomicModel):
def __init__(self, descriptor, fitting, type_map, **kwargs):
assert isinstance(fitting, PropertyFittingNet)
super().__init__(descriptor, fitting, type_map, **kwargs)

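The `DPPropertyAtomicModel` above is a thin subclass whose constructor only type-checks the fitting net before delegating to the parent. The pattern can be sketched standalone (the classes below are illustrative stand-ins, not the deepmd implementation):

```python
# Minimal sketch of the type-checked atomic-model subclass pattern.
class FittingNet:
    """Generic fitting-net base (stand-in)."""

class PropertyFittingNet(FittingNet):
    """Fitting net specialized for scalar properties (stand-in)."""

class DPAtomicModel:
    def __init__(self, descriptor, fitting, type_map, **kwargs):
        self.descriptor = descriptor
        self.fitting = fitting
        self.type_map = type_map

class DPPropertyAtomicModel(DPAtomicModel):
    def __init__(self, descriptor, fitting, type_map, **kwargs):
        # Fail fast if the wrong fitting-net type is wired in.
        if not isinstance(fitting, PropertyFittingNet):
            raise TypeError("DPPropertyAtomicModel requires a PropertyFittingNet")
        super().__init__(descriptor, fitting, type_map, **kwargs)
```

Raising `TypeError` here (instead of a bare `assert`, as in the diff) keeps the check active even under `python -O`.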
4 changes: 4 additions & 0 deletions deepmd/dpmodel/fitting/__init__.py
@@ -17,6 +17,9 @@
from .polarizability_fitting import (
PolarFitting,
)
from .property_fitting import (
PropertyFittingNet,
)

__all__ = [
"InvarFitting",
@@ -25,4 +28,5 @@
"EnergyFittingNet",
"PolarFitting",
"DOSFittingNet",
"PropertyFittingNet",
]
136 changes: 136 additions & 0 deletions deepmd/dpmodel/fitting/property_fitting.py
@@ -0,0 +1,136 @@
# SPDX-License-Identifier: LGPL-3.0-or-later
import copy
from typing import (
List,
Optional,
Union,
)

import numpy as np

from deepmd.dpmodel.common import (
DEFAULT_PRECISION,
)
from deepmd.dpmodel.fitting.invar_fitting import (
InvarFitting,
)
from deepmd.utils.version import (
check_version_compatibility,
)


@InvarFitting.register("property")
class PropertyFittingNet(InvarFitting):
r"""Fitting the rotationally invariant properties of the system, with output dimension `task_dim`.

Parameters
----------
ntypes
The number of atom types.
dim_descrpt
The dimension of the input descriptor.
task_dim
The dimension of outputs of fitting net.
neuron
Number of neurons :math:`N` in each hidden layer of the fitting net
bias_atom_p
Average property per atom for each element.
rcond
The condition number used in the regression of the atomic property.
trainable
If the weights of fitting net are trainable.
Suppose that we have :math:`N_l` hidden layers in the fitting net,
this list is of length :math:`N_l + 1`, specifying if the hidden layers and the output layer are trainable.
intensive
Whether the fitting property is intensive.
bias_method
The method of applying the bias to each atomic output; the user can select 'normal' or 'no_bias'.
With 'normal', the computed bias is added to the atomic output.
With 'no_bias', no bias is added to the atomic output.
resnet_dt
Time-step `dt` in the resnet construction:
:math:`y = x + dt * \phi (Wx + b)`
numb_fparam
Number of frame parameters
numb_aparam
Number of atomic parameters
activation_function
The activation function :math:`\boldsymbol{\phi}` in the embedding net. Supported options are |ACTIVATION_FN|
precision
The precision of the embedding net parameters. Supported options are |PRECISION|
mixed_types
If False, different atomic types use different fitting nets; otherwise, all atom types share the same fitting net.
exclude_types: List[int]
Atomic contributions of the excluded atom types are set zero.
type_map: List[str], Optional
A list of strings. Give the name to each type of atoms.
"""

def __init__(
self,
ntypes: int,
dim_descrpt: int,
task_dim: int = 1,
neuron: List[int] = [128, 128, 128],
bias_atom_p: Optional[np.ndarray] = None,
rcond: Optional[float] = None,
trainable: Union[bool, List[bool]] = True,
intensive: bool = False,
bias_method: str = "normal",
resnet_dt: bool = True,
numb_fparam: int = 0,
numb_aparam: int = 0,
activation_function: str = "tanh",
precision: str = DEFAULT_PRECISION,
mixed_types: bool = True,
exclude_types: List[int] = [],
type_map: Optional[List[str]] = None,
# not used
seed: Optional[int] = None,
):
self.task_dim = task_dim
self.intensive = intensive
self.bias_method = bias_method
super().__init__(
var_name="property",
ntypes=ntypes,
dim_descrpt=dim_descrpt,
dim_out=task_dim,
neuron=neuron,
bias_atom=bias_atom_p,
resnet_dt=resnet_dt,
numb_fparam=numb_fparam,
numb_aparam=numb_aparam,
rcond=rcond,
trainable=trainable,
activation_function=activation_function,
precision=precision,
mixed_types=mixed_types,
exclude_types=exclude_types,
type_map=type_map,
)

@classmethod
def deserialize(cls, data: dict) -> "PropertyFittingNet":
data = copy.deepcopy(data)
check_version_compatibility(data.pop("@version"), 2, 1)
data.pop("dim_out")
data.pop("var_name")
data.pop("tot_ener_zero")
data.pop("layer_name")
data.pop("use_aparam_as_mask", None)
data.pop("spin", None)
data.pop("atom_ener", None)
obj = super().deserialize(data)

return obj

def serialize(self) -> dict:
"""Serialize the fitting to dict."""
dd = {
**InvarFitting.serialize(self),
"type": "property",
"task_dim": self.task_dim,
}

return dd
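The `serialize`/`deserialize` pair above follows a common round-trip pattern: the subclass serializer adds its own fields (`"type"`, `"task_dim"`), while the deserializer pops keys that the parent serializes but the subclass hard-codes (e.g. `var_name="property"`). A minimal standalone sketch of the pattern, with stand-in classes rather than the deepmd API:

```python
import copy

class InvarFitting:
    """Stand-in parent that serializes its own fields."""

    def __init__(self, var_name, dim_out):
        self.var_name = var_name
        self.dim_out = dim_out

    def serialize(self):
        return {"var_name": self.var_name, "dim_out": self.dim_out}

class PropertyFittingNet(InvarFitting):
    def __init__(self, task_dim=1):
        self.task_dim = task_dim
        # var_name is fixed by the subclass, not taken from input.
        super().__init__(var_name="property", dim_out=task_dim)

    def serialize(self):
        # Parent fields plus subclass-specific ones.
        return {
            **InvarFitting.serialize(self),
            "type": "property",
            "task_dim": self.task_dim,
        }

    @classmethod
    def deserialize(cls, data):
        data = copy.deepcopy(data)
        # Drop keys the constructor sets itself.
        data.pop("var_name")
        data.pop("dim_out")
        data.pop("type")
        return cls(**data)
```

Deep-copying before popping keeps the caller's dict intact, mirroring the `copy.deepcopy(data)` in the diff.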
4 changes: 4 additions & 0 deletions deepmd/dpmodel/model/__init__.py
@@ -21,12 +21,16 @@
from .make_model import (
make_model,
)
from .property_model import (
PropertyModel,
)
from .spin_model import (
SpinModel,
)

__all__ = [
"EnergyModel",
"PropertyModel",
"DPModelCommon",
"SpinModel",
"make_model",
27 changes: 27 additions & 0 deletions deepmd/dpmodel/model/property_model.py
@@ -0,0 +1,27 @@
# SPDX-License-Identifier: LGPL-3.0-or-later
from deepmd.dpmodel.atomic_model.dp_atomic_model import (
DPAtomicModel,
)
from deepmd.dpmodel.model.base_model import (
BaseModel,
)

from .dp_model import (
DPModelCommon,
)
from .make_model import (
make_model,
)

DPPropertyModel_ = make_model(DPAtomicModel)


@BaseModel.register("property")
class PropertyModel(DPModelCommon, DPPropertyModel_):
def __init__(
self,
*args,
**kwargs,
):
DPModelCommon.__init__(self)
DPPropertyModel_.__init__(self, *args, **kwargs)

7 changes: 7 additions & 0 deletions deepmd/dpmodel/output_def.py
@@ -186,6 +186,8 @@ class OutputVariableDef:
If the Hessian is required
magnetic : bool
If the derivatives of variable have magnetic parts.
intensive : bool
Whether the fitting property is intensive or extensive.
"""

def __init__(
@@ -199,6 +201,7 @@ def __init__(
category: int = OutputVariableCategory.OUT.value,
r_hessian: bool = False,
magnetic: bool = False,
intensive: bool = False,
):
self.name = name
self.shape = list(shape)
@@ -211,13 +214,17 @@
self.reducible = reducible
self.r_differentiable = r_differentiable
self.c_differentiable = c_differentiable
self.intensive = intensive
if self.c_differentiable and not self.r_differentiable:
raise ValueError("c differentiable requires r_differentiable")
if self.reducible and not self.atomic:
raise ValueError("a reducible variable should be atomic")
if self.intensive and not self.reducible:
raise ValueError("an intensive variable should be reducible")
self.category = category
self.r_hessian = r_hessian
self.magnetic = magnetic
if self.r_hessian:
if not self.reducible:
raise ValueError("only reducible variable can calculate hessian")
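The new `intensive` flag slots into a chain of flag dependencies: `intensive` requires `reducible`, `reducible` requires `atomic`, and `c_differentiable` requires `r_differentiable`. A standalone sketch of just this validation chain (a cut-down stand-in, not the full deepmd `OutputVariableDef`):

```python
# Sketch of the flag-dependency checks in OutputVariableDef.__init__.
class OutputVariableDef:
    def __init__(
        self,
        name,
        atomic=True,
        reducible=False,
        r_differentiable=False,
        c_differentiable=False,
        intensive=False,
    ):
        self.name = name
        self.atomic = atomic
        self.reducible = reducible
        self.r_differentiable = r_differentiable
        self.c_differentiable = c_differentiable
        self.intensive = intensive
        # Each check enforces one dependency in the chain.
        if self.c_differentiable and not self.r_differentiable:
            raise ValueError("c differentiable requires r_differentiable")
        if self.reducible and not self.atomic:
            raise ValueError("a reducible variable should be atomic")
        if self.intensive and not self.reducible:
            raise ValueError("an intensive variable should be reducible")
```

An intensive property is a per-system quantity (e.g. an average over atoms), so it only makes sense for variables that are first defined per atom and then reduced; the check rejects inconsistent definitions at construction time.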