
Commit

updates
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
njzjz committed Dec 21, 2024
1 parent c09c6f9 commit f2aeeee
Showing 4 changed files with 14 additions and 4 deletions.
7 changes: 6 additions & 1 deletion deepmd/dpmodel/descriptor/dpa2.py
@@ -387,7 +387,7 @@ def __init__(
use_tebd_bias: bool = False,
type_map: Optional[list[str]] = None,
) -> None:
r"""The DPA-2 descriptor. see https://arxiv.org/abs/2312.15492.
r"""The DPA-2 descriptor[1]_.
Parameters
----------
@@ -434,6 +434,11 @@ def __init__(
sw: torch.Tensor
The switch function for decaying inverse distance.
+ References
+ ----------
+ .. [1] Zhang, D., Liu, X., Zhang, X. et al. DPA-2: a
+    large atomic model as a multi-task learner. npj
+    Comput Mater 10, 293 (2024). https://doi.org/10.1038/s41524-024-01493-2
"""

def init_subclass_params(sub_data, sub_class):
7 changes: 6 additions & 1 deletion deepmd/pt/model/descriptor/dpa2.py
@@ -100,7 +100,7 @@ def __init__(
use_tebd_bias: bool = False,
type_map: Optional[list[str]] = None,
) -> None:
r"""The DPA-2 descriptor. see https://arxiv.org/abs/2312.15492.
r"""The DPA-2 descriptor[1]_.
Parameters
----------
@@ -147,6 +147,11 @@ def __init__(
sw: torch.Tensor
The switch function for decaying inverse distance.
+ References
+ ----------
+ .. [1] Zhang, D., Liu, X., Zhang, X. et al. DPA-2: a
+    large atomic model as a multi-task learner. npj
+    Comput Mater 10, 293 (2024). https://doi.org/10.1038/s41524-024-01493-2
"""
super().__init__()

2 changes: 1 addition & 1 deletion doc/train/finetuning.md
@@ -94,7 +94,7 @@ The model section will be overwritten (except the `type_map` subsection) by that

#### Fine-tuning from a multi-task pre-trained model

- Additionally, within the PyTorch implementation and leveraging the flexibility offered by the framework and the multi-task training process proposed in DPA2 [paper](https://arxiv.org/abs/2312.15492),
+ Additionally, within the PyTorch implementation and leveraging the flexibility offered by the framework and the multi-task training process proposed in DPA2 [paper](https://doi.org/10.1038/s41524-024-01493-2),
we also support more general multitask pre-trained models, which includes multiple datasets for pre-training. These pre-training datasets share a common descriptor while maintaining their individual fitting nets,
as detailed in the paper above.
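A minimal sketch of this shared-descriptor layout, using hypothetical class and attribute names rather than the actual deepmd-kit API, might look like the following: one descriptor module is reused by every pre-training dataset, while each dataset keeps its own fitting net.

```python
# Illustrative sketch only (names are hypothetical, not the deepmd-kit API):
# one descriptor shared by all tasks, one fitting net per dataset/task.
import torch
import torch.nn as nn


class MultiTaskModel(nn.Module):
    def __init__(self, descriptor: nn.Module, fitting_nets: dict[str, nn.Module]) -> None:
        super().__init__()
        self.descriptor = descriptor                      # shared across all pre-training datasets
        self.fitting_nets = nn.ModuleDict(fitting_nets)   # one head per dataset

    def forward(self, task: str, coords: torch.Tensor, atypes: torch.Tensor) -> torch.Tensor:
        features = self.descriptor(coords, atypes)        # common learned representation
        return self.fitting_nets[task](features)          # task-specific prediction
```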

2 changes: 1 addition & 1 deletion doc/train/multi-task-training.md
@@ -26,7 +26,7 @@ and the Adam optimizer is executed to minimize $L^{(t)}$ for one step to update
In the case of multi-GPU parallel training, different GPUs will independently select their tasks.
In the DPA-2 model, this multi-task training framework is adopted.[^1]
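As a rough illustration of one such update, the sketch below draws a task at random, computes its loss $L^{(t)}$ on a single batch, and takes one Adam step; the names (`dataloaders`, `task_probs`, `compute_loss`) are assumptions for illustration, not the deepmd-kit implementation.

```python
# Hypothetical sketch of one multi-task training step as described above.
import random

import torch


def train_one_step(model, dataloaders, task_probs, optimizer: torch.optim.Adam):
    # Sample a task t according to its assigned probability.
    tasks = list(task_probs)
    task = random.choices(tasks, weights=[task_probs[t] for t in tasks], k=1)[0]
    batch = next(dataloaders[task])           # batch from the selected task's dataset
    loss = model.compute_loss(task, batch)    # L^(t): loss of the sampled task only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                          # one Adam step on the shared and task-specific parameters
    return task, loss.item()
```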

- [^1]: Duo Zhang, Xinzijian Liu, Xiangyu Zhang, Chengqian Zhang, Chun Cai, Hangrui Bi, Yiming Du, Xuejian Qin, Jiameng Huang, Bowen Li, Yifan Shan, Jinzhe Zeng, Yuzhi Zhang, Siyuan Liu, Yifan Li, Junhan Chang, Xinyan Wang, Shuo Zhou, Jianchuan Liu, Xiaoshan Luo, Zhenyu Wang, Wanrun Jiang, Jing Wu, Yudi Yang, Jiyuan Yang, Manyi Yang, Fu-Qiang Gong, Linshuang Zhang, Mengchao Shi, Fu-Zhi Dai, Darrin M. York, Shi Liu, Tong Zhu, Zhicheng Zhong, Jian Lv, Jun Cheng, Weile Jia, Mohan Chen, Guolin Ke, Weinan E, Linfeng Zhang, Han Wang, [arXiv preprint arXiv:2312.15492 (2023)](https://arxiv.org/abs/2312.15492) licensed under a [Creative Commons Attribution (CC BY) license](http://creativecommons.org/licenses/by/4.0/).
+ [^1]: Duo Zhang, Xinzijian Liu, Xiangyu Zhang, Chengqian Zhang, Chun Cai, Hangrui Bi, Yiming Du, Xuejian Qin, Anyang Peng, Jiameng Huang, Bowen Li, Yifan Shan, Jinzhe Zeng, Yuzhi Zhang, Siyuan Liu, Yifan Li, Junhan Chang, Xinyan Wang, Shuo Zhou, Jianchuan Liu, Xiaoshan Luo, Zhenyu Wang, Wanrun Jiang, Jing Wu, Yudi Yang, Jiyuan Yang, Manyi Yang, Fu-Qiang Gong, Linshuang Zhang, Mengchao Shi, Fu-Zhi Dai, Darrin M. York, Shi Liu, Tong Zhu, Zhicheng Zhong, Jian Lv, Jun Cheng, Weile Jia, Mohan Chen, Guolin Ke, Weinan E, Linfeng Zhang, Han Wang, DPA-2: a large atomic model as a multi-task learner. npj Comput Mater 10, 293 (2024). [DOI: 10.1038/s41524-024-01493-2](https://doi.org/10.1038/s41524-024-01493-2) licensed under a [Creative Commons Attribution (CC BY) license](http://creativecommons.org/licenses/by/4.0/).

Compared with the previous TensorFlow implementation, the new support in PyTorch is more flexible and efficient.
In particular, it makes multi-GPU parallel training and even tasks beyond DFT possible,
