Releases: ORNL/HydraGNN
HydraGNN v3.0 Release
Summary
New or improved capabilities included in the v3.0 release are as follows:
- Enhanced message passing layers by generalizing the class inheritance to support a broader set of message passing policies
- Inclusion of equivariant message passing layers from the original implementations of the following models:
  - SchNet https://pubs.aip.org/aip/jcp/article/148/24/241722/962591/SchNet-A-deep-learning-architecture-for-molecules
  - DimeNet++ https://arxiv.org/abs/2011.14115
  - EGNN https://arxiv.org/abs/2102.09844
- Restructuring of class inheritance for data management
- Support for DDStore https://github.com/ORNL/DDStore capabilities, improving distributed data parallelism on large data volumes that exceed intra-node memory capacity
- Large-scale system support for OLCF-Crusher and OLCF-Frontier
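The equivariant layers above follow the update rules of their original papers. As an illustration only, here is a minimal NumPy sketch of the E(n)-equivariant update from the EGNN paper (Satorras et al.), with single linear maps standing in for the MLPs φ_e, φ_x, φ_h. The function name and weight shapes are hypothetical; this is not HydraGNN's implementation.

```python
import numpy as np

def egnn_layer(h, x, W_e, W_x, W_h):
    """One simplified E(n)-equivariant layer (after Satorras et al., EGNN).

    Hypothetical sketch, not HydraGNN code.
    h: (n, d) node features; x: (n, 3) coordinates.
    W_e: (2d+1, k), W_x: (k, 1), W_h: (d+k, d) -- linear stand-ins
    for the MLPs phi_e, phi_x, phi_h of the paper.
    """
    n = h.shape[0]
    diff = x[:, None, :] - x[None, :, :]          # (n, n, 3): x_i - x_j
    dist2 = (diff ** 2).sum(-1, keepdims=True)    # (n, n, 1): invariant distances
    # Messages depend only on invariants: h_i, h_j, ||x_i - x_j||^2
    pair = np.concatenate(
        [np.repeat(h[:, None, :], n, 1), np.repeat(h[None, :, :], n, 0), dist2],
        axis=-1,
    )
    m = np.tanh(pair @ W_e)                       # (n, n, k) messages
    # Coordinate update moves along x_i - x_j directions -> rotation-equivariant
    x_new = x + (diff * (m @ W_x)).mean(axis=1)
    # Feature update aggregates messages -> invariant
    h_new = np.tanh(np.concatenate([h, m.sum(axis=1)], axis=-1) @ W_h)
    return h_new, x_new
```

Because messages use only invariant quantities and the coordinate update is a weighted sum of relative position vectors, rotating and translating the input rotates and translates the output coordinates identically while leaving the features unchanged.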
HydraGNN v2.0.0 Release
Summary
New or improved capabilities included in the v2.0.0 release are as follows:
- Enhancement in message passing layers through class inheritance
- Addition of transformations to ensure translation and rotation invariance
- Support for various optimizers
- Atomic descriptors
- Integration with continuous integration (CI) testing
- Distributed printouts and timers
- Profiling
- Support for ADIOS2 for scalable data loading
- Large-scale system support, including Summit (ORNL) and Perlmutter (NERSC)
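The idea behind the invariance transformation listed above can be shown with a small sketch: deriving features from pairwise distances makes them unchanged under any rigid-body rotation or translation of the atomic coordinates. The function below is a hypothetical illustration of the principle, not HydraGNN's actual transformation.

```python
import numpy as np

def pairwise_distance_features(x):
    """Map Cartesian coordinates (n, 3) to the (n, n) matrix of pairwise
    distances. Distances depend only on relative positions, so the result
    is invariant under translation and rotation of the whole structure."""
    diff = x[:, None, :] - x[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))
```

Applying any orthogonal rotation and translation to the input coordinates leaves these features bit-for-bit equivalent, which is what guarantees the model's predictions do not depend on the arbitrary orientation of the input structure.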
Capabilities provided in v1.0.0 release (Oct 2021)
Major capabilities included in the previous v1.0.0 release are as follows:
- Multi-task graph neural network training with enhanced message passing layers
- Distributed Data Parallelism (DDP) support
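Distributed Data Parallelism replicates the model on every rank, has each rank compute gradients on its own shard of the data, and averages the gradients via all-reduce before the optimizer step. The toy sketch below shows that principle in plain NumPy on a least-squares objective; the function names are hypothetical and this is not HydraGNN's implementation (which builds on PyTorch's DDP).

```python
import numpy as np

def allreduce_mean(grads):
    """Toy all-reduce: average per-rank gradients, as DDP does after backward."""
    return np.mean(grads, axis=0)

def ddp_step(w, data_shards, lr=0.1):
    """One synchronous data-parallel SGD step on ||X w - y||^2 / (2m).

    Each 'rank' computes the gradient on its own shard; the averaged
    gradient is then applied identically everywhere, keeping all model
    replicas in sync.
    """
    grads = []
    for X, y in data_shards:
        m = len(y)
        grads.append(X.T @ (X @ w - y) / m)  # local gradient on this shard
    g = allreduce_mean(np.stack(grads))
    return w - lr * g
```

With equally sized shards, the averaged gradient equals the full-batch gradient, so the data-parallel step reproduces single-process training exactly while the data loading and gradient computation are distributed.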