Add unit test for trainer.test and description in the example #165
Merged
Conversation
…x and little cleanup in unit tests
Codecov Report: Patch and project coverage have no change.

Additional details and impacted files:

@@           Coverage Diff           @@
##             main     #165   +/-  ##
=======================================
  Coverage   98.07%   98.07%
=======================================
  Files          28       28
  Lines        1820     1820
=======================================
  Hits         1785     1785
  Misses         35       35

☔ View full report in Codecov by Sentry.
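As a sanity check on the report above, the project coverage figure follows directly from the Hits and Lines counts. The snippet below recomputes it; the assumption (hedged, not stated by Codecov in this comment) is that the displayed percentage is truncated rather than rounded to two decimal places, since 1785/1820 is 98.0769…%.

```python
# Recompute the Codecov project coverage from the table above.
# Assumption: the displayed value is truncated (floored) to 2 decimals.
import math

hits, lines = 1785, 1820
coverage = hits / lines * 100                  # 98.0769...%
displayed = math.floor(coverage * 100) / 100   # truncate to 2 decimals
print(displayed)  # 98.07
```

Rounding instead of truncating would give 98.08%, which is why the truncation assumption is worth flagging.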
lbluque pushed a commit to lbluque/matgl that referenced this pull request on Sep 14, 2023.
shyuep added a commit that referenced this pull request on May 6, 2024:
* ENH: fixing chgnet dset
* MAINT: create tensors in lg device
* MAINT: use register buffer in Potential and LightningPotential
* MAIN: rename chgnet graph feats
* FIX: clamp cos values to -1, 1 with eps
* ENH: start implementing chgnetdset
* Fix loading graphs
* use dgl path attrs in chgnet dataset
* TST: add chgnetdataset test and fix errors
* TST assert that unnormalized predictions are not the same
* TST: clamp cos values to -1, 1 with eps in tests
* ENH: use torch.nan for None magmoms
* BUG: fix setting lg node data
* use no_grad in directed line graph
* FIX: set lg data using num nodes
* TST: test up to 4 decimals
* MAINT: update to renamed DEFAULT_ELEMENTS
* FIX: directed lg compatibility
* maint: update to new dataset interface
* MAINT: update to new dataset interface
* TST: fix graph test
* MAINT: minor edit in directed line graph
* update to use dtype interface
* add tol to threebody cutoff
* add tol to threebody cutoff
* FiX: remove tol and set pbc_offshift to float64
* ENH: chunked chgnet dataset
* remove state attr in has_cache
* fix chunk_sizes
* trange when loading indices
* singular keys in collate
* hard code label keys
* run pre-commit
* change chgnet default elements
* FIX: create nan tensor for missing magmoms
* add tol to threebody cutoff
* add tol to threebody cutoff
* FiX: remove tol and set pbc_offshift to float64
* ENH: chunked chgnet dataset
* remove state attr in has_cache
* fix chunk_sizes
* trange when loading indices
* singular keys in collate
* hard code label keys
* run pre-commit
* change chgnet default elements
* FIX: nan tensor shape
* FIX: allow skipping nan tensors
* add xavier normal and update chunked dataset
* fix getitem
* fix getitem
* fix getitem
* fix getitem
* fix getitem
* fix getitem
* huber loss
* MAINT: use torch instead of numpy
* MAINT: keep onehot matrix as attribute
* MAINT: remove unnecessary statements
* MAINT: remove unnecessary statements
* MAINT: onehot as buffer
* MAINT: property offset as buffer
* MAINT: onehot as buffer
* MAINT: property offset as buffer
* change order in init
* TST update tests
* ENH use lstsq to avoid constructing full normal eqs
* change order in init
* TST update tests
* ENH use lstsq to avoid constructing full normal eqs
* remove numpy import
* remove print
* STY: fix lint
* FIX: backwards compat with pre-trained models
* ENH: raise load_model error from baseexception
* TST: fix atomref tests
* STY: ruff
* FIX: use tuple in isinstance for 3.9 compat
* remove numpy import
* STY: ruff
* remove numpy import
* STY: ruff
* remove assert in compat (fails for some batched graphs)
* ENH: messy graphnorm mess
* FIX: fix allow missing labels
* use lg num_nodes() directly
* use lg num_nodes() directly
* do not assert
* FIX: fix ensuring line graph for bonds right at cutoff
* remove numpy import
* STY: ruff
* Remove wheel and release.
* Bump pymatgen from 2023.9.2 to 2023.9.10 (#162)
  Bumps [pymatgen](https://github.com/materialsproject/pymatgen) from 2023.9.2 to 2023.9.10.
  - [Release notes](https://github.com/materialsproject/pymatgen/releases)
  - [Changelog](https://github.com/materialsproject/pymatgen/blob/master/CHANGES.md)
  - [Commits](materialsproject/pymatgen@v2023.9.2...v2023.9.10)
  ---
  updated-dependencies:
  - dependency-name: pymatgen
    dependency-type: direct:production
    update-type: version-update:semver-patch
  ...
  Signed-off-by: dependabot[bot] <support@github.com>
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* Add united test for trainer.test and description in the example (#165)
* ENH: allow skipping label keys
* use tuple
* ENH: allow skipping label keys
* use tuple
* use skip labels in chunked dataset
* add empty axis to magmoms
* add empty axis to magmoms
* ENH: graph norm implementation
* TST: add graph_norm test
* remove adding extra axis to magmoms
* remove adding extra axis to magmoms
* add skip label keys to chunked dataset
* fix chunked dset
* add OOM dataset
* len w state_attr
* int idx
* increase compatibility tol
* lintings
* STY: fix some linting errors
* STY: fix mypy errors
* remove numpy import
* STY: ruff
* remove numpy import
* STY: ruff
* TYP: use Sequence instead of list
* lint
* MAINT: use sequential in MLP
* ENH: norm gated MLP
* MAINT: use sequential in MLP
* store linear layers and activation separately in MLP
* use MLP in gated MLP
* remove unnecessary Sequential
* correct magmom training index!
* revert magmom index bc it was correct!
* ENH: graphnorm in mlp and gmlp
* remove numpy import
* STY: ruff
* remove numpy import
* STY: ruff
* FIX: remove repeated bond expansion
* hack to load new state dicts in PL checkpoints
* allow site_wise loss options
* only set grad enabled in forward
* adapt core to allow normalization of different layers
* remove some TODOS
* allow normalization in chgnet
* always normalize last
* always normalize last
* fix normalization inputs
* fix mlp forward
* fix mlp forward
* messy norm
* allow norm kwargs and allow batching by edges or nodes in graphnorm
* test graphnorm
* graph norm in chgnet
* allow layernorm in chgnet
* allow layernorm in chgnet
* rename args
* rename args
* fix mypy errors
* add tolerance in lg compatibility
* add tolerance in lg compatibility
* raise runtime error for incompatible graph
* raise runtime error for incompatible graph
* create tensors on same device in norm
* create tensors on same device in norm
* update chgnet to use new line graph interface
* update chgnet paper link
* update line graph in dataset
* no bias in output of conv layers
* some docstrings
* moved mlp_out from InteractionBlock to ConvFunctions and added non-linearity
* fix typo
* moved out_layer to linear
* solved bug
* solved bug
* removed normalization from bondgraph layer
* uploaded pretrained model and modified ASE interface
* fix linting
* fixed chgnet dataset by adding lattice
* hot fix
* add frac_coords to pre-processed graphs
* hot fix
* solved bug
* remove ignore model
* add 11M model weights
* renamed pretrained weights
* Adding CHGNet-matgl implementation
* corrected texts and comments
* fix more texts
* more texts fixes
* refactor CHGNet path in test
* fixed linting
* fixed texts
* remove unused CHGNetDataset
* restructure matgl modules for CHGNet implementations
* fix ruff
* update model versioning for Potential class

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: lbluque <lbluque@berkeley.edu>
Co-authored-by: Shyue Ping Ong <shyuep@users.noreply.github.com>
Co-authored-by: Shyue Ping Ong <sp@ong.ai>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Tsz Wai Ko <47970742+kenko911@users.noreply.github.com>
Co-authored-by: lbluque <lbluque@meta.com>
Co-authored-by: kenko911 <kenko911@gmail.com>
Summary
Major changes:
Add a unit test for trainer.test and a description in the example.
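The shape of the test added here can be sketched as follows. matgl's training utilities build on PyTorch Lightning, whose Trainer.test returns a list with one metrics dictionary per test dataloader. Everything below is an illustrative stand-in so the pattern runs without the real dependencies: DummyTrainer, test_trainer_test, and the metric name "test_Total_Loss" are hypothetical, not matgl or Lightning APIs.

```python
# Illustrative sketch of a unit test for a Lightning-style trainer.test call.
# DummyTrainer is a hypothetical stand-in for the real trainer; the PR's
# actual test exercises matgl's trainer against a real test dataloader.
import math


class DummyTrainer:
    def test(self, dataloaders):
        # Lightning's Trainer.test returns a list containing one metrics
        # dict per test dataloader; mimic that return shape here.
        return [{"test_Total_Loss": 0.0} for _ in dataloaders]


def test_trainer_test():
    trainer = DummyTrainer()
    results = trainer.test(dataloaders=[object()])
    # one metrics dict per dataloader
    assert isinstance(results, list) and len(results) == 1
    # every reported metric should be a finite float
    for value in results[0].values():
        assert isinstance(value, float) and math.isfinite(value)


test_trainer_test()
print("ok")
```

The useful property of this pattern is that it pins down the contract of trainer.test (return type, one entry per dataloader, finite metrics) rather than exact loss values, which vary run to run.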
Checklist
[x] Google format doc strings added. Check with ruff.
[x] Type annotations included. Check with mypy.
[x] Tests added for new features/fixes.
[x] If applicable, new classes/functions/modules have duecredit @due.dcite decorators to reference relevant papers by DOI (example).

Tip: Install pre-commit hooks to auto-check types and linting before every commit.
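Since the checklist relies on ruff and mypy, a pre-commit setup like the following covers both. This is a minimal sketch of a .pre-commit-config.yaml, not the repository's actual configuration; the repo URLs are the real upstream hook repositories, but the rev pins shown are placeholders that should be replaced with current release tags.

```yaml
# Illustrative .pre-commit-config.yaml; pin each `rev` to a current release tag.
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.1.0        # example pin only
    hooks:
      - id: ruff
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.8.0        # example pin only
    hooks:
      - id: mypy
```

After adding the file, run pre-commit install once so the hooks fire on every commit.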