New version of E3statistics
#17
Conversation
typed_dataset = idp(self.data.to_dict())
idp.get_irreps()
When calling get_irreps here, the parameter no_parity=False needs to be set.
for irrep in idp.orbpair_irreps:
    l = int(str(irrep)[2])
    irrep_slice = slice(index, index + 2 * l + 1)
This code only works if every irrep in idp.orbpair_irreps has the form 1xne/1xno. Although idp.orbpair_irreps is currently written that way, it is hard to guarantee it will never change later, so this should be written more generally.
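One way to address this review point is to read the multiplicity and l structurally from each irrep instead of parsing its string form (in e3nn, iterating an `o3.Irreps` yields `(mul, ir)` pairs with `ir.l`). A minimal, dependency-free sketch below uses hypothetical `(mul, l)` pairs standing in for `idp.orbpair_irreps`:

```python
# Hypothetical (multiplicity, l) pairs standing in for idp.orbpair_irreps,
# i.e. "1x0e + 2x1o + 1x2e". Parsing l via int(str(irrep)[2]) silently
# breaks as soon as a multiplicity other than 1 appears (e.g. "2x1o"),
# so read mul and l from the structure instead of the string.
orbpair_irreps = [(1, 0), (2, 1), (1, 2)]

index = 0
slices = []
for mul, l in orbpair_irreps:
    for _ in range(mul):  # one slice per copy of the irrep
        slices.append(slice(index, index + 2 * l + 1))
        index += 2 * l + 1
```

This stays correct for any multiplicities, whereas the string-indexing version only handles the 1x case.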
onsite_tp_mask = onsite_block_mask[typed_dataset["atom_types"].flatten().eq(tp)]
onsite_tp = features[typed_dataset["atom_types"].flatten().eq(tp)]
filtered_vecs = torch.where(onsite_tp_mask, onsite_tp, torch.tensor(float('nan')))
Why fill with NaN here?
Mainly as a placeholder, so that later irrep_slice can be applied uniformly when extracting the RMEs of the different types.
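A small sketch of the NaN-placeholder idea described in this reply (toy tensors, not the actual dataset): masked-out entries are set to NaN so every row keeps the same layout, and NaN-aware reductions then ignore the padding when statistics are taken over a fixed slice.

```python
import torch

# Toy feature matrix and validity mask; invalid entries become NaN
# placeholders so the column layout (and hence any fixed slice) is kept.
features = torch.tensor([[1.0, 2.0, 3.0],
                         [4.0, 5.0, 6.0]])
mask = torch.tensor([[True, False, True],
                     [True, True,  False]])

filtered = torch.where(mask, features, torch.tensor(float("nan")))

# torch.nanmean skips the NaN placeholders when averaging each column.
col_mean = torch.nanmean(filtered, dim=0)  # [2.5, 5.0, 3.0]
```

The placeholder keeps indexing uniform across atom types at the cost of needing NaN-aware reductions downstream.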
typed_norm_std[at] = bt_std
return typed_norm_ave, typed_norm_std
Here there are only the std and mean of the norm; isn't the mean for the l=0 case missing?
For l=0, the mean of the norm and the plain mean are different.
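A quick numeric illustration of this reply: for l=0 (scalar) channels the norm is the absolute value, so averaging norms is not the same as averaging the signed values.

```python
import torch

# Scalar (l = 0) channel values with mixed signs.
scalars = torch.tensor([1.0, -3.0, 2.0])

norm_mean = scalars.abs().mean()   # mean of |x|: (1 + 3 + 2) / 3 = 2.0
plain_mean = scalars.mean()        # signed mean: (1 - 3 + 2) / 3 = 0.0
```

Hence a statistics routine that only tracks norm mean/std cannot recover the signed mean of the scalar channels.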
* add data
* adapt data and nn module of nequip into deeptb
* just modify some imports
* update torch-geometry
* add kpoint eigenvalue support
* add support for nested tensor
* update
* update data and add batchlize hamiltonian
* update se3 rotation
* update test
* update
* debug e3
* update hamileig
* delete nequip nn and write our own based on PyG
* update nn
* nn refactor, write hamiltonian and hop function
* update sk hamiltonian and onsite function
* refactor sktb and add register for descriptor
* update param prototype and dptb
* refactor index mapping to data transform
* debug sktb and e3tb module
* finish debuging sk and e3
* update data interfaces
* update r2k and transform
* remove dash line in file names
* fnishied debugging deeptb module
* finish debugging hr2hk
* update overlap support
* update base trainer and example quantities
* update build model
* update trainer
* update pyproject.toml dependencies
* update bond reduction and self-interaction
* debug nnsk
* nnsk run succeed, add from v1 json model
* add nnsk test example of AlAs coupond system
* Add 'ABACUSDataset' in data module (#9)
* Prototype code for loading Hamiltonian
* add 'ABACUSDataset' in data module
* modified "basis.dat" storage & can load overlap
* recover some original dataset settings
* add ABACUSDataset in init
* debug new dptb and trainer
* debug datasets
* pass cmd line train mod to new model and data
* add some comments in neighbor_list_and_relative_vec.
* add overlap fitting support
* update baseline descriptor and debug validationer
* update e3deeph module
* update deephe3 module
* Added ABACUSInMemoryDataset in data module (#11)
* Prototype code for loading Hamiltonian
* add 'ABACUSDataset' in data module
* modified "basis.dat" storage & can load overlap
* recover some original dataset settings
* add ABACUSDataset in init
* Add the in memory version of ABACUSDataset
* add ABACUSInMemoryDataset in data package
* update dataset and add deephdataset
* gpu support and debugging
* add dptb+nnsk mix model, debugging build, restart
* align run.py, test.py, main.py
* debugging
* final
* add new model backbone on allegro
* add new e3 embeding and lr schedular
* Added `DefaultDataset` (#12)
* Prototype code for loading Hamiltonian
* add 'ABACUSDataset' in data module
* modified "basis.dat" storage & can load overlap
* recover some original dataset settings
* add ABACUSDataset in init
* Add the in memory version of ABACUSDataset
* add ABACUSInMemoryDataset in data package
* Added `DefaultDataset` and unified `ABACUSDataset`
* improved DefaultDataset & add `dptb data` entrypoint for preprocess
* update `build_dataset`
* aggregating new data class
* debug plugin savor and support atom specific cutoffs
* refactor bond reduction and rme parameterization
* add E3 fitting analysis and E3 rescale
* update LossAnalysis and e3baseline model
* update band calc and debug nnsk add orbitals
* update datatype switch
* Unified dataset IO (#13)
* Prototype code for loading Hamiltonian
* add 'ABACUSDataset' in data module
* modified "basis.dat" storage & can load overlap
* recover some original dataset settings
* add ABACUSDataset in init
* Add the in memory version of ABACUSDataset
* add ABACUSInMemoryDataset in data package
* Added `DefaultDataset` and unified `ABACUSDataset`
* improved DefaultDataset & add `dptb data` entrypoint for preprocess
* update `build_dataset`
* update `data` entrypoint
* Unified dataset IO & added ASE trajectory support
* Add support to save `.pth` files with different `info.json` settings.
* Bug fix in dealing with "ase" info.
* updated `argcheck` for setinfo.
* added setinfo check when building dataset.
* file IO improvements
* bug fix in loading `info.json`
* update e3 descriptor and OrbitalMapper
* Bug fix in reading trajectory data (#15)
* add comment and complete eig loss
* update new embedding and dependencies
* New version of `E3statistics` (#17)
* new version of `E3statistics` function added in DefaultDataset.
* fix bug in dealing with scalars in `E3statistics`
* add "decay" option in E3statistics to return edge length dependence
* fix bug in getting rmes when doing stat & update argcheck
* adding statistics initialization
* debug nnsk batchlization and eigenvalues loading
* debug nnsk
* optimizing saving best checkpoint
* Pr/44 (#19)
* add comments QG
* add comment QG
* debug nnsk add orbital and strain
* update `.npy` files loading procedure in DefaultDataset (#18)
* optimizing init and restart param loading
* update nnsk push thr
* update mix model param and deeptb sktb param
* BUG FIX in loading `kpoints.npy` files with `ndim==3` (#20)
* bug fix in loading `kpoints.npy` files with `ndim==3`
* added tests for nnsk training
* main program for test_train
* refactor test
* update nrl
* denote run

---------

Co-authored-by: Sharp Londe <93334987+SharpLonde@users.noreply.github.com>
Co-authored-by: qqgu <guqq_phy@qq.com>
Co-authored-by: Qiangqiang Gu <98570179+QG-phy@users.noreply.github.com>
A new version of the `E3statistics` function is added in `DefaultDataset`. It calculates the norm and standard deviation of node/edge features by their irreducible matrix sub-blocks.