Instructions for use of the benchmark datasets and metrics on custom generative models #10
Hi Sterling, thank you for your interest.

Use our datasets on other models

To adopt G-SchNet, we processed the crystals into ASE Atoms objects. I am enclosing my code here:

One can then follow the G-SchNet instructions to build dataset objects.

Use our benchmark metrics

A dictionary containing the following is all you need for evaluation:

Any crystal generative model should be able to produce these quantities. Our evaluation scripts for computing metrics are independent of CDVAE: one just needs to save these quantities as a torch pickle file and then run compute_metrics.py with that file as input. See https://github.com/txie-93/cdvae/blob/main/scripts/compute_metrics.py#L267 for how the saved crystals are loaded. Hope this helps.
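A sketch of what saving such a file might look like. The key names below are an assumption inferred from the loader linked above, not a confirmed spec; verify them against compute_metrics.py before use:

```python
import torch

# Hypothetical output for a single generated 2-atom crystal; the key
# names are assumed from the loader in compute_metrics.py.
gen_out = {
    "frac_coords": torch.rand(2, 3),                 # per-atom fractional coords
    "atom_types": torch.tensor([11, 17]),            # atomic numbers (Na, Cl)
    "lengths": torch.tensor([[5.64, 5.64, 5.64]]),   # cell lengths per crystal
    "angles": torch.tensor([[90.0, 90.0, 90.0]]),    # cell angles per crystal
    "num_atoms": torch.tensor([2]),                  # atoms per crystal
}
torch.save(gen_out, "eval_gen.pt")
```

The resulting file can then be passed to compute_metrics.py (check the script's CLI for the exact argument name).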
@kyonofx thank you! As I was browsing further, I also noticed the README in the data directory. I appreciate the extra clarification here.
@kyonofx while the scripts are in separate files/folders from cdvae, there are import dependencies that trace back to CDVAE: see cdvae/scripts/compute_metrics.py, lines 19 to 21 and lines 15 to 18 in f857f59.
Hi, yes, you still need to install the cdvae package, but evaluation can be run without training a CDVAE model.
Hi, I think it might be easiest to splice out the evaluation code, since it composes only a small fraction of the cdvae codebase.
@kyonofx separating it out is turning out to be ☠️. I'm reconstructing most of the repository piece by piece; it's not very straightforward, as the evaluation code accesses many files across the repository.
Note that this is mainly because some metrics require predictions from a CDVAE submodel (i.e., the property regressor).
Hi @txie-93, I'm enjoying digging into the manuscript, and congratulations on its acceptance to ICLR! It is really nice to see the comparison with FTCP and other methods, and CDVAE certainly has some impressive results.
Would you mind providing some instructions in the repository for using the benchmark datasets and the metrics on a custom generative model? For example, how would this look for FTCP or the slew of other generative models in this space (i.e. the general inverse design ones)?