Add MACE model support #61
Conversation
openmmml/models/macepotential.py
Outdated
import torch
import openmmtorch
from e3nn.util import jit
This should give a better error message than a bare import failure. It should explain that MACE is missing and show the pip command to install it.
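A pattern like the following would give the user an actionable message. This is only a sketch: the package name and pip command shown are illustrative assumptions, not necessarily how MACE is distributed.

```python
def import_mace():
    """Import the optional MACE dependency with an actionable error message.

    The package name and install command below are illustrative; adjust to
    however the MACE package is actually published.
    """
    try:
        import mace
        return mace
    except ImportError as e:
        raise ImportError(
            "Creating a MACE potential requires the 'mace' package, which "
            "does not appear to be installed. Try: pip install mace-torch"
        ) from e
```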
openmmml/models/macepotential.py
Outdated
super(MACEForce, self).__init__()

if not torch.cuda.is_available():
OpenMM-Torch takes care of loading to the right devices. Here you just need to load on CPU.
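A minimal sketch of the suggestion: load the checkpoint onto the CPU regardless of where it was saved, and let OpenMM-Torch handle device placement later. The example uses a plain tensor and an in-memory buffer purely for illustration.

```python
import io
import torch

def load_on_cpu(source):
    # map_location="cpu" remaps any CUDA storages in the checkpoint to the
    # CPU; OpenMM-Torch later moves everything to the simulation device.
    return torch.load(source, map_location="cpu")

# Illustration: data saved from any device loads back on the CPU.
buf = io.BytesIO()
torch.save(torch.ones(3), buf)
buf.seek(0)
t = load_on_cpu(buf)
```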
openmmml/models/macepotential.py
Outdated
# get the dtype of the saved model
model_dtype = [p.dtype for p in self.model.parameters()][0]

# check if the user has requested a different dtype
The model should run at the precision with which it was trained. If OpenMM runs at a different precision (double rather than float), one just casts the inputs and outputs accordingly rather than changing the model.
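One way to sketch that boundary cast. The class and attribute names here are illustrative, not the PR's actual code; the point is that the wrapped model's weights keep their trained dtype while inputs and outputs are converted at the edges.

```python
import torch

class DtypeAdapter(torch.nn.Module):
    """Run the wrapped model at its trained precision; cast at the boundary."""

    def __init__(self, model: torch.nn.Module):
        super().__init__()
        self.model = model
        # the precision the model was trained with
        self.model_dtype = next(model.parameters()).dtype

    def forward(self, positions: torch.Tensor) -> torch.Tensor:
        out_dtype = positions.dtype                  # what the caller expects back
        energy = self.model(positions.to(self.model_dtype))
        return energy.to(out_dtype)                  # cast the result, not the model

# Illustration: a float32 "model" fed double-precision input.
wrapped = DtypeAdapter(torch.nn.Linear(3, 1).float())
energy = wrapped(torch.zeros(2, 3, dtype=torch.float64))
```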
openmmml/models/macepotential.py
Outdated
else:
    self.default_dtype = model_dtype

torch.set_default_dtype(self.default_dtype)
This is a global PyTorch setting. It is better to set tensor precisions explicitly.
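For instance (tensor names are illustrative): rather than mutating process-wide state that every other piece of PyTorch code in the same process inherits, pass an explicit dtype at each construction site.

```python
import torch

# Instead of torch.set_default_dtype(self.default_dtype), which changes a
# global setting, be explicit wherever a tensor is created:
dtype = torch.float32                          # the model's precision
ptr = torch.tensor([0, 10], dtype=torch.long)  # integer data: explicit long
cell = torch.zeros(3, 3, dtype=dtype)          # floating data: explicit dtype
```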
openmmml/models/macepotential.py
Outdated
# setup input
N = len(atomic_numbers)
self.ptr = torch.nn.Parameter(torch.tensor([0, N], dtype=torch.long), requires_grad=False)
Why are these parameters rather than buffers?
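For context, a minimal sketch of the buffer alternative (class name illustrative): a buffer moves with the module across devices and is saved in the state dict, but never shows up in `parameters()` or an optimizer.

```python
import torch

class ExampleForce(torch.nn.Module):
    """Sketch: constant, non-trainable tensors registered as buffers."""

    def __init__(self, n_atoms: int):
        super().__init__()
        # register_buffer instead of torch.nn.Parameter(..., requires_grad=False)
        self.register_buffer("ptr", torch.tensor([0, n_atoms], dtype=torch.long))

force = ExampleForce(5)
```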
openmmml/models/macepotential.py
Outdated
if atoms is not None:
    includedAtoms = [includedAtoms[i] for i in atoms]

class MACEForce(torch.nn.Module):
For better readability, don't nest the classes.
openmmml/models/macepotential.py
Outdated
"shifts": shifts}

# predict
out = self.model(input_dict, compute_force=False)
Why are the forces computed here?
openmmml/models/macepotential.py
Outdated
out = self.model(input_dict, compute_force=False)

energy = out[self.return_energy_type]
if energy is None:
How can you get an energy of None? The simulation should fail rather than silently assume it is 0.
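A sketch of the fail-loudly alternative (function name illustrative): raise if the requested entry is missing instead of substituting zero.

```python
import torch
from typing import Dict, Optional

def extract_energy(out: Dict[str, Optional[torch.Tensor]], key: str) -> torch.Tensor:
    # Fail loudly if the requested energy entry is missing, rather than
    # silently treating it as zero.
    energy = out.get(key)
    if energy is None:
        raise RuntimeError(f"Model output has no '{key}' entry; check returnEnergyType.")
    return energy
```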
openmmml/models/utils.py
Outdated
from typing import Tuple

@torch.jit.script
def simple_nl(positions: torch.Tensor, cell: torch.Tensor, pbc: bool, cutoff: float, sorti: bool = False) -> Tuple[torch.Tensor, torch.Tensor]:
Do we really need to invent a new neighbor list rather than use the one in NNPOps?
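For readers unfamiliar with what such a function computes, here is a brute-force sketch of the idea only: an O(N²) all-pairs list without periodic boundaries. It is not the PR's simple_nl nor NNPOps' API; both of those additionally handle periodic cells and shift vectors.

```python
import torch

def brute_force_nl(positions: torch.Tensor, cutoff: float):
    """Minimal O(N^2) neighbor list, no periodic boundary conditions."""
    n = positions.shape[0]
    deltas = positions.unsqueeze(0) - positions.unsqueeze(1)  # (N, N, 3)
    dist = deltas.norm(dim=-1)
    # keep pairs within the cutoff, excluding self-pairs on the diagonal
    mask = (dist < cutoff) & ~torch.eye(n, dtype=torch.bool)
    pairs = mask.nonzero(as_tuple=False).T                    # (2, num_pairs)
    # without PBC every shift vector is zero
    shifts = torch.zeros(pairs.shape[1], 3, dtype=positions.dtype)
    return pairs, shifts

pos = torch.tensor([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 5.0]])
pairs, shifts = brute_force_nl(pos, cutoff=2.0)
```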
I have been working on this PR and would like to discuss the best approach for incorporating the computation of the neighbour list. There are 3 possible options:
What do you think would be preferable? The first approach has the advantage of not needing additional modules to be added to …
It's not mandatory: it's only needed if you use MACE. I think that's ok. I expect this package to eventually have interfaces to a lot of other libraries, each of which might or might not be installed. So for example if MACE is installed, you can create MACE models. And if it isn't installed, that doesn't stop you from creating other types of models.
Compared to the cost of computing the model, computing the vector between two atoms should be totally negligible!
Sounds good. And I agree that the calculation of the vector should be negligible.
This is ready for review. I've addressed most of the comments raised by @raimis. In addition to general refactoring, I have removed the …
This looks great! Just a few very minor comments.
openmmml/models/macepotential.py
Outdated
self-energy, instead of the default interaction energy, by setting `returnEnergyType` to
`energy`. For example:
>>> system = potential.createSystem(topology, returnEnergyType=`energy`)
Here and in similar places, you probably meant to use single quotes rather than back quotes, to make it clear that 'energy' is a string literal.
openmmml/models/macepotential.py
Outdated
# Convert it to TorchScript and save it.
module = torch.jit.script(maceForce)
module.save(filename)

# Create the TorchForce and add it to the System.
force = openmmtorch.TorchForce(filename)
Saving to a file isn't needed anymore. TorchForce can now take a model directly. You can get rid of the filename argument and just write

force = openmmtorch.TorchForce(module)
Should this be changed here too?
openmm-ml/openmmml/models/anipotential.py
Lines 142 to 149 in 09be77f
# Convert it to TorchScript and save it.
module = torch.jit.script(aniForce)
module.save(filename)

# Create the TorchForce and add it to the System.
force = openmmtorch.TorchForce(filename)
Yes, it would probably be good to clean that up eventually.
Ok, I'll take care of it.
openmmml/models/macepotential.py
Outdated
atoms: Optional[Iterable[int]],
forceGroup: int,
precision: Optional[str] = None,
returnEnergyType: str = "energy",
Here you set the default to 'energy', but in the docstring you say the default is 'interaction_energy'. Probably interaction energy is a better default.
…ry model writing, and corrected default value of returnEnergyType parameter
Thank you for the review, @peastman. All fixed now.
I've also added a fix to ensure tensors are placed on the correct devices. I was only testing the implementation on the CPU and Reference platforms and was not having any issues. I've now conducted testing across all platforms, and everything should be functioning correctly.
Looks good. Is this ready to merge?
Ready to merge.
Thanks!
This adds support for MACE models: https://github.com/ACEsuit/mace.
The MACE model framework is well defined. With this PR a user can run an OpenMM simulation using a MACE model trained with the scripts in https://github.com/ACEsuit/mace.
cc @jharrymoore @jmichel80