
Add general integer exponents #128

Draft

wants to merge 3 commits into base: main
Conversation

Contributor

@kvhuguenin kvhuguenin commented Dec 16, 2024

Add exponents >3 to the main branch.
The core changes in the potential have already been implemented.
The most important change that still has to be implemented and tested in the rest of the code is that exponent is now of data type int rather than float.
Also, we need some good unit tests. I was thinking of a combination of regression tests for some of the more complicated structures, plus generalized Madelung constants that I have computed in the past with a different code.
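For the classic p = 1 case, one self-contained way to get a trustworthy reference number for such a regression test (a sketch; the generalized-p reference values would still come from the external computations mentioned above) is Benson's rapidly converging series for the NaCl Madelung constant:

```python
import math

def madelung_nacl(terms: int = 20) -> float:
    """NaCl Madelung constant (magnitude) via Benson's formula:
    M = 12*pi * sum over odd m, n >= 1 of sech^2((pi/2) * sqrt(m^2 + n^2)).
    """
    total = 0.0
    for m in range(1, 2 * terms, 2):  # odd m = 1, 3, 5, ...
        for n in range(1, 2 * terms, 2):  # odd n
            total += 1.0 / math.cosh(0.5 * math.pi * math.hypot(m, n)) ** 2
    return 12.0 * math.pi * total

print(madelung_nacl())  # ≈ 1.7475645946, the NaCl Madelung constant
```

A handful of terms already gives machine-precision-level convergence, so this could serve as a cheap ground truth independent of the Ewald code itself.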

Contributor (creator of pull-request) checklist

  • Tests updated (for new features and bugfixes)?
  • Documentation updated (for new features)?
  • Issue referenced (for PRs that solve an issue)?

Reviewer checklist

  • CHANGELOG updated with public API or any other important changes?

📚 Documentation preview 📚: https://torch-pme--128.org.readthedocs.build/en/128/

Contributor

@PicoCentauri PicoCentauri left a comment


As a test, I would just do a regression test for the function.

@@ -17,6 +17,27 @@ def gamma(x: torch.Tensor) -> torch.Tensor:
return torch.exp(gammaln(x))


# Auxiliary function for stable Fourier transform implementation
def gammainc_upper_over_powerlaw(exponent, zz):
Contributor


Give the equation in the docstring here.

return (
(2 - 4 * zz) * torch.exp(-zz)
+ 4 * torch.sqrt(torch.pi) * zz**1.5 * torch.erfc(torch.sqrt(zz))
) / 3
Contributor


We need an error message if the exponent is not supported.

I think there is a check already at the init of InversePowerLawPotential.

I would maybe call this function in the init of InversePowerLawPotential for the given exponent, maybe with zz=0, to avoid a duplicated test. But up to you.

Contributor

@E-Rum E-Rum left a comment


Looks good, but there are some things that need to be double-checked. I will continue updating the tests once we fix them.

if exponent == 2:
return torch.sqrt(torch.pi / zz) * torch.erfc(torch.sqrt(zz))
if exponent == 3:
return -torch.expi(-zz)
Contributor


As far as I know, torch.expi doesn't exist in the torch module ...
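Indeed, torch does not seem to provide the exponential integral. If a SciPy fallback is acceptable at this stage (an assumption; it breaks autograd and GPU execution, so it may only be useful for reference values), the identity -Ei(-z) = E1(z) for z > 0 gives a bridge via scipy.special.exp1 (the helper name below is made up):

```python
import torch
from scipy.special import exp1  # E1(z) = -Ei(-z) for z > 0

def neg_expi_neg(zz: torch.Tensor) -> torch.Tensor:
    """Evaluate -Ei(-zz) = E1(zz) through SciPy (CPU-only, not differentiable)."""
    out = exp1(zz.detach().cpu().numpy())
    return torch.from_numpy(out).to(zz)

zz = torch.tensor([0.5, 1.0, 2.0], dtype=torch.float64)
print(neg_expi_neg(zz))  # E1(1.0) ≈ 0.2193839
```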

Comment on lines +21 to +43
def gammainc_upper_over_powerlaw(exponent, zz):
if exponent not in [1, 2, 3, 4, 5, 6]:
raise ValueError(f"Unsupported exponent: {exponent}")

if exponent == 1:
return torch.exp(-zz) / zz
if exponent == 2:
return torch.sqrt(torch.pi / zz) * torch.erfc(torch.sqrt(zz))
if exponent == 3:
return -torch.expi(-zz)
if exponent == 4:
return 2 * (
torch.exp(-zz) - torch.sqrt(torch.pi * zz) * torch.erfc(torch.sqrt(zz))
)
if exponent == 5:
return torch.exp(-zz) + zz * torch.expi(-zz)
if exponent == 6:
return (
(2 - 4 * zz) * torch.exp(-zz)
+ 4 * torch.sqrt(torch.pi) * zz**1.5 * torch.erfc(torch.sqrt(zz))
) / 3


Contributor


Can we double-check that everything here is correct? With the new implementation, test_values_ewald.py fails for inverse potentials with an exponent of 1, as it does not coincide with the Madelung constant.

@E-Rum
Contributor

E-Rum commented Dec 26, 2024

The current status is as follows:

For arbitrary odd integer exponents (p >= 3), we need the exponential integral, which is not implemented in PyTorch. The current solution is to use the existing SplinedPotential to model 1/r^p-type potentials. Before fully adopting it, however, I plan to use the scipy module, which does include the exponential integral, to verify that the values generated by SplinedPotential are consistent with the theoretical ones.

There is also a specific issue when p = 3. In this case, certain variables become degenerate, leading to situations where we raise values to the power of zero and even encounter division by zero (e.g. in the background_correction function).

Can someone provide insights into this issue and suggest potential solutions?
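One possible reading of the p = 3 degeneracy (a guess at the mechanism, not a diagnosis of the actual background_correction code): the neutralizing-background integral for a 1/r^p potential scales as

```math
\int_{r_0}^{R} \frac{4\pi r^{2}}{r^{p}}\, dr \;=\; \frac{4\pi \left( R^{3-p} - r_0^{3-p} \right)}{3-p},
```

which has a pole at p = 3, where the true behavior is logarithmic (4π ln(R/r₀)). Any closed form carrying a 1/(3 - p) factor, or an exponent proportional to 3 - p that vanishes, would need to be replaced by its analytic limit at p = 3 rather than evaluated directly.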
