
make single-precision floating point the default for fields and materials #1550

Closed
oskooi wants to merge 3 commits from the single_precision_default branch

Conversation

@oskooi (Collaborator) commented Apr 19, 2021

Following the recently added support for single precision (#1544), and prior to eventually providing a user option to select the floating-point precision at runtime (#1549), this PR makes single-precision floating point the default for the fields and material arrays. The main changes in this PR are updates to several unit tests that had previously been failing in single precision, mainly by loosening the relative tolerance used when comparing against hard-coded reference values. All tests are now passing.

Some additional comments:

  • Two of the tests involving the CW solver (array_metadata.py) and the eigensolver (eigfreq.py) had to be essentially disabled because single precision was causing either slow or poor convergence for the iterative solvers.
  • The results for the saturable-absorption unit test multilevel_atom.py changed significantly (by nearly two orders of magnitude) when switching from double to single precision. It is not yet clear why this is the case. For now, this test has been disabled.
  • It seems that for certain materials involving Lorentzian susceptibilities, using single precision causes the fields to blow up at low resolutions (which does not happen with double precision), so the resolution must be increased to ensure stability. Since this is likely expected behavior rather than a bug, perhaps it should be documented somewhere in the user manual?
  • I haven't yet verified that the results for the tutorial examples are unaffected.
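As a rough illustration (plain NumPy, not Meep code) of why the hard-coded comparison tolerances had to change: float32 carries only about 7 significant decimal digits versus ~16 for float64, so a relative tolerance chosen for double-precision results will spuriously reject a correct single-precision result.

```python
import numpy as np

# Illustrative sketch (plain NumPy, not Meep code): float32 carries roughly
# 7 significant decimal digits versus ~16 for float64, so unit tests that
# compare against hard-coded double-precision reference values need a
# correspondingly looser relative tolerance when the fields are float32.
reference = 1.0 / 3.0                  # double-precision reference value
single = float(np.float32(reference))  # same value rounded to float32

# A tolerance suited to double precision rejects the float32 result...
assert not np.isclose(single, reference, rtol=1e-12, atol=0.0)
# ...while one near float32 machine epsilon (~1.2e-7) accepts it.
assert np.isclose(single, reference, rtol=1e-6, atol=0.0)
```

(Note that `atol` is set to zero here so the comparison is purely relative; NumPy's default `atol=1e-8` would mask the difference for values of this magnitude.)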

@oskooi (Collaborator, Author) commented Apr 25, 2021

Looking into why the results for the unit test multilevel_atom.py are so different for single- and double-precision fields, it turns out the cause of the discrepancy is roundoff error, because the simulation is run for a large number of time steps (~10^7). I verified this by comparing the fields in the laser cavity (the structure used in the unit test) from the two runs. The norm of the difference in the fields gradually increases with the simulation time: ~10^-8 after the first 10^2 timesteps vs. ~10^-2 after the first 10^5 timesteps. That's roughly an increase of six orders of magnitude in the error for a three-orders-of-magnitude increase in the runtime.
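This kind of error growth can be sketched with plain NumPy (this is an illustration of float32 roundoff accumulation, not the actual Meep update equations): repeatedly accumulating the same increment in float32 drifts away from the equivalent float64 accumulation, and the relative drift grows with the number of steps.

```python
import numpy as np

# Illustrative sketch (plain NumPy, not the actual Meep time-stepping):
# accumulate the same increment in float32 and float64 and measure the
# relative drift between the two. The drift grows with the number of
# steps, mirroring the error growth seen in the long multilevel_atom.py
# run. The increment dt = 0.1 is arbitrary (and not exactly
# representable in binary, which makes the effect easy to see).
def relative_drift(num_steps: int, dt: float = 0.1) -> float:
    acc32 = np.float32(0.0)
    acc64 = 0.0
    dt32 = np.float32(dt)
    for _ in range(num_steps):
        acc32 = np.float32(acc32 + dt32)  # rounds to float32 every step
        acc64 += dt                       # float64 accumulation
    return abs(float(acc32) - acc64) / acc64

for n in (10**2, 10**4, 10**5):
    print(f"{n:>7d} steps: relative drift {relative_drift(n):.2e}")
```

The drift at 10^5 steps is several orders of magnitude larger than at 10^2 steps, consistent with the field-norm comparison above.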

The saturable-absorption feature is a rare example of needing to run a simulation for a long time in order to obtain accurate results (particularly when comparing against an analytic model such as the steady-state ab initio laser theory). Based on these results, we should simply add a note to the tutorial explicitly recommending that this feature be used with double-precision fields.

@stevengj (Collaborator) commented

I think it would cause too many problems for existing users to make this the default. For the time being, until we can make it a runtime option, it will just have to be something for "power users" who are willing to recompile.

@oskooi (Collaborator, Author) commented Apr 29, 2021

Replaced by #1560.

@oskooi oskooi closed this Apr 29, 2021
@oskooi oskooi deleted the single_precision_default branch February 5, 2022 22:55