
Improve gradient performance #53

Merged
merged 11 commits into master from feature/improve_gradient_performance on Mar 2, 2021

Conversation

thangleiter
Member

This is a refactor meant to improve the performance of the gradient calculation and to align its API with the main modules.

Currently, there are two solutions implemented for testing:

  • vectorized: gradient.calculate_derivative_of_control_matrix_from_scratch, which in turn uses gradient.control_matrix_at_timestep_derivative. Python loops are mostly avoided in favor of closed einsum expressions.
  • looped: gradient.calculate_derivative_of_control_matrix_from_scratch_loop, which loops over one time dimension in a Python loop (similar to the calculation of the control matrix) and uses gradient.control_matrix_at_timestep_derivative_loop. A toy sketch of the two strategies follows below.
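
To make the trade-off concrete, here is a minimal sketch of the two strategies. The array shapes, variable names, and the single two-operand contraction are invented for illustration and do not reproduce the actual expressions in gradient.py:

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, d = 50, 4                              # number of time steps, Hilbert space dimension
A = rng.standard_normal((n_t, d**2, d, d))  # stand-in for basis-expanded operators per step
B = rng.standard_normal((n_t, d, d))        # stand-in for step propagators


def contract_vectorized(A, B):
    # One closed einsum expression over all axes, including the time axis g.
    return np.einsum('gaij,gjk->gaik', A, B)


def contract_looped(A, B):
    # The same contraction, but with an explicit Python loop over the time axis.
    out = np.empty(A.shape)
    for g in range(A.shape[0]):
        out[g] = np.einsum('aij,jk->aik', A[g], B[g])
    return out


assert np.allclose(contract_vectorized(A, B), contract_looped(A, B))
```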

Benchmarks indicate that for dimensions d > 2 the looped version is faster, because the vectorized einsum expressions become too large:
[Benchmark plots: timings, timings_d2, timings_d4]
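
Continuing the toy sketch above, a small timeit harness gives a rough feel for how the two variants scale with d. This two-operand toy will not necessarily reproduce the crossover seen in the benchmarks, where the real vectorized expressions involve more operands and much larger intermediates:

```python
import timeit

for d in (2, 4, 8):
    A = rng.standard_normal((n_t, d**2, d, d))
    B = rng.standard_normal((n_t, d, d))
    t_vec = timeit.timeit(lambda: contract_vectorized(A, B), number=20)
    t_loop = timeit.timeit(lambda: contract_looped(A, B), number=20)
    print(f'd = {d}: vectorized {t_vec:.3f} s, looped {t_loop:.3f} s')
```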

Should improve performance by a factor of ~2-5 in most cases. For now, there are two working solutions: one vectorized and one where one time axis is looped over, similar to the calculation of the filter function.

codecov bot commented Mar 2, 2021

Codecov Report

Merging #53 (2718040) into master (3659145) will decrease coverage by 0.16%.
The diff coverage is 91.06%.


@@            Coverage Diff             @@
##           master      #53      +/-   ##
==========================================
- Coverage   97.21%   97.04%   -0.17%     
==========================================
  Files           9        9              
  Lines        2187     2203      +16     
  Branches      502      499       -3     
==========================================
+ Hits         2126     2138      +12     
- Misses         25       29       +4     
  Partials       36       36              
Impacted Files                       Coverage Δ
filter_functions/numeric.py          94.82% <81.25%> (-2.09%) ⬇️
filter_functions/util.py             97.05% <89.47%> (-0.63%) ⬇️
filter_functions/gradient.py         95.32% <95.04%> (+7.69%) ⬆️
filter_functions/basis.py            98.06% <100.00%> (-0.06%) ⬇️
filter_functions/plotting.py         99.65% <100.00%> (ø)
filter_functions/pulse_sequence.py   97.32% <100.00%> (-0.06%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Looped calculation is faster in almost all cases.
liouville_derivative and control_matrix_at_timestep_derivative do not really make sense on their own.
Used in both Basis and PulseSequence constructors.
@thangleiter thangleiter merged commit b79c7a3 into master Mar 2, 2021
@thangleiter thangleiter deleted the feature/improve_gradient_performance branch March 3, 2021 06:52