Generating Random Error Generators #473

Open · sserita opened this issue Jul 30, 2024 · 4 comments

@sserita (Contributor) commented Jul 30, 2024

Is your feature request related to a problem? Please describe.
We often like to test new protocols or features with random noise to see how they perform in simulation. It is fairly straightforward to generate realistic random noise for full/TP parameterized operations using the depolarize, rotate, or kick functions. However, we currently don't have a good tool for generating random CPTP operations that are parameterized by error generators.

Describe the solution you'd like
A function (maybe in lindbladtools?) which can generate valid random CPTP maps parameterized by error generators. Ideal features include:

  • being able to specify noise strengths for the H, S, C, and A sectors (CA may have to be tied together, and even SCA being tied together would be a start)
  • being able to choose what kind of distribution the rates are drawn from (normal makes sense for H, uniform makes more sense for SCA)
  • different noise strengths based on weight or qubit support (maybe weight-2 should be smaller than weight-1, maybe one qubit is particularly bad)

Of course, the challenge is providing these sorts of tunable parameters while ensuring that the SCA matrix is PSD, so that the map is CPTP overall...

Describe alternatives you've considered
We've tried variants of this in the past, normally by sampling some matrices and using A@A.T tricks to enforce the PSD constraint. Let's do this once correctly and put it in pyGSTi so we can stop rediscovering this every few months. :)
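For context, the ad-hoc version we keep rewriting looks roughly like the sketch below (plain numpy, not pyGSTi code; the normalization and the mapping from this block onto error generator rates are hand-waved):

```python
import numpy as np

def random_psd_block(dim, strength, seed=None):
    """Sample a random PSD matrix with trace ~ `strength`.

    This is the A @ A.conj().T trick: any matrix of that form is positive
    semidefinite by construction, so using it as the SCA coefficient block
    keeps the resulting map CPTP.
    """
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    M = A @ A.conj().T                   # PSD by construction
    M *= strength / np.trace(M).real     # crude control of the overall rate
    return M
```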

@sserita sserita added the enhancement Request for a new feature or a change to an existing feature label Jul 30, 2024
@sserita sserita added this to the 0.9.13 milestone Jul 30, 2024
@coreyostrove (Contributor) commented

Just a heads-up to avoid working at cross purposes: I've started working on an implementation of this as part of a soon-to-be-published branch. The currently envisioned features are (thanks to @kevincyoung for letting me pick his brain):

  • Specification of error generator sectors
  • Setting a target generator infidelity (if none is specified this wouldn't be fixed)
  • Relative weights on the error generator sectors (with respect to some norm)
  • Specification of maximum (Pauli) weights for each of the error generator sectors.

This overlaps a good deal with your list of desiderata, with the possible exception of:

different noise strengths based on weight or qubit support (maybe weight-2 should be smaller than weight-1, maybe one qubit is particularly bad)

I hadn't considered this, but it should be doable. It's an option I can definitely envision someone wanting down the road, so implementing it now is in the spirit of 'do it once so we don't need to do this again.'
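For concreteness, the interface I'm currently picturing looks roughly like the sketch below; every name and default here is a placeholder, not a committed API:

```python
# Hypothetical interface sketch -- names and defaults are illustrative only.
def random_errorgen_rates(num_qubits,
                          errorgen_types=('H', 'S', 'C', 'A'),
                          generator_infidelity=None,  # if None, not fixed
                          sector_weights=None,        # relative weight per sector
                          max_weights=None,           # max Pauli weight per sector
                          seed=None):
    """Return {elementary error generator label: sampled rate}."""
    ...
```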

Following up on your parenthetical:

(CA may have to be tied together, and even SCA being tied together would be a start)

I hadn't considered this. My current plan was for this function to return a dictionary with ElementaryErrorgenLabel objects as keys and rates as values, i.e. the dictionary format used by the from_elementary_errorgens constructor for LindbladErrorgen and by the various set_errorgen_coefficients methods. Such dictionaries are in general agnostic to the implementation details at the LindbladCoefficientBlock level, though. What would currently happen if I passed the constructor a dictionary with just, say, S and C terms? Would it allocate parameters for the A terms and just set them to zero?
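For reference, the sort of dictionary I have in mind would look something like the sketch below (plain tuples stand in for the actual ElementaryErrorgenLabel objects, and the rates are made up):

```python
# Illustrative only: real code would use ElementaryErrorgenLabel instances
# rather than bare tuples, and the rates below are arbitrary.
errorgen_rates = {
    ('H', 'XI'): 1.2e-3,         # Hamiltonian error generator rate
    ('S', 'ZZ'): 4.0e-4,         # stochastic (weight-2) error rate
    ('C', 'XI', 'IX'): 1.0e-5,   # correlation term between two Pauli labels
}
# A dict in this format could then be handed to
# LindbladErrorgen.from_elementary_errorgens or to a
# set_errorgen_coefficients call, as described above.
```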

@sserita (Contributor, Author) commented Jul 31, 2024

What would currently happen if I passed the constructor a dictionary with just, say, S and C terms? Would it allocate parameters for the A terms and just set them to zero?

In this case, the LindbladCoefficientBlock that is allocated would be the full SCA one, i.e. yes, you would have A terms present in the parameterization and they would be set to 0. Only the following blocks are possible:

  • H (block_type = "ham")
  • S (block_type = "other_diagonal")
  • SCA (block_type = "other")

@coreyostrove (Contributor) commented

I see. This is definitely a scope-creep sort of question, but how much of a lift would it be to decouple the C and A terms, e.g. having just an SC or SA block? This is something we've mentioned wanting before for finer-tuned reduced model construction. It is a parallel issue to this thread, however.

@sserita (Contributor, Author) commented Jul 31, 2024

Medium lift IMO. Luckily it's mostly localized to LindbladCoefficientBlock, but I wouldn't be surprised if there are inherent assumptions elsewhere in the code that if you have C, you have A, or vice versa. We'd probably want to build off of param_mode="elements", but that is significantly less used/tested compared to param_mode="cholesky", so I wouldn't be surprised if we have to do some fixing there too. Plus I have this intuition that the Cholesky decomp is critical to enforcing the PSD constraint, so I'm missing a piece in understanding how we are enforcing CPTP with the "elements" parameterization.

Edit: Thinking through it more, it's possible we aren't enforcing CPTP with "elements", and that "elements" is just how we do the GLND parameterization. In that case, I'm still missing a conceptual piece about how to separate C and A while using param_mode="cholesky".
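For reference, here is the linear-algebra picture I have in mind for why a Cholesky-style parameterization guarantees PSD (a bare numpy sketch, not pyGSTi's actual internals):

```python
import numpy as np

# Conceptually, param_mode="cholesky" builds the SCA coefficient block as
# M = L @ L.conj().T from a lower-triangular L whose entries are the free
# parameters.  Any matrix of that form is PSD, so CP comes for free.
dim = 3
params = np.random.default_rng(0).normal(size=dim * dim)
L = np.zeros((dim, dim), dtype=complex)
idx = 0
for i in range(dim):
    for j in range(i + 1):
        if i == j:
            L[i, j] = params[idx]; idx += 1                        # real diagonal
        else:
            L[i, j] = params[idx] + 1j * params[idx + 1]; idx += 2  # complex off-diagonal
M = L @ L.conj().T    # eigenvalues are guaranteed >= 0
```

If I have the conventions right, the C and A rates live in the real and imaginary parts of the off-diagonal entries of M, which is why restricting L doesn't cleanly drop just one of them.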

To avoid scope creep, I'd be inclined to keep the SCA sectors tied together for this PR, and then update the random generation when we decouple them later. Really, nothing needs to happen in the random-generation code if you are describing errors via the dictionaries and we are automagically detecting block structure; under the hood we would just allow an SC block instead of an SCA block in your example.
