
Hyper-optimized tensor network contractions for complete networks #175

Open
shadibeh opened this issue Apr 13, 2023 · 3 comments

Comments

@shadibeh

shadibeh commented Apr 13, 2023

What is your issue?

Hi everyone.

I am using quimb to optimize a QAOA circuit. The documentation has the example "Bayesian Optimizing QAOA Circuit Energy", which works really well for a 3-regular graph.

However, when I changed it to a fully connected graph, it needs a lot of memory and is rather slow.
I tried different optimizers such as greedy and kahypar, but that did not resolve the problem.

Could you kindly recommend an optimizer or some technique to resolve this?

Thanks

@jcmgray
Owner

jcmgray commented Apr 13, 2023

Do you mean specifically the contraction optimization is slow or the whole computation process?

All-to-all is just a much more challenging geometry: it has approximately n times more gates and no structure to exploit, so the expectation is simply that it will be much harder.
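A quick back-of-the-envelope count makes the scaling concrete (a sketch, just counting graph edges, i.e. two-qubit gate locations per QAOA layer — these helper names are illustrative, not from quimb):

```python
# Compare the number of two-qubit gate locations (graph edges) per QAOA
# layer for a 3-regular graph versus a complete (all-to-all) graph.

def n_edges_3reg(n):
    # each of the n vertices has degree 3; every edge is counted twice
    return 3 * n // 2

def n_edges_complete(n):
    # complete graph K_n has one edge per unordered vertex pair
    return n * (n - 1) // 2

n = 50
print(n_edges_3reg(n))      # 75
print(n_edges_complete(n))  # 1225
```

So at n = 50 the complete graph has roughly (n - 1)/3 ≈ 16 times more two-qubit gates per layer than the 3-regular case, with no lattice structure for the contraction optimizer to exploit.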

@shadibeh
Author

Thanks for the quick reply.
I want to find the contraction cost for the problem, so I used the following command:

```python
local_exp_rehs = [
    circ_ex.local_expectation_rehearse(weight * ZZ, edge, optimize=opt)
    for edge, weight in tqdm.tqdm(list(terms.items()))
]
```

It requires a large amount of memory just to find the contraction cost for a fully connected graph of N=50 qubits. Is there any way to reduce the required memory?

Thanks

@jcmgray
Owner

jcmgray commented Apr 14, 2023

I'm not sure there is anything easy to do to reduce the memory — each layer of gates will add several thousand tensors to the network. If you can profile the code using e.g. filprofiler (or whichever tool you prefer) and identify which data structures or calls are using the most time and memory, then that is useful information that I can look into for optimizing things.
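As a minimal sketch of such profiling using only the standard library's `tracemalloc` (here `build_network` is a hypothetical stand-in for the actual quimb rehearsal loop — swap in the `local_expectation_rehearse` calls from above):

```python
import tracemalloc

def build_network():
    # hypothetical stand-in for the memory-heavy quimb rehearsal calls;
    # replace the body with the local_expectation_rehearse loop
    return [list(range(1000)) for _ in range(100)]

tracemalloc.start()
result = build_network()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB")
```

Unlike filprofiler, this won't break usage down per allocation site out of the box, but `tracemalloc.take_snapshot().statistics('lineno')` can attribute memory to individual lines if a coarser first pass points at a suspect region.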
