
Testing the code in the paper #12

Merged: 4 commits into main, Sep 9, 2023
Conversation

@rsln-s (Contributor) commented Sep 9, 2023

Adding tests for the code snippets in the upcoming paper "Fast Simulation of High-Depth QAOA Circuits", which describes QOKit.

@rsln-s (Contributor Author) commented Sep 9, 2023

test_listing_3 is perhaps OK to skip, but test_listing_2 must pass. Currently, I get the following error on a GPU instance:

======================================================== FAILURES ========================================================
_____________________________________________________ test_listing_2 _____________________________________________________

    def test_listing_2():
        import qokit
    
        simclass = qokit.fur.choose_simulator_xycomplete()
        n = 8
        terms = qokit.labs.get_terms(n)
>       sim = simclass(n, terms=terms)

tests/test_examples_in_the_paper.py:30: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
qokit/fur/nbcuda/qaoa_simulator.py:24: in __init__
    super().__init__(n_qubits, costs, terms)
qokit/fur/qaoa_simulator_base.py:63: in __init__
    self._hc_diag = self._diag_from_terms(terms)
qokit/fur/nbcuda/qaoa_simulator.py:32: in _diag_from_terms
    precompute_gpu(0, self.n_qubits, terms, out)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

rank = 0, n_local_qubits = 8, terms = [(0, 1, 3, 4), (0, 1, 4, 5), (0, 2, 3, 5), (3, 7), (4, 6), (5, 7), ...]
out = <numba.cuda.cudadrv.devicearray.DeviceNDArray object at 0x7f138519b7c0>

    def compute_costs(rank: int, n_local_qubits: int, terms, out):
        offset = rank << n_local_qubits
        n = len(out)
        zero_init_kernel.forall(n)(out)
    
>       for coef, pos in terms:
E       ValueError: too many values to unpack (expected 2)

qokit/fur/diagonal_precomputation/gpu_numba.py:39: ValueError

@rsln-s (Contributor Author) commented Sep 9, 2023

The same issue happens in the CPU-only run, as evidenced by the GitHub workflow logs:

...
>       for coeff, pos in weighted_terms:
E       ValueError: too many values to unpack (expected 2)
...

@rsln-s (Contributor Author) left a comment:


LGTM modulo the removal of the assertion in qokit/labs.py.

@@ -579,6 +601,4 @@ def get_gate_optimized_terms_greedy(N: int, number_of_gate_zones: int = 4, seed:
j += 1
k += swapped

assert set(circuit) == set(get_terms(N))
rsln-s (Contributor Author) replied:

Any reason to drop this check? Now that get_terms returns weights as well, should this become assert set(circuit) == set(get_term_indices(N)) or something like that?
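A sketch of what that index-only comparison could look like, assuming get_terms now returns (coefficient, indices) pairs; get_term_indices is the hypothetical helper suggested above, not an existing QOKit function:

```python
def get_terms(N):
    # hypothetical weighted form: (coefficient, index-tuple) pairs
    return [(1.0, (0, 1)), (2.0, (1, 2)), (1.5, (0, 2))]

def get_term_indices(N):
    # hypothetical helper: strip the weights, keep only the index tuples
    return [pos for _coef, pos in get_terms(N)]

circuit = [(1, 2), (0, 2), (0, 1)]  # example gate order built elsewhere
assert set(circuit) == set(get_term_indices(3))
```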

Contributor replied:

This is already checked with tests, so running get_terms here seemed redundant. We can keep the assert, but then the tests are a bit silly :)

rsln-s (Contributor Author) replied:

Ah, right! Thanks for catching that.

@rsln-s merged commit dd06be6 into main on Sep 9, 2023 (12 checks passed).
@alex124585 deleted the test_code_from_paper branch on September 20, 2023.