Track Sabre decay heuristic on physical qubits #10756
Conversation
This converts all of our Rust-space components that are concerned with virtual and physical qubits (via the `NLayout` class) to represent those integers with newtypes wrapping them as half-size indices, and changes the `NLayout` API to only allow access via these types and not by raw integers. The way this is done should have no overhead in time usage from Rust, and is actually a reduction in memory usage because all the qubits are stored at half the width they were previously (for most systems). This is done to add type safety to all our components that were concerned with the mixing of these two qubits. The implementation of this commit already turned up the logical bug fixed by gh-10756.
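The change itself is in Rust, but the type-safety idea can be sketched with Python's `typing.NewType`; this is an illustrative analogue, not the actual `NLayout` API:

```python
from typing import NewType

# Illustrative analogue of the Rust newtypes; not Qiskit's actual API.
VirtualQubit = NewType("VirtualQubit", int)
PhysicalQubit = NewType("PhysicalQubit", int)

def virtual_to_physical(
    layout: dict[VirtualQubit, PhysicalQubit], qubit: VirtualQubit
) -> PhysicalQubit:
    # A static type checker now rejects a PhysicalQubit (or a bare int)
    # passed where a VirtualQubit is required -- exactly the kind of
    # virtual/physical mix-up that motivated the Rust change.
    return layout[qubit]
```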
(force-pushed from a1cb72b to 431b910)
* Use qubit-index newtypes in Rust space

  This converts all of our Rust-space components that are concerned with virtual and physical qubits (via the `NLayout` class) to represent those integers with newtypes wrapping them as half-size indices, and changes the `NLayout` API to only allow access via these types and not by raw integers. The way this is done should have no overhead in time usage from Rust, and is actually a reduction in memory usage because all the qubits are stored at half the width they were previously (for most systems). This is done to add type safety to all our components that were concerned with the mixing of these two qubits. The implementation of this commit already turned up the logical bug fixed by gh-10756.

* Separate out type-safe enforcement in `NeighborTable` to another commit

  This reverts the changes to the coupling-map generation and the indexing into `NeighborTable` that were designed to make it more type safe in favour of doing that in a separate commit.
The custom `decay` heuristic is supposed to penalise increases in depth in the output. The output qubits are the physical qubits, so the depth is defined in relation to those. Since its introduction in gh-4537, this heuristic has instead been tracking the depth on the virtual qubits, which, due to the swaps, is not necessarily related to the depth in the output. Notably, this commit actually makes the depth for routing large volumetric circuits slightly _worse_ on average (at least for heavy-hex topologies). This may be because the effect on the heuristic is overweighted, or because the depth tracking resets after each gate is routed (and occasionally in between) to be flat across all qubits, rather than reflecting the actual depth each qubit is subject to.
(force-pushed from 431b910 to e586f01)
The change itself seems fine. But I think we need to get to the bottom of why the depth gets worse, whereas we expect it to get better given that we incentivize the algorithm to pick non-overlapping qubits. Is it possible to debug before/after on a small example to see what's going on? It may be sensitive to hyper-parameters. One thing that I noticed (which is not related to this PR but was introduced earlier in #9012) is that the cost function is different compared to the original one in my PR. The original cost function for the 'decay' simply took the cost for 'lookahead' and multiplied it by the max decay factor.
I wrote a bit more about potential problems with the decay heuristic up in the PR comment.
OK, let's go ahead with this PR as it restores the original intent. Maybe you can experiment a bit with how the decay factors are reset, and change it to something that reflects the true depth a bit more. But thinking more about it, I think this routing algorithm is really made for minimizing the number of CNOTs, as it assigns scores to each individual SWAP rather than to layers of SWAPs. The decay factor was probably shoehorned in later to penalize long runs of serial SWAPs, but one would probably get better depth by scoring layers of SWAPs (i.e. a matching on the coupling-map graph). I think Sabre's real value is its bidirectional search algorithm for finding an initial layout. For routing, I think many improvements can be made (which is why I wanted to separate the layout from the routing initially, and make the layout work with any routing algorithm).
Sounds good to me, thanks Ali. I would be super interested in doing the research, but pragmatically, I think it might be something I have to step aside from and delegate to your team (unless I have any immediate, obviously good ideas) in order to have time for my other priorities.
Summary
The custom `decay` heuristic is supposed to penalise increases in depth in the output. The output qubits are the physical qubits, so the depth is defined in relation to those. Since its introduction in gh-4537, this heuristic has instead been tracking the depth on the virtual qubits, which, due to the swaps, is not necessarily related to the depth in the output.

Notably, this commit actually makes the depth for routing large volumetric circuits slightly _worse_ on average (at least for heavy-hex topologies). This may be because the effect on the heuristic is overweighted, or because the depth tracking resets after each gate is routed (and occasionally in between) to be flat across all qubits, rather than reflecting the actual depth each qubit is subject to.
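For orientation, here is a schematic of how the decay factor enters the swap score. The names, the exact score combination, and the `DECAY_RATE` value are illustrative rather than Qiskit's Rust internals; the point of this PR is which index the `decay` table is keyed on:

```python
DECAY_RATE = 0.001  # assumed per-swap increment, for illustration only

def swap_score(swap, decay, base_cost):
    # `swap` is a pair of *physical* qubits after this fix; the bug was that
    # `decay` was previously indexed by the corresponding virtual qubits.
    p0, p1 = swap
    return max(decay[p0], decay[p1]) * base_cost

def apply_swap(swap, decay):
    # Penalise re-using the same physical qubits in consecutive swaps.
    for qubit in swap:
        decay[qubit] += DECAY_RATE
    # The table is reset to be flat across all qubits after each gate is
    # routed (and occasionally in between), as described above -- one of the
    # suspected reasons the depth does not improve.
```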
Details and comments
Copying in some data/comments of mine from a Slack thread:
Using a setup of:
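(The original snippet was lost in the copy; the following is a plausible reconstruction, in which the heavy-hex distance, circuit size, and seed are assumptions rather than values from the thread.)

```python
from qiskit.circuit.library import QuantumVolume
from qiskit.transpiler import CouplingMap, PassManager
from qiskit.transpiler.passes import SabreSwap

# Assumed topology: a distance-7 heavy-hex lattice (115 qubits).
coupling = CouplingMap.from_heavy_hex(7)
num_qubits = coupling.size()
# One .decompose() exposes the two-qubit SU(4) blocks for routing.
circuit = QuantumVolume(num_qubits, seed=0).decompose()
```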
I ran
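(Again a sketch: the ~100 samples are inferred from the quoted standard errors, std ≈ 150 with SEM ≈ 15, and the absolute depths depend on the full setup.)

```python
import statistics

# `coupling` and `circuit` come from the setup block above.
depths = []
for seed in range(100):
    # Route with the decay heuristic; one pass per seed.
    pm = PassManager(SabreSwap(coupling, heuristic="decay", seed=seed))
    depths.append(pm.run(circuit).depth())

mean = statistics.mean(depths)
sem = statistics.stdev(depths) / len(depths) ** 0.5  # standard error of the mean
print(f"depth = {mean:.0f} ± {sem:.0f}")
```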
and found that the current implementation results in a distribution of depths with mean 6172(15), and switching it over results in a distribution of depths with mean 6289(15), where the uncertainties are the standard error of the mean. Both distributions have a standard deviation consistent with 150.
I also ran my statistics on `QuantumVolume` seeds `range(20)` and 20 single-trial Sabre passes, and the total means went from the current implementation's 6281(8) to the physical-decay form's 6379(9), so it wasn't just a function of one particular QV pattern. Note that the means are higher since there's only one trial per run now (which is somewhat encouraging: we select the trial result based on swap count, not depth, but the depth mean increasing at least hints that they are correlated).