Remove Sabre's manual insertion-order iteration and unnecessary sorts
The manual insertion-order tracking was initially added in 02a1939
(gh-9012) during a complete rewrite of Sabre's scoring algorithms, to
demonstrate that the new algorithm could be made fully RNG compatible.
It maintained the same iteration order as the previous implementation
once the front-layer data structure was swapped from a raw list to a
hash-based map.

The new data structure doesn't require the manual tracking in order to
be reproducible, as long as its iteration order is independent of the
hash seed.  This swaps the relevant places over to `IndexMap` so that a
lot of the manual tracking can be removed.  In casual testing, this
didn't appear to have much effect on performance, but the resulting
code is much simpler.
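
As a rough illustration (not part of the diff; the real code keys on
petgraph `NodeIndex` values inside `FrontLayer`), an `IndexMap` built
with an `ahash::RandomState` hasher iterates in insertion order, so the
traversal depends only on the sequence of operations and not on the
per-process hash seed:

use indexmap::IndexMap;

fn main() {
    // Hypothetical standalone sketch of the reproducibility argument.
    let mut nodes: IndexMap<u32, [usize; 2], ahash::RandomState> =
        IndexMap::with_hasher(ahash::RandomState::default());
    nodes.insert(7, [0, 1]);
    nodes.insert(2, [2, 3]);
    nodes.insert(5, [1, 4]);
    // Iteration follows insertion order (7, 2, 5), not hash order, so the
    // same sequence of insertions yields the same traversal regardless of
    // the seed used for lookups.
    for (node, qubits) in &nodes {
        println!("node {node} acts on qubits {qubits:?}");
    }
}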

The sorts (and the deliberate canonicalisation of the swaps) were only
necessary to maintain RNG compatibility with the pre-relative-score
Sabre, but since the swaps are now iterated through in a deterministic
order and guaranteed to be generated only once each (the previous
version used a hashset to remove duplicates), neither step is needed
any more.
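
A simplified sketch (names and signatures here are assumptions, not the
commit's code) of why no deduplication or canonicalisation is needed:
mirroring the `virtual_neighbor > v || !front_layer.is_active(...)`
guard in `obtain_swaps`, a swap between two active qubits is only
emitted from its lower-numbered endpoint, so each unordered pair
appears exactly once and in a fixed order.

// Hypothetical helper illustrating once-only swap generation.
fn candidate_swaps(
    active_qubits: &[usize],
    neighbors_of: &dyn Fn(usize) -> Vec<usize>,
    is_active: &dyn Fn(usize) -> bool,
) -> Vec<[usize; 2]> {
    let mut swaps = Vec::new();
    for &v in active_qubits {
        for n in neighbors_of(v) {
            // Emit (v, n) only from the smaller active endpoint; an inactive
            // neighbour can't emit it back, so no hashset or sort is required.
            if n > v || !is_active(n) {
                swaps.push([v, n]);
            }
        }
    }
    swaps
}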

For a depth-5 QV circuit at 1081 qubits on a heavy-hex coupling map,
this is worth almost a 2x speedup in routing performance.
jakelishman committed Feb 9, 2023
1 parent 26d1554 commit 0db3ac3
Showing 3 changed files with 20 additions and 83 deletions.
8 changes: 8 additions & 0 deletions releasenotes/notes/sabre-sort-rng-056f26f205e38bab.yaml
@@ -0,0 +1,8 @@
---
features:
- |
Internal tweaks to the routing algorithm in :class:`.SabreSwap`, used in
transpilation of non-dynamic circuits at all non-zero optimization levels,
have sped up routing for very large circuits. As an example, the time to
route a depth-5 :class:`.QuantumVolume` circuit for a 1081-qubit heavy-hex
coupling map is approximately halved.
83 changes: 11 additions & 72 deletions src/sabre_swap/layer.rs
@@ -10,7 +10,9 @@
// copyright notice, and modified files need to carry a notice indicating
// that they have been altered from the originals.

use ahash;
use hashbrown::HashMap;
use indexmap::IndexMap;
use ndarray::prelude::*;
use rustworkx_core::petgraph::prelude::*;

@@ -21,94 +23,38 @@ use crate::nlayout::NLayout;
/// unsatisfied 2q predecessor, which disqualifies it from being in the front layer.
pub struct FrontLayer {
/// Map of the (index to the) node to the qubits it acts on.
nodes: HashMap<NodeIndex, [usize; 2]>,
nodes: IndexMap<NodeIndex, [usize; 2], ahash::RandomState>,
/// Map of each qubit to the node that acts on it and the other qubit that node acts on, if this
/// qubit is active (otherwise `None`).
qubits: Vec<Option<(NodeIndex, usize)>>,
/// Tracking the insertion order of nodes, so iteration can always go through them in a
/// deterministic order. This is important for reproducibility from a set seed - when building
/// up the extended set with a fixed, finite size, the iteration order through the nodes of the
/// front layer is important. We need to maintain the insertion order even with removals from
/// the layer.
iteration_order: Vec<Option<NodeIndex>>,
/// The index of the first populated entry in the `iteration_order`. If the iteration order is
/// empty, this will be 0.
iteration_start: usize,
/// The index one past the last populated entry in the `iteration_order`. If the iteration
/// order is empty, this will be 0.
iteration_end: usize,
}

impl FrontLayer {
pub fn new(num_qubits: usize) -> Self {
FrontLayer {
// This is the maximum capacity of the front layer, since each qubit must be one of a
// pair, and can only have one gate in the layer.
nodes: HashMap::with_capacity(num_qubits / 2),
nodes: IndexMap::with_capacity_and_hasher(
num_qubits / 2,
ahash::RandomState::default(),
),
qubits: vec![None; num_qubits],
iteration_order: vec![None; num_qubits],
iteration_start: 0,
iteration_end: 0,
}
}

/// Add a node into the front layer, with the two qubits it operates on. This usually has
/// constant-time complexity, except if the iteration-order buffer is full.
/// Add a node into the front layer, with the two qubits it operates on.
pub fn insert(&mut self, index: NodeIndex, qubits: [usize; 2]) {
let [a, b] = qubits;
self.qubits[a] = Some((index, b));
self.qubits[b] = Some((index, a));
self.nodes.insert(index, qubits);

self.iteration_order[self.iteration_end] = Some(index);
self.iteration_end += 1;
if self.iteration_end == self.iteration_order.len() {
// Condense items back to the start of the vector.
let mut ptr = 0;
for i in self.iteration_start..self.iteration_end {
if let Some(value) = self.iteration_order[i] {
self.iteration_order[i] = None;
self.iteration_order[ptr] = Some(value);
ptr += 1;
}
}
self.iteration_start = 0;
self.iteration_end = ptr;
}
}

/// Remove a node from the front layer.
pub fn remove(&mut self, index: &NodeIndex) {
let [q0, q1] = self.nodes.remove(index).unwrap();
self.qubits[q0] = None;
self.qubits[q1] = None;

// If the element was at the start of the iteration order, advance the pointer.
match self.iteration_order[self.iteration_start] {
Some(a) if a == *index => {
self.iteration_order[self.iteration_start] = None;
if self.iteration_start + 1 == self.iteration_end {
self.iteration_start = 0;
self.iteration_end = 0;
}
while self.iteration_start < self.iteration_end
&& self.iteration_order[self.iteration_start].is_none()
{
self.iteration_start += 1;
}
}
_ => (),
}
// Search through and remove the element. We leave a gap and preserve the insertion order.
for i in (self.iteration_start + 1)..self.iteration_end {
match self.iteration_order[i] {
Some(a) if a == *index => {
self.iteration_order[i] = None;
break;
}
_ => (),
}
}
}

/// Query whether a qubit has an active node.
@@ -189,24 +135,17 @@ impl FrontLayer {

/// Iterator over the nodes and the pair of qubits they act on.
pub fn iter(&self) -> impl Iterator<Item = (&NodeIndex, &[usize; 2])> {
(&self.iteration_order)[self.iteration_start..self.iteration_end]
.iter()
.filter_map(move |node_opt| node_opt.as_ref().map(|node| (node, &self.nodes[node])))
self.nodes.iter()
}

/// Iterator over the nodes.
pub fn iter_nodes(&self) -> impl Iterator<Item = &NodeIndex> {
(&self.iteration_order)[self.iteration_start..self.iteration_end]
.iter()
.filter_map(|node_opt| node_opt.as_ref())
self.nodes.keys()
}

/// Iterator over the qubits that have active nodes on them.
pub fn iter_active(&self) -> impl Iterator<Item = &usize> {
(&self.iteration_order)[self.iteration_start..self.iteration_end]
.iter()
.filter_map(move |node_opt| node_opt.as_ref().map(|node| &self.nodes[node]))
.flatten()
self.nodes.values().flatten()
}
}

12 changes: 1 addition & 11 deletions src/sabre_swap/mod.rs
@@ -80,14 +80,7 @@ fn obtain_swaps<'a>(
.filter_map(move |&neighbor| {
let virtual_neighbor = layout.phys_to_logic[neighbor];
if virtual_neighbor > v || !front_layer.is_active(virtual_neighbor) {
// This normalisation line is only necessary to ensure equal output in the
// swap-sorting stage later to the previous version of this algorithm; it can be
// removed when we break that matching. It isn't needed for determinism.