Address NumPy 2 data type promotion warnings
One of the changes in NumPy 2 is to the [behavior of type
promotion](https://numpy.org/devdocs/numpy_2_0_migration_guide.html#changes-to-numpy-data-type-promotion).
A possible negative impact of the changes is that some operations
involving scalar types can lead to lower precision, or even overflow.
For example, `uint8(100) + 200` previously (in NumPy < 2.0) produced a
`uint16` value, but now results in a `uint8` value and an overflow
_warning_ (not an error). This can have an impact on Cirq. For example,
in Cirq, simulator measurement result values are `uint8`'s, and in
some places, arrays of values are summed; this leads to overflows if
the sum exceeds 255. It would not be appropriate to change measurement
values to be larger than `uint8`, so in cases like this, the proper
solution is probably to make sure that values are promoted to `uint16`
or larger wherever they are summed or otherwise numerically
manipulated.
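
As a concrete, hypothetical illustration of the failure mode (not code
taken from Cirq itself), summing `uint8` measurement bits with Python's
built-in `sum` now stays in `uint8` and silently wraps around:

```python
import numpy as np

# 300 simulated measurement bits, one uint8 per repetition.
bits = np.ones(300, dtype=np.uint8)

# Python's built-in sum() adds the uint8 scalars one at a time; under
# NumPy 2's weak promotion the running total stays uint8 and wraps
# past 255 (300 % 256 == 44), with only a RuntimeWarning.
print(sum(bits))                     # 44

# Promoting before summing keeps the result correct.
print(sum(bits.astype(np.uint16)))   # 300
```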

NumPy 2 offers a new option
(`np._set_promotion_state("weak_and_warn")`) to produce warnings where
data types are changed. Commit 6cf50eb adds a new command-line option
to our pytest framework, such that running

```bash
check/pytest --warn-numpy-data-promotion
```

will turn on this NumPy setting. Running `check/pytest` with this
option enabled revealed quite a lot of warnings. The present commit
changes code in places where those warnings were raised, in an effort
to eliminate as many of them as possible.
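
For reference, such a flag can be wired up with standard pytest hooks.
The following is only a sketch of the idea, not the actual
implementation from commit 6cf50eb, whose details may differ:

```python
# conftest.py -- illustrative sketch only.
import numpy as np


def pytest_addoption(parser):
    parser.addoption(
        "--warn-numpy-data-promotion",
        action="store_true",
        default=False,
        help="Warn whenever NumPy 2 type promotion changes a result dtype.",
    )


def pytest_configure(config):
    if config.getoption("--warn-numpy-data-promotion"):
        # Private NumPy 2 hook described in the NumPy 2.0 migration guide.
        np._set_promotion_state("weak_and_warn")
```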

It is certainly the case that not all of the type promotion warnings
are meaningful. Unfortunately, I found it sometimes difficult to be
sure which ones _are_ meaningful, in part because Cirq's code has many
layers and makes heavy use of ndarrays, so tracing the impact of a
type demotion (say, from `float64` to `float32`) through those layers
was hard. In view of this, I erred on the side of caution and tried to
avoid losses of precision. The principles followed in the changes are
roughly the following:

* Don't worry about warnings for changes from `complex64` to
  `complex128`, as this promotion obviously does not reduce precision.

* If a warning involves an operation using an ndarray, change the code
  to get the actual data type of the elements in the array rather than
  assume a specific data type. This is why some of the changes look
  like the following, where the code reaches into an ndarray to get
  the dtype of an element and later uses that dtype's `.type` attribute
  (the corresponding scalar constructor) to cast the value of something
  else:

    ```python
    dtype = args.target_tensor.flat[0].dtype
    .....
    args.target_tensor[subspace] *= dtype.type(x)
    ```

* In cases where the above was not possible, or where it was obvious
  what the type must always be, the changes add type casts with
  explicit types like `complex(x)` or `np.float64(x)`.
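
A self-contained sketch of both casting styles (with made-up variable
names, not code from this commit):

```python
import numpy as np

# Hypothetical stand-ins; `state` plays the role of args.target_tensor.
state = np.ones(4, dtype=np.complex64)
phase = 1j ** 0.5  # a plain Python complex scalar

# Style 1: derive the scalar type from the data itself, so the
# multiplication stays in the tensor's own precision.
dtype = state.flat[0].dtype
state *= dtype.type(phase)

# Style 2: where the required type is obvious, cast explicitly.
state *= complex(0.5)
scale = np.float64(0.25)

print(state.dtype, scale.dtype)  # complex64 float64
```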

It is likely that this approach resulted in some unnecessary
up-promotion of values and may have impacted run-time performance.
Some simple overall timing of `check/pytest` did not reveal a glaring
negative impact of the changes, but that doesn't mean real
applications won't be impacted. Perhaps a future review can evaluate
whether speedups are possible.
mhucka committed Sep 15, 2024
1 parent 37899df commit eeeabef
Showing 22 changed files with 80 additions and 60 deletions.
4 changes: 3 additions & 1 deletion cirq-core/cirq/contrib/quantum_volume/quantum_volume.py
@@ -86,7 +86,9 @@ def compute_heavy_set(circuit: cirq.Circuit) -> List[int]:
# The output wave function is a vector from the result value (big-endian) to
# the probability of that bit-string. Return all of the bit-string
# values that have a probability greater than the median.
return [idx for idx, amp in enumerate(results.state_vector()) if np.abs(amp**2) > median]
results_vector = results.state_vector()
return [idx for idx, amp in enumerate(results_vector)
if np.abs(np.square(amp)) > median]


@dataclass
6 changes: 4 additions & 2 deletions cirq-core/cirq/devices/grid_qubit.py
@@ -152,7 +152,8 @@ def __add__(self, other: Union[Tuple[int, int], Self]) -> Self:
'Can only add integer tuples of length 2 to '
f'{type(self).__name__}. Instead was {other}'
)
return self._with_row_col(row=self._row + other[0], col=self._col + other[1])
return self._with_row_col(row=np.int64(self._row) + other[0],
col=np.int64(self._col) + other[1])

def __sub__(self, other: Union[Tuple[int, int], Self]) -> Self:
if isinstance(other, _BaseGridQid):
@@ -171,7 +172,8 @@ def __sub__(self, other: Union[Tuple[int, int], Self]) -> Self:
"Can only subtract integer tuples of length 2 to "
f"{type(self).__name__}. Instead was {other}"
)
return self._with_row_col(row=self._row - other[0], col=self._col - other[1])
return self._with_row_col(row=np.int64(self._row) - other[0],
col=np.int64(self._col) - other[1])

def __radd__(self, other: Tuple[int, int]) -> Self:
return self + other
3 changes: 2 additions & 1 deletion cirq-core/cirq/experiments/fidelity_estimation.py
@@ -192,7 +192,8 @@ def xeb_fidelity(
output_probabilities = state_vector_to_probabilities(output_state)
bitstring_probabilities = output_probabilities[bitstrings].tolist()
else:
bitstring_probabilities = [abs(amplitudes[bitstring]) ** 2 for bitstring in bitstrings]
bitstring_probabilities = [np.abs(amplitudes[bitstring], dtype=float) ** 2
for bitstring in bitstrings]
return estimator(dim, bitstring_probabilities)


8 changes: 5 additions & 3 deletions cirq-core/cirq/linalg/transformations.py
@@ -92,7 +92,7 @@ def dephase(v):
if r == 0:
return 1j if i < 0 else -1j

return np.exp(-1j * np.arctan2(i, r))
return np.exp(-1j * complex(np.arctan2(i, r)), dtype=v.dtype)

# Zero the phase at this entry in both matrices.
return a * dephase(a[k]), b * dephase(b[k])
@@ -237,13 +237,14 @@ def _build_from_slices(
"""
d = len(source.shape)
out[...] = 0
dtype = source.flat[0].dtype
for arg in args:
source_slice: List[Any] = [slice(None)] * d
target_slice: List[Any] = [slice(None)] * d
for sleis in arg.slices:
source_slice[sleis.axis] = sleis.source_index
target_slice[sleis.axis] = sleis.target_index
out[tuple(target_slice)] += arg.scale * source[tuple(source_slice)]
out[tuple(target_slice)] += dtype.type(arg.scale) * source[tuple(source_slice)]
return out


@@ -564,7 +565,8 @@ def sub_state_vector(
best_candidate = max(candidates, key=lambda c: float(np.linalg.norm(c, 2)))
best_candidate = best_candidate / np.linalg.norm(best_candidate)
left = np.conj(best_candidate.reshape((keep_dims,))).T
coherence_measure = sum([abs(np.dot(left, c.reshape((keep_dims,)))) ** 2 for c in candidates])
coherence_measure = sum([np.abs(np.dot(left, c.reshape((keep_dims,))), dtype=float) ** 2
for c in candidates])

if protocols.approx_eq(coherence_measure, 1, atol=atol):
return np.exp(2j * np.pi * np.random.random()) * best_candidate.reshape(ret_shape)
2 changes: 1 addition & 1 deletion cirq-core/cirq/ops/clifford_gate.py
@@ -673,7 +673,7 @@ def pauli_tuple(self, pauli: Pauli) -> Tuple[Pauli, bool]:
to = x_to * z_to # Y = iXZ
to._coefficient *= 1j
# pauli_mask returns a value between 0 and 4 for [I, X, Y, Z].
to_gate = Pauli._XYZ[to.pauli_mask[0] - 1]
to_gate = Pauli._XYZ[to.pauli_mask[0] - np.uint8(1)]
return (to_gate, bool(to.coefficient != 1.0))

def dense_pauli_string(self, pauli: Pauli) -> 'cirq.DensePauliString':
18 changes: 10 additions & 8 deletions cirq-core/cirq/ops/common_gates.py
@@ -426,8 +426,8 @@ def _apply_unitary_(self, args: 'protocols.ApplyUnitaryArgs') -> Optional[np.nda
return NotImplemented
zero = args.subspace_index(0)
one = args.subspace_index(1)
args.available_buffer[zero] = -1j * args.target_tensor[one]
args.available_buffer[one] = 1j * args.target_tensor[zero]
args.available_buffer[zero] = np.complex128(-1j) * args.target_tensor[one]
args.available_buffer[one] = np.complex128(1j) * args.target_tensor[zero]
p = 1j ** (2 * self._exponent * self._global_shift)
if p != 1:
args.available_buffer *= p
@@ -542,7 +542,7 @@ def __init__(self, *, rads: value.TParamVal):
rads: Radians to rotate about the Y axis of the Bloch sphere.
"""
self._rads = rads
super().__init__(exponent=rads / _pi(rads), global_shift=-0.5)
super().__init__(exponent=rads / _pi(rads), global_shift=float(-0.5))

def _with_exponent(self, exponent: value.TParamVal) -> 'Ry':
return Ry(rads=exponent * _pi(exponent))
@@ -638,10 +638,11 @@ def _apply_unitary_(self, args: 'protocols.ApplyUnitaryArgs') -> Optional[np.nda
if protocols.is_parameterized(self):
return None

dtype = args.target_tensor.flat[0].dtype
for i in range(1, self._dimension):
subspace = args.subspace_index(i)
c = 1j ** (self._exponent * 4 * i / self._dimension)
args.target_tensor[subspace] *= c
args.target_tensor[subspace] *= dtype.type(c)
p = 1j ** (2 * self._exponent * self._global_shift)
if p != 1:
args.target_tensor *= p
@@ -991,11 +992,12 @@ def _apply_unitary_(self, args: 'protocols.ApplyUnitaryArgs') -> Optional[np.nda

zero = args.subspace_index(0)
one = args.subspace_index(1)
dtype = args.target_tensor.flat[0].dtype
args.target_tensor[one] -= args.target_tensor[zero]
args.target_tensor[one] *= -0.5
args.target_tensor[one] *= -dtype.type(0.5)
args.target_tensor[zero] -= args.target_tensor[one]
p = 1j ** (2 * self._exponent * self._global_shift)
args.target_tensor *= np.sqrt(2) * p
args.target_tensor *= np.sqrt(2, dtype=dtype) * dtype.type(p)
return args.target_tensor

def _decompose_(self, qubits):
@@ -1005,7 +1007,6 @@ def _decompose_(self, qubits):
yield cirq.Y(q) ** 0.5
yield cirq.XPowGate(global_shift=-0.25 + self.global_shift).on(q)
return

yield YPowGate(exponent=0.25).on(q)
yield XPowGate(exponent=self._exponent, global_shift=self.global_shift).on(q)
yield YPowGate(exponent=-0.25).on(q)
@@ -1097,8 +1098,9 @@ def _apply_unitary_(
if protocols.is_parameterized(self):
return NotImplemented

c = 1j ** (2 * self._exponent)
one_one = args.subspace_index(0b11)
dtype = args.target_tensor[one_one].dtype
c = dtype.type(1j ** (2 * self._exponent))
args.target_tensor[one_one] *= c
p = 1j ** (2 * self._exponent * self._global_shift)
if p != 1:
6 changes: 3 additions & 3 deletions cirq-core/cirq/ops/dense_pauli_string.py
@@ -277,7 +277,7 @@ def __mul__(self, other):
if split is not None:
p, i = split
mask = np.copy(self.pauli_mask)
mask[i] ^= p
mask[i] ^= np.int64(p)
return concrete_class(
pauli_mask=mask,
coefficient=self.coefficient * _vectorized_pauli_mul_phase(self.pauli_mask[i], p),
@@ -293,7 +293,7 @@ def __rmul__(self, other):
if split is not None:
p, i = split
mask = np.copy(self.pauli_mask)
mask[i] ^= p
mask[i] ^= np.int64(p)
return type(self)(
pauli_mask=mask,
coefficient=self.coefficient * _vectorized_pauli_mul_phase(p, self.pauli_mask[i]),
@@ -552,7 +552,7 @@ def __imul__(self, other):
if split is not None:
p, i = split
self._coefficient *= _vectorized_pauli_mul_phase(self.pauli_mask[i], p)
self.pauli_mask[i] ^= p
self.pauli_mask[i] ^= np.int64(p)
return self

return NotImplemented
3 changes: 2 additions & 1 deletion cirq-core/cirq/ops/fourier_transform.py
@@ -138,9 +138,10 @@ def _apply_unitary_(self, args: 'cirq.ApplyUnitaryArgs'):
return NotImplemented

n = int(np.prod([args.target_tensor.shape[k] for k in args.axes], dtype=np.int64))
dtype = args.target_tensor.flat[0].dtype
for i in range(n):
p = 1j ** (4 * i / n * self.exponent)
args.target_tensor[args.subspace_index(big_endian_bits_int=i)] *= p
args.target_tensor[args.subspace_index(big_endian_bits_int=i)] *= dtype.type(p)

return args.target_tensor

3 changes: 2 additions & 1 deletion cirq-core/cirq/ops/parity_gates.py
@@ -335,7 +335,8 @@ def _apply_unitary_(self, args: 'protocols.ApplyUnitaryArgs') -> Optional[np.nda
if global_phase != 1:
args.target_tensor *= global_phase

relative_phase = 1j ** (2 * self.exponent)
dtype = args.target_tensor.flat[0].dtype
relative_phase = dtype.type(1j ** (2 * self.exponent))
zo = args.subspace_index(0b01)
oz = args.subspace_index(0b10)
args.target_tensor[oz] *= relative_phase
2 changes: 1 addition & 1 deletion cirq-core/cirq/ops/pauli_string.py
@@ -740,7 +740,7 @@ def _expectation_from_density_matrix_no_validation(
while any(result.shape):
result = np.trace(result, axis1=0, axis2=len(result.shape) // 2)

return float(np.real(result * self.coefficient))
return float(np.real(result * result.dtype.type(self.coefficient)))

def zip_items(
self, other: 'cirq.PauliString[TKey]'
2 changes: 1 addition & 1 deletion cirq-core/cirq/ops/pauli_string_test.py
@@ -980,7 +980,7 @@ def test_expectation_from_state_vector_invalid_input():
rho_or_wf = 0.5 * np.ones((2, 2), dtype=np.complex64)
_ = ps.expectation_from_state_vector(rho_or_wf, q_map)

wf = np.arange(16, dtype=np.complex64) / np.linalg.norm(np.arange(16))
wf = np.arange(16, dtype=np.complex64) / np.linalg.norm(np.arange(16, dtype=np.complex64))
with pytest.raises(ValueError, match='shape'):
ps.expectation_from_state_vector(wf.reshape((16, 1)), q_map_2)
with pytest.raises(ValueError, match='shape'):
4 changes: 2 additions & 2 deletions cirq-core/cirq/ops/swap_gates.py
@@ -244,8 +244,8 @@ def _apply_unitary_(self, args: 'protocols.ApplyUnitaryArgs') -> Optional[np.nda
args.available_buffer[zo] = args.target_tensor[zo]
args.target_tensor[zo] = args.target_tensor[oz]
args.target_tensor[oz] = args.available_buffer[zo]
args.target_tensor[zo] *= 1j
args.target_tensor[oz] *= 1j
args.target_tensor[zo] *= args.target_tensor[zo].dtype.type(1j)
args.target_tensor[oz] *= args.target_tensor[oz].dtype.type(1j)
p = 1j ** (2 * self._exponent * self._global_shift)
if p != 1:
args.target_tensor *= p
4 changes: 2 additions & 2 deletions cirq-core/cirq/qis/measures.py
@@ -234,7 +234,7 @@ def _numpy_arrays_to_state_vectors_or_density_matrices(
def _fidelity_state_vectors_or_density_matrices(state1: np.ndarray, state2: np.ndarray) -> float:
if state1.ndim == 1 and state2.ndim == 1:
# Both state vectors
return np.abs(np.vdot(state1, state2)) ** 2
return np.abs(np.vdot(state1, state2), dtype=float) ** 2
elif state1.ndim == 1 and state2.ndim == 2:
# state1 is a state vector and state2 is a density matrix
return np.real(np.conjugate(state1) @ state2 @ state1)
@@ -245,7 +245,7 @@ def _fidelity_state_vectors_or_density_matrices(state1: np.ndarray, state2: np.n
# Both density matrices
state1_sqrt = _sqrt_positive_semidefinite_matrix(state1)
eigs = linalg.eigvalsh(state1_sqrt @ state2 @ state1_sqrt)
trace = np.sum(np.sqrt(np.abs(eigs)))
trace = np.sum(np.sqrt(np.abs(eigs, dtype=float)))
return trace**2
raise ValueError(
'The given arrays must be one- or two-dimensional. '
7 changes: 4 additions & 3 deletions cirq-core/cirq/qis/states.py
@@ -609,8 +609,8 @@ def bloch_vector_from_state_vector(
"""
rho = density_matrix_from_state_vector(state_vector, [index], qid_shape=qid_shape)
v = np.zeros(3, dtype=np.float32)
v[0] = 2 * np.real(rho[0][1])
v[1] = 2 * np.imag(rho[1][0])
v[0] = np.float32(2) * np.real(rho[0][1])
v[1] = np.float32(2) * np.imag(rho[1][0])
v[2] = np.real(rho[0][0] - rho[1][1])

return v
@@ -738,7 +738,8 @@ def dirac_notation(
ket = "|{}⟩"
for x in range(len(perm_list)):
format_str = "({:." + str(decimals) + "g})"
val = round(state_vector[x].real, decimals) + 1j * round(state_vector[x].imag, decimals)
val = (round(state_vector[x].real, decimals)
+ np.complex128(1j) * round(state_vector[x].imag, decimals))

if round(val.real, decimals) == 0 and round(val.imag, decimals) != 0:
val = val.imag
14 changes: 7 additions & 7 deletions cirq-core/cirq/sim/clifford/clifford_simulator_test.py
@@ -23,25 +23,25 @@ def test_run_no_repetitions():
simulator = cirq.CliffordSimulator()
circuit = cirq.Circuit(cirq.H(q0), cirq.measure(q0))
result = simulator.run(circuit, repetitions=0)
assert sum(result.measurements['q(0)']) == 0
assert sum(result.measurements['q(0)'].astype(np.uint16)) == 0


def test_run_hadamard():
q0 = cirq.LineQubit(0)
simulator = cirq.CliffordSimulator()
circuit = cirq.Circuit(cirq.H(q0), cirq.measure(q0))
result = simulator.run(circuit, repetitions=100)
assert sum(result.measurements['q(0)'])[0] < 80
assert sum(result.measurements['q(0)'])[0] > 20
assert sum(result.measurements['q(0)'].astype(np.uint16))[0] < 80
assert sum(result.measurements['q(0)'].astype(np.uint16))[0] > 20


def test_run_GHZ():
(q0, q1) = (cirq.LineQubit(0), cirq.LineQubit(1))
simulator = cirq.CliffordSimulator()
circuit = cirq.Circuit(cirq.H(q0), cirq.H(q1), cirq.measure(q0))
result = simulator.run(circuit, repetitions=100)
assert sum(result.measurements['q(0)'])[0] < 80
assert sum(result.measurements['q(0)'])[0] > 20
assert sum(result.measurements['q(0)'].astype(np.uint16))[0] < 80
assert sum(result.measurements['q(0)'].astype(np.uint16))[0] > 20


def test_run_correlations():
@@ -392,8 +392,8 @@ def test_clifford_circuit_2(qubits, split):
circuit.append(cirq.measure(qubits[0]))
result = cirq.CliffordSimulator(split_untangled_states=split).run(circuit, repetitions=100)

assert sum(result.measurements['q(0)'])[0] < 80
assert sum(result.measurements['q(0)'])[0] > 20
assert sum(result.measurements['q(0)'].astype(np.uint16))[0] < 80
assert sum(result.measurements['q(0)'].astype(np.uint16))[0] > 20


@pytest.mark.parametrize('split', [True, False])
10 changes: 5 additions & 5 deletions cirq-core/cirq/sim/sparse_simulator_test.py
@@ -115,7 +115,7 @@ def test_run_repetitions_terminal_measurement_stochastic():
q = cirq.LineQubit(0)
c = cirq.Circuit(cirq.H(q), cirq.measure(q, key='q'))
results = cirq.Simulator().run(c, repetitions=10000)
assert 1000 <= sum(v[0] for v in results.measurements['q']) < 9000
assert 1000 <= sum(np.int64(v[0]) for v in results.measurements['q']) < 9000


@pytest.mark.parametrize('dtype', [np.complex64, np.complex128])
@@ -255,7 +255,7 @@ def test_run_mixture(dtype: Type[np.complexfloating], split: bool):
simulator = cirq.Simulator(dtype=dtype, split_untangled_states=split)
circuit = cirq.Circuit(cirq.bit_flip(0.5)(q0), cirq.measure(q0))
result = simulator.run(circuit, repetitions=100)
assert 20 < sum(result.measurements['q(0)'])[0] < 80
assert 20 < sum(result.measurements['q(0)'].astype(np.uint16))[0] < 80


@pytest.mark.parametrize('dtype', [np.complex64, np.complex128])
@@ -265,8 +265,8 @@ def test_run_mixture_with_gates(dtype: Type[np.complexfloating], split: bool):
simulator = cirq.Simulator(dtype=dtype, split_untangled_states=split, seed=23)
circuit = cirq.Circuit(cirq.H(q0), cirq.phase_flip(0.5)(q0), cirq.H(q0), cirq.measure(q0))
result = simulator.run(circuit, repetitions=100)
assert sum(result.measurements['q(0)'])[0] < 80
assert sum(result.measurements['q(0)'])[0] > 20
assert sum(result.measurements['q(0)'].astype(np.uint16))[0] < 80
assert sum(result.measurements['q(0)'].astype(np.uint16))[0] > 20


@pytest.mark.parametrize('dtype', [np.complex64, np.complex128])
@@ -1385,7 +1385,7 @@ def test_noise_model():
simulator = cirq.Simulator(noise=noise_model)
result = simulator.run(circuit, repetitions=100)

assert 20 <= sum(result.measurements['q(0)'])[0] < 80
assert 20 <= sum(result.measurements['q(0)'].astype(np.uint16))[0] < 80


def test_separated_states_str_does_not_merge():
4 changes: 2 additions & 2 deletions cirq-core/cirq/sim/state_vector_simulation_state.py
@@ -230,7 +230,7 @@ def prepare_into_buffer(k: int):

for index in range(len(kraus_tensors)):
prepare_into_buffer(index)
weight = float(np.linalg.norm(self._buffer) ** 2)
weight = float(np.linalg.norm(self._buffer)) ** 2

if weight > fallback_weight:
fallback_weight_index = index
@@ -248,7 +248,7 @@ def prepare_into_buffer(k: int):
weight = fallback_weight
index = fallback_weight_index

self._buffer /= np.sqrt(weight)
self._buffer /= np.sqrt(weight, dtype=self._buffer.dtype)
self._swap_target_tensor_for(self._buffer)
return index
