Added support for EstimatorV2 primitives (#48)
* Migrating `qiskit_algorithms` (qiskit-community#817)

* Update README.md

* Generalize the Einstein summation signature

* Add reno

* Update Copyright

* Rename and add test

* Update Copyright

* Add docstring for `test_get_einsum_signature`

* Correct spelling

* Disable spellcheck for comments

* Add `docstring` in pylint dict

* Delete example in docstring

* Add Einstein in pylint dict

* Add full use case in einsum dict

* Spelling and type ignore

* Spelling and type ignore

* Spelling and type ignore

* Spelling and type ignore

* Spelling and type ignore

* Remove for loop in einsum function and remove Literal arguments (1/2)

* Remove for loop in einsum function and remove Literal arguments (1/2)

* Remove for loop in einsum function and remove Literal arguments (2/2)

* Update RuntimeError msg

* Update RuntimeError msg - line too long

* Trigger CI

* Merge algos, globals.random to fix

* Fixed `algorithms_globals`

* Import /tests and run CI locally

* Fix copyrights and some spellings

* Ignore mypy in 8 instances

* Merge spell dicts

* Black reformatting

* Black reformatting

* Add reno

* Lint sanitize

* Pylint

* Pylint

* Pylint

* Pylint

* Fix relative imports in tutorials

* Fix relative imports in tutorials

* Remove algorithms from Jupyter magic methods

* Temporarily disable "Run stable tutorials" tests

* Change the docstrings with imports from qiskit_algorithms

* Styling

* Update qiskit_machine_learning/optimizers/gradient_descent.py

Co-authored-by: Declan Millar <declan.millar@ibm.com>

* Update qiskit_machine_learning/optimizers/optimizer_utils/learning_rate.py

Co-authored-by: Declan Millar <declan.millar@ibm.com>

* Add more tests for utils

* Add more tests for optimizers: adam, bobyqa, gsls and imfil

* Fix random seed for volatile optimizers

* Fix random seed for volatile optimizers

* Add more tests

* Pylint dict

* Activate scikit-quant-0.8.2

* Remove scikit-quant methods

* Remove scikit-quant methods (2)

* Edit the release notes and Qiskit version 1+

* Edit the release notes and Qiskit version 1+

* Add Qiskit 1.0 upgrade in reno

* Add Qiskit 1.0 upgrade in reno

* Add Qiskit 1.0 upgrade in reno

* Apply line breaks

* Restructure line breaks

---------

Co-authored-by: FrancescaSchiav <FrancescaSchiav@users.noreply.github.com>
Co-authored-by: M. Emre Sahin <40424147+OkuyanBoga@users.noreply.github.com>
Co-authored-by: Declan Millar <declan.millar@ibm.com>

* Revamp readme pt2 (qiskit-community#822)

* Restructure README.md
---------

Co-authored-by: Steve Wood <40241007+woodsp-ibm@users.noreply.github.com>

* Added support for EstimatorV2 primitives

* Update qiskit_machine_learning/neural_networks/estimator_qnn.py

Co-authored-by: Edoardo Altamura <38359901+edoaltamura@users.noreply.github.com>

* Update qiskit_machine_learning/neural_networks/estimator_qnn.py

Co-authored-by: Edoardo Altamura <38359901+edoaltamura@users.noreply.github.com>

* Update qiskit_machine_learning/neural_networks/estimator_qnn.py

Co-authored-by: Edoardo Altamura <38359901+edoaltamura@users.noreply.github.com>

* Update qiskit_machine_learning/neural_networks/estimator_qnn.py

Co-authored-by: Edoardo Altamura <38359901+edoaltamura@users.noreply.github.com>

* Update qiskit_machine_learning/gradients/param_shift/param_shift_estimator_gradient.py

Co-authored-by: Edoardo Altamura <38359901+edoaltamura@users.noreply.github.com>

* Update qiskit_machine_learning/gradients/spsa/spsa_estimator_gradient.py

Co-authored-by: Edoardo Altamura <38359901+edoaltamura@users.noreply.github.com>

* Update qiskit_machine_learning/gradients/param_shift/param_shift_estimator_gradient.py

Co-authored-by: Edoardo Altamura <38359901+edoaltamura@users.noreply.github.com>

* Update qiskit_machine_learning/neural_networks/estimator_qnn.py

Co-authored-by: Edoardo Altamura <38359901+edoaltamura@users.noreply.github.com>

* Update qiskit_machine_learning/neural_networks/estimator_qnn.py

Co-authored-by: Edoardo Altamura <38359901+edoaltamura@users.noreply.github.com>

* Update qiskit_machine_learning/gradients/param_shift/param_shift_estimator_gradient.py

Co-authored-by: Edoardo Altamura <38359901+edoaltamura@users.noreply.github.com>

* Fix lint errors due to Pylint 3.3.0 update in CI (qiskit-community#833)

* disable=too-many-positional-arguments

* Transfer pylint rc to toml

* Transfer pylint rc to toml

* Cleaner statements

* Remove Python 3.8 from CI (qiskit-community#824) (qiskit-community#826)

* Remove Python 3.8 in CI (qiskit-community#824)

* Correct `tmp` dirs (qiskit-community#818)

* Correct unit py version (qiskit-community#818)

* Add reno (qiskit-community#818)

* Finalize removal of py38 (qiskit-community#818)

* Spelling

* Remove duplicate tmp folder

* Updated the release note

* Bump min pyversion in toml

* Remove ipython constraints

* Update reno

* Added unit tests for estimatorqnnV2 and minor fixes

* Make black

* Make lint and changes to V1/V2 choice logic

* Update requirements

* Add default precision

* Update estimator tests

* Change num qubits in backend

* Allow for num_qubits=None

* Fix shape in parameter shift

* Fix shape in parameter shift

* Fix shape observables

* Fix shape observables

* Change default precision to match base estimator

* Fix remaining shape issues

* Estimator seed has no effect in local testing

* Fix argnames and suppress error tolerance for test_estimator_qnn_v2

* Added pass manager to the gradients.

* quick bugfix for isa_circuits

* Updating PUBs for estimatorqnn, updating test_estimator_qnn_v2 for ISA circs and relaxing tolerances

* Lint and formatting

* Transpiling observables for ISA gradient circuits

* fixing apply_layout

---------

Co-authored-by: Edoardo Altamura <38359901+edoaltamura@users.noreply.github.com>
Co-authored-by: FrancescaSchiav <FrancescaSchiav@users.noreply.github.com>
Co-authored-by: Declan Millar <declan.millar@ibm.com>
Co-authored-by: Steve Wood <40241007+woodsp-ibm@users.noreply.github.com>
Co-authored-by: oscar-wallis <oscar.wallis@outlook.com>
6 people authored Nov 7, 2024
1 parent 2bbb57c commit 1712ebe
Showing 7 changed files with 718 additions and 50 deletions.
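
As a quick orientation before the diff, here is a minimal usage sketch of what this commit enables. It is only a sketch, assuming a Qiskit 1.x environment where `StatevectorEstimator` stands in for any `BaseEstimatorV2` implementation, `generate_preset_pass_manager` builds the pass manager, and the concrete `ParamShiftEstimatorGradient` forwards the new `pass_manager` keyword to the base class changed below; the input and weight values are illustrative only.

    from qiskit.primitives import StatevectorEstimator
    from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager

    from qiskit_machine_learning.circuit.library import QNNCircuit
    from qiskit_machine_learning.gradients import ParamShiftEstimatorGradient
    from qiskit_machine_learning.neural_networks import EstimatorQNN

    num_qubits = 2
    qnn_qc = QNNCircuit(num_qubits)  # default ZZFeatureMap feature map + RealAmplitudes ansatz

    estimator = StatevectorEstimator()  # a reference BaseEstimatorV2 implementation
    pass_manager = generate_preset_pass_manager(optimization_level=1)  # no backend: target-less passes
    gradient = ParamShiftEstimatorGradient(estimator, pass_manager=pass_manager)

    # With a V2 estimator, a gradient carrying a pass manager must be supplied explicitly,
    # otherwise EstimatorQNN raises QiskitMachineLearningError (see the __init__ diff below).
    qnn = EstimatorQNN(
        circuit=qnn_qc,
        estimator=estimator,
        gradient=gradient,
        num_virtual_qubits=num_qubits,
        default_precision=0.015625,  # the new default, roughly equivalent to 4096 shots
    )

    forward = qnn.forward(input_data=[0.1, 0.2], weights=[0.1] * qnn.num_weights)
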
qiskit_machine_learning/gradients/base/base_estimator_gradient.py
@@ -24,10 +24,12 @@

from qiskit.circuit import Parameter, ParameterExpression, QuantumCircuit
from qiskit.primitives import BaseEstimator
from qiskit.primitives.base import BaseEstimatorV2
from qiskit.primitives.utils import _circuit_key
from qiskit.providers import Options
from qiskit.quantum_info.operators.base_operator import BaseOperator
from qiskit.transpiler.passes import TranslateParameterizedGates
from qiskit.transpiler.passmanager import BasePassManager

from .estimator_gradient_result import EstimatorGradientResult
from ..utils import (
@@ -46,13 +48,15 @@ class BaseEstimatorGradient(ABC):

def __init__(
self,
estimator: BaseEstimator,
estimator: BaseEstimator | BaseEstimatorV2,
pass_manager: BasePassManager | None = None,
options: Options | None = None,
derivative_type: DerivativeType = DerivativeType.REAL,
):
r"""
Args:
estimator: The estimator used to compute the gradients.
pass_manager: pass manager for isa_circuit transpilation.
options: Primitive backend runtime options used for circuit execution.
The order of priority is: options in ``run`` method > gradient's
default options > primitive's default setting.
@@ -68,7 +72,8 @@ def __init__(
gradient and this type is the only supported type for function-level schemes like
finite difference.
"""
self._estimator: BaseEstimator = estimator
self._estimator = estimator
self._pass_manager = pass_manager
self._default_options = Options()
if options is not None:
self._default_options.update_options(**options)
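
The new `pass_manager` argument is what lets a gradient produce ISA circuits itself before handing work to a V2 estimator. A possible construction, sketched under the assumptions that `BackendEstimatorV2` and `GenericBackendV2` are available in the installed Qiskit and that `ParamShiftEstimatorGradient` passes the keyword through to this base class:

    from qiskit.primitives import BackendEstimatorV2
    from qiskit.providers.fake_provider import GenericBackendV2
    from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager

    from qiskit_machine_learning.gradients import ParamShiftEstimatorGradient

    backend = GenericBackendV2(num_qubits=5)  # stand-in for real hardware
    pass_manager = generate_preset_pass_manager(optimization_level=1, backend=backend)
    estimator = BackendEstimatorV2(backend=backend)

    # The gradient transpiles its shifted circuits with the pass manager and maps the
    # observables onto the resulting layout before building PUBs (see the next file).
    gradient = ParamShiftEstimatorGradient(estimator, pass_manager=pass_manager)
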
qiskit_machine_learning/gradients/param_shift/param_shift_estimator_gradient.py
@@ -17,14 +17,19 @@

from collections.abc import Sequence

import numpy as np

from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.quantum_info.operators.base_operator import BaseOperator
from qiskit.primitives.base import BaseEstimatorV2
from qiskit.primitives import BaseEstimatorV1
from qiskit.providers.options import Options

from ..base.base_estimator_gradient import BaseEstimatorGradient
from ..base.estimator_gradient_result import EstimatorGradientResult
from ..utils import _make_param_shift_parameter_values

from ...exceptions import AlgorithmError
from ...exceptions import QiskitMachineLearningError


class ParamShiftEstimatorGradient(BaseEstimatorGradient):
@@ -97,26 +102,57 @@ def _run_unique(
job_param_values.extend(param_shift_parameter_values)
all_n.append(n)

# Run the single job with all circuits.
job = self._estimator.run(
job_circuits,
job_observables,
job_param_values,
**options,
)
try:
# Determine how to run the estimator based on its version
if isinstance(self._estimator, BaseEstimatorV1):
# Run the single job with all circuits.
job = self._estimator.run(
job_circuits,
job_observables,
job_param_values,
**options,
)
results = job.result()
except Exception as exc:
raise AlgorithmError("Estimator job failed.") from exc

# Compute the gradients.
gradients = []
partial_sum_n = 0
for n in all_n:
result = results.values[partial_sum_n : partial_sum_n + n]
gradient_ = (result[: n // 2] - result[n // 2 :]) / 2
gradients.append(gradient_)
partial_sum_n += n

opt = self._get_local_options(options)

# Compute the gradients.
gradients = []
partial_sum_n = 0
for n in all_n:
result = results.values[partial_sum_n : partial_sum_n + n]
gradient_ = (result[: n // 2] - result[n // 2 :]) / 2
gradients.append(gradient_)
partial_sum_n += n

opt = self._get_local_options(options)

elif isinstance(self._estimator, BaseEstimatorV2):
isa_g_circs = self._pass_manager.run(job_circuits)
isa_g_observables = [op.apply_layout(isa_g_circs[i].layout) for i, op in enumerate(job_observables)]
# Prepare circuit-observable-parameter tuples (PUBs)
circuit_observable_params = []
for pub in zip(isa_g_circs, isa_g_observables, job_param_values):
circuit_observable_params.append(pub)

# For BaseEstimatorV2, run the estimator using PUBs and specified precision
job = self._estimator.run(circuit_observable_params)
results = job.result()
results = np.array([float(r.data.evs) for r in results])

# Compute the gradients.
gradients = []
partial_sum_n = 0
for n in all_n:
result = results[partial_sum_n : partial_sum_n + n]
gradient_ = (result[: n // 2] - result[n // 2 :]) / 2
gradients.append(gradient_)
partial_sum_n += n

opt = Options(**options)

else:
raise QiskitMachineLearningError(
"The accepted estimators are BaseEstimatorV1 and BaseEstimatorV2; got "
+ f"{type(self._estimator)} instead. Note that BaseEstimatorV1 is deprecated in"
+ "Qiskit and removed in Qiskit IBM Runtime."
)

return EstimatorGradientResult(gradients=gradients, metadata=metadata, options=opt)
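
To make the V2 branch above concrete, here is a standalone sketch of the same PUB pattern for a single `Ry` rotation. `StatevectorEstimator` again stands in for any `BaseEstimatorV2`; since it is an ideal simulator, the pass-manager and `apply_layout` steps are elided:

    import numpy as np

    from qiskit.circuit import Parameter, QuantumCircuit
    from qiskit.primitives import StatevectorEstimator
    from qiskit.quantum_info import SparsePauliOp

    theta = Parameter("theta")
    qc = QuantumCircuit(1)
    qc.ry(theta, 0)
    obs = SparsePauliOp("Z")  # <Z> = cos(theta), so d<Z>/dtheta = -sin(theta)

    # Parameter shift at theta = pi/2: evaluate at theta + pi/2 and theta - pi/2.
    pubs = [(qc, obs, [np.pi]), (qc, obs, [0.0])]
    results = StatevectorEstimator().run(pubs).result()
    evs = np.array([float(r.data.evs) for r in results])

    gradient = (evs[0] - evs[1]) / 2  # -> -1.0, matching -sin(pi/2)
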
97 changes: 76 additions & 21 deletions qiskit_machine_learning/neural_networks/estimator_qnn.py
@@ -15,20 +15,22 @@
from __future__ import annotations

import logging
import warnings
from copy import copy
from typing import Sequence

import numpy as np

from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.primitives import BaseEstimator, Estimator, EstimatorResult
from qiskit.primitives.base import BaseEstimatorV2
from qiskit.primitives import BaseEstimator, BaseEstimatorV1, Estimator, EstimatorResult
from qiskit.quantum_info import SparsePauliOp
from qiskit.quantum_info.operators.base_operator import BaseOperator

from ..gradients import (
BaseEstimatorGradient,
EstimatorGradientResult,
ParamShiftEstimatorGradient,
)

from ..circuit.library import QNNCircuit
from ..exceptions import QiskitMachineLearningError

@@ -64,7 +66,7 @@ class EstimatorQNN(NeuralNetwork):
num_qubits = 2
# Using the QNNCircuit:
# Create a parameterized 2 qubit circuit composed of the default ZZFeatureMap feature map
# Create a parametrized 2 qubit circuit composed of the default ZZFeatureMap feature map
# and RealAmplitudes ansatz.
qnn_qc = QNNCircuit(num_qubits)
@@ -105,12 +107,14 @@ def __init__(
self,
*,
circuit: QuantumCircuit,
estimator: BaseEstimator | None = None,
estimator: BaseEstimator | BaseEstimatorV2 | None = None,
observables: Sequence[BaseOperator] | BaseOperator | None = None,
input_params: Sequence[Parameter] | None = None,
weight_params: Sequence[Parameter] | None = None,
gradient: BaseEstimatorGradient | None = None,
input_gradients: bool = False,
num_virtual_qubits: int | None = None,
default_precision: float = 0.015625,
):
r"""
Args:
@@ -127,12 +131,12 @@ def __init__(
input_params: The parameters that correspond to the input data of the network.
If ``None``, the input data is not bound to any parameters.
If a :class:`~qiskit_machine_learning.circuit.library.QNNCircuit` is provided the
`input_params` value here is ignored. Instead the value is taken from the
`input_params` value here is ignored. Instead, the value is taken from the
:class:`~qiskit_machine_learning.circuit.library.QNNCircuit` input_parameters.
weight_params: The parameters that correspond to the trainable weights.
If ``None``, the weights are not bound to any parameters.
If a :class:`~qiskit_machine_learning.circuit.library.QNNCircuit` is provided the
`weight_params` value here is ignored. Instead the value is taken from the
`weight_params` value here is ignored. Instead, the value is taken from the
:class:`~qiskit_machine_learning.circuit.library.QNNCircuit` weight_parameters.
gradient: The estimator gradient to be used for the backward pass.
If None, a default instance of the estimator gradient,
@@ -141,6 +145,8 @@ def __init__(
Note that this parameter is ``False`` by default, and must be explicitly set to
``True`` for a proper gradient computation when using
:class:`~qiskit_machine_learning.connectors.TorchConnector`.
num_virtual_qubits: Number of virtual qubits.
default_precision: The default precision for the estimator if not specified during run.
Raises:
QiskitMachineLearningError: Invalid parameter values.
@@ -149,19 +155,46 @@
estimator = Estimator()
self.estimator = estimator
self._org_circuit = circuit

if num_virtual_qubits is None:
self.num_virtual_qubits = circuit.num_qubits
warnings.warn(
f"No number of qubits was not specified ({num_virtual_qubits}) and was retrieved from "
+ f"`circuit` ({self.num_virtual_qubits:d}). If `circuit` is transpiled, this may cause "
+ "unstable behaviour.",
UserWarning,
stacklevel=2,
)
else:
self.num_virtual_qubits = num_virtual_qubits

if observables is None:
observables = SparsePauliOp.from_list([("Z" * circuit.num_qubits, 1)])
observables = SparsePauliOp.from_sparse_list(
[("Z" * self.num_virtual_qubits, range(self.num_virtual_qubits), 1)],
num_qubits=self.circuit.num_qubits,
)

if isinstance(observables, BaseOperator):
observables = (observables,)

self._observables = observables

if isinstance(circuit, QNNCircuit):
self._input_params = list(circuit.input_parameters)
self._weight_params = list(circuit.weight_parameters)
else:
self._input_params = list(input_params) if input_params is not None else []
self._weight_params = list(weight_params) if weight_params is not None else []

if gradient is None:
if isinstance(self.estimator, BaseEstimatorV2):
raise QiskitMachineLearningError(
"Please provide a gradient with pass manager initialised."
)

gradient = ParamShiftEstimatorGradient(self.estimator)

self._default_precision = default_precision
self.gradient = gradient
self._input_gradients = input_gradients

@@ -198,33 +231,52 @@ def weight_params(self) -> Sequence[Parameter] | None:
@property
def input_gradients(self) -> bool:
"""Returns whether gradients with respect to input data are computed by this neural network
in the ``backward`` method or not. By default such gradients are not computed."""
in the ``backward`` method or not. By default, such gradients are not computed."""
return self._input_gradients

@input_gradients.setter
def input_gradients(self, input_gradients: bool) -> None:
"""Turn on/off computation of gradients with respect to input data."""
self._input_gradients = input_gradients

@property
def default_precision(self) -> float:
"""Return the default precision"""
return self._default_precision

def _forward_postprocess(self, num_samples: int, result: EstimatorResult) -> np.ndarray:
"""Post-processing during forward pass of the network."""
return np.reshape(result.values, (-1, num_samples)).T
return np.reshape(result, (-1, num_samples)).T

def _forward(
self, input_data: np.ndarray | None, weights: np.ndarray | None
) -> np.ndarray | None:
"""Forward pass of the neural network."""
parameter_values_, num_samples = self._preprocess_forward(input_data, weights)
job = self.estimator.run(
[self._circuit] * num_samples * self.output_shape[0],
[op for op in self._observables for _ in range(num_samples)],
np.tile(parameter_values_, (self.output_shape[0], 1)),
)
try:
results = job.result()
except Exception as exc:
raise QiskitMachineLearningError("Estimator job failed.") from exc

# Determine how to run the estimator based on its version
if isinstance(self.estimator, BaseEstimatorV1):
job = self.estimator.run(
[self._circuit] * num_samples * self.output_shape[0],
[op for op in self._observables for _ in range(num_samples)],
np.tile(parameter_values_, (self.output_shape[0], 1)),
)
results = job.result().values

elif isinstance(self.estimator, BaseEstimatorV2):
# Prepare circuit-observable-parameter tuples (PUBs)
circuit_observable_params = []
for observable in self._observables:
circuit_observable_params.append((self._circuit, observable, parameter_values_))
# For BaseEstimatorV2, run the estimator using PUBs and specified precision
job = self.estimator.run(circuit_observable_params, precision=self._default_precision)
results = [result.data.evs for result in job.result()]
else:
raise QiskitMachineLearningError(
"The accepted estimators are BaseEstimatorV1 and BaseEstimatorV2; got "
+ f"{type(self.estimator)} instead. Note that BaseEstimatorV1 is deprecated in"
+ "Qiskit and removed in Qiskit IBM Runtime."
)
return self._forward_postprocess(num_samples, results)

def _backward_postprocess(
@@ -269,8 +321,11 @@ def _backward(
param_values = np.tile(parameter_values, (num_observables, 1))

job = None

if self._input_gradients:
job = self.gradient.run(circuits, observables, param_values) # type: ignore[arg-type]
job = self.gradient.run(
circuits, observables, param_values
) # type: ignore[arg-type]
elif len(parameter_values[0]) > self._num_inputs:
params = [self._circuit.parameters[self._num_inputs :]] * num_circuits
job = self.gradient.run(
@@ -281,7 +336,7 @@
try:
results = job.result()
except Exception as exc:
raise QiskitMachineLearningError("Estimator job failed.") from exc
raise QiskitMachineLearningError(f"Estimator job failed. {exc}") from exc

input_grad, weights_grad = self._backward_postprocess(num_samples, results)

6 changes: 3 additions & 3 deletions qiskit_machine_learning/neural_networks/neural_network.py
@@ -293,9 +293,9 @@ def _reparameterize_circuit(

if len(parameters) != (self.num_inputs + self.num_weights):
raise ValueError(
f"Number of circuit parameters {len(parameters)}"
f" mismatch with sum of num inputs and weights"
f" {self.num_inputs + self.num_weights}"
f"Number of circuit parameters ({len(parameters)})"
f" does not match the sum of number of inputs and weights"
f" ({self.num_inputs + self.num_weights})."
)

new_input_params = ParameterVector("inputs", self.num_inputs)
1 change: 1 addition & 0 deletions requirements-dev.txt
@@ -17,3 +17,4 @@ mypy>=0.981
mypy-extensions>=0.4.3
nbsphinx
qiskit_sphinx_theme~=1.16.0
qiskit-ibm-runtime>=0.21
@@ -20,9 +20,10 @@
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes, ZFeatureMap
from qiskit.quantum_info import SparsePauliOp
from qiskit_machine_learning.circuit.library import QNNCircuit

from qiskit_machine_learning.circuit.library import QNNCircuit
from qiskit_machine_learning.neural_networks.estimator_qnn import EstimatorQNN
from qiskit_machine_learning.utils import algorithm_globals

CASE_DATA = {
"shape_1_1": {
@@ -178,6 +179,7 @@ def _test_network_passes(
estimator_qnn,
case_data,
):
algorithm_globals.random_seed = 52
test_data = case_data["test_data"]
weights = case_data["weights"]
correct_forwards = case_data["correct_forwards"]
@@ -407,7 +409,7 @@ def test_setters_getters(self):
estimator_qnn.input_gradients = True
self.assertTrue(estimator_qnn.input_gradients)

def test_qnn_qc_circui_construction(self):
def test_qnn_qc_circuit_construction(self):
"""Test Estimator QNN properties and forward/backward pass for QNNCircuit construction"""
num_qubits = 2
feature_map = ZZFeatureMap(feature_dimension=num_qubits)