Add QuantumKernel class using terra primitives #437

Merged: 121 commits, merged Oct 14, 2022
71e1bf1
Add primitives branch to CI workflow
manoelmarques Jun 29, 2022
ae9c043
added pocs
gentinettagian Jun 29, 2022
3aae2df
init fixed
gentinettagian Jun 30, 2022
fa98e91
more init
gentinettagian Jun 30, 2022
5106333
Add qnns PoC
ElePT Jul 5, 2022
de53484
lint fixed
gentinettagian Jul 8, 2022
28d578c
Merge branch 'primitives' of https://github.com/ElePT/qiskit-machine-…
gentinettagian Jul 8, 2022
cdfcfe8
Migrate qnns to new branch
ElePT Jul 8, 2022
f65b5f4
Merge branch 'Qiskit:main' into primitives-kernel
ElePT Jul 8, 2022
5368534
some unittests
gentinettagian Jul 8, 2022
a1a2a7f
Merge branch 'primitives-kernel' of https://github.com/ElePT/qiskit-m…
gentinettagian Jul 8, 2022
586c6a6
more unittest
gentinettagian Jul 8, 2022
8f68be7
Merge branch 'Qiskit:main' into primitives-kernel
ElePT Jul 11, 2022
c67ed15
Merge from main
manoelmarques Jul 14, 2022
d2342e0
Merge branch 'main'
manoelmarques Jul 15, 2022
a092545
Merge branch 'primitives-kernel' of https://github.com/ElePT/qiskit-m…
gentinettagian Jul 19, 2022
d255e02
adapted new design without unittests
gentinettagian Jul 19, 2022
52c928e
removed compute
gentinettagian Jul 19, 2022
a3cf674
removed compute
gentinettagian Jul 19, 2022
0636e2d
Change inputs, assign parameters method name
ElePT Jul 19, 2022
c3b0692
docstrings
gentinettagian Jul 19, 2022
fcc670f
working on dict version
gentinettagian Jul 19, 2022
ba575ce
pulled
gentinettagian Jul 19, 2022
5de6d92
support dictionarys
gentinettagian Jul 19, 2022
fef72ba
Update qiskit_machine_learning/primitives/kernels/base_kernel.py
ElePT Jul 20, 2022
a855070
Update qiskit_machine_learning/primitives/kernels/base_kernel.py
ElePT Jul 20, 2022
a724193
Small fixes
ElePT Jul 20, 2022
1b11415
Merge branch 'primitives-kernel' of https://github.com/ElePT/qiskit-m…
ElePT Jul 20, 2022
c42228e
Remove changes from trailing commit
ElePT Jul 20, 2022
fe61463
Update qiskit_machine_learning/utils/utils.py
ElePT Jul 20, 2022
44e20a6
raise error if not all parameters bound
gentinettagian Jul 21, 2022
e92e4b0
Merge branch 'primitives-kernel' of https://github.com/ElePT/qiskit-m…
gentinettagian Jul 21, 2022
d368f19
typo
gentinettagian Jul 21, 2022
460e0b2
unit tests
gentinettagian Jul 21, 2022
a0c8847
Merge branch 'main'
manoelmarques Jul 21, 2022
e9ace5f
training unit tests
gentinettagian Jul 22, 2022
2eaecc3
feature params order fixex
gentinettagian Jul 22, 2022
fa7b434
lint
gentinettagian Jul 22, 2022
69c06a3
Merge branch 'main'
manoelmarques Jul 22, 2022
bd32153
Create trainable mixin
ElePT Jul 27, 2022
d553721
Update qiskit_machine_learning/primitives/kernels/base_kernel.py
ElePT Jul 28, 2022
2fadf5a
Update qiskit_machine_learning/primitives/kernels/base_kernel.py
ElePT Jul 28, 2022
fdfc0ce
Update qiskit_machine_learning/primitives/kernels/pseudo_kernel.py
ElePT Jul 29, 2022
af47536
Fix fidelity imports (new location)
ElePT Jul 29, 2022
36b5f2a
Remove pseudo kernel
ElePT Jul 29, 2022
f8e3cca
evaluate duplicates
gentinettagian Jul 29, 2022
beb1ec5
Merge branch 'main' into primitives
manoelmarques Aug 8, 2022
9667f3c
Merge branch 'main' into primitives
manoelmarques Aug 10, 2022
e164da8
Merge branch 'main' into primitives
manoelmarques Aug 11, 2022
eaf42a4
Merge branch 'main' into primitives
manoelmarques Aug 18, 2022
4c3bae1
Merge branch 'main' into primitives
manoelmarques Aug 30, 2022
bb2c8d4
Merge branch 'main' into primitives
manoelmarques Aug 31, 2022
cd5e55c
Merge branch 'main' into primitives
manoelmarques Aug 31, 2022
065c90e
Merge branch 'main' into primitives
manoelmarques Sep 1, 2022
4a35d64
Merge branch 'main' into primitives
manoelmarques Sep 6, 2022
4bd864d
Merge branch 'main' of https://github.com/Qiskit/qiskit-machine-learn…
adekusar-drl Sep 8, 2022
fa9b41f
merge
adekusar-drl Sep 8, 2022
7c203a8
black
adekusar-drl Sep 8, 2022
1bbc84c
fix some issues
adekusar-drl Sep 8, 2022
769629c
Merge branch 'primitives' of https://github.com/Qiskit/qiskit-machine…
adekusar-drl Sep 8, 2022
e715a4a
update to the recent interfaces
adekusar-drl Sep 8, 2022
c06b78d
update tests
adekusar-drl Sep 8, 2022
600989e
fix fidelity parameterization
adekusar-drl Sep 8, 2022
760cbd3
skip some tests
adekusar-drl Sep 8, 2022
dd906b9
fix pylint, mypy
adekusar-drl Sep 9, 2022
9b24926
fix copyright
adekusar-drl Sep 9, 2022
68b974c
more tweaks
adekusar-drl Sep 9, 2022
e6cf1d1
rename methods
adekusar-drl Sep 9, 2022
7379f53
refactor
adekusar-drl Sep 12, 2022
0d6d0d1
clean up
adekusar-drl Sep 13, 2022
50ef2ca
some docstrings
adekusar-drl Sep 13, 2022
5312d49
more tests
adekusar-drl Sep 13, 2022
f3c47cf
streamline base class
adekusar-drl Sep 13, 2022
05d7a62
fix spell
adekusar-drl Sep 13, 2022
1b2c592
trainable qk tweaks
adekusar-drl Sep 13, 2022
f642185
Merge branch 'main' of https://github.com/Qiskit/qiskit-machine-learn…
adekusar-drl Sep 20, 2022
1af8f9c
fix trainability
adekusar-drl Sep 20, 2022
8d5453d
revert workflows
adekusar-drl Sep 20, 2022
d4cf9e8
revert the notebook
adekusar-drl Sep 20, 2022
65489df
renamings, refactorings, code review
adekusar-drl Sep 22, 2022
63219e2
code review
adekusar-drl Sep 22, 2022
9c031fc
remove commented code
adekusar-drl Sep 22, 2022
4b35cdd
bump terra requirements
adekusar-drl Sep 23, 2022
51b339f
code review
adekusar-drl Sep 23, 2022
a7f82b1
code review
adekusar-drl Sep 23, 2022
f59f38b
fix pylint
adekusar-drl Sep 23, 2022
661773b
enforce psd test
adekusar-drl Sep 23, 2022
68432dc
fix docstring
adekusar-drl Sep 23, 2022
5a28321
add reno
adekusar-drl Sep 23, 2022
1458256
fix documentation reference
adekusar-drl Sep 23, 2022
3cc7475
update qsvc, more tests
adekusar-drl Sep 23, 2022
4ca12f7
update qsvr, even more tests
adekusar-drl Sep 23, 2022
5207a67
fix qsvr tests
adekusar-drl Sep 26, 2022
11e2352
fix lint
adekusar-drl Sep 26, 2022
30ad7e3
Change Terra requirements to accept rc
manoelmarques Sep 28, 2022
2ee5174
Merge branch 'main' of https://github.com/Qiskit/qiskit-machine-learn…
adekusar-drl Oct 3, 2022
d306f9a
update pegasos
adekusar-drl Oct 3, 2022
29f9989
fix test
adekusar-drl Oct 3, 2022
4acbc1a
Update qiskit_machine_learning/algorithms/classifiers/pegasos_qsvc.py
adekusar-drl Oct 4, 2022
69cdae6
fix complex eigenvalues
adekusar-drl Oct 10, 2022
9b00a48
Merge remote-tracking branch 'ElePT/primitives-kernel' into primitive…
adekusar-drl Oct 10, 2022
41809da
pass parameters as keywords
adekusar-drl Oct 10, 2022
49540c7
fix trainability, code review
adekusar-drl Oct 11, 2022
fcac58d
mixin -> abstract class, keywords
adekusar-drl Oct 12, 2022
024193c
fix docs
adekusar-drl Oct 12, 2022
156b041
update docs
adekusar-drl Oct 12, 2022
c9b6588
code review
adekusar-drl Oct 12, 2022
8629917
fix tests, pylint
adekusar-drl Oct 13, 2022
6c8400a
update reno
adekusar-drl Oct 13, 2022
41bd100
Merge branch 'main' into primitives-kernel
adekusar-drl Oct 13, 2022
ecfc266
update reno
adekusar-drl Oct 13, 2022
80c1954
fix reno
adekusar-drl Oct 13, 2022
7e53c38
Merge branch 'main' into primitives-kernel
adekusar-drl Oct 13, 2022
5c7db95
Update qiskit_machine_learning/algorithms/classifiers/qsvc.py
adekusar-drl Oct 13, 2022
0166fe6
Update qiskit_machine_learning/algorithms/regressors/qsvr.py
adekusar-drl Oct 13, 2022
bebbdd9
Update qiskit_machine_learning/kernels/quantum_kernel.py
adekusar-drl Oct 13, 2022
f90bf0c
Update releasenotes/notes/add-fidelity-quantum-kernel-d40278abb49e19b…
adekusar-drl Oct 13, 2022
57535b5
code review
adekusar-drl Oct 13, 2022
9739d48
fix tests
adekusar-drl Oct 14, 2022
b54f250
terra 0.22
adekusar-drl Oct 14, 2022
fbc7d84
more tests
adekusar-drl Oct 14, 2022
2 changes: 2 additions & 0 deletions .pylintdict
@@ -6,6 +6,7 @@ ansatz
ansatzes
args
asmatrix
async
autograd
autosummary
backend
@@ -62,6 +63,7 @@ et
eval
expressibility
farrokh
fidelities
formatter
func
gambetta
37 changes: 24 additions & 13 deletions qiskit_machine_learning/algorithms/classifiers/pegasos_qsvc.py
@@ -14,15 +14,19 @@

import logging
from datetime import datetime
from typing import Optional, Dict
from typing import Optional, Dict, Union

import numpy as np
from qiskit.utils import algorithm_globals
from sklearn.base import ClassifierMixin

from ...algorithms.serializable_model import SerializableModelMixin
from ...exceptions import QiskitMachineLearningError
from ...kernels.quantum_kernel import QuantumKernel
from ...kernels import BaseKernel, FidelityQuantumKernel
from ...kernels.quantum_kernel import QuantumKernel as QuantumKernelOld

QuantumKernel = Union[QuantumKernelOld, BaseKernel]
Member:
Could the existing QuantumKernel, i.e. the QuantumKernelOld alias here, be easily altered and made a BaseKernel (i.e. extend it)? Going forwards, unless we really want QuantumKernel to be a synonym for BaseKernel (when QuantumKernelOld goes away), this would allow things to be changed to take and return a BaseKernel, no?

Collaborator:
I'd prefer to keep QuantumKernel separate from the new hierarchy, especially considering its training capabilities. Inheriting from BaseKernel may not be so bad, but adding the mixin would make it overcomplicated; the ways kernel elements are computed are too different now. Since we want to deprecate the existing one, I don't really want to change it now.

Member:
Making it a BaseKernel would allow a bunch of changes to be done now to standardize around that, rather than needing a Union type (where the same QuantumKernel alias is a different Union in different places), and I don't know what you have in mind for this once half of the Union above goes away. From what I could tell, the former QuantumKernel already seemed to fit the BaseKernel public API. So on the surface it seemed simple enough: the class would simply get removed going forwards, and no other code like this Union would need changing. The change would have to leave it compatible, but if all it meant was extending BaseKernel, it had the benefit of avoiding another round of changes in the future.

Collaborator (adekusar-drl, Oct 12, 2022):
All right, now we have:

  • BaseKernel
  • FidelityQuantumKernel(BaseKernel)
  • TrainableKernel(BaseKernel, ABC)
  • FidelityTrainableQuantumKernel(TrainableKernel, FidelityQuantumKernel)
  • QuantumKernel(BaseKernel, TrainableKernel)

Additional tweaks of **kwargs may be required in the future to support more inheritance patterns.
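The class layout in that list can be checked with a minimal skeleton. The names come from the comment above; the method bodies are placeholder assumptions, not the real implementations, and only serve to show that the multiple-inheritance pattern resolves cleanly:

```python
from abc import ABC, abstractmethod

import numpy as np


class BaseKernel(ABC):
    """Minimal stand-in for the abstract kernel interface."""

    @abstractmethod
    def evaluate(self, x_vec, y_vec=None) -> np.ndarray:
        raise NotImplementedError()


class FidelityQuantumKernel(BaseKernel):
    def evaluate(self, x_vec, y_vec=None) -> np.ndarray:
        # placeholder body: the real class delegates to a fidelity primitive
        return np.eye(len(x_vec))


class TrainableKernel(BaseKernel, ABC):
    def __init__(self) -> None:
        self.training_parameters: tuple = ()


class FidelityTrainableQuantumKernel(TrainableKernel, FidelityQuantumKernel):
    # inherits evaluate() from FidelityQuantumKernel and the trainable
    # state from TrainableKernel; the MRO linearizes without conflict
    pass


kernel = FidelityTrainableQuantumKernel()
```

Instantiating the most-derived class confirms that `evaluate` is concrete and that both branches of the hierarchy are reachable via `isinstance`.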



logger = logging.getLogger(__name__)

@@ -38,7 +42,7 @@ class PegasosQSVC(ClassifierMixin, SerializableModelMixin):

.. code-block:: python

quantum_kernel = QuantumKernel()
quantum_kernel = FidelityQuantumKernel()

pegasos_qsvc = PegasosQSVC(quantum_kernel=quantum_kernel)
pegasos_qsvc.fit(sample_train, label_train)
@@ -64,7 +68,7 @@ def __init__(
) -> None:
"""
Args:
quantum_kernel: QuantumKernel to be used for classification. Has to be ``None`` when
quantum_kernel: a quantum kernel to be used for classification. Has to be ``None`` when
Member:
Do we want to change things here while we are moving the code away from Union and Optional etc.? Looking at the constructor above, it is still the old way; perhaps as we are changing things we can do this too.

Collaborator:
This is just to emphasize this is not a class, but rather a term. The PR is large enough; I don't want to change other things. All type hints require an additional revision of the QML code.

Member:
Maybe we can also update the Union and Optional constructs, like for the quantum_kernel arg here, to use the | that we now prefer.

Collaborator:
I wanted to do that in a separate PR as this one is heavy now, but okay. Replaced with |.
a precomputed kernel is used.
C: Positive regularization parameter. The strength of the regularization is inversely
proportional to C. Smaller ``C`` induce smaller weights which generally helps
@@ -85,17 +89,20 @@ def __init__(
- if ``quantum_kernel`` is passed and ``precomputed`` is set to ``True``. To use
a precomputed kernel, ``quantum_kernel`` has to be of the ``None`` type.
TypeError:
- if ``quantum_instance`` neither instance of ``QuantumKernel`` nor ``None``.
- if ``quantum_kernel`` neither instance of
:class:`~qiskit_machine_learning.kernels.BaseKernel` nor ``None``.
Comment on lines +90 to +91

Member:
This is not strictly true at present, right, since it allows the pre-existing QuantumKernel code too?

Collaborator:
Now should be fine :)
"""

if precomputed:
if quantum_kernel is not None:
raise ValueError("'quantum_kernel' has to be None to use a precomputed kernel")
else:
if quantum_kernel is None:
quantum_kernel = QuantumKernel()
elif not isinstance(quantum_kernel, QuantumKernel):
raise TypeError("'quantum_kernel' has to be of type None or QuantumKernel")
quantum_kernel = FidelityQuantumKernel()
elif not isinstance(quantum_kernel, QuantumKernelOld) and not isinstance(
quantum_kernel, BaseKernel
):
raise TypeError("'quantum_kernel' has to be of type None or BaseKernel")

self._quantum_kernel = quantum_kernel
self._precomputed = precomputed
@@ -130,7 +137,8 @@ def fit(
"""Fit the model according to the given training data.

Args:
X: Train features. For a callable kernel (an instance of ``QuantumKernel``) the shape
X: Train features. For a callable kernel (an instance of
:class:`~qiskit_machine_learning.kernels.BaseKernel`) the shape
should be ``(n_samples, n_features)``, for a precomputed kernel the shape should be
``(n_samples, n_samples)``.
y: shape (n_samples), train labels. Must not contain more than two unique labels.
@@ -206,7 +214,8 @@ def predict(self, X: np.ndarray) -> np.ndarray:
Perform classification on samples in X.

Args:
X: Features. For a callable kernel (an instance of ``QuantumKernel``) the shape
X: Features. For a callable kernel (an instance of
:class:`~qiskit_machine_learning.kernels.BaseKernel`) the shape
should be ``(m_samples, n_features)``, for a precomputed kernel the shape should be
``(m_samples, n_samples)``. Where ``m`` denotes the set to be predicted and ``n`` the
size of the training set. In that case, the kernel values in X have to be calculated
@@ -234,7 +243,8 @@ def decision_function(self, X: np.ndarray) -> np.ndarray:
Evaluate the decision function for the samples in X.

Args:
X: Features. For a callable kernel (an instance of ``QuantumKernel``) the shape
X: Features. For a callable kernel (an instance of
:class:`~qiskit_machine_learning.kernels.BaseKernel`) the shape
should be ``(m_samples, n_features)``, for a precomputed kernel the shape should be
``(m_samples, n_samples)``. Where ``m`` denotes the set to be predicted and ``n`` the
size of the training set. In that case, the kernel values in X have to be calculated
@@ -340,14 +350,15 @@ def precomputed(self) -> bool:
@precomputed.setter
def precomputed(self, precomputed: bool):
"""Sets the pre-computed kernel flag. If ``True`` is passed then the previous kernel is
cleared. If ``False`` is passed then a new instance of ``QuantumKernel`` is created."""
cleared. If ``False`` is passed then a new instance of
:class:`~qiskit_machine_learning.kernels.BaseKernel` is created."""
Member:
Should this be FidelityQuantumKernel? That is a BaseKernel, but is BaseKernel too abstract when a user needs to know the actual type?

Collaborator:
That's right, updated.

self._precomputed = precomputed
if precomputed:
# remove the kernel, a precomputed one will be used
self._quantum_kernel = None
else:
# re-create a new default quantum kernel
self._quantum_kernel = QuantumKernel()
self._quantum_kernel = FidelityQuantumKernel()

# reset training status
self._reset_state()
9 changes: 6 additions & 3 deletions qiskit_machine_learning/algorithms/classifiers/qsvc.py
@@ -13,14 +13,17 @@
"""Quantum Support Vector Classifier"""

import warnings
from typing import Optional
from typing import Optional, Union

from qiskit.utils.algorithm_globals import algorithm_globals
from sklearn.svm import SVC

from qiskit_machine_learning.algorithms.serializable_model import SerializableModelMixin
from qiskit_machine_learning.exceptions import QiskitMachineLearningWarning
from qiskit_machine_learning.kernels.quantum_kernel import QuantumKernel
from qiskit_machine_learning.kernels import BaseKernel, FidelityQuantumKernel
from qiskit_machine_learning.kernels import QuantumKernel as QuantumKernelOld

QuantumKernel = Union[QuantumKernelOld, BaseKernel]


class QSVC(SVC, SerializableModelMixin):
@@ -65,7 +68,7 @@ def __init__(self, *args, quantum_kernel: Optional[QuantumKernel] = None, **kwar
# if we don't delete, then this value clashes with our quantum kernel
del kwargs["kernel"]

self._quantum_kernel = quantum_kernel if quantum_kernel else QuantumKernel()
self._quantum_kernel = quantum_kernel if quantum_kernel else FidelityQuantumKernel()

if "random_state" not in kwargs:
kwargs["random_state"] = algorithm_globals.random_seed
13 changes: 8 additions & 5 deletions qiskit_machine_learning/algorithms/regressors/qsvr.py
@@ -13,13 +13,16 @@
"""Quantum Support Vector Regressor"""

import warnings
from typing import Optional
from typing import Optional, Union

from sklearn.svm import SVR

from ..serializable_model import SerializableModelMixin
from ...exceptions import QiskitMachineLearningWarning
from ...kernels.quantum_kernel import QuantumKernel
from qiskit_machine_learning.algorithms.serializable_model import SerializableModelMixin
from qiskit_machine_learning.exceptions import QiskitMachineLearningWarning
from qiskit_machine_learning.kernels import BaseKernel, FidelityQuantumKernel
from qiskit_machine_learning.kernels import QuantumKernel as QuantumKernelOld

QuantumKernel = Union[QuantumKernelOld, BaseKernel]


class QSVR(SVR, SerializableModelMixin):
@@ -64,7 +67,7 @@ def __init__(self, *args, quantum_kernel: Optional[QuantumKernel] = None, **kwar
# if we don't delete, then this value clashes with our quantum kernel
del kwargs["kernel"]

self._quantum_kernel = quantum_kernel if quantum_kernel else QuantumKernel()
self._quantum_kernel = quantum_kernel if quantum_kernel else FidelityQuantumKernel()

super().__init__(kernel=self._quantum_kernel.evaluate, *args, **kwargs)

18 changes: 16 additions & 2 deletions qiskit_machine_learning/kernels/__init__.py
@@ -23,6 +23,10 @@
:nosignatures:

QuantumKernel
BaseKernel
FidelityQuantumKernel
TrainableKernelMixin
TrainableFidelityQuantumKernel

Submodules
==========
@@ -34,5 +38,15 @@
"""

from .quantum_kernel import QuantumKernel

__all__ = ["QuantumKernel"]
from .base_kernel import BaseKernel
from .fidelity_quantum_kernel import FidelityQuantumKernel
from .trainable_kernel_mixin import TrainableKernelMixin
from .trainable_fidelity_quantum_kernel import TrainableFidelityQuantumKernel

__all__ = [
"QuantumKernel",
"BaseKernel",
"FidelityQuantumKernel",
"TrainableKernelMixin",
"TrainableFidelityQuantumKernel",
]
@@ -11,6 +11,7 @@
# that they have been altered from the originals.

"""Quantum Kernel Trainer"""

import copy
from functools import partial
from typing import Union, Optional, Sequence
Collaborator (author):
We can take advantage of this PR to update the annotations in this class too, but I cannot comment on unchanged lines :( So I will add the rest of the suggestions in a follow-up comment that you can copy-paste.

Suggested change:
- from typing import Union, Optional, Sequence
+ from __future__ import annotations
+ from collections.abc import Sequence

Collaborator (author):
  • line 41: def quantum_kernel(self) -> QuantumKernel | None:
  • lines 94+:
def __init__(
        self,
        quantum_kernel: QuantumKernel,
        loss: str | KernelLoss | None = None,
        optimizer: Optimizer | None = None,
        initial_point: Sequence[float] | None = None,
    ):
  • line 141: def loss(self, loss: str | KernelLoss | None) -> None:
  • line 164: def initial_point(self) -> Sequence[float] | None:
  • line 169: def initial_point(self, initial_point: Sequence[float] | None) -> None:
  • line 228: def _set_loss(self, loss: str | KernelLoss | None) -> None:

Collaborator:
I'd keep the changes to the existing classes minimal in this PR, just to make the review simpler. We can address type hints in a separate PR.
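The annotation style proposed in this thread can be sketched in a self-contained form. The class and function here are stand-ins, not the real trainer API; the point is that `from __future__ import annotations` postpones evaluation, so the `X | None` union syntax in annotations parses even on Python versions before 3.10:

```python
from __future__ import annotations

from collections.abc import Sequence


class QuantumKernel:  # hypothetical stand-in for the real kernel class
    pass


def pick_kernel(
    quantum_kernel: QuantumKernel,
    initial_point: Sequence[float] | None = None,
) -> QuantumKernel | None:
    # annotations above stay as strings at runtime, so `|` never executes
    # against the class objects; only static checkers interpret them
    return quantum_kernel if initial_point is None else None
```

Without the `__future__` import, `Sequence[float] | None` would be evaluated eagerly and raise a TypeError on older interpreters.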

@@ -22,7 +23,11 @@
from qiskit.algorithms.variational_algorithm import VariationalResult
from qiskit_machine_learning.utils.loss_functions import KernelLoss, SVCLoss

from qiskit_machine_learning.kernels import QuantumKernel
from qiskit_machine_learning.kernels import QuantumKernel as QuantumKernelOld
from qiskit_machine_learning.kernels import TrainableFidelityQuantumKernel


QuantumKernel = Union[QuantumKernelOld, TrainableFidelityQuantumKernel]
Member:
Should the latter part of the Union type not be TrainableKernelMixin rather than the specific TrainableFidelityQuantumKernel? And I would have the same question as to whether the existing QuantumKernel could be easily altered to implement that mixin too. It would simplify the types in the same way (it seems awkward that we have different local QuantumKernel aliases defined like this depending on where you look), and it would not change the fact that it would be pending deprecation and eventually removed, of course.

Contributor:
I highly doubt that this would be straightforward. The whole reason for this PR (before we started adding the primitives) was that it was basically impossible to extend the QuantumKernel class in a clean way.

Member:
I was not looking to extend the old QuantumKernel. Rather, would it be simple enough to adapt it a little so the old code matched the new interface, e.g. BaseKernel as I commented elsewhere, or TrainableKernelMixin here? The intent remains to drop it going forwards; I was wondering whether adapting it lightly would mean this code could already be what we want, rather than having a Union that has to change again when QuantumKernel gets dropped.

Collaborator:
Yeah, no more unions.



class QuantumKernelTrainerResult(VariationalResult):
134 changes: 134 additions & 0 deletions qiskit_machine_learning/kernels/base_kernel.py
@@ -0,0 +1,134 @@
# This code is part of Qiskit.
#
# (C) Copyright IBM 2022.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.

"""Base kernel"""

from __future__ import annotations

from abc import abstractmethod, ABC

import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit.library import ZZFeatureMap


class BaseKernel(ABC):
r"""
Abstract class defining the Kernel interface.

The general task of machine learning is to find and study patterns in data. For many
algorithms, the datapoints are better understood in a higher dimensional feature space,
through the use of a kernel function:

.. math::

K(x, y) = \langle f(x), f(y)\rangle.

Here K is the kernel function, x, y are n dimensional inputs. f is a map from n-dimension
to m-dimension space. :math:`\langle x, y \rangle` denotes the dot product.
Usually m is much larger than n.

The quantum kernel algorithm calculates a kernel matrix, given datapoints x and y and feature
map f, all of n dimension. This kernel matrix can then be used in classical machine learning
algorithms such as support vector classification, spectral clustering or ridge regression.
"""

def __init__(self, feature_map: QuantumCircuit = None, enforce_psd: bool = True) -> None:
"""
Args:
feature_map: Parameterized circuit to be used as the feature map. If ``None`` is given,
:class:`~qiskit.circuit.library.ZZFeatureMap` is used with two qubits. If there's
a mismatch in the number of qubits of the feature map and the number of features
in the dataset, then the kernel will try to adjust the feature map to reflect the
number of features.
enforce_psd: Project to closest positive semidefinite matrix if ``x = y``.
Default ``True``.
"""
if feature_map is None:
feature_map = ZZFeatureMap(2)

self._feature_map = feature_map
self._enforce_psd = enforce_psd
Comment on lines +60 to +61

Member:
Would we want to expose these (feature_map and enforce_psd) as public instance vars that the user can get/set to read/update them?

Contributor:
Resetting the feature map means reinitializing a fidelity instance with the corresponding circuit. If the number of parameters changes, this also needs to be accounted for, which is especially non-trivial in the trainable case. We thought this through at some point and came to the conclusion that it might be easier to just create a new kernel instance when you need to change the feature map. With enforce_psd you're right; I think that could be made public without issues.

Collaborator:
Updating the feature map may be non-trivial, that's right. Even for trivial properties, I'm hesitant to provide setters; an immutable object is safer and easier to maintain. Added some getters.


@abstractmethod
def evaluate(self, x_vec: np.ndarray, y_vec: np.ndarray | None = None) -> np.ndarray:
r"""
Construct kernel matrix for given data.

If y_vec is None, self inner product is calculated.

Args:
x_vec: 1D or 2D array of datapoints, NxD, where N is the number of datapoints,
D is the feature dimension
y_vec: 1D or 2D array of datapoints, MxD, where M is the number of datapoints,
D is the feature dimension

Returns:
2D matrix, NxM
"""
raise NotImplementedError()

def _validate_input(
self, x_vec: np.ndarray, y_vec: np.ndarray | None
) -> tuple[np.ndarray, np.ndarray | None]:
x_vec = np.asarray(x_vec)

if x_vec.ndim > 2:
raise ValueError("x_vec must be a 1D or 2D array")

if x_vec.ndim == 1:
x_vec = np.reshape(x_vec, (-1, len(x_vec)))

if x_vec.shape[1] != self._feature_map.num_parameters:
# before raising an error we try to adjust the feature map
# to the required number of qubits.
try:
self._feature_map.num_qubits = x_vec.shape[1]
Contributor:
Should we raise a warning here to let the user know the feature map used is not exactly what they provided as an argument?

Collaborator:
This logic was adapted from another algorithm, perhaps VQE, where the same approach without warnings is implemented. I'd keep it as is for consistency reasons.

Contributor:
Seems reasonable. I just thought it would be more transparent and would probably help with debugging: if someone uses training data that doesn't fit the feature map, it's probably more likely to be a mistake. It's nice that the algorithm still runs, but it might lead to unexpected results.

Member:
The idea was that these were more like template circuits, where you defined what you wanted upfront but maybe did not yet have detail on the problem. Later, when we do have the actual problem, the circuit can be updated. In VQE it adjusts the ansatz to match the operator.

except AttributeError as a_e:
raise ValueError(
f"x_vec and class feature map have incompatible dimensions.\n"
f"x_vec has {x_vec.shape[1]} dimensions, "
f"but feature map has {self._feature_map.num_parameters}."
) from a_e

if y_vec is not None:
y_vec = np.asarray(y_vec)

if y_vec.ndim == 1:
y_vec = np.reshape(y_vec, (-1, len(y_vec)))

if y_vec.ndim > 2:
raise ValueError("y_vec must be a 1D or 2D array")

if y_vec.shape[1] != x_vec.shape[1]:
raise ValueError(
"x_vec and y_vec have incompatible dimensions.\n"
f"x_vec has {x_vec.shape[1]} dimensions, but y_vec has {y_vec.shape[1]}."
)

return x_vec, y_vec

# pylint: disable=invalid-name
def _make_psd(self, kernel_matrix: np.ndarray) -> np.ndarray:
r"""
Find the closest positive semi-definite approximation to symmetric kernel matrix.
The (symmetric) matrix should always be positive semi-definite by construction,
but this can be violated in case of noise, such as sampling noise.

Args:
kernel_matrix: symmetric 2D array of the kernel entries

Returns:
the closest positive semi-definite matrix.
"""
d, u = np.linalg.eig(kernel_matrix)
return u @ np.diag(np.maximum(0, d)) @ u.transpose()
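The `_make_psd` step above can be checked numerically: a symmetric matrix with a small negative eigenvalue, as sampling noise can produce, is projected back by clipping eigenvalues at zero. A minimal sketch, using `numpy.linalg.eigh` (the symmetric-matrix variant) in place of the `eig` call in the diff:

```python
import numpy as np


def make_psd(kernel_matrix: np.ndarray) -> np.ndarray:
    # eigendecompose the symmetric matrix, clip negative eigenvalues to
    # zero, and reassemble: the closest PSD matrix in Frobenius norm
    d, u = np.linalg.eigh(kernel_matrix)
    return u @ np.diag(np.maximum(0, d)) @ u.T


# determinant is negative, so one eigenvalue is below zero
noisy = np.array([[1.0, 0.99], [0.99, 0.9]])
psd = make_psd(noisy)
```

After projection, all eigenvalues of `psd` are non-negative (up to floating-point tolerance) and the matrix stays symmetric, which is exactly what a downstream SVM solver requires of a kernel matrix.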