This repository has been archived by the owner on Dec 7, 2021. It is now read-only.

Implement selection of optimizer in the QGAN algorithm #1253

Merged: 20 commits into qiskit-community:master, Oct 15, 2020

Conversation

@Zoufalc (Contributor) commented Sep 16, 2020

Summary

This PR addresses Issue #1248.

Details and comments

The change makes it possible to set a custom optimizer for the quantum generator when running the qGAN algorithm.
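
For illustration, a rough usage sketch (module paths and the optimizer keyword follow Aqua's component layout at the time and the diff below; treat it as a sketch, not canonical API documentation):

    import numpy as np
    from qiskit.aqua.components.optimizers import COBYLA
    from qiskit.aqua.components.neural_networks import QuantumGenerator

    bounds = np.array([0., 3.])  # data bounds for a single data dimension
    num_qubits = [2]

    # Supply a custom optimizer instead of the built-in default (ADAM).
    # maxiter=1 keeps one generator update per training epoch; see the
    # review discussion below.
    generator = QuantumGenerator(bounds, num_qubits,
                                 optimizer=COBYLA(maxiter=1))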

@woodsp-ibm (Member) left a comment:

Since this adds a feature that end users would be interested in, could you please add a reno release note? If you look in CONTRIBUTING.md it has information on what is needed. Thanks.

@@ -62,6 +64,7 @@ def __init__(self,
                 or a QuantumCircuit implementing the generator.
             init_params: 1D numpy array or list, Initialization for
                 the generator's parameters.
+            optimizer: optimizer to be used for the training of the generator
@woodsp-ibm (Member) commented on the diff:

Are there any specific aspects or configuration of the optimizer that we could add to the docstring here to inform the user? For instance, I see that the default optimizer just has one iteration.

Also, in train() I see this code that I think will fail if another optimizer is supplied:

        self._optimizer._maxiter = 1
        self._optimizer._t = 0

Could we have a test case with a different optimizer to ensure things work?
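
A minimal sketch of such a test (constructor arguments and the private _optimizer attribute name follow the diff in this PR; the real tests in test_qgan.py exercise full training):

    import numpy as np
    from qiskit.aqua.components.optimizers import COBYLA
    from qiskit.aqua.components.neural_networks import QuantumGenerator

    def test_custom_generator_optimizer():
        # One iteration per call, matching how ADAM is driven in train()
        optimizer = COBYLA(maxiter=1)
        generator = QuantumGenerator(np.array([0., 3.]), [2],
                                     optimizer=optimizer)
        # The supplied optimizer should be stored, not silently replaced
        assert generator._optimizer is optimizer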

@woodsp-ibm (Member) commented:

Can I point out these lines from train() again, which I mentioned above:

        self._optimizer._maxiter = 1
        self._optimizer._t = 0

These are really updating private variables known to be there for ADAM. Worst case, _t may be some internal variable of another optimizer and this clobbers it; best case, the lines above simply create unused variables on the optimizer instance and their effect is benign. Maybe that code should only be executed when the optimizer is an instance of ADAM. It presumably forces the iterations to 1; as I mentioned, do we need to inform users in the docstring that this is how any other optimizer should be configured for best operation, since it cannot be forced the way it is done there when it's ADAM?
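
One way to scope those assignments as suggested (a sketch of the inside of train(); it keeps the existing private-attribute handling for ADAM and skips it for anything else):

    from qiskit.aqua.components.optimizers import ADAM

    if isinstance(self._optimizer, ADAM):
        # _maxiter and _t are private attributes known to exist on ADAM;
        # writing them onto an arbitrary optimizer could clobber its
        # internals or, at best, create unused attributes.
        self._optimizer._maxiter = 1
        self._optimizer._t = 0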

@woodsp-ibm added the Changelog: New Feature label (Include in the Added section of the changelog) Sep 17, 2020
@woodsp-ibm added this to the 0.8 milestone Sep 17, 2020
-        # Force single optimization iteration
-        self._optimizer._maxiter = 1
-        self._optimizer._t = 0
+        if self._optimizer._maxiter != 1:
@woodsp-ibm (Member) commented:

Currently there is no standard interface to query this. Some optimizers take max-fun as well as max-iter type settings, and for the former, without deeper knowledge of what is happening internally, deriving an iteration count from function evaluations is problematic.

I would suggest this type of text could just go in the docstring for the optimizer param, as I am not sure there is any real way to check this across the optimizers. So if a user supplies their own optimizer blindly without reading the text, then yes, things might behave unexpectedly, but hopefully they would consult the documentation.
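
For example, the param docstring might read something like this (wording is only a suggestion):

    optimizer: Optimizer to be used for the training of the generator.
        The optimizer is called once per training epoch to alternate
        generator and discriminator updates, so it should be configured
        to run a single iteration per call (e.g. maxiter=1); optimizers
        that run many internal iterations per call may behave
        unexpectedly.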

@woodsp-ibm (Member) commented:

Thinking about it more: since maxiter is placed in the options dictionary, that could be checked; if the key exists, its value could be inspected, and if the key did not exist or its value was not 1, an appropriate warning could be given. The options dictionary is not exposed via a public getter interface, but it could be (it can be accessed via its private instance variable, though if this is something that was wanted, making a public getter would be better). Adding a getter would be a simple change to the base Optimizer class, if you feel that this level of checking is needed and just documenting it will not suffice.
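
A sketch of that approach (the public options getter is hypothetical; the stand-in class below only mimics how the base Optimizer stores settings in a private _options dict):

    import warnings

    class Optimizer:
        """Stand-in for the base Optimizer class."""
        def __init__(self):
            self._options = {}

        def set_options(self, **kwargs):
            self._options.update(kwargs)

        @property
        def options(self):
            """Public, read-only view of the options dictionary."""
            return dict(self._options)

    def check_generator_optimizer(optimizer):
        # Warn when maxiter is absent or not 1, as suggested above
        maxiter = optimizer.options.get('maxiter')
        if maxiter != 1:
            warnings.warn('Generator optimizer should be set to '
                          'maxiter=1 for alternating qGAN training; '
                          'got %s.' % maxiter)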

@Cryoris modified the milestones: 0.8, 0.9 on Oct 15, 2020
@woodsp-ibm previously approved these changes Oct 15, 2020
@manoelmarques merged commit 67778b5 into qiskit-community:master on Oct 15, 2020
manoelmarques added a commit to qiskit-community/qiskit-machine-learning that referenced this pull request Feb 27, 2021
…ity/qiskit-aqua#1253)

* Update qgan.py

Raise error if the number of the training data items is smaller than the batch size, e.g. due to truncation to bounds.

* Update qgan.py

* Update qgan.py

* add possibility to choose an optimizer for the quantum generator in the qGAN algorithm

* Update quantum_generator.py

* Add optimizer instances to test_qgan.py

* Update test_qgan.py

* remove forced maxiter and t=0 for optimizer

* Update quantum_generator.py

* fix lint

* fix linting and brakets

* fix optimizer argument

* Change maxiter check, fix tests to set maxiter to 1

* fix lint

Co-authored-by: Steve Wood <40241007+woodsp-ibm@users.noreply.github.com>
Co-authored-by: woodsp <woodsp@us.ibm.com>
Co-authored-by: Manoel Marques <Manoel.Marques@ibm.com>