
Callback function is not being called for TrainableModels when using optimizers that don't inherit from SciPyOptimizer #893


Open
matheoxz opened this issue Feb 11, 2025 · 1 comment
Labels
type: bug 🐞 Something isn't working

@matheoxz

Environment

  • Qiskit Machine Learning version: 0.8.2
  • Python version: 3.1.1
  • Operating system: Windows 11

What is happening?

The callback function is not being called for TrainableModels when using optimizers that don't inherit from SciPyOptimizer.

Around 2 months ago, a commit named "Fix callback compatibility for trainable_model" introduced an if clause around the callback call in the _get_objective method, which checks whether the optimizer is a SciPyOptimizer instance before executing the callback function.

However, some of the optimizers in the repository inherit from Optimizer rather than SciPyOptimizer, so the if clause evaluates to false and the callback function never executes.

ADAM, AQGD, GradientDescent, GSLS, SPSA and UMDA are not SciPyOptimizers, but it is important for all of them to have the callback function executed at each iteration.
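The gating described above can be illustrated with a minimal stand-in sketch (hypothetical class and function names, not the actual qiskit-machine-learning source): because the guard checks `isinstance(optimizer, SciPyOptimizer)`, any optimizer that extends `Optimizer` instead silently skips the callback.

```python
# Minimal stand-in sketch of the gating bug; class names are hypothetical,
# mirroring the hierarchy described in the issue.
class SciPyOptimizer:
    pass

class Optimizer:
    pass

class SPSA(Optimizer):  # stand-in: the real SPSA extends Optimizer, not SciPyOptimizer
    pass

def get_objective(optimizer, callback):
    """Build an objective that conditionally invokes the user callback."""
    def objective(weights):
        value = sum(w * w for w in weights)  # dummy loss
        # The guard introduced by the fix: the callback only fires
        # for SciPyOptimizer instances ...
        if isinstance(optimizer, SciPyOptimizer):
            callback(weights, value)
        return value
    return objective

history = []
objective = get_objective(SPSA(), lambda w, v: history.append(v))
objective([1.0, 2.0])
print(len(history))  # prints 0: the callback never ran for SPSA
```

Running the same snippet with a `SciPyOptimizer()` instance would append to `history`, which is exactly the asymmetry reported here.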

How can we reproduce the issue?

  • Train a VQC or any TrainableModel implementation
    -- with any callback function
    -- with any of the following optimizers: ADAM, AQGD, GradientDescent, GSLS, SPSA or UMDA
    -- with any data, feature_map and ansatz

What should happen?

The callback function should be executed at each iteration; instead, it is never called.

Any suggestions?

No response

@matheoxz matheoxz changed the title Callback function is not being called for TrainableModels when using optimzers that don't inherit from SciPyOptimizer Callback function is not being called for TrainableModels when using optimizers that don't inherit from SciPyOptimizer Feb 11, 2025
@edoaltamura edoaltamura added the type: bug 🐞 Something isn't working label Feb 20, 2025
@edoaltamura edoaltamura added this to the v.0.8.3 milestone Feb 20, 2025

OkuyanBoga commented Mar 17, 2025

Hi @matheoxz, thank you for raising this issue.

We discovered that when a non-SciPyOptimizer is used, the callback is triggered more often than necessary; for example, the SPSA callback was called three times instead of once. To address this, the callback is now passed directly to the non-SciPyOptimizer functions rather than to TrainableModels.
You can see an example of the changes in the unit tests:

02ee6a7#diff-b0eb994eebb823e4a87b21ccf3edd69499cf91a0adc3e9dcb2582afcdf3e112aR508-R535

Currently, for example, instead of TrainableModel(*, callback=SPSAcallback), you can use SPSA(*, callback=SPSAcallback) together with TrainableModel(*).

Maybe we can add a check to TrainableModel for optimizers that are not SciPyOptimizer instances and assign optimizer.callback = callback, with the caveat that each optimizer's callback function expects different input arguments.
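The suggested check could be sketched as follows (a minimal sketch with stand-in classes, not the real qiskit-machine-learning API): when the optimizer is not a SciPyOptimizer, TrainableModel forwards the user callback directly to the optimizer.

```python
# Hedged sketch of the suggested forwarding check; all class names here
# are simplified stand-ins for illustration only.
class SciPyOptimizer:
    def __init__(self, callback=None):
        self.callback = callback

class Optimizer:
    def __init__(self, callback=None):
        self.callback = callback

class SPSA(Optimizer):  # stand-in for a non-SciPy optimizer
    pass

class TrainableModel:
    def __init__(self, optimizer, callback=None):
        self.optimizer = optimizer
        self.callback = callback
        # Suggested check: non-SciPy optimizers receive the callback directly,
        # accepting that each optimizer defines its own callback signature.
        if callback is not None and not isinstance(optimizer, SciPyOptimizer):
            optimizer.callback = callback

history = []
model = TrainableModel(SPSA(), callback=lambda *args: history.append(args))
model.optimizer.callback("step-1")  # the optimizer now drives the callback
print(len(history))  # prints 1
```

The caveat mentioned above still applies: each optimizer invokes its callback with its own argument list, so a single user callback would need to accept `*args` or be adapted per optimizer.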

If you have any alternative suggestions or examples, please share them so we can consider incorporating your ideas in the next version.
