
added initial wrapper for migrad #568


Merged: 9 commits, Apr 28, 2025
1 change: 1 addition & 0 deletions .tools/envs/testenv-linux.yml
@@ -27,6 +27,7 @@ dependencies:
  - pyyaml # dev, tests
  - jinja2 # dev, tests
  - annotated-types # dev, tests
  - iminuit # dev, tests
  - pip: # dev, tests, docs
      - DFO-LS>=1.5.3 # dev, tests
      - Py-BOBYQA # dev, tests
1 change: 1 addition & 0 deletions .tools/envs/testenv-numpy.yml
@@ -25,6 +25,7 @@ dependencies:
  - pyyaml # dev, tests
  - jinja2 # dev, tests
  - annotated-types # dev, tests
  - iminuit # dev, tests
  - pip: # dev, tests, docs
      - DFO-LS>=1.5.3 # dev, tests
      - Py-BOBYQA # dev, tests
1 change: 1 addition & 0 deletions .tools/envs/testenv-others.yml
@@ -25,6 +25,7 @@ dependencies:
  - pyyaml # dev, tests
  - jinja2 # dev, tests
  - annotated-types # dev, tests
  - iminuit # dev, tests
  - pip: # dev, tests, docs
      - DFO-LS>=1.5.3 # dev, tests
      - Py-BOBYQA # dev, tests
1 change: 1 addition & 0 deletions .tools/envs/testenv-pandas.yml
@@ -25,6 +25,7 @@ dependencies:
  - pyyaml # dev, tests
  - jinja2 # dev, tests
  - annotated-types # dev, tests
  - iminuit # dev, tests
  - pip: # dev, tests, docs
      - DFO-LS>=1.5.3 # dev, tests
      - Py-BOBYQA # dev, tests
48 changes: 48 additions & 0 deletions docs/source/algorithms.md
@@ -3936,6 +3936,54 @@ addition to optimagic when using an NLOPT algorithm. To install nlopt run
    10 * (number of parameters + 1).
```

## Optimizers from iminuit

optimagic supports the [MIGRAD optimizer](https://iminuit.readthedocs.io/) from iminuit.
To use MIGRAD, you need to have
[the iminuit package](https://github.com/scikit-hep/iminuit) installed
(`pip install iminuit`). A minimal usage sketch is shown after the dropdown below.

```{eval-rst}
.. dropdown:: iminuit_migrad

    .. code-block::

        "iminuit_migrad"

    `MIGRAD <https://iminuit.readthedocs.io/en/stable/reference.html#iminuit.Minuit.migrad>`_ is
    the workhorse algorithm of the MINUIT optimization suite, which has been widely used
    in the high-energy physics community since 1975. The iminuit package is a Python
    interface to the Minuit2 C++ library developed at CERN.

    MIGRAD uses a quasi-Newton (variable-metric) method, iteratively updating an
    estimate of the Hessian to guide the optimization. The algorithm adapts to
    challenging landscapes using several key techniques:

    - **Quasi-Newton updates**: The Hessian estimate is updated iteratively rather than
      recalculated at each step, improving efficiency.
    - **Steepest descent fallback**: When the Hessian update fails, MIGRAD falls back to
      steepest descent with a line search.
    - **Box constraint handling**: Parameters with bounds are transformed internally so
      that they remain within the allowed limits.
    - **Heuristics for numerical stability**: Special cases such as flat gradients or
      singular Hessians are handled with pre-defined heuristics.
    - **Stopping criterion based on the Estimated Distance to Minimum (EDM)**: The
      optimization halts when the predicted improvement becomes sufficiently small, as
      sketched below.
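
    For a quadratic objective, the EDM is (up to convention regarding the factor of
    one half) the predicted decrease in the function value, which makes it a natural
    stopping rule:

    .. math::

        \text{EDM} \approx \tfrac{1}{2}\, g^\top V g

    where :math:`g` is the current gradient and :math:`V` is the current estimate of
    the inverse Hessian.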

    For details, see :cite:`JAMES1975343`.

    **Optimizer Parameters:**

    - **stopping.maxfun** (int): Maximum number of function evaluations. If this limit
      is reached, the optimization stops, but this is not counted as successful
      convergence. Function evaluations used for numerical gradient calculations do
      not count toward this limit. Default is 1,000,000.

    - **n_restarts** (int): Number of times to restart the optimizer if convergence is
      not reached.

      - A value of 1 (the default) means the optimizer runs only once, disabling the
        restart feature.
      - Values greater than 1 specify the maximum number of restart attempts.
```
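
As a quick illustration, here is a minimal sketch of calling MIGRAD through optimagic's
`minimize` interface (assuming iminuit is installed; the option names follow the
parameters documented above, and the test function is purely illustrative):

```python
import numpy as np
import optimagic as om


def sphere(params):
    # Simple convex test function with its minimum at the origin.
    return params @ params


res = om.minimize(
    fun=sphere,
    params=np.arange(3.0),
    algorithm="iminuit_migrad",
    algo_options={"stopping.maxfun": 100_000, "n_restarts": 2},
)
print(res.params)  # should be close to [0.0, 0.0, 0.0]
```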

## References

```{eval-rst}
Expand Down
13 changes: 13 additions & 0 deletions docs/source/refs.bib
@@ -893,4 +893,17 @@ @book{Conn2009
  URL = {https://epubs.siam.org/doi/abs/10.1137/1.9780898718768},
}

@article{JAMES1975343,
  title = {Minuit - a system for function minimization and analysis of the parameter errors and correlations},
  journal = {Computer Physics Communications},
  volume = {10},
  number = {6},
  pages = {343-367},
  year = {1975},
  issn = {0010-4655},
  doi = {10.1016/0010-4655(75)90039-9},
  url = {https://www.sciencedirect.com/science/article/pii/0010465575900399},
  author = {F. James and M. Roos},
}

@Comment{jabref-meta: databaseType:bibtex;}
1 change: 1 addition & 0 deletions environment.yml
@@ -37,6 +37,7 @@ dependencies:
  - jinja2 # dev, tests
  - furo # dev, docs
  - annotated-types # dev, tests
  - iminuit # dev, tests
  - pip: # dev, tests, docs
      - DFO-LS>=1.5.3 # dev, tests
      - Py-BOBYQA # dev, tests
2 changes: 2 additions & 0 deletions pyproject.toml
@@ -16,6 +16,7 @@ dependencies = [
    "sqlalchemy>=1.3",
    "annotated-types",
    "typing-extensions",
    "iminuit",
]
dynamic = ["version"]
keywords = [
@@ -378,5 +379,6 @@ module = [
    "optimagic._version",
    "annotated_types",
    "pdbp",
    "iminuit",
]
ignore_missing_imports = true
17 changes: 17 additions & 0 deletions src/optimagic/algorithms.py
@@ -14,6 +14,7 @@
from optimagic.optimization.algorithm import Algorithm
from optimagic.optimizers.bhhh import BHHH
from optimagic.optimizers.fides import Fides
from optimagic.optimizers.iminuit_migrad import IminuitMigrad
from optimagic.optimizers.ipopt import Ipopt
from optimagic.optimizers.nag_optimizers import NagDFOLS, NagPyBOBYQA
from optimagic.optimizers.neldermead import NelderMeadParallel
@@ -286,6 +287,7 @@ def Scalar(self) -> BoundedGradientBasedLocalNonlinearConstrainedScalarAlgorithm
@dataclass(frozen=True)
class BoundedGradientBasedLocalScalarAlgorithms(AlgoSelection):
    fides: Type[Fides] = Fides
    iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
    ipopt: Type[Ipopt] = Ipopt
    nlopt_ccsaq: Type[NloptCCSAQ] = NloptCCSAQ
    nlopt_lbfgsb: Type[NloptLBFGSB] = NloptLBFGSB
@@ -840,6 +842,7 @@ def NonlinearConstrained(
@dataclass(frozen=True)
class BoundedGradientBasedLocalAlgorithms(AlgoSelection):
    fides: Type[Fides] = Fides
    iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
    ipopt: Type[Ipopt] = Ipopt
    nlopt_ccsaq: Type[NloptCCSAQ] = NloptCCSAQ
    nlopt_lbfgsb: Type[NloptLBFGSB] = NloptLBFGSB
@@ -889,6 +892,7 @@ def Scalar(self) -> GradientBasedLocalNonlinearConstrainedScalarAlgorithms:
@dataclass(frozen=True)
class GradientBasedLocalScalarAlgorithms(AlgoSelection):
    fides: Type[Fides] = Fides
    iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
    ipopt: Type[Ipopt] = Ipopt
    nlopt_ccsaq: Type[NloptCCSAQ] = NloptCCSAQ
    nlopt_lbfgsb: Type[NloptLBFGSB] = NloptLBFGSB
@@ -956,6 +960,7 @@ def Scalar(self) -> BoundedGradientBasedNonlinearConstrainedScalarAlgorithms:
@dataclass(frozen=True)
class BoundedGradientBasedScalarAlgorithms(AlgoSelection):
    fides: Type[Fides] = Fides
    iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
    ipopt: Type[Ipopt] = Ipopt
    nlopt_ccsaq: Type[NloptCCSAQ] = NloptCCSAQ
    nlopt_lbfgsb: Type[NloptLBFGSB] = NloptLBFGSB
@@ -1674,6 +1679,7 @@ def Scalar(self) -> BoundedLocalNonlinearConstrainedScalarAlgorithms:
@dataclass(frozen=True)
class BoundedLocalScalarAlgorithms(AlgoSelection):
    fides: Type[Fides] = Fides
    iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
    ipopt: Type[Ipopt] = Ipopt
    nag_pybobyqa: Type[NagPyBOBYQA] = NagPyBOBYQA
    nlopt_bobyqa: Type[NloptBOBYQA] = NloptBOBYQA
@@ -1943,6 +1949,7 @@ def Scalar(self) -> GlobalGradientBasedScalarAlgorithms:
class GradientBasedLocalAlgorithms(AlgoSelection):
    bhhh: Type[BHHH] = BHHH
    fides: Type[Fides] = Fides
    iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
    ipopt: Type[Ipopt] = Ipopt
    nlopt_ccsaq: Type[NloptCCSAQ] = NloptCCSAQ
    nlopt_lbfgsb: Type[NloptLBFGSB] = NloptLBFGSB
@@ -1985,6 +1992,7 @@ def Scalar(self) -> GradientBasedLocalScalarAlgorithms:
@dataclass(frozen=True)
class BoundedGradientBasedAlgorithms(AlgoSelection):
    fides: Type[Fides] = Fides
    iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
    ipopt: Type[Ipopt] = Ipopt
    nlopt_ccsaq: Type[NloptCCSAQ] = NloptCCSAQ
    nlopt_lbfgsb: Type[NloptLBFGSB] = NloptLBFGSB
@@ -2054,6 +2062,7 @@ def Scalar(self) -> GradientBasedNonlinearConstrainedScalarAlgorithms:
@dataclass(frozen=True)
class GradientBasedScalarAlgorithms(AlgoSelection):
    fides: Type[Fides] = Fides
    iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
    ipopt: Type[Ipopt] = Ipopt
    nlopt_ccsaq: Type[NloptCCSAQ] = NloptCCSAQ
    nlopt_lbfgsb: Type[NloptLBFGSB] = NloptLBFGSB
@@ -2577,6 +2586,7 @@ def Scalar(self) -> GlobalParallelScalarAlgorithms:
@dataclass(frozen=True)
class BoundedLocalAlgorithms(AlgoSelection):
    fides: Type[Fides] = Fides
    iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
    ipopt: Type[Ipopt] = Ipopt
    nag_dfols: Type[NagDFOLS] = NagDFOLS
    nag_pybobyqa: Type[NagPyBOBYQA] = NagPyBOBYQA
@@ -2659,6 +2669,7 @@ def Scalar(self) -> LocalNonlinearConstrainedScalarAlgorithms:
@dataclass(frozen=True)
class LocalScalarAlgorithms(AlgoSelection):
    fides: Type[Fides] = Fides
    iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
    ipopt: Type[Ipopt] = Ipopt
    nag_pybobyqa: Type[NagPyBOBYQA] = NagPyBOBYQA
    neldermead_parallel: Type[NelderMeadParallel] = NelderMeadParallel
@@ -2809,6 +2820,7 @@ def Scalar(self) -> BoundedNonlinearConstrainedScalarAlgorithms:
@dataclass(frozen=True)
class BoundedScalarAlgorithms(AlgoSelection):
    fides: Type[Fides] = Fides
    iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
    ipopt: Type[Ipopt] = Ipopt
    nag_pybobyqa: Type[NagPyBOBYQA] = NagPyBOBYQA
    nlopt_bobyqa: Type[NloptBOBYQA] = NloptBOBYQA
@@ -3063,6 +3075,7 @@ def Local(self) -> LeastSquaresLocalParallelAlgorithms:
class GradientBasedAlgorithms(AlgoSelection):
    bhhh: Type[BHHH] = BHHH
    fides: Type[Fides] = Fides
    iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
    ipopt: Type[Ipopt] = Ipopt
    nlopt_ccsaq: Type[NloptCCSAQ] = NloptCCSAQ
    nlopt_lbfgsb: Type[NloptLBFGSB] = NloptLBFGSB
@@ -3246,6 +3259,7 @@ def Scalar(self) -> GlobalScalarAlgorithms:
class LocalAlgorithms(AlgoSelection):
    bhhh: Type[BHHH] = BHHH
    fides: Type[Fides] = Fides
    iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
    ipopt: Type[Ipopt] = Ipopt
    nag_dfols: Type[NagDFOLS] = NagDFOLS
    nag_pybobyqa: Type[NagPyBOBYQA] = NagPyBOBYQA
@@ -3316,6 +3330,7 @@ def Scalar(self) -> LocalScalarAlgorithms:
@dataclass(frozen=True)
class BoundedAlgorithms(AlgoSelection):
    fides: Type[Fides] = Fides
    iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
    ipopt: Type[Ipopt] = Ipopt
    nag_dfols: Type[NagDFOLS] = NagDFOLS
    nag_pybobyqa: Type[NagPyBOBYQA] = NagPyBOBYQA
@@ -3451,6 +3466,7 @@ def Scalar(self) -> NonlinearConstrainedScalarAlgorithms:
@dataclass(frozen=True)
class ScalarAlgorithms(AlgoSelection):
    fides: Type[Fides] = Fides
    iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
    ipopt: Type[Ipopt] = Ipopt
    nag_pybobyqa: Type[NagPyBOBYQA] = NagPyBOBYQA
    neldermead_parallel: Type[NelderMeadParallel] = NelderMeadParallel
@@ -3625,6 +3641,7 @@ def Scalar(self) -> ParallelScalarAlgorithms:
class Algorithms(AlgoSelection):
    bhhh: Type[BHHH] = BHHH
    fides: Type[Fides] = Fides
    iminuit_migrad: Type[IminuitMigrad] = IminuitMigrad
    ipopt: Type[Ipopt] = Ipopt
    nag_dfols: Type[NagDFOLS] = NagDFOLS
    nag_pybobyqa: Type[NagPyBOBYQA] = NagPyBOBYQA
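
These registry additions make MIGRAD discoverable through optimagic's algorithm-selection
helper. A sketch of what that enables, assuming the public `om.algos` entry point mirrors
the dataclasses above (the exact chaining order is illustrative):

```python
import optimagic as om

# Each dataclass above corresponds to one combination of filters; chaining the
# filter properties narrows the selection until only matching algorithms remain.
selected = om.algos.Bounded.GradientBased.Local.Scalar
assert hasattr(selected, "iminuit_migrad")
```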
8 changes: 8 additions & 0 deletions src/optimagic/config.py
@@ -92,6 +92,14 @@
    IS_NUMBA_INSTALLED = True


try:
    import iminuit  # noqa: F401
except ImportError:
    IS_IMINUIT_INSTALLED = False
else:
    IS_IMINUIT_INSTALLED = True


# ======================================================================================
# Check if pandas version is newer or equal to version 2.1.0
# ======================================================================================
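
For context, availability flags like this are typically consumed before an optional
dependency is used. A hypothetical sketch (the helper below is an illustration, not
optimagic's actual code):

```python
from optimagic.config import IS_IMINUIT_INSTALLED


def _require_iminuit() -> None:
    # Hypothetical guard: fail early with an actionable message if the
    # optional dependency is missing.
    if not IS_IMINUIT_INSTALLED:
        raise ImportError(
            "The iminuit_migrad algorithm requires the 'iminuit' package. "
            "Install it with 'pip install iminuit'."
        )
```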
13 changes: 13 additions & 0 deletions src/optimagic/optimization/algo_options.py
@@ -122,6 +122,19 @@

"""

N_RESTARTS = 1
"""int: Number of times to restart the optimizer if convergence is not reached.

This parameter controls how many times the optimization process is restarted in an
attempt to achieve convergence.

- A value of 1 (the default) means the optimizer runs only once, disabling the restart
  feature.
- Values greater than 1 specify the maximum number of restart attempts.

Note: This is distinct from `STOPPING_MAXITER`, which limits the number of iterations
within a single optimizer run, not the number of restarts.
"""


def get_population_size(population_size, x, lower_bound=10):
"""Default population size for genetic algorithms."""
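
To make the restart semantics concrete, here is a hedged sketch of how a restart loop
could look directly against iminuit's API (an illustration of the idea, not optimagic's
actual wrapper code):

```python
from iminuit import Minuit


def rosenbrock(x, y):
    # Classic curved-valley test function with its minimum at (1, 1).
    return (1 - x) ** 2 + 100 * (y - x**2) ** 2


m = Minuit(rosenbrock, x=0.0, y=0.0)

n_restarts = 3
for _ in range(n_restarts):
    m.migrad()  # one MIGRAD run
    if m.fmin.is_valid:  # converged: EDM below threshold, no limits hit
        break

print(m.values["x"], m.values["y"])  # should be close to 1, 1
```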