Changes from all commits
29 commits
afc73ff
change documention of pso to new format, remove duplicated one_plus_o…
gauravmanmode Aug 1, 2025
684cf29
add missing booktitle and publiser in refs
gauravmanmode Aug 1, 2025
49f4208
doc for cmaes
gauravmanmode Aug 1, 2025
f6778d5
use tab-set instead of tabbed in how to guide
gauravmanmode Aug 5, 2025
179d94b
fix #627 incompitble bayes_opt and nevergrad
gauravmanmode Aug 8, 2025
5fba9f6
add doc oneplusone
gauravmanmode Aug 8, 2025
94f4961
add doc differential evolution
gauravmanmode Aug 8, 2025
a3a685a
add doc bayes optim
gauravmanmode Aug 8, 2025
ba2ce82
add doc emna
gauravmanmode Aug 8, 2025
f767fe5
add doc cga
gauravmanmode Aug 8, 2025
63362c7
add doc eda
gauravmanmode Aug 8, 2025
91cecee
add doc tbpsa
gauravmanmode Aug 9, 2025
5f7d725
add doc randomsearch
gauravmanmode Aug 9, 2025
097b921
add doc samplingsearch
gauravmanmode Aug 9, 2025
f1c43ac
add doc ngopt
gauravmanmode Aug 10, 2025
41f1a04
add doc meta
gauravmanmode Aug 10, 2025
616b854
use annotations from __future__
gauravmanmode Aug 10, 2025
3b32118
fix doc evalrst blocks and math
gauravmanmode Aug 10, 2025
cf7083e
fix tests
gauravmanmode Aug 13, 2025
6737294
ignore cma warnings, missing test
gauravmanmode Aug 14, 2025
eb454ff
removes bayes_optim from environment.yml
gauravmanmode Aug 15, 2025
6b95a2d
remove comments
gauravmanmode Aug 24, 2025
39e01ae
Merge remote-tracking branch 'upstream/main' into nevergrad_documenta…
gauravmanmode Aug 28, 2025
8f2ae14
fix mypy error
gauravmanmode Aug 28, 2025
a8228e9
fix link, smallcase for algoname
gauravmanmode Aug 28, 2025
6b7a899
add comments
gauravmanmode Aug 30, 2025
b214243
change to enum type, rename ngopt to wizard and meta to portfolio, an…
gauravmanmode Sep 1, 2025
1388b7e
show in docs
gauravmanmode Sep 1, 2025
d5fea5c
Merge branch 'main' into nevergrad_documentation_to_new_format
gauravmanmode Sep 19, 2025
1 change: 0 additions & 1 deletion .tools/envs/testenv-linux.yml
Original file line number Diff line number Diff line change
@@ -37,7 +37,6 @@ dependencies:
   - Py-BOBYQA # dev, tests
   - fides==0.7.4 # dev, tests
   - kaleido>=1.0 # dev, tests
-  - bayes_optim # dev, tests
   - gradient_free_optimizers # dev, tests
   - pandas-stubs # dev, tests
   - types-cffi # dev, tests
3 changes: 1 addition & 2 deletions .tools/envs/testenv-nevergrad.yml
@@ -34,14 +34,13 @@ dependencies:
   - Py-BOBYQA # dev, tests
   - fides==0.7.4 # dev, tests
   - kaleido>=1.0 # dev, tests
-  - bayes_optim # dev, tests
   - gradient_free_optimizers # dev, tests
   - pandas-stubs # dev, tests
   - types-cffi # dev, tests
   - types-openpyxl # dev, tests
   - types-jinja2 # dev, tests
   - sqlalchemy-stubs # dev, tests
   - sphinxcontrib-mermaid # dev, tests, docs
-  - -e ../../
   - bayesian_optimization==1.4.0
   - nevergrad
+  - -e ../../
1 change: 0 additions & 1 deletion .tools/envs/testenv-numpy.yml
@@ -35,7 +35,6 @@ dependencies:
   - Py-BOBYQA # dev, tests
   - fides==0.7.4 # dev, tests
   - kaleido>=1.0 # dev, tests
-  - bayes_optim # dev, tests
   - gradient_free_optimizers # dev, tests
   - types-cffi # dev, tests
   - types-openpyxl # dev, tests
1 change: 0 additions & 1 deletion .tools/envs/testenv-others.yml
@@ -35,7 +35,6 @@ dependencies:
   - Py-BOBYQA # dev, tests
   - fides==0.7.4 # dev, tests
   - kaleido>=1.0 # dev, tests
-  - bayes_optim # dev, tests
   - gradient_free_optimizers # dev, tests
   - pandas-stubs # dev, tests
   - types-cffi # dev, tests
1 change: 0 additions & 1 deletion .tools/envs/testenv-pandas.yml
@@ -35,7 +35,6 @@ dependencies:
   - Py-BOBYQA # dev, tests
   - fides==0.7.4 # dev, tests
   - kaleido>=1.0 # dev, tests
-  - bayes_optim # dev, tests
   - gradient_free_optimizers # dev, tests
   - types-cffi # dev, tests
   - types-openpyxl # dev, tests
3 changes: 1 addition & 2 deletions .tools/envs/testenv-plotly.yml
@@ -34,13 +34,12 @@ dependencies:
   - DFO-LS>=1.5.3 # dev, tests
   - Py-BOBYQA # dev, tests
   - fides==0.7.4 # dev, tests
-  - bayes_optim # dev, tests
   - gradient_free_optimizers # dev, tests
   - pandas-stubs # dev, tests
   - types-cffi # dev, tests
   - types-openpyxl # dev, tests
   - types-jinja2 # dev, tests
   - sqlalchemy-stubs # dev, tests
   - sphinxcontrib-mermaid # dev, tests, docs
-  - kaleido<0.3
   - -e ../../
+  - kaleido<0.3
848 changes: 290 additions & 558 deletions docs/source/algorithms.md

Large diffs are not rendered by default.

157 changes: 76 additions & 81 deletions docs/source/how_to/how_to_start_parameters.md
@@ -14,125 +14,120 @@ advantages and drawbacks of each of them.
 Again, we use the simple `sphere` function you know from other tutorials as an example.
 
 ```{eval-rst}
-.. tabbed:: Array
-
-   A frequent choice of ``params`` is a one-dimensional numpy array. This is
-   because one-dimensional numpy arrays are all that is supported by most optimizer
-   libraries.
+.. tab-set::
+
+    .. tab-item:: Array
 
-   In our opinion, it is rarely a good choice to represent parameters as flat numpy arrays
-   and then access individual parameters or sclices by positions. The only exception
-   are simple optimization problems with very-fast-to-evaluate criterion functions where
-   any overhead must be avoided.
+        A frequent choice of ``params`` is a one-dimensional numpy array. This is
+        because one-dimensional numpy arrays are all that is supported by most optimizer
+        libraries.
 
-   If you still want to use one-dimensional numpy arrays, here is how:
+        In our opinion, it is rarely a good choice to represent parameters as flat numpy arrays
+        and then access individual parameters or sclices by positions. The only exception
+        are simple optimization problems with very-fast-to-evaluate criterion functions where
+        any overhead must be avoided.
 
-   .. code-block:: python
+        If you still want to use one-dimensional numpy arrays, here is how:
 
-       import optimagic as om
+        .. code-block:: python
 
+            import optimagic as om
 
-       def sphere(params):
-           return params @ params
 
+            def sphere(params):
+                return params @ params
 
-       om.minimize(
-           fun=sphere,
-           params=np.arange(3),
-           algorithm="scipy_lbfgsb",
-       )
-
-```
+            om.minimize(
+                fun=sphere,
+                params=np.arange(3),
+                algorithm="scipy_lbfgsb",
+            )
 
-```{eval-rst}
-.. tabbed:: DataFrame
+    .. tab-item:: DataFrame
 
-   Originally, pandas DataFrames were the mandatory format for ``params`` in optimagic.
-   They are still highly recommended and have a few special features. For example,
-   they allow to bundle information on start parameters and bounds together into one
-   data structure.
+        Originally, pandas DataFrames were the mandatory format for ``params`` in optimagic.
+        They are still highly recommended and have a few special features. For example,
+        they allow to bundle information on start parameters and bounds together into one
+        data structure.
 
-   Let's look at an example where we do that:
+        Let's look at an example where we do that:
 
-   .. code-block:: python
+        .. code-block:: python
 
-       def sphere(params):
-           return (params["value"] ** 2).sum()
+            def sphere(params):
+                return (params["value"] ** 2).sum()
 
 
-       params = pd.DataFrame(
-           data={"value": [1, 2, 3], "lower_bound": [-np.inf, 1.5, 0]},
-           index=["a", "b", "c"],
-       )
+            params = pd.DataFrame(
+                data={"value": [1, 2, 3], "lower_bound": [-np.inf, 1.5, 0]},
+                index=["a", "b", "c"],
+            )
 
-       om.minimize(
-           fun=sphere,
-           params=params,
-           algorithm="scipy_lbfgsb",
-       )
+            om.minimize(
+                fun=sphere,
+                params=params,
+                algorithm="scipy_lbfgsb",
+            )
 
-   DataFrames have many advantages:
+        DataFrames have many advantages:
 
-   - It is easy to select single parameters or groups of parameters or work with
-     the entire parameter vector. Especially, if you use a well designed MultiIndex.
-   - It is very easy to produce publication quality LaTeX tables from them.
-   - If you have nested models, you can easily update the parameter vector of a larger
-     model with the values from a smaller one (e.g. to get good start parameters).
-   - You can bundle information on bounds and values in one place.
-   - It is easy to compare two params vectors for equality.
+        - It is easy to select single parameters or groups of parameters or work with
+          the entire parameter vector. Especially, if you use a well designed MultiIndex.
+        - It is very easy to produce publication quality LaTeX tables from them.
+        - If you have nested models, you can easily update the parameter vector of a larger
+          model with the values from a smaller one (e.g. to get good start parameters).
+        - You can bundle information on bounds and values in one place.
+        - It is easy to compare two params vectors for equality.
 
 
-   If you are sure you won't have bounds on your parameter, you can also use a
-   pandas.Series instead of a pandas.DataFrame.
+        If you are sure you won't have bounds on your parameter, you can also use a
+        pandas.Series instead of a pandas.DataFrame.
 
-   A drawback of DataFrames is that they are not JAX compatible. Another one is that
-   they are a bit slower than numpy arrays.
+        A drawback of DataFrames is that they are not JAX compatible. Another one is that
+        they are a bit slower than numpy arrays.
 
-
-```
+    .. tab-item:: Dict
 
-```{eval-rst}
-.. tabbed:: Dict
+        ``params`` can also be a (nested) dictionary containing all of the above and more.
 
-   ``params`` can also be a (nested) dictionary containing all of the above and more.
+        .. code-block:: python
 
-   .. code-block:: python
+            def sphere(params):
+                return params["a"] ** 2 + params["b"] ** 2 + (params["c"] ** 2).sum()
 
-       def sphere(params):
-           return params["a"] ** 2 + params["b"] ** 2 + (params["c"] ** 2).sum()
 
+            res = om.minimize(
+                fun=sphere,
+                params={"a": 0, "b": 1, "c": pd.Series([2, 3, 4])},
+                algorithm="scipy_neldermead",
+            )
 
-       res = om.minimize(
-           fun=sphere,
-           params={"a": 0, "b": 1, "c": pd.Series([2, 3, 4])},
-           algorithm="scipy_neldermead",
-       )
+        Dictionarys of arrays are ideal if you want to do vectorized computations with
+        groups of parameters. They are also a good choice if you calculate derivatives
+        with JAX.
 
-   Dictionarys of arrays are ideal if you want to do vectorized computations with
-   groups of parameters. They are also a good choice if you calculate derivatives
-   with JAX.
+        While optimagic won't stop you, don't go too far! Having parameters in very deeply
+        nested dictionaries makes it hard to visualize results and/or even to compare two
+        estimation results.
 
-   While optimagic won't stop you, don't go too far! Having parameters in very deeply
-   nested dictionaries makes it hard to visualize results and/or even to compare two
-   estimation results.
+    .. tab-item:: Scalar
 
-```
+        If you have a one-dimensional optimization problem, the natural way to represent
+        your params is a float:
 
-```{eval-rst}
-.. tabbed:: Scalar
+        .. code-block:: python
 
-   If you have a one-dimensional optimization problem, the natural way to represent
-   your params is a float:
+            def sphere(params):
+                return params**2
 
-   .. code-block:: python
 
-       def sphere(params):
-           return params**2
+            om.minimize(
+                fun=sphere,
+                params=3,
+                algorithm="scipy_lbfgsb",
+            )
 
-       om.minimize(
-           fun=sphere,
-           params=3,
-           algorithm="scipy_lbfgsb",
-       )
 ```
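The DataFrame tab above claims that nested models make it easy to update a larger model's parameter vector with values from a smaller one. A minimal pandas sketch of that workflow (the frames, labels, and values are illustrative, not taken from the PR):

```python
import pandas as pd

# Start params of a hypothetical "large" model; values are placeholders.
large = pd.DataFrame(
    data={"value": [0.0, 0.0, 0.0]},
    index=["a", "b", "c"],
)

# Estimated params of a smaller nested model sharing the labels "a" and "c".
small = pd.DataFrame(
    data={"value": [1.5, 2.5]},
    index=["a", "c"],
)

# DataFrame.update aligns on the index in place, so only "a" and "c"
# are overwritten; "b" keeps its original value.
large.update(small)
print(large["value"].tolist())  # [1.5, 0.0, 2.5]
```

This index alignment is what makes the DataFrame format convenient for warm-starting a big model from a previously estimated sub-model.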
5 changes: 3 additions & 2 deletions docs/source/refs.bib
@@ -964,8 +964,8 @@ @inproceedings{tbpsaimpl
   year = {2016},
   month = {09},
   pages = {},
-  title = {Evolution under Strong Noise: A Self-Adaptive Evolution Strategy Can Reach the Lower Performance Bound - the pcCMSA-ES},
-  volume = {9921},
+  title = {Evolution under Strong Noise: A Self-Adaptive Evolution Strategy Can Reach the Lower Performance Bound - the pcCMSA-ES},
+  booktitle = {Parallel Problem Solving from Nature -- PPSN XIII},volume = {9921},
   isbn = {9783319458229},
   doi = {10.1007/978-3-319-45823-6_3}
 }
@@ -1037,6 +1037,7 @@ @book{emnaimpl
   pages = {},
   title = {Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation},
   isbn = {9781461356042},
+  publisher = {Springer},
   journal = {Genetic algorithms and evolutionary computation ; 2},
   doi = {10.1007/978-1-4615-1539-5}
 }
1 change: 0 additions & 1 deletion environment.yml
@@ -49,7 +49,6 @@ dependencies:
   - fides==0.7.4 # dev, tests
   - kaleido>=1.0 # dev, tests
   - pre-commit>=4 # dev
-  - bayes_optim # dev, tests
   - gradient_free_optimizers # dev, tests
   - -e . # dev
   # type stubs
1 change: 1 addition & 0 deletions pyproject.toml
@@ -383,6 +383,7 @@ module = [
     "pdbp",
     "iminuit",
     "nevergrad",
+    "nevergrad.optimization.base",
     "pygad",
     "yaml",
     "gradient_free_optimizers",
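The pyproject.toml hunk above adds "nevergrad.optimization.base" to a mypy `module` list. Given the surrounding entries, that list most likely sits in a per-module overrides table that silences missing-stub errors for untyped third-party packages; a sketch of such a table (the `ignore_missing_imports` setting is an assumption, not quoted from the repo — only the module names appear in the diff):

```toml
# Hypothetical shape of the overrides table the hunk extends.
[[tool.mypy.overrides]]
module = [
    "pdbp",
    "iminuit",
    "nevergrad",
    "nevergrad.optimization.base",  # line added by this PR
    "pygad",
    "yaml",
    "gradient_free_optimizers",
]
ignore_missing_imports = true
```

Listing the submodule explicitly is needed because mypy matches override patterns per module, so ignoring `nevergrad` alone does not necessarily cover `nevergrad.optimization.base` unless a wildcard like `nevergrad.*` is used.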