
Commit

Moved to experimental
mbaudin47 committed May 7, 2024
1 parent d056915 commit 2654156
Showing 6 changed files with 20 additions and 19 deletions.
8 changes: 4 additions & 4 deletions python/doc/theory/meta_modeling/cross_validation.rst
@@ -635,16 +635,16 @@ The generic cross-validation method can be implemented using the following class
to split the data set.

Since the :class:`~openturns.LinearModelResult` is based on linear least
-squares, fast methods are implemented in the :class:`~openturns.LinearModelValidation`.
+squares, fast methods are implemented in the :class:`~openturns.experimental.LinearModelValidation`.

-See :ref:`pce_cross_validation` and :class:`~openturns.FunctionalChaosValidation`
+See :ref:`pce_cross_validation` and :class:`~openturns.experimental.FunctionalChaosValidation`
for specific methods for the cross-validation of a polynomial chaos expansion.

.. topic:: API:

- See :class:`~openturns.MetaModelValidation`
-- See :class:`~openturns.LinearModelValidation`
-- See :class:`~openturns.FunctionalChaosValidation`
+- See :class:`~openturns.experimental.LinearModelValidation`
+- See :class:`~openturns.experimental.FunctionalChaosValidation`
- See :class:`~openturns.KFoldSplitter`
- See :class:`~openturns.LeaveOneOutSplitter`

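The "fast methods" this diff points at exist because, for linear least squares, the leave-one-out residuals follow from a single fit via the hat-matrix identity :math:`e_i^{LOO} = e_i / (1 - h_{ii})`. A minimal numpy sketch on synthetic data (independent of the OpenTURNS classes renamed here, all names illustrative) shows the shortcut agreeing with the naive refit-per-observation approach:

```python
import numpy as np

# Synthetic linear-regression data (illustrative only).
rng = np.random.default_rng(0)
n, p = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + 0.1 * rng.normal(size=n)

# Ordinary least-squares fit and in-sample residuals.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

# Diagonal of the hat matrix H = X (X'X)^{-1} X'.
H = X @ np.linalg.solve(X.T @ X, X.T)
leverage = np.diag(H)

# Analytical leave-one-out residuals: e_i / (1 - h_ii), one fit total.
loo_fast = residuals / (1.0 - leverage)

# Naive check: refit n times, each time holding out observation i.
loo_naive = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    b_i, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    loo_naive[i] = y[i] - X[i] @ b_i

print(np.allclose(loo_fast, loo_naive))  # True
```

The identity is exact for ordinary least squares, which is why a dedicated fast validation class is possible for linear models but only approximate once model selection enters the picture.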
4 changes: 2 additions & 2 deletions python/doc/theory/meta_modeling/pce_cross_validation.rst
@@ -52,11 +52,11 @@ then the fast methods presented in :ref:`cross_validation` can be applied:
- the fast leave-one-out cross-validation,
- the fast K-Fold cross-validation.

-Fast methods are implemented in :class:`~openturns.FunctionalChaosValidation`.
+Fast methods are implemented in :class:`~openturns.experimental.FunctionalChaosValidation`.

.. topic:: API:

-- See :class:`~openturns.FunctionalChaosValidation`
+- See :class:`~openturns.experimental.FunctionalChaosValidation`

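For contrast with the fast estimators above, the naive K-Fold procedure is easy to state: partition the indices into K folds, fit on each complement, score on the held-out fold. A short numpy sketch on hypothetical data (a plain linear fit stands in for the chaos expansion):

```python
import numpy as np

# Hypothetical data set; a linear fit stands in for the metamodel.
rng = np.random.default_rng(1)
n, k = 40, 5
X = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, size=n)])
y = X @ np.array([0.5, 2.0]) + 0.05 * rng.normal(size=n)

# Shuffle indices and split them into k folds.
folds = np.array_split(rng.permutation(n), k)

# For each fold: train on the complement, test on the fold.
squared_errors = []
for test_idx in folds:
    train = np.setdiff1d(np.arange(n), test_idx)
    beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    squared_errors.extend((y[test_idx] - X[test_idx] @ beta) ** 2)

# The K-Fold MSE estimate averages the held-out squared errors.
mse_kfold = np.mean(squared_errors)
```

Each observation is scored exactly once, so the estimate costs K fits; the fast methods referenced in this file recover the same kind of estimate from a single fit.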
.. topic:: References:

@@ -81,7 +81,7 @@ Results
FunctionalChaosSobolIndices

:template: classWithPlot.rst_t
-FunctionalChaosValidation
+experimental.FunctionalChaosValidation

Functional chaos on fields
==========================
2 changes: 1 addition & 1 deletion python/doc/user_manual/response_surface/lm.rst
@@ -26,4 +26,4 @@ Post-processing

:template: classWithPlot.rst_t

-LinearModelValidation
+experimental.LinearModelValidation
8 changes: 4 additions & 4 deletions python/src/FunctionalChaosValidation_doc.i.in
@@ -47,7 +47,7 @@ conditions are met.
If model selection is involved, the naive methods based on the
:class:`~openturns.LeaveOneOutSplitter` and :class:`~openturns.KFoldSplitter`
classes can be used, but this can be much slower than the
-analytical methods implemented in the :class:`~openturns.FunctionalChaosValidation`
+analytical methods implemented in the :class:`~openturns.experimental.FunctionalChaosValidation`
class.
In many cases, however, the order of magnitude of the estimate from the
analytical formula applied to a sparse model is correct: the estimate of
@@ -68,15 +68,15 @@ the :math:`i`-th prediction is the prediction of the linear model
trained using the hold-out sample where the :math:`i`-th observation
was removed.
This produces a sample of residuals which can be retrieved using
-the :class:`~openturns.FunctionalChaosValidation.getResidualSample` method.
-The :class:`~openturns.FunctionalChaosValidation.drawValidation` performs
+the :class:`~openturns.experimental.FunctionalChaosValidation.getResidualSample` method.
+The :class:`~openturns.experimental.FunctionalChaosValidation.drawValidation` performs
similarly.

If the weights of the observations are not equal, the analytical method
may not necessarily provide an accurate estimator of the mean squared error (MSE).
This is because LOO and K-Fold cross-validation do not take the weights
into account.
-Since the :class:`~openturns.FunctionalChaosResult` object does not know
+Since the :class:`~openturns.experimental.FunctionalChaosResult` object does not know
if the weights are equal, no exception can be generated.

If the sample was not produced from Monte Carlo, then the leave-one-out
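The naive methods described in this docstring iterate over train/test index pairs produced by a splitter. A minimal pure-Python sketch of the protocol a leave-one-out splitter follows (hypothetical helper, not the `ot.LeaveOneOutSplitter` implementation) makes the hold-out definition concrete:

```python
def leave_one_out_splitter(size):
    """Yield (train_indices, test_indices), one pair per observation.

    Iteration i holds out observation i and trains on the rest,
    mirroring the splitter protocol described in the docstring.
    """
    for i in range(size):
        train = [j for j in range(size) if j != i]
        yield train, [i]

splits = list(leave_one_out_splitter(4))
print(len(splits))   # 4
print(splits[0])     # ([1, 2, 3], [0])
```

Refitting the metamodel once per pair and collecting the held-out residuals yields exactly the residual sample the analytical formulas approximate in one pass.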
15 changes: 8 additions & 7 deletions python/src/LinearModelValidation_doc.i.in
@@ -39,7 +39,7 @@ cross-validation methods can be used.
If model selection is involved, the naive methods based on the
:class:`~openturns.LeaveOneOutSplitter` and :class:`~openturns.KFoldSplitter`
classes can be used directly, but this can be much slower than the
-analytical methods implemented in the :class:`~openturns.LinearModelValidation`
+analytical methods implemented in the :class:`~openturns.experimental.LinearModelValidation`
class.
In many cases, however, the order of magnitude of the estimate from the
analytical formula applied to a sparse model is correct: the estimate of
Expand All @@ -60,15 +60,16 @@ the :math:`i`-th prediction is the prediction of the linear model
trained using the hold-out sample where the :math:`i`-th observation
was removed.
This produces a sample of residuals which can be retrieved using
-the :class:`~openturns.LinearModelValidation.getResidualSample` method.
-The :class:`~openturns.LinearModelValidation.drawValidation` performs
+the :class:`~openturns.experimental.LinearModelValidation.getResidualSample` method.
+The :class:`~openturns.experimental.LinearModelValidation.drawValidation` performs
similarly.

Examples
--------
Create a linear model.

>>> import openturns as ot
+>>> import openturns.experimental as otexp
>>> func = ot.SymbolicFunction(
... ['x1', 'x2', 'x3'],
... ['x1 + x2 + sin(x2 * 2 * pi_) / 5 + 1e-3 * x3^2']
@@ -84,26 +84,26 @@ Create a linear model.

Validate the linear model using leave-one-out cross-validation.

->>> validation = ot.LinearModelValidation(result)
+>>> validation = otexp.LinearModelValidation(result)

We can use a specific cross-validation splitter if needed.

>>> splitterLOO = ot.LeaveOneOutSplitter(sampleSize)
->>> validation = ot.LinearModelValidation(result, splitterLOO)
+>>> validation = otexp.LinearModelValidation(result, splitterLOO)
>>> r2Score = validation.computeR2Score()
>>> print('R2 = ', r2Score[0])
R2 = 0.98...

Validate the linear model using K-Fold cross-validation.

>>> splitterKFold = ot.KFoldSplitter(sampleSize)
->>> validation = ot.LinearModelValidation(result, splitterKFold)
+>>> validation = otexp.LinearModelValidation(result, splitterKFold)

Validate the linear model using K-Fold cross-validation and set K.

>>> kFoldParameter = 10
>>> splitterKFold = ot.KFoldSplitter(sampleSize, kFoldParameter)
->>> validation = ot.LinearModelValidation(result, splitterKFold)
+>>> validation = otexp.LinearModelValidation(result, splitterKFold)

Draw the validation graph.


