Commit

Apply suggestions from code review
Co-authored-by: josephmure <31989332+josephmure@users.noreply.github.com>
mbaudin47 and josephmure committed Jun 20, 2024
1 parent 09ad0a8 commit 41a1c5a
Showing 3 changed files with 11 additions and 11 deletions.
4 changes: 2 additions & 2 deletions python/doc/theory/meta_modeling/cross_validation.rst
@@ -634,8 +634,8 @@ The generic cross-validation method can be implemented using the following class
- :class:`~openturns.KFoldSplitter`: uses the K-Fold method
to split the data set.

-Since the :class:`~openturns.LinearModelResult` is based on linear least
-squares, fast methods are implemented in the :class:`~openturns.experimental.LinearModelValidation`.
+Since :class:`~openturns.LinearModelResult` is based on linear least
+squares, fast methods are implemented in :class:`~openturns.experimental.LinearModelValidation`.

See :ref:`pce_cross_validation` and :class:`~openturns.experimental.FunctionalChaosValidation`
for specific methods for the cross-validation of a polynomial chaos expansion.
14 changes: 7 additions & 7 deletions python/src/FunctionalChaosValidation_doc.i.in
@@ -9,7 +9,7 @@
Parameters
----------
result : :class:`~openturns.FunctionalChaosResult`
-A functional chaos result resulting from a polynomial chaos expansion.
+A functional chaos result obtained from a polynomial chaos expansion.

splitter : :class:`~openturns.SplitterImplementation`, optional
The cross-validation method.
@@ -28,17 +28,17 @@ cross-validation methods presented in :ref:`pce_cross_validation`.
Analytical cross-validation can only be performed accurately if some
conditions are met.

-- This can be done only if the coefficients of the expansion are estimated
+- This can only be done if the coefficients of the expansion are estimated
using least squares regression: if the expansion is computed from integration,
then an exception is produced.
-- This can be done only if the coefficients of the expansion are estimated
+- This can only be done if the coefficients of the expansion are estimated
using full expansion, without model selection: if the expansion is computed
with model selection, then an exception is produced by default.
This is because model selection leads to supposedly improved coefficients,
so that the hypotheses required to estimate the mean squared error
using the cross-validation method are not satisfied anymore.
-As a consequence, using the analytical formula without taking into
-account for the model selection leads to a biased, optimistic, mean squared
+As a consequence, using the analytical formula without taking model selection into
+account leads to a biased, overly optimistic, mean squared
error.
More precisely, the analytical formula produces a MSE which is lower
than the true one on average.
@@ -72,8 +72,8 @@ the :math:`i`-th prediction is the prediction of the linear model
trained using the hold-out sample where the :math:`i`-th observation
was removed.
This produces a sample of residuals which can be retrieved using
-the :class:`~openturns.experimental.FunctionalChaosValidation.getResidualSample` method.
-The :class:`~openturns.experimental.FunctionalChaosValidation.drawValidation` performs
+the :meth:`~openturns.experimental.FunctionalChaosValidation.getResidualSample` method.
+The :meth:`~openturns.experimental.FunctionalChaosValidation.drawValidation` method performs
similarly.

If the weights of the observations are not equal, the analytical method
4 changes: 2 additions & 2 deletions python/test/t_LinearModelValidation_std.py
@@ -10,8 +10,8 @@
kFoldParameter = 4
foldRootSize = 3
# Make it so that k does not divide the sample size.
-# In this case, we must take into account for the different weight of
-# each fold.
+# In this case, we must take the different weights
+# of each fold into account.
sampleSize = foldRootSize * kFoldParameter + 1
print("sampleSize = ", sampleSize)
aCollection = []
