diff --git a/python/doc/theory/meta_modeling/cross_validation.rst b/python/doc/theory/meta_modeling/cross_validation.rst
index d8adcd16e2..d346aaa680 100644
--- a/python/doc/theory/meta_modeling/cross_validation.rst
+++ b/python/doc/theory/meta_modeling/cross_validation.rst
@@ -634,8 +634,8 @@ The generic cross-validation method can be implemented using the following class

 - :class:`~openturns.KFoldSplitter`: uses the K-Fold method to split the data set.

-Since the :class:`~openturns.LinearModelResult` is based on linear least
-squares, fast methods are implemented in the :class:`~openturns.experimental.LinearModelValidation`.
+Since :class:`~openturns.LinearModelResult` is based on linear least
+squares, fast methods are implemented in :class:`~openturns.experimental.LinearModelValidation`.
 See :ref:`pce_cross_validation` and :class:`~openturns.experimental.FunctionalChaosValidation`
-for specific methods for the the cross-validation of a polynomial chaos expansion.
+for specific methods for the cross-validation of a polynomial chaos expansion.

diff --git a/python/src/FunctionalChaosValidation_doc.i.in b/python/src/FunctionalChaosValidation_doc.i.in
index 1d4348b3d2..36dbd12062 100644
--- a/python/src/FunctionalChaosValidation_doc.i.in
+++ b/python/src/FunctionalChaosValidation_doc.i.in
@@ -5,7 +5,7 @@
 Parameters
 ----------
 result : :class:`~openturns.FunctionalChaosResult`
-    A functional chaos result resulting from a polynomial chaos expansion.
+    A functional chaos result obtained from a polynomial chaos expansion.
 splitter : :class:`~openturns.SplitterImplementation`, optional
     The cross-validation method.

@@ -24,17 +24,17 @@
 cross-validation methods presented in :ref:`pce_cross_validation`.
 Analytical cross-validation can only be performed accurately if some
 conditions are met.

-- This can be done only if the coefficients of the expansion are estimated
+- This can only be done if the coefficients of the expansion are estimated
   using least squares regression: if the expansion is computed from
   integration, then an exception is produced.
-- This can be done only if the coefficients of the expansion are estimated
+- This can only be done if the coefficients of the expansion are estimated
   using full expansion, without model selection: if the expansion is
   computed with model selection, then an exception is produced by default.
   This is because model selection leads to supposedly improved
   coefficients, so that the hypotheses required to estimate the
   mean squared error using the cross-validation method are not satisfied anymore.
-  As a consequence, using the analytical formula without taking into
-  account for the model selection leads to a biased, optimistic, mean squared
+  As a consequence, using the analytical formula without taking model selection into
+  account leads to a biased, overly optimistic mean squared
   error.
   More precisely, the analytical formula produces a MSE which is lower than the true one on average.

@@ -69,7 +69,7 @@
 trained using the hold-out sample where the :math:`i`-th observation was removed.
 This produces a sample of residuals which can be retrieved using the
 :class:`~openturns.experimental.FunctionalChaosValidation.getResidualSample` method.
-The :class:`~openturns.experimental.FunctionalChaosValidation.drawValidation` performs
+The :class:`~openturns.experimental.FunctionalChaosValidation.drawValidation` method performs
 similarly.

 If the weights of the observations are not equal, the analytical method
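The "fast methods" this patch refers to rest on the classical leave-one-out identity for linear least squares: the hold-out residual for observation :math:`i` equals the full-sample residual divided by :math:`1 - h_{ii}`, where :math:`h_{ii}` is the :math:`i`-th diagonal entry of the hat matrix, so no refitting is needed. The sketch below is a numpy illustration of that identity only, not the OpenTURNS API; all variable names are mine, and the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 4
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ beta + 0.1 * rng.normal(size=n)

# Fit ordinary least squares once on the full sample.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ coef

# Hat matrix H = X (X^T X)^{-1} X^T; its diagonal holds the leverages h_ii.
H = X @ np.linalg.solve(X.T @ X, X.T)
leverage = np.diag(H)

# Analytical leave-one-out residuals: r_i / (1 - h_ii), with no refitting.
loo_fast = residuals / (1.0 - leverage)

# Brute-force check: refit n times, each time holding out one observation.
loo_slow = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    c, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    loo_slow[i] = y[i] - X[i] @ c

assert np.allclose(loo_fast, loo_slow)
mse_loo = np.mean(loo_fast**2)
```

Note that the identity holds only for a fixed basis fitted by least squares, which is exactly why the patched docstring raises an exception for expansions built by integration or with model selection: in those cases the single-fit residuals and leverages no longer describe the hold-out fits.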