From 910e1531f3e6400677e09f0a0eed3eb79eddce74 Mon Sep 17 00:00:00 2001
From: Michael Baudin <31351465+mbaudin47@users.noreply.github.com>
Date: Sun, 16 Jun 2024 14:44:59 +0200
Subject: [PATCH] Apply suggestions from code review

Co-authored-by: josephmure <31989332+josephmure@users.noreply.github.com>
---
 .../doc/theory/meta_modeling/cross_validation.rst |  4 ++--
 python/src/FunctionalChaosValidation_doc.i.in     | 14 +++++++-------
 python/test/t_LinearModelValidation_std.py        |  4 ++--
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/python/doc/theory/meta_modeling/cross_validation.rst b/python/doc/theory/meta_modeling/cross_validation.rst
index d8adcd16e2..d346aaa680 100644
--- a/python/doc/theory/meta_modeling/cross_validation.rst
+++ b/python/doc/theory/meta_modeling/cross_validation.rst
@@ -634,8 +634,8 @@ The generic cross-validation method can be implemented using the following class
 
 - :class:`~openturns.KFoldSplitter`: uses the K-Fold method to split the data set.
 
-Since the :class:`~openturns.LinearModelResult` is based on linear least
-squares, fast methods are implemented in the :class:`~openturns.experimental.LinearModelValidation`.
+Since :class:`~openturns.LinearModelResult` is based on linear least
+squares, fast methods are implemented in :class:`~openturns.experimental.LinearModelValidation`.
 
 See :ref:`pce_cross_validation` and :class:`~openturns.experimental.FunctionalChaosValidation`
 for specific methods for the the cross-validation of a polynomial chaos expansion.
diff --git a/python/src/FunctionalChaosValidation_doc.i.in b/python/src/FunctionalChaosValidation_doc.i.in
index 07b574fdb3..153d6157ee 100644
--- a/python/src/FunctionalChaosValidation_doc.i.in
+++ b/python/src/FunctionalChaosValidation_doc.i.in
@@ -9,7 +9,7 @@ Parameters
 ----------
 result : :class:`~openturns.FunctionalChaosResult`
-    A functional chaos result resulting from a polynomial chaos expansion.
+    A functional chaos result obtained from a polynomial chaos expansion.
 splitter : :class:`~openturns.SplitterImplementation`, optional
     The cross-validation method.
 
@@ -28,17 +28,17 @@ cross-validation methods presented in :ref:`pce_cross_validation`.
 Analytical cross-validation can only be performed accurately if some
 conditions are met.
 
-- This can be done only if the coefficients of the expansion are estimated
+- This can only be done if the coefficients of the expansion are estimated
   using least squares regression: if the expansion is computed from
   integration, then an exception is produced.
-- This can be done only if the coefficients of the expansion are estimated
+- This can only be done if the coefficients of the expansion are estimated
   using full expansion, without model selection: if the expansion
   is computed with model selection, then an exception is produced by default.
   This is because model selection leads to supposedly improved coefficients,
   so that the hypotheses required to estimate the mean squared error
   using the cross-validation method are not satisfied anymore.
-  As a consequence, using the analytical formula without taking into
-  account for the model selection leads to a biased, optimistic, mean squared
+  As a consequence, using the analytical formula without taking model selection into
+  account leads to a biased, overly optimistic, mean squared
   error.
   More precisely, the analytical formula produces a MSE which is lower
   than the true one on average.
@@ -72,8 +72,8 @@ the :math:`i`-th prediction is the prediction of the linear model
 trained using the hold-out sample where the :math:`i`-th observation
 was removed.
 This produces a sample of residuals which can be retrieved using
-the :class:`~openturns.experimental.FunctionalChaosValidation.getResidualSample` method.
-The :class:`~openturns.experimental.FunctionalChaosValidation.drawValidation` performs
+the :meth:`~openturns.experimental.FunctionalChaosValidation.getResidualSample` method.
+The :meth:`~openturns.experimental.FunctionalChaosValidation.drawValidation` method performs
 similarly.
 
 If the weights of the observations are not equal, the analytical method
diff --git a/python/test/t_LinearModelValidation_std.py b/python/test/t_LinearModelValidation_std.py
index 4e7f503f40..92d2a10b90 100644
--- a/python/test/t_LinearModelValidation_std.py
+++ b/python/test/t_LinearModelValidation_std.py
@@ -10,8 +10,8 @@
 kFoldParameter = 4
 foldRootSize = 3
 # Makes so that k does not divide the sample size.
-# In this case, we must take into account for the different weight of
-# each fold.
+# In this case, we must take the different weights
+# of each fold into account.
 sampleSize = foldRootSize * kFoldParameter + 1
 print("sampleSize = ", sampleSize)
 aCollection = []