From 66d24204e79401a99a0ae348a54565764e45fca8 Mon Sep 17 00:00:00 2001
From: Kishan Savant
Date: Fri, 24 Nov 2023 23:47:05 +0530
Subject: [PATCH] Fix minor typos

---
 docs/how_it_works/data_reconstruction.rst          | 4 ++--
 docs/how_it_works/estimation_of_standard_error.rst | 6 +++---
 docs/how_it_works/performance_estimation.rst       | 2 +-
 docs/how_it_works/ranking.rst                      | 2 +-
 docs/how_it_works/univariate_drift_comparison.rst  | 2 +-
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/docs/how_it_works/data_reconstruction.rst b/docs/how_it_works/data_reconstruction.rst
index a291d2fe4..4c2056407 100644
--- a/docs/how_it_works/data_reconstruction.rst
+++ b/docs/how_it_works/data_reconstruction.rst
@@ -91,9 +91,9 @@ model input data and ignores any random noise that is usually present.
 
 The third step is decompressing the data we just compressed. This is done
 using the inverse PCA transformation which transforms the data from latent space
-back to the prepocessed model input space.
+back to the preprocessed model input space.
 
-Then, the euclidean distance between the original data points and their re-cosntructed counterparts
+Then, the euclidean distance between the original data points and their re-constructed counterparts
 is computed. The resulting distances are then aggregated to get their average.
 The resulting number is called :term:`Reconstruction Error`.
 
diff --git a/docs/how_it_works/estimation_of_standard_error.rst b/docs/how_it_works/estimation_of_standard_error.rst
index 00aebe1a5..548eaf0a1 100644
--- a/docs/how_it_works/estimation_of_standard_error.rst
+++ b/docs/how_it_works/estimation_of_standard_error.rst
@@ -187,13 +187,13 @@ Average
 -------
 
 The :ref:`sum_stats_avg` standard error calculations are an application of what
-we disccused at :ref:`introducing_sem`.
+we discussed at :ref:`introducing_sem`.
 
 Summation
 ---------
 
-The :ref:`sum_stats_sum` standard error calculations are also straghtfoward.
+The :ref:`sum_stats_sum` standard error calculations are also straightforward.
 Through a simple application of error propagation:
 
 .. math::
@@ -234,7 +234,7 @@ The standard error of the median asymptotically tends towards:
 .. math::
     \sqrt{ \frac{1}{4nf^2(m)} }
 
-where :math:`f` is the probability density function of the random variable in quetion and
+where :math:`f` is the probability density function of the random variable in question and
 :math:`f(m)` is its value at the estimated median value.
 
diff --git a/docs/how_it_works/performance_estimation.rst b/docs/how_it_works/performance_estimation.rst
index c171851a0..96cb4ae8a 100644
--- a/docs/how_it_works/performance_estimation.rst
+++ b/docs/how_it_works/performance_estimation.rst
@@ -573,7 +573,7 @@ Just like :class:`~nannyml.performance_estimation.confidence_based.cbpe.CBPE`, i
 While dealing well with covariate shift, DLE will not work under :term:`concept drift`. This shouldn't
 happen when the :term:`child model` has access to all the variables affecting the outcome and the problem
 is stationary. An example of a stationary model would be forecasting energy demand for heating
-purposes. Since the phyiscal laws underpinning the problem are the same, energy demand based on outside temperature
+purposes. Since the physical laws underpinning the problem are the same, energy demand based on outside temperature
 should stay the same. However if energy prices became too high and people decide to heat their houses
 less because they couldn't pay, then our model would experience concept drift.
 
diff --git a/docs/how_it_works/ranking.rst b/docs/how_it_works/ranking.rst
index 0de8ab4ce..11a50c852 100644
--- a/docs/how_it_works/ranking.rst
+++ b/docs/how_it_works/ranking.rst
@@ -44,7 +44,7 @@ the average performance during the reference period. This value is saved at the
 Then we proceed with the :meth:`~nannyml.drift.ranking.CorrelationRanking.rank` method
 where we provide the chosen univariate drift and performance results.
 The performance results are preprocessed
-in order to caclulate the absolute difference of observed performance values with the mean performance
+in order to calculate the absolute difference of observed performance values with the mean performance
 on reference. We can see how this transformation affects the performance values below:
 
 .. nbimport::
diff --git a/docs/how_it_works/univariate_drift_comparison.rst b/docs/how_it_works/univariate_drift_comparison.rst
index db8dd48cd..d0af00c83 100644
--- a/docs/how_it_works/univariate_drift_comparison.rst
+++ b/docs/how_it_works/univariate_drift_comparison.rst
@@ -299,7 +299,7 @@ The interesting things to note in this experiment compared to the previous one a
 
   * Jensen-Shannon is less sensitive when a category disappears compared to when a new category appears,
 
-  * Hellinger distance behaves the same when a catgory disappears compared to when a new category appears,
+  * Hellinger distance behaves the same when a category disappears compared to when a new category appears,
 
   * Chi-square grows linearly when the new category increases its relative frequency but it grows faster
   when a category disappears.