Fix minor typos
NeoKish committed Nov 24, 2023
1 parent 2e99128 commit 66d2420
Showing 5 changed files with 8 additions and 8 deletions.
docs/how_it_works/data_reconstruction.rst (2 additions, 2 deletions)
@@ -91,9 +91,9 @@ model input data and ignores any random noise that is usually present.
 
 The third step is decompressing the data we just compressed.
 This is done using the inverse PCA transformation which transforms the data from latent space
-back to the prepocessed model input space.
+back to the preprocessed model input space.
 
-Then, the euclidean distance between the original data points and their re-cosntructed counterparts
+Then, the euclidean distance between the original data points and their re-constructed counterparts
 is computed. The resulting distances are then aggregated to get their average. The resulting
 number is called :term:`Reconstruction Error`.
 
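The context of this hunk describes the full Reconstruction Error computation: compress the preprocessed inputs with PCA, decompress them with the inverse transformation, take the per-row euclidean distance, and average. A minimal sketch of that sequence, assuming scikit-learn's PCA (function and variable names are hypothetical, not NannyML's internals):

    import numpy as np
    from sklearn.decomposition import PCA

    def reconstruction_error(X_reference, X_analysis, n_components=2):
        # Fit PCA on the (preprocessed) reference data, then compress
        # the analysis data into the latent space.
        pca = PCA(n_components=n_components).fit(X_reference)
        latent = pca.transform(X_analysis)
        # Decompress: inverse transformation back to the preprocessed input space.
        reconstructed = pca.inverse_transform(latent)
        # Per-row euclidean distance, averaged -> reconstruction error.
        return np.linalg.norm(X_analysis - reconstructed, axis=1).mean()
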
docs/how_it_works/estimation_of_standard_error.rst (3 additions, 3 deletions)
@@ -187,13 +187,13 @@ Average
 -------
 
 The :ref:`sum_stats_avg` standard error calculations are an application of what
-we disccused at :ref:`introducing_sem`.
+we discussed at :ref:`introducing_sem`.
 
 
 Summation
 ---------
 
-The :ref:`sum_stats_sum` standard error calculations are also straghtfoward.
+The :ref:`sum_stats_sum` standard error calculations are also straightforward.
 Through a simple application of error propagation:
 
 .. math::
@@ -234,7 +234,7 @@ The standard error of the median asymptotically tends towards:
         \frac{1}{4nf^2(m)}
     }
 
-where :math:`f` is the probability density function of the random variable in quetion and
+where :math:`f` is the probability density function of the random variable in question and
 :math:`f(m)` is its value at the estimated median value.
 
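Taken together, this file covers three standard errors: the standard error of the mean, error propagation for a sum, and the asymptotic median formula shown in the hunk above. A hedged NumPy/SciPy sketch of all three (illustrative only; the sum's exact formula is collapsed out of this diff, so sqrt(n) times the sample standard deviation is assumed here for n independent, identically distributed values, and the density f is estimated with a KDE):

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    x = rng.normal(size=1_000)
    n, s = x.size, x.std(ddof=1)

    sem = s / np.sqrt(n)     # standard error of the mean (Average)
    se_sum = np.sqrt(n) * s  # assumed error propagation for a sum of n i.i.d. values

    # Median: SE tends towards sqrt(1 / (4 * n * f(m)^2)); f estimated by a KDE.
    m = np.median(x)
    f_m = gaussian_kde(x)(m)[0]
    se_median = np.sqrt(1.0 / (4.0 * n * f_m ** 2))
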
docs/how_it_works/performance_estimation.rst (1 addition, 1 deletion)
@@ -573,7 +573,7 @@ Just like :class:`~nannyml.performance_estimation.confidence_based.cbpe.CBPE`, i
 While dealing well with covariate shift, DLE will not work under :term:`concept drift`.
 This shouldn't happen when the :term:`child model` has access to all the variables affecting the outcome and
 the problem is stationary. An example of a stationary model would be forecasting energy demand for heating
-purposes. Since the phyiscal laws underpinning the problem are the same, energy demand based on outside temperature
+purposes. Since the physical laws underpinning the problem are the same, energy demand based on outside temperature
 should stay the same. However if energy prices became too high and people decide to heat their houses less
 because they couldn't pay, then our model would experience concept drift.
 
docs/how_it_works/ranking.rst (1 addition, 1 deletion)
@@ -44,7 +44,7 @@ the average performance during the reference period. This value is saved at the
 
 Then we proceed with the :meth:`~nannyml.drift.ranking.CorrelationRanking.rank` method where we provide
 the chosen univariate drift and performance results. The performance results are preprocessed
-in order to caclulate the absolute difference of observed performance values with the mean performance
+in order to calculate the absolute difference of observed performance values with the mean performance
 on reference. We can see how this transformation affects the performance values below:
 
 .. nbimport::
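
The preprocessing described in this hunk is a one-line transformation: the absolute difference between each observed performance value and the mean performance on reference. A sketch with hypothetical names and values:

    import numpy as np

    mean_reference_performance = 0.93                    # stored during fit()
    observed_performance = np.array([0.92, 0.88, 0.95])  # one value per chunk
    abs_performance_change = np.abs(observed_performance - mean_reference_performance)
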
docs/how_it_works/univariate_drift_comparison.rst (1 addition, 1 deletion)
@@ -299,7 +299,7 @@ The interesting things to note in this experiment compared to the previous one a
 
 * Jensen-Shannon is less sensitive when a category disappears compared to when a new category appears,
 
-* Hellinger distance behaves the same when a catgory disappears compared to when a new category appears,
+* Hellinger distance behaves the same when a category disappears compared to when a new category appears,
 
 * Chi-square grows linearly when the new category increases its relative frequency but it grows faster when a
 category disappears.
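
These bullets compare how Jensen-Shannon, Hellinger, and chi-square react when categories appear or disappear. A sketch of how each measure can be computed between two categorical frequency vectors (the numbers are illustrative, and this is not necessarily how NannyML computes them; chi-square in particular is usually computed on counts rather than relative frequencies):

    import numpy as np
    from scipy.spatial.distance import jensenshannon

    p = np.array([0.5, 0.3, 0.2, 0.0])  # reference frequencies
    q = np.array([0.4, 0.3, 0.2, 0.1])  # analysis frequencies, with a new category

    js = jensenshannon(p, q, base=2)
    hellinger = np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) / np.sqrt(2)
    mask = p > 0  # guard against dividing by a zero reference frequency
    chi2 = np.sum((q[mask] - p[mask]) ** 2 / p[mask])
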
