diff --git a/17_GP_priors.ipynb b/17_GP_priors.ipynb
index 368d1df..f972216 100644
--- a/17_GP_priors.ipynb
+++ b/17_GP_priors.ipynb
@@ -11,7 +11,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Nonparamteric models\n",
+ "## Nonparametric models\n",
"\n",
"So far in this course, all the models we have built were regressions, working in the supervised learning setting and using parametric models. We tried to describe functions with unknown parameters using Bayesian formalism.\n",
"\n",
@@ -61,7 +61,7 @@
"\n",
"This was a preview. We will return to the formal definition once again later in this lecture.\n",
"\n",
- "Now let's build the inredients which we need to understand Gaussian processes step by step."
+ "Now let's build the ingredients which we need to understand Gaussian processes step by step."
]
},
{
@@ -72,10 +72,10 @@
"\n",
"### Univariate Normal distribution\n",
"```{margin}\n",
- "In the chapter about distributions we used notation $X$ for a random variable and $x$ for its values. Here we will use $y$ instead. It will soom become clear soon why we need to switch, and why we need to reserve $x$ for something else.\n",
+ "In the chapter about distributions we used notation $X$ for a random variable and $x$ for its values. Here we will use $y$ instead. It will soon become clear why we need to switch, and why we need to reserve $x$ for something else.\n",
"```\n",
"\n",
- "Recall from previous chapters, the univaritae normal distribution has PDF\n",
+ "Recall from previous chapters, the univariate normal distribution has PDF\n",
"\n",
"$$\n",
"\\mathcal{N}(y \\mid \\mu, \\sigma) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\exp\\left(-\\frac{(y - \\mu)^2}{2\\sigma^2}\\right).\n",
@@ -436,7 +436,7 @@
"\n",
"A kernel $k: \\mathcal{X} \\times \\mathcal{X} \\to \\mathbb{R}$ is positive semi-definite, if for any finite collection $x= (x_1, ..., x_d)$ the matrix $k_{xx}$ with $[k_{xx}]_{ij}=k(x_i, x_j)$ is positive semi-definite.\n",
"\n",
- "A symmetric matrix $A \\in \\mathbb{R}^{N \\times N}$ is called positve semi-definite if\n",
+ "A symmetric matrix $A \\in \\mathbb{R}^{N \\times N}$ is called positive semi-definite if\n",
"\n",
"$$\n",
"v^T A v \\ge 0\n",
@@ -444,7 +444,21 @@
"\n",
"for any $v \\in \\mathbb{R}^d.$\n",
"\n",
- "Kernel functions $k(x, x′)$ encode our prior beliefs about data-generating latent functions. These typically include continuity, smomothness (differentialbility),periodicity, stationarity, and so on.\n",
+ "Kernel functions $k(x, x′)$ encode our prior beliefs about data-generating latent functions. These typically include continuity, smoothness (differentiability),periodicity, stationarity, and so on.\n",
"\n",
"Covariance functions typically have hyperparameters that we aim to learn from data.\n",
"\n",
diff --git a/18_GP_inference.ipynb b/18_GP_inference.ipynb
index c9fff6c..f834300 100644
--- a/18_GP_inference.ipynb
+++ b/18_GP_inference.ipynb
@@ -17,7 +17,7 @@
"\\end{align*}\n",
"$$\n",
"\n",
- "Here $\\{(x_i, y_i)\\}_{i=1}^n$ are paris of observations $y_i$, and locations of those observations $x_i$. The role of $f(x)$ now is to serve as a latent field capturing dependencies between locations $x$. The expression $\\Pi_i p(y_i \\vert f(x_i))$ provides a likliehood, allowing us to link observed data to the model and enabling parameter inference.\n",
+ "Here $\\{(x_i, y_i)\\}_{i=1}^n$ are pairs of observations $y_i$, and locations of those observations $x_i$. The role of $f(x)$ now is to serve as a latent field capturing dependencies between locations $x$. The expression $\\Pi_i p(y_i \\vert f(x_i))$ provides a likelihood, allowing us to link observed data to the model and enabling parameter inference.\n",
"\n",
"```{margin}\n",
"The task of predicting at unobserved locations is often referred to as **kriging** and is the underlying concenpt in spatial statistics. We will talk about it in the next chapter.\n",
@@ -26,7 +26,7 @@
"\n",
"## Gaussian process regression\n",
"\n",
- "The simplest case of the setting described above is the Gaussian process regression where the outcome variable is modelled as a GP with added noise $\\epsilon$. It assumes that the data consists of pairs $\\{(x_i, y_i)\\}_{i=1}^n$ and the likeihood is Gaussian with variance $\\sigma^2_\\epsilon:$\n",
+ "The simplest case of the setting described above is the Gaussian process regression where the outcome variable is modelled as a GP with added noise $\\epsilon$. It assumes that the data consists of pairs $\\{(x_i, y_i)\\}_{i=1}^n$ and the likelihood is Gaussian with variance $\\sigma^2_\\epsilon:$\n",
"\n",
"$$\n",
"\\begin{align*}\n",
@@ -50,7 +50,7 @@
"then \n",
"\n",
"```{margin}\n",
- "Notice how `K_{mn} K_{nn}^{-1}` repeatedly participates in computations. It is often convenient to precomput this matrix.\n",
+ "Notice how `K_{mn} K_{nn}^{-1}` repeatedly participates in computations. It is often convenient to precompute this matrix.\n",
"```\n",
"$$\n",
"\\begin{align*}\n",
@@ -905,7 +905,28 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "We have inferred the variance parameter successfully. Estimating lengthscale, espetially for less smooth kernels is harder. One more issue is the non-identifiability of the pair lengthscale-variance. Hence, if they both really *need* to be inferred, strong priors would be beneficial."
+ "We have inferred the variance parameter successfully. Estimating lengthscale, especially for less smooth kernels is harder. One more issue is the non-identifiability of the pair lengthscale-variance. Hence, if they both really *need* to be inferred, strong priors would be beneficial."
]
},
{
diff --git a/19_geostatistics.ipynb b/19_geostatistics.ipynb
index 900f5ba..3a9600c 100644
--- a/19_geostatistics.ipynb
+++ b/19_geostatistics.ipynb
@@ -35,7 +35,26 @@
"\n",
"Geostatistics is the subarea of spatial statstistics which works with geostatstitical data. It finds applications in various fields such as natural resource exploration (e.g., oil and gas reserves estimation), environmental monitoring (e.g., air and water quality assessment), agriculture (e.g., soil fertility mapping), and urban planning (e.g., land use analysis) and, of course, epidemiology (e.g., disease mapping). It offers powerful tools for spatial data analysis, decision-making, and resource management in both scientific research and practical applications.\n",
"\n",
- " Kriging is a statistical interpolation technique used primarily in geostatistics. It is named after South African mining engineer DanielG. Krige, who developed the method in the 1950s. Kriging is employed to estimate the value of a variable at an unmeasured locationbased on the values observed at nearby locations.\n",
+ " Kriging is a statistical interpolation technique used primarily in geostatistics. It is named after South African mining engineer DanielG. Krige, who developed the method in the 1950s. Kriging is employed to estimate the value of a variable at an unmeasured location based on the values observed at nearby locations.\n",
"\n",
"The basic idea behind kriging is to model the spatial correlation or spatial autocorrelation of the variable being studied. This means that kriging considers the spatial structure of the data. It assumes that nearby points are more similar than those farther away and uses this information to make predictions.\n",
"\n",
diff --git a/23_ID_modelling.ipynb b/23_ID_modelling.ipynb
index a2fa1b3..6af1912 100644
--- a/23_ID_modelling.ipynb
+++ b/23_ID_modelling.ipynb
@@ -36,7 +36,23 @@
"- Look at data from an outbreak so far, \n",
"- Construct a model that represents the underlying epidemiology of the system, \n",
"- Perform simulations of the model *forward* to *predict* what might likely happen in the future (how many cases are we expecting to see per day?), \n",
- "- Introduce *control intervnetions* into the model to look at how different control intervention might effect the outcomes we would like to see in the future.\n",
+ "- Introduce *control interventions* into the model to look at how different control intervention might effect the outcomes we would like to see in the future.\n",
"\n",
"## Key questions\n",
"\n",
diff --git a/24_other.ipynb b/24_other.ipynb
index 6023e07..f1ba6d1 100644
--- a/24_other.ipynb
+++ b/24_other.ipynb
@@ -14,11 +14,11 @@
"- Kronecker product\n",
"- Log-Gaussian Cox process implementation (LGCP)\n",
"- Renewal equation\n",
- "- Hilbert Space Gaussian Porcess approximation (HSGP)\n",
+ "- Hilbert Space Gaussian Process approximation (HSGP)\n",
"- Active learning\n",
"- Variational autoencoders (VAEs)\n",
"\n",
- "Two more interetsing topic which we don't have time for:\n",
+ "Two more interesting topic which we don't have time for:\n",
"\n",
"- Variational inference\n",
"- Bayesian neural networks"