Merge pull request #21 from iamhaingo/main
Fix typos
elizavetasemenova authored Apr 11, 2024
2 parents 2358e82 + cc612b0 commit 7f6942c
Showing 5 changed files with 14 additions and 14 deletions.
17_GP_priors.ipynb (12 changes: 6 additions & 6 deletions)
@@ -11,7 +11,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Nonparamteric models\n",
"## Nonparametric models\n",
"\n",
"So far in this course, all the models we have built were regressions, working in the supervised learning setting and using parametric models. We tried to describe functions with unknown parameters using Bayesian formalism.\n",
"\n",
@@ -61,7 +61,7 @@
"\n",
"This was a preview. We will return to the formal definition once again later in this lecture.\n",
"\n",
"Now let's build the inredients which we need to understand Gaussian processes step by step."
"Now let's build the ingredients which we need to understand Gaussian processes step by step."
]
},
{
@@ -72,10 +72,10 @@
"\n",
"### Univariate Normal distribution\n",
"```{margin}\n",
"In the chapter about distributions we used notation $X$ for a random variable and $x$ for its values. Here we will use $y$ instead. It will soom become clear soon why we need to switch, and why we need to reserve $x$ for something else.\n",
"In the chapter about distributions we used notation $X$ for a random variable and $x$ for its values. Here we will use $y$ instead. It will soon become clear why we need to switch, and why we need to reserve $x$ for something else.\n",
"```\n",
"\n",
"Recall from previous chapters, the univaritae normal distribution has PDF\n",
"Recall from previous chapters, the univariate normal distribution has PDF\n",
"\n",
"$$\n",
"\\mathcal{N}(y \\mid \\mu, \\sigma) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\exp\\left(-\\frac{(y - \\mu)^2}{2\\sigma^2}\\right).\n",
@@ -436,15 +436,15 @@
"\n",
"A kernel $k: \\mathcal{X} \\times \\mathcal{X} \\to \\mathbb{R}$ is positive semi-definite, if for any finite collection $x= (x_1, ..., x_d)$ the matrix $k_{xx}$ with $[k_{xx}]_{ij}=k(x_i, x_j)$ is positive semi-definite.\n",
"\n",
"A symmetric matrix $A \\in \\mathbb{R}^{N \\times N}$ is called <font color='orange'>positve semi-definite</font> if\n",
"A symmetric matrix $A \\in \\mathbb{R}^{N \\times N}$ is called <font color='orange'>positive semi-definite</font> if\n",
"\n",
"$$\n",
"v^T A v \\ge 0\n",
"$$ \n",
"\n",
"for any $v \\in \\mathbb{R}^d.$\n",
"\n",
"Kernel functions $k(x, x′)$ encode our prior beliefs about data-generating latent functions. These typically include continuity, smomothness (differentialbility),periodicity, stationarity, and so on.\n",
"Kernel functions $k(x, x′)$ encode our prior beliefs about data-generating latent functions. These typically include continuity, smoothness (differentiability),periodicity, stationarity, and so on.\n",
"\n",
"Covariance functions typically have <font color='orange'>hyperparameters</font> that we aim to learn from data.\n",
"\n",
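A minimal sketch of the positive semi-definiteness property discussed in the hunk above, assuming numpy and an illustrative squared-exponential kernel (the kernel choice, its hyperparameter names, and the inputs are not taken from the notebook):

```python
import numpy as np

def rbf_kernel(x1, x2, variance=1.0, lengthscale=1.0):
    # Squared-exponential kernel k(x, x') = variance * exp(-(x - x')^2 / (2 * lengthscale^2)).
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

x = np.linspace(0.0, 5.0, 20)
K = rbf_kernel(x, x)

# v^T K v >= 0 for all v is equivalent to all eigenvalues of K being non-negative
# (allowing for floating-point error).
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-10)  # True
```

Here `variance` and `lengthscale` stand in for the kernel hyperparameters mentioned in the cell; any other positive semi-definite kernel would do.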
18_GP_inference.ipynb (8 changes: 4 additions & 4 deletions)
@@ -17,7 +17,7 @@
"\\end{align*}\n",
"$$\n",
"\n",
"Here $\\{(x_i, y_i)\\}_{i=1}^n$ are paris of observations $y_i$, and locations of those observations $x_i$. The role of $f(x)$ now is to serve as a <font color='orange'>latent field</font> capturing dependencies between locations $x$. The expression $\\Pi_i p(y_i \\vert f(x_i))$ provides a likliehood, allowing us to link observed data to the model and enabling parameter inference.\n",
"Here $\\{(x_i, y_i)\\}_{i=1}^n$ are pairs of observations $y_i$, and locations of those observations $x_i$. The role of $f(x)$ now is to serve as a <font color='orange'>latent field</font> capturing dependencies between locations $x$. The expression $\\Pi_i p(y_i \\vert f(x_i))$ provides a likelihood, allowing us to link observed data to the model and enabling parameter inference.\n",
"\n",
"```{margin}\n",
"The task of predicting at unobserved locations is often referred to as **kriging** and is the underlying concenpt in spatial statistics. We will talk about it in the next chapter.\n",
@@ -26,7 +26,7 @@
"\n",
"## Gaussian process regression\n",
"\n",
"The simplest case of the setting described above is the Gaussian process regression where the outcome variable is modelled as a GP with added noise $\\epsilon$. It assumes that the data consists of pairs $\\{(x_i, y_i)\\}_{i=1}^n$ and the likeihood is Gaussian with variance $\\sigma^2_\\epsilon:$\n",
"The simplest case of the setting described above is the Gaussian process regression where the outcome variable is modelled as a GP with added noise $\\epsilon$. It assumes that the data consists of pairs $\\{(x_i, y_i)\\}_{i=1}^n$ and the likelihood is Gaussian with variance $\\sigma^2_\\epsilon:$\n",
"\n",
"$$\n",
"\\begin{align*}\n",
@@ -50,7 +50,7 @@
"then \n",
"\n",
"```{margin}\n",
"Notice how `K_{mn} K_{nn}^{-1}` repeatedly participates in computations. It is often convenient to precomput this matrix.\n",
"Notice how `K_{mn} K_{nn}^{-1}` repeatedly participates in computations. It is often convenient to precompute this matrix.\n",
"```\n",
"$$\n",
"\\begin{align*}\n",
@@ -905,7 +905,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We have inferred the variance parameter successfully. Estimating lengthscale, espetially for less smooth kernels is harder. One more issue is the non-identifiability of the pair lengthscale-variance. Hence, if they both really *need* to be inferred, strong priors would be beneficial."
"We have inferred the variance parameter successfully. Estimating lengthscale, especially for less smooth kernels is harder. One more issue is the non-identifiability of the pair lengthscale-variance. Hence, if they both really *need* to be inferred, strong priors would be beneficial."
]
},
{
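A minimal sketch of the Gaussian process regression prediction these hunks describe, assuming numpy, a squared-exponential kernel, and made-up training data; it precomputes `K_{mn} K_{nn}^{-1}` once, as the margin note suggests:

```python
import numpy as np

def rbf_kernel(x1, x2, variance=1.0, lengthscale=1.0):
    # Squared-exponential kernel; an assumed choice, not specified in the excerpt.
    return variance * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / lengthscale**2)

rng = np.random.default_rng(0)
x_n = np.sort(rng.uniform(0.0, 5.0, 30))        # observed locations (illustrative)
y_n = np.sin(x_n) + 0.1 * rng.normal(size=30)   # noisy observations (illustrative)
x_m = np.linspace(0.0, 5.0, 100)                # prediction locations
sigma_eps = 0.1                                 # assumed noise standard deviation

K_nn = rbf_kernel(x_n, x_n) + sigma_eps**2 * np.eye(len(x_n))
K_mn = rbf_kernel(x_m, x_n)
K_mm = rbf_kernel(x_m, x_m)

# Precompute A = K_mn K_nn^{-1} once; a linear solve is preferred to an explicit inverse.
A = np.linalg.solve(K_nn, K_mn.T).T
post_mean = A @ y_n            # predictive mean at x_m
post_cov = K_mm - A @ K_mn.T   # predictive covariance at x_m
```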
19_geostatistics.ipynb (2 changes: 1 addition & 1 deletion)
@@ -35,7 +35,7 @@
"\n",
"Geostatistics is the subarea of spatial statstistics which works with geostatstitical data. It finds applications in various fields such as natural resource exploration (e.g., oil and gas reserves estimation), environmental monitoring (e.g., air and water quality assessment), agriculture (e.g., soil fertility mapping), and urban planning (e.g., land use analysis) and, of course, epidemiology (e.g., disease mapping). It offers powerful tools for spatial data analysis, decision-making, and resource management in both scientific research and practical applications.\n",
"\n",
" <span style=\"color:orange\">Kriging</span> is a statistical interpolation technique used primarily in geostatistics. It is named after South African mining engineer DanielG. Krige, who developed the method in the 1950s. Kriging is employed to estimate the value of a variable at an <span style=\"color:orange\">unmeasured location</span>based on the values <span style=\"color:orange\">observed</span> at nearby locations.\n",
" <span style=\"color:orange\">Kriging</span> is a statistical interpolation technique used primarily in geostatistics. It is named after South African mining engineer DanielG. Krige, who developed the method in the 1950s. Kriging is employed to estimate the value of a variable at an <span style=\"color:orange\">unmeasured location</span> based on the values <span style=\"color:orange\">observed</span> at nearby locations.\n",
"\n",
"The basic idea behind kriging is to model the spatial correlation or spatial autocorrelation of the variable being studied. This means that kriging considers the spatial structure of the data. It assumes that nearby points are more similar than those farther away and uses this information to make predictions.\n",
"\n",
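A minimal sketch of the interpolation idea described above (simple kriging with a known mean), assuming numpy and an illustrative exponential covariance model; the locations, values, and covariance parameters are made up rather than taken from the notebook:

```python
import numpy as np

def exp_cov(d, sill=1.0, range_=1.0):
    # Exponential covariance model C(d) = sill * exp(-d / range_); an assumed choice.
    return sill * np.exp(-d / range_)

# Observed locations, observed values, and the target location are illustrative.
locs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
vals = np.array([1.2, 0.8, 1.0, 0.3])
target = np.array([0.5, 0.5])
mean = vals.mean()  # stand-in for the known mean of simple kriging

C = exp_cov(np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=-1))
c0 = exp_cov(np.linalg.norm(locs - target, axis=-1))

# Kriging weights: nearby observations get larger weights, reflecting spatial autocorrelation.
weights = np.linalg.solve(C, c0)
prediction = mean + weights @ (vals - mean)
```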
23_ID_modelling.ipynb (2 changes: 1 addition & 1 deletion)
@@ -36,7 +36,7 @@
"- Look at data from an outbreak so far, \n",
"- Construct a model that represents the underlying epidemiology of the system, \n",
"- Perform simulations of the model *forward* to *predict* what might likely happen in the future (how many cases are we expecting to see per day?), \n",
"- Introduce *control intervnetions* into the model to look at how different control intervention might effect the outcomes we would like to see in the future.\n",
"- Introduce *control interventions* into the model to look at how different control intervention might effect the outcomes we would like to see in the future.\n",
"\n",
"## Key questions\n",
"\n",
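A minimal sketch of the workflow in the hunk above — simulate forward, then rerun with a control intervention — using an assumed discrete-time SIR model with illustrative parameter values (none of these choices come from the notebook):

```python
import numpy as np

def simulate_sir(beta, gamma=0.1, s0=0.99, i0=0.01, days=120,
                 intervention_day=None, transmission_reduction=0.5):
    # Discrete-time SIR; the intervention scales transmission down after intervention_day.
    s, i, r = s0, i0, 0.0
    infectious = []
    for t in range(days):
        b = beta
        if intervention_day is not None and t >= intervention_day:
            b = beta * (1.0 - transmission_reduction)
        new_infections = b * s * i
        new_recoveries = gamma * i
        s, i, r = s - new_infections, i + new_infections - new_recoveries, r + new_recoveries
        infectious.append(i)
    return np.array(infectious)

baseline = simulate_sir(beta=0.3)
with_control = simulate_sir(beta=0.3, intervention_day=30)
print(baseline.max(), with_control.max())  # the intervention should lower the epidemic peak
```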
24_other.ipynb (4 changes: 2 additions & 2 deletions)
@@ -14,11 +14,11 @@
"- Kronecker product\n",
"- Log-Gaussian Cox process implementation (LGCP)\n",
"- Renewal equation\n",
"- Hilbert Space Gaussian Porcess approximation (HSGP)\n",
"- Hilbert Space Gaussian Process approximation (HSGP)\n",
"- Active learning\n",
"- Variational autoencoders (VAEs)\n",
"\n",
"Two more interetsing topic which we don't have time for:\n",
"Two more interesting topic which we don't have time for:\n",
"\n",
"- Variational inference\n",
"- Bayesian neural networks"
