Commit

fixing docs

NicholasCowie committed Mar 5, 2024
2 parents f42b3f5 + c6e9abd, commit a4931cb
Showing 13 changed files with 696 additions and 304 deletions.
Binary file added docs/img/dimension_plot.png
Binary file added docs/img/dimension_sphere_diagram.png
Binary file added docs/img/markov_chain.png
6 changes: 6 additions & 0 deletions docs/index.html
@@ -4764,6 +4764,12 @@
<a href="./week2.html" class="sidebar-item-text sidebar-link">
<span class="menu-text">MCMC and Stan</span></a>
</div>
+ </li>
+ <li class="sidebar-item">
+ <div class="sidebar-item-container">
+ <a href="./week3.html" class="sidebar-item-text sidebar-link">
+ <span class="menu-text">What to do after MCMC</span></a>
+ </div>
</li>
<li class="sidebar-item">
<div class="sidebar-item-container">
50 changes: 28 additions & 22 deletions docs/metropolis-hastings.html

Large diffs are not rendered by default.

270 changes: 179 additions & 91 deletions docs/search.json

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/site_libs/bootstrap/bootstrap.min.css

Large diffs are not rendered by default.

65 changes: 52 additions & 13 deletions docs/site_libs/quarto-search/quarto-search.js
@@ -675,6 +675,18 @@ function showCopyLink(query, options) {
// create the index
var fuseIndex = undefined;
var shownWarning = false;

// fuse index options
const kFuseIndexOptions = {
keys: [
{ name: "title", weight: 20 },
{ name: "section", weight: 20 },
{ name: "text", weight: 10 },
],
ignoreLocation: true,
threshold: 0.1,
};

async function readSearchData() {
// Initialize the search index on demand
if (fuseIndex === undefined) {
@@ -685,17 +697,7 @@ async function readSearchData() {
shownWarning = true;
return;
}
-     // create fuse index
-     const options = {
-       keys: [
-         { name: "title", weight: 20 },
-         { name: "section", weight: 20 },
-         { name: "text", weight: 10 },
-       ],
-       ignoreLocation: true,
-       threshold: 0.1,
-     };
-     const fuse = new window.Fuse([], options);
+     const fuse = new window.Fuse([], kFuseIndexOptions);

// fetch the main search.json
const response = await fetch(offsetURL("search.json"));
@@ -1226,8 +1228,34 @@ function algoliaSearch(query, limit, algoliaOptions) {
});
}

function fuseSearch(query, fuse, fuseOptions) {
return fuse.search(query, fuseOptions).map((result) => {
let subSearchTerm = undefined;
let subSearchFuse = undefined;
const kFuseMaxWait = 125;

async function fuseSearch(query, fuse, fuseOptions) {
let index = fuse;
// Fuse.js uses the Bitap algorithm for text matching, which runs in
// O(nm) time (regardless of the structure of the text). In our case this
// means that long search terms combined with a large index get very slow
//
// This injects a subindex that will be used once the terms get long enough.
// Making this subindex is usually cheap, since there will typically be only
// a small subset of results matching the existing query
if (subSearchFuse !== undefined && query.startsWith(subSearchTerm)) {
// Use the existing subSearchFuse
index = subSearchFuse;
} else if (subSearchFuse !== undefined) {
// The term changed, discard the existing fuse
subSearchFuse = undefined;
subSearchTerm = undefined;
}

// Search using the active fuse
const then = performance.now();
const resultsRaw = await index.search(query, fuseOptions);
const now = performance.now();

const results = resultsRaw.map((result) => {
const addParam = (url, name, value) => {
const anchorParts = url.split("#");
const baseUrl = anchorParts[0];
@@ -1244,4 +1272,15 @@ function fuseSearch(query, fuse, fuseOptions) {
crumbs: result.item.crumbs,
};
});

// If we don't have a subfuse and the search took long enough, go ahead
// and create a subfuse to use for subsequent queries
if (now - then > kFuseMaxWait && subSearchFuse === undefined) {
subSearchTerm = query;
subSearchFuse = new window.Fuse([], kFuseIndexOptions);
resultsRaw.forEach((rr) => {
subSearchFuse.add(rr.item);
});
}
return results;
}
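The sub-index trick in `fuseSearch` above is worth spelling out: once a query has run, any longer query that extends it can only match a subset of its results, so those results can safely become the new search universe. A sketch in Python for clarity — `naive_search` and `cached_search` are hypothetical stand-ins, not part of Quarto's code:

```python
# Sketch of the prefix-based sub-index caching used by fuseSearch above.
# naive_search stands in for Fuse.js's Bitap matching; the caching logic
# is the part being illustrated.
_sub_term = None
_sub_docs = None

def naive_search(docs, query):
    # Stand-in matcher: simple substring search.
    return [d for d in docs if query in d]

def cached_search(docs, query):
    global _sub_term, _sub_docs
    active = docs
    if _sub_docs is not None and query.startswith(_sub_term):
        # The query extends the cached term, so only cached results can match.
        active = _sub_docs
    else:
        # The term changed: discard the cache, as fuseSearch does.
        _sub_term = None
        _sub_docs = None
    results = naive_search(active, query)
    if _sub_docs is None:
        # Cache these results as the universe for subsequent longer queries.
        # (The real code only does this when the search exceeded kFuseMaxWait.)
        _sub_term = query
        _sub_docs = results
    return results
```

Note that the real implementation only builds the sub-index when a search takes longer than `kFuseMaxWait` (125 ms), so cheap queries never pay the caching cost.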
75 changes: 60 additions & 15 deletions docs/week1.html

Large diffs are not rendered by default.

75 changes: 60 additions & 15 deletions docs/week2.html

Large diffs are not rendered by default.

336 changes: 195 additions & 141 deletions docs/week3.html

Large diffs are not rendered by default.

100 changes: 100 additions & 0 deletions materials/data.csv
@@ -0,0 +1,100 @@
5.133879959574598
4.937817051434988
5.277966400350326
4.978073616723295
4.768264765712152
5.00325432375216
5.140401818212519
5.220085220743337
4.9103261367115305
4.916817929108131
4.676230504106508
4.681155942296321
4.939456345611404
5.404674908950762
5.0000565051183505
4.946723670230919
5.071484229192757
4.702765071407966
4.990440219285413
4.757565482906698
4.90342085169882
4.909731907856895
4.7502350067852035
4.735636087526655
4.831835910148419
5.083641260592571
4.941212006801077
4.836347213445427
4.962098544360349
4.686040420643092
5.124470580419098
5.066542799622644
5.116490362362727
4.978068545964261
4.733571541168396
4.799377562666428
4.731238173387429
4.927630515268987
5.276180964383851
4.605853804427195
4.895306070517735
5.125441726077501
5.048540015619509
5.134475090531417
5.020340763177987
4.744641578259603
4.9620314387166395
4.664576116370324
5.244742999934165
5.279274831080141
5.089479340563577
5.071743145634021
4.942265997797657
5.089398760768866
5.458231752210997
4.786148013337675
4.7972136274342025
4.93152718719522
4.978778063916119
4.801232311921279
4.686526370326429
5.105113175288019
4.8016552715132645
4.978252685586892
4.903595625784314
5.150389785891813
4.957095292110073
4.851463539442262
5.18128245975816
5.139918945662961
5.020216551825798
4.84115302966151
4.975904991006625
5.401969287491859
4.811230682660985
4.7993454128378925
5.08505429688905
4.6388175324133565
5.363488556427758
4.825053384694479
5.304806718052644
5.321881179050992
5.044592651869508
4.944522400406804
4.9930450107605076
5.400020830329514
5.162848379281823
4.851419760797657
5.021789943039787
4.831523320170004
4.870328937789049
4.927266223569667
5.023563865935118
4.969113692374906
4.997160262493862
4.747684042472069
5.070321872431043
4.899256187007241
5.111542353099519
5.201047790285543
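The new `materials/data.csv` is a single headerless column of 100 values clustered around 5, presumably the dataset used in the week 3 examples (an assumption; the qmd diff below does not show the loading step). A minimal loading sketch — three inline rows stand in for the file so the example is self-contained:

```python
import io

import numpy as np

# In the repository this would be np.loadtxt("materials/data.csv");
# the inline rows below are the first three values of the file.
csv_text = "5.133879959574598\n4.937817051434988\n5.277966400350326\n"
y = np.loadtxt(io.StringIO(csv_text))
print(y.size, y.mean())
```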
21 changes: 15 additions & 6 deletions materials/week3.qmd
@@ -120,7 +120,7 @@ idata
```

The function `az.summary` lets us look at some useful summary statistics,
- including $\hat{R}$ and divergent transitions.
+ including $\hat{R}$, divergent transitions and MCSE.
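As a reminder of what $\hat{R}$ measures: it compares between-chain and within-chain variance, and is close to 1 when the chains agree. A rough sketch of the classic (non-split, non-rank-normalised) computation — ArviZ's `az.rhat` uses a more robust variant, and `basic_rhat` is a hypothetical name:

```python
import numpy as np

def basic_rhat(chains):
    # chains: draws for one parameter, shape (n_chains, n_draws)
    m, n = chains.shape
    within = chains.var(axis=1, ddof=1).mean()       # W: mean within-chain variance
    between = n * chains.mean(axis=1).var(ddof=1)    # B: scaled variance of chain means
    pooled = (n - 1) / n * within + between / n      # pooled posterior variance estimate
    return np.sqrt(pooled / within)
```

Well-mixed chains give values near 1; a chain stuck in the wrong place inflates the between-chain term and pushes $\hat{R}$ well above 1.
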

The variable `lp`, which you can find in the group `sample_stats` is the
model's total log probability density. It's not very meaningful on its own,
@@ -131,23 +131,30 @@ divergent transitions.
az.summary(idata.sample_stats, var_names=["lp", "diverging"])
```

- Sometimes it's useful to check the convergence of individual parameters. This
- can be done by pointing `az.summary` at the group where the parameters of
- interest live. In this case the group is called `posterior`.
+ Sometimes it's useful to summarise individual parameters. This can be done by
+ pointing `az.summary` at the group where the parameters of interest live. In
+ this case the group is called `posterior`.

```{python}
az.summary(idata.posterior, var_names=["sigma", "g"])
```

- The function `az.loo` performs approximate leave-one-out cross validation, which can be useful for evaluating how well the model might make predictions.
+ The function `az.loo` performs approximate leave-one-out cross validation, which
+ can be useful for evaluating how well the model might make predictions. Watch
+ out for the `warning` column, which can tell you if the approximation is likely
+ to be incorrect.

```{python}
az.loo(idata, var_name="y", pointwise=True)
```
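To see roughly what `az.loo` is estimating: given pointwise log likelihoods for each posterior draw, the leave-one-out predictive density can be approximated by importance sampling. The naive version below (with a hypothetical helper name) uses raw harmonic-mean weights; `az.loo` instead applies Pareto-smoothed importance sampling, which is what makes the estimate, and its warning diagnostics, reliable:

```python
import numpy as np

def naive_loo_elpd(log_lik):
    # log_lik: pointwise log likelihoods, shape (n_draws, n_obs).
    # For each observation i, estimate log p(y_i | y_{-i}) by the harmonic
    # mean of the likelihoods: log(n_draws) - logsumexp(-log_lik[:, i]).
    n_draws = log_lik.shape[0]
    neg = -log_lik
    m = neg.max(axis=0)
    logsumexp_neg = m + np.log(np.exp(neg - m).sum(axis=0))
    elpd_i = np.log(n_draws) - logsumexp_neg
    return elpd_i.sum()
```

As a sanity check: if every draw assigns the same log likelihood to every observation, leaving one out changes nothing, and the estimate is just the sum of those values.
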

The function `az.compare` is useful for comparing different out of sample log
likelihood estimates.

```{python}
idata.log_likelihood["fake"] = xr.DataArray(
# generate some fake log likelihoods
np.random.normal(0, 2, [4, 500, 919]),
coords=idata.log_likelihood.coords,
dims=idata.log_likelihood.dims
@@ -160,3 +167,5 @@ az.compare(
)
```

az.plot_ppc(j)
