
Creating Bayesian power analysis method #292

Closed. Wants to merge 14 commits.
299 changes: 292 additions & 7 deletions causalpy/pymc_experiments.py
@@ -12,7 +12,7 @@
"""

import warnings # noqa: I001
from typing import Union
from typing import Union, Dict

import arviz as az
import matplotlib.pyplot as plt
@@ -32,7 +32,10 @@
IVDataValidator,
)
from causalpy.plot_utils import plot_xY
from causalpy.utils import round_num
from causalpy.utils import (
round_num,
compute_bayesian_tail_probability,
)

LEGEND_FONT_SIZE = 12
az.style.use("arviz-darkgrid")
@@ -321,18 +324,300 @@

return fig, ax

def summary(self, round_to=None) -> None:
def summary(
self, round_to=None, version: str = "coefficients", **kwargs
) -> Union[None, pd.DataFrame]:
"""
Print text output summarising the results

:param round_to:
Number of decimals used to round results. Defaults to 2. Use "None" to return raw numbers.
"""

print(f"{self.expt_type:=^80}")
print(f"Formula: {self.formula}")
# TODO: extra experiment specific outputs here
self.print_coefficients(round_to)
if version == "coefficients":
print(f"{self.expt_type:=^80}")
print(f"Formula: {self.formula}")

# TODO: extra experiment specific outputs here
self.print_coefficients()
elif version == "intervention":
Collaborator comment:
So I'm not 100% sure about this. At the moment we have a summary method for all classes which provides an estimate of the model coefficients. The approach used here is to also call the summary method but change what it does completely by changing a kwarg.

I'm thinking that it might be cleaner to completely separate this and just have a new method called power_analysis_summary, or intervention_summary, whatever. (A minimal sketch of that alternative is included after this method.)

return self._summary_intervention(**kwargs)


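A minimal sketch of the separate-method alternative suggested in the comment above (the name intervention_summary is the reviewer's suggestion, not something defined in this PR):

    def intervention_summary(self, alpha: float = 0.05, **kwargs) -> pd.DataFrame:
        """Summarise the intervention analysis without overloading summary()."""
        return self._summary_intervention(alpha=alpha, **kwargs)
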
def _power_estimation(self, alpha: float = 0.05, correction: bool = False) -> Dict:
"""
Estimate the statistical power of an intervention based on cumulative and mean results.
Contributor comment:
Ok, this is defined as a sub-function under the Pre-Post design class which (I understand) is intended to apply to multiple experiment designs which avail of this structure. As such, the doc-string should be very clear on the nature of the power calculation being done.

As it stands, the doc-string tries to cover too much and does so too quickly, mixing technical concepts like confidence intervals and MDE which are more naturally stated in the frequentist paradigm. Additionally, the function itself is overloaded, which makes it hard to parse what is going on.

Not saying you should introduce the material here for giving background on Bayesian power analysis, but the doc-string should have something about the structure of how the pre-post design informs the kind of power analysis we are conducting here, i.e. we're aiming to gauge mean changes in the posterior over time.

Quoting "It can apply corrections to account for systematic differences in the data.":
This is too vague. What corrections? It's not even really clear what we mean by "statistical power" in this context.

This function calculates posterior estimates, systematic differences, confidence intervals, and
minimum detectable effects (MDE) for both cumulative and mean measures. It can apply corrections to
account for systematic differences in the data.
Parameters:
- alpha (float, optional): The significance level for confidence interval calculations. Default is 0.05.
- correction (bool, optional): If True, applies corrections to account for systematic differences in
cumulative and mean calculations. Default is False.
Returns:
- Dict: A dictionary containing key statistical measures such as posterior estimation,
Collaborator comment:
Not sure how pedantic I should be, but are we actually getting a nested dictionary out here?

systematic differences, confidence intervals, and posterior MDE for both cumulative and mean results.
"""
results = {}
ci = (alpha * 100) / 2

# Cumulative calculations
Collaborator comment:
Can we change this comment to be more descriptive in terms of what it is doing?

cumulative_results = self.post_y.sum()
_mu_samples_cumulative = (

self.post_pred["posterior_predictive"]
.mu.stack(sample=("chain", "draw"))
.sum("obs_ind")
)
# Mean calculations
Collaborator comment:
Can we change this comment to be more descriptive in terms of what it is doing?

mean_results = self.post_y.mean()
_mu_samples_mean = (

self.post_pred["posterior_predictive"]
.mu.stack(sample=("chain", "draw"))
.mean("obs_ind")
)
# Posterior Mean
results["posterior_estimation"] = {

"cumulative": np.mean(_mu_samples_cumulative.values),
"mean": np.mean(_mu_samples_mean.values),
}
results["results"] = {

"cumulative": cumulative_results,
"mean": mean_results,
}
results["_systematic_differences"] = {

Collaborator comment:
Add a quick comment to explain what this is doing.

"cumulative": results["results"]["cumulative"]
- results["posterior_estimation"]["cumulative"],
"mean": results["results"]["mean"]
- results["posterior_estimation"]["mean"],
}
if correction:
_mu_samples_cumulative += results["_systematic_differences"]["cumulative"]
_mu_samples_mean += results["_systematic_differences"]["mean"]
results["ci"] = {

"cumulative": [
np.percentile(_mu_samples_cumulative, ci),
np.percentile(_mu_samples_cumulative, 100 - ci),
],
"mean": [
np.percentile(_mu_samples_mean, ci),
np.percentile(_mu_samples_mean, 100 - ci),
],
}
cumulative_upper_mde = (

Collaborator comment:
Could add a comment here just saying that we are calculating Minimum Detectable Effect (bounds/intervals?).

results["ci"]["cumulative"][1]
- results["posterior_estimation"]["cumulative"]
)
cumulative_lower_mde = (

results["posterior_estimation"]["cumulative"]
- results["ci"]["cumulative"][0]
)
mean_upper_mde = (

results["ci"]["mean"][1] - results["posterior_estimation"]["mean"]
)
mean_lower_mde = (

results["posterior_estimation"]["mean"] - results["ci"]["mean"][0]
)
results["posterior_mde"] = {

"cumulative": (cumulative_upper_mde + cumulative_lower_mde) / 2,
"mean": (mean_upper_mde + mean_lower_mde) / 2,
}
return results

Contributor comment:
Consider splitting these steps into smaller functions with individual docstrings describing the metric you're calculating in each case.

Quoting "correction (bool, optional): If True, applies corrections to account for systematic differences in cumulative and mean calculations. Default is False.":
This really needs clarifying with respect to the nature of the particular type of power analysis you're running on the pre-post design structure.

Reply:
Agree, but I think this is better described in the documentation than in the docstring, otherwise it can be too much.


def _summary_intervention(self, alpha: float = 0.05, **kwargs) -> pd.DataFrame:
"""
Calculate and summarize the intervention analysis results in a DataFrame format.

This function performs cumulative and mean calculations on the posterior predictive distributions,
Contributor comment:
Again the doc-string is overloaded and unclear. Not clear why the Pre-Post design is relevant. Is this a summary of the intervention or just the intervention with respect to the power analysis we're running?

But now we also have reference to Bayesian tail probabilities, posterior estimations and causal effects. Too much going on.

If we're using particular Bayesian tail probabilities because we have assumed a particular likelihood distribution we should clarify this here.

computes Bayesian tail probabilities, posterior estimations, causal effects, and confidence intervals.
It optionally applies corrections to the cumulative and mean calculations.

Parameters
----------
- alpha (float, optional): The significance level for confidence interval calculations. Default is 0.05.
- kwargs (Dict[str, Any], optional): Additional keyword arguments.
- "correction" (bool or Dict[str, float]): If True, applies predefined corrections to cumulative and mean results.
If a dictionary, the corrections for 'cumulative' and 'mean' should be provided. Default is False.

Returns
-------
- pd.DataFrame: A DataFrame where each row represents different statistical measures such as
Bayesian tail probability, posterior estimation, causal effect, and confidence intervals for cumulative and mean results.
"""
correction = kwargs.get("correction", False)


results = {}
ci = (alpha * 100) / 2


# Cumulative calculations
cumulative_results = self.post_y.sum()
_mu_samples_cumulative = (

self.post_pred["posterior_predictive"]
.mu.stack(sample=("chain", "draw"))
.sum("obs_ind")
)

# Mean calculations
mean_results = self.post_y.mean()
_mu_samples_mean = (

self.post_pred["posterior_predictive"]
.mu.stack(sample=("chain", "draw"))
.mean("obs_ind")
)

if not isinstance(correction, bool):
_mu_samples_cumulative += correction["cumulative"]
_mu_samples_mean += correction["mean"]


# Bayesian Tail Probability
results["bayesian_tail_probability"] = {

"cumulative": compute_bayesian_tail_probability(
posterior=_mu_samples_cumulative, x=cumulative_results
),
"mean": compute_bayesian_tail_probability(
posterior=_mu_samples_mean, x=mean_results
),
}

# Posterior Mean
results["posterior_estimation"] = {

"cumulative": np.mean(_mu_samples_cumulative.values),
"mean": np.mean(_mu_samples_mean.values),
}

results["results"] = {

"cumulative": cumulative_results,
"mean": mean_results,
}

# Causal Effect
results["causal_effect"] = {

"cumulative": cumulative_results
- results["posterior_estimation"]["cumulative"],
"mean": mean_results - results["posterior_estimation"]["mean"],
}

# Confidence Intervals
results["ci"] = {

"cumulative": [
np.percentile(_mu_samples_cumulative, ci),
np.percentile(_mu_samples_cumulative, 100 - ci),
],
"mean": [
np.percentile(_mu_samples_mean, ci),
np.percentile(_mu_samples_mean, 100 - ci),
],
}

# Convert to DataFrame
results_df = pd.DataFrame(results)


return results_df


def power_summary(
self, alpha: float = 0.05, correction: bool = False
) -> pd.DataFrame:
"""
Summarize the power estimation results in a DataFrame format.

This function performs a power estimation and then converts the resulting dictionary into a pandas DataFrame for easier analysis and visualization.

Parameters
----------
- alpha (float, optional): The significance level for confidence interval calculations used in power estimation. Default is 0.05.
- correction (bool, optional): Indicates whether to apply corrections in the power estimation process. Default is False.

Returns
-------
- pd.DataFrame: A DataFrame representing the power estimation results, including posterior estimations,
Contributor comment:
Again too vague, since we're not clarifying beforehand what the goal of Bayesian power analysis is in this Pre-Post design context.

Reply (@cetagostini-wise, Apr 2, 2024):
Here, I believe the description of the power analysis can be a reference to the documentation rather than added directly. What do you think?

systematic differences, confidence intervals, and posterior MDE for cumulative and mean results.
"""
warnings.warn(

"The power function is experimental and the API may change in the future."
)
return pd.DataFrame(self._power_estimation(alpha=alpha, correction=correction))


def plot_power(self, alpha: float = 0.05, correction: bool = False) -> plt.Figure:
"""
Generate and return a figure containing plots that visualize power estimation results.

This function creates a two-panel plot (for mean and cumulative measures) to visualize the posterior distributions
along with the confidence intervals, real mean, and posterior mean values. It allows for adjustments based on
systematic differences if the correction is applied.

Parameters
----------
- alpha (float, optional): The significance level for confidence interval calculations used in power estimation. Default is 0.05.
- correction (bool, optional): Indicates whether to apply corrections for systematic differences in the plotting process. Default is False.

Returns
-------
- plt.Figure: A matplotlib figure object containing the plots.
"""
_estimates = self._power_estimation(alpha=alpha, correction=correction)


fig, axs = plt.subplots(1, 2, figsize=(20, 6)) # Two subplots side by side


# Adjustments for Mean and Cumulative plots
for i, key in enumerate(["mean", "cumulative"]):
_mu_samples = self.post_pred["posterior_predictive"].mu.stack(

sample=("chain", "draw")
)
if key == "mean":
_mu_samples = _mu_samples.mean("obs_ind")
elif key == "cumulative":
_mu_samples = _mu_samples.sum("obs_ind")


if correction:
_mu_samples += _estimates["_systematic_differences"][key]


# Histogram and KDE
sns.histplot(

_mu_samples,
bins=30,
kde=True,
ax=axs[i],
color="C0",
stat="density",
alpha=0.6,
)
kde_x, kde_y = (

sns.kdeplot(_mu_samples, color="C1", fill=True, ax=axs[i])
.get_lines()[0]
.get_data()
)

# Adjust y-limits based on max density
max_density = max(kde_y)
axs[i].set_ylim(0, max_density + 0.05 * max_density) # Adding 5% buffer


# Fill between for the percentile interval
axs[i].fill_betweenx(

y=np.linspace(0, max_density + 0.05 * max_density, 100),
x1=_estimates["ci"][key][0],
x2=_estimates["ci"][key][1],
color="C0",
alpha=0.3,
label="C.I",
)

# Vertical lines for the means
axs[i].axvline(

_estimates["results"][key], color="C3", linestyle="-", label="Real Mean"
)
if not correction:
axs[i].axvline(

_estimates["posterior_estimation"][key],
color="C4",
linestyle="--",
label="Posterior Mean",
)

axs[i].set_title(f"Posterior of mu ({key.capitalize()})")
axs[i].set_xlabel("mu")
axs[i].set_ylabel("Density")
axs[i].legend()
warnings.warn(

"The power function is experimental and the API may change in the future."
)
return fig, axs



class InterruptedTimeSeries(PrePostFit):
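
For orientation, a minimal usage sketch of the methods added in this diff, assuming `result` is an already-fitted pre/post experiment object (the variable name and argument values are illustrative only):

result.summary(version="intervention")                       # intervention summary as a DataFrame
power_df = result.power_summary(alpha=0.05, correction=False)  # power estimation results as a DataFrame
fig, axs = result.plot_power(alpha=0.05, correction=True)      # two-panel plot of mean and cumulative posteriors
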
50 changes: 49 additions & 1 deletion causalpy/tests/test_utils.py
@@ -2,9 +2,15 @@
Tests for utility functions
"""

import numpy as np
import pandas as pd

from causalpy.utils import _is_variable_dummy_coded, _series_has_2_levels, round_num
from causalpy.utils import (
_is_variable_dummy_coded,
_series_has_2_levels,
compute_bayesian_tail_probability,
round_num,
)


def test_dummy_coding():
@@ -44,3 +50,45 @@ def test_round_num():
assert round_num(123.456, 5) == "123.46"
assert round_num(123.456, 6) == "123.456"
assert round_num(123.456, 7) == "123.456"
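
The new test below exercises compute_bayesian_tail_probability from causalpy.utils, whose implementation is not part of this diff. For context, a minimal sketch consistent with the expectations encoded in the test (an illustrative assumption, not necessarily the PR's actual implementation) would be:

import numpy as np
from scipy import stats

def compute_bayesian_tail_probability(posterior, x) -> float:
    # Two-sided tail probability of x under a normal approximation to the posterior samples:
    # close to 1 when x sits at the posterior mean, shrinking towards 0 as x moves into the tails.
    mean, std = np.mean(posterior), np.std(posterior)
    return float(2 * (1 - stats.norm.cdf(np.abs(x - mean) / std)))
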


def test_compute_bayesian_tail_probability():
Collaborator comment:
We've got multiple different scenarios here, each of which should be a different test. But for each test, we can have multiple asserts, like the output is a scalar float, between 0 and 1, etc. (A sketch of that split appears after this test function.)

"""
Re-running all tests for the compute_bayesian_tail_probability function with the corrected understanding
and expectations for various scenarios.
"""
# Test 1: Posterior is a standard normal distribution, x = mean = 0
posterior_standard_normal = np.random.normal(0, 1, 10000)
x_at_mean = 0
prob_at_mean = compute_bayesian_tail_probability(
posterior_standard_normal, x_at_mean
)
assert np.isclose(prob_at_mean, 1, atol=0.05), f"Expected 1, got {prob_at_mean}"

# Test 2: Posterior is a standard normal distribution, x = 1
x_one_std_above = 1
prob_one_std_above = compute_bayesian_tail_probability(
posterior_standard_normal, x_one_std_above
)
assert (
0 < prob_one_std_above < 1
), "Probability should decrease from 1 as x moves away from mean"

# Test 3: Posterior is a standard normal distribution, x well outside the distribution
x_far_out = 5
prob_far_out = compute_bayesian_tail_probability(
posterior_standard_normal, x_far_out
)
# Expect a very low probability for a value far outside the distribution
assert prob_far_out < 0.01, f"Expected a value < 0.01, got {prob_far_out}"

# Test 4: Posterior is a normal distribution with mean=5, std=2, x = mean
posterior_shifted = np.random.normal(5, 2, 10000)
x_at_shifted_mean = 5
prob_at_shifted_mean = compute_bayesian_tail_probability(
posterior_shifted, x_at_shifted_mean
)
# Expect the probability at the mean of a shifted distribution to be close to 1
assert np.isclose(
prob_at_shifted_mean, 1, atol=0.05
), f"Expected 1, got {prob_at_shifted_mean}"