-
Hi, I'm curious what the recommended approaches are to a situation like this one: researcher A thinks an effect must lie between 0.1 and 2.0, but researcher B thinks it lies between 1.0 and 3.0. Then they collect some data to see who is more correct. One potential solution is to set up two models with two different priors. The example code below is what I'm thinking of; in this case, the output of 0.816 says researcher A is slightly more correct than researcher B. Does this approach look right?

I'm also wondering if it's possible to use "testing against a null region" in this situation. In this case, the "null region" would be either researcher A's or researcher B's range. The idea is that if a researcher's estimate (i.e., the "null region" of either [0.1, 2.0] or [1.0, 3.0]) is more correct, the shift in prior beliefs after observing the data should be smaller, which means the researcher with the smaller BF is more correct. Does this interpretation/approach sound right? Appreciate any thoughts/suggestions anyone can provide. Thanks!

library(brms)
library(bayestestR)
# uniform priors aren't great, but it's used here just to match the numbers/hypotheses in my dummy example
priors1 <- set_prior("uniform(0.1, 2)", class = "b", lb = 0.1, ub = 2) # researcher A
model1 <- brm(mpg ~ qsec, data = mtcars, prior = priors1,
              sample_prior = TRUE, save_pars = save_pars(all = TRUE))
priors2 <- set_prior("uniform(1, 3)", class = "b", lb = 1, ub = 3) # researcher B
model2 <- brm(mpg ~ qsec, data = mtcars, prior = priors2,
              sample_prior = TRUE, save_pars = save_pars(all = TRUE))
bayesfactor_models(model1, model2)
# output
Bayes Factors for Model Comparison

  Model    BF
  [2] qsec 0.816

* Against Denominator: [1] qsec
* Bayes Factor Type: marginal likelihoods (bridgesampling)
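As for the "testing against a null region" idea, bayestestR's `bayesfactor_parameters()` accepts an interval null, so each researcher's range can be tested directly. One caveat: the bounded uniform priors above would make this degenerate (the posterior can never leave the bounds), so this sketch assumes a single model with a wide prior; the `normal(0, 10)` prior here is a hypothetical choice, not from the thread:

```r
library(brms)
library(bayestestR)

prior_wide <- set_prior("normal(0, 10)", class = "b")
model_wide <- brm(mpg ~ qsec, data = mtcars, prior = prior_wide,
                  sample_prior = TRUE, save_pars = save_pars(all = TRUE))

# A smaller BF against an interval null means less updating away from that
# region, i.e. the data are more consistent with that researcher's range
bayesfactor_parameters(model_wide, null = c(0.1, 2.0)) # researcher A's range
bayesfactor_parameters(model_wide, null = c(1.0, 3.0)) # researcher B's range
```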
-
I don't think Bayes factors are very useful here, particularly because the hypotheses overlap. A more useful approach, in my opinion, is to treat the two ranges as ROPE bounds and compute the proportion of the posterior that falls in each region. The region that contains more of the posterior is better supported by the data.
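A sketch of this ROPE-style comparison, assuming a single model fit with a lax prior (the `normal(1.5, 2)` prior here is a hypothetical choice):

```r
library(brms)
library(bayestestR)

model <- brm(mpg ~ qsec, data = mtcars,
             prior = set_prior("normal(1.5, 2)", class = "b"))

# Proportion of the full posterior inside each researcher's range;
# ci = 1 uses the whole posterior rather than a trimmed interval
rope(model, range = c(0.1, 2.0), ci = 1) # researcher A's range
rope(model, range = c(1.0, 3.0), ci = 1) # researcher B's range
```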
-
Bayes factors are perfect here! You have competing priors, and you want to see which is relatively closer to the observed data. There is no problem with the fact that they are overlapping, @bwiernik. E.g., A has a prior of [0, Inf], while B has a prior of [-0.1, +0.1]. These also overlap, but they are formulations of different priors (positive slope vs. small slope).

The downside to using a BF here is that both A and B can be very wrong, and the BF will still give some value indicating who is less wrong.

You can also use a posterior-based method to see which of the hypotheses the posterior looks like. However, you would need to do so with what is sometimes called an "estimation prior" - a prior that is more lax than the true priors of either researcher, to allow some learning (it can still be a "strong" prior, just not as strong). I would recommend fitting 3 models:

1. A model with researcher A's prior
2. A model with researcher B's prior
3. A model with an estimation prior

Compare the marginal likelihoods of 1 and 2 (BF), and use model 3's posterior for the posterior-based comparison.
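The three-model workflow described above might look like this (the estimation prior is a hypothetical choice):

```r
library(brms)
library(bayestestR)

fit <- function(p) brm(mpg ~ qsec, data = mtcars, prior = p,
                       sample_prior = TRUE, save_pars = save_pars(all = TRUE))

m_A   <- fit(set_prior("uniform(0.1, 2)", class = "b", lb = 0.1, ub = 2))
m_B   <- fit(set_prior("uniform(1, 3)", class = "b", lb = 1, ub = 3))
m_est <- fit(set_prior("normal(1.5, 2)", class = "b")) # lax estimation prior

# (1) vs (2): which prior was closer to the observed data?
bayesfactor_models(m_A, m_B)

# (3): which range does the estimation posterior favor?
post <- as.matrix(m_est)[, "b_qsec"]
mean(post > 0.1 & post < 2.0) # posterior mass in A's range
mean(post > 1.0 & post < 3.0) # posterior mass in B's range
```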