Unlike Bayesian posterior distributions, confidence/consonance functions do not have distributional properties, and they lack the interpretation that is given to Bayesian posterior intervals. For example, a Bayesian 95% posterior interval has the interpretation of having a 95% (posterior) probability of containing the true value.
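To make that interpretation concrete, here is a minimal sketch (not part of the analysis below) using a conjugate normal-normal model with a known sampling SD, where the posterior is available in closed form and the 95% credible interval can be read directly off the posterior distribution:

```r
# Hypothetical sketch: conjugate normal-normal model, known sampling SD.
set.seed(100)
y <- rnorm(50, mean = 20, sd = 5) # observed data
prior_mean <- 0
prior_sd <- 100 # a very diffuse prior
sigma <- 5 # sampling SD, assumed known for simplicity

# Standard conjugate update for the mean of a normal with known variance
post_var <- 1 / (1 / prior_sd^2 + length(y) / sigma^2)
post_mean <- post_var * (prior_mean / prior_sd^2 + sum(y) / sigma^2)

# 95% credible interval: given the model and prior, there is a 95%
# posterior probability that the mean lies inside this interval
ci <- qnorm(c(0.025, 0.975), mean = post_mean, sd = sqrt(post_var))
ci
```

With a prior this diffuse, the posterior mean sits essentially on the sample mean; the point is that the probability statement attaches to this single interval, not to a long run of repetitions.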

This does not apply to 95% frequentist intervals, where the 95% refers to long-run coverage: if the study were repeated over and over, 95% of such intervals would contain the true parameter. Any single 95% frequentist interval either contains the true parameter or it does not. In the code below, we simulate data where the true population parameter (the difference in means) is 20, and we know this because we’re the deities of this simulated world. A properly behaving statistical procedure with alpha set to 0.05 will, in the long run, yield intervals that include this population parameter of 20 at least 95% of the time. Intervals that miss it are marked in red.

sim <- function() {
  # Two groups whose true difference in means is 20
  x <- rnorm(100, 100, 20)
  y <- rnorm(100, 80, 20)
  t.test(x, y, conf.level = 0.95)$conf.int
}

set.seed(1031)

z <- replicate(100, sim(), simplify = FALSE)

df <- data.frame(do.call(rbind, z))
df$studynumber <- seq_along(z)
colnames(df) <- c("lower.limit", "upper.limit", "studynumber")
df$point <- (df$lower.limit + df$upper.limit) / 2
df$covered <- (df$lower.limit <= 20 & 20 <= df$upper.limit)
df$coverageprob <- mean(df$covered) * 100 # robust even if every interval covers

library(ggplot2)


ggplot(data = df, aes(x = studynumber, y = point, ymin = lower.limit, ymax = upper.limit)) +
  geom_pointrange(mapping = aes(color = covered), size = .40) +
  geom_hline(yintercept = 20, lty = 1, color = "red", alpha = 0.5) +
  coord_flip() +
  labs(
    title = "Simulated 95% Intervals",
    x = "Study Number",
    y = "Estimate",
    subtitle = "Population Parameter is 20"
  ) +
  theme_bw() + # use a white background
  theme(legend.position = "none") +
  annotate(
    geom = "text", x = 102, y = 30,
    label = "Coverage (%) =", size = 2.5, color = "black"
  ) +
  annotate(
    geom = "text", x = 102, y = 35,
    label = unique(df$coverageprob), size = 2.5, color = "black"
  )

Although the code above demonstrates this, one of the best visualization tools to understand this long-run behavior is the D3.js visualization created by Kristoffer Magnusson, which can be viewed here.

However, despite these differences in interpretation, Bayesian and frequentist intervals often converge numerically, especially with large amounts of data, or when the Bayesian posterior distribution is computed with a flat or weakly informative prior. Flat priors have several problems, such as giving equal weight to all values in the interval, including implausible ones, and should generally be avoided. For the sake of this demonstration, however, we will be using flat priors.
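The mechanics of this convergence are easy to see with a toy grid approximation (a sketch for illustration only, separate from the models fit below): with a flat prior, the normalized posterior is just the normalized likelihood, so a posterior interval and a likelihood-based interval must agree.

```r
# Hypothetical sketch: grid approximation for the mean of a normal sample
# with known SD = 1. A flat prior makes posterior proportional to likelihood.
set.seed(100)
y <- rnorm(30, mean = 2, sd = 1)
grid <- seq(0, 4, length.out = 2000) # candidate values for the mean

# log-likelihood at each grid point, exponentiated stably
loglik <- sapply(grid, function(mu) sum(dnorm(y, mean = mu, sd = 1, log = TRUE)))
lik <- exp(loglik - max(loglik))

flat_prior <- rep(1, length(grid)) # constant over the grid
posterior <- lik * flat_prior
posterior <- posterior / sum(posterior) # normalize
likelihood <- lik / sum(lik)

all.equal(posterior, likelihood) # the two curves coincide
```

Any disagreement between the two approaches in this setting therefore comes from the prior, not from the data model.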

Here, I demonstrate with a simple example how Bayesian posterior distributions and frequentist confidence functions end up converging in some scenarios. For these first few examples, I’ll be using the rstanarm package.1

library(concurve)
library(rstan)
#> Loading required package: StanHeaders
#> rstan (Version 2.21.2, GitRev: 2e1f913d3ca3)
#> For execution on a local, multicore CPU with excess RAM we recommend calling
#> options(mc.cores = parallel::detectCores()).
#> To avoid recompilation of unchanged Stan programs, we recommend calling
#> rstan_options(auto_write = TRUE)
library(rstanarm)
#> Loading required package: Rcpp
#> Registered S3 methods overwritten by 'lme4':
#>   method                          from
#>   cooks.distance.influence.merMod car 
#>   influence.merMod                car 
#>   dfbeta.influence.merMod         car 
#>   dfbetas.influence.merMod        car
#> This is rstanarm version 2.21.1
#> - See https://mc-stan.org/rstanarm/articles/priors for changes to default priors!
#> - Default priors may change, so it's safest to specify priors, even if equivalent to the defaults.
#> - For execution on a local, multicore CPU with excess RAM we recommend calling
#>   options(mc.cores = parallel::detectCores())
#> 
#> Attaching package: 'rstanarm'
#> The following object is masked from 'package:rstan':
#> 
#>     loo
library(ggplot2)
library(cowplot)
#> 
#> ********************************************************
#> Note: As of version 1.0.0, cowplot does not change the
#>   default ggplot2 theme anymore. To recover the previous
#>   behavior, execute:
#>   theme_set(theme_cowplot())
#> ********************************************************
library(bayesplot)
#> This is bayesplot version 1.7.2
#> - Online documentation and vignettes at mc-stan.org/bayesplot
#> - bayesplot theme set to bayesplot::theme_default()
#>    * Does _not_ affect other ggplot2 plots
#>    * See ?bayesplot_theme_set for details on theme setting
library(scales)

We will simulate some data (two variables) from a normal distribution with a location parameter of 0 and a scale parameter of 1 (something very simple), and then regress the first variable (GroupA) on the second (GroupB) using the base lm() function. We will take the regression coefficient and construct a consonance function for it.

GroupA <- rnorm(50)
GroupB <- rnorm(50)
RandomData <- data.frame(GroupA, GroupB)
model_freq <- lm(GroupA ~ GroupB, data = RandomData)

Now we will do the same using Bayesian methods, specifying a flat prior (prior = NULL) to show the convergence of the posterior with the consonance function.

rstan_options(auto_write = TRUE)

# Using flat prior
model_bayes <- stan_lm(GroupA ~ GroupB,
  data = RandomData, prior = NULL,
  iter = 5000, warmup = 1000, chains = 4
)

Now that we’ve fit the models, we can graph the functions.

randomframe <- curve_gen(model_freq, "GroupB", steps = 10000)

(function1 <- ggcurve(type = "c", randomframe[[1]], nullvalue = TRUE))

color_scheme_set("teal")

function2 <- mcmc_dens(model_bayes, pars = "GroupB") +
  ggtitle("Posterior Distribution") +
  labs(subtitle = "Function Displays the Full Posterior Distribution", x = "Range of Values", y = "Posterior Probability") +
  scale_y_continuous(breaks = c(0, 0.30, 0.60, 0.90, 1.20, 1.50, 1.80, 2.10, 2.40, 2.70, 3.0))
#> Scale for 'y' is already present. Adding another scale for 'y', which will
#> replace the existing scale.


(breaks1 <- c(0, 0.30, 0.60, 0.90, 1.20, 1.50, 1.80, 2.10, 2.40, 2.70, 3.0))
#>  [1] 0.0 0.3 0.6 0.9 1.2 1.5 1.8 2.1 2.4 2.7 3.0

(adjustment <- function(x) {
  x / 3
})
#> function(x) {
#>   x / 3
#> }

(labels <- adjustment(breaks1))
#>  [1] 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0

breaks <- labels
labels1 <- labels

(function3 <- mcmc_dens(model_bayes, pars = "GroupB") +
  ggtitle("Posterior Distribution") +
  labs(subtitle = "Function Displays the Full Posterior Distribution", x = "Range of Values", y = "Posterior Probability") +
  scale_x_continuous(expand = c(0, 0), breaks = scales::pretty_breaks(n = 10)) +
  scale_y_continuous(expand = c(0, 0), breaks = waiver(), labels = waiver(), n.breaks = 10, limits = c(0, 3.25)) +
  yaxis_text(on = TRUE) +
  yaxis_ticks(on = TRUE) +
  annotate("segment",
    x = 0, xend = 0, y = 0, yend = 3,
    color = "#990000", alpha = 0.4, size = .75, linetype = 3
  ))
#> Scale for 'x' is already present. Adding another scale for 'x', which will
#> replace the existing scale.
#> Scale for 'y' is already present. Adding another scale for 'y', which will
#> replace the existing scale.

I made some adjustments above to the bayesplot code so that we could more easily compare the consonance function to the posterior distribution. We will be using plot_grid() from cowplot to achieve this.

plot_grid(function1, function3, ncol = 1, align = "v")

As you can see, the results end up being very similar. You would likely get similar results with a weakly informative prior such as normal(0, 100), or with much larger datasets, where the likelihood ends up swamping the prior, though this isn’t always the case.
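The "swamping" claim can be checked analytically in the conjugate normal case (a sketch with made-up numbers, not tied to the models above): the posterior mean is a precision-weighted average of the prior mean and the sample mean, and the prior's weight vanishes as n grows.

```r
# Hypothetical sketch: posterior mean under a normal(prior_mean, prior_sd)
# prior for a normal sample with known SD. The weight w on the data goes
# to 1 as n increases, so the prior's influence shrinks.
post_mean <- function(n, ybar, sigma = 1, prior_mean = 0, prior_sd = 100) {
  w <- (n / sigma^2) / (n / sigma^2 + 1 / prior_sd^2) # weight on the data
  w * ybar + (1 - w) * prior_mean
}

post_mean(5, ybar = 2) # already close to the sample mean of 2
post_mean(500, ybar = 2) # essentially indistinguishable from it
```

With a weakly informative prior such as normal(0, 100), even small samples leave the posterior mean almost exactly at the sample mean; a tighter prior would require more data to be swamped.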

Here’s another example, but this time the variables we simulate have different location parameters.

GroupA <- rnorm(500, mean = 2)
GroupB <- rnorm(500, mean = 1)
RandomData <- data.frame(GroupA, GroupB)
model_freq <- lm(GroupA ~ GroupB, data = RandomData)
# Using flat prior
model_bayes <- stan_lm(GroupA ~ GroupB,
  data = RandomData, prior = NULL,
  iter = 5000, warmup = 1000, chains = 4
)
randomframe <- curve_gen(model_freq, "GroupB", steps = 10000)

(function1 <- ggcurve(type = "c", randomframe[[1]], nullvalue = TRUE))

color_scheme_set("teal")

function2 <- mcmc_dens(model_bayes, pars = "GroupB") +
  ggtitle("Posterior Distribution") +
  labs(subtitle = "Function Displays the Full Posterior Distribution", x = "Range of Values", y = "Posterior Probability") +
  scale_y_continuous(breaks = c(0, 0.30, 0.60, 0.90, 1.20, 1.50, 1.80, 2.10, 2.40, 2.70, 3.0))
#> Scale for 'y' is already present. Adding another scale for 'y', which will
#> replace the existing scale.


(breaks1 <- c(0, 0.30, 0.60, 0.90, 1.20, 1.50, 1.80, 2.10, 2.40, 2.70, 3.0))
#>  [1] 0.0 0.3 0.6 0.9 1.2 1.5 1.8 2.1 2.4 2.7 3.0

(adjustment <- function(x) {
  x / 3
})
#> function(x) {
#>   x / 3
#> }

(labels <- adjustment(breaks1))
#>  [1] 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0

breaks <- labels
labels1 <- labels

(function3 <- mcmc_dens(model_bayes, pars = "GroupB") +
  ggtitle("Posterior Distribution") +
  labs(subtitle = "Function Displays the Full Posterior Distribution", x = "Range of Values", y = "Posterior Probability") +
  scale_x_continuous(expand = c(0, 0), breaks = scales::pretty_breaks(n = 10)) +
  scale_y_continuous(expand = c(0, 0), breaks = waiver(), labels = waiver(), n.breaks = 10, limits = c(0, 9)) +
  yaxis_text(on = TRUE) +
  yaxis_ticks(on = TRUE) +
  annotate("segment",
    x = 0, xend = 0, y = 0, yend = 9,
    color = "#990000", alpha = 0.4, size = .75, linetype = 3
  ))
#> Scale for 'x' is already present. Adding another scale for 'x', which will
#> replace the existing scale.
#> Scale for 'y' is already present. Adding another scale for 'y', which will
#> replace the existing scale.

plot_grid(function1, function3, ncol = 1, align = "v")

Here’s another example; this time, instead of generating random numbers, we use a real dataset (kidiq, included with rstanarm).

data(kidiq)

# flat prior

post1 <- stan_lm(kid_score ~ mom_hs,
  data = kidiq, prior = NULL,
  seed = 12345
)
post2 <- lm(kid_score ~ mom_hs, data = kidiq)

df3 <- curve_gen(post2, "mom_hs")

(function99 <- ggcurve(df3[[1]]))

summary(post1)
#> 
#> Model Info:
#>  function:     stan_lm
#>  family:       gaussian [identity]
#>  formula:      kid_score ~ mom_hs
#>  algorithm:    sampling
#>  sample:       4000 (posterior sample size)
#>  priors:       see help('prior_summary')
#>  observations: 434
#>  predictors:   2
#> 
#> Estimates:
#>                 mean   sd   10%   50%   90%
#> (Intercept)   77.4    2.1 74.8  77.4  80.0 
#> mom_hs        12.0    2.3  9.0  12.0  14.9 
#> sigma         19.9    0.7 19.0  19.8  20.7 
#> log-fit_ratio -0.2    0.0 -0.2  -0.2  -0.1 
#> R2             0.1    0.0  0.0   0.1   0.1 
#> 
#> Fit Diagnostics:
#>            mean   sd   10%   50%   90%
#> mean_PPD 86.8    1.3 85.1  86.8  88.5 
#> 
#> The mean_ppd is the sample average posterior predictive distribution of the outcome variable (for details see help('summary.stanreg')).
#> 
#> MCMC diagnostics
#>               mcse Rhat n_eff
#> (Intercept)   0.1  1.0   863 
#> mom_hs        0.1  1.0   907 
#> sigma         0.0  1.0  1736 
#> log-fit_ratio 0.0  1.0  1077 
#> R2            0.0  1.0   837 
#> mean_PPD      0.0  1.0  3663 
#> log-posterior 0.0  1.0  1349 
#> 
#> For each parameter, mcse is Monte Carlo standard error, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence Rhat=1).

color_scheme_set("teal")

(function101 <- mcmc_areas(post1, pars = "mom_hs", point_est = "none", prob = 1, prob_outer = 1, area_method = "equal height") +
  ggtitle("Posterior Distribution") +
  labs(subtitle = "Function Displays the Full Posterior Distribution", x = "Range of Values", y = "Posterior Probability") +
  yaxis_text(on = TRUE) +
  yaxis_ticks(on = TRUE))

cowplot::plot_grid(function99, function101, ncol = 1, align = "v")

Practically the same.


References


Session info

#> R version 4.0.2 (2020-06-22)
#> Platform: x86_64-apple-darwin17.0 (64-bit)
#> Running under: macOS Catalina 10.15.6
#> 
#> Matrix products: default
#> BLAS:   /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRblas.dylib
#> LAPACK: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRlapack.dylib
#> 
#> locale:
#> [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
#> 
#> attached base packages:
#> [1] stats     graphics  grDevices utils     datasets  methods   base     
#> 
#> other attached packages:
#> [1] scales_1.1.1         bayesplot_1.7.2      cowplot_1.0.0       
#> [4] rstanarm_2.21.1      Rcpp_1.0.5           rstan_2.21.2        
#> [7] StanHeaders_2.21.0-5 concurve_2.7.0       ggplot2_3.3.2       
#> 
#> loaded via a namespace (and not attached):
#>   [1] minqa_1.2.4           colorspace_1.4-1      ggsignif_0.6.0       
#>   [4] ellipsis_0.3.1        rio_0.5.16            ggridges_0.5.2       
#>   [7] rsconnect_0.8.16      rprojroot_1.3-2       flextable_0.5.10     
#>  [10] markdown_1.1          base64enc_0.1-3       fs_1.5.0             
#>  [13] rstudioapi_0.11       ggpubr_0.4.0          farver_2.0.3         
#>  [16] DT_0.15               fansi_0.4.1           xml2_1.3.2           
#>  [19] codetools_0.2-16      splines_4.0.2         knitr_1.29           
#>  [22] shinythemes_1.1.2     jsonlite_1.7.0        nloptr_1.2.2.2       
#>  [25] broom_0.7.0           km.ci_0.5-2           bcaboot_0.2-1        
#>  [28] shiny_1.5.0           compiler_4.0.2        backports_1.1.8      
#>  [31] fastmap_1.0.1         assertthat_0.2.1      Matrix_1.2-18        
#>  [34] cli_2.0.2             later_1.1.0.1         htmltools_0.5.0      
#>  [37] prettyunits_1.1.1     tools_4.0.2           igraph_1.2.5         
#>  [40] gtable_0.3.0          glue_1.4.1            reshape2_1.4.4       
#>  [43] dplyr_1.0.1           V8_3.2.0              carData_3.0-4        
#>  [46] cellranger_1.1.0      pkgdown_1.5.1         vctrs_0.3.2          
#>  [49] nlme_3.1-148          crosstalk_1.1.0.1     xfun_0.16            
#>  [52] stringr_1.4.0         ps_1.3.3              lme4_1.1-23          
#>  [55] openxlsx_4.1.5        miniUI_0.1.1.1        mime_0.9             
#>  [58] lifecycle_0.2.0       gtools_3.8.2          statmod_1.4.34       
#>  [61] rstatix_0.6.0         MASS_7.3-51.6         zoo_1.8-8            
#>  [64] colourpicker_1.0      promises_1.1.1        hms_0.5.3            
#>  [67] ProfileLikelihood_1.1 parallel_4.0.2        inline_0.3.15        
#>  [70] shinystan_2.5.0       metafor_2.4-0         yaml_2.2.1           
#>  [73] curl_4.3              memoise_1.1.0         gridExtra_2.3        
#>  [76] KMsurv_0.1-5          gdtools_0.2.2         loo_2.3.1            
#>  [79] stringi_1.4.6         dygraphs_1.1.1.6      desc_1.2.0           
#>  [82] boot_1.3-25           pkgbuild_1.1.0        zip_2.0.4            
#>  [85] rlang_0.4.7           pkgconfig_2.0.3       systemfonts_0.2.3    
#>  [88] matrixStats_0.56.0    evaluate_0.14         lattice_0.20-41      
#>  [91] purrr_0.3.4           htmlwidgets_1.5.1     rstantools_2.1.1     
#>  [94] labeling_0.3          tidyselect_1.1.0      processx_3.4.3       
#>  [97] plyr_1.8.6            magrittr_1.5          R6_2.4.1             
#> [100] generics_0.0.2        pillar_1.4.6          haven_2.3.1          
#> [103] foreign_0.8-80        withr_2.2.0           xts_0.12-0           
#> [106] survival_3.2-3        abind_1.4-5           tibble_3.0.3         
#> [109] crayon_1.3.4          car_3.0-8             survMisc_0.5.5       
#> [112] uuid_0.1-4            rmarkdown_2.3         officer_0.3.12       
#> [115] grid_4.0.2            readxl_1.3.1          data.table_1.13.0    
#> [118] callr_3.4.3           threejs_0.3.3         forcats_0.5.0        
#> [121] digest_0.6.25         pbmcapply_1.5.0       xtable_1.8-4         
#> [124] httpuv_1.5.4          tidyr_1.1.1           RcppParallel_5.0.2   
#> [127] stats4_4.0.2          munsell_0.5.0         survminer_0.4.8      
#> [130] shinyjs_1.1

1. Goodrich B, Gabry J, Ali I, Brilleman S. Rstanarm: Bayesian Applied Regression Modeling via Stan.; 2020. https://mc-stan.org/rstanarm.