Next time you see someone “misinterpret” a confidence interval, wait a second. They’re actually probably okay.
It is regular sport for Bayesians to criticize frequentist confidence intervals as unintuitive, usually misinterpreted, and based on what are usually unjustifiable assumptions. There are good reasons for this: they are unintuitive, usually misinterpreted, and based on what are usually unjustifiable assumptions.
Confidence intervals take into account the uncertainty we have in trying to describe a population given that we only observe a random sample. Maybe our sample is representative of the population, maybe not. The smaller the sample, the more likely it is that, through random variation, we get a sample that suggests a relationship (causes us to reject the null hypothesis) when really there is no relationship (the null is true). If we take 100 random samples of the same size and for each construct a (different) 95% confidence interval, how many will contain the true value of the parameter? We expect that about 95 of those constructed intervals will contain the true value.
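A quick simulation makes the coverage claim concrete. This is a sketch: the normal population, its parameters, the sample size, and the seed are all arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
true_mean = 5.0          # the parameter: fixed, and unknown in real life
n, reps = 50, 100        # 100 random samples of the same size

covered = 0
for _ in range(reps):
    sample = rng.normal(true_mean, 2.0, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)   # standard error of the mean
    lo = sample.mean() - 1.96 * se         # 95% confidence interval
    hi = sample.mean() + 1.96 * se
    covered += lo <= true_mean <= hi

print(covered)  # close to 95 of the 100 intervals contain the true mean
```

Each run produces a different set of intervals, but in the long run about 95% of them cover the true mean; that long-run property is the frequentist guarantee.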
We don’t have 100 samples of the same size, so this is not the question we want to answer. Here’s what we want to know: What is the probability that the true value is in the interval?
There are a variety of sound reasons why the frequentist approach does not make sense. I understand the difference between believing that the true parameter is “fixed and known only to God” (the frequentist assumption) and a random variable (the Bayesian assumption). I agree completely that a confidence interval answers the wrong question.
It doesn’t matter.
For large samples and given the regularity conditions of maximum likelihood estimators, the marginal posterior distribution for a single parameter is approximately normal with a mean equal to the MLE and standard deviation equal to the standard error. Under these conditions, maximum likelihood and Bayesian estimators give you the same inferences.
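A conjugate example shows the agreement. This is a sketch with made-up counts; the flat Beta(1, 1) prior is my assumption here, and any reasonably diffuse prior would give much the same answer at this sample size.

```python
import numpy as np
from scipy import stats

n, k = 1000, 430    # hypothetical data: 430 successes in 1000 trials
p_hat = k / n       # MLE of the binomial proportion
se = np.sqrt(p_hat * (1 - p_hat) / n)   # standard error of the MLE

# Frequentist 95% confidence interval from the normal approximation
ci = np.array([p_hat - 1.96 * se, p_hat + 1.96 * se])

# Bayesian 95% credible interval: with a flat Beta(1, 1) prior, the
# posterior is Beta(k + 1, n - k + 1); take its 2.5% and 97.5% quantiles
cred = stats.beta.ppf([0.025, 0.975], k + 1, n - k + 1)

print(ci, cred)  # the two intervals agree to roughly three decimal places
```

With a large, well-behaved sample, the posterior is approximately normal around the MLE, so the two intervals are numerically almost indistinguishable, which is exactly the point.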
Bayesian researcher perspective: Suppose I want a credible interval for a parameter where I have a lot of data and the model is well-behaved but I don’t have convenient code for drawing posteriors. I could estimate the MLE and use the resulting confidence interval as an approximate credible interval. My friend and colleague Jeff Gill calls this the “lazy Bayesian” approach. Lazy, efficient, tomato, tomahto.
Bayesian consumer perspective: Suppose you are reading an article where there is plenty of data and a well-behaved model, but the author provides frequentist confidence intervals. You can treat them as approximate credible intervals. Easy peasy.
Pragmatist perspective: As long as the conditions are met, you can go on “misinterpreting” confidence intervals.
This “lazy Bayesian” approach is not limited to simple inferences about single parameters. King, Tomz, and Wittenberg explain how to generate draws from the approximate posterior (as opposed to approximate draws from the actual posterior via MCMC) and make any inference a Bayesian can make, using only the output of maximum likelihood estimation.
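A one-parameter sketch of that idea, with illustrative counts and an arbitrary number of draws: estimate a log-odds by maximum likelihood, draw parameter values from the approximate normal posterior, and transform each draw into the quantity you actually care about.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 1000, 430                    # hypothetical data: 430 of 1000
logit_hat = np.log(k / (n - k))     # MLE of the log-odds
se = np.sqrt(1 / k + 1 / (n - k))   # its standard error (Wald)

# Draw parameter values from the approximate (normal) posterior...
draws = rng.normal(logit_hat, se, size=10_000)
# ...then transform each draw into the quantity of interest: a probability
prob = 1 / (1 + np.exp(-draws))

# Any Bayesian summary is now just arithmetic on the draws
lo, hi = np.quantile(prob, [0.025, 0.975])
```

With multiple parameters, the same recipe applies: draw from a multivariate normal centered at the MLE vector with the estimated variance-covariance matrix, then compute the quantity of interest from each draw.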
As a Bayesian, I advocate strongly for increasing the use of Bayesian methods. However, we should be careful to avoid overselling the advantages.