Bayes fixes small n, doesn’t it?
What is a methods-careful practitioner to do when the number of observations (n) is small? I don’t know how many times I’ve been told by a well-meaning Bayesian some variation of
Bayesian estimation addresses the “small-n problem”
This is right and wrong.
Maximum likelihood estimators (MLEs) leverage large n in two ways.
1. Making inferences about parameters.
2. Checking model fit.
The goal of #1 is to answer questions like “Is θ > 0?” or “What is the shortest interval that has a 95% chance of containing θ?” MLEs let us take a stab at answering* this. A large number of observations means we can apply the Central Limit Theorem, which means we can use the standard errors to test simple hypotheses and build confidence intervals. The goal of #2 is to justify our choice of model after the fact. Presumably we had a good, substantively informed theory to justify our choice of model, but it’s nice to be able to say afterward “See? The model fits well, so we weren’t crazy to choose it.”
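To make #1 concrete, here is a minimal sketch of the usual MLE-plus-CLT recipe, with a binomial proportion standing in for whatever parameter θ you care about (the counts are made up): compute the point estimate, get a standard error from the asymptotic approximation, and build a Wald-style confidence interval.

```python
# A minimal sketch of point #1: MLE + Central Limit Theorem -> Wald-style inference.
# The binomial proportion p is a hypothetical stand-in for the parameter of interest.
import numpy as np

def wald_ci(successes, n, z=1.96):
    """MLE and asymptotic (Wald) 95% confidence interval for a binomial proportion."""
    p_hat = successes / n                      # maximum likelihood estimate
    se = np.sqrt(p_hat * (1 - p_hat) / n)      # standard error from the asymptotic approximation
    return p_hat, (p_hat - z * se, p_hat + z * se)

p_hat, (lo, hi) = wald_ci(successes=12, n=100)  # made-up data
print(f"MLE = {p_hat:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
# Testing a simple hypothesis like "is p = 0.5?" amounts to checking
# whether 0.5 falls outside this interval.
```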
How many is a “small” number of observations? It’s difficult to say. Long (1997) gives some guidance, suggesting 100 observations as a minimum for maximum likelihood estimation and at least 10 observations per parameter in the model. However, we never really know whether this is enough to trust our inferences and assessments of fit.
What happens if we have too few observations? Both #1 and #2 become unreliable. We have too little data to assume that the Central Limit Theorem has “kicked in,” so our point estimates carry more uncertainty than the asymptotic approximation acknowledges, which means our standard errors are too small**, and therefore any inferences built on them are unreliable. Our tests for model fit are similarly starved for information, so any post-hoc justification of the model will be difficult.
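A quick simulation makes the problem visible. Sticking with the hypothetical binomial example above (the true p = 0.1 is an arbitrary choice), the Wald interval’s actual coverage falls well short of the nominal 95% when n is small and only approaches it as n grows.

```python
# A rough simulation of how point #1 breaks down with small n: the Wald interval
# for a binomial proportion with true p = 0.1 covers the truth far less often than
# the nominal 95% at n = 10, and close to 95% only at large n.
import numpy as np

rng = np.random.default_rng(0)

def wald_coverage(n, p=0.1, reps=100_000, z=1.96):
    y = rng.binomial(n, p, size=reps)          # simulated data sets
    p_hat = y / n                              # MLEs
    se = np.sqrt(p_hat * (1 - p_hat) / n)      # asymptotic standard errors
    covered = (p_hat - z * se <= p) & (p <= p_hat + z * se)
    return covered.mean()

for n in (10, 30, 100, 1000):
    print(f"n = {n:4d}: actual coverage of the nominal 95% interval = {wald_coverage(n):.3f}")
```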
What does Bayes fix? Bayesian estimators are finite-sample estimators: more data gives more precise estimates, but the measures of uncertainty of those estimates are honest whatever the amount of data, so inferences remain reliable even when the data set is small. Want to know whether θ > 0 but your n is small? No problem for a Bayesian.
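Here is the same toy proportion problem handled the Bayesian way, with a deliberately tiny made-up data set. With a Beta(1, 1) prior the posterior is Beta(1 + y, 1 + n − y), so quantities like the probability that p exceeds a threshold and a 95% credible interval are exact for any n; no appeal to the Central Limit Theorem is required.

```python
# A minimal Bayesian sketch of the same proportion problem with a tiny data set.
# Beta(1, 1) prior + binomial likelihood -> Beta(1 + y, 1 + n - y) posterior,
# which is exact no matter how small n is.
from scipy.stats import beta

y, n = 6, 8                          # hypothetical small data set
posterior = beta(1 + y, 1 + n - y)

print("Pr(p > 0.5 | data) =", 1 - posterior.cdf(0.5))
print("95% credible interval:", posterior.ppf([0.025, 0.975]))
```

Of course, with so few observations the credible interval is wide. That is the point: the posterior reports honestly how little we know.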
What about model fit? Bayesians have the same tools as frequentists for checking model fit, plus the numerical and graphical analysis of posterior predictive distributions. As with inferences about parameters, more data is more information and thus preferable, but whatever checks we can run remain valid. As a purely practical matter, if we have a very small data set we probably will not be able to conclude anything about model fit from the data. This puts the burden back on the substantive/theoretical argument for the model form. If we understand the data-generating process very well, great; if not, then this part of our argument will need extra scrutiny.
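For the same toy model, a posterior predictive check is just a few lines: draw parameter values from the posterior, simulate replicated data sets of the same size, and compare a test statistic from the replications to what we actually observed.

```python
# A sketch of a posterior predictive check for the same toy beta-binomial model:
# draw p from the posterior, simulate replicated counts, compare to the observed count.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
y, n = 6, 8                                    # same hypothetical data as above
p_draws = beta(1 + y, 1 + n - y).rvs(size=5_000, random_state=rng)
y_rep = rng.binomial(n, p_draws)               # replicated data under the fitted model

# Posterior predictive p-value: how often replicated data are at least as extreme
# as the observed count. Values near 0 or 1 flag misfit; with n = 8 almost nothing
# looks extreme, which is exactly the "too little data to check fit" problem above.
print("Pr(y_rep >= y) =", np.mean(y_rep >= y))
```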
Bottom line: Bayesian estimators don’t create more information. However, they do let us correctly identify how sure we are about the inferences we draw. That’s a clear improvement. Other Bayesians aren’t helping by overselling it.
===
* Don’t get saucy with me about how frequentist confidence intervals (CIs) either contain or don’t contain θ, implying that it’s meaningless to talk about the probability of θ being in the interval. CIs are better than commonly described; details in my next post.
** In theory our standard errors could be too small, too large, or correct. We could appropriately account for this additional uncertainty by increasing our standard errors (think “put a confidence interval around our confidence interval”), except we can’t know how much to increase them without the very information we lack. See “reasoning, circular.”