Is Subjectivist Bayesianism "Biased"?
Jake Metzger
Online frequentists, at least in my experience, often charge that subjectivist Bayesians do little more than inject bias into their inferences through the use of a subjective prior; frequentists, meanwhile, often claim (asymptotic) unbiasedness as a virtue of many of their estimators. This is a point in favor of frequentism, or so its proponents say.
There are two ways to read this complaint about Bayesianism: from the internal perspective of a frequentist, or from an external perspective that is potentially non-frequentist or even Bayesian. This is because the very definition of bias depends on what one considers to be the truth of the matter – the norm governing the correctness of the inference.
Frequentists generally (but not universally) see inference as being in the business of recovering true model parameters from a true data-generating distribution. Subjectivist Bayesians do not necessarily see the role of inference this way: for them, inference is a probabilistically consistent and maximally conservative method of updating degrees of belief as other beliefs change, typically upon the acknowledgement of empirical data. So it’s unsurprising that these methods should disagree, potentially often, in their estimates of nominally identical quantities, especially in the short run. When a frequentist complains to a Bayesian that they are violating some frequentist performance criterion, it’s important to ask whether that criterion is an artifact of the frequentist perspective or whether it’s independently motivated. If it is independently motivated, surely it should find its way into a general case in favor of frequentism rather than into a fiddly complaint about specifically Bayesian performance. So I’m not inclined to take frequentists throwing stones merely from within their glass houses very seriously; let’s consider this complaint instead without assuming the correctness of frequentism and see what it amounts to.
To do this, I think it’s sufficient to highlight the role that the notion of bias plays in the complaint. After all, bias is a kind of error, and error is governed by a negative norm: less error is generally better. If we understand “bias” to be a quantitative measure of systematic error, then we need to be clear about what counts as error in the relevant inferences.
Under the frequentist perspective, bias is the systematic difference between an estimated value and the “true” value. This is usually cashed out as the difference between the long-run expectation of an estimator and the true parameter value. In a simulation context, we can actually talk about “true” parameter values, but in the real world, it’s questionable whether there are true parameters at all for most interesting statistical questions and models (though there are so-called agnostic frequentists who acknowledge this point). So this can’t quite be correct for frequentist notions of bias as applied to real-world applications, but let’s set that aside.
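To make the frequentist notion concrete, here is a minimal sketch in a Beta-Binomial setting, where the bias of an estimator can be computed exactly rather than simulated. The function names and the choice of a Beta(2, 2) prior are my own for illustration; the point is just that a posterior-mean estimator, which incorporates a prior, has nonzero frequentist bias, while the MLE does not.

```python
# Illustrative sketch (names and prior are assumptions, not from the post).
# Model: x ~ Binomial(n, p); we estimate p.
# MLE: x/n. Posterior mean under a Beta(a, b) prior: (x + a)/(n + a + b).
# Frequentist bias = E[estimator] - p, expectation over repeated samples.

def freq_bias_mle(n, p):
    # E[x/n] = (n*p)/n = p, so the MLE is exactly unbiased.
    return (n * p) / n - p

def freq_bias_posterior_mean(n, p, a, b):
    # E[(x + a)/(n + a + b)] = (n*p + a)/(n + a + b), which differs from p
    # unless p happens to equal the prior mean a/(a + b).
    return (n * p + a) / (n + a + b) - p

n, p = 20, 0.3
print(freq_bias_mle(n, p))                   # 0.0: unbiased
print(freq_bias_posterior_mean(n, p, 2, 2))  # positive: the prior pulls toward 1/2
```

By this frequentist yardstick, the prior does indeed introduce systematic error, which is the complaint in its sharpest form.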
While frequentists talk about bias and tout unbiasedness as a virtue, Bayesians don’t talk about bias much. Why is that? Well, subjectivist Bayesians often consider inference to be akin to an extension of logic – as long as the method is correctly followed and the assumptions granted, the inference is correct. So, if we identify the result of Bayesian inference with the result of the application of Bayes’ Rule, then there is no error and thus, in particular, no systematic error in the previous sense: the estimated value is the correct value, by definition. So, for these Bayesians applying Bayes’ Rule, there’s no sense in talking about bias.
However, let’s put the shoe on the other foot and consider the performance of frequentist estimators relative to their Bayesian ground truth. Indeed, because frequentists neglect an important term in the production of their estimators – precisely because they have no role for a prior distribution – it is they who turn out to be in systematic error.
So while frequentists may be correct in pointing out that Bayesian estimates often have systematic frequentist error, it’s also true that frequentist estimates often have systematic Bayesian error. So, for a subjectivist Bayesian, it is the frequentist that is biased in their inferences.
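The reversal can be sketched concretely in the same Beta-Binomial setting as before. This is an illustration under assumed names and an assumed Beta(2, 2) prior, not anything from the post itself: if the prior is granted, the posterior mean is the optimal point estimate under squared-error loss, and the MLE – which drops the prior term – does systematically worse on average over the setup the Bayesian takes as given.

```python
# Illustrative sketch (setup and names are my assumptions).
# Granting a Beta(a, b) prior, draw "true" p from the prior, data from
# Binomial(n, p), and compare average squared error of MLE vs posterior mean.
import random

random.seed(0)
a, b, n, trials = 2, 2, 10, 20000
mle_sse = bayes_sse = 0.0
for _ in range(trials):
    p = random.betavariate(a, b)                     # true p, per the prior
    x = sum(random.random() < p for _ in range(n))   # x ~ Binomial(n, p)
    mle_sse += (x / n - p) ** 2                      # frequentist estimator
    bayes_sse += ((x + a) / (n + a + b) - p) ** 2    # posterior mean

# Averaged over the assumed setup, the MLE's error exceeds the posterior
# mean's: relative to this Bayesian ground truth, the MLE errs systematically.
print(mle_sse / trials, bayes_sse / trials)
```

Nothing here is a surprise mathematically; the point is only that “who is systematically in error” flips with the choice of ground truth.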
Overall, the complaint that Bayesians are biased holds only relative to a frequentist measure of performance – long-run frequentist error. If we instead consider a subjectivist Bayesian measure of performance, it’s the frequentist who has the systematic error. Without independent reasons to already accept frequentism, this complaint doesn’t have much substance. A better version of it would concern calibration and its importance, for example, in scientific inference. But calibration, as I’ve considered in previous posts, is not just about making inferences match empirical rates or about reducing frequentist bias; it’s about situating empirical probability within probabilistic inference. This is incompatible with subjectivist Bayesianism, but it is required on a properly objectivist Bayesian view, such as I’ve previously discussed.
Reminder: I’m not a subjectivist Bayesian, and I find myself somewhat sympathetic to the frequentist’s concerns here; I just think this particular articulation is empty.