## What is an Unbiased Estimator?

Aliases: *unbiased*

Finite-sample unbiasedness is one of the desirable properties of a good estimator. An estimator is finite-sample unbiased when it shows no systematic bias away from the true value θ*, on average, for any sample size n. If we were to repeat the estimation procedure infinitely many times with a given sample size n, the arithmetic mean of the resulting estimates would equal the true value θ*. In other words, the estimator's sampling distribution has a mean equal to the parameter it estimates.
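This averaging property is easy to check by simulation. The sketch below (a hypothetical setup, not from the original text) repeatedly draws samples from a normal population and averages two classic estimators: the sample mean, which is unbiased for the population mean, and the "divide by n" variance estimator, which is biased downward (dividing by n−1 removes that bias):

```python
import random
import statistics

random.seed(42)

true_mean, true_sd = 5.0, 2.0   # assumed population parameters
n, reps = 10, 100_000           # sample size and number of repetitions

mean_sum = var_n_sum = var_n1_sum = 0.0
for _ in range(reps):
    sample = [random.gauss(true_mean, true_sd) for _ in range(n)]
    mean_sum += statistics.mean(sample)        # unbiased for true_mean
    var_n_sum += statistics.pvariance(sample)  # divides by n: biased
    var_n1_sum += statistics.variance(sample)  # divides by n-1: unbiased

print(mean_sum / reps)    # ~5.0, matches true_mean
print(var_n_sum / reps)   # ~3.6, below the true variance of 4.0
print(var_n1_sum / reps)  # ~4.0
```

The averaged sample mean lands on the true mean, while the biased variance estimator settles at (n−1)/n times the true variance, exactly as the definition of bias predicts.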

It is easy to see why this is a desirable property: we do not want an estimator that systematically under- or over-estimates the value of interest. All estimators are subject to the bias-variance trade-off: reducing bias tends to increase variance, and vice versa: reducing variance tends to increase bias. A simple extreme example illustrates the issue. Say you are using an estimator E that produces the fixed value 5% no matter what θ* is. Its variance is zero, but it is also maximally biased, since it will report 5% whether the true value θ* is 3% or 99%.

Frequentist estimators used in A/B testing are normally unbiased, and if not fully unbiased, they aim to come close. For example, sequential testing produces an estimator which is unconditionally unbiased but which shows high bias at very early or very late monitoring stages. Part of an AGILE A/B test is the deployment of procedures that produce near-unbiased estimators conditional on the stopping stage.
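The conditional bias at early stopping stages can be seen in a toy simulation (an assumed two-look design, not the AGILE procedure itself): a test stops at the first look whenever the z statistic crosses a boundary of 2, and we average the naive sample means of only those runs that stopped early.

```python
import random
import statistics

random.seed(7)

true_effect, sd = 0.1, 1.0  # assumed true effect and noise level
n1 = 100                    # observations available at the first look
z_crit = 2.0                # stopping boundary at the first look
se1 = sd / n1 ** 0.5

early_estimates = []
for _ in range(50_000):
    sample = [random.gauss(true_effect, sd) for _ in range(n1)]
    est = statistics.mean(sample)
    if est / se1 > z_crit:  # boundary crossed: stop early
        early_estimates.append(est)

# Conditional on stopping early, the naive estimate overshoots true_effect.
print(statistics.mean(early_estimates))  # well above the true 0.1
```

Only runs with unusually large sample means cross the boundary early, so the estimates collected at that stage systematically overstate the true effect even though the estimator is unbiased unconditionally.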