## What is beta?

The Greek letter beta (β) most often denotes the probability of committing a type II error (failing to reject a false null hypothesis) that is deemed acceptable for the test at hand, given a specific effect of interest. A test with a lower β is more sensitive to smaller departures from the null hypothesis and thus has a greater probability of rejecting a false null.

Beta is inversely related to statistical power: POW = 1 - β. Formally, POW(T_{(α)}; μ_{1}) = P(d(X) > c_{(α)}; μ_{1}) and β(T_{(α)}; μ_{1}) = P(d(X) ≤ c_{(α)}; μ = μ_{1}) for any μ_{1} greater than μ_{0}, where c_{(α)} is the significance threshold, d(X) is the test statistic (a distance function), μ is the true magnitude of the effect, and μ_{1} is its magnitude under a particular alternative hypothesis H_{1}.
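As a minimal sketch of these definitions, assuming a one-sided z-test on a mean with known standard deviation (the helper name and all numeric values below are hypothetical, chosen only for illustration):

```python
from math import sqrt
from statistics import NormalDist

def power_one_sided_z(mu0, mu1, sigma, n, alpha=0.05):
    """Power of a one-sided z-test of H0: mu = mu0 vs H1: mu = mu1 > mu0."""
    z = NormalDist()
    se = sigma / sqrt(n)
    # significance threshold c_(alpha) on the scale of the sample mean
    c_alpha = mu0 + z.inv_cdf(1 - alpha) * se
    # beta = P(d(X) <= c_(alpha); mu = mu_1): the test statistic falls
    # below the threshold even though the true mean is mu_1
    beta = z.cdf((c_alpha - mu1) / se)
    return 1 - beta  # POW = 1 - beta

# e.g. detecting a true effect of 0.5 with sigma = 1, n = 30, alpha = 0.05
print(round(power_one_sided_z(0, 0.5, 1, 30), 3))
```

Moving μ_{1} closer to μ_{0} shrinks the power and inflates β, which is why β is always stated relative to a specific effect of interest.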

After a test is complete, beta somewhat loses its purpose. While it can still be computed against different values of interest, using a confidence interval and judging the test's sensitivity by the interval width is a more compelling approach. It should be noted that if the result was just barely significant, the test's power against an alternative equal to the observed effect is always 0.5.
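The 0.5 figure follows directly from the definitions above: when the observed effect lands exactly on the significance threshold, plugging μ_{1} = c_{(α)} into the β formula gives Φ(0) = 0.5. A quick check, under the same hypothetical one-sided z-test setup:

```python
from math import sqrt
from statistics import NormalDist

z = NormalDist()
mu0, sigma, n, alpha = 0.0, 1.0, 30, 0.05  # hypothetical values
se = sigma / sqrt(n)
c_alpha = mu0 + z.inv_cdf(1 - alpha) * se

# a "just significant" result means the observed effect equals c_(alpha);
# power against an alternative mu_1 = c_(alpha) is then exactly 0.5
power = 1 - z.cdf((c_alpha - c_alpha) / se)
print(power)
```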

The error rates alpha (α) and beta (β) are inversely related: increasing one decreases the other, assuming fixed variance, sample size and minimum effect of interest.
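This trade-off can be seen numerically by varying α while holding everything else fixed, again under the hypothetical one-sided z-test from above:

```python
from math import sqrt
from statistics import NormalDist

z = NormalDist()
mu0, mu1, sigma, n = 0.0, 0.5, 1.0, 30  # hypothetical values
se = sigma / sqrt(n)

betas = []
for alpha in (0.10, 0.05, 0.01):
    c_alpha = mu0 + z.inv_cdf(1 - alpha) * se  # stricter alpha -> higher threshold
    beta = z.cdf((c_alpha - mu1) / se)          # higher threshold -> larger beta
    betas.append(beta)
    print(f"alpha = {alpha:.2f}  ->  beta = {beta:.3f}")
```

With variance, sample size, and the effect of interest fixed, every reduction in α pushes the threshold further out and is paid for with a larger β.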

## Articles on beta

If you want to read more on A/B testing stats, check out the upcoming book "Statistical Methods in Online A/B Testing" by the author of this glossary, Georgi Georgiev.