## What is a Type I Error?

Aliases: *false positive*

A type I error, also known as an error of the first kind, is committed when a true null hypothesis is **incorrectly rejected**. For example, with a null hypothesis of no difference, a type I error occurs when we observe a statistically significant difference between the means of the Control Group and Test Group(s) even though no true difference exists.

After an A/B test is completed we have either committed a type I error or we have not. **The type I error rate** is therefore a characteristic of the testing procedure, not of the hypothesis being tested. A properly designed and executed significance test offers conservative guarantees on the probability of committing a type I error. In particular, when based on a well-behaved statistic (often the sample mean), the procedure is asymptotically consistent, and the statistic is unbiased, fully efficient, and sufficient in finite samples.
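The procedural nature of the type I error rate can be illustrated with a minimal simulation sketch (all names and parameters below are illustrative assumptions, not from this glossary): repeatedly running a two-sided z-test on two groups drawn from the *same* distribution, so the null hypothesis is true by construction, should produce a false positive rate close to the chosen significance threshold.

```python
import math
import random

def z_test_p_value(a, b):
    """Two-sided z-test p-value for a difference in means (large-sample sketch)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
ALPHA = 0.05   # nominal significance threshold (assumed)
N_SIMS = 2000
false_positives = 0
for _ in range(N_SIMS):
    # Both groups are drawn from the SAME distribution: the null is true.
    control = [random.gauss(0, 1) for _ in range(200)]
    treatment = [random.gauss(0, 1) for _ in range(200)]
    if z_test_p_value(control, treatment) < ALPHA:
        false_positives += 1  # a type I error was committed

print(f"empirical type I error rate: {false_positives / N_SIMS:.3f}")
```

Any single simulated test either commits a type I error or does not; only across the ensemble of repetitions does the rate emerge, hovering near the nominal 5%.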

The p-value reported after an A/B test corresponds to a test designed to have a type I error probability equal to the p-value. Note that when the test's assumptions are violated, the statistical model is no longer adequate for the test actually performed, and there can be significant discrepancies between the nominal (reported) type I error and the actual type I error of the procedure (see nominal p-value and actual p-value).

The type I error of a test trades off against its type II error: increasing one decreases the other, and vice versa, assuming the variance, sample size, and minimum effect of interest are held fixed.
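This trade-off can be shown numerically with a standard normal-approximation power calculation (the effect size and sample size below are assumed for illustration): tightening the significance threshold lowers the type I error but raises the type II error.

```python
import math
from statistics import NormalDist

nd = NormalDist()
n_per_group = 1000  # sample size per group (assumed, fixed)
effect = 0.1        # standardized minimum effect of interest (assumed, fixed)
se = math.sqrt(2 / n_per_group)  # SE of the standardized difference in means
z_effect = effect / se

powers = []
for alpha in (0.10, 0.05, 0.01):
    z_crit = nd.inv_cdf(1 - alpha / 2)          # two-sided critical value
    power = 1 - nd.cdf(z_crit - z_effect)       # ignoring the negligible far tail
    powers.append(power)
    print(f"alpha={alpha:.2f} -> power={power:.3f}, type II error={1 - power:.3f}")
```

As alpha shrinks from 0.10 to 0.01 the critical value moves outward, so fewer true effects clear it: power drops and the type II error grows, everything else being equal.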

## Articles on Type I Error

Like this glossary entry? For an in-depth and comprehensive reading on A/B testing stats, check out the book "Statistical Methods in Online A/B Testing" by the author of this glossary, Georgi Georgiev.