# What does "False Positive Risk" mean?

Definition of False Positive Risk in the context of A/B testing (online controlled experiments).

## What is False Positive Risk?

Aliases: False Discovery Rate, False Findings Rate, FPR, FFR

False positive risk is the probability that a statistically significant outcome is a false positive. Terms used for the same concept in the literature include “false discovery rate” (FDR), “false positive report probability” (FPRP), and “false findings rate” (FFR). It should not be confused with the well-known Benjamini & Hochberg False Discovery Rate used in multiple-comparison adjustments.

While the false positive rate (typically known as the type I error rate) is defined as the proportion of statistically significant outcomes expected under a true null hypothesis, false positive risk concerns only tests that have already produced a statistically significant outcome (see Statistical Significance), that is, a p-value below the target alpha.
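The distinction can be illustrated with a small Monte Carlo sketch (an illustration, not taken from the original text): simulate many z-tests where only a fraction of the tested hypotheses correspond to a real effect, then look at what fraction of the *significant* results came from true nulls. The parameter values (10% of tests with a real effect, an effect size giving roughly 80% power at a two-sided 5% level) are assumptions chosen for illustration.

```python
import random

random.seed(42)

def simulate_fpr(n_tests=100_000, prior_true=0.10, effect_z=2.8, z_crit=1.96):
    """Empirically estimate the false positive risk across a set of tests.

    prior_true - assumed share of tests where a real effect exists
    effect_z   - true effect in z-units; 2.8 gives ~80% power at z_crit=1.96
    """
    sig_true_null = 0    # significant results where the null was true
    sig_true_effect = 0  # significant results where a real effect existed
    for _ in range(n_tests):
        has_effect = random.random() < prior_true
        z = random.gauss(effect_z if has_effect else 0.0, 1.0)
        if abs(z) > z_crit:
            if has_effect:
                sig_true_effect += 1
            else:
                sig_true_null += 1
    return sig_true_null / (sig_true_null + sig_true_effect)

# Even though the type I error rate is held at 5%, roughly a third of the
# significant results in this scenario are false positives.
print(simulate_fpr())
```

Note that the type I error rate (5%) stays fixed by construction, while the false positive risk depends heavily on the assumed proportion of true effects among the tested hypotheses.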

On a program-wide level, it aims to estimate the proportion of such findings that are false. Applied to an individual test, it aims to provide a Bayesian probability that the conclusion is false, with reference to some set of A/B tests. It has even been proposed as a guide for setting the significance threshold and statistical power of a test.
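The program-wide estimate described above is commonly computed from the significance threshold, the statistical power, and an assumed prior proportion of true effects. The following minimal sketch implements that standard formula; the function name and the example values (5% alpha, 80% power, 10% of tests with a real effect) are illustrative assumptions, not part of the original text.

```python
def false_positive_risk(alpha, power, prior_true):
    """Expected share of statistically significant results that are false.

    alpha      - significance threshold (type I error rate)
    power      - probability of detecting a true effect (1 - beta)
    prior_true - assumed proportion of tested hypotheses with a real effect
    """
    false_positives = alpha * (1 - prior_true)  # significant under true nulls
    true_positives = power * prior_true         # significant under true effects
    return false_positives / (false_positives + true_positives)

# With alpha = 0.05, 80% power, and real effects in 10% of tests:
print(round(false_positive_risk(0.05, 0.80, 0.10), 2))  # → 0.36
```

Note how sensitive the result is to `prior_true`, a quantity that is rarely known with any precision, which is one reason such estimates are of limited practical value.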

False positive risk sounds like a useful quantity to estimate, until one understands what it actually estimates. Briefly:

• At the level of a set of A/B tests the false positive frequency can be estimated with some objectivity, but it is hardly interesting and offers little insight, especially compared to a full evaluation of the value of the experiments.
• False positive risk fails as a guide for setting the significance threshold and power of a test, since it does not account for the need to balance the two, nor for sample size constraints, and it does not incorporate the business parameters key to a proper risk-reward analysis.
• FPR does not make sense as a post-test statistic at the level of an individual test, as it reveals no test-specific information not already contained in the p-value. Any information from other tests that it incorporates is inherently subjective and of questionable relevance.

See the article referenced below for a detailed explanation of these issues.

Like this glossary entry? For an in-depth and comprehensive reading on A/B testing stats, check out the book "Statistical Methods in Online A/B Testing" by the author of this glossary, Georgi Georgiev.

#### Articles on False Positive Risk

False Positive Risk in A/B Testing (blog.analytics-toolkit.com)

#### Related A/B Testing terms

Statistical Power, Type I Error, p-value

