A/B Test Significance Calculator
Determine if the difference between your A/B test variations is statistically significant. Enter visitors and conversions for each group to get a p-value, relative lift, and confidence interval.
A/B testing compares two versions of a page or feature, a control (A) and a variation (B), to determine which performs better. Statistical significance tells you whether the observed difference reflects a real effect or is likely due to random chance.
The method: This calculator uses a two-proportion z-test. It computes a pooled proportion, calculates the standard error, and derives a z-statistic. The p-value represents the probability of observing a difference this large (or larger) if the two versions truly had equal conversion rates.
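The steps above can be sketched in a few lines of Python. This is an illustrative implementation of a two-proportion z-test, not the calculator's actual source; the function name and example numbers are made up, and the two-sided p-value is obtained from the standard normal tail via `math.erfc`.

```python
import math

def two_proportion_z_test(visitors_a, conv_a, visitors_b, conv_b):
    """Two-proportion z-test: returns (z, two-sided p-value)."""
    p_a = conv_a / visitors_a          # control conversion rate
    p_b = conv_b / visitors_b          # variation conversion rate
    # Pooled proportion under the null hypothesis of equal rates
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    # Standard error of the difference, using the pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value: P(|Z| >= |z|) for a standard normal Z
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical test: 1000 visitors per group, 10% vs. 13% conversion
z, p = two_proportion_z_test(1000, 100, 1000, 130)
print(f"z = {z:.3f}, p = {p:.4f}")
```

With these example numbers the p-value comes out just under 0.05, so the difference would be called significant at the conventional threshold.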
Interpreting results: A p-value below your significance level (typically 0.05) means the result is statistically significant: the variation likely performs differently from the control. The confidence interval shows the plausible range of the true difference in conversion rates. Always ensure an adequate sample size before drawing conclusions; stopping a test early or repeatedly checking for significance ("peeking") inflates the false-positive rate.
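The confidence interval and relative lift can be sketched the same way. This is an assumed implementation (the function name and example figures are illustrative): the interval uses the unpooled standard error, since pooling only applies under the null hypothesis of equal rates, and 1.96 is the critical value for a 95% normal interval.

```python
import math

def diff_ci_and_lift(visitors_a, conv_a, visitors_b, conv_b, z_crit=1.96):
    """95% CI for the difference in conversion rates, plus relative lift."""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    diff = p_b - p_a
    # Unpooled standard error: each group contributes its own variance
    se = math.sqrt(p_a * (1 - p_a) / visitors_a + p_b * (1 - p_b) / visitors_b)
    lift = diff / p_a  # relative lift of the variation over the control
    return diff - z_crit * se, diff + z_crit * se, lift

# Same hypothetical data as above: 10% vs. 13% conversion
lo, hi, lift = diff_ci_and_lift(1000, 100, 1000, 130)
print(f"95% CI for the difference: [{lo:.4f}, {hi:.4f}], lift: {lift:.0%}")
```

Note that the interval here barely excludes zero, which matches the borderline-significant p-value: a CI that contains zero and a p-value above the significance level are two views of the same conclusion.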