Farid Asadi

Growth, Experiment Driven.

Stopping Your Experiment at Statistical Significance Isn’t a Good Idea!

The experiment is running, and you notice that the variant is winning — the difference is statistically significant. Stopping the test and declaring a winner at that moment risks a Type I error: if you check repeatedly, you will often catch a moment of spurious significance, and the apparent winner can flip shortly after.

Yes, the result is significant, but the timing is biased — you chose to look exactly when the numbers happened to cross the threshold.
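How much does repeated checking inflate the false positive rate? A quick A/A simulation makes it concrete — a sketch, assuming a simple two-sided z-test on proportions, a 5% significance threshold, and arbitrary peek/traffic numbers:

```python
import math
import random

def z_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    return math.erfc(abs(z) / math.sqrt(2))

def run_aa_test(rng, base_rate=0.10, n_per_arm=5_000, peek_every=250):
    """One A/A test: both arms share the same true rate, so any
    'significant' result is a false positive. Returns whether a
    peeker would have stopped early, and the fixed-horizon verdict."""
    a = b = 0
    peeked_significant = False
    for i in range(1, n_per_arm + 1):
        a += rng.random() < base_rate
        b += rng.random() < base_rate
        if i % peek_every == 0 and z_pvalue(a, i, b, i) < 0.05:
            peeked_significant = True  # a peeker stops and "wins" here
    final_significant = z_pvalue(a, n_per_arm, b, n_per_arm) < 0.05
    return peeked_significant, final_significant

rng = random.Random(42)
results = [run_aa_test(rng) for _ in range(200)]
peek_fp = sum(e for e, _ in results) / len(results)
fixed_fp = sum(f for _, f in results) / len(results)
print(f"False positive rate with peeking: {peek_fp:.0%}")
print(f"False positive rate at the fixed horizon: {fixed_fp:.0%}")
```

The fixed-horizon rate stays near the nominal 5%, while "stop at the first significant peek" declares a winner in far more of these no-difference tests.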

What can you do instead?

In your pre-experiment calculations, fix an expiration date for the experiment — and stick to it. The required duration depends on the control conversion rate, the traffic (population) you can send to each variant, and the minimum detectable effect. Don't extend the experiment, and don't stop it early.
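Those three inputs translate into a required sample size, and traffic turns that into a duration. A minimal sketch using the standard two-proportion z-test approximation — the 5% baseline rate, 10% relative lift, and 1,000 visitors/arm/day are made-up illustration numbers:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(base_rate, mde_rel, alpha=0.05, power=0.80):
    """Approximate visitors needed per arm for a two-proportion z-test.

    base_rate: control conversion rate (e.g. 0.05 for 5%)
    mde_rel:   minimum detectable effect, relative (e.g. 0.10 for +10%)
    """
    p1 = base_rate
    p2 = base_rate * (1 + mde_rel)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_beta = NormalDist().inv_cdf(power)           # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

def experiment_days(n_per_arm, daily_visitors_per_arm):
    """Turn the sample size into the expiration date you commit to."""
    return math.ceil(n_per_arm / daily_visitors_per_arm)

n = sample_size_per_arm(base_rate=0.05, mde_rel=0.10)
days = experiment_days(n, daily_visitors_per_arm=1_000)
print(f"{n:,} visitors per arm -> run for {days} days")
```

Note how the duration blows up as the minimum detectable effect shrinks — halving the MDE roughly quadruples the required sample — which is why the date has to be set before the experiment starts, not negotiated while it runs.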
