Minute Monday 20: When NOT to run a formal A/B test
4 questions to ask
This post is presented by Skiplevel. Become more technical without learning how to code. In my career leading product and growth teams, one thing I often saw holding product managers back was limited technical literacy and the inability to work fluently and productively with their counterparts in engineering.
Skiplevel trains product managers and tech operators like you to improve your technical communication, improve collaboration with engineering teams, and accelerate your career in tech through their comprehensive program.
I’m a huge fan of experimentation, both as a general approach to product development and in the more formal definition:
(ex·per·i·ment | ik-ˈsper-ə-mənt)
A procedure carried out under controlled conditions in order to discover an unknown effect or law, to test or establish a hypothesis, or to illustrate a known law.
But sometimes, running formal A/B, A/B/n, or multivariate tests is not appropriate.
Here is some simple guidance, in the form of 4 questions to ask yourselves, on when you should NOT run a formal experiment:
1. Do we have such high confidence in doing something (maybe we've learned from past experiments or have other learnings) that just moving forward with the implementation/scaling of an idea is low risk?
Is there a plausible downside to just doing this without an experiment?
2. Do we have enough traffic to this product surface area to make it feasible to run an experiment within a timeframe we feel is acceptable for learning?
Traffic volume is a critical parameter in experiment design: assuming reasonable experiment parameters, we need enough traffic across all of our variants to learn quickly with high confidence. If we don't have it, we inevitably compromise our learning velocity with tests that take too long to conclude, or tests that are inconclusive because the minimum detectable effect (MDE) is excessively large.
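To make the traffic question concrete, here's a rough back-of-envelope sketch using the standard two-proportion z-test sample-size approximation. The baseline conversion rate, MDE, and daily traffic numbers below are illustrative assumptions, not figures from this post:

```python
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, mde_abs, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-proportion z-test.

    baseline_rate: current conversion rate (e.g. 0.05 = 5%)
    mde_abs: minimum detectable effect, absolute (e.g. 0.01 = +1 point)
    """
    p1 = baseline_rate
    p2 = baseline_rate + mde_abs
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return int(n) + 1  # round up

# Hypothetical surface: 5% baseline conversion, we want to detect a
# 1-point lift, A/B test (2 variants), ~2,000 visitors/day.
n = sample_size_per_variant(0.05, 0.01)
days_to_run = 2 * n / 2000
```

With these assumptions you need roughly 8,000+ visitors per variant, i.e. over a week of runtime; halve the traffic or shrink the MDE and the test stretches to weeks, which is exactly the learning-velocity compromise described above.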
3. Are the questions we're trying to answer more 'why?' than 'what?'?
If so, then a formal experiment likely isn't the best path forward; leaning on qualitative research instead (user interviews, observational studies, etc.) might be more appropriate.
4. Do we have a well-informed and well-formed hypothesis?
Is there objective data (qualitative or quantitative) that supports our beliefs, or are we leaning too much on opinion? If not, we should try to find that before investing resources into launching an experiment.
If this checklist helps prevent just one unnecessary or misguided experiment, I’ll be a happy chap.