In this week's newsletter: the fourth statistical ghost, peeking.
When running A/B tests, it is tempting to check your results while the test is still in progress, colloquially known as 'peeking'. But in doing so you are giving this ghost more opportunities to fool you into thinking an ineffective treatment has had an impact.
Simply looking at the data is not a problem, but taking any action based on what you see can introduce bias. After all, it is far more common to stop an experiment halfway through its intended run time because it has reached significance than because it hasn't.
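You can see the peeking effect in a quick simulation. The sketch below (an illustration I've put together, not from any linked article) runs A/A tests, where both arms draw from the same distribution, so every 'significant' result is a false positive. Stopping at the first of ten interim checks that crosses p < 0.05 inflates the false positive rate well above the nominal 5% you get from a single look at the end:

```python
import random
import math

def z_pvalue(mean_diff, n, sigma=1.0):
    # Two-sided z-test for a difference of means, equal n per arm,
    # known standard deviation sigma.
    se = sigma * math.sqrt(2.0 / n)
    z = abs(mean_diff) / se
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def run_experiment(rng, n_total=1000, peeks=10, alpha=0.05):
    # A/A test: both arms are standard normal, so any rejection
    # is a false positive. Stop early at the first significant peek.
    a_sum, b_sum = 0.0, 0.0
    checkpoints = {n_total * (i + 1) // peeks for i in range(peeks)}
    for n in range(1, n_total + 1):
        a_sum += rng.gauss(0, 1)
        b_sum += rng.gauss(0, 1)
        if n in checkpoints:
            diff = a_sum / n - b_sum / n
            if z_pvalue(diff, n) < alpha:
                return True  # stopped early: a false positive
    return False

rng = random.Random(42)
trials = 2000
peeking_fpr = sum(run_experiment(rng, peeks=10) for _ in range(trials)) / trials
single_fpr = sum(run_experiment(rng, peeks=1) for _ in range(trials)) / trials
print(f"false positive rate with 10 peeks: {peeking_fpr:.3f}")
print(f"false positive rate with 1 look:   {single_fpr:.3f}")
```

With ten peeks the false positive rate typically lands in the 15-20% range rather than the 5% the test nominally promises, which is exactly the trap of acting on interim results.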
What else in this newsletter:
- How small should the change in my A/B test variation be?
- Methods of workflow sanity with Google Tag Manager.
- [DUTCH] Setting up an experiment-driven culture
- AB Tasty Launches Nudge Engagement
- AI For Everyone | Coursera
Enjoy your week.