Reading Time: 3 minutes

A/B testing is a critical step in product development because it’s directly tied to revenue. Data-driven companies know this. They run thousands of tests concurrently and even build in-house systems just to monitor them all. Testing is their lifeblood, and for good reason: data science models and product enhancements can’t be called improvements until they’re measured. A/B testing is how that’s generally done. But like anything else, some companies get it right, and some don’t.

It’s really easy to waste a lot of time and resources on A/B testing. You’ve got to know how to make tests work for, not against, your business.

And if you’re not yet testing, there’s no better time to start than now! After all, what gets measured gets managed, and A/B tests are ultimately just powerful measurement tools.

I’ll assume you’re on board with the importance of testing. I’m glad that’s out of the way! There are two main ways companies approach tests: frequentist and probabilistic methods. Frequentist methods involve p-values and not looking at the results before the test is done (no peeking!). They also require lots of impressions, meaning that you’re going to be sending thousands of impressions to options A and B, regardless of how well each performs during the test. Probabilistic methods are much more relaxed about statistical significance and avoid huge numbers of impressions by adapting in real time to the preferences of your customers.
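To see where those thousands of impressions come from, here’s a quick back-of-the-envelope sketch using the standard two-proportion sample-size formula. The numbers are made up for illustration: a 5% baseline conversion rate, a hoped-for lift to 6%, and the usual significance and power settings.

```python
from scipy.stats import norm

# Illustrative assumptions (not from any real test):
p1, p2 = 0.05, 0.06        # baseline rate and the lift we hope to detect
alpha, power = 0.05, 0.80  # conventional significance level and power

z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
z_beta = norm.ppf(power)           # quantile for the desired power

# Standard normal-approximation sample size per arm for comparing two proportions.
n_per_arm = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
print(f"~{n_per_arm:,.0f} impressions per arm")
```

With these assumptions, that works out to roughly 8,000 impressions per arm, and every one of them gets served before you’re allowed to call a winner.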

For one recent client, changing from frequentist A/B tests to a probabilistic framework shaved months off their iteration time, reducing testing time from 3-4 months to as little as half a day. That’s because the number of impressions required for probabilistic tests to “pick a winner” is much lower. Drastically lower.

The probabilistic method I’m talking about here is the Multi-Armed Bandit (MAB), and it’s what big companies like Google use. It’s fast. It self-optimizes, meaning that as the test runs, it automatically starts sending more traffic to the better option. And the MAB is very simple to use.
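To show just how simple, here’s a minimal sketch of one popular MAB strategy, Thompson sampling. The conversion rates and impression count are made-up placeholders; in production you’d update the posteriors from live traffic instead of a simulation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical true conversion rates (unknown to the bandit, used only to simulate clicks).
true_rates = {"A": 0.05, "B": 0.06}

# Beta(1, 1) priors for each arm: `a` counts conversions, `b` counts misses.
a = {arm: 1 for arm in true_rates}
b = {arm: 1 for arm in true_rates}

for _ in range(10_000):  # each loop iteration is one impression
    # Thompson sampling: draw a plausible rate for each arm from its posterior...
    samples = {arm: rng.beta(a[arm], b[arm]) for arm in true_rates}
    # ...and serve the arm whose sampled rate is highest.
    arm = max(samples, key=samples.get)

    # Simulate whether this impression converts, then update that arm's posterior.
    if rng.random() < true_rates[arm]:
        a[arm] += 1
    else:
        b[arm] += 1

for arm in true_rates:
    served = a[arm] + b[arm] - 2  # subtract the two prior pseudo-counts
    print(f"{arm}: {served} impressions, posterior mean {a[arm] / (a[arm] + b[arm]):.3f}")
```

That’s the self-optimizing part in action: as B’s posterior pulls ahead, it tends to soak up most of the 10,000 impressions, so far fewer are “wasted” on the weaker option than in a 50/50 split.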

If you’re still using regular ole A/B testing and you’re tired of seeing thousands of impressions go by before you’re allowed to pick a winner, have a read through this article and see if a MAB approach might save you a lot of time and money.

https://bennettdatascience.com/better-testing-equals-more-revenue/

Of Interest

When A/B testing doesn’t tell the whole story:
Ever wonder if the winner of an A/B test is actually the best long-term option? In other words, what if option A gets more clicks, but those who chose B went on to have higher customer lifetime value? In that case, A/B testing may not be the right tool. And that’s where reinforcement learning comes in. Google’s DeepMind has open-sourced some new libraries to be used in this space. Learn more here: https://towardsdatascience.com/deepmind-quietly-open-sourced-three-new-impressive-reinforcement-learning-frameworks-f99443910b16
How should we handle exams when A.I. is available?
If a university has no way of determining whether an assignment was written by a human or an algorithm, existing grading systems lose any semblance of meritocracy or fairness. This article dives into the power of neural networks to complete assignments in ways we currently can’t distinguish from authentic human work, and what we can do about it: https://onezero.medium.com/a-i-and-the-future-of-cheating-caa0eef4b25d
Keeping with this week’s theme:
Here’s a Collection of A/B Testing Learning Resources: Newbie to Master: https://medium.com/@eva.gong/a-collection-of-a-b-testing-learning-resources-newbie-to-master-6bab1e0d7845