
Want to lower risk and stay in tune with your customers?

Constant hypothesis testing is one of the best ways to do this.

As data scientists, we build algorithms to automate tasks. We have a good idea of how they'll perform: we generally use historical data to assess performance before we send our predictive models out into the world.

But what then?

Constant Hypothesis Testing

That’s when testing must take over.

Is the current algorithm keeping customers engaged the way it's supposed to? The way the offline test showed? The only way to answer these questions is by comparing against a baseline.

That’s where A/B testing comes in. We’ve written a lot about this topic and about fast, efficient ways to run A/B tests. It’s something we take seriously, since this is generally how our work is graded: any increase in engagement (or whatever KPI we’re trying to optimize) will only become evident through careful testing.
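If you want a concrete feel for what "comparing against a baseline" means, here's a minimal sketch of one common approach: a two-proportion z-test on engagement rates for a control group and a variant. The numbers and function name are made up for illustration, and neither of the articles below prescribes this exact test.

```python
# Minimal sketch: two-sample proportion z-test for an A/B test.
# All counts below are hypothetical illustration values.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (lift, z, two-sided p-value) comparing variant B against baseline A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under H0: no difference
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 1 - erf(abs(z) / sqrt(2))                      # two-sided p under the normal approximation
    return p_b - p_a, z, p_value

# Hypothetical numbers: 10,000 users per arm, 10.4% vs. 11.2% engagement.
lift, z, p = two_proportion_z_test(1040, 10_000, 1120, 10_000)
print(f"lift={lift:.2%}, z={z:.2f}, p={p:.4f}")
```

A small observed lift can still be noise; the p-value tells you how surprising the difference would be if the variant truly performed no better than the baseline.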

This week, I’ll keep it short and sweet by recommending two articles that explain how to run efficient tests and grow your bottom line:

  1. The first article, Better Testing Equals More Revenue, offers straightforward examples to explain the importance of good A/B testing and the implications of getting it right.
  2. The second article, Know Your Options When It Comes To A/B Testing, digs into various A/B testing options and is substantially more advanced.

I hope you find this content helpful!

Have a wonderful week,

-Zank

Of Interest

Kate Crawford: ‘A.I. is Neither Artificial nor Intelligent’
Kate Crawford is a research professor of communication and science and technology studies at the University of Southern California and a senior principal researcher at Microsoft Research. In this article, she comments on how natural resources and human labour drive machine learning and the regressive stereotypes that are baked into its algorithms. What’s at stake as A.I. reshapes our world?

Google Hopes A.I. can Turn Search Into a Conversation
With the evolution of search, Google wants its core product to infer meaning from human language, answer multipart questions, and make search feel more like how Google Assistant sounds. Last month, CEO Sundar Pichai introduced LaMDA, A.I. designed to have a conversation on any topic. “We believe LaMDA’s natural conversation capabilities have the potential to make information and computing radically more accessible and easier to use,” Pichai shares. Read more here.

When A.I. Becomes Child’s Play
Despite their popularity with kids, tablets and other connected devices are built on top of systems that weren’t designed for them to easily understand or navigate. But adapting algorithms to interact with a child isn’t without its complications—as no one child is exactly like another. Most recognition algorithms look for patterns and consistency to successfully identify objects. But kids are notoriously inconsistent. In this podcast episode, MIT Technology Review examines the relationship A.I. has with kids.