Reading Time: 3 minutes

It used to be that when we got a recommendation for a product, we were told, or otherwise understood, that other people like us also liked the thing being recommended. Generally speaking, that’s called collaborative filtering: filtering all products for those that people like you (the collaborative part) also liked.
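For the curious, here’s a minimal sketch of that idea: a made-up user-item ratings matrix and plain NumPy, nothing more. The data and function names are purely illustrative, not any particular production system.

```python
import numpy as np

# Toy ratings matrix: rows are users, columns are products, 0 = not rated.
# All numbers are made up for illustration.
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 3, 1],
    [1, 0, 5, 5],
], dtype=float)

def recommend_for(user_idx, ratings, top_n=2):
    """User-based collaborative filtering: find users with similar tastes,
    then surface items they liked that this user hasn't rated yet."""
    # Cosine similarity between the target user and every other user.
    norms = np.linalg.norm(ratings, axis=1)
    sims = ratings @ ratings[user_idx] / (norms * norms[user_idx] + 1e-9)
    sims[user_idx] = 0.0  # a user shouldn't recommend to themselves

    # Score each item by similarity-weighted ratings from the other users.
    scores = sims @ ratings
    scores[ratings[user_idx] > 0] = -np.inf  # skip items already rated
    return np.argsort(scores)[::-1][:top_n]

print(recommend_for(0, ratings))  # unrated items, ranked by what similar users liked
```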

Popularity is another, even easier, algorithm to understand. “Here are the most popular products, just for you!” It’s not exactly personalized, but it sounds like it and, more importantly, performs well in many cases.
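And the popularity approach really is as simple as it sounds. A sketch, with a hypothetical purchase log:

```python
from collections import Counter

# Hypothetical purchase log: each entry is a product someone bought.
purchases = ["socks", "mug", "socks", "laptop", "mug", "socks"]

# "Most popular" is just the items with the most interactions.
top_products = [item for item, _ in Counter(purchases).most_common(2)]
print(top_products)  # ['socks', 'mug'] — the same list for everyone
```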

Today’s neural networks are capable of making incredibly accurate (dare I say, engaging) recommendations. Take Spotify’s Discover Weekly or Facebook’s news feed for example.

However, using more complex neural networks to accomplish tasks often comes at a cost. On top of the extra compute power and time needed to train these vastly more complex models, there’s a loss of explainability. Analysts often end up with “black boxes” that deliver higher accuracy, but for reasons too complex to understand. By anyone.

That gap between what is known about how neural networks arrive at an answer and the desire of practitioners to explain results has created an opportunity.

Without jumping into the very deep pool, the explainability of algorithms of any type has roots in ethical fairness, especially since algorithms are so often fed biased training data that leads to biased results.

For these reasons, the topic of A.I. explainability is hot right now.

One company working on this problem is Fiddler A.I., founded by an ex-Facebook manager and backed by $32MM in funding. Google and IBM are also working on tools to explain the results of complex algorithms.
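The newsletter doesn’t name specific techniques, but to make “explaining a result” concrete: open-source libraries such as SHAP (a separate project, not one of the commercial tools above) attribute each prediction back to the input features that drove it. A minimal sketch, with a stock scikit-learn dataset and model chosen purely for illustration:

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Any tabular dataset and model will do; this one ships with scikit-learn.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Attribute each prediction back to the input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# For each of the 5 rows, shap_values shows how much every feature pushed
# the prediction toward or away from each class.
print(shap_values[0] if isinstance(shap_values, list) else shap_values)
```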

I believe we’ll see a lot of growth in this space for two reasons:

  1. Very often, neural networks provide a huge lift in accuracy over typical machine learning methodologies; and
  2. The ethical concerns around algorithmic explainability are, thankfully, in the spotlight these days.

Read more about Fiddler A.I. here in a recent article by Fortune.

At Bennett Data Science, we constantly work with our customers to bake in explainability and root out bias wherever we can. I hope you demand the same of your analytics team. It matters to your customers and your employees.

Have a nice week,

-Zank

Of Interest

Israeli Startup Raises $18.5 MM to train A.I. with Fake Data
Companies interested in using artificial intelligence face a big obstacle: Having enough of the right kind of data to train their systems. Companies need large amounts of labelled, historical examples to train A.I. systems, particularly those that work with images and videos. The demand has spawned a whole sub-industry of companies that specialize in helping other businesses annotate their data. But there is another way to produce enough data to train A.I. systems: Fabricate it. Fake it till you make it.

A.I. Restores The Missing Edges of Rembrandt’s Painting
Rembrandt’s painting The Night Watch, created in 1642, was trimmed in 1715 to fit between two doors at Amsterdam’s city hall. Since then, 60cm (2ft) from the left, 22cm from the top, 12cm from the bottom and 7cm from the right have been missing. But computer software has now restored the full painting for the first time in 300 years. Have a look here.

Tesla Vision Development Explained By Company’s A.I. Guru
While many of us might assume that, when it comes to building a system that lets a car drive autonomously, more data from more kinds of sensors is always better, that may not actually be the case. At least, not according to Andrej Karpathy, Director of Artificial Intelligence at Tesla. In this video, he lays out why Tesla has moved to a pure vision-based approach and how it works.