Reading Time: 4 minutes

Transfer learning is helping companies bring powerful predictive models to market in a fraction of the time, and at a fraction of the cost, once required. Let’s dive in and look at this transformative technology.

Transfer learning involves using a model trained for one specific task on another, related task. It is a technology that allows small models to leverage the incredibly deep insights developed by the biggest A.I. companies in the world. Think: Google and Facebook. Transfer learning can be considered a big shortcut for many companies looking to boost the commercial success of A.I., as it’s generally easy to implement and requires only modest data before producing business results. For example, with transfer learning, a model originally built to find cars and people in images can be used to detect cats and dogs.
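As a rough illustration of the idea (not any particular company’s pipeline), transfer learning boils down to freezing a pretrained network’s feature extractor and training only a small new “head” on your own labels. Everything below is a hypothetical toy: the backbone is a stand-in for a real pretrained model, and the new-task data is made up.

```python
import math

# Toy sketch of transfer learning: a frozen "pretrained" feature
# extractor plus a small trainable head. All names here are
# hypothetical stand-ins, not a real library API.

def pretrained_backbone(x):
    # Stand-in for a frozen network trained on a different task
    # (e.g. cars and people). In practice this would be a deep CNN
    # with millions of learned weights; here it is a fixed map
    # from a raw input to a feature vector.
    return [x[0] + x[1], x[0] - x[1]]

def train_head(examples, lr=0.1, epochs=200):
    # Only the new head (a tiny logistic-regression layer) is
    # trained -- the backbone never changes, which is why far
    # less data and compute are needed than training from scratch.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in examples:
            f = pretrained_backbone(x)      # frozen features
            z = w[0] * f[0] + w[1] * f[1] + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            g = p - y                       # log-loss gradient
            w = [w[0] - lr * g * f[0], w[1] - lr * g * f[1]]
            b -= lr * g
    return w, b

def predict(w, b, x):
    f = pretrained_backbone(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

# Pretend these are "cat" (1) vs "dog" (0) examples for the new task.
data = [([1.0, 1.0], 1), ([2.0, 2.0], 1), ([1.0, -1.0], 0), ([2.0, -2.0], 0)]
w, b = train_head(data)
print([predict(w, b, x) for x, _ in data])
```

In a real project the backbone would be something like a pretrained ResNet loaded from a framework such as PyTorch or Keras, with its layers frozen and only the final classification layer(s) retrained on the new labels.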

We’ve used it to: build a text-based travel recommender, detect and count animals in video, predict product categories in massive e-commerce datasets, quickly find similar images across collections of hundreds of thousands, and predict a person’s fashion style from a single face photo.

How is transfer learning different from regular machine learning?
This is best explained by an example. With the advent of USB-C connectors, I’ve needed lots of dongles lately. These are things I’ve never seen before, yet as soon as I pick up a two-inch-long USB-C to USB dongle, I know exactly what it is. But when a three-year-old picks up the same dongle, she will require years to fully understand the notion of ports and wired connections. In this example, I use a sort of “transfer learning” to quickly understand something I’ve never seen before. The child, however, has to learn from scratch; her internal neural network must be trained on thousands of experiences to reach the same level of “intelligence”.

What are the benefits?
There are two main reasons transfer learning is so valuable: 1. neural networks are extremely good at detecting objects in images (faces, labels, etc.), and 2. it’s very difficult for most companies to train a neural network from scratch, because a tremendous number of training examples is required. Returning to the dongle example, the child would have to see thousands of cables, phones, and ports to generalize as well as I can.

If transfer learning is new to you, here are a few articles that explore where it’s headed. This one looks at hybrid A.I. models, few-shot and one-shot learning, and generating training data with GANs. This one ranks transfer learning as the second most important driver of success in A.I. This one shows examples using image and text data.

Of Interest

Text Generation That Will Amaze You:
I’ve written a bit in this newsletter about deep fakes: A.I. that can trick humans (and some machines) into believing that machine-generated (and potentially damaging) content, such as a photo, video or, now, text, was made by a human. It’s your turn to take a spin. Visit the Talk to Transformer site and type in anything you want. See how this incredible A.I. finishes your sentence(s) for you. It’s at once fascinating, scary and unsettling. And these types of A.I. are only getting smarter. We recently took a look at how A.I. and intelligent agents operate in the real world. Some of the ideas involved are rather complex, but we think they’re very possible (think, for example, Google Glass.) Soderberg’s team, who are based at MIT, have set a goal of trying a few different AI predictions. (By the way, I used Talk to Transformer to generate everything after the sentence above ending in “…getting smarter.”)

Landr – Using A.I. to Master Music:
Landr, best known for its A.I.-based music mastering tools, just raised $26 MM to improve its suite of music creation tools. Mastering is generally the last phase in the process of recording a song. It adds that final polish and standardization, ensuring the song will sound good on your earbuds, your home stereo speakers and everything in between. It’s also historically been quite expensive, as mastering professionals, with their expensive gear and years of experience, charge thousands to master an album. That’s obviously out of the range of many musicians. Landr is changing that with A.I. They trained their algorithms on over 10 million songs to build a bespoke set of audio post-production processors, including mastering. Read more here. Visit their site.

This One’s for the Gentlemen:
After finishing John Romaniello and Adam Bornstein’s book Engineering the Alpha, I decided to follow the eating/workout plans and wanted a calculator that would give me my macros for any day of the plan and for any body weight. So, I made a web app. If you’re interested, here’s the book and here’s the page I built to compute macros for any person following the plan. I even provide the recommended sources of protein, carbohydrates and fat. There’s no A.I. here, but it is a time saver if you need it.