Adobe just unveiled “Super Resolution” – its term for A.I.-driven photo enlargement. It’s a big deal for a lot of reasons, and in this week’s Tech Tuesday, I’ll talk about how we’ve been enlarging images to date using bicubic interpolation, what super resolution is, how it works, and what some uses might be for this amazing technology.
Imagine you have an image that’s 200 x 200 pixels and you want to make it 1,000 x 1,000. Enlarging complex images is fraught with unsightly artifacts, which result from the main algorithm we’ve used for ages – bicubic interpolation.
Bicubic interpolation uses data from the 16 pixels (a 4 x 4 grid) surrounding the pixel being computed. Because it draws on more pixel data – a total of 16 coefficients, or multipliers – than, for example, bilinear interpolation (which uses only the nearest four pixels), it operates more slowly but produces better results.
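To make the idea concrete, here’s a minimal sketch of the math behind bicubic interpolation in one dimension – the standard cubic convolution kernel (with the common parameter a = -0.5) weighting four neighboring samples. A real 2-D implementation applies this along rows and then columns; the function names here are my own, not from any particular library.

```python
def cubic_kernel(x, a=-0.5):
    """Cubic convolution weight for a sample at distance x from the target position."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def cubic_interp1d(p, t):
    """Interpolate at fractional position t (0 <= t < 1) between p[1] and p[2],
    using the four consecutive samples p, located at positions -1, 0, 1, 2."""
    return sum(s * cubic_kernel(t - (i - 1)) for i, s in enumerate(p))

# Halfway across a hard edge (0 -> 10), the interpolated value blends neighbors:
midpoint = cubic_interp1d([0.0, 0.0, 10.0, 10.0], 0.5)  # 5.0
```

Note that near sharp edges the kernel’s negative lobes can overshoot the original values – exactly the kind of ringing artifact that makes bicubic enlargements of detailed images look bad.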
If you try to enlarge an image that’s all blue, a method like bicubic interpolation might work to your satisfaction. However, if there’s a lot of detail, the result will look pretty bad. This is why we generally look for cameras with high resolution, as these capture images with lots of megapixels.
A large image (one with lots of megapixels) is easy to make smaller. And if we want to zoom into an area of it, we have a surplus of pixels: the “enlargement” simply crops into a region, rather than actually creating new pixels with the aforementioned bicubic interpolation method.
This approach works, but storing and transmitting ever-larger images is a real burden as data volumes grow. This is where the value of super resolution comes in.
What if we could capture images with fewer megapixels and, at rendering time, use A.I.-driven photo enlargement such as Adobe’s Super Resolution to turn them into bigger photos that look great? How fun would it be to brag to a friend about your 1/2-megapixel camera that takes dreamy photos? Creating and storing images this way would be vastly more efficient, with a much lower carbon footprint.
Moreover, as an author writing on the topic pointed out, there are more uses for this type of technology: “Imagine turning a 10-megapixel photo into a 40-megapixel photo. Imagine upsizing an old photo taken with a low-res camera for a large print. Imagine having an advanced ‘digital zoom’ feature to enlarge your subject.” These are all becoming possibilities with the development of Super Resolution.
Here’s an example of the potential of Super Resolution:
How Was Super Resolution Created?
Since Adobe has access to huge numbers of photos, they created their own training data for their deep learning model. They started with a large number of full-resolution images and, for each one, created a companion image that was simply a reduced-size copy. Then they trained an A.I. to learn how to reconstruct the larger (full-resolution) image from the smaller (lower-resolution) one. With enough images across a wide variety of objects and scenes, they built an A.I. whose output is quite believable.
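The training-pair idea above can be sketched in a few lines. This is not Adobe’s actual pipeline – just a toy illustration, assuming images are represented as 2-D lists of pixel values and using a simple 2x box filter for the downsizing step; the function names are mine.

```python
def downsample_2x(img):
    """Halve resolution by averaging each 2x2 block of pixels (box filter)."""
    h, w = len(img), len(img[0])
    return [
        [(img[2*r][2*c] + img[2*r][2*c+1] + img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4
         for c in range(w // 2)]
        for r in range(h // 2)
    ]

def make_training_pair(full_res):
    """Return (model input, training target): the reduced copy and the original.
    The model learns to map the small image back to the full-resolution one."""
    return downsample_2x(full_res), full_res
```

Repeated over millions of such pairs, the network learns plausible mappings from low-resolution patches to high-resolution detail.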
It’s worth checking out the extreme zoom-ins on their site. Here’s Adobe’s press release.
Have a great week!
The Algorithms That Make Instacart Roll
It’s Sunday morning and you need to restock your refrigerator, but the weekend crowds at the supermarket don’t excite you. Your Monday and Tuesday are jam-packed with Zoom meetings, and you’ll also be supervising your children’s remote learning. In short, you aren’t going to make it to the grocery store anytime soon. So you pull out your phone and fire up the Instacart app to order your favorite groceries. The transaction seems simple. But this apparent simplicity depends on a complex web of carefully choreographed technologies working behind the scenes, powered by a host of apps, data science, machine-learning algorithms, and human shoppers. Read more about the algorithms that make Instacart roll here.
What to do When You Can’t AB Test
Will the new search software improve sales conversion? What’s the incremental impact of our new store pickup process on omni-channel sales? Can you find out today? Joshua Loong is a data scientist at Best Buy Canada, and these are some of the important questions he works to answer to support product development and corporate strategy. Why is a data scientist answering questions a digital analyst could answer as well? AB testing can, of course, be used to answer some of these questions, but it is not always possible. Joshua has therefore been using a variety of counterfactual methods to tackle these inferential questions, which he shares in this article.
A Simple Guide to Semantic Segmentation
Semantic segmentation is the process of assigning a label to every pixel in an image. This is in stark contrast to classification, where a single label is assigned to the entire picture. Semantic segmentation treats multiple objects of the same class as a single entity. On the other hand, instance segmentation treats multiple objects of the same class as distinct individual objects (or instances). Typically, instance segmentation is harder than semantic segmentation. This blog explores some methods to perform semantic segmentation using classical as well as deep learning based approaches. Popular loss function choices and applications are discussed as well.
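The classification-versus-segmentation distinction can be made concrete with a toy sketch. Here a “prediction” is just a hand-written 2-D label map (0 = background, 1 = cat); the functions are illustrative helpers of my own, not from any segmentation library.

```python
from collections import Counter

def classify(label_map):
    """Whole-image classification: one label for the entire picture
    (here, naively, the most common pixel class)."""
    return Counter(px for row in label_map for px in row).most_common(1)[0][0]

def segment_classes(label_map):
    """Semantic segmentation view: the set of classes present, where all
    pixels of a class - even in separate regions - count as one entity."""
    return {px for row in label_map for px in row}

# Two disconnected "cat" regions: semantic segmentation labels both as class 1;
# instance segmentation would additionally tell them apart as cat #1 and cat #2.
label_map = [
    [0, 0, 1],
    [0, 1, 1],
    [1, 0, 0],
]
```

Counting distinct connected regions per class (e.g., via a flood fill) is the extra step that separates instance segmentation from the semantic case, which is part of why it is the harder problem.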