This week I watched the highly charged Netflix documentary The Social Dilemma, which explores the consequences of our growing dependence on social media.
Here are a few quotes I typed as I watched:
- “We created a system that biases towards false information. Not because we want to, but because false information makes the companies more money than the truth. The truth is boring.”
- “If you want to control the population of your country, there has never been a tool as effective as Facebook.”
- “Fake news on Twitter spreads six times faster than true news.”
- “There are only two industries that call their customers ‘users’: illegal drugs and software.” – Edward Tufte
The Social Dilemma
As someone who has worked on systems that automatically personalize content for hundreds of millions of people, I watched the documentary intently as it raised important points on topics such as the rise in teenage suicide and the ever-widening partisan rift in America.
The Social Dilemma seeks to explain these trends and more by examining the way social networks operate, and in particular how they use A.I. to make their services as addictive as possible.
Here are the three main objectives that the most successful social networks (think Facebook, Twitter, Instagram) strive to achieve:
- Maximize engagement
- Maximize growth
- Maximize ad revenue
These objectives are closely tied together: with more engagement and growth, ad revenue naturally increases. But let’s take a look at how these networks increase engagement (and, ultimately, eyes on ads).
Maximizing Engagement
The Social Dilemma does a good job of illustrating how sophisticated algorithms predict what a user will most likely engage with next. And these networks have learned a lot along the way, such as:
- It’s incredibly compelling to notify a user that they’ve been tagged in someone else’s post, photo or video
- There’s a tipping point for social networks when a new user gets a certain number of connections in a certain number of days
- Fake news receives 6x more clicks than real news
With these carrots dangling from the proverbial stick, social networks (and many, many other consumer-focused companies) shape their offerings (ads, notifications, products, etc.) around these observations. The result is more engagement, which drives more revenue. A toy sketch of this kind of scoring follows below.
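To make the mechanics concrete, here is a deliberately crude sketch of what “predict what a user will engage with next” can look like in code. The feature names and weights are made up for illustration; real systems use learned models over far richer signals.

```python
# Toy engagement ranker: score candidate items for a user and surface the top ones.
# Feature names and weights are illustrative only, not any real network's model.

def engagement_score(item):
    """Crude linear score: higher means 'more likely to be clicked'."""
    score = 0.0
    if item.get("tags_user"):                              # "you were tagged in a photo"
        score += 3.0
    score += 2.0 * item.get("topic_affinity", 0.0)         # similarity to what the user clicked before
    score += 1.5 * item.get("predicted_share_rate", 0.0)   # sensational content tends to spread faster
    score += 0.5 * item.get("friends_engaged", 0.0)        # friends already reacted to it
    return score

def rank_feed(candidates, k=10):
    """Return the k candidates with the highest predicted engagement."""
    return sorted(candidates, key=engagement_score, reverse=True)[:k]
```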
But the engagement is usually heavily lopsided!
Imagine This Scenario:
A person starts reading news about a current event, but from a heavily right- or left-leaning source. The algorithms quickly learn this user’s reading preferences and recommend more content with the same slant, because, well, that’s just what this person wants. The next day, the same user gets a notification about a new article, again slanted far left or right, and they dive in to read more on this evolving topic. This continues for days, months, or even years.
Then imagine an entire nation of people, each paying attention to only one side of a story or topic. Imagine the division that would occur. This is the very type of division we see across the United States today.
The Social Dilemma argues that this sort of division is rampant around the world and that social networks are making it much worse. I have a hard time finding fault with that reasoning. After all, intelligent recommender systems are designed to give people more of the content they’ll likely engage with, not less, and certainly not the “other side of the story”. I know this because I’ve designed numerous such systems.
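This feedback loop is simple enough to simulate. The sketch below makes some strong simplifying assumptions of my own: each article is reduced to a single “slant” number, the recommender just serves the closest matches, and the profile drifts toward whatever gets clicked. It’s an illustration of the loop, not a model of any real network.

```python
import random

random.seed(42)
inventory = [random.uniform(-1.0, 1.0) for _ in range(1000)]  # each article is just a "slant" in [-1, 1]

def recommend(profile, k=10):
    """Serve the k articles whose slant best matches the user's learned profile."""
    return sorted(inventory, key=lambda slant: abs(slant - profile))[:k]

profile = 0.3                                   # the user starts with a mild lean
for day in range(365):
    served = recommend(profile)
    clicked = random.choice(served)             # the user engages with one of them
    profile = 0.95 * profile + 0.05 * clicked   # the profile moves toward the click

# Everything served matches the existing lean; the other half of the spectrum
# exists in the inventory but is never surfaced.
print(round(profile, 2), [round(s, 2) for s in recommend(profile)])
```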
So What Can We Do About It?
In The Social Dilemma, the filmmakers and speakers argue for more regulation. However, I’m not convinced that this would solve the social dilemma we’re faced with.
I believe that users, customers, shoppers, influencers, and so forth, have a right to know when algorithms are being used to alter and/or personalize their content, and to own their session data.
Eventually, I’d like to see a situation where we have the option to sell our data to big corporations like Google or Facebook. As it stands, there’s no negotiation over our personal data: social networks simply own it and use it however they wish. If we had the power to control which of our data was used, and how, it would force accountability. It would allow for settings that could increase diversity and reduce bias. It would change the way companies market to us, and ultimately how they, inadvertently or otherwise, divide us.
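As one example of what such a user-owned setting might look like, here is a hypothetical “diversity” control, sketched as a simple re-ranking step on top of the engagement-optimized feed. The control, its name, and the weights are my own invention; no major network exposes anything like this today.

```python
# Hypothetical user-controlled diversity knob, applied as a re-ranking step.
# All names and weights here are illustrative.

def rerank_with_diversity(candidates, profile, diversity=0.5, k=10):
    """Blend closeness to the user's profile (relevance) with distance from it.

    candidates: article slants in [-1, 1]; profile: the user's learned slant;
    diversity: 0.0 = pure personalization, 1.0 = actively surface the other side.
    """
    def score(slant):
        distance = abs(slant - profile) / 2.0   # 0 when identical, 1 when opposite
        relevance = 1.0 - distance
        return (1.0 - diversity) * relevance + diversity * distance

    return sorted(candidates, key=score, reverse=True)[:k]
```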
What do you think? Have you watched the documentary yet? Do you agree? Is there a better approach? I’d love to hear your thoughts.
Of Interest
Artificial Intelligence That Mimics the Brain Needs Sleep Just Like Humans, a New Study Reveals
Artificial intelligence designed to function like a human could require periods of rest similar to those needed by biological brains. Researchers at Los Alamos National Laboratory in the United States discovered that neural networks experienced benefits that were “the equivalent of a good night’s rest” when exposed to an artificial analogue of sleep.
Deploy APIs With Python and Docker
There are a few skills in software development that act as floodgates holding back a vast ocean of opportunity. When learning one of these skills, it seems that a whole new world is suddenly illuminated. In this article, author James Briggs covers what he believes is one of these skills: the ability to write code that can be deployed to the internet and communicated with via an API.
Convert Your PDFs to Audiobooks With Machine Learning
Walking – it’s one of COVID-19’s greatest (and only) pleasures, isn’t it? These days, you can do anything on foot: listen to the news, take meetings, even write notes (with voice dictation). The only thing you can’t do while walking is read machine learning research papers. Or can you? In this post, Dale Markowitz (inspired by Kazunori Sato) will show you how to use machine learning to transform documents in PDF or image format into audiobooks, using computer vision and text-to-speech. That way, you can read research papers on the go!