Accepted wisdom tells us, “you are what you eat”. If that refers to our physical wellbeing, it follows that our mental wellbeing is what we read, watch and listen to. As our online information diet is controlled by algorithms, maybe it’s time we understood them.
Algorithms are a necessary part of our online experience. They are pieces of code that companies like Google, Facebook, Amazon, Twitter, Instagram and Netflix write to control what we see on their platforms. But do they act in our best interests?
When tech companies start out, their incentives are aligned with their users' interests. After all, they're trying to increase the popularity of their services, so it makes sense to write algorithms that give users more of what they want in a way that doesn't harm them.
Somewhere along the line a decision point is reached, and the tech companies choose how their services should be funded. It’s usually a toss-up between selling advertising and paid subscriptions.
Often the company chooses an ad-based revenue model. Consumers are usually reluctant to pay for services they've been getting for free, so it's the option with the least friction. Consequently, the goal of the algorithm shifts, and the consumer's interests are de-prioritised in favour of maximising advertising revenue.
As more data is collected about us and the way we use online services, the algorithms evolve and learn that advertisers like to target predictable people, and will pay more to do so.
Psychologists tell us that on any given topic the most predictable people are found at the extremes. Take politics for example, the far left and the far right have strong opinions on a range of topics. The unpredictable people are found at the political centre.
So, if algorithms aim to maximise advertising revenue, and people at the extremes are worth more money, the algorithms have inadvertently acquired the goal of moving people to the extremes. Content creators and advertisers know this and publish material that manipulates our base emotions to get the most algorithmic traction. Unfortunately, the emotions with the highest degree of engagement are negative. Angry people click more, and so do the outraged and fearful.
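The mechanism is simpler than it sounds. Here is a toy sketch of a feed ranker that scores posts purely by expected ad revenue — all post names, click rates and values are invented for illustration, and real recommendation systems are vastly more complex. The point is that nobody has to choose extremism explicitly: if outrage-tinged content reliably draws more clicks, optimising a single engagement metric surfaces it first.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float  # hypothetical model output: clicks per impression
    ad_value: float          # hypothetical revenue per click

def rank_feed(posts):
    """Order posts by expected ad revenue: predicted clicks x value per click."""
    return sorted(posts, key=lambda p: p.predicted_clicks * p.ad_value, reverse=True)

feed = [
    Post("Calm policy explainer", predicted_clicks=0.02, ad_value=1.0),
    Post("Outrage-bait headline", predicted_clicks=0.09, ad_value=1.0),
    Post("Fear-driven rumour", predicted_clicks=0.07, ad_value=1.2),
]

for post in rank_feed(feed):
    print(post.title)
```

Run it, and the calm explainer comes last every time — not because anyone penalised it, but because the objective function never asked about anything except engagement.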
This isn’t new. Traditional news media have peddled negativity for years. We only have to look at the relentless coverage of the pandemic to see that. The difference with big tech is that there is much less scrutiny applied to the validity of their content. Misinformation and unsubstantiated stories spread fast and the consequences spill over into the real world.
Take YouTube for example: recommendations account for more than 70% of watch time. To put it another way, of the 1 billion hours of YouTube videos watched daily, 700 million of those hours are suggested by an algorithm. Humanity is being dosed with something humanity hasn't chosen, and our wellbeing is not factored in.
The anti-vaccination movement is another example. Algorithms promote their content because of its high engagement, especially among parents. In 2019, cases of measles increased by 300% globally. In parts of Africa that figure was 700%. Causation is difficult to prove, but the correlation is interesting.
Outrage is another base emotion exploited by algorithms. For some, it's become addictive and they crave more of it. But there's a supply and demand problem: there aren't enough things to be legitimately outraged about. So, to satisfy our appetite, we are becoming outraged on other people's behalf, triggered by the mundane and more interested in conspiracy theories.
The evidence is all around us. Mainstream media give airtime to fringe views and circulate the reactions online. Activists protest issues that seem outside their sphere of relevance — police brutality in Minnesota leading to anti-racism demonstrations worldwide, for example. Meanwhile, outlandish conspiracy theories receive high levels of engagement online, leading to QAnon becoming a significant force in the 2020 US election.
If you think you’re smart enough to evade this nonsense, think again. The problems are systemic. Algorithms and new technologies like deep fakes are improving fast. We will be taken in by it soon.
Fortunately, movements like the Center for Humane Technology and AlgoTransparency are addressing these issues and campaigning for change. Tristan Harris of the Center for Humane Technology draws a comparison with the slave trade. In the 1800s, slavery powered the entire economy, a position economically comparable to the influence of big tech today. But a collective movement led to an awakening. Most people had never seen the inside of a slave ship, so abolitionists made the invisible visceral, showing how abhorrent the practice was to get people onside.
To abolish slavery, the British Empire accepted an annual drop in GDP of 2% for almost 70 years. History lessons like this provide hope and inspiration. It’s possible to prioritise wellbeing over economic growth. We just need to find the collective will.