Slaves to the Algorithm

Accepted wisdom tells us, “you are what you eat”. If that refers to our physical wellbeing, it follows that our mental wellbeing is what we read, watch and listen to. As algorithms control our online information diet, it’s probably time we understood them.

Algorithms are pieces of code that companies like Google, Facebook, Amazon, Twitter, Instagram, and Netflix write to control what we see on their platforms. They are a necessary part of our online experience. But do they act in our best interests?

When tech companies start out, their incentives are aligned with the interests of their users. After all, they are trying to increase the popularity of their services, so it makes sense to write algorithms that give users more of what they want in a way that does them no harm.

Somewhere along the line, tech companies reach a decision point and choose a funding method for their service. It’s usually a toss-up between selling advertising and charging for subscriptions.

Often the company chooses an ad-based revenue model. Consumers are usually reluctant to pay for services they have been getting for free, so it’s the option with the least friction. Consequently, the goal of the algorithm shifts and consumer interests are de-prioritised in favour of maximising advertising revenue. The data collected about us and the way we use online services help algorithms to evolve. They soon learn that advertisers like to target predictable people and will pay more to do so.

Psychologists tell us that on any given topic, the most predictable people are at the extremes. Take politics: the far left and the far right have strong opinions on a range of topics. The unpredictable people are at the political centre.

So, if algorithms aim to maximise advertising revenue, and people at the extremes are worth more money, then the algorithms have inadvertently acquired the goal of moving people to the extremes. Content creators and advertisers know this and publish material that manipulates our base emotions to get the most algorithmic traction. Unfortunately, the emotions that trigger the highest engagement are negative ones. Angry people click more, and so do the outraged and fearful.
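
To make the mechanism concrete, here is a deliberately crude sketch in Python. The posts, the scores and the one-line engagement model are all invented for illustration, and no platform’s real ranking code is anywhere near this simple. But any ranker that optimises purely for predicted engagement will, by construction, push the most emotionally charged content to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    outrage: float   # 0 to 1: how strongly the content provokes anger or fear
    accuracy: float  # 0 to 1: how well-sourced it is (the ranker ignores this)

def predicted_engagement(post: Post) -> float:
    # Assumed model: engagement rises with emotional intensity.
    # Accuracy plays no part in the score.
    return post.outrage

def recommend(feed: list[Post], k: int = 2) -> list[Post]:
    # Rank purely by predicted engagement; wellbeing is not a factor.
    return sorted(feed, key=predicted_engagement, reverse=True)[:k]

feed = [
    Post("Measured policy analysis", outrage=0.1, accuracy=0.9),
    Post("THEY are lying to you!", outrage=0.9, accuracy=0.2),
    Post("Local news roundup", outrage=0.3, accuracy=0.8),
]

for post in recommend(feed):
    print(post.title)  # the most outrage-inducing posts top the feed
```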

This tactic is not new. Traditional news media have peddled negativity for years. We only have to look at the relentless coverage of the pandemic to see that. The difference with big tech is that there is much less scrutiny applied to the validity of their content. Misinformation and unsubstantiated stories spread fast. And the consequences spill over into the real world.

Take YouTube: recommendations account for more than 70% of watch time. To put it another way, of the 1 billion hours of YouTube video watched daily, 700 million are reached via the suggestions of an algorithm. Humanity is being dosed with something humanity has not chosen, and our mental wellbeing is not a factor.

The anti-vaccination movement is another example. Algorithms promote their content because of its high engagement, especially among parents. In 2019, cases of measles increased by 300% globally. In parts of Africa, that figure was 700%. Causation is difficult to prove, but the correlation is interesting.

Outrage is another base emotion exploited by algorithms. For some, it has become addictive and they crave more of it. But there is a supply and demand problem: there are not enough things to get legitimately outraged about. So, to satisfy our appetite, we become outraged on other people’s behalf, triggered by the mundane and ever more drawn to conspiracy theories.

The evidence is all around us. Mainstream media give airtime to fringe views, then circulate the reactions online. Activists protest issues that seem outside their sphere of relevance; police brutality in Minnesota leading to anti-racism demonstrations worldwide is one example. And outlandish conspiracy theories receive such high levels of engagement online that QAnon became a significant force in the 2020 US election.

If you think you are smart enough to evade this nonsense, think again. The problems are systemic, and algorithms and new technologies like deepfakes are improving fast. Sooner or later, we will all be taken in.

Fortunately, organisations such as the Center for Humane Technology and AlgoTransparency are addressing these issues and campaigning for change. Tristan Harris, of the Center for Humane Technology, draws a comparison with the slave trade. In the 1800s, slavery powered the entire economy, much as big tech does today. But a collective movement led to an awakening. Most people had never seen the inside of a slave ship, so abolitionists made the invisible visceral, showing how abhorrent the practice was to bring people onside.

To abolish slavery, the British Empire accepted an annual drop in GDP of 2% for almost 70 years. History lessons like this provide hope and inspiration. It is possible to prioritise wellbeing over economic growth. We just need to find the collective will.

