Should we be worried about Algorithms?

As a concept, an algorithm is nothing to fear. It’s simply a set of instructions a computer follows to rapidly turn raw data into useful information. Algorithms are used for all kinds of things, from medicine to logistics, and generally they make our lives materially better.

The advantage of an algorithm is that it’s consistent and can work all the time without a break, whereas humans are unpredictable and biased. This means that a properly designed algorithm can perform incredible data-orientated tasks in the blink of an eye.

So why do we hear a lot of stories in the press about how dangerous algorithms have become?

The controversy centres on social networks like Facebook, Twitter, YouTube, Instagram, and TikTok, and on how their algorithms strike at the very heart of our psychology and free will.

Here’s a quote from Frances Haugen, a whistle-blower and former Facebook employee, responding to the question of why there’s so much violence and misinformation on the platform:

“Facebook has realized that if they change the algorithm to be safer, people will spend less time on the site, they’ll click on less ads, they’ll make less money.”

So, what is it that Facebook and the others have done to make their algorithms dangerous?

Let me explain…

The entire social media success story is built on the ability of the platforms’ algorithms to predict what users will want to see and to serve them up more of what they like. This keeps people on their platforms.

The more time users spend on the platforms, the more revenue this generates for the platform via advertising. The social networks are in the “attention” business and their algorithms are there to help the networks maximise that attention via the content we like to view, react to and share.

Extremes keep us hooked

The problem for wider society is that humans are drawn to extremes. From an evolutionary perspective, we are hardwired to seek out threats and opportunities, and so engagement-driven algorithms end up showing us more of them.

When an algorithm determines that showing you something negative is more likely to make you remain on the platform than showing you something neutral or positive, then the next material served up is likely to be darker.
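
To see the logic in miniature, here’s a toy sketch in Python of what engagement-maximising selection boils down to. Every name and number in it is invented for illustration, and no platform’s real code looks like this, but the shape of the decision is the same: predict which post will hold your attention longest and serve that one, with no regard for its tone.

```python
# Toy illustration of engagement-maximising ranking.
# The posts, scores and "model" are all made up; real
# recommender systems are vastly more complex.

def predicted_watch_seconds(user, post):
    """Stand-in for a learned model that predicts how long
    this user will keep watching if shown this post."""
    return user["interests"].get(post["topic"], 0) * post["intensity"]

def pick_next_post(user, candidates):
    # The only objective is time on platform: nothing here
    # asks whether a post is true, kind, or good for you.
    return max(candidates, key=lambda p: predicted_watch_seconds(user, p))

user = {"interests": {"politics": 8, "cooking": 3}}
candidates = [
    {"topic": "cooking",  "intensity": 0.5, "tone": "pleasant"},
    {"topic": "politics", "intensity": 0.9, "tone": "outrage"},
]
print(pick_next_post(user, candidates))  # the outrage post wins
```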

For example, if a video on YouTube makes you angry, the next video in your feed is likely to make you even angrier. Studies show that YouTube’s recommendation algorithm promotes ever more extreme material, which helps explain why social media has become such a hotbed of radicalisation for white supremacists and other dangerous groups.

This has real-world consequences for everything from anti-vax misinformation to scams, fake gossip, and conspiracy theories. The result is an increase in polarised opinion and organised violence.

Since we’re all carrying around smartphones, we are perfectly set up to experience what in Silicon Valley is known as Algorithmic Behaviour Modification.

Algorithms are designed to adapt

Algorithms are adaptive: they use machine learning, a kind of artificial intelligence that allows a system to improve automatically as it receives more data. These tools constantly work out small improvements in reaction to our behaviour.
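
Here’s a deliberately simplified sketch of that feedback loop. The update rule and the numbers are assumptions chosen for illustration, not anything a real platform publishes, but they show the basic shape: each time we engage with a topic its score is nudged up, and each time we scroll past it is nudged down.

```python
# A crude sketch of "small improvements in reaction to our
# behaviour": an online update that nudges a topic's score up
# when we engage and down when we scroll past. Real systems use
# far richer models, but the feedback loop has this shape.

LEARNING_RATE = 0.1

def update_score(score, engaged):
    # Move the score a small step toward what we just did.
    target = 1.0 if engaged else 0.0
    return score + LEARNING_RATE * (target - score)

score = 0.5                                # initial guess for one topic
for engaged in [True, True, False, True]:  # our observed behaviour
    score = update_score(score, engaged)
print(round(score, 3))                     # drifts toward what we respond to
```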

Have you ever wondered how long it takes a platform to calculate which advertisement to show you next? The answer is milliseconds, during which tens of thousands of comparisons are made with what other humans did in similar situations. Online, these calculations occur millions of times an hour, around the clock, every day of the year.
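
As a purely hypothetical illustration of those comparisons, the sketch below scores each ad by how similar you are to the people who clicked on it before. The profiles and ad names are invented, and real ad systems operate at vastly greater scale, but the matching idea is the same: find the crowd you most resemble and show you what worked on them.

```python
# Hypothetical sketch of "comparisons with what other humans did
# in similar situations": score each ad by how similar your
# behavioural profile is to the profiles of its past clickers.
# All vectors and ad names are invented for illustration.

import math

def cosine(a, b):
    """Cosine similarity between two profile vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

you = [0.9, 0.1, 0.4]                       # your behavioural profile
past_clickers = {
    "running_shoes": [[0.8, 0.2, 0.5], [0.9, 0.0, 0.3]],
    "car_insurance": [[0.1, 0.9, 0.2]],
}

# Average similarity to each ad's past clickers; serve the best match.
scores = {ad: sum(cosine(you, u) for u in users) / len(users)
          for ad, users in past_clickers.items()}
print(max(scores, key=scores.get))          # "running_shoes"
```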

Advertisers on social networks are constantly tweaking their ads based on the data these algorithms feed back to them. Thus, we are targeted constantly and aggressively without ever knowing or understanding it. And the data the algorithms collect is extensive: every page we view, every keystroke we make, every link we click is monitored, turned into data, and monetised.

Sometimes the algorithm gets things distressingly wrong. In the US, a woman called Gillian Brockell, who like roughly 26,000 other American women each year tragically lost her baby to stillbirth, felt moved to write an open letter to Facebook, Instagram and Twitter asking them to stop sending her ads related to pregnancy and motherhood. In the late stages of her pregnancy, despite her use of search terms like ‘stillborn’, ‘tragedy’ and ‘is this Braxton Hicks’, their algorithms had failed to detect what she was going through, and the pregnancy ads kept coming.

A human problem

Of course, the problem cannot be blamed on the algorithm itself, as it was designed by humans. As with all things related to social media, it’s we humans who are damaging each other. Powerful interests with deep pockets are generating content they know will elicit a response. We, the users, seem almost hypnotised by the content we see and, like lab rats, react to the stimuli served up.

What can we do to protect ourselves?

It’s not unreasonable to suggest that the 3 billion or so humans on social media are part of an enormous social experiment whose long-term consequences are unknown. The algorithms are complex, and the companies that own them do not publish their source code, so we really don’t know what’s in them at all.

Bearing in mind the power of algorithms, I feel we should have an independent organisation to analyse their net benefit or cost to society, as we already do for medicine, food and advertising standards.
