What are social media algorithms, and how can they be manipulated?

Algorithms influence the content we see when we’re online; they’re the digital instructions or ‘rules’ that social media platforms (and feeds) operate by.

While algorithms can improve our online experience – ‘learning’ from our interests and behaviour to show us what we’re most likely to engage with – many have also come under severe criticism for serving users negative, offensive, and harmful content. 

As well as users potentially being influenced by platforms’ algorithms, individuals and groups are also increasingly finding ways to use and manipulate algorithms for themselves, to spread their own – sometimes harmful – messages.

Here’s what you need to know about algorithmic manipulation.

Social media

The data we provide by interacting with platforms and websites – or via ‘cookies’ downloaded as we browse online – gives their algorithms information on almost everything: where we live, our favourite artists, our hobbies and interests, our political preferences, even our ideal holiday destinations.

By gathering information about us, algorithms can predict what kinds of content, advertisements, and service recommendations we are most likely to read, watch, buy or share. 

This underpins the business models of many social platforms. It’s in a platform’s best interest to keep us, as users, engaged and scrolling, by showing us posts we’re inclined to interact with.
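For readers curious about the mechanics, here’s a deliberately simplified sketch, in Python, of the general idea behind engagement-based ranking. It isn’t any real platform’s code – the topics, scores, and function names are all invented for illustration – but it shows how a feed can be ordered around predicted engagement.

```python
# Purely illustrative: a toy version of engagement-based ranking.
# Real platform systems are far more complex and proprietary;
# every name and number here is invented for the example.

def predicted_engagement(post, user_interests):
    """Score a post by how closely its topics match what the
    user has interacted with before."""
    return sum(user_interests.get(topic, 0) for topic in post["topics"])

def rank_feed(posts, user_interests):
    """Order the feed so posts the user is most likely to
    interact with appear first."""
    return sorted(
        posts,
        key=lambda p: predicted_engagement(p, user_interests),
        reverse=True,
    )

# Hypothetical interaction history: higher numbers = more past engagement.
user_interests = {"music": 3, "travel": 1, "politics": 5}

posts = [
    {"id": 1, "topics": ["music"]},
    {"id": 2, "topics": ["politics", "travel"]},
    {"id": 3, "topics": ["cooking"]},
]

for post in rank_feed(posts, user_interests):
    print(post["id"], predicted_engagement(post, user_interests))
# Post 2 ranks first: politics and travel best match past behaviour,
# while post 3 (cooking) sinks to the bottom of the feed.
```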

While this can definitely bring some benefits – introducing us to content which is relevant, entertaining, or interesting – it can also amplify the more negative sides of online life.

Users are often more drawn to click on, or engage with, content which is inflammatory or radical. Some believe this is contributing to society gradually becoming more and more polarised.

Ultimately, what’s most worrying is that algorithms ‘interpret’ such interactions to mean that you want to see more of the same type of posts, even when they’re upsetting, misleading, or harmful.

Hate speech and radicalisation

By mistaking intrigue for endorsement, algorithms can work subtly to influence our tastes and preferences. 

Many charities and organisations have expressed concerns about the effects that content (and social media algorithms specifically) can have on young people in the long run. 

While radicalisation is one of the more extreme consequences of spending time on social media, it’s far more common that people will be exposed to hurtful comments and imagery (featuring things like racism or misogyny).

Consistent exposure to damaging content while individuals are still developing can have lasting effects: biasing them towards harmful attitudes, drawing them into groups rife with disinformation, or simply numbing them to the more toxic sides of the internet.

Manipulating algorithms

This becomes even more problematic when groups or individuals are able to intentionally manipulate social media algorithms in their favour by widely sharing pieces of shocking content. 

Because posts of a more outrageous nature tend to generate clicks and views, the algorithm is ‘incentivised’ to push them to even broader audiences – meaning they attract even more interaction. In essence, this becomes a ‘snowball effect’.
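To see why this compounds so quickly, here’s another deliberately simple sketch. Nothing in it reflects any real platform’s system – the click rates and ‘boost’ factor are invented – but it shows how a modest difference in engagement can snowball into a very large difference in reach.

```python
# Purely illustrative: a toy model of the 'snowball effect' described above.
# All numbers are invented; real recommendation systems are far more complex.

def simulate_reach(initial_audience, click_rate, boost_per_click, rounds):
    """Each round, clicks from the current audience prompt the
    algorithm to show the post to a larger audience."""
    audience = initial_audience
    for r in range(rounds):
        clicks = audience * click_rate
        audience += clicks * boost_per_click  # more engagement -> wider reach
        print(f"round {r + 1}: audience ~ {audience:,.0f}")
    return audience

# A shocking post (high click rate) vs an ordinary one, same starting audience.
simulate_reach(1_000, click_rate=0.20, boost_per_click=2, rounds=5)  # snowballs
simulate_reach(1_000, click_rate=0.05, boost_per_click=2, rounds=5)  # grows slowly
```

After five rounds, the ‘shocking’ post reaches roughly five times its starting audience, while the ordinary one grows by about sixty per cent – a gap that only widens the longer the loop runs.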

This process has been used to explain the rapid rise in popularity of internet personality Andrew Tate.

Tate is infamous for his misogynistic views, for casually discussing violence towards women, and for claiming that depression doesn’t exist. His opinions were considered so harmful that Meta and TikTok banned his official accounts. This, however, hasn’t stopped videos of him circulating on these very same platforms.

Tate has previously offered financial rewards to members of his ‘Hustler’s University’ scheme for posting content featuring him, and for recruiting others to do the same. Thousands of people complied, which meant that social media algorithms were ‘tricked’ into showing his offensive posts to millions, many of whom were young and wouldn’t have sought that content out independently.

It also meant his content circulated widely despite the fact that he wasn’t posting it himself – rendering his official platform bans largely ineffective. This is a real concern: schools have reportedly issued warnings to parents about Tate’s dangerous videos and comments.

Being bombarded with hate speech by confident speakers – displaying eye-catching signs of wealth and status, like fast cars and cigars – poses a risk to the development and autonomy of young, impressionable audiences. It’s a risk that social media companies really need to address.

Not all doom and gloom

One thing to remember is that if algorithms can be manipulated to show the worst of the digital world, they can, with a little effort, also be influenced in more positive directions.

By sharing and engaging with positive, factually correct, supportive and helpful posts and accounts, people can direct social media algorithms (and their feeds) to feature more content of a similar, positive nature. 

Admittedly, this is easier said than done. Algorithms are designed to serve our immediate impulses, rather than our best intentions.

But we must remember that everyone – young and old – has a shared power to take more control of what we see online, and to shape our digital experiences into ones that are uplifting, rather than hurtful.

Consider talking with your child about the role of algorithms and the influence they have. Helping children to better understand the online world is a really crucial part of building their digital resilience and media literacy. If your child is using social media, it might be helpful to discuss some of the things you’ve read in this article.
