Social Media Algorithms: What Don't We Know?

06/08/2024

Social media is inescapable in the modern world, but what actually are the driving forces behind what you see?

Article Image

Image by PICKPIK

By Robyn Garner

With the inescapability of social media in the modern age, the algorithms that organise it have the potential to influence our thoughts, feelings and actions. It’s hardly a hidden part of the internet experience, with swathes of social media users praising or damning “the algorithm”. Every day I stumble across another Instagram reel telling me “the algorithm loves me” because I’ve found a band still small enough to meet fans at gigs or an author still small enough to sign every copy. They’re not necessarily wrong - many creators have had their careers kickstarted on social media. But why are they on my feed? The science behind social media algorithms is consistently unclear to the average user. This is partly due to their ever-changing nature, but social media companies are also notoriously opaque about how they generate the content we see. There is widespread fear that social media creates political polarisation and extremism through what it puts on our feeds. It seems important to ask: what exactly is it we don’t know?

The word “algorithm” itself is often misused in everyday language. In computer science, an algorithm is a finite series of steps that takes an input and performs a computation until an output is reached. Much like “AI”, “algorithm” has become shorthand for any task performed by a computer, which can muddy how we understand it. Some social media platforms, such as Facebook, use a form of machine learning known as neural networks alongside their algorithms to filter content. Facebook uses these to score posts for relevance to a user (based on data collected by separate software) and to create a ranking which determines what that user sees. Neural networks loosely mimic human thought processes, being made up of hundreds of thousands of simple interconnected processing nodes. They learn by analysing large quantities of examples, which allows them to find patterns and assign labels. They are inherently difficult to understand: not even the people who create them can decipher exactly what processes lead to their end results.
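To make the score-then-rank idea concrete, here is a minimal sketch in Python. The signal names and weights are invented for illustration; a real platform combines thousands of signals and learns the weights with machine learning rather than setting them by hand.

```python
# Hypothetical score-then-rank feed curation. The feature names and
# weights below are invented for illustration only.

def score_post(post, weights):
    """Combine a post's engagement signals into one relevance score."""
    return sum(w * post.get(feature, 0) for feature, w in weights.items())

def rank_feed(posts, weights):
    """Order posts from highest to lowest score - this is 'the feed'."""
    return sorted(posts, key=lambda p: score_post(p, weights), reverse=True)

# Invented weights: a share counts more than a comment, a comment more
# than a like, and following the author counts most of all.
weights = {"likes": 1.0, "comments": 2.0, "shares": 3.0, "follows_author": 5.0}

posts = [
    {"id": "a", "likes": 120, "comments": 4, "shares": 1},
    {"id": "b", "likes": 10, "comments": 30, "shares": 12, "follows_author": 1},
    {"id": "c", "likes": 50, "comments": 5, "shares": 2},
]

print([p["id"] for p in rank_feed(posts, weights)])  # ['a', 'b', 'c']
```

The ranking step here is simple and fully inspectable; the opacity comes from the scoring, which on a real platform is produced by a neural network rather than a hand-written sum.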

Despite these somewhat unknowable methods, even Facebook has conceded that there is a need for more widespread understanding of algorithms, with its Vice President for Global Affairs, Nick Clegg, announcing that Facebook intended to make changes to its interface to increase transparency. In an article for Medium, he discusses Facebook’s “oversight board”, which makes decisions on content that may violate community guidelines, and the addition of the “Why am I seeing this?” button, although the latter is not yet particularly comprehensive.

Others feel there is still not enough being done. At the 2022 MIT Social Media Summit, former Facebook employee and whistleblower Frances Haugen spoke about the addictive nature of social media algorithms and the various dangers they can pose. Other speakers at the event voiced concerns about the effect of algorithms on politics, and about the extent of what we don’t know given social media companies’ refusal to release the information behind their algorithms. Haugen pointed out that there are only 300 to 400 algorithm experts in the world, saying “there aren’t enough people at the table.”

Social media companies, naturally, aim to provide content that will keep you coming back to their platform. This is, of course, reflected in their algorithms, which are tailored around what you engage with, who you follow, what entertains you and what the company has been paid to promote. Social psychologist William Brady has found evidence to suggest that algorithms additionally boost content we are biased to learn from. This type of information is dubbed PRIME, which stands for “prestigious, in-group, moral, and emotional”. Studies have shown that misinformation is often inserted into posts that provoke moral outrage. This exploits both the algorithm, which picks content based on engagement (something an outrage-provoking post is likely to elicit), and humans’ psychological tendency to learn from emotionally and morally charged information.
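The feedback loop Brady describes can be sketched with invented numbers: if outrage-provoking posts draw extra reactions, an engagement-based ranker will score them higher and surface them to more people. The 1.5x multiplier below is an assumption for illustration, not a measured figure.

```python
# Invented numbers illustrating an engagement feedback loop: if
# emotionally charged posts draw more reactions, an engagement-based
# ranker will score them higher and show them to more people.

def expected_engagement(post):
    """Estimate reactions as reach times base reaction rate."""
    base = post["reach"] * post["base_rate"]
    # Assumed multiplier: outrage-provoking content draws ~1.5x reactions.
    return base * 1.5 if post["provokes_outrage"] else base

neutral = {"reach": 1000, "base_rate": 0.02, "provokes_outrage": False}
outrage = {"reach": 1000, "base_rate": 0.02, "provokes_outrage": True}

print(expected_engagement(neutral))  # 20.0
print(expected_engagement(outrage))  # 30.0 - ranked higher, shown more
```

Nothing in this loop checks whether a post is true; it only checks whether people react to it, which is how misinformation wrapped in moral outrage can outcompete calmer, accurate content.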

This raises a dangerous possibility of spiralling into further polarisation. Studies have been conducted on whether Facebook’s newsfeed does polarise people, with differing results. A 2020 study conducted in collaboration with Meta looked into the effect of social media algorithms on behaviour and attitudes during election campaigns. It found no change “in downstream political attitudes, knowledge, or offline behaviour, including survey-based measures of polarization and political participation.” However, a different study in 2021 reached the opposite finding regarding polarisation.

Beyond concerns about algorithm design, polarisation and bias, there is worry about the influence those who run these platforms have on what content users see. While it would be preposterous to suggest that an individual person caters content to each and every user, the idea that certain ideologies can be pushed forward on, or removed from, a site is frequently discussed by users across social media.

Recently, on the microblogging site Tumblr, there was a controversy between the CEO of Automattic, Matt Mullenweg, and a transgender user of the site who was banned without initial explanation. Numerous users accused Mullenweg of transphobia, claiming that he had removed the user for deeming her post-transition photos sexually explicit. Mullenweg responded to these accusations in a post on his own Tumblr account, where he said the ban was instead due to a hyperbolic death threat the user had posted about him. While Mullenweg may not have been abusing his power, the episode certainly demonstrates the potential for CEOs to do so.

Whether through a CEO or an algorithm - with good or bad intentions - it is clear that social media has become increasingly influential in the way we think. Perhaps algorithms offer a way to crack the code and boost yourself to fame, or perhaps they simply polarise us, politically and otherwise. Until we have more transparency and more research, it will remain a matter of debate.