
Introduction
Social media platforms such as Facebook, Twitter and Instagram have been part of everyday life since the 2000s. Not long after their arrival, the first algorithms for curating social media content were introduced. With the introduction and subsequent upgrades of Google's Hummingbird search algorithm in 2013, algorithmic curation became the new standard for displaying social media content (Elbedeiwy and Hieber, 2020). An algorithm shows users the 'most relevant' content for them, rather than showing content in chronological order as was previously the case. The algorithm determines which content to show based on a user's previous interests, relationships, frequency of engagement, following behaviour, usage patterns and the recency of posts (Golino, 2021).
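To make this ranking mechanism concrete, the short Python sketch below orders a toy feed by a relevance score composed of the factors Golino (2021) lists, instead of by recency. The field names, weights and scoring formula are illustrative assumptions made for this paper, not any platform's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Post:
    author: str
    topic: str
    posted_at: datetime

def relevance(post: Post, interests: dict, relationships: dict, now: datetime) -> float:
    """Toy relevance score; the factors follow Golino (2021), the weights are made up."""
    interest = interests.get(post.topic, 0.0)           # past engagement with this topic
    relationship = relationships.get(post.author, 0.0)  # closeness to the author
    age_hours = (now - post.posted_at).total_seconds() / 3600
    recency = 1.0 / (1.0 + age_hours)                   # newer posts score higher
    return 0.5 * interest + 0.3 * relationship + 0.2 * recency

now = datetime.now()
posts = [
    Post(author="close_friend", topic="travel", posted_at=now - timedelta(hours=8)),
    Post(author="unknown_brand", topic="gaming", posted_at=now - timedelta(hours=1)),
]
interests = {"travel": 0.9, "gaming": 0.1}  # inferred from likes, shares, watch time
relationships = {"close_friend": 0.8}       # inferred from mutual interactions

# A chronological feed would put the newer gaming post first; the relevance-ranked
# feed surfaces the older travel post because the user engages with travel content.
feed = sorted(posts, key=lambda p: relevance(p, interests, relationships, now), reverse=True)
print([p.topic for p in feed])  # ['travel', 'gaming']
```

Under these assumed weights, an older post on a topic the user engages with heavily outranks a newer post on a topic they rarely interact with, which is precisely the departure from chronological feeds described above.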
Algorithms can have severe impacts on what users see in their news feeds across various social media applications. As media scholar José van Dijck argues, "Social media are inevitably automated systems that engineer and manipulate connections" (Bucher, 2018, p. 2). Van Dijck hits the nail on the head: the more a user engages with one specific type of content, the more content of that nature the algorithm will show them. This is also called a filter bubble, which may lead to echo chambers, in which users only engage with like-minded people, thus amplifying certain beliefs and opinions. Dutch television presenter Arjan Lubach has explained this process in more detail.
As explained, algorithms can determine what a user does and does not see in their news feed, potentially making their view of the world smaller and less nuanced. The use of algorithms can thus affect not only personal lives and worldviews but also businesses, which now need to take the algorithm into account when promoting themselves online through social media.
Research question
To provide more insight into the soft impacts of social media algorithms, this paper will attempt to answer the following research question: "How do algorithms affect the user experience of social media platforms such as Instagram, TikTok and Facebook?"
To answer the research question, two qualitative interviews were conducted. An interpretivist approach is used to develop a detailed description, which can offer more insight into how users experience social media through algorithms. The paper hypothesises that users are aware of the functions of algorithms and therefore change how they view and use social media apps.
Theoretical framework
In order to answer the research question, this paper draws on theoretical concepts relating to algorithms, echo chambers and filter bubbles, with technomoral change as the broader topic. Algorithms have recently featured more prominently in popular news outlets, and it has become more widely known that they affect how users experience social media such as Instagram, TikTok and Facebook. Instagram and Facebook show posts based on their 'relevance' to the user instead of in order of recency. TikTok recommends videos based on what the user has previously watched. All of these applications use 'interaction' with posts (e.g. liking and sharing) to 'know' what the user likes to see. Based on this data, the application shows the user more of the same content.
In this light, it is essential to define what exactly an algorithm is. The definition of 'algorithm' used in this paper is 'a computer system that can perform a specific task without being explicitly programmed'. In addition, algorithms can 'learn' about user behaviour. This 'learning' supplies the algorithm with data through, among other techniques, data mining and image processing. The data is then used for predictive analytics, resulting in a tailored experience for the user (Mahesh, 2019). A side effect is that users will only see content that the algorithm, based on their interactions, predicts they will like. Although this may benefit the user experience, since users only see content that interests them, it may also harbour a more negative side effect: thanks to the algorithm, users may be deprived of diversity in the content shown to them (Bozdag and van den Hoven, 2015). This result is more commonly known as a 'filter bubble'. Closely related to filter bubbles are echo chambers, which arise when users only engage with other users who are (more or less) in the same filter bubble. These users will likely share the same interests, views and opinions, so the user only receives feedback reinforcing their current beliefs rather than conflicting or different messages. This in turn reinforces the filter bubble, and it may lead users to ignore differing information, contribute to polarisation between groups, and strengthen adherence to current beliefs and worldviews (Cinelli et al., 2021).
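The feedback loop behind filter bubbles can be illustrated with a deliberately simplified simulation (a sketch under assumed parameters, not a model of any real recommender): the system shows more of whatever the user engages with, and each engagement in turn reinforces the system's weights.

```python
import random

random.seed(42)

topics = ["news", "sports", "fashion", "gaming"]
# The recommender's learned interest weights, initially uniform.
weights = {t: 1.0 for t in topics}
# The user's (hidden) true engagement probabilities: a mild preference for fashion.
user_preference = {"news": 0.2, "sports": 0.2, "fashion": 0.4, "gaming": 0.2}

for _ in range(200):
    # Show a topic with probability proportional to its learned weight.
    total = sum(weights.values())
    shown = random.choices(topics, weights=[weights[t] / total for t in topics])[0]
    # The user engages according to their true preference; engagement
    # reinforces the weight for that topic, so it gets shown even more.
    if random.random() < user_preference[shown]:
        weights[shown] += 1.0

share = {t: round(w / sum(weights.values()), 2) for t, w in weights.items()}
print(share)  # The mildly preferred topic ends up with the largest share of the feed.
```

Even a mild initial preference snowballs: after a few hundred rounds the feed skews heavily towards a single topic, mirroring the loss of content diversity that Bozdag and van den Hoven (2015) warn about.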
It may seem, therefore, that algorithms hold great power over the user. This, as Bucher explains in her book "If...Then", does not necessarily have to be the case (2018). Bucher draws on the French philosopher Foucault in trying to understand the power that algorithms have. Foucault argues that power primarily comes from the people; in other words, only when we (in this case, the users) give power to something will that object have power over us. Bucher argues that algorithms have some inherent power, as explained above, but that it is the users who give algorithms most of their power. By making claims about algorithms, users grant power to the algorithm through their actions, such as changing their behaviour based on their knowledge of algorithms. Following Foucault, Bucher argues that changing one's behaviour only gives the system more power, creating a never-ending cycle of growing power (Bucher, 2018). Bucher also notes that algorithmic power is neither about how the social world is fabricated nor about what algorithms do; rather, it lies in how and when different aspects of algorithms become available to specific actors, under what circumstances, and who or what gets to be a part of the algorithm (2018, p. 55). This theoretical material will be made more explicit and related to modern-day algorithms in the discussion of the qualitative interviews later in the paper.
As explained in the introduction and the sections above, the growth and use of social media algorithms tie in with the theory of technomoral change as described by Swierstra (2015). Technomoral change can be defined as the phenomenon that technology and morality mutually shape each other, which follows the same principle as explained by Bucher through the theories of Foucault. Swierstra also claims that the soft impacts of modern-day technologies, such as algorithms, can no longer be ignored (2015, p. 8). Filter bubbles and echo chambers are such soft impacts of algorithms: they do not necessarily cause direct harm but can still affect the user experience and have negative consequences for society. Examples include the polarisation that algorithms can cause by only showing the same type of content to a user, and the less diverse view of content, and by extension less diverse worldview, that may result. Swierstra argues that technomoral change constitutes a normative challenge, meaning that moral change is not caused by technological change but can be provoked by it (2015). Such social and ethical issues may arise as follows. Algorithms learn from user data, which has soft implications for the user's privacy, since their data is mined in order for the algorithm to work. User autonomy may also be compromised, since users may (unbeknownst to them) lose some freedom of choice when viewing content they have not sought out themselves but that is provided for them by the algorithm (Royakkers et al., 2018).
Methodology
In order to answer the research question "How do algorithms affect the user experience of social media platforms such as Instagram, TikTok and Facebook?" this study has chosen a qualitative approach. The paper seeks to gain more insight into the soft impacts of social media algorithms. In order to gain more personal insights into the topic, open-ended interviews with two participants are conducted.
Participants and design
Participants for the interviews were chosen based on their social media usage: they needed to use social media, especially Facebook, TikTok and Instagram, on a near-daily basis. The participants also had to have some pre-existing knowledge of algorithms; otherwise, they would not be able to answer most of the interview questions fully. Ultimately, two participants were chosen. The first participant was male, 24 years old, of German nationality, with a university master's degree as their highest level of education. The second participant was female, 24 years old, of Greek nationality, also with a university master's degree as their highest level of education. Age, gender, nationality and level of education can all influence social media usage; to be able to compare the results of the two interviewees adequately, the interviewees therefore needed to have similar demographic backgrounds.
The interviews were conducted in person, separately, on the 11th of October at Maastricht University. Both interviews lasted about thirty minutes. Before the interviews, privacy agreements covering anonymisation and fair use were made, and the interviewees were informed about the background of the interviewer and the aim of the study. The interviews were recorded and later transcribed; the transcriptions can be found in the appendix. The interviews were conducted according to an open-ended setup, meaning that the questions were kept broad so as not to steer the interviewee in a specific direction. Both interviews roughly followed the same structure, which was as follows.
The interview began with some opening questions to establish a baseline, such as how often the interviewees use social media and which types of social media they use. In the middle section, participants were asked to talk about the type of content they see on social media, the type of advertisements they see (if any), what they think their peers and family view on social media, and whether and how they notice algorithms. The interviewees were asked to give their opinions on algorithms as they encounter them on social media. They were also asked whether they were pleased with the content they viewed and whether they thought algorithms played a role in this. Lastly, they were asked whether they would want to change anything about the algorithms on social media and whether their opinions on algorithms had changed during the interview. The interview was closed by summarising it and asking whether they had anything else they wished to state or ask.
Results
The results of the interviews are discussed in this section. The transcriptions of the interviews can be found in the appendix at the bottom of the paper. The first interview was conducted with a 24-year-old male from Germany, who will from here on be referred to by the fictional name "Henry". He used Instagram and YouTube daily and Facebook on a near-daily basis. Interestingly, he noted that he would rather not use those apps daily, stating that he tries to "limit" himself. He used Facebook for information such as news media and other entertainment, and to see what his friends are up to. Henry stated that he usually used social media on his smartphone, since this was easiest. The other interviewee, a 24-year-old female from Greece, will be referred to as "Amy". Amy uses "all of the social media", meaning Facebook, Instagram, YouTube and TikTok. She also stated that using social media was "a part of her life". Facebook was primarily used for news and research, YouTube and TikTok for entertainment, and Instagram to see what her friends are up to and to look at fashion and travel content.
Both interviewees felt that they see similar content on Instagram to their friends, as they all follow each other. However, there are also differences in content, since everybody follows different pages and some friends also use social media for work-related purposes, Amy stated. Amy also noted considerable differences between boys and girls regarding the type of content they view, with girls mainly looking at fashion and travel content, whereas boys mostly look at gaming content. Henry specifically felt that there are significant differences between generations: he and his friends primarily view the same content, but his parents look at entirely different things. He also stated that the older generation, such as his parents, often does not know how social media works.
The interviewees had somewhat different views on what a social media algorithm exactly is. Henry defined algorithms as "[systems] who kind of keep track of your social media actions. Like, if you watch like a certain kind of video, or if you like, pictures, or if you share something, whatever. So they keep track of that. And then based on your preferences, and based on your behaviour online, algorithms show you more of the content that you like, and less of what you're not interested in." This explanation shows that Henry has a fairly well-defined and accurate understanding of algorithms. Amy defined algorithms as "...when you are seeing something a lot of times, after a while an advertisement pops up among the content that you were watching". This shows that Amy immediately links algorithms to advertisements, even though algorithms apply to all content. The interviewer did not correct Amy about the nature of algorithms. Both interviewees stated that they learned about algorithms through their own experience and through discussions with peers on the topic.
When asked about their opinions on algorithms, both respondents said that they feel like they are being "spied on".
Henry also stated that he was concerned about his privacy because of algorithms, since algorithms rely on data mining to work. He stated that he feels he waives his privacy rights when agreeing to the privacy statement shown when users first launch an app, but that this does not feel fair because "no one reads the statements anyways". Amy expressed almost precisely the same feelings, saying that she feels "unprotected" in her privacy. Henry also stated that he sometimes feels manipulated or "lured in" by advertisements shown by the algorithm, which "make" him buy something he otherwise would not have known about and maybe would not be interested in.
Both interviewees could also identify some perks of algorithms. Both felt that they were rarely shown content they did not find interesting. Amy also stated that the algorithm often surfaces interesting content that she might otherwise not have seen. Henry felt that he might as well use algorithms, since "[his] data would be used for other purposes anyway". Both interviewees were happy with the content they see daily on their social media. They realised that the algorithm might have some influence on this, but looking back on a time when there were no algorithms on social media, they were also happy back then. Both interviewees also felt it essential to realise that "social media is fake", especially given the influence of algorithms.
When asked whether they wonder about the information that the algorithms do not show them, the interviewees gave mixed reactions. Henry stated that social media would probably be more diverse without algorithms, and that especially regarding news he found it important that users receive information from multiple sources. However, he mentioned that he mainly uses social media for entertainment purposes, for which it is less important to him that diverse content is shown. Amy mentioned that it would be good for her to be shown more diverse content, especially when it comes to fashion. She explained that she can feel self-conscious about her body when she only sees gorgeous models on social media; therefore, it would be better to show some more diversity.
Lastly, both interviewees said that they had never thought about the implications of algorithms as in-depth as during the interview, which made them reflect on the topic. Especially when thinking about the content that the algorithms may "hide" from them, the interviewees felt somewhat more pessimistic about algorithms than before the interview.
Discussion and conclusion
The aim of this paper was to find out, through qualitative interviews, how algorithms affect the user experience of social media platforms such as Instagram, TikTok and Facebook. The paper hypothesised that users are aware of the functions of algorithms and therefore change how they view and use social media apps. This research method has some limitations, such as the unverifiability of the data, the labour- and cost-intensiveness of the process, and the limited generalisability of the findings. In addition, only two participants were questioned, which compromises the generalisability and validity of the study. The participants had similar demographics, which has its advantages, as explained in the methods section, but also limits the study's generalisability. Lastly, the interviewees both had pre-existing knowledge of the subject, which may compromise generalisability once more.
The interviews showed that the hypothesis was partially correct. The interviewees were indeed aware of algorithms and their functions, although one interviewee was more aware than the other. However, the interviews did not show that users change their behaviour on social media based on this knowledge. Quite the contrary: the respondents did not feel they could change anything about the algorithm and its implications for their social media use. This is in line with Bucher's theory that users give algorithms power (2018): by not taking any action to counter algorithms, users increase their power.
The most important findings of the study are that users do not feel their privacy is protected, something that Royakkers et al. had already anticipated (2018). Users feel that algorithms "spy" on them, which makes them feel unsafe and uneasy. At the same time, they do not feel they can do anything about these implications of social media algorithms. The interviews also showed that knowledge about algorithms and how they work may be lacking, especially among certain (age) groups; this should be investigated in further research. It has become clear that information about algorithms, and about how to deal with them when they make the user feel unsafe or unprotected, must be spread more widely.
The answer to the research question is that algorithms do not necessarily influence the user experience on social media: algorithms often go unnoticed. Sometimes, however, the user experiences negative feelings due to the intrusiveness of the algorithm. Users often feel that the only way to counter algorithms is not to use social media at all. It would be worthwhile for further studies to search for a 'middle way' of dealing with algorithms and privacy issues without shutting out social media as a whole.
References
- Bozdag, E., & van den Hoven, J. (2015). Breaking the filter bubble: Democracy and design. Ethics and Information Technology, 17(4), 249–265. https://doi.org/10.1007/s10676-015-9380-y
- Bucher, T. (2018). If...Then: Algorithmic Power and Politics. Oxford University Press.
- Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9), e2023301118. https://doi.org/10.1073/pnas.2023301118
- Elbedeiwy, S., & Hieber, M. (2020, May 28). The evolution of the Google algorithm: Why content matters now more than ever. D Custom. https://dcustom.com/blog/optimization/2020/05/google-algorithm-evolution-2/
- Golino, M. A. (2021, April 24). Algorithms in social media platforms. Institute for Internet & the Just Society. https://www.internetjustsociety.org/algorithms-in-social-media-platforms
- Kazansky, B., & Milan, S. (2021). "Bodies not templates": Contesting dominant algorithmic imaginaries. New Media & Society, 23(2), 363–381. https://doi.org/10.1177/1461444820929316
- Mahesh, B. (2019). Machine learning algorithms: A review. International Journal of Science and Research, 9(1), 1–6.
- Royakkers, L., Timmer, J., Kool, L., & van Est, R. (2018). Societal and ethical issues of digitization. Ethics and Information Technology, 20(2), 127–142.
- Swierstra, T. (2015). Identifying the normative challenges posed by technology's 'soft' impacts. Etikk i praksis - Nordic Journal of Applied Ethics, 9(1), 5–20.