Social media has become an ever-present force in public life, rapidly transforming the way people communicate and express themselves. With platforms such as Facebook, Twitter and Snapchat, citizens can now, more than ever, speak their minds freely and have their opinions heard. Freedom of expression is a right whereby “people have the freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers” (Equality and Human Rights Commission, 2022). Although social media platforms claim to encourage this right, this essay will argue that they limit freedom of expression to a detrimental extent, for three main reasons: first, because their operations break their stated commitment to neutrality; second, because of their disregard for laws and regulation; and finally, because social media platforms can be a hotbed for discrimination and hate speech. All of these restrict freedom of expression, and the problem is worsened by the platforms claiming plausible deniability. The focus of this essay will therefore be on how these platforms allow this to happen, and how, by accepting responsibility and working alongside governments and marginalised communities, they can foster trust and restore people’s ability to express themselves freely.
Neutrality has become a standard ethos for Silicon Valley social media platforms. They were designed simply to pass on information from users in a network without additional editorial input; the online public sphere thereby equalises different societal classes and empowers everyone to speak for themselves (Gobble, 2018). However, these platforms are in fact not as neutral and apolitical as their founders make them out to be, and this works against freedom of expression.
Firstly, the platforms’ original intention of giving fringe voices a space has potentially made it harder for the middle ground to be heard. Chander and Krishnamurthy (2018) argued that the algorithms of social media platforms such as Twitter and Facebook are not programmed to present a complete set of the information produced by everyone in a user’s network. They were designed to maximise users’ time spent on the platforms and to induce them to click on advertising links for the companies’ benefit. Even if no political consideration is built into the development of these algorithms, the resulting spread of ‘fake news’ can nonetheless favour particular political sides (Lazer et al., 2018). Users’ freedom to access balanced and comprehensive information is thus limited.
For example, Twitter’s own algorithm ranks tweets returned for a search query by assessing users’ current engagement with them and mathematically predicting their potential to trend (Newberry and Sehl, 2021). This represents a value system in which the quality of information is judged by the traffic it generates, rather than by experts verifying its accuracy. It may explain why, at the time of writing, half of the top ten results for the search “is the vaccine safe” on Twitter are anti-vaccination statements, even though vaccine safety is beyond scientific dispute. Such a design can also be exploited as a political tactic to disseminate polarised information, while users can only passively receive whatever is orchestrated for them.
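To make the mechanism concrete, the following is a minimal, hypothetical sketch of engagement-based ranking written in Python. It is not Twitter’s actual code; the post texts, field names and weights are invented purely for illustration. The point it demonstrates is that when a score is built only from current engagement and a crude prediction of future engagement, an inflammatory but inaccurate claim can outrank an accurate one, because accuracy never enters the calculation.

```python
# Hypothetical toy model of engagement-based ranking (not any platform's real system).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    replies: int
    recent_velocity: float  # assumed signal: engagements in the last hour

def engagement_score(post: Post) -> float:
    # Score = current engagement + a naive prediction that fast-moving posts keep growing.
    current = post.likes + 2 * post.replies
    predicted = 10 * post.recent_velocity
    return current + predicted

posts = [
    Post("Peer-reviewed data show the vaccine is safe.", likes=40, replies=5, recent_velocity=1.0),
    Post("The vaccine is dangerous!!!", likes=300, replies=120, recent_velocity=25.0),
]

# Ranking purely by engagement surfaces the inflammatory claim first,
# regardless of which statement is accurate.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.text}")
```

Even in this toy model, accuracy plays no role in the ordering, which is precisely the traffic-based value system described above.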
As a consequence, unhealthy echo chambers and political polarisation are highly likely to result. These harms are also likely to be difficult to reverse and may worsen over time: Beckett’s (2011) findings showed that people’s prejudiced opinions tend to be amplified rather than corrected when fact-checkers or opponents critique them, because human psychology leans towards confirmation bias when confronted with hostile points of view.
Secondly, while DeNardis and Hackl (2015) argued that platform guidelines are neutral and can both supervise the online public sphere and aid freedom of expression, it can be argued that the companies’ autonomy and the ambiguity of their guidelines are undermining that liberty.
Community guidelines, written by the people who control the platforms, are not neutral on certain issues; they take sides. For example, Facebook’s rules state that “we welcome a multitude of viewpoints”, but only within the limits of its own guidelines (Mosseri, 2016); Bumble removed a pro-Trump media personality who had been accused of being a white nationalist, citing its “Stance Against Hate”, even though the person denied the authenticity of the dating profile in question (Notopoulos, 2018). Such guidelines determine platform content directly and without any evidence-based reference, and so they damage the diversity of voices on the platforms.
Meanwhile, the guidelines state that users should not target victims of discrimination, yet they do not clearly ban such behaviour, and different platforms judge differently what constitutes hate speech. Interpretations of the guidelines can even change with circumstances: when critics argued that former US President Donald Trump had violated Twitter’s guidelines, Twitter responded that it would not ban his account because doing so would “hide important information” (Newton, 2018), yet it then permanently suspended the account in 2021 “due to the risk of further incitement of violence” (Al Amri, 2021). The ambiguity of the guidelines to which users are subject therefore adds uncertainty to their freedom of expression and creates unfairness for users.
Overall, even though the giants of Silicon Valley try to portray themselves as neutral, apolitical platforms with due respect for freedom of expression, their effects are anything but neutral. They should accept that they are intentionally managing content, and start taking responsibility for improving their algorithms and guidelines so that they work for the social good.
Social media platforms can also restrain freedom of expression through their ability to disregard laws and regulations. Platforms can act as quasi-governmental entities that engage with users politically more than their own governments do. Although this can give users an avenue to express their opinions more freely, it happens within the platform itself, whose in-built algorithms moderate user speech, further legitimising social media companies as sources of political information and as authorities over established government bodies. Everett (2018) claimed that platforms such as Facebook and Twitter are international in scope yet remain bound by local law in Silicon Valley: they may have the prevalence, familiarity and capacity to regulate user action, but the government is obliged to protect users’ speech rights precisely because the platforms function as quasi-governmental entities. In practice, however, these platforms tend to avoid responsibility by claiming impartiality, while undoubtedly competing against government regulation and more traditional news organisations to present themselves as the primary source of information and public political discourse (Hare and Weinstein, 2014). Freedom of speech can therefore be limited because these platforms, with access to users’ data, can deliver whatever message they choose directly to online citizens and thus influence their expression.
Before social media, journalistic organisations acted as the traditional gatekeepers who selected or produced news, and with government approval they were unparalleled in their ability to disseminate information (Russell, 2017). However, it can be argued that with the creation of social media platforms, people now have access to information from across the globe with an unlimited number of interpretations, so that average citizens themselves act as secondary gatekeepers, passing on news media by directly sharing information (Russell, 2017). This argument is strengthened by the claim that social media platforms create a “semiotic democracy” in which all individuals have the power to participate in the discussion (Chander and Krishnamurthy, 2018). This could lead to greater freedom of expression because people are no longer limited to the information given by traditional news sources or the government; they are interconnected in their own democratic community where they can express themselves freely.
However, this idea is constrained by the fact that as social media platforms gain influence over the average person’s life, they take on a governmental role that limits freedom of expression, whether intentionally or not. As private businesses, Silicon Valley social media platforms actively want less government regulation, since their public relations and stock prices suffer whenever they fail to uphold the law. Governments, by contrast, want to protect their citizens’ rights, and so Silicon Valley companies are asked to open up: as more people use their services, they should become more subject to government protection and legislation (Strebel and Kübler, 2021). A prime example of when this did not occur came in the late 2010s, when personal data belonging to millions of Facebook users was collected and analysed without their consent by a firm called Cambridge Analytica. This data was predominantly used in political advertising, which reportedly influenced the Brexit referendum. Facebook was accused of lacking regulation and, worse, of preventing governments from checking and critiquing its practices. This shows how social media platforms resist critique and inspection while also exposing their users’ information to third parties.
Additionally, platforms like Facebook and Twitter project an identifiable message of absolute public openness and seek access to as many users as possible, but this in turn makes them a greater priority for government regulation. Operating social media as a public forum should be in line with how US law insists that freedom of speech is exercised; censorship through arbitrary guidelines does not support this. In under two decades, “Facebook created one of the most effective forums for discourse, but actively disposes of nearly 100 years of free speech jurisprudence” (Everett, 2018). Nonetheless, the Supreme Court recognises social media as arguably the strongest means users have to express themselves, more so than any other engine of expression. Consequently, given the encroachment of social media platforms and the speed at which they have grown, it is paramount, in order to preserve freedom of speech, that government regulation controls this growth while also guaranteeing citizens’ freedom of expression.
In essence, given social media’s growing entrenchment in society, these platforms will actively use this position to limit freedom of expression to a detrimental degree. They act as a countercultural ideal that people who distrust traditional media can use to express themselves. However, their determination to stave off government regulation is a serious concern, because it allows them to shirk their responsibility to uphold free speech. The solution is for governments to take a stricter stance in enforcing the law and delivering punishment for breaches.
Social media platforms can also end up limiting freedom of speech by acting as a conduit for hate speech and other forms of harm. Political groups and organisations have always sought alternative means to reach supporters or ideologically aligned members of society. While Stier (2015) argued that social media has given fringe organisations a chance to speak for themselves and thus exercise their right to expression, the reality is that as more political groups compete for power, the voices of individuals become muddled, and their expression becomes divided along political lines and can even degenerate into discrimination and hate speech.
As previously mentioned, one of the benefits of social media has been to give a voice to those who would not normally participate in conventional society; disenfranchised groups have been able to create communities to express themselves and fight for their causes. However, Natter (2018) found that it has also given a home to many malicious groups who exploit the professed impartiality of Silicon Valley platforms. Fringe organisations on the far right and far left, banned from advertising themselves in traditional news media outlets, can often use social media as an important gateway to relevance. The result is an outlet for hate speech and the stifling of individual voices.
Political groups that deploy hate speech and drown out freedom of expression often rely on identity confirmation and opponent dismissal: hyperlinks, forums and social media groups are used to funnel endorsement towards a particular political cause or issue, accentuating a party’s policy message and priorities, for example, right-wing parties linking to nationalist, homegrown groups or businesses in retaliation against opposing groups (Ackland and Gibson, 2013). A good example was the creation of the Patriot Party, a far-right white supremacist group, in 2019, which used Facebook and Twitter spaces to spread racist hate speech and mobilise people to overthrow the US government. Even though the aim was far-fetched, it showed how such groups can mobilise and gain attention on a large scale.
There is also a recognition that social media algorithms harvest vast amounts of user data and then target users with personalised information to which they are more susceptible, in order to generate more engagement. Users can thus be easily trapped in an enclosed information system where their pernicious biases are reinforced. According to Harrison (2021), algorithmic filter bubbles have contributed hugely to the rapid spread of hate speech, extremism and white supremacy online in recent years. Similarly, Lanier (2019) stated that search engines can automatically recommend racism-related content, and that algorithms on their own can produce results that reflect discrimination, such as speech against women and minorities. Moreover, algorithms deliver prejudiced information designed to trigger users’ fight-or-flight responses, among the most ancient of human emotions, in order to help the platforms gain more traffic. It can therefore be seen that both machine-learning systems and real-life extremist organisations contribute greatly to the widespread presence of hate speech and discrimination on social media.
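The feedback loop behind such filter bubbles can be illustrated with a minimal, hypothetical sketch, again in Python. This is not any platform’s real recommender system; the topic labels, weights and the assumption that the user engages with everything shown are invented for illustration. Because each topic’s chance of being shown grows with how often the user has already engaged with it, exposure narrows towards a single viewpoint over repeated rounds.

```python
# Hypothetical toy model of a filter-bubble feedback loop (not a real recommender).
import random

TOPICS = ["mainstream news", "fringe conspiracy", "sports", "science"]

def recommend(history):
    # A topic's chance of being shown grows with how often the user has already
    # engaged with it -- a crude engagement-maximising feedback loop.
    weights = [1.0 + 3.0 * history.count(topic) for topic in TOPICS]
    return random.choices(TOPICS, weights=weights, k=1)[0]

random.seed(0)
history = []
for _ in range(15):
    shown = recommend(history)
    history.append(shown)  # assume the user engages with whatever is shown

# Later items cluster around one topic, illustrating the enclosed information system.
print(history)
```

In this toy loop, nothing about the content itself matters; only prior engagement does, which is how a biased or extremist starting point can be progressively reinforced.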
Freedom of expression is only guaranteed if citizens feel safe and protected, both online and in real life. It is therefore important to note how social media can allow hate speech and fringe extremist groups to spread misinformation and hostility that extends into real-life political situations. This is worsened by the fact that these groups can rely on social media algorithms to spread harm to millions of people with very little effort or oversight. One way to address this is for social media platforms to work alongside marginalised communities to find ways of combating this harm and preventing further atrocities in the real world.
To conclude, this essay has discussed three ways in which Silicon Valley social media platforms harm freedom of expression. First, their ostensible tenet of neutrality has become a subterfuge for unregulated algorithms and guidelines. Secondly, their disregard for laws and regulations has made it easier for them to impose their view of the world. Finally, their failure to stop extremist groups from hijacking the platforms, aided by algorithms, to spread hate speech and cause real-life violence prevents users from expressing themselves freely. By claiming plausible deniability, the platforms endanger millions of users and their freedoms. To better serve the public, they should adopt a more open approach: accepting legal regulation, taking responsibility, and spending the resources needed to reform guidelines and algorithms currently driven by profit or political interest. By doing so, social media could lead to greater freedom of expression than ever before.
Bibliography:
Ackland, R. and Gibson, R., 2013. Hyperlinks and networked communication: a comparative study of political parties online. International Journal of Social Research Methodology, 16(3), pp.231-244.
Al Amri, J., 2021. Permanent suspension of @realDonaldTrump. [online] Blog.twitter.com. Available at: <https://blog.twitter.com/en_us/topics/company/2020/suspension> [Accessed 30 January 2022].
Beckett, C., 2011. SuperMedia. Chicester: John Wiley & Sons, pp.89-92.
Chander, A. and Krishnamurthy, V., 2018. The myth of platform neutrality. Geo. L. Tech. Rev., 2, p.400.
DeNardis, L. and Hackl, A.M., 2015. Internet governance by social media platforms. Telecommunications Policy, 39(9), pp.761-770.
Equalityhumanrights.com. 2022. Article 10: Freedom of expression | Equality and Human Rights Commission. [online] Available at: <https://www.equalityhumanrights.com/en/human-rights-act/article-10-freedom-expression> [Accessed 7 February 2022].
Everett, C.M., 2018. Free speech on privately-owned fora: a discussion on speech freedoms and policy for social media. Kan. JL & Pub. Pol'y, 28, p.113.
Gobble, M., 2018. Rethinking the Silicon Valley Myth. Research-Technology Management, 61(1), pp.64-67.
Hare, I. and Weinstein, J., 2014. Extreme Speech and Democracy. Oxford: OUP Oxford, pp.78-82.
Harrison, L., 2021. Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Umoja Noble. College Student Affairs Journal, 39(1), pp.103-105.
Lanier, J., 2019. Ten arguments for deleting your social media accounts right now. London: Vintage, pp.13-18.
Lazer, D., Baum, M., Benkler, Y., Berinsky, A., Greenhill, K., Menczer, F., Metzger, M., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S., Sunstein, C., Thorson, E., Watts, D. and Zittrain, J., 2018. The science of fake news. Science, 359(6380), pp.1094-1096.
Mosseri, A., 2016. Building a Better News Feed for You | Meta. [online] Meta. Available at: <https://about.fb.com/news/2016/06/building-a-better-news-feed-for-you/> [Accessed 30 January 2022].
Natter, K., 2018. Rethinking immigration policy theory beyond ‘Western liberal democracies’. Comparative Migration Studies, 6(1).
Newberry, C. and Sehl, K., 2021. How the Twitter Algorithm Works in 2021 and How to Make it Work for You. [online] Social Media Marketing & Management Dashboard. Available at: <https://blog.hootsuite.com/twitter-algorithm/> [Accessed 29 January 2022].
Newton, C., 2018. Twitter explains why it hasn’t banned President Trump. [online] The Verge. Available at: <https://www.theverge.com/2018/1/5/16855900/twitter-donald-trump-ban-explanation> [Accessed 30 January 2022].
Notopoulos, K., 2018. Bumble Says It Just Banned This Pro-Trump Media Personality. [online] BuzzFeed News. Available at: <https://www.buzzfeednews.com/article/katienotopoulos/jack-posobiec-bumble> [Accessed 30 January 2022].
Russell, F.M., 2017. The new gatekeepers: an institutional-level view of Silicon Valley and the disruption of journalism. Journalism Studies, 20(5), p.631.
Stier, S., 2015. Political determinants of e-government performance revisited: Comparing democracies and autocracies. Government Information Quarterly, 32(3), pp.270-278.
Strebel, M. and Kübler, D., 2021. Citizens’ attitudes towards local autonomy and inter-local cooperation: evidence from Western Europe. Comparative European Politics, 19(2), pp.188-207.