Aided by technological development, the internet has become a virtual public sphere in which people convey messages as part of political and journalistic processes. The emergence of networked political communication in recent years has raised new questions about its relationship with democracy. This essay assesses the role of networked political communication and argues that, instead of encouraging healthier democracies, it is in fact creating polarization and echo chambers.
Networks and political echo chambers
Communities of like-minded individuals
Low-cost access to the internet, regardless of geographic distance, has driven its widespread adoption. No longer bound by physical proximity, citizens can connect and organize around shared interests, however niche or rare. These conversations may take place in the open, on forums or blogs, but also in more private settings, such as closed Facebook groups and private communities.
Wilson (2001) argues that online spaces create opportunities for better democracy: they can promote the development of positions that would otherwise be silenced, and offer a safe space for people who suffer discrimination. For example, during the particularly polarizing 2016 US presidential election campaign, supporters of Hillary Clinton created a private Facebook group where they could express their opinions without fear of harassment.
However, this essay argues that in practice the modal outcome of this “better democracy” is group polarization, which, as Sunstein (2019) indicates, represents a “breeding ground for extremism” and “may even put social stability at risk”. His concern is that “through the mechanisms of social influence and persuasive arguments, members will move to positions that lack merit”. This can happen either because the homogeneity of a networked group restricts the size of its argument pool, or because individuals are more likely to voice an opinion that is already popular within the group in order to win the approval of as many members as possible. Both network mechanisms are supported by research showing that individuals who participate in homogeneous discussion groups tend to adopt more extreme positions after deliberating with their like-minded peers (see e.g. Myers and Lamm, 1976).
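The restricted-argument-pool mechanism can be illustrated with a toy simulation (a hypothetical sketch, not drawn from the studies cited here). In the bounded-confidence model, each agent updates its view only towards opinions already close to its own - a crude stand-in for a homogeneous argument pool - and an initially mixed population splits into internally homogeneous, mutually insulated clusters:

```python
import random

def bounded_confidence_step(opinions, eps):
    """One update: each agent moves to the mean of all opinions
    within its confidence bound eps (including its own)."""
    new = []
    for x in opinions:
        neighbours = [y for y in opinions if abs(y - x) <= eps]
        new.append(sum(neighbours) / len(neighbours))
    return new

random.seed(42)
opinions = [random.random() for _ in range(100)]  # mixed initial views in [0, 1]
for _ in range(50):
    opinions = bounded_confidence_step(opinions, eps=0.15)

# Count clusters: runs of opinions separated by more than eps
ops = sorted(opinions)
clusters = 1 + sum(1 for a, b in zip(ops, ops[1:]) if b - a > 0.15)
print(f"{clusters} insulated opinion clusters")
```

With a wider confidence bound (a larger `eps`, i.e. agents willing to hear more distant views), the same population merges into a single consensus - the deliberative outcome that homogeneous groups fail to reach.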
Sunstein (2019) also argues that this mechanism lies behind the rise in the number of political hate groups and extremist organizations with a strong online presence, such as white supremacists and Holocaust deniers. Al-Ghazzi (2021) used this argument in the lecture to explain the emergence of online groups with less nefarious intentions but whose success can be equally polarizing. For example, in the book “The Outrage Industry”, Berry and Sobieraj (2013) document how digital tools were critical to the emergence and organization of the Tea Party movement: they lowered the barriers to entry for those who wanted to get involved and offered a space where political messages could be refined and agreed upon.
A study by Lawrence, Sides and Farrell (2010) lends further support to this view. They found that network users generally gravitate towards information that aligns with their political orientation and are more likely to be polarized than non-users. Conover et al. (2012), for example, showed that partisan Twitter users are significantly more likely to spread messages congenial to their ideological positions, and that close to 75% of retweets on political topics occur between users with similar ideological views. Similarly, Del Vicario et al. (2016) found that information related to scientific news and conspiracy theories also tends to spread within homogeneous, polarized communities on Facebook.
It is thus clear that the “better democracy” promised by networked political communication has in fact helped build a closed system in which political beliefs are amplified by communication and repetition and insulated from rebuttal. At the same time, governments and political journalists are aware that they can rely on these ready-made filtered groups, or bubbles, to exercise further spin control.
The filter bubble
Wilson (2001) argued that networked political communication gives audiences unprecedented freedom and autonomy to engage with whatever diverse content they choose.
However, as an increasing number of diverse opinions is placed in front of anyone using the internet, it has become clear that humans are generally mentally unprepared to deal with the volume. In this context, search engines and social networking sites have come to depend on real-time, automated, personalized algorithms to help users navigate their browsing sessions. This around-the-clock mode of consumption ensures that political groups and journalists can deliver a 24-hour drip feed of information to tired, overwhelmed minds, keeping their supporter bubbles active even when those minds are no longer able to deliberate clearly.
In contrast to newspapers and TV newscasts, where journalists act as gatekeepers, determining which stories are newsworthy, network algorithms tailor information to each individual’s preferences and interests. This ‘architecture machine’ “tracks what you skipped and reread and where you paused, then use those cues to evolve into a composite system that would carry only the news you cared about most” (Hapgood, 2005).
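How such personalization narrows exposure can be sketched in a few lines (a purely hypothetical ranker, not the actual algorithm of any platform): stories are scored by how often the user has previously engaged with their topic, so topics the user already favours rise to the top of the feed and dissenting topics sink:

```python
from collections import Counter

def rank_stories(stories, history):
    """Hypothetical personalized ranker: score each story by how many
    times the user previously clicked on its topic, highest first."""
    weights = Counter(topic for topic, clicked in history if clicked)
    return sorted(stories, key=lambda s: weights[s["topic"]], reverse=True)

# Illustrative engagement history: (topic, did the user click?)
history = [("left_politics", True), ("left_politics", True),
           ("sport", True), ("right_politics", False)]
stories = [{"id": 1, "topic": "right_politics"},
           {"id": 2, "topic": "left_politics"},
           {"id": 3, "topic": "sport"}]

feed = rank_stories(stories, history)
print([s["topic"] for s in feed])
# → ['left_politics', 'sport', 'right_politics']
```

Because the next session’s clicks are drawn from an already-congenial feed, each iteration of this loop narrows the range of topics the user sees - the feedback dynamic behind the filter bubble.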
Sunstein (2019) draws on this to justify his worry about the age of the algorithm, in which citizens are no longer in control of the information they consume. Even if people would not voluntarily gravitate towards echo chambers - where they encounter only beliefs and opinions that coincide with their own - they may have no option, as network sites become the arbiters of what people do and do not see. This dynamic is ready to be exploited by political journalists, extremist terrorist groups, campaigners, and anyone else with a political agenda who wants to disseminate heavily biased information to ready and waiting consumers.
Hence, the argument is that network algorithms are likely to increase inequalities in civic engagement between politically interested people and those who prefer to consume only entertainment news. A further argument is that, for those with clear partisan preferences, the algorithms reduce the extent to which they hear from the other side, even below the level they would voluntarily choose. The outcome of this dual process, as Beckett (2019) claims, is a society in which the shared experiences necessary to a well-functioning democracy disappear.
Networks and political polarization
Ideological polarization - divergence in political views or issue positions
Instead of receiving the cross-cutting information from the internet that Wilson (2001) claimed would enable democratic progress, citizens’ online experiences are in practice shaped in predictable ways by their own political and sociodemographic characteristics.
Zaller et al. (2002) argued that politically aware individuals are more receptive to pro-attitudinal messages: they are more likely to accept supporting arguments uncritically and to reject counter-attitudinal arguments, leading to attitude polarization. Because beliefs grounded in misinformation are difficult to alter, defusing polarization would require more than simply increasing exposure to cross-cutting political views. In Sunstein’s (2019) view, this pattern is exacerbated simply by citizens’ self-selection into the consumption of online information, based on their pre-existing level of political interest.
Affective (Psychological) polarization - dislike for the partisan outgroup
DeGroot (2004) regards information environments in which individuals are exposed to multiple viewpoints as spaces that should lead to democracy, social consensus and cohesion.
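This intuition can be made concrete with a minimal sketch of an averaging model of opinion formation (the trust weights below are assumed for illustration): on a connected network where every agent gives some trust to the others, repeated averaging of viewpoints drives initially divergent opinions towards a single consensus:

```python
def averaging_round(opinions, trust):
    """One round: each agent adopts a trust-weighted average of the
    opinions of everyone it listens to. Rows of `trust` sum to 1."""
    n = len(opinions)
    return [sum(trust[i][j] * opinions[j] for j in range(n)) for i in range(n)]

# Three agents who each give some weight to the other two (a connected network)
trust = [[0.6, 0.3, 0.1],
         [0.2, 0.6, 0.2],
         [0.1, 0.3, 0.6]]
opinions = [0.0, 0.5, 1.0]  # initially divergent views

for _ in range(100):
    opinions = averaging_round(opinions, trust)
print(opinions)  # all three agents converge to the same value
```

The consensus depends on the network being connected: if the trust matrix splits into blocks that ignore one another - the echo-chamber case - each block converges to its own separate value instead.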
However, biased information processing is driving a perceived increase in polarization among the public. This is particularly the case in America, where - as Habermas and Lennox (2010) point out - even if Democrats and Republicans agree on some policy positions, they increasingly dislike and distrust one another and perceive the social distance between them to have expanded. This conceptualization of polarization is often called “affective polarization”.
Settle (2018) articulates the argument that core features of social media platforms - such as the fusion of social and political content, or the ability to easily infer other users’ political identities - have a direct impact on the aggregate level of psychological polarization. The mechanism is that these features facilitate the psychological processes behind affective polarization: the reinforcement of social and political identities, combined with citizens’ biased information processing.
As Settle (2018) shows through a series of survey studies and lab experiments, network usage (and the type of partisan content shared on Facebook) widens the perceived gap between individuals’ own positions and where they perceive the outgroup to be, makes political and social identities more correlated, and contributes to stereotyping and negative evaluations of the outgroup.
Overall, it is evident that networked political communication reinforces consumers’ biased opinions and aggravates conflict between different societal and political groups. Steering internet content away from the political beast it has become, towards a goal of well-considered, well-researched arguments being offered and deliberated, will remain a challenge. As long as anyone can create content and access bubbles of easily led thinkers, this is unlikely to happen any time soon.
Ways in which it could begin to happen would be:
1. Working for good: Journalists should always adhere to the principle of objectivity in order to serve the public. Governments, enlightened leaders and activists should help steer democratic processes towards better outcomes.
2. Evolving individuals: Increase citizen awareness, improve digital literacy and promote better engagement through educators.
3. Adapting systems: Improve social media systems and the ethos among technologists, perhaps by limiting the algorithms that steer consumers into filter bubbles.