With advancements in information and communication technologies, the news media industry has undergone a sea change, and the emergence of artificial intelligence (AI) and machine learning has redefined journalism. The process of producing a news story and delivering it to consumers has been reshaped by the adoption of these newer technologies.
In addition to human staff, robot reporters and automated journalism are making inroads, and digital media has intensified the dissemination of news. This essay will argue that AI is destroying public trust in journalism, even though it has benefitted the news industry in some ways.
Firstly, according to Merrill (2018), articles generated automatically by robots and then reviewed and corrected by journalists contain far fewer errors than those written entirely by hand. AI could thus help limit errors in published articles.
Moreover, Bontcheva (2016) further stated that, with the help of AI, news organisations can deliver enhanced news quality and accuracy by identifying and dismantling fake news. Algorithms have been designed that can distinguish between human- and AI-generated content. Some use comparative analysis to check whether the stated facts match reliable sources, whereas others look for discrepancies between the title and the content, thereby identifying clickbait. Journalists benefit from the speed of such automated fact-checking AI.
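As a rough illustration of the title-versus-content comparison just described, the following sketch flags headlines whose vocabulary barely overlaps with the article body. The tokeniser, Jaccard measure, and threshold are illustrative assumptions, not any deployed detector.

```python
import re

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def clickbait_score(title, body):
    """Jaccard overlap between title and body vocabulary.

    A headline that shares almost no substantive words with the
    article text is a common clickbait signal.
    """
    t, b = tokens(title), tokens(body)
    if not t or not b:
        return 0.0
    return len(t & b) / len(t | b)

# Illustrative threshold -- an assumption, not an empirical value.
CLICKBAIT_THRESHOLD = 0.05

def looks_like_clickbait(title, body):
    return clickbait_score(title, body) < CLICKBAIT_THRESHOLD

title = "You won't believe what happened next!"
body = "The city council approved the new transit budget on Tuesday."
print(looks_like_clickbait(title, body))  # True: title and body barely overlap
```

Production fact-checkers are far more sophisticated, but the underlying comparison between what a headline promises and what the text delivers works along these lines.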
However, Green (2021) has claimed that AI in journalism is in fact more likely to be used to create fake news, deceiving and disillusioning the public.
As Green (2021) explained, an important method for defeating these detection programs is known as the "adversarial" approach. Adversarial machine learning is the process of crafting malicious or misinforming content that can slip past detection programs, and it has been used by governors, politicians and journalists to deliberately manipulate and fabricate stories.
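The toy sketch below illustrates the shape of that evasion loop: a naive keyword "detector" and an attacker that rewrites flagged words until the text passes. Real adversarial machine learning perturbs inputs against learned models rather than word lists; the detector vocabulary and substitutions here are purely illustrative assumptions.

```python
# Toy illustration of the adversarial idea: an attacker repeatedly
# perturbs content until a (naive) detector no longer flags it.
FLAGGED = {"hoax", "miracle", "shocking"}          # naive detector vocabulary
SUBSTITUTES = {"hoax": "report", "miracle": "result", "shocking": "notable"}

def detector_flags(text):
    """Return the flagged words the 'detector' finds in the text."""
    return {w for w in text.lower().split() if w in FLAGGED}

def adversarial_rewrite(text):
    """Rewrite the text until the detector passes it (evasion loop)."""
    text = text.lower()
    while True:
        hits = detector_flags(text)
        if not hits:
            return text                   # detector evaded
        for word in hits:
            text = text.replace(word, SUBSTITUTES[word])

claim = "shocking miracle cure revealed"
print(detector_flags(claim))              # {'shocking', 'miracle'}
print(adversarial_rewrite(claim))         # "notable result cure revealed"
```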
Another method is the creation of "deepfakes": doctored or artificially generated videos and photos that superimpose the face and body of one person onto another, making it appear that they carried out a certain action.
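A common face-swap design underlying many deepfakes pairs one shared encoder with a separate decoder per identity; decoding person A's face code with person B's decoder produces the swap. The PyTorch sketch below shows this layout only in outline, with layer sizes that are illustrative assumptions (real systems use convolutional networks trained on thousands of frames).

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: learns a common face representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
            nn.Linear(512, 128),          # shared latent face code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs one person's face."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

face_a = torch.rand(1, 3, 64, 64)             # stand-in for a real photo
swap = decoder_b(encoder(face_a))             # A's expression on B's face
print(swap.shape)                             # torch.Size([1, 3, 64, 64])
```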
Deepfakes can have drastic consequences when misused. From propaganda inciting hatred and violence to maligning public figures with fabricated speeches and doctored videos, deepfakes can sow confusion and result in a severe loss of public trust and reputation.
A prime example of such misuse was the release of a doctored video of Facebook CEO Mark Zuckerberg around the time of his congressional hearing.
At the same time, fake news can also be produced by the constraints of AI itself. While there are only a few examples of robot journalists producing fake news online for this reason, some are striking. For example, Quakebot, a robot designed to report earthquakes automatically, once announced an earthquake, reporting several deaths, when the event was in fact 92 years old. The root cause was human: a manual edit replaced the date of the 1925 event with 2025, but the robot failed to spot the inconsistency and began disseminating the news. The example shows that AI journalism still requires human review to avoid such serious mistakes.
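A simple sketch of the kind of timestamp sanity check that could have caught such an error is shown below; the staleness window is an illustrative assumption.

```python
from datetime import datetime, timezone

MAX_AGE_DAYS = 7   # an alert about a week-old quake is already stale news

def should_publish(event_time: datetime) -> bool:
    """Refuse events whose timestamps are implausibly old or in the future."""
    now = datetime.now(timezone.utc)
    if (now - event_time).days > MAX_AGE_DAYS:
        return False          # historical record, not breaking news
    if event_time > now:
        return False          # future-dated entry is almost certainly an error
    return True

print(should_publish(datetime.now(timezone.utc)))                    # True
print(should_publish(datetime(1925, 6, 29, tzinfo=timezone.utc)))    # False: 1925 event
```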
Secondly, Nygren (2019) claimed that AI is outperforming human beings by efficiently speeding up the news production process, which advances its credibility with the public. Through algorithms that convert data sets into news stories written for human interest and readability, AI can complete an entire news workflow without human intervention, from filtering information to handling readers' comments.
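Much automated news writing of this kind is template-based: a structured record is slotted into a prose template. The sketch below shows the idea with a hypothetical sports record; the field names and template are assumptions for illustration, not any vendor's system.

```python
# Minimal sketch of template-based automated news writing.
TEMPLATE = (
    "{team_home} beat {team_away} {score_home}-{score_away} on "
    "{day}, with {star} scoring {points} points."
)

def data_to_story(record: dict) -> str:
    """Slot a structured game record into the prose template."""
    return TEMPLATE.format(**record)

game = {
    "team_home": "Rivertown", "team_away": "Lakeside",
    "score_home": 3, "score_away": 1,
    "day": "Saturday", "star": "J. Ortiz", "points": 2,
}
print(data_to_story(game))
```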
However, Beckett (2021) argued that efficiency gained in the name of 'human interest' does not necessarily serve the public interest. For example, he noted in a lecture that journalists tend to use AI's comment-moderation ability to filter out remarks deemed abusive in order to protect a publication's reputation, which restricts freedom of expression and undermines democracy.
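A minimal sketch of threshold-based moderation shows where the problem lives: the cutoff encodes an editorial judgement, and set aggressively it removes legitimate criticism along with abuse. The scorer and cutoff here are illustrative assumptions, not a real toxicity model.

```python
def moderate(comments, toxicity_score, cutoff=0.5):
    """Keep only comments scoring below the cutoff."""
    return [c for c in comments if toxicity_score(c) < cutoff]

# Stand-in scorer: a real system would use a learned classifier.
def fake_scorer(comment):
    return 0.9 if "terrible" in comment.lower() else 0.1

comments = ["Great reporting!", "Terrible, biased article."]
print(moderate(comments, fake_scorer))   # criticism filtered out along with abuse
```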
Furthermore, in Donsbach's (2013) opinion, even if the news production process has been streamlined, a robot is still unable to write exactly like a journalist and may miss elements of context that the human brain would perceive clearly and quickly. In effect, the automatic nature of robotic writing produces formulaic articles: robots always write the same way, which leads audiences to question whether the news conveys objective information as expected.
Also, a robot typically cannot convey sadness, joy, or any other literary register in a text as well as a journalist can. Consequently, the dehumanisation of writing is likely to have a direct impact on the editorial identity of online media. The capabilities and technical constraints of these AIs are common to all outlets wishing to adopt such automation; in the long run, this could erode a publication's editorial identity and deepen its estrangement from readers.
Thirdly, there are legitimate concerns around AI in the journalism industry today, particularly legal ones.
New technologies are increasingly challenging copyright laws in the creative industries, as AI trawls and learns from human creative works and generates its own (World Economic Forum, 2018). This is likely to test the legal interpretation of 'fair use'.
In addition, AI cannot currently be held legally accountable, which gives distributors such as Facebook, Google and Twitter the power to shape public opinion, since it is the AI that determines which content is put in front of consumers.
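A sketch of pure engagement ranking illustrates the point: whatever the model predicts will be clicked most is what consumers see first, with no notion of accountability built in. The stories and scores are illustrative assumptions.

```python
# Feed ranked purely by predicted engagement.
stories = [
    {"headline": "Council passes budget", "predicted_clicks": 120},
    {"headline": "Celebrity deepfake scandal!", "predicted_clicks": 9800},
    {"headline": "Local school wins grant", "predicted_clicks": 340},
]

feed = sorted(stories, key=lambda s: s["predicted_clicks"], reverse=True)
for s in feed:
    print(s["headline"])   # the sensational item ranks first
```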
As of now, technological development offers no solution to the legal problems emanating from algorithm-generated content about private citizens, which can violate the public interest.
In conclusion, the threat to trust between the public and journalism is growing because of AI, given its lack of integrity, of human engagement and supervision, and of legal accountability.
To resolve these challenges amicably, news organisations should:
1. Adhere to the basic principles of journalism - maintaining integrity and serving the public interest, rather than manufacturing news content for profit or under political manipulation.
2. Create and refine policies on AI journalism ethics, while advocating stronger legal regulation to protect organisations against liability arising from algorithm-driven news stories on Google and similar digital news platforms.
3. Encourage journalists to participate actively in their profession and to cooperate with AI rather than rely on it blindly.
4. Educate the public and raise its awareness of AI journalism, rebuilding trust between the two sides.
5. Work with governments, enlightened leaders, activists and technologists to improve the system together towards an ethical AI, while supervising one another.