How AI can pose ethical and social challenges for journalism

Have you ever wondered how much of the news you read is written by humans, and how much by machines? Artificial intelligence (AI) is transforming journalism, both as a tool to help journalists with their work and as a source of content creation. AI can offer many advantages to journalism, such as speed, efficiency, and innovation. But AI can also pose serious ethical and social challenges for journalism, such as the threat of misinformation, plagiarism, bias, and loss of trust. How can we ensure that AI is used in a responsible and ethical way in journalism? What are the implications and consequences of AI for journalists, policymakers, and researchers? In this post, we will explore these questions and propose some possible solutions and recommendations.

What is AI and how is it used in journalism?

AI is a broad term that refers to the ability of machines or software to perform tasks that normally require human intelligence, such as understanding language, recognizing images, or making decisions. AI can be applied to various aspects of journalism, such as:

Data journalism

AI can help journalists collect, analyze, and visualize large amounts of data, such as social media posts, public records, or sensor data. For example, Bloomberg News uses an AI system called Cyborg to generate stories based on financial data and reports. The system can extract key facts and figures from complex data sets and write concise and accurate summaries for the readers.

Automated journalism

AI can help journalists generate content based on data or predefined rules, such as sports reports, financial summaries, or weather forecasts. For example, The Washington Post uses an AI system called Heliograf to produce stories on topics such as high school football, congressional races, and the Olympics. The system can write stories using templates and natural language generation software, and also update them with new information as it becomes available.
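To make the idea of template-driven story generation concrete, here is a minimal sketch in Python. The template, field names, and sample data are all illustrative assumptions; production systems such as Heliograf use far richer natural language generation, live data feeds, and editorial oversight.

```python
# Minimal sketch of template-based automated journalism.
# Template, field names, and data are illustrative, not Heliograf's.

TEMPLATE = (
    "{winner} defeated {loser} {winner_score}-{loser_score} "
    "in {sport} on {date}."
)

def generate_story(result: dict) -> str:
    """Fill the story template with structured game data."""
    return TEMPLATE.format(**result)

game = {
    "winner": "Central High",
    "loser": "North High",
    "winner_score": 28,
    "loser_score": 14,
    "sport": "high school football",
    "date": "Friday",
}

print(generate_story(game))
# → Central High defeated North High 28-14 in high school football on Friday.
```

The same structured record can be re-rendered whenever the underlying data changes, which is how such systems keep stories updated automatically.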

Augmented journalism

AI can help journalists enhance their content with additional information or features, such as interactive graphics, personalized recommendations, or voice assistants. For example, The Financial Times uses an AI tool called Lantern to measure the engagement and impact of its stories. The tool can provide insights into how the readers interact with the content, such as how long they spend on each article, what topics they are interested in, and what actions they take after reading.

Investigative journalism

AI can help journalists verify and fact-check information in real time, using tools like ChatGPT, Project Debater, and Juicer. For example, ChatGPT is a chatbot that can answer questions about any topic using natural language processing and deep learning. Project Debater is an AI system that can debate humans on complex topics using argumentation and reasoning skills. Juicer is a tool that can extract entities and keywords from large collections of documents and provide summaries and visualizations.
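To illustrate the idea of keyword extraction from documents, here is a toy frequency-based sketch in Python's standard library. It is only in the spirit of tools like Juicer, which rely on much richer NLP; the stop-word list and scoring are simplified assumptions.

```python
# Toy keyword extraction: count the most frequent non-stop-word tokens.
# Real tools use named-entity recognition, not raw frequency.

from collections import Counter
import re

STOP_WORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "is", "for", "was", "over"}

def extract_keywords(text: str, top_n: int = 3) -> list[str]:
    """Return the top_n most frequent non-stop-word tokens."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOP_WORDS)
    return [word for word, _ in counts.most_common(top_n)]

doc = ("The council approved the budget. The budget debate over the "
       "budget lasted hours, and the council vote was close.")
print(extract_keywords(doc))
```

Run over a large document collection, even this crude approach surfaces recurring topics; entity extraction adds the crucial step of recognizing that "budget" and "council" are a topic and an organization, respectively.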

What are the ethical and social challenges of AI in journalism?

Artificial intelligence (AI) is increasingly being used in journalism, both as a tool to assist journalists and as a source of content generation. However, the use of AI also raises ethical and social issues that need to be addressed, such as the risk of misinformation, plagiarism, bias, and loss of trust. Some of these are:

Misinformation

AI can generate false or misleading information that can harm individuals or society. For example, AI can create fake news, deepfakes, or synthetic media that can manipulate public opinion or spread propaganda. This can undermine the credibility and integrity of journalism and erode the trust of the public. To prevent or combat misinformation, journalists need to verify the sources and accuracy of the information they use or produce, especially if it involves AI-generated content. They also need to disclose the use of AI in their work and inform the audience about the potential risks and limitations of AI. Policymakers need to regulate the use and distribution of AI-generated content and ensure that there are legal and ethical frameworks to hold accountable those who create or disseminate misinformation. Researchers need to develop methods and tools to detect and expose misinformation and educate the public about how to spot and avoid it.

Plagiarism

AI can copy or reproduce existing content without proper attribution or consent. For example, AI can scrape or paraphrase content from other sources and present it as original work. This can violate the intellectual property rights of the original authors and damage their reputation. It can also reduce the quality and diversity of journalism and create confusion for the audience. To avoid or deter plagiarism, journalists need to respect the rights and interests of the original authors and give credit where credit is due. They also need to use reliable and reputable sources and check for plagiarism before publishing their work. Policymakers need to enforce the laws and regulations that protect the intellectual property rights of content creators and penalize those who infringe them. Researchers need to study the extent and impact of plagiarism in journalism and develop tools and techniques to prevent or detect it.
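A pre-publication plagiarism check can be sketched with a simple text-similarity comparison, as below using Python's standard library. Production plagiarism detection compares drafts against large corpora with robust fingerprinting; the flagging threshold here is an arbitrary assumption.

```python
# Rough sketch of a pre-publication similarity check.
# Real plagiarism detectors index large corpora; this compares two texts.

from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a ratio in [0, 1]; 1.0 means the texts are identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

original = "AI can copy or reproduce existing content without proper attribution."
draft = "AI can copy or reproduce existing content without attribution."

score = similarity(original, draft)
print(f"similarity: {score:.2f}")
if score > 0.8:  # arbitrary flagging threshold, an assumption
    print("flag for editorial review")
```

High character-level overlap does not prove plagiarism, and paraphrased copying can evade it, which is why such a check is a first filter rather than a verdict.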

Bias

AI can reflect or amplify human biases or prejudices that can affect the accuracy, fairness, and diversity of journalism. For example, AI can inherit biases from the data or algorithms it uses, or from the people who design or use it. This can result in unfair or inaccurate representation of certain groups or issues in journalism, such as gender, race, religion, or politics. To reduce or eliminate bias, journalists need to be aware of their own biases and those of the AI systems they use. They also need to ensure that the data and algorithms they use are diverse, inclusive, transparent, and accountable. Policymakers need to establish ethical standards and guidelines for the development and use of AI in journalism that promote fairness, equality, and diversity. Researchers need to investigate the sources and effects of bias in AI systems and propose methods to detect and mitigate it.

What are the possible solutions and recommendations for AI in journalism?

To address the ethical and social challenges posed by AI in journalism, we propose some possible solutions and recommendations for journalists, policymakers, and researchers. Some of these are:

Education

Journalists should learn more about AI and its potential risks and benefits. They should also educate their audiences about how to recognize and evaluate AI-generated content. For example, the JournalismAI project at the London School of Economics and Political Science offers online courses, workshops, and resources to help journalists understand and use AI in their work. The project also publishes a newsletter and a report on the state of AI in newsrooms around the world. Additionally, journalists can use online platforms such as Udemy to access courses on artificial intelligence and algorithms in journalism. These courses can teach journalists how to use AI for various purposes, such as data analysis, content creation, and verification.

Regulation

Policymakers should create laws and guidelines to ensure the ethical and responsible use of AI in journalism. They should also enforce standards and sanctions for violations of privacy, intellectual property, or human rights. For example, the European Commission has proposed a set of rules and principles for trustworthy AI that cover aspects such as human oversight, transparency, accountability, and fairness. The rules also include specific requirements for high-risk AI applications, such as those that affect people’s health, safety, or fundamental rights. In addition, the Knight Foundation has announced a new initiative to help local news organizations harness the power of AI in a way that is ethical, equitable, and sustainable. The initiative will provide funding, training, and support for newsrooms to adopt AI solutions that serve their communities.

Innovation

Researchers should develop new methods and tools to improve the quality and reliability of AI in journalism. They should also collaborate with journalists and other stakeholders to ensure that their research is relevant and beneficial for society. For example, the Brown Institute for Media Innovation at Columbia University supports interdisciplinary research projects that explore the intersection of media, technology, and society. The institute also organizes events, grants, fellowships, and training programs to foster innovation and collaboration in journalism. Another example is the Tow Center for Digital Journalism at Columbia University, which conducts research on the impact of AI on journalism and hosts policy exchange forums to bring together technologists and journalists. The center also publishes reports and guides on topics such as automated journalism, data journalism, and algorithmic accountability.

Conclusion

AI is a powerful technology that can transform journalism in many ways. It can help journalists with data analysis, content creation, verification, and engagement. However, the ethical and social challenges posed by AI in journalism need to be addressed, such as misinformation, plagiarism, bias, and loss of trust. In this post, we have explored these challenges and proposed some possible solutions and recommendations for journalists, policymakers, and researchers. These include education, regulation, and innovation. We hope that this post has provided some useful insights and suggestions on how to use AI in journalism in a responsible and ethical way. We welcome your feedback and opinions on this topic in the comments section below. Thank you for reading!
