Dangers of AI: The Case of a Pakistani Newspaper Misusing AI in Place of Authentic Journalistic Writing (13 November 2025)


https://www.facebook.com/share/p/1EWxC5PpeL/
The advent of Artificial Intelligence (AI) in journalism has introduced significant changes in the production, editing, and dissemination of news. However, ethical lapses and editorial oversights continue to emerge as media organizations adapt to technological tools. This article analyses the case of a Pakistani newspaper that inadvertently published a ChatGPT-generated prompt within an article in 2023. The author appears to FORGOT to 'erase' that prompt. It discusses the implications of the incident on journalistic integrity, editorial accountability, and the ethical use of AI in media practices. The discussion further situates the case within broader debates on AI transparency, authenticity, and professional responsibility in news production.


1. Introduction

Artificial Intelligence has rapidly integrated into the workflows of global media organizations, assisting journalists in generating story ideas, editing texts, and drafting reports (Carlson, 2023). Generative AI tools such as ChatGPT have become particularly prominent due to their ability to produce human-like text. While these technologies offer efficiency and speed, they also raise questions regarding authorship, bias, and ethical responsibility (UNESCO, 2021).

On 12 November 2025, a Pakistani newspaper became the subject of public criticism and ridicule after it accidentally printed a ChatGPT prompt in its business section. The article, titled “Auto Sales Rev Up in October,” contained an unedited line from ChatGPT reading: “If you want, I can also create an engaging snippet for readers, mentioning the overall market trend and company performance. Do you want me to do that next?” The incident provides a revealing case study of the challenges posed by AI integration in traditional journalism.

2. The Case Description


The newspaper in question, reportedly Dawn (https://www.dawn.com/news/1954574), published an article analysing the growth of Pakistan’s automobile market in October. While the report correctly cited figures on car sales, it inadvertently included a visible ChatGPT-generated message within the printed column. This message, clearly part of the AI’s conversational interface, revealed that the journalist had used ChatGPT to draft or structure the story. The oversight went viral on social media, prompting public discussion of AI reliance and editorial negligence.


The newspaper issued an official clarification (updated 13 November 2025, 6:20 pm) and directed readers to its published AI policy: https://www.dawn.com/ai-policy

3. Ethical Analysis

3.1. Editorial Integrity and Professional Oversight

According to the Society of Professional Journalists’ (SPJ) Code of Ethics (2014), journalists must ensure accuracy and accountability in all published material. The failure to remove a raw AI-generated prompt represented a breakdown in editorial oversight and quality control. It blurred the boundary between machine assistance and human authorship, a concern emphasized by Carlson (2023), who warns that the “automation of authorship” can undermine journalistic credibility if not transparently managed.

3.2. Transparency and Disclosure of AI Use

The European Commission’s Ethics Guidelines for Trustworthy AI (2019) and UNESCO’s Recommendation on the Ethics of Artificial Intelligence (2021) both stress that audiences must be informed when AI tools contribute to published content. In this case, transparency was entirely absent. The unintentional exposure of the prompt revealed the newsroom’s hidden dependence on generative AI, sparking debate about whether readers have the right to know how much of a text is machine-generated.

3.3. Impact on Credibility and Public Trust

Trust remains a cornerstone of journalism. As Lacy and Rosenstiel (2015) argue, credibility is eroded when readers perceive manipulation or carelessness in news production. The viral nature of this error created reputational harm not only for the newspaper but also for the broader Pakistani media landscape, already criticized for inconsistent editorial standards. Public trust in journalistic processes may decline further if such incidents are perceived as symptomatic of systemic negligence rather than isolated mistakes.

3.4. AI Literacy and Media Practice

The incident also reveals a significant gap in professional AI literacy among journalists in developing regions. Cotton et al. (2023) and Williamson and Hogan (2020) argue that while AI can enhance creativity and productivity, it requires critical training to prevent overreliance and ethical missteps. The Pakistani case underscores the need for institutional policies, training workshops, and editorial protocols that define acceptable AI usage in content production.
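
To make the idea of an editorial protocol concrete, the sketch below shows what a minimal pre-publication check for leftover chatbot dialogue might look like. It is a hypothetical illustration only: the phrase list, function name, and sentence-splitting heuristic are assumptions made for this article, not a description of any newsroom’s actual tooling.

```python
import re

# Hypothetical phrases that typically signal leftover chatbot dialogue
# rather than reported copy. This list is illustrative, not exhaustive.
SUSPECT_PATTERNS = [
    r"\bif you want, i can\b",
    r"\bdo you want me to\b",
    r"\bwould you like me to\b",
    r"\bas an ai language model\b",
]

def flag_ai_prompt_residue(article_text: str) -> list[str]:
    """Return sentences that match known chatbot-dialogue patterns.

    A human editor still makes the final call; this is a tripwire,
    not a substitute for editorial review.
    """
    findings = []
    # Split on sentence-ending punctuation; crude but sufficient for a sketch.
    for sentence in re.split(r"(?<=[.!?])\s+", article_text):
        lowered = sentence.lower()
        if any(re.search(p, lowered) for p in SUSPECT_PATTERNS):
            findings.append(sentence.strip())
    return findings

if __name__ == "__main__":
    copy = (
        "Car sales rose sharply in October, industry data showed. "
        "If you want, I can also create an engaging snippet for readers, "
        "mentioning the overall market trend and company performance. "
        "Do you want me to do that next?"
    )
    for hit in flag_ai_prompt_residue(copy):
        print("FLAGGED:", hit)
```

Even a simple tripwire of this kind, run before copy is sent to print, would have flagged the exact sentence that appeared in the published article; the final judgment, of course, remains with a human editor.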


4. Discussion: AI Ethics in Journalism

The growing incorporation of generative AI into newsroom workflows necessitates a re-examination of ethical standards. AI can support efficiency, but it also risks diminishing the authenticity of human journalism if deployed without disclosure (Diakopoulos, 2019). The case demonstrates how a seemingly trivial technical mistake exposes deep ethical tensions between automation and editorial responsibility.

It is vital that media organizations implement AI-use disclosure frameworks similar to those proposed by the World Association of News Publishers (WAN-IFRA, 2023). These frameworks recommend explicit statements indicating whether AI tools contributed to text generation, summarization, or fact-checking. Such transparency aligns with global ethical mandates on trustworthy AI deployment.
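
As a hypothetical illustration of what such a disclosure framework could look like in practice, the sketch below models a machine-readable AI-use record attached to a story before publication. The schema, field names, and categories are assumptions made here for clarity; the WAN-IFRA guidelines do not prescribe any particular format.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative categories of AI assistance a newsroom might disclose.
# These are assumptions for this sketch, not a standardized vocabulary.
AI_USES = {"text_generation", "summarization", "fact_checking", "translation", "none"}

@dataclass
class AIDisclosure:
    """A hypothetical machine-readable disclosure attached to a story."""
    story_slug: str
    publication_date: date
    ai_uses: set = field(default_factory=set)       # subset of AI_USES
    tool_names: list = field(default_factory=list)  # e.g. ["ChatGPT"]
    human_reviewed: bool = False

    def __post_init__(self):
        unknown = self.ai_uses - AI_USES
        if unknown:
            raise ValueError(f"Unknown AI-use categories: {unknown}")

    def reader_notice(self) -> str:
        """Render the plain-language notice that would accompany the article."""
        if not self.ai_uses or self.ai_uses == {"none"}:
            return "No AI tools were used in producing this article."
        uses = ", ".join(sorted(self.ai_uses))
        review = ("reviewed by a human editor" if self.human_reviewed
                  else "not reviewed by a human editor")
        return (f"AI assistance ({uses}) via {', '.join(self.tool_names)} was "
                f"used in this article; the text was {review}.")

# Example: the disclosure the October auto-sales story might have carried.
print(AIDisclosure(
    story_slug="auto-sales-rev-up-october",
    publication_date=date(2025, 11, 12),
    ai_uses={"text_generation", "summarization"},
    tool_names=["ChatGPT"],
    human_reviewed=True,
).reader_notice())
```

A record of this kind could be rendered as a standard notice beneath the byline, providing the kind of transparency that the European Commission (2019) and UNESCO (2021) guidelines call for.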


5. Conclusion

The Pakistani newspaper’s inadvertent publication of a ChatGPT prompt illustrates the delicate intersection between technology and ethics in journalism. The incident underscores the necessity for rigorous editorial supervision, AI literacy, and transparent disclosure mechanisms within newsrooms. As AI continues to shape global information ecosystems, the responsibility of journalists to maintain credibility and authenticity becomes ever more urgent. Future research and media practice must therefore focus on establishing clear professional standards for AI-assisted journalism—standards that safeguard both technological innovation and the enduring ethical foundations of the press.


References

  • Carlson, M. (2023). Automating the News: How Algorithms Are Rewriting the Media. Rutgers University Press.

  • Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Designing learning systems that value authenticity over automation. Assessment & Evaluation in Higher Education, 48(5), 657–671.

  • Diakopoulos, N. (2019). Automating the News: How Algorithms Are Rewriting the Media. Harvard University Press.

  • European Commission. (2019). Ethics Guidelines for Trustworthy AI. Brussels: European Union.

  • Lacy, S., & Rosenstiel, T. (2015). Defining and Measuring Quality Journalism. Reuters Institute for the Study of Journalism.

  • Society of Professional Journalists (SPJ). (2014). SPJ Code of Ethics.

  • UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. Paris: UNESCO.

  • Williamson, B., & Hogan, A. (2020). Pandemic politics, pedagogies and practices: Digital technologies and AI in education. Learning, Media and Technology, 45(2), 107–114.

  • World Association of News Publishers (WAN-IFRA). (2023). AI in Newsrooms: Transparency and Responsibility Guidelines.