Authenticity over Automation: Cotton et al. (2023) on Designing Assessment and Learning Systems in the Age of AI
The rise of Artificial Intelligence (AI) in education has transformed how students learn, produce, and demonstrate knowledge. Yet these technological shifts have also challenged traditional notions of academic integrity and authenticity. Cotton et al. (2023) address this tension in their article “Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT,” arguing that assessment and learning systems must be redesigned to prioritize authenticity over automation. This post explores their argument, situating it within the broader discourse on educational integrity, pedagogical ethics, and AI literacy. It concludes that sustainable assessment practices in the AI era must be reflective, process-oriented, and human-centered, fostering genuine learning rather than algorithmic reproduction.
1. Introduction
The integration of AI technologies, particularly generative tools such as ChatGPT, has redefined how learners engage with academic tasks. While these tools offer unprecedented access to information and writing support, they have also facilitated new forms of academic dishonesty (Lund & Wang, 2023). Traditional assessment models, especially those emphasizing written output or memorization, are increasingly vulnerable to AI-generated work (Kasneci et al., 2023).
Cotton, Cotton, and Shipway (2023) provide a critical framework for addressing this challenge. They argue that rather than resisting AI, educational institutions must redesign assessment and learning systems to value authenticity over automation. Their perspective highlights the importance of assessments that evaluate what AI cannot authentically reproduce: human thought, creativity, and reflection.
2. The Argument of Cotton et al. (2023): Authenticity over Automation
2.1 Context and Core Thesis
In their paper published in Innovations in Education and Teaching International, Cotton et al. (2023) respond to growing concerns over AI-assisted cheating and plagiarism. They note that while ChatGPT and similar tools can generate coherent, well-structured responses, these outputs often lack critical engagement and personal insight. To preserve educational integrity, the authors propose that assessment systems must prioritize authentic engagement—assessments that measure what students can personally think, do, and create, rather than what machines can replicate.
The authors caution that over-reliance on automation in assessment risks devaluing genuine intellectual effort and diminishing student motivation. Instead, educators should foster deep learning experiences that engage students emotionally, ethically, and cognitively (Cotton et al., 2023).
2.2 Redesigning Assessments for Authentic Learning
Cotton et al. (2023) argue that the most effective response to AI misuse is not prohibition but pedagogical redesign. They suggest implementing forms of assessment that AI tools cannot easily replicate—such as oral presentations, in-class writing, reflective journals, and collaborative projects. These tasks require individual reasoning, personal experience, and contextual understanding.
Such authentic assessment aligns with Boud and Molloy’s (2013) view of learning as an active construction of meaning, one that emphasizes process over product. By prioritizing the how of learning rather than merely the what, educators encourage students to engage with ideas critically and personally.
2.3 Promoting Academic Integrity through Assessment Design
Cotton et al. (2023) further contend that academic integrity must be embedded within the assessment design process itself. Instead of treating dishonesty as a disciplinary issue, institutions should focus on cultivating an integrity culture—where students understand and value genuine learning. This involves transparency, ethical dialogue, and assessment tasks that reward effort, originality, and self-reflection. The authors thus advocate for trust-based assessment ecosystems, where students are motivated by learning, not by performance metrics alone.
3. Authenticity versus Automation in Learning
3.1 The Risks of Automated Learning
Automated assessment tools, while efficient, can lead to depersonalization and superficial engagement. Holmes et al. (2019) warn that over-automating evaluation processes may reduce learning to measurable outcomes, neglecting its emotional and social dimensions. When students rely on AI-generated answers, they bypass the struggle, curiosity, and reflection that are essential for deep understanding (Williamson & Hogan, 2020). Uncritically applied, automation risks turning education into a mechanical transaction rather than an intellectual journey.
3.2 Authentic Learning in the Age of AI
Authenticity in learning requires that students take ownership of their thinking and apply knowledge in meaningful contexts. Kasneci et al. (2023) support this view, emphasizing AI literacy as a key competency for navigating technological environments responsibly. Authentic learning invites students to use AI as a supportive tool—for example, to explore ideas or check understanding—without outsourcing their intellectual labor. This approach echoes Cotton et al.’s (2023) call for pedagogical practices that integrate AI critically, ethically, and transparently.
4. The Pedagogical Shift Toward Authentic Assessment
4.1 Redefining Assessment Purposes
To design assessments that value authenticity, educators must move beyond rote performance measures and standardization. Cotton et al. (2023) propose aligning assessments with real-world application, personal reflection, and problem-solving. For instance, inquiry-based projects, case analyses, and peer-assessed discussions engage students actively, making AI misuse less effective and less appealing.
Such assessments not only discourage dishonesty but also promote lifelong learning skills—critical thinking, creativity, and ethical awareness—that define education in the 21st century (Luckin et al., 2016).
4.2 Institutional and Policy Implications
Cotton et al. (2023) argue that systemic change is necessary to sustain authentic assessment practices. Institutions should provide faculty training on AI ethics, update academic integrity policies, and develop frameworks for responsible AI use. Ethical assessment design should thus be part of institutional governance, reinforcing the humanistic and moral purposes of education.
5. Discussion
The view of Cotton et al. (2023) reflects a crucial shift in educational philosophy. Their advocacy for authenticity over automation resonates with Williamson and Hogan’s (2020) insistence that AI should remain a partner in pedagogy rather than a replacement for it. Both perspectives emphasize human-centered education, in which technology supports but does not define learning.
Authentic assessment ensures that learners remain active participants, not passive consumers, in their education. It restores meaning to academic work, enabling students to connect knowledge with identity and purpose. Thus, the pedagogical future envisioned by Cotton et al. (2023) is one where integrity and innovation coexist harmoniously.
6. Conclusion
Cotton et al. (2023) provide a compelling argument for reimagining assessment and learning systems in the era of AI. Their emphasis on authenticity over automation underscores the importance of maintaining human agency, creativity, and ethical engagement in education. In an age where algorithms can replicate knowledge production, the true measure of learning lies in originality, reflection, and moral understanding. Designing authentic assessments is not merely a defense against AI misuse—it is a reaffirmation of education’s deepest purpose: cultivating thoughtful, responsible, and self-aware human beings.
References
- Boud, D., & Molloy, E. (2013). Feedback in higher and professional education: Understanding it and doing it well. Routledge.
- Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 60(2), 228–239. https://doi.org/10.1080/14703297.2023.2190148
- Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.
- Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274
- Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson.
- Lund, B. D., & Wang, T. (2023). ChatGPT and academic dishonesty: Exploring student perceptions and implications for educators. Journal of Applied Learning and Teaching, 6(1), 20–30.
- Williamson, B., & Hogan, A. (2020). Commercialisation and privatisation in/of education in the context of AI. OECD Education Working Papers, No. 218. OECD Publishing. https://doi.org/10.1787/efb68af7-en
- Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education. International Journal of Educational Technology in Higher Education, 16(1), 39. https://doi.org/10.1186/s41239-019-0171-0