Smart or Shallow? Reflections on Postplagiarism, Trust, and Learning with GenAI

Rahul Kumar, Brock University. 26 November 2025.

In my recent talk for the Postplagiarism Speaker Series at the University of Calgary, hosted by Dr. Sarah E. Eaton through CAIELI, I examined how GenAI is reshaping the practices, expectations, and underlying assumptions of post-secondary education. The full presentation and recording are available through the University of Calgary’s YuJa platform and Brock University’s institutional repository (Video: https://yuja.ucalgary.ca/V/Video?v=1232044&a=73919705; Presentation: https://hdl.handle.net/10464/19682).

A central observation from the empirical evidence I presented is that the concept of postplagiarism is resonating with post-secondary students. In multiple studies I conducted in 2024 and 2025, students consistently indicated that the traditional plagiarism-based framing of academic integrity felt mismatched to a world where AI tools are ubiquitous in learning and hard to police. Few faculty want to take on that policing role, though many companies are eager to, tapping into long-standing fears about cheating. Postplagiarism, as articulated by Eaton (2023) and colleagues at https://postplagiarism.com, provides a more accurate lens for understanding how learning occurs when students routinely use cognitive technologies.

Other studies on GenAI usage for academic tasks also confirm a dramatic increase in student AI use. In 2024, 73.4% of students reported using GenAI. When the same question was asked again in 2025, nearly 94% self-disclosed using AI for academic work (Kumar & McGray, 2024; 2025). These levels indicate that AI use is approaching saturation in post-secondary environments, and attempts to prohibit or discourage routine use appear to have diminishing influence.

My position, articulated in the talk, is grounded in a pragmatic tradition influenced by John Dewey. Rather than attempting to prevent the use of AI, educators should guide its use toward productive outcomes while mitigating risks. Dewey described this orientation as amelioration: the improvement of conditions through intelligent, reflective, and data-driven action. This framing avoids both uncritical enthusiasm and categorical rejection.

To make sense of how AI tools shape learning, I draw on David Krakauer’s distinction between complementary and competitive cognitive artifacts. Complementary artifacts extend human capacities, while competitive ones replace them. The challenge for educators is to design learning environments where student engagement with AI leans toward the complementary end of the continuum. Doing so supports human development, learning, and judgement, rather than diminishing them. Ignoring this distinction simply encourages students to offload cognitively demanding work to GenAI, a pattern that has already prompted commentary on the risks of "brain rot" from extensive GenAI use.

Underlying this discussion is a deeper issue: trust. Students, faculty, and institutions must decide when, how, and to what degree to trust AI-generated content, and on what grounds. In the talk, I proposed an approach to understanding trust that accounts for the ways in which learners assess reliability, transparency, alignment with task expectations, and their own comfort with delegating work across both cognitive and affective domains.

As AI becomes embedded in the everyday practices of post-secondary learning, the task before educators is not to control its presence but to cultivate conditions where its use strengthens learning rather than weakens it. Postplagiarism offers one pathway for conceptualizing this future, and I hope the ideas shared in the presentation help advance this discussion across our institutions.

Sources
Presentation: https://hdl.handle.net/10464/19682
Video recording: https://yuja.ucalgary.ca/V/Video?v=1232044&a=73919705
Postplagiarism (conceptual overview): https://postplagiarism.com