How to encourage academic integrity in the age of generative AI
For a brief period, research offered helpful ways of adapting assignments to make it less likely that good marks could be achieved with ChatGPT and the like. These included requiring students to analyse images and videos, to draw on class discussion, to analyse longer texts, and to write about recent events not included in the AI training data. Other tips included asking students to write about highly specific topics, to include their personal experiences and perspectives, to integrate multiple sources, and to present their own original arguments (see, for example, Nowik, 2022; Mills, 2023; Rudolph et al., 2023).
It is, however, becoming clear that generative AI is increasingly able to produce convincing responses even to these kinds of assignments. Unless we are willing to assess exclusively through invigilated examinations, which would risk narrowing the range of skills we can validly assess, we need to consider other ways to promote academic integrity among our student cohorts. Fortunately, the wider literature on cheating can help. Here are some of the lessons from that literature and from my years of regulating national qualification systems.
- The bar for proving malpractice is rightly high. This means that our resources are better spent on deterrence rather than detection, at least until a surefire technological AI detection solution is available. The possibility of a viva voce may act as a helpful deterrent to cheating, though it is highly unlikely to be a method of proving that cheating has occurred.
- Research suggests that honour codes can be a useful way of discouraging cheating, but they need to be relational rather than bureaucratic. In other words, they need to be part of a classroom culture of support, rapport and respect, ideally discussed and completed in class.
- Cheating is much more likely when the stakes are high. Providing feedback that highlights students’ progress over time, rather than comparing their performance to that of their peers, is less likely to encourage cheating behaviour. Moreover, this kind of feedback is far more helpful to students in telling them what they need to do to improve.
- Students don’t always fully understand what does and doesn’t count as malpractice. They tend to believe that it is their intent rather than their behaviour that will be judged. In this way, even simple rules such as not taking a mobile phone into an examination are frequently misunderstood. It is important to discuss what is and what is not cheating with students, especially now that generative AI may be legitimately used for some purposes and not others.
- There is a risk that in discussing academic integrity we inadvertently normalise cheating. Wider research shows that we are all more likely to engage in behaviours we believe to be common. While the evidence suggests that the true prevalence of malpractice is higher than official figures indicate, only a minority of students are likely to cheat on any given assignment. It is crucial that we reflect this in our communications with students.
The threat of malpractice to the integrity of educational assessments is nothing new. An early documented case of cheating – the theft of an exam from a university printing office – was described by Barnes in 1904. Even examinations are open to extreme forms of malpractice (e.g., smart watches, in-ear technology, hidden cameras, magic calculators…). The ideas for encouraging academic integrity presented here are useful regardless of the mode of assessment, are resilient to technological innovations and are generalizable to many contexts.
Written by Dr Michelle Meadows, the Course Director of our MSc in Educational Assessment.
If you are interested in learning more about educational assessment, you may wish to watch our series of films in which we speak to experts from academia and practice to identify the central issues and provide some top tips on assessment matters.
References
Barnes, E. (1904). Student honor: A study in cheating. The International Journal of Ethics, 14(4), 481-488.
Mills, A. (2023). AI text generators: Sources to stimulate discussion among teachers. https://docs.google.com/document/d/1V1drRG1XlWTBrEwgGqdcCySUB12JrcoamB5i16-Ezw/edit#heading=h.qljyuxlccr6
Nowik, C. (2022, December 17). The robots are coming! The robots are coming! Nah, the robots are here. Substack. https://christinenowik.substack.com/p/the-robots-are-coming-the-robots#details
Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1), 342-363.