Haylee Fuller shares some of the work done at Queen Mary to explore the opportunities and challenges for teaching, learning and assessment since the boom of generative AI, sharing updates to policies, recommended approaches and useful resources.
When generative AI exploded into popular use, there was an obvious and disruptive impact on many traditional ways of thinking about academic integrity and assessment security. There is little need to recount fears about massive numbers of misconduct cases or a complete loss of control over assessment security, followed by short-lived hopes for technological solutions such as AI detectors. I joined Queen Mary as the Head of the Appeals, Complaints & Conduct Office in March 2023, so my first challenge in the role was to think about how we would respond to this new environment in practice. I quickly received emails from colleagues in Schools/Institutes reporting that, according to GPTZero (or other ‘detectors’), between 30 and 60 students in their modules had written assessments with AI. Within the Appeals, Complaints & Conduct Office, we had to reflect carefully on what advice to give to Module Organisers (MOs) and our Misconduct Panel (who reach decisions about the cases).
Traditional ways of thinking about academic integrity or misconduct are mostly deontological (a rules-based approach), with policies including lists of dos and don'ts. This kind of approach is ill-suited to new technologies which are both innovative tools for future careers and success, and the source of ethical or integrity concerns. Flexible approaches matched to discipline and context are necessary, so that the right balance is struck between innovative learning and the importance of integrity and ethics in our research and education. From a practical perspective, we still need guidance and clarity about how to respond to concerns. If there are no coherent and consistent rules, we need to consider other frameworks. Teleological approaches to ethics challenge us to think about the outcomes and consequences of actions, not just compliance with a finite set of rules.
Returning to the practical, and to the concerns raised, this means: