
To respond to AI, we may need to live with lower student satisfaction

Reversing some recent trends in teaching and learning is sure to generate mixed responses from students. Regulators must step in, says Ian Pace

Published on September 12, 2025
Last updated September 12, 2025
[Image: a student gives a thumbs-down gesture. Source: iStock/AntonioGuillem]

To turn our backs on generative artificial intelligence (AI) is a tempting but futile exercise. The technology is here to stay, and it certainly can facilitate plenty of things. But it also creates many problems, and solving those is where our focus should lie.

The situation for text-centred academia is particularly critical. During this year’s marking season, many academics noted greatly increased numbers of student assessments that they are sure were artificially generated. And, sure enough, a recent survey found 92 per cent of students using AI tools for their assessments.

However, as has long been true of contract cheating, proving this to the satisfaction of university disciplinary investigations is very difficult. We are left in the depressing position of giving lower marks to those essays most obviously written by the student.

Commentators have proffered several categories of solutions, from straight-out AI bans (almost impossible to police) and a return to handwritten exams (difficult for students so unused to writing by hand) to teaching students AI prompt engineering (a modest skill) and leaving them to it. Common suggestions between these extremes involve embracing AI while requiring students to critically assess the results – and to document that process. The trouble is that it is easy for that documentation to be artificially generated, too.

One academic, for example, gets her students to summarise class discussions (but this can be done by feeding the commonly provided transcript or slide deck into AI), to create blogs (which multiple AI programmes can do) and to make assessments “more personal and creative” (I asked ChatGPT to do this in her field of developmental economics and it did so successfully).

Personally, I have wondered whether a focus on non-online data, such as archives, knowledge generated through fieldwork or (to a limited extent) subscription-only databases, would represent a way forward. Certainly, the skills required for this type of work are highly valuable in themselves and have many transferable aspects. But adopting it would create significant logistical difficulties, requiring many health and safety checks, and would also necessitate increased levels of dedication and application from undergraduates.

I conclude that written assessments in any format will need to be used less and less. Oral presentations that go beyond reading a prepared text (which could be AI-generated) and involve questions and other live interactions could still be valuable – and at least they demonstrate some speaking and presentation skills. Other assignments, perhaps involving role-playing, could help with building teamworking skills.

Some argue that the priority has now shifted towards teaching “human skills” – managing interactions and relationships with others, teamwork, empathy, creativity. But AI can already simulate some of these, especially creativity, and will soon be able to simulate others; and a bigger issue is to what extent existing types of degrees (and disciplines) might provide these skills.

Indeed, hard questions need to be asked about the continued viability of at least some degrees. If they were honest, many academics would admit that significant numbers of graduates have learned only to comprehend and produce a vaguely critical synthesis of existing scholarship on a subject, perhaps applying this to some new data. Such skills may still be valuable, but the professional demand for them may be limited as employers increasingly turn to AI to economise resources. If, instead, we need to teach a level of critical analysis exceeding that achievable by AI, would this not amount to asking undergraduates – in a mass HE system – to do what we have previously only expected of postgraduates?

There is no silver bullet for universities. All solutions to the challenges thrown up by AI will be provisional – not least because AI will continue to develop for now. They will need to be tried out in an experimental fashion, and some will prove failures. But my suggestions would include a new emphasis on that which cannot easily be rationalised or quantified, forms of “creativity” a long way from the commodified and functionalised understandings of this term, and perhaps also more rote learning and memorisation, to retrain students in effort, persistence and dedication.

The trouble is that such experiments will inevitably run up against universities’ interest in maximising measurable student satisfaction. The contemporary economics of higher education encourages strategies to attract as many students as possible and ensure they cannot fail. Over a long time – and accelerated by Covid – universities have pursued that imperative by rationalising and standardising the study and learning process (undermining academics’ agency in the process).

They have made most materials that students need available online. They have sought to make required methods and processes for writing an essay as transparent as possible. Reasonable adjustments for neurodivergence have become assessment norms. And the amounts of required reading, self-directed study and more have been progressively reduced, sometimes legitimised by arguments about students needing to also take jobs – in essence, that they should be awarded a full-time degree on the basis of part-time study.

Responses to AI that roll back some of this are sure to generate mixed responses from students – especially from those now used to obtaining high marks merely by writing a reasonable AI prompt. But quality concerns cannot be sacrificed on the altar of student satisfaction. The regulator needs to step in.

The quality conditions for English universities currently mention essay mills and plagiarism but not generative AI. The Office for Students (OfS) did publish a report in June recognising the need to engage with AI, but its recommendations are otherwise relatively general.

I would not want to try and pre-empt the precise measures the OfS might demand. But it seems clear to me that if we want to prevent institutions from taking easy options to maintain student satisfaction, significantly increased quality regulation specific to how institutions manage student use of generative AI may well be required.

This is not a moral judgement on AI. It is just an honest recognition of the extent of an assessment problem that is already out of hand.

Ian Pace is professor of music, culture and society and university adviser: interdisciplinarity at City St George’s, University of London. He is also secretary of the London Universities’ Council for Academic Freedom. He is writing here in a personal capacity.


Reader's comments (1)

I read in the press that our MPs are now increasingly using AI tools to write their speeches and to formulate their interventions in our legislature, duly recorded for the benefit of posterity in that august publication Hansard. This is a source of concern for some, but realistically, when our legislators are using these tools, it becomes increasingly difficult for us to "police" their use in academic work and invoke outmoded notions of plagiarism and cheating. I am afraid, to use another cliche, the genie is well out of the bottle now and the use of these tools is normalised. The "Quill Pen" protocol will not work and will end up making us look like fools.