
Academics must be open to changing their minds on acceptable AI use

Honest, open-ended conversations about how AI can be used productively in the learning journey are needed, not ChatGPT bans, says Ava Doherty

Published on September 1, 2025
Last updated September 1, 2025

Students today face a striking paradox: they are among the most technologically literate generations in history, yet they are deeply anxious about their career prospects in an artificial intelligence-driven future.

Since the launch of ChatGPT, the rapid advance of artificial intelligence (AI) has fundamentally reshaped the graduate job market. This shift presents unique challenges and opportunities for students, universities and the broader higher education sector.

The complexity of this relationship became clear during my first term at university. In a tutorial-style session, our tutor asked if any of us had used AI to write essays, warning against the practice. As students, we highlighted AI’s benefits for research, brainstorming and refining our ideas. The conversation became more nuanced when we realised that AI detection tools could not reliably distinguish between entirely AI-generated content and legitimate editorial assistance. The challenge, she explained, was not the technology itself, but learning to use it as a tool for enhancement rather than replacement.

This tension resonates widely. Students are caught between embracing powerful new tools and maintaining academic integrity, between preparing for an AI-augmented future and proving their own capabilities. The anxiety runs deep because the rules feel like they are being written in real time. I have found myself second-guessing my writing, wondering if a particularly polished sentence might trigger suspicion. Moments of clarity and eloquence can feel like liabilities rather than strengths.


Conversations with fellow students suggest these anxieties are widespread. Humanities students, in particular, worry about whether their degrees adequately prepare them for AI-driven workplaces. Some admit experimenting with AI in assignments, aware it can sharpen their work, but fear over-reliance could harm employability. Many wonder whether their critical thinking, research and writing skills are enough in a world where even entry-level positions increasingly demand AI literacy.

The employment landscape reflects this tension. By one recent industry estimate, AI-skilled workers earn a 56 per cent wage premium over peers without such expertise. Yet traditional entry-level positions – once the first foothold for recent graduates – are being transformed by AI-powered tools, creating more competition for fewer roles. Graduates now compete not only with peers but also with experienced professionals adapting to AI-augmented workflows. A marketing coordinator role that once required basic content creation now expects candidates to manage AI tools, analyse algorithm-driven data, and optimise campaigns using machine learning insights.


Oxford’s tutorial system, with one-to-one or small-group sessions, discourages over-reliance on AI because tutors can detect inconsistencies between written assignments and verbal explanations. Similar initiatives exist elsewhere. For example, the London School of Economics has experimented with oral exams to evaluate students’ comprehension beyond written submissions. Such approaches can foster sustained student-academic engagement while developing critical thinking, creativity and communication skills.

Rather than hiding AI use, students and tutors should define clear ethical boundaries together. Peer-to-peer learning is increasingly common, allowing tech-savvy students to share knowledge with both classmates and faculty. While students may already be more familiar with AI than some academics, these arrangements foster collaborative learning and ensure tools are used constructively.

Workshops should go beyond teaching technical skills – such as prompt engineering, data visualisation or AI-assisted coding. In today’s rapidly changing landscape, adaptability – the ability to evaluate new tools quickly and integrate them effectively – is just as essential. Assessment methods should further evolve to reflect this AI-driven world. Approaches such as oral exams, practical demonstrations and collaborative projects can reveal genuine understanding in ways that AI cannot easily replicate.

Professional development and careers guidance should start from the beginning of degree programmes, not just the final years. Advisers can help students identify tailored learning pathways and anticipate which roles require AI-related expertise. This enables students to make strategic decisions about skill development and work placements throughout their studies, rather than scrambling to catch up at the end.


Industry partnerships are another avenue, though large tech firms have limited bandwidth for sustained collaboration, particularly outside top-tier institutions. Scalable alternatives include collaborations with regional tech companies, local startups, professional associations, or university consortia pooling resources. These partnerships can offer meaningful, real-world experience without relying solely on global corporations.

The rise of AI presents profound challenges but also opportunities. Students are navigating unprecedented uncertainty about employability and academic integrity, yet their digital fluency and adaptability are significant strengths. Universities that embrace AI as a tool for learning and career development, while maintaining academic integrity, will equip graduates to thrive in an AI-augmented world.

That tutorial conversation stayed with me not because it resolved all tensions, but because it acknowledged them openly. In an age of rapid technological change, that kind of intellectual honesty may be the most valuable skill of all.

Ava Doherty is a history and politics undergraduate at the University of Oxford.




Reader's comments (2)

Technological literacy is a subjective concept. Who is more literate: the person who can use such tools, or the person who refuses to use them because they understand why they will fail? If you want [some] universities/academics to provide roadmaps for bringing AI into taught material, you should also accept that some might decide to banish it for sound reasons. Information on AI practices should be available to applicants so they can decide.

I do think we seem to be going round the houses on this one, as they say, and there is a certain circularity of argument here. There was a good piece recently on this subject by Martin A. Mills, but so much else seems to be recycled standard opinion, pro and con, and we really need a bit of insight now from the real experts.
