
I was wrong. Universities don’t fear AI. They fear self-reflection

The various reactions to my recent article on universities’ tardy AI adoption underline their allergy to internal transformation, says Ian Richardson

Published on December 22, 2025
Last updated December 22, 2025
A lecturer puts his fingers in his ears, illustrating a refusal to reflect
Source: ajr_images/Getty Images

Reaction online to my recent opinion piece in Times Higher Education on universities’ failure to strategically engage with artificial intelligence (AI) has been both fierce and illuminating.

Some criticisms were measured and thoughtful; others were reflexive, polemical or rooted in deeply held convictions about what universities are – and what they must never become. Together, however, they inadvertently reinforce the point that I was making: that resistance to change in the sector is so entrenched that it has become part of its identity. And that resistance now poses a genuine threat to its long-term well-being.

A number of responses centred on definitional nit-picking. Why refer to higher education as a “sector”? Why invoke “Enlightenment principles”? Such procedural questions, while valid, highlight a particular challenge associated with criticism of higher education. It’s tempting to be drawn down this rabbit hole and deflected from the larger issue: why is the sector so reluctant to interrogate its own structures, norms and assumptions?

Elsewhere, critics asserted that AI is over-hyped and may be another phlogiston – an intellectual dead-end or passing chimera. Why, they ask, must universities engage? Shouldn’t they resist fads, as they have so correctly done in the past?


This argument, popular among faculty, invokes the precautionary principle, but in practice represents an abdication of adaptive responsibility. It assumes that the status quo is safe, neutral and inherently more virtuous than the unknown. Yet universities themselves have long taught that knowledge – and society – advance through enquiry, experimentation and engagement, not by entrenchment.

What makes this line of reasoning particularly problematic is that it is often forcefully espoused by those with the least technological literacy. Many such criticisms demand a “rigorous case” for AI, but such a case is difficult to recognise if you lack an understanding of data, machine learning or emerging practices. Contrary to some assertions, academic integrity and technological adoption are not mutually exclusive; indeed, preserving a meaningful conception of academic integrity now requires an informed understanding of the technologies that challenge it.


Another set of responses framed my argument as morally suspect – an endorsement of extractive digital oligarchies, a capitulation to marketisation. This is familiar territory. For some academics, technology adoption is indistinguishable from the neoliberal creep they perceive to be “hollowing out” universities. But such a framing, again, obscures more than it clarifies.

When I noted that banks have embraced AI in a way that universities have not, I wasn’t suggesting that banks are paragons of virtue. I was merely noting that even these most conservative of institutions have been able to reconfigure themselves in response to technological change, and the existential crisis it represents. That universities, with their vast intellectual resources and claimed devotion to societal progress, lag so far behind should give us all pause.

Concerns were also expressed about the harms of AI: the ecological costs, the erosion of critical thinking, the risk of over-dependence. These are important issues and deserve serious attention, but they do not justify strategic disengagement. Indeed, universities must explore AI adoption in all corners of institutional practice while proactively leading the ethical, pedagogical and ecological responses to it. Ignoring the technology’s transformational possibilities for practice will do little to preserve the sanctity of the student experience; it simply cedes leadership to actors outside the academy.

Some reactions to the article were openly dismissive: “naive”, “hyperbolic”, “written by a bot”, “a black plague [of mass adoption]”. These comments are emotionally revealing and suggest that a deeper objection relates not so much to the technology as to the perceived threat it poses to identity, expertise and authority. Such anxieties, while understandable, are not evidence against the argument. Rather, they are evidence in favour of it.


There were some more constructive responses. Several noted that universities are more proactively involved with AI than the article suggested, and I’m very happy to acknowledge examples of thought leadership in the sector. But exceptions do not make a rule. Resource constraints, governance gaps and a disinclination to challenge existing practices all conspire to impede strategic consideration of how technology aligns with institutional purpose and design. AI is treated as an ad hoc add-on, delegated to committees and bolted on to legacy systems.

When AI transformation begins with use-cases that build on existing logics, rather than pilot projects that challenge present institutional design, the result is inevitable: more of the same, with the promise of faster and cheaper (if you’re lucky).

Crucially, some commentators highlighted a deeper cultural problem: the fact that universities talk about preparing students for the future but rarely treat students as partners in shaping it. This failure of “reverse mentoring” reflects a custodial mindset at the heart of institutional resistance. For some, challenging the underlying logics of how universities organise knowledge, learning and governance is not only uncomfortable but sacrilegious. But clinging to rituals of transcendental purpose will not preserve the university’s social value.

The irony is that the pursuit of truth and understanding is a dynamic process that demands constant questioning of existing beliefs and a readiness to revise ideas based on new evidence. It is this process that, over many decades, has contributed to the development of AI – and universities have been central to that development. But when it comes to internal transformation – rethinking curricula, governance, research practice, pedagogical models – the will and curiosity are mysteriously absent.


To restate: the greatest threat to higher education is not AI. It is institutional inertia, supported by reflexive criticism that mistakes resistance for virtue. AI did not create this problem, but it is exposing dysfunctions and contradictions that have accumulated over decades.

Whether universities engage with AI enthusiastically or reluctantly is ultimately less important than whether they do so strategically, imaginatively and with a willingness to question their own design. Because if they don’t, others will.


Ian Richardson is a faculty member and director of executive education at Stockholm Business School, Stockholm University. With a background in technology media, he is co-founder of the national Swedish programme AI for Executives, which seeks to drive board-level understanding and organisational adoption of AI across industries and sectors.


Readers’ comments (22)

May I politely suggest that this writer of a word-salad in dire need of dressing learn anything about higher education, including his own association with one sector among many -- an executive education programme in a college of business. For at least the last century, higher education has been inseparably associated with incessant, often uninformed change and fads. If anything, it responds too rapidly and unknowledgeably. It is fictitious to claim the opposite, as so many self-styled "critics" do. There is a large literature on this. In fact, it includes business schools themselves (Steven Conn's well-known book, for one). I challenge the executive educator at a business college to look across his own or any university. If his eyes are open, he will immediately see that different departments respond differently to AI, and some far too rapidly. The point is NEVER AI yes or no, but the responsible use of AI by all parties. That is the point: NOT falsely dichotomizing "failure to engage" vs. "fear of self-reflection". To sum it up: Come on, man! In my years of reading THES and now THE, I have NEVER seen a "response" to critics, let alone a non-response of 100s of words. Why?
The fact that the hoi polloi are rushing to watch the barrel organ being ground by a new monkey is no reason for the genuine virtuoso to drop Paganini in favour of the Beggar's Opera.
Bit hard on the Beggar's Opera in my view. Culturally a rather elitist comment. Paganini was a terrible composer on the whole though, by all accounts, a virtuoso, but he's dead and there are no recordings of his work for us to hear, so it's hard to drop him as we can't pick him up in the first place, whereas The Beggar's Opera is a living work and preferable to Pag's compositions, in my view at least. But then your reference to the "hoi polloi" is the indicator that your comment is ironic and satirical and meant to stimulate debate, so well done you!
Where do we start with this response to his critics? I'm sure it will have its uses, but at the moment AI seems to be error-strewn. Why should we trust it? It's also extractive – a new basis for oligarchy and authoritarianism. Cognitive debt is becoming widely apparent, caused by some students' use of AI in their academic work. They are also already alienated from the university by fees and the online platforms used to manage them – as are academics. Would you risk knowledge production and exchange? Some universities and academics may go in this direction, but I doubt many will want to risk their hard-won reputations.
I think the author needs to reflect on the fact that responses to THES articles, especially Opinion pieces, and especially when it comes to AI, are frequently satirical, hyperbolic and often mischievous, and should be taken with a pinch of salt. There is a cadre of regular respondents who like to be contrarian, hope to be amusing and are often eccentric. I am surprised that anyone reads them, but please don't always take them too seriously or as representative. The "written by a bot" jibe is a bit overworked now but was originally meant to poke fun at some of these articles, which really do take themselves a bit too seriously, in my view. An opinion piece for THE is not Newton's Principia, after all. KEEP IT LIGHT!
Indeed, I have penned some very enthusiastic and positive responses to AI articles, but they were meant to be "wind-ups" and satirical, and not to be taken seriously except by the literal-minded. This is the problem with AI bots (and some of their more enthusiastic champions): they are unable to spot irony!!
That assertion is not correct. I have identified 2.8 giga-instances of 'irony' in the most recent 0.72 seconds. In addition my casing is irony, with copperish highlights. The writer of this comment merely provides evidence of the malaproptive elitism of biological academics towards real-world innovation.
Define "irony." I dare you!
Perhaps the statement "Prof Graff is a much respected scholar and world renowned authority" or some similar locution might be an example?
"May I politely suggest that this writer of a word-salad in dire need of dressing learn anything about higher education including his own association with one sector ..." This would be another example though not as amusing as the one above?
Well, there are so many retired professors hanging around with far too much time on their hands and little else to do but make provocative comments, as if anyone really cared what they think anymore (I am one, btw). There are more of them these days than retired vice admirals or former CIA operatives (that's for the US readers), and they all tend to be a bit curmudgeonly and rather "fuddy-duddy" "Victor Meldrews", ill at ease with modernity at the best of times and terrified by their own irrelevance (in my opinion), so you won't get much praise, or even much sense, from them in these comments on new tech and innovation – on anything, really. So why anyone would think such comments were in any way representative of any rational or reasoned debate is frankly beyond me.
"I'm not wrong! It's everyone else who's wrong!" Good of TES to include an image of the article author.
Check Ian R's online bio info. He is selling a product and a service. This is a blatant conflict of interest. As was the recent essay by the US standardized test promoter and test-prep company manager. What has happened to THE? BTW: It IS THE. The Times of London closed THES some time ago. THE is independent and co-publishes with the US Inside Higher Ed. There is no TES.
Thanks for taking the time to read my profile. To be clear, I am a tenured faculty member, and Director of Executive Education, at Stockholm University's business school. Our executive education function, unlike many around the world, is not allowed - under Swedish commissioned education regulations - to generate a profit (and does not serve as a cash-cow for the publicly funded university). On a personal level, I do not receive additional payments because of my involvement in the AI for Executives programme. To describe me as conflicted, in this matter, is simply incorrect. AI for Executives is a collaboration between the executive education providers of the three largest public universities in Sweden, and AI Sweden, the national centre for AI adoption in Sweden. Digital and AI adoption is widely seen as a key driver of national competitiveness, and essential to the long-term well-being of the country.
By evading the central issue, you underwrite my point. I define "conflict of interest" broadly, historically and, to use a word, cross-disciplinarily. I know Swedish universities well, having had close associations since the 1970s and honorary PhDs.
Actually, he addressed your point and left you with egg on your face, in my view. Look at the way you deflect and mystify in terms of definition, and then the entirely gratuitous, defensive and irrelevant reference to your self-proclaimed knowledge of Swedish universities. Just admit you got it wrong and apologize to this chap for your error. There is clearly no conflict of interest, and such unfounded allegations are, at best, inappropriate.
Reported for violating community standards
Haha! And a very Merry Christmas to you too, Professor Graff!!
"The Times of London closed THES some time ago. THE is independent and co-publishes with the US Inside Higher Ed. There is no TES reply". What?! No-one told me!! This is appalling!!! Whatever next??
I would suggest HE issues would only be magnified by wider AI usage. Yet again, we are not businesses and ‘neoliberal creep’ is exactly what AI encapsulates, in all its turgid, mathematical reductionism.
I would like to know what the author believes the "mission" of the academy to be. To me, a university fulfils a function in a secular society that is a combination of the functions performed by seminaries and monasteries in religious societies. The purpose of the university is less to prepare the student for the real world and more to shield the student, and the academic, from the forces in the world that would prevent them from developing and using important mental faculties. This both develops them as individuals and makes them more successful when (if) they return to the real world afterwards. I suspect others will have different definitions of the academy's purpose, but the key point is this – a thing is defined by what it does. If it does a different thing, it becomes a different thing, even if it keeps its name. If a university abandons its raison d'etre in order to survive, what has survived is not a university, but a different thing with the name university. The time of the monastery came to an end (for better or worse). Perhaps if the time of the university is coming to an end, we should let it, and allow institutions more suited to what is required to arise, purpose-designed, in their place.
Thank you for your comment. While I'm tempted to suggest that there's no contradiction between the integration of emergent technologies and the mission you describe (and that I share, incidentally), I'd really like to reflect on the more fundamental question you raise. Thank you once again.
