
AI threatens universities’ ability to bolster democracy

To protect critical, context-rich thinking in HE, knowledge agency must be quickly reclaimed from Big Edtech, say Dirk Lindebaum and Gazi Islam 

Published on October 20, 2025. Last updated October 20, 2025.
[Image: The US Capitol building with a red traffic light in front of it, illustrating democracy under threat. Source: Kevin Dietsch/Staff/Getty Images]

The University of Oxford’s announcement that all of its staff and students will be given access to the education version of ChatGPT is an indication of how deep and rapid an effect artificial intelligence (AI) is having on higher education.

Millions of academics and students around the world are already using AI for research, teaching or learning. And while Oxford may be the most prominent, it is not the only university to have invested in an institutional subscription to an AI model specifically trained for educational purposes; Syracuse University, for instance, has adopted Claude for Education.

According to some commentators, the impact of AI on higher education will be so profound that researchers’ and educators’ roles will fundamentally change. Researchers will no longer be knowledge producers but, rather, knowledge verifiers, in the sense of checking academic text for its accuracy or confirming the accuracy of empirical data. Educators will no longer be instructors but transform into facilitators of AI-supported learning.

But what and whose knowledge will we be verifying or facilitating? If we lean too heavily on AI, the answer will be “outputs” characterised by (i) digitally codified information rather than tacit knowledge embedded in experience, (ii) computational reckoning rather than human judgement, and (iii) homogenised knowledge. All this is governed by profit-seeking (rather than truth-seeking) motives of private companies; this distinction matters a lot in higher education because codified knowledge represents only a fraction of the whole range of possible knowledge.


To the extent that academics and universities are marginalised in the production and dissemination of knowledge, the social and civic values they claim to safeguard may be jeopardised. This is because the possibility of social knowledge and reasoning – and, by extension, of a democratic civic sphere – is put under pressure by the widespread adoption of AI.

While it appears to “participate” in knowledge-making, AI has no stake in concrete social situations. It is unable to experience substantive human interactions in which personal life histories, circumstances, hopes and fears converge to demand attention and resolution. But “knowledge” that is codified and context-stripped – and scaled in ways that reduce thought diversity – is indicative of algorithmic decision-making systems that prioritise efficiency over intellectual and societal flourishing.


Those human demands for attention and resolution are foundational building blocks of social and deliberative decision-making. When they are substituted with AI, there is a risk of what has been referred to as organisational immaturity. This occurs in three ways. First, through infantilisation, when reasoning is outsourced to automated systems. Second, through reductionism, when human judgement and creativity are supplanted by statistical patterns and probabilities. Finally, through totalisation, when technology becomes so embedded in everyday work that research and teaching become unimaginable without it.

If left unchecked, these processes threaten the space for critical, original and context-rich thinking in higher education. This space is vital not only for producing a skilled labour force but also, more fundamentally, an educated citizenry able and willing to participate actively in democratic nations’ will-formation and governance. That space is also vital to protecting societies from capture by those who can wield (near) monopoly control over the means of knowing, allowing them to manipulate knowledge in ways that run counter to the social interest.

Crucially, corrective action to undo organisational immaturity is difficult to take because the very possibility of recovery depends on individual and organisational capabilities that will have been lost. New strategies are thus needed to reclaim epistemic agency in universities already infused with AI.

One suggestion is for educators to create two-stage learning experiences, whereby students first complete writing tasks based on reading for understanding, relying on their own cognitive efforts, and then critically contrast their work with the output of an AI given a similar task.


Another suggestion is for academics to be more mindful of their knowledge agency. They should push back in departmental meetings when colleagues endorse using AI for a “project”. And they should push back online when AI-enthusiast colleagues complain that their use of AI to “innovate” theorising is being impeded by restrictive AI policies of publishers or learned societies. Instead of conceding to the inevitability of technological capture, we can reassert our roles in knowledge production and dissemination, a responsibility that cannot be ceded to a prosthetic brain.

Institutionally, independent research centres should be critically evaluating the impact of AI on higher education, serving as hubs for innovative research but also functioning as advocacy groups for policy reforms and transparency in educational technology implementation. Particular attention needs to be directed at the gap between marketing-led claims made by AI firms and the empirical evidence for those claims.

Furthermore, by actively engaging in institutional governance, be it through board memberships, advisory roles or participation in regulatory committees, academics can defend the value of higher education in the public interest, buffering against technological and corporate agendas.

But we must understand that the window of opportunity for reclaiming knowledge agency and governance from Big Edtech is closing fast. To avoid organisational immaturity in higher education and democratic decline in society, the time to act is now.


Dirk Lindebaum is professor of management and organisation at the University of Bath. Gazi Islam is professor of people, organisations and society at Grenoble École de Management. Their paper, co-authored with a larger author team, is published in Organization Studies.


Reader's comments (3)

NO! AI only threatens anything when both faculty and students refuse to learn to use it, and to teach its use, responsibly and well.
How does refusing to use a hammer to repair a watch make it a threat? If you are going to use a tool, you have a responsibility to learn how to use it safely and well. However, before reaching that point, learning about the right tool for the right job is a necessary prelude. In that respect the authors' advice seems no worse than that of the "just adopt AI" brigade.
Good article. It's a tool which requires much oversight at best. Dependence on AI is a huge threat to society. AI work in the university context can be instantly spotted due to its banality at best (and its tendency to flatter the ego of the user). Academic work/ research has to be much better than this, whether produced by students or scholars. Who shall guard the guardians etc etc?
