The University of Oxford’s announcement that all of its staff and students will be given access to the education version of ChatGPT is an indication of how deep and rapid an effect artificial intelligence (AI) is having on higher education.
Millions of academics and students around the world are already using AI for research, teaching or learning. And while Oxford may be the most prominent, it is not the only university to have invested in an institutional subscription to an AI model specifically trained for educational purposes; Syracuse University, for instance, has adopted Claude for Education.
According to some commentators, the impact of AI on higher education will be so profound that researchers’ and educators’ roles will fundamentally change. Researchers will no longer be knowledge producers but, rather, knowledge verifiers, checking academic text and empirical data for accuracy. Educators will no longer be instructors but facilitators of AI-supported learning.
But what and whose knowledge will we be verifying or facilitating? If we lean too heavily on AI, the answer will be “outputs” characterised by (i) digitally codified information rather than tacit knowledge embedded in experience, (ii) computational reckoning rather than human judgement, and (iii) homogenised knowledge. All of this is governed by the profit-seeking (rather than truth-seeking) motives of private companies; this distinction matters a great deal in higher education because codified knowledge represents only a fraction of the whole range of possible knowledge.
To the extent that academics and universities are marginalised in the production and dissemination of knowledge, the social and civic values they claim to safeguard may be jeopardised. This is because the possibility of social knowledge and reasoning – and, by extension, of a democratic civic sphere – is put under pressure by the widespread adoption of AI.
While it appears to “participate” in knowledge-making, AI has no stake in concrete social situations. It is unable to experience substantive human interactions in which personal life histories, circumstances, hopes and fears converge to demand attention and resolution. But “knowledge” that is codified and context-stripped – and scaled in ways that reduce thought diversity – is indicative of algorithmic decision-making systems that prioritise efficiency over intellectual and societal flourishing.
Those human demands for attention and resolution are foundational building blocks of social and deliberative decision-making. When they are substituted with AI, there is a risk of what has been referred to as organised immaturity. This occurs in three ways. First, through infantilisation, when reasoning is outsourced to automated systems. Second, through reductionism, when human judgement and creativity are replaced by statistical patterns and probabilities. Finally, through totalisation, when technology becomes so embedded in everyday work that research or teaching become unimaginable without it.
If left unchecked, these processes threaten the space for critical, original and context-rich thinking in higher education. This space is vital not only for producing a skilled labour force but also, more fundamentally, for cultivating an educated citizenry able and willing to actively participate in democratic nations’ will-formation and governance. That space is also vital to protecting societies from capture by those who can wield (near) monopoly control over the means of knowing, allowing them to manipulate knowledge in ways that do not serve the social interest.
Crucially, corrective action to undo organised immaturity is difficult to take because the very possibility of recovery depends on individual and organisational capabilities that will have been lost. New strategies are thus needed to reclaim epistemic agency in universities already infused with AI.
One suggestion is that educators can create two-stage learning experiences, whereby students first engage in writing tasks based on reading for understanding, relying on their own cognitive efforts, and then critically contrast their work with the output of an AI given a similar task.
Another suggestion is for academics to be more mindful of their knowledge agency. They should push back in departmental meetings when colleagues endorse using AI for a “project”. And they should push back online when AI-enthusiast colleagues complain that their use of AI to “innovate” theorising is being impeded by restrictive AI policies of publishers or learned societies. Instead of conceding to the inevitability of technological capture, we can recognise our roles in knowledge production and dissemination, a responsibility that cannot be ceded to a prosthetic brain.
Institutionally, independent research centres should be critically evaluating the impact of AI on higher education, serving as hubs for innovative research, but also functioning as advocacy groups for policy reforms and transparency in educational technology implementation. Particular attention needs to be paid to the gap between the marketing-led claims of AI firms and the empirical evidence for those claims.
Furthermore, by actively engaging in institutional governance, be it through board memberships, advisory roles or participation in regulatory committees, academics can defend the value of higher education in the public interest, buffering against technological and corporate agendas.
But we must understand that the window of opportunity for reclaiming knowledge agency and governance from Big Edtech is closing fast. To avoid organised immaturity in higher education and democratic decline in society, the time to act is now.
is professor of management and organisation at the University of Bath. is professor of people, organisations and society at Grenoble École de Management. Co-authored with a larger author team, their paper, “” is published in Organization Studies.