Reading, Assessment, and AI in English Studies

The growing presence of AI has unsettled educational spaces in English writing and literary studies. As student use of large language models increases, educator responses in my current context, as an instructor at a Midwestern university, have ranged from “going analog” to syllabus policies meant to regulate or contain AI use. AI-inflected reading, it seems, has brought English Studies back to square one, as the already fraught work of textual engagement is increasingly delegated to tools such as ChatGPT and Claude. This moment recalls Michael Warner’s famous question in his essay “Uncritical Reading”: what does it mean to teach critical reading, and what do students really do when they read? “Students who come to my literature classes, I find, read in all the ways they aren’t supposed to,” Warner observed: they “identify with characters,” “fall in love with authors,” “mime what they take to be authorized sentiment,” and “stock themselves with material for showing off, or for performing class membership.” Could we now add “use ChatGPT to analyse texts” to the list of “uncritical” reading practices? There is an obvious difference between the embodied and socially situated relationships with texts that Warner describes and what often passes today for AI-assisted “reading”: it is entirely possible to copy an AI-generated response without ever having experienced the text in any way.


This difference frames my interest in the use of AI prostheses, or AI tools, for classroom reading in English literature courses, particularly as a question of student labour. Why, as Marc Watkins asks, are students increasingly “offloading” the skill of reading to AI? The illusion of AI’s command over a wide range of textual sources is persuasive, but it also makes reading feel strangely displaced, since interpretation has been externalized. Set alongside critiques of AI as colonial extraction built on exploited labour and environmental harm in the Global South, the use of AI for reading appears fundamentally counter to the aims of humanistic education. Against this backdrop, the widespread student turn to AI reading tools seems, at first glance, puzzling.


[Image: A black-and-white line illustration of a figure hunched over a giant book, dwarfed by the stacks of enormous, untitled books that surround them.]
“Read Books on Practical Navigation,” illustration from Weird Islands by Jean De Bosschere, 1921. Image via Public Domain Image Archive.

The current anxiety around AI in English studies, however, cannot be understood in isolation from the longer institutional and colonial histories of the discipline, histories that parallel those of AI itself. This legacy continues to shape English departments today, in India as well as in the United States, not only through the canon but through pedagogical practices and assessment regimes that regulate how students are allowed to read, write, and respond. AI has lodged itself in English studies within these conditions. In India, where I studied for an undergraduate degree in English, entry into English literary studies is mediated through entrance examinations, which signal what kind of student is valued in the discipline. As Gauri Viswanathan has pointed out, English “enters the syllabus” as a colonial tool to credentialize citizens and subjects, becoming a certifiable basis of employment and competence. While liberal educational theory sustains the myth that schooling exists to cultivate individuality and creativity, Viswanathan shows how modern English literary education has always promoted standardization: the autonomous self imagined by Locke or Rousseau obscures education’s function as social management. In this sense, AI appears counterhumanistic only if we forget that English studies itself emerged through colonial structures that privileged uniformity and reproducibility.


From the outset, English literature examinations functioned as a mass colonial apparatus, designed to identify and train civil service officials who could serve as linguistic and cultural interpreters for the colonial government. Contemporary iterations of these examinations continue to thrive. In “ChatGPT Is a Blurry JPEG of the Web,” Ted Chiang argues that, unlike AI, human learning cannot be measured by rote memorization. And yet, in the Indian assessment economy, rote learning is not a failure of learning but its explicit aim. Sumana Roy has explored this dynamic in the culture of English literature examinations in post-Independence India, tracing how tutor-produced notes and answer templates came to mediate students’ encounters with the canon, encouraging rote learning over literary engagement and reducing exam texts to what examiners themselves call “corpses”: objects to be dissected for marks rather than read as living works.


The structure of English literature examinations in India helps explain why AI is often better suited to these tasks than students themselves. These exams operate quite apart from actual classroom teaching, and their point does not seem to be the close reading oriented toward literary studies practice. They often require a disembodied, formulaic voice that can move efficiently through recognised themes, quotations, and frameworks. This is the kind of voice AI produces easily. When I was an English Studies Honours student at Delhi University, I remember feeling as though I needed to cosplay as a robot for exams. To succeed at end-of-semester exams, one had to learn three quotations, memorise four themes, and arrange them into a predictable structure; this formula worked on topics ranging from Russian novels to Chaucer.

Similarly, the 2018 entrance examination for the M.A. in English at Jawaharlal Nehru University gave candidates three hours to write three essay-type answers. The compulsory question asked for a critical response to Agha Shahid Ali’s “Even the Rain,” with no further guidance. By 2021, however, the exam format had shifted decisively with the introduction of a multiple-choice question (MCQ) paper, in which candidates select the correct answer from a fixed set of options rather than writing extended responses. Close textual analysis was no longer possible; the MCQ papers rewarded surface familiarity with an enormous range of material. With 120 questions spanning figures from Longinus to Deleuze and Guattari, preparation strategies shifted accordingly. Fellow applicants I spoke to described reading Wikipedia entries across topics, an approach well suited to exams that tested encyclopaedic recall of names, dates, definitions, and terms. This marked a clear movement from depth to breadth, requiring, perhaps, to paraphrase Chiang, a blurry JPEG of English literature.


This assessment framework is dehumanising in many ways. It abstracts reading from experience, reduces knowledge to memory, and treats all candidates as interchangeable cognitive units. It also fails to account for disability and reproduces casteist and ableist constructions of merit by privileging speed, memory, and prior access to cultural capital. LLMs excel at producing summaries, definitions, and recognisable academic language, precisely the forms of knowledge these exams reward. Hence, current responses to AI in the classroom, such as returning to analog methods or attempting to outwit automated tools, may not be sustainable. The question is not simply about AI and reading, but about AI and the contexts in which reading takes place.


Although mass examination formats are beyond the interventions of individual instructors, classroom teaching could explore embodiment as a mode of literary engagement: Abby Knoblauch observes that academic work often begins in embodied reactions. Approached this way, literary studies can encourage students to reflect on their own positionality as readers, to produce reflective responses, and to resist outsourcing interpretation to AI. For an introductory English class, for instance, I once assigned a midterm essay based on Frankenstein, adapted from Kate Bomford, asking students to write a letter to Victor Frankenstein from the Creature’s perspective, prioritizing relationality. I have also found it useful to ask students to maintain reading journals guided by formulations such as Sara Ahmed’s “sweaty concepts” (undeniably human: AI cannot yet sweat). In these journals, students reflected on moments of difficulty or excitement in their reading and situated those responses within form, context, and textual themes. Conversations with students, opportunities for revision, multiple assignment modalities, and flexible deadline policies may also help reduce high-stakes anxiety and acknowledge the unequal material conditions under which reading and learning occur. Such practices would perhaps not eliminate AI, but they might shift what counts as meaningful literary work, foregrounding student labour.



Anushmita Mohanty is currently a PhD Candidate in the English Department at the University of Wisconsin-Milwaukee. Her research is on literary representations of education, student narratives, and education-based migration. 

The views and opinions expressed in this post are solely those of the original author/s and do not necessarily represent the views of the North American Conference on British Studies.
