
The AI Takeover Is Not Inevitable: Now Is the Time to Resist

“Artificial intelligence”—the propagandistic term that tech companies use for their new content-generation tools, including large language models (LLMs)—can seem like an existential threat to British Studies, History, the humanities, and universities themselves. What if this new crisis gives us a chance to reassert the value of the humanities and to push back against other, similar threats—defunding public institutions, increased faculty precarity, and the pervasive devaluation of our fields? Generative “AI” tools provide a new way to fill the internet with unsubstantiated, often useless content (“slop”), while (ironically? by design?) chatbots offer overwhelmed users a means of navigating this expanding information swamp. Tech companies have developed these data processing networks with the invisible and poorly paid labor of workers from the Global South, who label and clean training data by slogging through violent, hateful, traumatizing material; with stolen copyrighted content, used to train the models without permission; and with the huge amounts of energy and water that feed proliferating data centers, building an infrastructure of extraction and dependency. The harms of generative AI and LLMs are clear on multiple levels—to learning, to research, to democratic institutions, to the climate, to local water supplies, to workers. Why, then, do so many faculty feel that they are alone and without resources in trying to resist this “inevitable” technology? What can we do to slow, mitigate, or prevent LLMs’ worst effects, maintain our intellectual freedom, and even turn this into an opportunity to demonstrate our disciplines’ importance to democratic societies?


Universities have been quick to embrace Gen AI as yet another necessary technological skill set for undergraduates, promising to future-proof students’ academic degrees, even though such tools have limited abilities to produce evidence-based research. University presentations and webinars often acknowledge the informational, ethical, and environmental disaster of LLMs, their lack of connection to truth and reality, and the potential threat they pose to student learning, only to dismiss these concerns in the language of inevitability and the need to keep up with other institutions. (The catastrophic language of the demographic cliff haunts our current, intense higher ed competition.) Many humanities faculty in particular have been dismayed to see how quickly universities have promoted LLMs, often over our concerns, and how big education technology companies (EdTech) have embedded these tools in software packages in ways that at worst seem like an endorsement and at best make them even harder to avoid. In these respects, LLMs are a new manifestation of longer, interconnected trends in higher education in the United States, the United Kingdom, and elsewhere: the wholesale embrace of market logics; the pervasive power of EdTech; ballooning administrations; declining support for the humanities in favor of STEM and Business; the loss of secure, full-time, and tenure-line academic positions in favor of part-time and precarious work; and the weakening of shared faculty governance.


An illustration in mostly grayscale, with only a light pinkish sky and pops of light red for color, shows a decaying castle-like structure. Atop the structure, a woman lays dejectedly among the decay. A large robotic figure holding a light bulb and “T” looms behind and over the structure. Smaller human figures throughout the bottom portion of the crumbling structure engage in various activities.
“The Old World’s Dishonesty,” by Albert Robida, 1890. Illustration from La vie électrique. Public domain.

All of these developments rest on several interconnected assumptions, related to market forces and the shifting goals of university education in post-industrial, multiethnic, and democratic societies. With the evisceration of government funding for education and the consequent rising costs at both public and private institutions, students and families (understandably!) see university degrees as expensive luxuries if they do not provide the likelihood of post-graduate employment to repay education debt. History and other humanities departments have struggled to make the case that their majors can offer this kind of imagined pre-professional training. This is true even though multiple studies have shown that over the course of a career, humanities degrees do as well as others in median income and employment rate, as they lead to a wide range of professions. Many organizations have pointed out that the humanities are good preparation for a changing employment landscape, giving students the skills to read, write, reason, and conduct research. In the past, boosters for the humanities have emphasized their disciplines in terms of skills acquisition rather than specific course content, as this seemed more relevant, and perhaps less politically risky. Now, of course, it is exactly this kind of textual content creation that LLMs cheapen.


The problems with AI, the colonial and racist legacies of its data training, the evisceration of the humanities, questions about employability, and the rise of contingent faculty are connected in other ways as well, namely the ideological battles over the funding and control of public university systems. It is not a coincidence that our fields are being devalued and defunded just as there are more women and faculty of color in the professoriate! There is in this an element of gaslighting: History and Literature are both powerful enough to be central to the culture wars, requiring close state supervision and the wholesale revision of curricula and public history narratives, while simultaneously being framed as outdated and out of touch. Too often, our university administrators participate in this latter framing, defaulting to the language of the market and asking ill-equipped departments and faculty to recruit for majors based on short-term employment and earnings data. We have been trapped in this cycle for a long time—at least for most of this century. And now “artificial intelligence” threatens the very meaning of a university degree.


And yet: now, I think, as others have also written, is when we can make the strongest case for studying the humanities. Many students are desperate for meaning, hope, and perspective—exactly what our courses and majors offer, and exactly what is needed to correct AI slop. Faculty are coming together to share resources, find ways to resist, and call out the shocking ethical and environmental problems with LLMs. Members of organizations like the North American Conference on British Studies can use our study of Britain and its role in global history to help put this new challenge in perspective. “AI” relates to the long, intertwined histories of colonization, slavery, and industry, the separation of producers and users, and the resulting rebellions and resistance. I strongly believe that it is our role to make invisible harms visible, to constantly and loudly connect them, to bring our voices to whatever tables we can, and to make those tables bigger and more inclusive.


The crisis of the humanities that “AI” is helping us see has been a long time coming. The fossil fuel and tech interests that now want us to accept climate change as inevitable—the same interests who were up until very recently denying its reality—are putting big money into LLMs, EdTech, tech dependency, data centers, cryptocurrencies, and other systems that mine data and revenue. It is certainly not in their interests to have a well-informed, thoughtful, skeptical, active, and engaged citizenry. Moreover, the ideological battles that have defunded public goods in favor of police and prisons have contributed to the market logic that has redefined university work in terms of preprofessional programs, devalued the humanities, curtailed or ended tenure-line jobs, and led to unsustainable levels of academic precarity. The more we can connect these issues—LLMs, the crisis in the humanities, and academic employment—the more we can resist these corporate narratives of inevitable futures. If you can, join the American Association of University Professors; get involved in your university’s system of shared governance and even administration; support non-tenure track faculty by advocating for tenure lines and full-time positions with benefits; bring up the truth-value, ethical, and environmental problems of “AI” whenever and wherever you can; advocate for history and the other humanities for their role in forming informed and engaged global citizens.


This post is the first of a series exploring these connections, with a stellar group of scholars thinking through the implications of AI for us. We hope that this will be the start of a productive conversation about these issues, including opportunities to exchange model policies, syllabus language, pedagogical approaches, etc. I am thrilled to see NACBS promote such discussion through this series, and hope to see many more opportunities to debate, collaborate, and organize. Studying history, we know that nothing is inevitable.


You can find a list of resources related to AI in education at this link.



Amy Woodson-Boulton is professor of British and Irish history and past chair of the Department of History at Loyola Marymount University in Los Angeles, California. Her work concentrates on cultural reactions to industrialization in Britain, particularly the history of museums, the social role of art, and the changing status and meaning of art and nature in modern society. She currently serves as chair of the NACBS DEI Committee.


The views and opinions expressed in this post are solely those of the original author/s and do not necessarily represent the views of the North American Conference on British Studies. The NACBS welcomes civil and productive discussion in the comments below. Our blog represents a collegial and conversational forum, and the tone for all comments should align with this environment. Insulting or mean comments will not be tolerated and NACBS reserves the right to delete these remarks and revoke the commenter’s site membership.
