
The Humanities Paradox in AI: Why Praise Isn’t the Same as Hiring

In early February, a clip of Anthropic co-founder and president Daniela Amodei went viral. In an interview with ABC News, she declared that studying the humanities is “more important than ever” in the age of AI. The clip was screenshotted and shared; academics discussed it with campus colleagues and posted it on social media. For a brief window, it felt like the most hopeful thing anyone in Silicon Valley had said about the liberal arts in years.


Amodei’s statement sent me to Anthropic’s careers page. What I found were dozens of open roles for machine learning researchers, software engineers, and infrastructure specialists, alongside a smaller number of policy and operations positions. What I did not find was any job listing that explicitly sought training in English, history, philosophy, or other humanities disciplines.


Chrétien Frédéric Guillaume Roth, Essai d'une distribution généalogique des sciences et des arts principaux (1769), after Diderot and d’Alembert's Encyclopédie. This Enlightenment-era diagram organized human knowledge into distinct branches of Memory, Reason, and Imagination, treating each discipline as a form of expertise with its own methods and logic. Today, that same knowledge is increasingly repackaged as transferable “soft skills,” detached from the disciplines that produced it. Courtesy of the University of Basel Library. Public domain.

This is not a hit piece on Amodei or Anthropic. The tension she reflects is larger than any one company. It is an industry-wide gap between rhetoric and practice, or what I call the humanities paradox in AI: the simultaneous public elevation and institutional marginalization of humanistic knowledge. It is worth examining honestly, because the stakes are high for those of us who teach these disciplines and for the students we are preparing.


Listen carefully to what Amodei actually praised when she praised the humanities. She described what Anthropic looks for in prospective employees: people who are strong communicators, emotionally intelligent, curious, and motivated to help others. These are real and valuable qualities. But they are not the same as humanistic expertise, which is a far more specialized skill set.


When she described the value of her own literature degree, she pointed to critical thinking, comfort with ambiguity, and skills in reading and argument. These are indeed central outcomes of a humanities education. But framing the value of these disciplines primarily through “soft skills” and transferable dispositions performs a subtle kind of deflection. It suggests that you are valued for what your education taught you to be, but not necessarily for what it trained you to do.


A historian does not simply have “good communication skills.” A historian is trained to analyze systems of power over time, to track the consequences of institutional decisions, and to interpret incomplete and conflicting evidence in context. An ethicist does not just possess “high EQ.” An ethicist brings frameworks for navigating moral uncertainty, competing values, and long-term consequences, which are precisely the kinds of problems AI companies confront daily. When these disciplines are reduced to their interpersonal byproducts, their substantive contributions disappear. Companies are then relieved of the responsibility to build roles where that expertise is actually used.


For those of us in higher education, this paradox is familiar. For decades, humanities departments have seen enrollments decline, tenure lines shrink, and budgets tighten, even as administrators celebrate the liberal arts in mission statements and fundraising materials. The language of “critical thinking” and “communication skills” has long served a dual purpose. It defends the humanities’ relevance while translating disciplinary knowledge into terms legible to employers and accreditors. In the process, the specific content of humanistic training, its methods, frameworks, and interpretive traditions, often recedes from view.


There is a longer history here as well. Moments of technological change have frequently been accompanied by efforts to redefine expertise in more instrumental terms. In the nineteenth century, industrialization reorganized skilled labor while simultaneously producing new discourses about efficiency and generalizable skill. In the twentieth, early computing relied on human “computers” whose intellectual labor was often recast as routine or interchangeable. The current AI moment echoes these patterns: complex forms of knowledge are flattened into task-based descriptions that can be measured, optimized, and, increasingly, automated.


A widely circulated 2025 Microsoft study, for instance, ranked historians among the occupations most susceptible to AI replacement based on how closely their tasks overlapped with what large language models already do. Such studies rely on defining historical work at a highly abstract level. That is the same flattening that occurs when historical thinking is reduced to “research skills”: it misses the work of evaluating fragmentary evidence, situating sources in context, and exercising interpretive judgment.


I see the effects of this directly with my own students. As they search for majors and eventually for positions where a history degree is an asset, they encounter the same pattern: broad praise for what the humanities cultivate, paired with few roles that explicitly value humanistic knowledge. The message they receive is not that their education is unimportant, but that it is important only insofar as it translates into something else.


This disconnect matters because the decisions being made at companies like Anthropic, OpenAI, and Google will shape how AI systems interact with millions, if not billions, of people. These decisions encompass judgments about harmful content, appropriate autonomy, embedded values, and responses to moral dilemmas. They also involve how AI systems represent the past: which narratives they privilege, how they summarize historical events, and what they omit.


These are, in significant part, humanities questions, ones long examined within disciplines concerned with language, ethics, culture, and historical interpretation. Yet there is limited public evidence that leaders in AI are incorporating humanistic expertise at scale in the design and deployment of these systems.


One notable example is Amanda Askell, a philosopher at Anthropic whose work on alignment and fine-tuning directly engages questions of values and behavior in AI systems. Her presence demonstrates what it can look like to embed humanistic training within technical development. But she stands as an exception rather than a widely adopted model. Engineering teams grow as a matter of course; roles for historians, literary scholars, or philosophers remain comparatively rare and less structurally defined.


There are reasonable counterarguments here. AI companies might point out that the pipeline is thin, that few humanities PhDs are applying for roles at frontier labs, or that the candidates who do apply often lack the technical fluency needed to work alongside engineers. Some might note that the most effective contributions come from hybrid figures, people who combine humanistic training with computational literacy, and that the industry is already hiring them in small numbers. Others would argue that a startup burning through venture capital simply cannot justify roles whose return on investment is difficult to quantify in a board deck. These are real constraints, but they are also, in part, self-reinforcing. Pipelines are thin because the roles do not exist yet; candidates do not develop hybrid fluency because there are few career paths that reward it; and the difficulty of quantifying a philosopher’s ROI is itself a reflection of the instrumental logic that the humanities paradox describes.


A charitable reading of Amodei’s comments, and the most interesting one, is that she likely believes what she is saying. The issue is not insincerity, but structure. Venture-backed AI companies operate under competitive pressure, where the expectation is that each hire must advance model capabilities as directly and quickly as possible. Within that framework, hiring another machine learning researcher will almost always appear more urgent than hiring a historian, even when the systems being built are fundamentally about language, meaning, and human behavior.


This is the deeper version of the paradox. The organizations developing increasingly powerful tools for generating and interpreting language are doing so within incentive structures that make it difficult to value fully the disciplines most concerned with how language works and what it does in the world.


If AI leaders genuinely believe the humanities matter, the next step is institutional, not rhetorical. Create the roles. Hire historians, philosophers, and literary scholars as experts whose training addresses core questions in AI development, not as generalists with “good communication skills.” Seat humanists on advisory boards and governance bodies, where decisions about values, language, and public impact are made. Integrate that expertise into design, evaluation, and leadership as part of the structure.


Until then, the quotes will continue to circulate. Humanities graduates will continue to share them with a mixture of hope and irony. And the gap between what AI leaders say about the value of human understanding and what their organizations materially support will persist.

Daniela Amodei may be right that the humanities are more important than ever. The question is whether the institutions shaping this technological moment are willing to reflect that belief not just in rhetoric, but in how they build their teams, define expertise, and imagine the future of knowledge.



Chloe Northrop is Department Chair and Professor of History at Tarrant County College, where she has led departmental conversations on the impact of generative AI for online student learning. She specializes in eighteenth-century British Atlantic history. Her current book project, Sea Daddy: Celebrity and Shifting Tides in Eighteenth-Century British Naval Figures, examines the construction of naval fame across the Atlantic world. She currently serves on the Executive Council for the Southern Conference on British Studies.


The views and opinions expressed in this post are solely those of the original author/s and do not necessarily represent the views of the North American Conference on British Studies. The NACBS welcomes civil and productive discussion in the comments below. Our blog represents a collegial and conversational forum, and the tone for all comments should align with this environment. Insulting or mean comments will not be tolerated and NACBS reserves the right to delete these remarks and revoke the commenter’s site membership.
