
Data is the New Oil: Outside-in intelligence and the social and environmental consequences of AI

If so-called artificial intelligence systems have any intelligence at all, it is of a kind described by historian of automata Jessica Riskin as “outside in.” Simple machines built by information technology pioneers like Claude Shannon in the 1950s are the first clear examples of “outside in” intelligence. One of Shannon’s robots, a wooden mouse with copper-wire whiskers called Theseus, was able to search a maze to find “cheese,” a switch that turned off its motor. Once it had found the cheese, Theseus could remember where it was and return to it from wherever it was dropped into the labyrinth. But although Theseus appeared to be an intelligent mouse, as Shannon always explained at the end of his demonstrations, the mouse did not really solve the maze. Relay circuits underneath the labyrinth remembered the route once the cheese had been found. The maze was smart, not the mouse; it was an “outside in” intelligent system.

Self-driving cars are much more sophisticated than Theseus, but they too have “outside in” intelligence. If their performances, like those of the generative pre-trained transformers (such as ChatGPT) that produce texts or images, are impressive, it is because they draw on unfathomably huge amounts of data. They are trained on petabytes of information and on cheap human labor, and once on the streets they become their own data collection sites, sharing information with other vehicles and data centers. A petabyte is one quadrillion (one with fifteen zeros after it) bytes, a byte is eight bits of information, and a bit is the information contained in the simplest possible machine, a switch with two positions, on or off, conventionally 0 and 1.
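For readers who like to see the arithmetic, a minimal sketch in Python makes those orders of magnitude concrete; the numbers simply restate the definitions above, not any company’s actual training data:

# A bit is one two-position switch; a byte is eight of them.
BITS_PER_BYTE = 8
PETABYTE = 10**15  # one quadrillion bytes

# One petabyte expressed as individual on/off switches:
print(PETABYTE * BITS_PER_BYTE)  # 8 quadrillion bits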

When I was very young, I used the word “quadrillion” to describe anything unimaginably large. The older me uses an imaginary data set described by the writer Jorge Luis Borges in a short story called “The Library of Babel.” Borges’s library contains every possible book that can be written with a communication system of twenty-five symbols: twenty-two letters, a comma, a period, and an empty space. Each book is of the same length, four hundred and ten pages, but is uniquely different from all the others. As a result, the library’s contents can be calculated exactly, and it holds far more books than our best estimates of the number of atoms in the universe. Borges’s imagined library is not infinite, but it is too big to be real.
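For readers who want to check that claim, a few lines of Python will do it. The sketch assumes the book format Borges specifies in the story, forty lines per page and about eighty characters per line, details the paragraph above leaves out:

from math import log10

SYMBOLS = 25                    # twenty-two letters, comma, period, space
CHARS_PER_BOOK = 410 * 40 * 80  # 410 pages x 40 lines x ~80 characters (per Borges)

# The library holds 25 ** 1_312_000 distinct books. That number is far too big
# to print, so count its decimal digits instead.
digits = int(CHARS_PER_BOOK * log10(SYMBOLS)) + 1
print(digits)  # about 1,834,098 digits; atoms in the universe (~10**80) need only 81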



The following image description is taken from the Public Domain Review: “We find a more apocalyptic vision of the future in Robert Seymour’s 1820s The March of the Intellect, where a jolly automaton stomps across society. Its head is a literal stack of knowledge — tomes of history, philosophy, and mechanic manuals power two gas-lantern eyes. It wears secular London University as a crown. The machine smokes while crusading, blowing hot-air-balloon follies from a pipe bowl, carried on the breath of its menacing exhalation: “I Come I Come!!”. Wielding a straw broom, capped with the head of reformer Henry Brougham, it sweeps away all potential encumbrances. Gone are the pleas, pleadings, delayed parliamentary bills, and obsolete laws. Vicars, rectors, and quack doctors are turned on their heads.”
“The March of Intellect,” by Robert Seymour, c. 1828. Public Domain.

If the universe is too small for Borges’s library, perhaps there is not enough space on Earth for all our generated data? Add up all the ongoing collections that our phones, computers, and cars make every day (there are more than 1.5 billion active iPhones in the world), and a quadrillion bytes begins to look less like a large number and more like a small fraction. The thought has certainly occurred to Elon Musk (owner of Grok), Jeff Bezos (founder of Amazon), Sam Altman (CEO of OpenAI), and Jensen Huang (chief executive of the chip maker Nvidia). They have all invested in a space data center start-up called Starcloud, whose chief executive Philip Johnston thinks orbiting data centers are the solution. For Johnston, space data centers would have abundant access to solar power, avoid government regulation, and look cool, appearing in the sky like small moons. The cost of making these centers, sending them into orbit, and maintaining them (they would need rebuilding every five years, the life span of their computer chips), and the pollution caused by orbital overcrowding, are things Johnston is less keen to talk about. For him, “It is not a debate––it is going to happen…”
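A rough calculation shows why the quadrillion shrinks so quickly. The one-megabyte-per-device daily figure below is an assumption made for the sake of argument, not a measured rate:

ACTIVE_IPHONES = 1_500_000_000    # from the paragraph above
BYTES_PER_DEVICE_PER_DAY = 10**6  # assume a modest 1 MB per device per day
PETABYTE = 10**15

daily_bytes = ACTIVE_IPHONES * BYTES_PER_DEVICE_PER_DAY
print(daily_bytes / PETABYTE)  # 1.5 -> iPhones alone would fill 1.5 petabytes every day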

Meanwhile, back on Earth, space and energy remain a problem. According to Google’s AI search, a data center contains about a quintillion (one with eighteen zeros after it) switches. Sustained and expensive research through the postwar period has made electronic switches (transistors) very small. The most sophisticated phones have around 20 billion of them. But dividing a quintillion by 20 billion still makes a data center a machine comparable in size to 50 million phones. That calculation might be too crude, but it does enough to suggest just how expensive powering a data center will be. A section of the London Borough of Ealing Council’s planning report giving the go-ahead to a company called Global Technical Realty to build a data center campus in Southall estimates that the resulting demand for electricity will be “1,610,528 MWh per year, the annual equivalent of approximately 530,000 homes,” or that of a city the size of Birmingham. All that energy must also create heat, and that means data centers also need water. According to The Guardian, the proposed hyperscale campus at Cambois in Northumberland could consume as much as 124 million liters of water, equivalent to the average yearly use of 11,000 people and fifty times higher than the estimate of QTS, the US operator developing the site. They will also pollute. The Elsham data center, which is being built in Lincolnshire at a cost of £10 billion, is projected to release five times the carbon dioxide of Birmingham Airport.
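These figures are easy to sanity-check. The short Python sketch below reruns the back-of-envelope arithmetic, using the article’s own numbers rather than independent measurements:

# Switch count: a quintillion transistors versus a 20-billion-transistor phone.
QUINTILLION = 10**18
PHONE_TRANSISTORS = 20 * 10**9
print(QUINTILLION // PHONE_TRANSISTORS)  # 50,000,000 -> about 50 million phones

# Ealing planning report: 1,610,528 MWh per year across some 530,000 homes.
print(1_610_528 / 530_000)  # about 3 MWh per home per year

# Cambois water estimate: 124 million liters shared among 11,000 people.
print(124_000_000 / 11_000)  # about 11,300 liters per person per year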

What these demands for energy and resources make all too clear is that the new information economy needs access to an old infrastructure, the railroads, power grids, and river systems that were the sources of nineteenth-century industrialization. Or, put another way, it is the latest historical development of an economic system dependent upon a series of energy transitions, from wood to coal to oil and now to solar, wind, and nuclear energy. Twenty-first-century technology companies are keen to encourage the idea that these “transitions” inevitably lead to greater efficiency, and even to solutions to the problems of climate change. But as historian of science Jean-Baptiste Fressoz has pointed out, talk of energy transitions in the past only served to disguise demands for “[m]ore and more and more” power. The mind-boggling arithmetic of data growth gives every reason to think that he is right.

To make matters worse, the owners of this new UK infrastructure have no reason to serve the British people. David Edgerton, a historian of technology and of the UK, has argued that the “nationalized” economy largely created by postwar Labour governments maintained key energy resources such as coal, gas, and electricity supply, as well as communication networks like the railways, telephone, and postal systems, under UK government control. Margaret Thatcher’s Conservative government and Tony Blair’s New Labour sold that infrastructure to global multinational companies that care little to nothing about British economic or environmental interests. As the novelist and journalist James Meek discovered on a recent visit to Blyth, only six miles from the proposed Cambois data center, those set to live under what Meek calls the “flyover of an energy-AI superhighway” have little faith that global capital will sustain the local economy. As a result, a community that was once at the heart of the National Union of Mineworkers’ resistance to Margaret Thatcher’s economic policies now feels a greater affinity for crude Reform UK nationalism than for the current Labour government’s economic globalism.

Although there are some voices, such as that of Martha Dark, co-executive director of Foxglove, a London-based non-profit for a “fairer tech future,” calling for “an economic plan best for Britain… rather than for Amazon, Google and Meta,” the power of the technology companies to suppress debate and withhold investment is daunting. But there is perhaps a glimmer of hope. Proposals like Starcloud’s suggest that the so-called AI boom is a bubble. And that raises the question: what happens to the data centers after the financial bust?

Governments and communities could use an AI crash as an opportunity to curtail technology company power by taking some control of the data centers and the data they contain. It is not clear how that would work, but the purpose must be to give local and national communities greater economic and political power over their digital environments. After all, if the AI boom has taught us anything, it is this: data is the new oil, and the most pressing questions are who owns it, who has access to it, and what should it be used for?


Kevin Lambert is a Professor of Science and Technology in the Liberal Studies Department at California State University Fullerton and author of the book Symbols and Things. His work investigates how the material environment shapes ways of knowing. He is currently researching a new project about the circulation of things in the global sixteenth and seventeenth centuries. 



The views and opinions expressed in this post are solely those of the original author/s and do not necessarily represent the views of the North American Conference on British Studies. The NACBS welcomes civil and productive discussion in the comments below. Our blog represents a collegial and conversational forum, and the tone for all comments should align with this environment. Insulting or mean comments will not be tolerated and NACBS reserves the right to delete these remarks and revoke the commenter’s site membership.
