AI, slavery, surveillance, and capitalism
(or AI for The Labor Question & What is Silicon Valley?)
Welcome to The Tech Bubble. This week, two pieces: another note from the artificial intelligence series (this time on origins and ends), as well as an essay on the limits of competing frameworks for understanding Silicon Valley.
Thanks again to everyone who signed up after the last post, we are just shy of 2,000 subscribers! If you're interested in supporting my work (and me), consider becoming a paid subscriber for $7 a month—the cost of one pack of Newports (at some establishments)—or $70 a year. Everything will be free for now, but I'll eventually be paywalling some content with recommendations, extra essays, and more! If you like what you've read so far, please share it with a friend (or enemy).
AI == SOCIAL CONTROL (or AI AS SURVEILLANCE)
Slavery, labor, and the origins of computation
Back in 2022, Georgetown Privacy Center's Executive Director Emily Tucker argued that we should abandon terms like "artificial intelligence," "AI," and "machine learning" entirely when discussing digital technologies. These terms, and much of the discourse around artificial intelligence, trick us into naturalizing industry-favored ways of discussing, thinking about, critiquing, and developing algorithmic and automated systems, ways that converge with the interests of large corporations.
Our lack of self-consciousness in using, or consuming, language that takes machine intelligence for granted is not something that we have co-evolved in response to actual advances in computational sophistication of the kind that Turing, and others, anticipated. Rather it is something to which we have been compelled in large part through the marketing campaigns, and market control, of tech companies selling computing products whose novelty lies not in any kind of scientific discovery, but in the application of turbocharged processing power to the massive datasets that a yawning governance vacuum has allowed corporations to generate and/or extract. This is the kind of technology now sold under the umbrella term "artificial intelligence."
In its place, Tucker calls for a "Baldwin Test"—a commitment to using language that springs from real-life experiences with these technologies instead of corporate talking points. Think of it as a creative practice informed by a few fundamental guidelines:
(1) Be as specific as possible about what the technology in question is and how it works.
(2) Identify any obstacles to our own understanding of a technology that result from failures of corporate or government transparency.
(3) Name the corporations responsible for creating and spreading the technological product.
(4) Attribute agency to the human actors building and using the technology, never to the technology itself.
What we call "artificial intelligence" has little to do with intelligence itself, whether we're talking about sparks of insight, boredom, delusion, or alternative forms (human or otherwise). Instead, it is concerned with narrow types of cognitive activity (e.g., pattern recognition) in pursuit of products that render humans more calculable, enabling prediction and replication. When told that ChatGPT can reason, or that our generative chatbots suffer from hallucinations, it's to trap us in a closed-off imaginative space that large tech firms have painstakingly cultivated. Certain assumptions hold there (like the idea that computational infrastructure should solely be in the hands of large corporations or that certain types of "AI" are capable of thought) and go a long way towards shaping what sorts of technologies get envisioned, financed, designed, developed, deployed—and to what ends. Corporations flooding the zone with propaganda about the inevitability of superhuman intelligence or the current capacity of their products to "think" are, ultimately, part of a long lineage of technologies concerned with cultivating social control.
Social control is embedded in the blueprint of modern computation, specifically through Charles Babbage and his calculation engines—tools first and foremost concerned with labor discipline and automation. As Meredith Whittaker explains, these tools "directly encoded" Adam Smith's theories of labor division and built on contemporary methods advancing labor control.
They were prefigured on the plantation, developed first as technologies to control enslaved people. Issues alive in the present—like worker surveillance, workplace automation, and the computationally mediated restructuring of traditional employment as "gig work"—echo the way that computational thinking historically emerges as a mode of control during the "age of abolition," in the early nineteenth century.
Whittaker's argument is a straightforward one: methods deployed on plantations to control and maximize the labor of enslaved Black people were transferred to industry for use on "free" (that is, not enslaved) white workers. Contemporaneously, plantations were understood to be "modern industrial undertakings," integral to developing the forms of surveillance, regimentation, labor division, rules, and regulations that would give rise to the "scientific" management of industrial labor. Enter Babbage, who, like plantation overseers, saw worker surveillance and control as "central" to "labor arrangements, alongside distinct regimes of violence and discipline calibrated to increase profits and productivity."
Debates over abolition raised existential questions about the future of Britain's empire, namely, how could international preeminence be maintained without racialized, enslaved labor? And so Babbage sought to solve the "labor question" with calculating engines that would standardize and discipline work into forms that supported British industry, capitalism, and imperial ambitions. Babbage's machines depended on automation, which in turn depended on the division and control of labor so that work could be made "observable, quantifiable, and controllable." Even surveillance, it was hoped, would someday be automated, so that new instruments for disciplining workers and discouraging labor organizing could be devised. Technological innovation would be deployed to reshape new forms of labor into older ones, preserving familiar forms of exploitation while experimenting with novel ones oriented towards similar ends.
Modern computation is born from an explicit attempt to replicate lucrative, violent forms of labor exploitation for the British Empire in the face of worker revolts and abolition movements. Anyone familiar with how automated systems are used to suppress worker organizing, generate kill lists to justify the targeting of civilians, militarize borders, and terrorize migrants will not be surprised that there's a direct link here. We can go a bit further, however. Take the core argument in Matteo Pasquinelli's The Eye of the Master:
the inner code of AI is constituted not by the imitation of biological intelligence but by the intelligence of labor and social relations. Today, it should be evident that AI is a project to capture the knowledge expressed through individual and collective behaviours and encode it into algorithmic models to automate the most diverse tasks: from image recognition and object manipulation to language translation and decision-making.
Here emerges one reason to use the term "artificial intelligence." We drown in corporate propaganda about the high-minded quest to build human and superhuman intelligences. Look past the hype, and you will recognize a grand capitalist project that pursues labor automation alongside the imposition of hierarchies and disciplinary mechanisms that preserve, expand, or re-legitimize forms of social control. In the 19th and 20th centuries, capitalists ostensibly used statistical methods to measure intelligence; in practice, those methods served to reinforce pre-existing forms of discrimination through skill classifications. "Artificial intelligence" can be thought of as the 21st-century iteration of this project.
AI continues this process of encoding social hierarchies and discriminating among the labour force by imposing indirectly a metrics of intelligence. The class, gender, and racial bias that AI systems notoriously amplify should not only be considered a technical flaw, but an intrinsic discriminatory feature of automation in a capitalist context.
What we call artificial intelligence (Pasquinelli's "eye of the master") lends itself towards intensifying capitalist control over labor and social relations, innovating new forms of discipline and surveillance, and creating new relations that preserve the exploitation necessary to keep things chugging along. Automation doesn't replace workers but "multiplies and governs them anew." And so we are inundated with Potemkin AI (software that relies heavily on human labor but pretends to be powered by sophisticated software). Global computational infrastructure—data, servers, sensors, networks, cables, and so on—is maintained by an "army of 'ghost workers'" spread across the Global South in refugee camps, prisons, and workplaces that resemble plantations (with counterparts in the Global North subjected to similar conditions). To sustain these restructured labor relations, workplaces, and markets, we increasingly see flesh-and-blood managers replaced with algorithmic overseers bolstered by surveillance and social control. The experiences of app-based ride-hail drivers in New Delhi and New York City, or of data labelers in Southeast Asia and Europe, are the rule, not the exception: "artificial intelligence" doesn't describe attempts to mimic human intelligence but should be used to describe the most recent iteration of a long-standing project to subordinate humanity to power relations that benefit Silicon Valley capitalists.
Surveillance capitalism vs techno-feudalism vs techno-authoritarianism
Understanding those power relations requires that we spend some time interrogating what, exactly, Silicon Valley is. From its earliest days, the question has plagued a growing host of beings under Earth's firmament: the native peoples who lived in California, the settlers who exterminated them, the soon-to-be-sacrificed ecology, the migrants whose bodies (then and now) fueled its metastasis, the eugenicists whose dreams sustained it, the state planners who subsidized it, the financiers who speculated on it, and the consumers who sometimes suffer it and sometimes worship it.
The question's stakes have only grown with the years. Despite cycles of staggered layoffs and devaluations, Palo Alto still aspires to control the commanding heights of the economy. The sector boasts tens of trillions of dollars of value, the region hundreds of thousands of jobs, and the people synonymous with both (venture capitalists, founders, tech executives, etc.) have integrated themselves into increasingly rarefied circles of power and are slowly restructuring our politics, culture, economy, social relations, and legal system to their benefit.
It matters, then, what we think of Silicon Valley and its operations, because that will, in turn, structure the nature of the response. Is Silicon Valley to be expropriated or exalted? Will we subject it to a controlled demolition, or should society be remade in its image? Are we satisfied with this nexus of capital, technology, and power, or are we interested in another mode of technological development?
Small wonder then that, despite their best efforts, the past few years have seen many critics and sycophants start to sound like one another. Capitalism, they insist, is coming to an end because our powerful digital technologies are unraveling the mode of production that has come to dominate Earth these past few centuries. Surveillance capitalism, techno-feudalism, and techno-authoritarianism are dialects that share the same mother tongue: an artificial language incapable of accurately describing our reality. The longer we cling to them, the harder we will find it to accurately assess what exactly we take issue with and what to do about it.
The most well-known of these theories comes to us from Harvard Business School professor Shoshana Zuboff's 2018 brick of a book (and title), The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. In it, she insists capitalists have leveraged digital technologies to expand computational resources, develop data analytics tools, and—most importantly—generate new digital services. In the beginning, Zuboff tells us, there was Google's behavioral value reinvestment cycle. User activities like search queries generated data that was analyzed to improve digital experiences or offer new ones. This drew more users, yielded more data, and so grew the Edenic digital paradise. Google needed to make money from this cycle, so it introduced a snake into the garden through advertisements. Then Google made a fateful discovery: "data exhaust," or the byproduct of user digital behavior, was not waste to be expelled but gold to be mined. Web searches don't just generate results; they generate data: the way you type your query, the heatmap of your mouse, the pages you do or don't visit, and more. This exhaust was actually a "behavioral surplus" capable of bolstering Google's burgeoning advertising business by incorporating predictive modeling based on data its users generated. This, Zuboff laments, sealed the deal: Google abandoned the behavioral value reinvestment cycle for a behavioral surplus business model. It began to prey on its users, hunting for more behavioral surplus and investing in greater tools of surveillance, accumulating "surveillance capital" and developing an expansive extractive and predictive infrastructure eventually aimed at influencing and determining human behavior. Goodbye capitalism, hello surveillance capitalism.
While the book received widespread praise, it was also met with criticism from fellow academics and tech critics. Kirstie Ball, professor at the University of St. Andrews, notes the book's refusal to cite relevant surveillance literature, a refusal that leads it to duplicate existing arguments, weakening the tome and leaving it with embarrassingly little to offer in terms of concrete next steps. Ball chalks this up, ultimately, to the text being "intended as a wake-up call for the educated business reader to recognize the massive power of the tech platforms." It may provoke some reflection, but ultimately it "is more likely to be found in an airport bookshop than in a learned library."
Blayne Haggart, professor at Brock University, goes a bit further, sharing that he wouldn't be teaching it because "as an academic work, it falls far short of the standards to which we should hold ourselves." He echoes Ball's concerns about citation and engagement with the literature but adds that Zuboff goes to great lengths to obscure her own analytical framework and often relies on hyperbole when evidence is scant.
One example Haggart offers: Zuboff compares people under surveillance capitalism to poached elephants. She envisions the poachers as some Big Other—in a 2015 article, she defined this as "a new universal architecture existing somewhere between nature and God." Big Other "poaches our behavior for surplus and leaves behind all the meaning lodged in our bodies, our brains, and our beating hearts." We are not the product; we are an abandoned elephant carcass. It is an existential metaphor that suggests little can be done, Haggart notes, and a simplistic one that prevents us from thinking clearly about the issue. If one remembers that some people enjoy social media despite its negative externalities, then capitalist reformers might pursue well-established policy solutions constructed for this very type of predicament: taxes, regulation, or abolition of the enterprise entirely.
There are other criticisms of Zuboff's book and theory. The major one comes from tech critic Evgeny Morozov's 14-part, 16,000-word review, which interrogates the foundations of Zuboff's argument. Zuboff's theory, he observes, relies on a certain way of seeing capitalism: that its dysfunction is not a product of historical or even systemic features—such as deindustrialization or profit-seeking—but rather "the avoidable consequences of particular organizational arrangements, which, while having their uses in earlier eras, could now be made obsolete with information technology." At play here, too, is an intellectual genealogy stretching back to Harvard business history professor Alfred Chandler and his mentor, sociologist Talcott Parsons: social systems function because they have specific needs satisfied by specific parts; history alters those needs, which alters the functioning of those parts, which forces society to adapt. Chandler's own grand theory of "managerial capitalism" was a tautological mess: an analytical model of history that ignored power relations, prioritized technological change, ignored evidence that inconvenienced the core premise from which its inquiry sprang, and could not recognize that it was analyzing capitalism.
Such is the thrust of Zuboff's work. Surveillance capitalism is driven by the imperatives of surveillance capitalism, which narrows her inquiry to relationships between firms extracting behavioral surplus from users and ignores vast swaths of the digital economy. We discard the geopolitics that helped catalyze Silicon Valley, the sources of capital that fuel its startups, the labor relations at major platforms that carve up daily life, the assetization schemes ranging from crypto to nature itself, the effects of concentrated ownership of computational resources, and more. In effect, we hyperfocus on specific phenomena (surveillance by advertising giants) in ways that normalize consequential phenomena (as well as other types of surveillance!) happening elsewhere in the digital economy because they do not fit into the theory. However, their inability to fit into surveillance capitalism's theorization does not mean their effects on our politics, labor relations, social lives, culture, economy, and ecology are any less real—just that surveillance capitalism is likely a useless theory.
Despite Morozov's field dressing, Zuboff's surveillance capitalism has been influential enough to regress our understanding of the digital economy. Before we turn to Zuboff's influence, it's worth looking at a different surveillance capitalism offered up by John Bellamy Foster and Robert W. McChesney at the Monthly Review.
Foster and McChesney's 2014 essay is a lengthy one with a simple premise: World War Two lifted the U.S. economy out of the Great Depression and into hegemonic supremacy, but state planners worried this would prove temporary because "domestic demand would be insufficient to absorb the enormous and growing potential economic surplus generated by the production system, thereby leading to a renewed condition of economic stagnation and depression." To that end, planners in industry and government organized three major campaigns to absorb that surplus: a corporate marketing revolution via Madison Avenue; a military-industrial complex committed to imperial control of markets abroad and to technological innovation; and, when those two efforts waned, the ascent of financialization.
Marketing required the technical development of "a highly organized system of customer surveillance, targeting propaganda, and psychological manipulation of populations." At the same time, it transformed capitalist competition: collusion among oligopolies ensured prices only went upwards, and fights instead were over "market share for particular brands," which required product differentiation, which intensified pressure to develop communications technologies that could create and preserve certain consumer groups, which accelerated the privatization of broadcasting and previously open space to accommodate ever-growing advertising and marketing efforts.
The second pillar of Foster and McChesney's "surveillance capitalism"—a permanent warfare state—was, ironically enough, founded by General Dwight D. Eisenhower in a 1946 memo on the subject of "Scientific and Technological Resources as Military Assets." Nearly 15 years before he warned about the "military-industrial complex," Eisenhower insisted national security depended on the creation of a massive apparatus that marshaled civilian scientists alongside private firms and contractors into a tight, collaborative relationship. The Second World War's subordination of industrial capacity and scientific innovation to military ends was to be replicated and sustained indefinitely, calling on new institutions and new technologies: groups like the Council of Economic Advisers to help chart a course for military Keynesianism; institutions like the National Security Council to produce documents like NSC-68, which called for a geopolitical strategy justifying massive rearmament for national and economic security; and agencies like the National Security Agency to spearhead electronic surveillance at home and abroad. A desire for social control, whether to crush political enemies or galvanize consumer activity, spurred the creation of national electronic surveillance programs and computer systems to transmit intelligence rapidly (MINARET, the proto-internet ARPANET, ECHELON, etc.).
The third pillar, financialization, emerges as the first two begin to falter during the neoliberal era: financial institutions create speculative products for capital to invest in, and high-speed computer networks add an accelerant to the fire. "Like advertising and national security, it had an insatiable need for data," Foster and McChesney write. "Its profitable expansion relied heavily on the securitization of household mortgages; a vast extension of credit-card usage; and the growth of health insurance and pension funds, student loans, and other elements of personal finance." The aggregation of financial transaction data picks up exponentially, becoming common practice in major corporations and spurring an entire industry of data mining, data analytics, data brokerage, and the promise of "premium proprietary behavioral insights."
From that quick look at the alternative surveillance capitalism, we get a sense of how much is lost in Zuboff's truncated history, given its singular focus on major Internet firms—specifically Google and Facebook—and on much more recent encounters with the security state, specifically 9/11 and its aftermath. Ignoring that history deprives us of a chance to appreciate how fundamental surveillance has been to capitalism since World War Two. It also means that, in losing sight of surveillance's origins, we lose sight of points to attack if we are interested in treating the illness instead of a few symptoms.
Chief among Zuboff's heirs are the techno-feudalists, tech critics who argue capitalism is dead and Big Tech has killed it. To their credit, the techno-feudalists' engagement with history, economics, and politics is substantial compared to Zuboff's. In their telling, capitalists—specifically tech capitalists—no longer reinvest their profits to produce more; instead, they build platform monopolies that resemble feudal fiefs, generating rents rather than profits through total control over the value generated by workers and consumers.
Former Greek finance minister Yanis Varoufakis argues in his new book, Technofeudalism: What Killed Capitalism, that our digital overlords ("cloud capitalists") leverage "cloud capital" to create algorithms that predict and manipulate our behavior, as well as platforms to exploit workers ("cloud proles") and trapped users who generate data as a form of (typically unpaid) labor ("cloud serfs"). In his 2020 book Technoféodalisme: Critique de l'Économie Numérique, in 2022 essays in the New Left Review debating this concept, and in the recently published How Silicon Valley Unleashed Techno-Feudalism, French economist Cedric Durand goes to great lengths to ascertain why capitalism seems to have shifted from reinvesting in production to supporting regimes of intangible (digital) asset monopolies, underwritten by state coercion, that have given rise to various methods of rent extraction instead of profit-seeking competition.
As Malcolm Harris writes for NYMag, technofeudalists "have a bad habit of repeating the industry's self-promotional puffery." They align with investors in insisting that computation has transformed capitalism, arguing that the anti-competitive practices of firms like Amazon, Google, and Facebook are evidence of their deviation from capitalism. As Harris points out, the tech firms are themselves renters—they do not even control most of the computational resources that sustain their operations. And as Cory Doctorow points out in his book How to Destroy Surveillance Capitalism, while social control is real, algorithmic social control seems to be a marketing strategy to sustain the multitude of inflated industries and corporate valuations reliant on these claims.
The techno-feudalists—largely Marxist scholars building on Zuboff, a decidedly non-Marxist scholar—may yet spawn their own successors among liberals, too. In The Atlantic, executive editor Adrienne LaFrance penned a February 2024 essay, "The Rise of Techno-authoritarianism," arguing that Silicon Valley technocrats are a threat to democracy because they do not believe in it; instead, they believe in themselves.
Many of them profess unconditional support for free speech, but are vindictive toward those who say things that do not flatter them. They tend to hold eccentric beliefs: that technological progress of any kind is unreservedly and inherently good; that you should always build it, simply because you can; that frictionless information flow is the highest value regardless of the information's quality; that privacy is an archaic concept; that we should welcome the day when machine intelligence surpasses our own. And above all, that their power should be unconstrained. The systems they’ve built or are building—to rewire communications, remake human social networks, insinuate artificial intelligence into daily life, and more—impose these beliefs on the population, which is neither consulted nor, usually, meaningfully informed. All this, and they still attempt to perpetuate the absurd myth that they are the swashbuckling underdogs.
It is true that Silicon Valley is home to a horde of reactionaries who pine for a modernity free of liberalism and who are gripped by a delusion that society would be best served under their dominion. But why? If we are going to offer a theory such as techno-authoritarianism, we need some account of why there is such a break. LaFrance offers us a tautology: the individuals accelerating the pace of our innovation and the triumph of digital technologies owe it to ideas that prioritize accelerating our innovation and the triumph of digital technologies.
The sad truth is this: Silicon Valley and its reactionaries are chickens coming home to roost. Silicon Valley is not some Promethean flame we smuggled from Olympus. Technology is not some primordial force corrupted by libertarian nerds with too much money. It's of the real world, it's material, and it's influenced by the forces of history as much as anything else. Those forces grew out of the choice to prioritize technology that helps us undertake surveillance for commerce, advertising, imperial adventure, speculation, and political suppression. The existential threat posed by today's tech capitalist overlords will get much worse. The funny ideas they have about their divine right to rule, their suspicion of competition, democracy, and people who do not look like them—well, that's how we got here in the first place.