And we are back! Thanks to all of you who remained subscribers all this time, I appreciate your support more than you can know! If you like this essay and want to support me, consider becoming a paid subscriber to help support my work.
Today’s post consists of two notes from a series I’m working on that examines various ways some might think about artificial intelligence. The full series will be published at Security in Context soon, where I’m working as a senior researcher.
AI == life engineering (or AI as a vector to revitalize eugenics)
I first spotted this one in Paris Marx's Disconnect blog: NVIDIA CEO Jensen Huang suggesting "life engineering" would be made possible by artificial intelligence:
Applying this technology and advancing it further for some of the most challenging and pressing issues of our times—whether it's digital biology or healthcare—is some of our best futures. I can't imagine what we're going to be like in 10 years when we apply artificial intelligence to the field of biology and for us to move beyond calling it life sciences to life engineering. And for us to understand biology the way we understand many other fields of science would be incredible. And so I think that's probably our single greatest potential of helping and we're working on it.
Whether Huang realizes it or not, as Paris points out, such talk is "very much in line with using the veneer of tech to revitalize the eugenics movement." Silicon Valley and its various offshoots will be quick to insist they’re interested in something closer to eugenics’ supposedly progressive origins—in hacking your biology, improving your erections, killing Death, and becoming an immortal computer until the universe’s heat death or whatever. There’s little reason to buy into that lark, however.
In The Baffler, Gaby del Valle writes about the recent mainstreaming of Great Replacement Theory—the idea that a secret cabal is flooding the West with nonwhite people to permanently disempower and/or destroy the white race. This fits right into the recent backlash to diversity, equity, and inclusion (DEI) policies, which Great Replacement-pilled opponents insist ignore an inescapable truth: “differences between white Americans and everyone else aren't merely cultural but genetic, immutable, and incapable of being assimilated away."
As journalists like Jacob Silverman and Gil Duran and Eoin Higgins have documented at length, some of Silicon Valley’s more prominent investors, thought leaders, founders, and zealots take these ideas a step further: they see themselves as the vanguard of a reactionary backlash against liberalism itself. America will be saved from the fate of Eurabia only by the re-legitimization of caste, aristocratic rule, and pre-capitalist social relations. The supposed superiority of the white race has not made these screeds any more coherent, but that hasn’t stopped their popularity from growing.
Still, we shouldn’t make the mistake of suggesting this is a novel development—or that eugenics has some progressive origin story. Malcolm Harris’ Palo Alto and Gray Brechin’s Imperial San Francisco are both magisterial histories that highlight how the material and ideological roots of a Silicon Valley interested in race science, eugenics, and winning the war against liberalism go back to California’s colonial settlements. Palo Alto itself makes painfully clear that “progressive” eugenics never existed, showing how ideas about improving humanity (and early experiments doing so) were always pregnant with race science or an obsessive focus on racial difference. And even if you home in on something as recent as the modern venture capital industry that finances Silicon Valley, the shock some express at the turn towards Trump makes no sense considering the history and political objectives of its various actors—they are concerned with preserving the position of those who look like them and live like them and think like them.
While none of this inspires confidence that anything Silicon Valley produces can ever be free of a sulfuric odor, it doesn’t outright condemn artificial intelligence. One popular critical pathway for that project runs through the work of Timnit Gebru and Émile Torres, who assert that the ideas animating the pursuit of artificial general intelligence (AGI) are part of a TESCREAL bundle (transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism) shaped by a second-wave eugenics movement. Whereas first-wave eugenics was concerned with creating a superior "human stock" through social engineering, the second wave is more interested in genetic engineering and biotechnology to create "posthuman" species as well as machine gods.
So while “life engineering” in the short term may refer to efforts to improve our lifespans, vitality, resilience, cognition, and so on, it’s not clear why that should be taken seriously. If you look at the Valley’s origins, dominant ideologies, material structure, culture, prominent networks, and a host of other composite features, you quickly find grooves that easily plug “life engineering” into selective breeding, skull measuring, biological essentialism, race science, and eugenic projects that lend themselves to preserving power and privilege for those who are inherently worthy (by virtue of genetics, heritage, race, class, and other sharp lines of differentiation). None of this even begins to get into concrete and ongoing ways artificial intelligence is being used to pursue life engineering—that is, the ways in which it is used to structure the allocation of resources, social relations, market forces, urban design, medical care, regulatory frameworks, cultural production, and so much more. We will touch on that in later parts.
AI == virus (or AI as a foreign body that reinforces the resilience of the host)
One of my favorite pieces recently is Kate Wagner's "AI and Internet Hygiene" in which she looks at internet hygiene (the ability to spot bullshit which may harm a computer system)—specifically, she talks about the rise and fall of it, how the ongoing AI hype cycle takes advantage of its collapse, and the need for a new program taken up by public institutions.
You might be able to spot the fake "download" button when pirating an uncut unrated version of CRASH (1996), but your parents will fall for it every single time…not unlike how you may have developed an ability to spot AI-generated content but your aunts and uncles are sharing it unassumingly via WhatsApp/Facebook or SMS group chats.
Wagner writes:
Finally, as terrible as the AI infection of the Internet at large is, perhaps a re-skilling of the computer-using population is a silver lining. After two generations of Internet illiteracy, an embrace of new tools for empowerment and understanding can help salvage not only the Internet (if such a thing is still possible) but maybe even some of that old utopianism it once promised. At its core, the computer and the Internet are tools for creation, education, commerce, play, and communication.
Two takeaways here. First: AI was “prematurely” (from the perspective of the public) introduced to the public by profit-seeking firms, and it is in their interest that it widely proliferates to scrape every bit of data, erode privacy and copyright norms, and create dependents out of creatives and public institutions and other corporations as products are foisted upon each party. Second: digital systems and tools should not be ceded to the market—we should not accept the assumption driven into our heads that their natural form and function is to be an instrument of capital, profit, surveillance, social control, and so on. AI and the internet (as much as these terms can describe a singular thing) are constituted by actors whose interests don’t align with our own, and incidental scenarios where we both eke out a benefit tend to be slanted in the direction of those firms spreading the infection—not the public begging for a different implementation.
This reminds me of Ted Chiang's New Yorker essay "Will A.I. Become the New McKinsey?" which pegs the technology we call AI as broadly concerned with disempowering workers, extracting from the public, and consolidating power and privilege:
I’m not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases the power of capitalism. The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value. Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.
Chiang's essay boils down to asking whether artificial intelligence will become another version of McKinsey, fondly described as "capital's willing executioners" by a former employee. You could make pro-social A.I. just as you could make pro-social McKinsey, but investors will either prioritize ventures that maximize shareholder value or figure out a way to hollow out and co-opt your alternative (as was done with ESG, which our financial sector promptly used to greenwash business as usual and attract even more investor activity).
Silicon Valley and artificial intelligence can and do lend themselves pretty easily to old and new eugenic projects, but they also lend themselves to other projects integral to capitalism as we know it: concentrating wealth, degrading working conditions, codifying extraction and dispossession, externalizing harm, and evading accountability. As Chiang points out, "technology has become conflated with capitalism, which has in turn become conflated with the very notion of progress." Criticism of capitalism, after all, means you oppose technology, progress, and the improvement of human civilization! By this logic, enriching a cloistered network of reactionaries more interested in scientific racism than eradicating poverty will do more to benefit humanity than, say, disempowering said phrenologists and tackling poverty, housing, climate change, and so on without solutions that run through Silicon Valley and its latest frontier of artificial intelligence hype.
The dynamic here is a simple one: we are told a foreign body will reinforce the resilience of the host body, but misled about who the host is. Various financiers and monopolistic firms, not the general public, stand to gain from the introduction and proliferation of automated systems and algorithmic oversight.
This is on full display with Marc Andreessen’s Techno-Optimist swill, but his “Why AI Will Save the World” fits the mold here too. There, Andreessen argues that AI will "profoundly augment human intelligence" to improve every domain of human knowledge, raise our standard of living, and protect our way of life. AI will serve as a personalized tutor that helps children "maximize their potential with the machine version of infinite love." AI will be your "assistant/coach/mentor/trainer/advisor/therapist" with infinite patience, compassion, knowledge, and helpfulness. Every scientist, artist, engineer, entrepreneur, doctor, and caregiver will get an AI "assistant/collaborator/partner." It goes on and on like this:
Perhaps the most underestimated quality of AI is how humanizing [sic] it can be. AI art gives people who otherwise lack technical skills the freedom to create and share their artistic ideas. Talking to an empathetic AI friend really does improve their ability to handle adversity. And AI medical chatbots are already more empathetic than their human counterparts. Rather than making the world harsher and more mechanistic, infinitely patient and sympathetic AI will make the world warmer and nicer.
The other relevant part of this a16z-specific manifesto is its claim that whereas homegrown AI might strengthen our society, foreign AI will destabilize it. "The single greatest risk of AI is that China wins global AI dominance," Andreessen writes—an outcome that can only be averted, conveniently, if we avoid rules and regulations that might slow the pace of AI research and development (and the growth of the industry’s valuation).
This would be a scary picture if it were true. As Daron Acemoğlu and Simon Johnson lay out in their recent book Power and Progress, the risk of destabilization is less a function of whether China or America develops the technology than the power relations dictating the financing, designing, and development of various technologies. "The broad-based prosperity of the past was not the result of any automatic, guaranteed gains of technological progress," the pair write. "We are the beneficiaries of progress, mainly because our predecessors made the progress work for more people." The book—a survey of the past 1,000 years of technological development—is unambiguously clear about this: major inventions don't have some inherent nature that tends towards improving life or enriching humanity. More often than not, they worsen life and deepen inequality because elites hoard their benefits and tightly dictate the terms of technological deployment. So when technological breakthroughs do yield positive outcomes, it's because those with power are FORCED into an agreement that distributes resources more equally.
AI boosters and zealots, even the venture capitalist lemmings among them, understand this—which is why it informs their desperate screeds valorizing a business model that boils down to printing out lottery tickets for friends, then using the proceeds to subject various industries to shock therapy. Is it any surprise that billionaire investors insist any form of regulation or state intervention that threatens their ability to enrich and empower themselves (and their friends) is a “form of murder,” while funding technologies that do actually kill people in hospitals, in police stops, at the border, and at large as their emissions intensify climate change? Of course not.