Hello everyone, this week's newsletter will be a fun one because we've been flooded with so many stupid developments. Today's essay is on the subject of AI skepticism and the shallow limits of our debates about AI.
Some housekeeping: I've got a book review of AI Snake Oil out in the December issue of The New Republic and an essay on Wall Street & AI in the Fall issue of Boston Review. We've also released three episodes of This Machine Kills since the last newsletter. In Episode 381, we talk about the analytical weakness at the heart of the legal strategy deployed in the New York Times lawsuit against Microsoft and OpenAI. In Episode 382, we expand a bit on my review of AI Snake Oil and do a deeper dive into the political economy of automated bullshit. In Episode 383, we use Musk and Amazon's war on the NLRB to survey the right wing's war on the administrative state.
If you like what you read, consider subscribing to The Tech Bubble for free. If you really like what you read, then consider becoming a paid subscriber for $7 a month (that’s the price of a tallboy and some snacks) or $70 a year (the price of my eternal gratitude).
Casey Newton's latest essay on why we should be skeptical of artificial intelligence skeptics/critics is one of the laziest and most intellectually dishonest pieces I've read in a long while. I want to go through its five parts and then talk more generally about that mode of commentary:
I.
The core thrust of Newton's piece is that the entire range of AI discourse boils down to two groups: critics external to the firms and organizations directly working on or studying AI (they believe "AI is fake and sucks") and internal critics who understand that "AI is real and dangerous." Newton aligns himself with the latter and suggests that the former camp is not only incapable of recognizing genuine innovations made in this field, but risks leaving us blind to threats enabled by those advances.
It’s not clear why anyone should entertain Newton’s false dichotomy, especially if they spend more than a second thinking about this. There is obviously AI that is fake—our world is littered with Potemkin AI, or digital software constructed such that it obscures the humans performing supposedly automated labor. There is obviously AI that is real, such as the recommendation algorithms that structure our experiences on platforms. There is obviously AI that sucks and does not work: think of algorithmic products that claim to predict crime, determine whether pretrial bail is granted, or detect welfare fraud. There is obviously AI that is dangerous—such as products that let armed forces and intelligence agencies use drones to perform assassinations.
AI can be real and fake and suck and dangerous all at the same time or in different configurations. OpenAI, in pursuit of its first profits, uses fear of AI as a marketing strategy to secure partnerships and enterprise deals, or as Brian Merchant puts it: "if they want to survive the coming AI-led mass upheaval, they'd better climb aboard." Microsoft claims it is fighting climate change by accelerating fossil fuel extraction with generative AI products that generate tens of billions of dollars for oil companies and the tech giant. Israeli apartheid is powered by AI tech provided by Google, while Israel’s genocide of Palestinians is bolstered by automated systems that function as a “mass assassination factory.” The rise of the insurance tech industry has seen the advent of AI tools that promise healthcare revolutions but are part of a longer line of profit-driven, hyper-personalized reforms that have been degrading conditions. It’s hard to imagine a world where one could, with a serious face, say all of this boils down to: “fake and sucks” vs “real and dangerous.”
Let's zero in on a specific example to really drive the point home.
Criticisms of AI systems integrated into Israeli apartheid and genocide point out how the automated systems generate kill lists, but also how they obscure accountability—they offer post hoc justification for war crimes and human rights violations that Israel was already eager to commit. The system is real and dangerous (integrated into an expansive man-machine decision-making apparatus aimed at assassination), but also fake and sucks (purportedly built to help identify military targets, it quickly ran out of targets and the campaign shifted to indiscriminately bombing everything).
At best, an attempt to flatten AI criticism in any of these areas to “fake and sucks” vs “real and dangerous” would be idiotic. At worst, it would be deeply intellectually dishonest.
II.
So right off the bat, critically engaging with the core premise of Newton's piece reveals how useless it is for describing reality, let alone for thinking coherently about AI skepticism. Nonetheless, let's try to use it in good faith and see where it leads us.
Section II of Newton’s essay articulates why he believes AI is real & dangerous by defining some key terms and offering some evidence to support his argument, all of which also shed some light on why he has settled on this criticism binary.
First, when Newton says “AI” he means “generative AI and the LLMs that power them — the technology that underpins ChatGPT, Gemini, Claude, and all the rest.” When Newton says “real” he means “a genuine innovation that will likely sustain a large and profitable industry.”
Newton points to the user base of generative chatbots (300 million weekly ChatGPT users), the money tech giants are spending on this technology (a planned quarter-trillion dollars), and ways it's being used today that "feel surprising and new." Let’s dive into each.
It's not clear why Newton believes a large user base means it'll sustain a large and profitable industry. Even though OpenAI's ChatGPT is one of the largest consumer products on the Internet, it is burning through billions with no profit in sight. Ed Zitron's dogged analysis of OpenAI's financials is worth considering: he argues that the company is not only unprofitable, unsustainable, and untenable in its current form, but would also need to massively expand its revenue, massively cut costs that have only grown, pursue massive price hikes, realize a significant technological breakthrough in the form of an exponentially better model that costs less to train, and continue to sustain explosive growth that's already stumbling. None of this has anything to do with whether AI is “real” or “fake,” yet concerns raised about generative AI that touch on burn rate, energy inputs, model scaling, and training data are summarily dismissed by Newton as whining about AI being “fake.”
Let's close our eyes then, click our heels, and go to a land where a company's finances don't matter. Uber is an instructive example here. For years, reporters and commentators who did not bother to closely examine Uber's working conditions or financial health or lobbying efforts simply assumed it would grow into profits like Amazon through network effects. As economist Hubert Horan documented in his 33-part series on Uber's business model, ride-hail's unit economics are fundamentally hostile to sustainable profits. To square this circle, Uber took the tens of billions of dollars it received from venture capitalists and spent those subsidies on: exploiting legal loopholes to externalize and minimize labor costs, writing and passing laws to expand said loopholes, pursuing monopolies or duopolies, generating “academic” research that doubled as corporate PR, using algorithmic discrimination to cut driver pay while hiking fares, and abandoning a litany of sci-fi moonshot projects that were initially key parts of its value proposition to investors, regulators, and the public.
The Uber saga speaks directly to Newton's second point: look at how the companies are spending their money if you want to know what's real. Yes, tech firms and the VCs backing them often get things wrong and lose tens or hundreds of billions of dollars chasing unicorns like crypto, web3, the metaverse, and VR/AR—but this time, Newton assures us, they're right. The size of the investment is "a signal that they have already seen something real." Is that true?
Uber’s size wasn’t an indicator of whether it would usher in a sustainable and profitable industry. But Uber used its size as a signal to attract the capital necessary to reshape labor markets, public transit systems, regulatory frameworks, consumer behavior, and urban governance into forms more hospitable for a business model that was previously illegal, unprofitable, unsustainable, and untenable. Similarly, Zitron’s close analysis of OpenAI’s finances suggests that—barring a series of increasingly unlikely maneuvers—OpenAI (and other genAI firms) will have to go the Uber route to survive: use Smaugian hoards of capital to realize legal and political reforms that force markets, consumers, competitors, clients, and governments into forms that can accommodate previously illegal, unprofitable, unsustainable, and untenable business models.
This brings us to the third plank of “real” AI: present-day use cases. Though Newton promised earlier to only use “AI” to refer to LLMs and generative tools, he quickly abandons that pretense when offering examples of why AI is real. His list of evidence includes some advances that center large language models and the generative chatbots built on them, but many others that either don't seem to use generative AI or use it in a limited fashion that has nothing to do with the products he was defending earlier. To make matters worse, the list is not great!
Thus far, it seems like Newton’s insistence on AI being “real” comes at the cost of anything resembling a material analysis. Does Newton’s framework offer any insight into why so many people are using generative AI products, into what workplaces are rolling them out and why, into what firms are utilizing them and why? No. Does it offer any insight into what the aforementioned quarter-trillion dollars is being spent on and why? No. Does his focus on whether critics believe AI is “real” or “fake” tell us anything about its resource intensity, technical limitations, potential applications, or harmful impacts? No. Thus far, we are just being sold something indistinguishable from corporate copy.
III. & IV.
These two sections are where Newton articulates what believing “AI is fake and sucks” actually means. We've gone over how vacuous the false dichotomy is and how empty the “real” signifier is, but maybe there is something salvageable in “fake and sucks.” In Newton's own words, he envisions the camp as arguing something like:
Large language models built with transformers are not technically capable of creating superintelligence, because they are predictive in nature and do not understand concepts in the way that human beings do.
Efforts to improve LLMs by increasing their model size, the amount of data they are trained on, and the computing power that goes into them have begun to see diminishing returns.
These limits are permanent, due to the inherent flaws of the approach, and AI companies might never find a way around them.
Silicon Valley will therefore probably never recoup its investment on AI, because creating it has been too expensive, and the products will never be good enough for most people to pay for them.
From here, there are two more caveats. First:
There is a fourth, rarely stated conclusion to be drawn from the above, which goes something like: Therefore, superintelligence is unlikely to arrive any time soon, if ever. LLMs are a Silicon Valley folly like so many others, and will soon go the way of NFTs and DAOs.
Second:
This is a view that I have come to associate with Gary Marcus. Marcus, a professor emeritus of psychology and neural science at New York University, sold a machine learning company to Uber in 2016. More recently, he has gained prominence by telling anyone who will listen that AI is “wildly overhyped,” and “will soon flame out.” (A year previously he had said “the whole generative AI field, at least at current valuations, could come to a fairly swift end.”) … Marcus doesn’t say that AI is fake and sucks, exactly. But his arguments are extremely useful to those who believe that AI is fake and sucks, because they give it academic credentials and a sheen of empirical rigor. And that has made him worth reading for me as I attempt to come to my own understanding of AI.
Finally, we have something resembling an argument. Newton homes in on Marcus here to suggest that he's too dour on AI's prospects, dismisses the ubiquity of these products, hyper-fixates on its limitations, and ignores what it can do—an issue, in Newton's telling, because generative AI is increasingly being used to disrupt critical infrastructure at firms like Amazon. "They're staring at the floor of AI's current abilities, while each day the actual practitioners are successfully raising the ceiling."
Staring at the floor, then, means constantly pointing out a model's flaws but ignoring each iteration's new capabilities as the models grow larger and more complex. Scaling the models, increasing their size and the amount of data used to train them, would always yield improvements and unlock new use cases. Until it didn't.
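To make "diminishing returns" concrete, here is a minimal sketch of the kind of power-law loss curve the scaling-law literature describes. The constants below are toy numbers I made up for illustration, not values fitted to any real model; the only thing the sketch shows is the shape of the curve, in which each tenfold increase in training data buys a smaller absolute improvement than the last.

```python
# Toy illustration of power-law scaling and diminishing returns.
# All constants are invented for illustration; they are not fitted to any real model.

def toy_loss(params: float, tokens: float) -> float:
    """Chinchilla-style loss curve: an irreducible term plus two power-law terms."""
    E, A, B, alpha, beta = 1.7, 400.0, 410.0, 0.34, 0.28
    return E + A / params**alpha + B / tokens**beta

params = 7e10  # hold model size fixed at a hypothetical 70B parameters
for tokens in [1e11, 1e12, 1e13, 1e14]:
    # How much did the last 10x increase in training data improve the loss?
    gain = toy_loss(params, tokens / 10) - toy_loss(params, tokens)
    print(f"{tokens:.0e} tokens: loss {toy_loss(params, tokens):.3f} "
          f"(gain from the last 10x of data: {gain:.3f})")
```

Nothing here tells you where any particular model's curve flattens out; that is an empirical question. But it does show why "exponentially more data for smaller and smaller gains" is exactly what this family of curves predicts.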
When asked on Bluesky to cite one (1) external critic he saw as being in the “fake and sucks” camp, Newton cited two: Amy Castor and David Gerard, via a blog post they co-authored documenting how numerous large language models are seeing diminishing improvements despite exponentially larger sets of training data. These declining returns suggest chatbots may be running into a scaling problem, and yet the technology firms offering these products insist each new model will be a massive improvement over the last (at the same time that they are searching for funding and revenues). To point this out, then, is to insist “AI is fake and sucks.”
In a lengthy response, Marcus makes clear that Newton not only consistently misrepresents what Marcus believes but also doesn't seem to understand basic details about AI that are crucial to the “real/dangerous” v “fake/sucks” argument. Newton proudly considers himself part of the "real and dangerous" crowd but, as Marcus points out, only vaguely gestures at dangers while ignoring those that already exist: "Covert racism? Deepfakes? Propaganda? Discrimination in employment, insurance, and housing?" Marcus also points out that the dichotomy comes with a glaring analytical blind spot: "thinking that if an AI is stupid (or overrated) it can't be dangerous." Newton charges the “fake and sucks” camp with a blind spot that overlooks danger, but Newton's is arguably more dangerous: his immaterial analysis would likely overlook AI-powered deepfakes, propaganda, fossil fuel extraction, insurance tech, and assassination drones, so long as chatbots were being used by hundreds of millions of people.
So where does this leave us? The real/fake and sucks/dangerous framework is at best useless: it overlooks financials, harms, impacts, political economy, and technical detail in favor of press releases and product launches. It bundles any compute-heavy technology into “AI” as part of a bid to defend generative AI. It misrepresents the views of the sole critic named in its camp (Gary Marcus). If you are interested in phony comfort, this is the framework to use—it is not one for anyone actually skeptical of artificial intelligence.
V.
This final section is about what it means to "raise the ceiling" or improve the capabilities of generative AI. Here, Newton insists that OpenAI's o1 is "another possible step toward building superintelligence" but worries about how its price hike will be interpreted.
My fear, though, will be that “AI is fake and sucks” people will see a $200 version of ChatGPT and see only desperation: a cynical effort to generate more revenue to keep the grift going a few more months until the bottom drops out. And they will continue to take a kind of phony comfort in the idea that all of this will disappear from view in the next few months, possibly forever.
In reality, I suspect that many people will be happy to pay OpenAI $200 or more to help them code faster, or solve complicated problems of math and science, or whatever else o1 turns out to excel at. And when the open-source world catches up, and anyone can download a model like that onto their laptop, I fear for the harms that could come.
This is not just the height of Newton's analysis; it is the limit of real/dangerous v. fake/sucks. Newton's framework seems to view any attempt at understanding OpenAI's financials with suspicion, and instead insists analysis should follow the most vulgar input possible: narratives about consumer behavior. That may be fine if you are angling for a job at a company's PR shop or an advertising agency, but one would hope a journalist or commentator might show some capacity for curiosity and ask even one question about why and how certain products are being offered, and how those offerings fit into the company's business model (or lack thereof).
To this point, I would like to turn to an infinitely more useful piece of writing by computer scientist Ali Alkhatib, one that does the things Newton's piece fails to do: define what artificial intelligence is, think through its development and application, and offer a framework for understanding why certain technologies are being pursued and deployed. I'll quote the relevant section below:
I think we should shed the idea that AI is a technological artifact with political features and recognize it as a political artifact through and through. AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power. Projects that claim to “democratize” AI routinely conflate “democratization” with “commodification”. Even open-source AI projects often borrow from libertarian ideologies to help manufacture little fiefdoms.
This way of thinking about AI (as a political project that happens to be implemented technologically in myriad ways that are inconsequential to identifying the overarching project as “AI”) brings the discipline - reaching at least as far back as the 1950s and 60s, drenched in blood from military funding - into focus as part of the same continuous tradition.
Defining AI along political and ideological language allows us to think about things we experience and recognize productively as AI, without needing the self-serving supervision of computer scientists to allow or direct our collective work. We can recognize, based on our own knowledge and experience as people who deal with these systems, what’s part of this overarching project of disempowerment by the way that it renders autonomy farther away from us, by the way that it alienates our authority on the subjects of our own expertise.
This framework sensitizes us to “small” systems that cause tremendous harm because of the settings in which they’re placed and the authority people place upon them; and it inoculates us against fixations on things like regulating systems just because they happened to use 10^26 floating point calculations in training - an arbitrary threshold, denoting nothing in particular, beneath which actors could (and do) cause monumental harms already, today.

Okay, that was a bit of a post. Whether you subscribe to this way of defining AI or you totally reject it, I hope I’ve made it more salient to you that you can judge frameworks entirely according to how well it helps you navigate a space you’re trying to navigate. You can reject a definition that isn’t helping you, and I would encourage you to reject mine just as readily as I rejected AI Snake Oil’s if it’s not serving your purposes.
I’ll wrap it up here; if you’re benefiting from some particular way of drawing a boundary around and thinking about AI, I’d really like to hear about it.
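Alkhatib's point about that 10^26 figure is easy to check with a rough back-of-the-envelope sketch. Using the standard approximation that training compute is roughly 6 × parameters × training tokens (the parameter and token counts below are hypothetical round numbers of my choosing, not any particular company's model), a system of the scale that is already widely deployed sits well below the threshold:

```python
# Back-of-the-envelope training compute, using the common ~6 * N * D approximation.
# The parameter and token counts are hypothetical round numbers for illustration.

THRESHOLD_FLOPS = 1e26  # the regulatory threshold Alkhatib refers to

params = 70e9    # a hypothetical 70-billion-parameter model
tokens = 15e12   # trained on 15 trillion tokens
train_flops = 6 * params * tokens

print(f"Estimated training compute: {train_flops:.2e} FLOPs")
print(f"Fraction of the 1e26 threshold: {train_flops / THRESHOLD_FLOPS:.1%}")
# ~6.3e24 FLOPs, roughly 6% of the threshold, i.e. well below it,
# yet models of this scale are already deployed in the settings Alkhatib describes.
```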
If there is one thing you take away from my essay or Alkhatib’s, it should be this: keep a hand on your wallet when someone starts offering simple taxonomies for understanding artificial intelligence. These technologies are complex: their origins, their development, the motivations driving their financing, the political projects they’re connected to, the products they’re integrated into. Anyone telling you that criticism slots neatly into “real” or “fake” or “sucks” or “dangerous” is at best a useful idiot desperate for a narrative that will provide some phony comfort. If you’re interested in understanding the world, ignore such fools.