Does OpenAI's latest marketing stunt matter?
On distractions, intentions, aesthetics, and fascism.

At the end of the day, OpenAI’s latest viral marketing stunt—a ChatGPT 4o update that allows users to generate images, including ones in a style that barely resembles that of animation company Studio Ghibli—is a distraction. Let me explain.
Yes, this stuff is an “insult to art itself,” as Brian Merchant points out in his invocation of Studio Ghibli co-founder Hayao Miyazaki’s earlier comments on early AI art tools (“I strongly feel that this is an insult to life itself”). Firms hawking generative AI are dead-set on eviscerating the legal barriers obstructing their god-given right to "ingest copyrighted works into their training data," and they will vomit up any answer that resembles something reasonable—much like their products. "If you do not let us do this, China will surpass us" is an increasingly popular and successful one.
As a result, artists are left shit out of luck. As Merchant writes:
OpenAI and the other AI giants are indeed eating away at the livelihoods and dignity of working artists, and this devouring, appropriating, and automation of the production of art, of culture, at a scale truly never seen before, should not be underestimated as a menace—and it is being experienced as such by working artists, right now.
On top of the exploitation of artists and their livelihoods, we get to consider what this does to our role in cultural production. How does being flooded with slop that does not reproduce but does resemble significant cultural products—because it is superficially trained on the form of various works of art, on works inspired by or derived from them, and on commentary about them—affect our own internal response to those same pieces of art, even when presented in their original, slop-free context?
Erik Hoel chimes in here, warning that we are entering "the semantic apocalypse," heading down the path of "semantic satiation," whereby a word loses its meaning if it is said over and over and over and over and over again:
The semantic apocalypse heralded by AI is a kind of semantic satiation at a cultural level. For imitation, which is what these models ultimately do best, is a form of repetition. Repetition at a mass scale. Ghibli. Ghibli. Ghibli. Repetition close enough in concept space. Ghibli. Ghibli. Doesn’t have to be a perfect copy to trigger the effect. Ghebli. Ghebli. Ghebli. Ghibli. Ghebli. Ghibli. And so art—all of it, I mean, the entire human artistic endeavor—becomes a thing satiated, stripped of meaning, pure syntax.
This is what I fear most about AI, at least in the immediate future. Not some superintelligence that eats the world (it can’t even beat Pokémon yet, a game many of us conquered at ten). Rather, a less noticeable apocalypse. Culture following the same collapse as community on the back of a whirring compute surplus of imitative power provided by Silicon Valley. An oversupply that satiates us at a cultural level, until we become divorced from the semantic meaning and see only the cheap bones of its structure. Once exposed, it’s a thing you have no relation to, really. Just pixels. Just syllables. In some order, yes. But who cares?
So on the cultural front it is pretty bleak. AI firms are creating tools that will be used to produce good-enough AI slop—stuff that’s eerily reminiscent of cultural works we’re inspired by or fond of. The cultural works and products stolen to fill the datasets that train LLMs will be slowly sapped of meaning and value, thanks to how easy the slop is to produce and how well-capitalized its producers are.
I agree with Max Read’s take that it's "hard to be exercised over something so obviously ephemeral," but, as he points out, this is symptomatic of a larger debate: the emotional weight and reputation of Miyazaki's and Studio Ghibli's craft excite the AI shills generating these images, who insist they’re democratizing creativity, while frustrating critics who treasure what that art represents and who are (rightfully) skeptical of the tech sector’s promise that a new technology will democratize anything.
To take one small example: fintech & crypto & web3 have all been hailed as the future and have made similar promises about democratizing hard-to-enter spaces to the benefit of all. They have all amounted to little more than schemes of varying complexity to steal from investors, defraud the public, undermine consumer protections that declare a certain avenue of profit-seeking illegal (and immoral), and use this wealth transfer to legitimize parasitic enterprises that should not exist.
Going back to Read: he builds on Merchant's essay from last week with Walter Benjamin's 1935 essay "The Work of Art in the Age of Mechanical Reproduction," which reflects on how technical tools that accelerate the production of art rob us of something of its essence:
A.I., one could argue, enacts on particular artistic styles what photography and lithography enacted on particular works of painting or sculpture, rendering those styles endlessly reproducible, de-aura-fied, no longer subject to the “criterion of authenticity” from which they previously derived value.
But (without necessarily wanting to endorse this argument re: A.I. and style) we might also note that Benjamin’s main point was not so much to endlessly bemoan the withering of aura but to explore the effects of mechanical reproduction on the “social function” of art. The toothpaste is kind of out of the tube on this one, in Benjamin’s time as in ours, and there are reasons to welcome the freeing of art from its “parasitic subservience to ritual.” When art can no longer automatically obtain significance from ritual—authenticity, tradition, or even ownership—its value must be found elsewhere. “Instead of being founded on ritual,” he writes, the social function of art “is based on a different practice: politics.” What matters with art isn’t its past (where and who it comes from), but its future: what it does.
And as Read remarks, this leads us to the question: what are the politics of AI slop/art? Read quotes Benjamin’s essay and drives home the point that it is fascism which "sees its salvation in granting expression to the masses—but on no account granting them rights" and under which "self-alienation has reached the point where it can experience its own annihilation as a supreme aesthetic pleasure." This is all well and good, but the use of generative AI for images or writing is ultimately a distraction.

Here's Rob Horning writing about the last viral marketing stunt from OpenAI, a piece of "creative writing" from an unreleased product:
Hari Kunzru is right to point out that “the ‘can machines do creative writing’ thing is mostly a distraction from the use of the machines to go through text and images to cancel grants and put people on deportation lists.” So the best way to understand OpenAI’s recent claims to have trained a new model that, according to CEO Sam Altman, is “good at creative writing” and “gets the vibe” of “metafiction” is that the company is running interference for the authoritarians using similar technology to automate surveillance, circumvent human scruples, and do away with due process.
What are the use cases for artificial intelligence that seem to draw the most excitement (and potential profits)?
In December, The Financial Times reported that Palantir and Anduril were in talks with a dozen other firms (including SpaceX, OpenAI, "autonomous-ship builder" Saronic, and AI data labeler Scale AI) to create a consortium that would jointly bid for Pentagon contracts.
Anduril is an arms dealer (sorry, I mean an “AI weapons manufacturer”) and Palantir is a surveillance firm (sorry, I mean an “AI-driven analytics firm”). These two tech companies have made it clear they intend to profit from death and misery by selling tools that power deportations, remote assassinations, and the privatization (sorry, I mean “digitization and modernization”) of public services and government agencies. Anduril and Palantir announced a partnership to use Pentagon data for AI training, and OpenAI completed its slow but steady military pivot with the announcement of an Anduril partnership to build military AI products—though OpenAI’s hire of Palantir’s Chief Information Security Officer might’ve signaled the pivot had been completed much earlier.
The State Department is claiming to use artificial intelligence to revoke the visas of foreign students who’ve protested Israel’s ongoing genocide in Gaza. The United States has been and is using artificial intelligence to try to perfect reliable tools of repression, tools it has returned to over the years in bids to crush dissent and terrorize minorities. Big Tech firms have enthusiastically bolstered Israeli apartheid and the genocide of Palestinians with tools powered by artificial intelligence. Israel created a “mass assassination factory” featuring multiple systems powered by artificial intelligence (“The Gospel,” “Alchemist,” and “Depth of Wisdom,” as well as “Lavender” and “Where’s Daddy”) that were used to target civilian infrastructure and kill civilians as part of a plan to cause as much death and misery as possible.

Gareth Watkins’s brilliant essay (“AI: The New Fascist Aesthetics”) spends some time with the art to flesh out some of the more nebulous connections between firms enthusiastically stealing art to generate slop and fascists deploying it to deport and assassinate civilians, but its focus is on the forces that create and sustain this technology and the main ways in which it is deployed:
AI is a cruel technology. It replaces workers, devours millions of gallons of water, vomits CO2 into the atmosphere, propagandises exclusively for the worst ideologies, and fills the world with more ugliness and stupidity. Cruelty is the central tenet of right wing ideology. It is at the heart of everything they do. They are now quite willing to lose money or their lives in order to make the world a crueller place, and AI is a part of this – a mad rush to make a machine god that will liberate capital from labour for good. (This is no exaggeration: there is a lineage from OpenAI’s senior management back to the Lesswrong blog, originator of the concept of Roko’s Basilisk.) Moreso even than cryptocurrency, AI is entirely nihilistic, with zero redeeming qualities. It is a blight upon the world, and it will take decades to clear up the mountains of slop it has generated in the past two or three years.
AI art should be thought of as a Trojan horse. There is much more interest and excitement in using artificial intelligence to predict human behavior, surveil groups of people, sharpen discrimination with behavioral insights, synthesize and securitize new assets, innovate new forms of dispossession and extraction, terrorize migrants and dissidents, regiment work and disempower workers, and pursue a host of other noxious, deleterious social ends. Another place to turn when thinking about this is Dan McQuillan's book Resisting AI: An Anti-fascist Approach to Artificial Intelligence:
The struggle against the fascization of AI precedes AI itself. It’s not that AI first comes into existence and we then have to tackle its dodgy politics from scratch, but rather that AI is already part of the system’s ongoing violent response to the autonomous activity of ordinary people. Instead of having to invent a plethora of new remedial measures, we can build on the long history of community solidarity generated by people’s resistance to exclusion and enclosure.
The very generalizability of AI and the way it comes to bear on different communities and constituencies creates the potential for this resistance to cut across race, gender, sexuality, disability and other forms of demographic division. If the whole of society becomes subsumed by algorithmically ordered relations and enrolled in machinic optimization, then society as a whole also becomes a site for contesting the imposition of those power relations. AI’s generalizability and its intensification of social crisis creates a position from which to question the totality of social relations.
So does OpenAI’s marketing stunt matter? That’s probably the wrong way to think about it. AI firms are interested in developing tools and marketing strategies that revolve around the allure of AGI—around a stillborn god that will transform large swaths of society into excessively profitable enterprises and incredibly efficient operations. Think of it as a desperate attempt to defend capitalism: to preserve the status quo while purging recent reforms that purportedly undermine it (democracy, liberalism, feminism, environmentalism, etc.). Sam Altman, OpenAI’s co-founder, has repeatedly called for “a new social contract,” though most recently he has insisted the “AI revolution” will force the issue on account of “how powerful we expect [AGI] to be.” It doesn’t take much to imagine that the new social contract will be a nightmarish exterminist future where AI powers surveillance, discipline, control, and extraction instead of “value creation” for the whole of humanity.
The subsuming of art springs out of the defense of capitalism—more and more will have to be scavenged and cannibalized to sustain the status quo and somehow, someday, realize this supposedly much more profitable horizon. The ascendance of fascism comes with the purge—the attempt to roll back institutions and victories seen as shackles on capitalism’s ability to deliver prosperity (and as limiters on the inordinate power and privilege of an unimaginably pampered and cloistered elite).
Both are part and parcel of what’s going on, but one project is objectively more dangerous (and ambitious) than the other. In that way, then, all of this is a distraction. Pay no mind to the unmarked vans and plainclothes officers, the censorship and disappearances, the mass deportations and drone assassinations, the killing of civilians and ongoing genocides. We’re just a smol AI firm democratizing art!