Luigi, insurance tech, China, AI, plus some more sci-fi and music recommendations
A roundup of what I'm reading, watching, and listening to.
Welcome, valued Consumer, to the Tech Bubble Dispatch #3! This week’s recommendations: we've got some stuff on the UnitedHealthcare shooter, insurance tech, China, artificial intelligence, plus some book and music recommendations. This is a long-ass roundup; as always, there’s a table of contents for you to quickly navigate if you just want a particular section or two.
Some housekeeping: Casey Newton wrote a (paywalled) response to my essay (and Gary Marcus’s) and it is surprisingly underwhelming! The limited engagement with our criticisms comes in the form of, well, ignoring our criticisms, doubling down on his framing, and pretending I am a Marxist labor organizer rambling about morality (though, to be fair, my essay is riddled with insults aimed at him & his argument). I expected more despite his proximity to the Kara Swisher School of Journalism, but alas. Fellow tech critic Paris Marx, who shares my general contempt for tech journalism, just published a wonderful contribution to this entire debate—be sure to read it! AND Gary Marcus circled back to stomp out Casey Newton one more time.
Inspiring people to crash out feels about as good as doing it yourself!
If you like the recommendations below and want more since they'll be paywalled eventually, then please subscribe to the newsletter for $7 a month (the price of a tallboy and some snacks) or $70 a year (the price of my undying love). Above all else, subscriptions keep me warm and fed as I try to devote more of my time to the newsletter! If you've already subscribed, however, I'm kissing you on the forehead. And if you have any suggestions, recommendations, or questions, shoot me an email at edwardongwesojr@gmail.com!!!
UnitedHealthcare Shooter
Luigi Mangione has been revealed as the UnitedHealthcare CEO shooter, and his capture has only intensified a series of debates about his politics, surveillance, healthcare, non-violence, the public’s reaction, and What Is To Be Done? I am still forming my thoughts on this, largely because I’m more interested in why it’s taken so long for someone to (successfully) go after such a prominent target and whether we’d see anything like this in other areas (e.g. pharma, fossil fuels, Big Agriculture, Big Tech). That being said, I’ve read a lot of interesting essays, takes, notes, and analyses based on this that I wanted to share with you guys and hope you find insightful.
First off, some pieces from before we knew the shooter’s identity!
I think Taylor Lorenz’s “Why ‘we’ want insurance executives dead” was the best essay clearly articulating why people were celebrating the United Healthcare CEO’s assassination and why various talking heads were incapable of understanding this rage:
As fellow journalist Ken Klippenstein posted, "No shit murder is bad. The [commentary and jokes] about the United CEO aren’t really about him; they’re about the rapacious healthcare system he personified and which Americans feel deep pain and humiliation about."
This is what the media fails to understand. They don't see insurance CEOs who sanction the deaths of thousands of innocent people a year by denying them coverage, often coverage doctors deem medically necessary, as violent.
…
Instead of centering the stories of those harmed by UnitedHealth and the very real outrage that most Americans feel about the way the healthcare system is run today, the media is publishing a tidal wave of breathless articles about the loss of "civility" and "respect" online.
An hour before the gunman was captured and identified, Camille Sojit Pejcha published an essay connecting the burgeoning fandom building around Luigi to a desperate plea from the public. It made me think of one of my favorite scenes in First Reformed, where Reverend Ernst Toller is spiraling about the church’s silence on climate change and bursts out “Well somebody has to do something! It's the Earth that hangs in the balance.” Camille’s piece:
I’m inclined to believe that the public sentiment surrounding the gunman is evidence not just of our desire for him, but our desire for a figurehead: a modern-day folk hero who expresses our collective frustration with a broken, sometimes corrupt American healthcare system. We’re fed up, disenfranchised, and sick of the status quo. And if the face of our populist rage over a corrupt industry just so happens to have a stellar jawline—well, who’s going to complain about that?
Since Luigi was identified as the alleged shooter, the internet has been trawling his deep digital footprint for clues about his political valence, who to blame, who to cheer, and What It All Means. Here are some good post-reveal pieces!
My old Motherboard boss Jason Koebler wrote a great meta-story on the "ritualized rifling through old internet accounts" that follows in the wake of news events like this one (Also please subscribe to 404 Media, which is far and away the best tech journalism outlet on the internet—it’s also worker owned!)
I do not plan on ever doing a murder or anything that ever puts me into the news in the same way Mangione is now. But whenever something like this happens I find myself thinking about my own digital footprint, which is extremely vast and in many cases extremely outdated. I do not think that my high school Live Journal or Facebook posts that I don’t even remember says anything about who I am as a person now or why I do literally anything that I do. But if you are good enough at Googling me you can probably find accounts I have and accounts I didn’t even know that I had and use it to build some sort of narrative about my life. I have met thousands of people in my life and plausibly any journalist could get in touch with one of them within a few moments and maybe they would say something about me—does that reflect who I am or why I do anything that I do? Probably not!
That being said, Max Read did a great job looking at Luigi Mangione’s recent digital footprint to get a sense of what our assassin was concretely interested in (his media consumption, his politics, etc.) in hopes of offering a roadmap that avoids the mistakes and temptations laid out by Jason’s essay.
This type of guy—the account Tolstoybb calls Mangione a “birthrates center-right pop science centrist type”—might be sort of boring to talk to at a party. But the thing is, he could go to a party or hold down a job and talk about his beliefs without raising any red flags. In other words, he doesn’t sound particularly strange, in the sense of foreign to experience. Nor does he come across as “extreme”: Mangione’s beliefs, such as we can interpret them through his social media accounts, are pretty normal for a man of his age and background; they’re not those of the median American voter, precisely, but I suspect they’re pretty close to the views of the median 20-something white male tech worker. (Even his generous review of Ted Kaczynski’s manifesto is not particularly outré for people from his general cohort.) It’s the kind of worldview a curious but indiscriminate person might assemble out of the podcasts they listen to on their commute or during their workout in a particularly alienating period of life (i.e., your 20s).
In “The Tinkerer” John Ganz connects Mangione and his politics to a sacred American tradition: “that of the vigilante gunman” but adds that there is something else here:
What we are facing in the United States today is an overlapping crisis of institutions: social problems that stubbornly resist normal political solutions and a sense that the old system of meritocratic recognition is either broken or, at least, deeply unsatisfying for the young. Like many well-off young people, he comes from a bourgeois family that owns property but decided instead of business to pursue a high-status profession that’s supposed to be about rationally improving the world, only to find that idealistic path blocked in various frustrating ways. Even for the “normal” and relatively successful something feels terribly broken and alienating. When you combine these things you will get tragic and desperate attempts at heroism. The crowd already loves it.
In his newsletter, Brian Merchant builds on this to survey a host of concerning developments as the wealthy and elite build higher and higher walls for their fortresses, affix turrets along the perimeter, and commit themselves to even more destructive extraction/exploitation outside those limits.
What’s made all the more clear by the outpouring of relative solidarity for a right-leaning tech guy who gunned down an executive in broad daylight is that we are sitting on not just one but an expansive and overlapping constellation of powder kegs right now. The Trump administration has only promised to further calcify the state’s anti-democratic tendencies, restrict its ability to respond to the populace in ways that might ameliorate suffering, and embark on brand new authoritarian projects like orchestrating mass deportation, firing competent bureaucrats to replace them with loyalists, and gutting the Department of Education; significant rage generators all.
There is murderous rage, there is vandalistic rage, there is wanton rage, and, of course, there is the sort that arose after the killing of George Floyd; righteous rage, with millions pouring into the streets to pointedly confront legacies of injustice. But too little changed, again. The rich merely have better and more weaponized fortresses, and more riches, and the nonrich have more of their rage.
And our last entry comes from Sam Adler-Bell in New York Mag:
The shooter claimed this prerogative for himself without a corporate bureaucracy, an algorithm, or a system of laws to authorize the privilege. It is a terrible thing to destroy a human life for the sake of propaganda, and a terrible thing to do so for the sake of profit. (There is hubris in both.) We will not be able to disrupt our metabolism for social suffering by indulging our appetite for political violence; we can’t kill our way out of a society premised on human disposability. But it must be said that violence finds more purchase, seduces more persuasively, in the absence of other obvious and meaningful pathways for registering discontent. Americans are dying, going bankrupt, and wallowing in despair under a health-care system that prioritizes the profits of some over the basic needs of others: Where should they turn? Who is listening?
Insurance Tech
Over the years, my podcast co-host (Jathan Sadowski) and his studies of the insurance industry’s integration of various digital technologies have revealed to me a land of eldritch horrors. In the coming weeks we will see people attempt to quibble about how much Americans actually spend on healthcare, who is actually to blame for high prices, and what fixes would actually solve it. One thing almost none of these debates will touch on is how the integration of surveillance tech degrades care or humiliates patients, or the threat it poses under the guise of Transforming Healthcare.
In this section: I’ll share a series of papers that illuminate how much surveillance is being integrated into health/life insurance in the name of “behavioral control” to ostensibly ensure everyone is healthier and costs less to insure, but in reality to introduce a brutal form of profit maximization targeting vulnerable populations.
The first piece of Jathan’s I saw was a 2020 essay in Real Life (RIP, but please subscribe to founding editor Rob Horning’s excellent newsletter: Internal Exile) titled “Draining the Risk Pool.” I’ll quote Jathan at length to pique your interest in the next few essays:
With an expansive network of stuff recording how you behave across virtually every realm of daily life, your insurer aims to track compliance perfectly and handle claims automatically through agents powered by artificial intelligence. Enforcement, from their point of view, will be unbiased and certain; there can be no confusion or disagreement about who owes what to whom when they can just check the data. By this logic, fiduciary duty compels for-profit insurance companies to discipline policyholders and diminish the horizon of possibility based on what the stats have shown is the most profitable life to lead — and, by extension, kind of person to be. That’s if you’re lucky enough to be deemed worthy of insurance at all.
Insurance has always been a kind of mathematical morality. Embedded in the calculations, models, contracts, and other tools of the actuarial trade are judgments about who is responsible, what things are worth, how society should be organized. The promise of insurtech is that these judgments can be made objectively about and applied universally to every individual. Yet this individualized approach is a redefinition of what is “fair”: rather than spreading risks across a population to hedge against the vagaries of life, the data-driven system promotes the sense that no one should bear any expense or risk for the benefit of the collective. From this perspective, insurers are right, even obligated, to treat different people in different ways based on predictions of their future behaviors. Technology, in turn, should be focused on this discriminatory task rather than being directed toward extending better coverage to broader populations and reducing the collective insecurity that impedes a flourishing society.
The idiosyncratic view of fairness underlying this worldview readily justifies a range of perverse consequences. If there’s a great divide between the premiums people pay that happens to mirror other structural and systemic inequalities in society, then at least it was implemented “fairly.” If some people are saddled with restrictive policy conditions and others receive special treatment, then at least it’s in the name of “fairness.” If an underclass of the underinsured and uninsurable is created, then at least it’s the result of a more fair society.
Later that year, Big Data & Society published a special issue titled "The Personalization of Insurance" that looked at how insurance practices were changing: what technologies, methods, markets, and firms were involved in the integration of data and surveillance. The whole issue is of interest, but one place to start is an article by Alberto Cevolini and Elena Esposito titled "From pool to profile: Social consequences of algorithmic prediction in insurance,” given how prominently algorithmic discrimination figures into United Healthcare’s business model. The abstract:
The use of algorithmic prediction in insurance is regarded as the beginning of a new era, because it promises to personalise insurance policies and premiums on the basis of individual behaviour and level of risk. The core idea is that the price of the policy would no longer refer to the calculated uncertainty of a pool of policyholders, with the consequence that everyone would have to pay only for her real exposure to risk. For insurance, however, uncertainty is not only a problem – shared uncertainty is a resource. The availability of individual risk information could undermine the principle of risk-pooling and risk-spreading on which insurance is based. The article examines this disruptive change first by exploring the possible consequences of the use of predictive algorithms to set insurance premiums. Will it endanger the principle of mutualisation of risks, producing new forms of discrimination and exclusion from coverage? In a second step, we analyse how the relationship between the insurer and the policyholder changes when the customer knows that the company has voluminous, and continuously updated, data about her real behaviour.
Next are three papers from Jathan that’ve been very illuminating for me and are great places to start if you’re interested. First up is a case study of UK-based behavioral insurance company Vitality and what goes into its efforts building an ecosystem of surveillance technology to modify individual behavior.
Imagine your insurer were an ‘active life partner’, even a guardian of your future, which engaged with you on a regular basis, through a complex system of digital technology and behavioural science, to ensure you were leading the healthiest, longest, most valuable life possible. This is the foundation for a growing model of ‘behavioural insurance’ that has become a major approach in the industry. By capturing constant streams of data about consumer behaviour, and creating incentive programmes to modify how people behave, insurers hope to unlock a key source of value and risk management. We critically analyze this model through a detailed case study of a world leader in behavioural insurance: Discovery Limited and its Vitality ecosystem of data-driven, behaviour-based, interactive wellbeing programmes for life/health insurance. We detail the behavioural theory of risk and the moral economy of ‘shared value’, which underpins the model of insurance that Vitality champions and justifies its activist approach to social change.
The second paper is on the extent to which our FIRE sector (finance, insurance, and real estate) has embraced digital technologies. What sort of tech is being developed, towards what ends, and by what actors? What practices are they encouraging or disrupting, what claims are they supporting in the tech sector, and what does all of this have to do with surveillance?
The deepening integration of the technology sector with the FIRE sector (finance, insurance, real estate) is, arguably, one of the most consequential developments in contemporary political economy. With names like fintech, insurtech, and proptech, the influence of these industries can be found everywhere and only continues to grow. At their heart, the rise of FIRE-tech is about upgrading old and building new forms of surveillance and then applying them to increasingly more people, places, and processes. And yet, critical research on FIRE-tech from the field of surveillance studies has not kept pace with these developments. This special issue offers us a chance to look back on the contributions made by the field and look ahead at the challenges it now faces. As we think about the issues and agendas that should guide our research moving forward, there is value in bringing surveillance studies and FIRE-tech into closer alignment. Doing so provides an opportunity to increase the analytical power and ongoing relevance of the field by critically confronting the development, purpose, and impact of FIRE-tech. In short, we should frame FIRE as surveillance and we should see surveillance via FIRE.
The third paper highlights the importance of insurance to modern civilization, and by extension how consequential the emergence of certain technologies in life/health insurance could prove to be.
Calling attention to the growing intersection between the insurance and technology sectors—or ‘insurtech’—this article is intended as a bat signal for the interdisciplinary fields that have spent recent decades studying the explosion of digitization, datafication, smartification, automation, and so on. Many of the dynamics that attract people to researching technology are exemplified, often in exaggerated ways, by emerging applications in insurance, an industry that has broad material effects. Based on in-depth mixed-methods research into insurance technology, I have identified a set of interlocking logics that underlie this regime of actuarial governance in society: ubiquitous intermediation, continuous interaction, total integration, hyper-personalization, actuarial discrimination, and dynamic reaction. Together these logics describe how enduring ambitions and existing capabilities are motivating the future of how insurers engage with customers, data, time, and value. This article surveys each logic, laying out a techno-political framework for how to orient critical analysis of developments in insurtech and where to direct future research on this growing industry. Ultimately, my goal is to advance our understanding of how insurance—a powerful institution that is fundamental to the operations of modern society—continues to change, and what dynamics and imperatives, whose desires and interests, are steering that change. The stuff of insurance is far too important to be left to the insurance industry.
And lastly, I want to point you towards a symposium by the Law and Political Economy Project focused squarely on insurance:
From the health care we receive to the public services our cities provide, private insurers play a considerable yet often overlooked role in our political economy. And insurance markets, of course, do not arise naturally or spontaneously, but are instead the product of specific social and legal conditions. The law enables insurers to take on this outsized role by carving out antitrust exceptions for the insurance industry, by allowing insurance industry lobbyists to influence political decisionmaking, and by making possible the financial instruments that profit from risky development.
China
The US-China Cold War has been raging for years now, with the tit-for-tat game poised to ratchet up with Trump’s return to the White House. On December 5th, The Financial Times reported on China's attempts to respond to the latest wave of US export controls by shifting to local chips and a ban on shipping key minerals and metals to the United States.
Analysts at Bernstein estimate Chinese groups have the power to influence sourcing decisions for the roughly 40 per cent of the global smartphone market they control and the 23 per cent of the computer market supplied by companies that include the world’s largest PC maker Lenovo.
Customers in China, for example, contributed 27 per cent of sales last year for Intel, America’s stumbling traditional chip champion. Artificial intelligence chip giant Nvidia drew 17 per cent of sales from the country. Arizona-based Onsemi estimates its chips are in half of China’s electric vehicles. Mobile processor maker Qualcomm derived about half of its $39bn in annual revenue from China.
On December 9th, The Financial Times reported China was launching an antitrust probe into Nvidia. Life comes at you fast.
Over recent years, Nvidia has become a global market leader in AI chips, with its graphics processing units becoming crucial in developing leading AI models. But US export controls have forced Nvidia to sell watered-down versions of its must-have GPUs in China and given rise to a large black market of smugglers who illegally bring its more advanced processors into China.
“This probe appears to be a political action rather than a legal one,” said a Chinese antitrust expert, who asked not to be named, adding: “Never before has state media taken the lead in announcing an investigation.”
And on December 12, The New York Times reported on the fight breaking out between government officials and tech firms over restricted sales to China.
Over the past year, an intense struggle has played out in Washington between companies that sell machinery to make semiconductors and Biden officials who are bent on slowing China’s technological progress. Officials argue that China’s ability to make chips that create artificial intelligence, guide autonomous drones and launch cyberattacks is a national security threat, and they have clamped down on U.S. technology exports, including in new rules last week.
But many in the semiconductor industry have fought to limit the rules and preserve a critical source of revenue, more than a dozen current and former U.S. officials said. Most requested anonymity to discuss sensitive internal government interactions or exchanges with the industry.
The U.S. chip equipment companies argue that they do not oppose stronger rules as long as they also apply to international competitors. Their chief complaint is that the U.S. industry is the only one facing restrictions, allowing companies in Japan and the Netherlands to step in to supply China with technology. That damages U.S. companies while also failing to restrain Beijing, they argue.
Early last month, The Economist wrote up a great overview of the absolute juggernaut China has become in the global race to build up green energy before we destroy our ecological niche.
Chinese money props up every stage of the clean-energy supply chain. Between 2018 and 2023 global investment in the refineries and factories that turn raw materials into wind turbines, electric vehicles (EVs) and other green technologies came to $378bn, according to BloombergNEF, a research firm. Nearly 90% of that came from China (see chart 1).
Thanks to these investments, China produces far more clean-energy equipment than any other country. Its companies manufacture enough lithium-ion batteries (which are used to power EVs) to satisfy the whole of global demand. Eight in ten of the world’s solar panels are made in China, according to the International Energy Agency, an intergovernmental body. By building whopping economies of scale and competing with each other fiercely, Chinese companies have slashed costs.
China is not only supplying these technologies, it is driving demand for them. More than half of its electricity is still generated by coal. But last year Chinese firms plugged some 300 gigawatts of wind- and solar-power capacity into the grid, nearly two-thirds of the amount installed globally. (For comparison, Britain’s total power capacity is 100 gigawatts.) In June the world’s biggest solar farm came online in western China. It covers an area twice the size of Manhattan. China is also building more nuclear-power plants than any other country. Last year global spending on the deployment of clean-energy technologies came to $1.8trn, according to BloombergNEF, of which 38% occurred in China (see chart 2).
Adam Tooze delivered a haymaker in The Financial Times titled “Only China can now lead the world on climate,” which argues, well:
The inescapable conclusion of the past 35 years is that it is foolish to treat the US as a reliable partner in global climate policy.
During Biden’s honeymoon, the hope was that the US and Europe would act together. In Europe, outright climate scepticism is rare and the EU has built an impressive suite of subsidies and carbon pricing. The end of coal-fired power generation in the UK this year was historic. But in Europe too the cost of living crisis is swinging the political mood against tough climate action. The looming crisis in the European car industry, brought on by Chinese success in EVs, exposes the hypocrisy of a continent that promised a Green Deal while clinging to diesel.
To varying degrees, both Europe and the US have failed to grasp the decarbonisation challenge identified by their own scientists decades ago. Insofar as there is to be a global climate leader it can now only be China, which is responsible for more than 30 per cent of global emissions and has mastered the green energy supply chain. Given mounting tension with the US, Beijing has every incentive to minimise oil imports. The key question is whether the Chinese Communist party can muster the political will to override its fossil fuel interests. If it can, it will not single-handedly solve the climate crisis but it will assert a claim to leadership that the west will find hard to answer.
Artificial Intelligence
I’m working on the third part of my series of essays on AI (read parts 1 and 2), so I’ll share some of the stuff that’s informing the next entry!
First off is Brian Merchant's report for the AI Now Institute that tracks the emergence of "AGI" as a lodestar for generative AI firms and their attempts to cobble together business models.
Taken together, we see a portrait of a company that wrapped itself in an altruistic narrative mythology to attract researchers, investment, and press. It stumbled into a hit app that opened a pathway to a new product category in commercial generative AI (something Silicon Valley had been pursuing unsuccessfully for years), ignited a gold rush, drew competitors, and wielded its unique legacy and relationship to AGI to differentiate itself. However, given that generative AI technology is so expensive to develop and run, a unique imperative to generate revenue—lots of revenue—in order to capitalize on its popularity, cultural cachet, and market opportunity has become the company’s dominant concern. (In the past, as mentioned earlier, OpenAI has stated that its move away from nonprofit status was necessitated by the need for more compute power if it were to make satisfactory progress in creating AGI. This can be seen instead as a move toward preparing for an era of commercial product releases, even if the company remained unprepared for the success of ChatGPT when it arrived.) This is how a company transforms from a nonprofit whose aim is to be “owned by all of humanity” and “free of the profit motive” to one whose board is purged of safety experts in favor of Larry Summers.
In Nature, David Gray Widder, Meredith Whittaker & Sarah Myers West published an essay about the illusory nature of "open" artificial intelligence—systems that are ostensibly transparent and allow for modification, fine-tuning, and competition.
This paper examines ‘open’ artificial intelligence (AI). Claims about ‘open’ AI often lack precision, frequently eliding scrutiny of substantial industry concentration in large-scale AI development and deployment, and often incorrectly applying understandings of ‘open’ imported from free and open-source software to AI systems. At present, powerful actors are seeking to shape policy using claims that ‘open’ AI is either beneficial to innovation and democracy, on the one hand, or detrimental to safety, on the other. When policy is being shaped, definitions matter. To add clarity to this debate, we examine the basis for claims of openness in AI, and offer a material analysis of what AI is and what ‘openness’ in AI can and cannot provide: examining models, data, labour, frameworks, and computational power. We highlight three main affordances of ‘open’ AI, namely transparency, reusability, and extensibility, and we observe that maximally ‘open’ AI allows some forms of oversight and experimentation on top of existing models. However, we find that openness alone does not perturb the concentration of power in AI. Just as many traditional open-source software projects were co-opted in various ways by large technology companies, we show how rhetoric around ‘open’ AI is frequently wielded in ways that exacerbate rather than reduce concentration of power in the AI sector.
Our second paper, from David Gray Widder along with Sireesh Gururaja and Lucy Suchman, looks at Pentagon funding for defense tech that integrates algorithmic systems:
In the context of unprecedented U.S. Department of Defense (DoD) budgets, this paper examines the recent history of DoD funding for academic research in algorithmically based warfighting. We draw from a corpus of DoD grant solicitations from 2007 to 2023, focusing on those addressed to researchers in the field of artificial intelligence (AI). Considering the implications of DoD funding for academic research, the paper proceeds through three analytic sections. In the first, we offer a critical examination of the distinction between basic and applied research, showing how funding calls framed as basic research nonetheless enlist researchers in a war fighting agenda. In the second, we offer a diachronic analysis of the corpus, showing how a ‘one small problem’ caveat, in which affirmation of progress in military technologies is qualified by acknowledgement of outstanding problems, becomes justification for additional investments in research. We close with an analysis of DoD aspirations based on a subset of Defense Advanced Research Projects Agency (DARPA) grant solicitations for the use of AI in battlefield applications. Taken together, we argue that grant solicitations work as a vehicle for the mutual enlistment of DoD funding agencies and the academic AI research community in setting research agendas. The trope of basic research in this context offers shelter from significant moral questions that military applications of AI raise.
This next paper comes out of AI Now, authored by Heidy Khlaaf, Sarah Myers West, and Meredith Whittaker. Here the focus is on commercial AI models deployed by militaries that are trained on personal data swept up by data brokers, or that are particularly prone to high-cost errors that harm civilians (e.g. injury or death):
Discussions regarding the dual use of foundation models and the risks they pose have overwhelmingly focused on a narrow set of use cases and national security directives, in particular, how AI may enable the efficient construction of a class of systems referred to as CBRN: chemical, biological, radiological and nuclear weapons. The overwhelming focus on these hypothetical and narrow themes has occluded a much-needed conversation regarding present uses of AI for military systems, specifically ISTAR: intelligence, surveillance, target acquisition, and reconnaissance. These are the uses most grounded in actual deployments of AI that pose life-or-death stakes for civilians, where misuses and failures pose geopolitical consequences and military escalations. This is particularly underscored by novel proliferation risks specific to the widespread availability of commercial models and the lack of effective approaches that reliably prevent them from contributing to ISTAR capabilities.
In this paper, we outline the significant national security concerns emanating from current and envisioned uses of commercial foundation models outside of CBRN contexts, and critique the narrowing of the policy debate that has resulted from a CBRN focus (e.g. compute thresholds, model weight release). We demonstrate that the inability to prevent personally identifiable information from contributing to ISTAR capabilities within commercial foundation models may lead to the use and proliferation of military AI technologies by adversaries. We also show how the usage of foundation models within military settings inherently expands the attack vectors of military systems and the defense infrastructures they interface with. We conclude that in order to secure military systems and limit the proliferation of AI armaments, it may be necessary to insulate military AI systems and personal data from commercial foundation models.
And a final paper from Gaël Varoquaux, Alexandra Sasha Luccioni, and Meredith Whittaker on how the race to build ever-bigger AI models comes with staggeringly high costs, whether in climate impact, surveillance, or the domination of our politics and knowledge production by private interests:
With the growing attention and investment in recent AI approaches such as large language models, the narrative that the larger the AI system the more valuable, powerful and interesting it is is increasingly seen as common sense. But what is this assumption based on, and how are we measuring value, power, and performance? And what are the collateral consequences of this race to ever-increasing scale? Here, we scrutinize the current scaling trends and trade-offs across multiple axes and refute two common assumptions underlying the 'bigger-is-better' AI paradigm: 1) that improved performance is a product of increased scale, and 2) that all interesting problems addressed by AI require large-scale models. Rather, we argue that this approach is not only fragile scientifically, but comes with undesirable consequences. First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint. Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate. Finally, it exacerbates a concentration of power, which centralizes decision-making in the hands of a few actors while threatening to disempower others in the context of shaping both AI research and its applications throughout society.
Science Fiction
If you’re reading my recommendation roundups, you know how much I love science fiction. Here are this week’s two book recommendations:
Star of the Unborn by Franz Werfel
I have spent the past ten years recommending Star Maker by Olaf Stapledon to anyone who will listen—in fact, you should stop reading and find that book first if you haven't read it already (go in blind). But for those who are familiar with it and have been wanting something more (like a plot, though there is not much of one here either), this is the book for you. A travelogue set 100,000 years into the future; like all my other recommendations, you must go in blind.
Lessons in Birdwatching by Honey Watson
Here's to the crazy ones. The degenerates. The perverts. The sickos. The people who like to watch. The ones who see things differently. If you do not enjoy this book (and feel parts of yourself squirm in the process), then I am praying for the salvation of your eternal soul.
Song of the Week
Here are two songs I hope you enjoy!
Willie Scott & The Birmingham Spirituals – Keep Your Faith to the Sky
Eugene McDaniels - The Parasite (For Buffy)
And that’s it for the roundup this week! If you’ve made it this far, why don’t you subscribe to my newsletter and share it with a friend?