Lately I have been railing against how corporations have become a way for people to make evil decisions without accepting any physical or moral accountability. For example, executives who decide for profit's sake to skimp on aircraft maintenance or railroad track inspections that lead to death face no personal consequences and, more importantly, see no reason why they should.
And I realized that framing AI as a political ideology takes this "economics of alienation" to the next level: now the executive can say not merely "It wasn't me, it was the obligation to maximize shareholder value" but "it wasn't me, it was the algorithm".
This philosophy of lack of accountability, of detaching individuals from the consequences of their choices, is to me the fundamental flaw at the heart of... well, I was going to say "Western democracy", but that's a dubious expression, so whatever it is we live under now.
TL;DR: imagine a world where we could say to a police chief "yes, you can have algorithmic policing - but if it turns out to be racist, you personally are liable. And by 'liable' we don't mean 'resign on a full pension'."
You might like Dan Davies's *The Unaccountability Machine*, which details where this system of unaccountability comes from. Much of it is about Stafford Beer's cybernetic perspective on corporate organization (Beer is the originator of "The purpose of a system is what it does").
I'm revisiting cybernetics and it's a weird feeling to see how cybernetics/systems principles have been used/molded by various streams of AI R&D... it feels both universalist and not so at the same time, unsure
+ adding to what folks have said, I'd also recommend David Gray Widder and Dawn Nafus's "Dislocated Accountabilities" (https://journals.sagepub.com/doi/full/10.1177/20539517231177620) as well as DGW et al.'s "Basic Research, Lethal Effects" for further reading if interested.
The former unpacks the software engineering principle of "modularity", which was curious to me as I'd previously encountered this principle in a graphic design context. There are a lot of great excerpts to gather from it; for me personally, the one below hits home:
"Modularity sets the stage for a refusal to accept a relationship between “us” developers and “them” technology users, let alone other affected citizens (McPherson, 2018; Suchman, 2002). Others have noted that modularity is an epistemic culture (i.e. Cetina (1999)) that cultivates a capacity to “bracket off” (Malazita and Resetar, 2019), even when human beings are bracketed off, not pieces of code. This makes it an everyday form of the modernist fallacy of the separability of society from technology (Latour, 1993), separating code from harms it enables..."
and the latter shows how (from what I recollect) "basic research" that's vague/ambiguous in scope is more of a rhetorical move of deceptive self-reassurance that one's research won't be used for military/defense contexts. Our propensity to assign responsibility to technologies rather than to their maintainers, developers, and marketers might be traceable even to VAR systems, as an advisor has remarked...
Thank you for saying this! I've been thinking about this, and I've come to realize that this distancing from responsibility is actually a *core* feature of capitalism.
Think of it this way: if you have a cow and I have a wheat field, I can trade you some wheat for some milk. Easy, direct, barter system. Now, to have an entire economy where different individuals specialize in different tasks (raising cattle, farming wheat, making wagons...), we need something more efficient than the barter system; we need MONEY.
And already you have the seeds of modern capitalism, and with them the seeds of the flight from moral responsibility that capitalism gives us. If you use money to buy milk, but you raise that money by selling grain to a farmer who mistreats his livestock, now you can say "hey, it's just business." Especially if you go through a middleman, or layers of middlemen, like modern capitalism does.
That's why I say that "it's just business" or "it's just money" is sort of the new version of "just following orders." It's an attempt to dodge moral responsibility for your actions. Unlike with the Nazis, unfortunately, in modern capitalism we're sort of falling for the lie...
If you couldn't access the Potemkin AI PDF, try here: https://direct.mit.edu/books/oa-edited-volume/5319/chapter/3800165/Planetary-Potemkin-AI-The-Humans-Hidden-inside
thank you! just in case for future readers/browsers, I found a similar (same?) text by Sadowski about this in Real Life Mag: https://reallifemag.com/potemkin-ai/
The Bank of Australia was a failed financial institution of early colonial New South Wales. It was formed in 1826 and collapsed in 1843. (from the usual source)
I came to this piece by way of Molly White+Nikhil Suresh+Iris Meredith+Gary Marcus (a kind of second-set Dead compilation) & it was a gas the whole way, inasmuch as this subject can ever be a gas. I have no more tech knowledge than how to on-turn the devices in question, and can even find that challenging, but it's exhilarating and heartening to know that such sharp & clear-thinking (& *young*) people are engaged in this work. And y'all can all write like motherfuckers, that's the most fun thing. It's not just that I, like Lord Peter Wimsey, find it so easy to get drunk on words that I am seldom perfectly sober, it's that sharp writing plainly made is one of the surest indicators of quality thought. I also take it, perhaps speciously, as a guarantee of the ethical bona fides of the author, though their willingness (compulsion?) to come at tech from a social-justice perspective, and the degree to which they engage with the economy of tech (which means so much more than the money involved and must encompass a thorough and detailed survey of all the pieces -- technological, societal, governmental, financial, and above all political, which leads inevitably to race as pretty much everything does in the US) show a desire to do that now very old-fashioned thing, speak truth to power in hopes of making the world a better place.
I'm sort of in a similar situation, trying to understand this as someone who's both watching from the sidelines and intimately impacted by it (as all of us are). Unsolicited recommendation, but I found Eryk Salvaggio's "The Hypothetical Image" (https://www.cyberneticforests.com/news/social-diffusion-amp-the-seance-of-the-digital-archive) incredibly helpful as a conceptual frame. He proposes a couple ways/metaphors with which to conceptualize diffusion models used in "generative image making" (scare quotes mine), first sifting through the origins of the insurance industry and the many ways people have used technological interfaces to either disappear, obscure, or escape confrontation of their own biases, abuses of power, and complicity in systems of oppression.
Would recommend Professor Ethan Mollick on Substack and The AI Daily Brief on YouTube and Spotify, and then there's the recent Nobel AI prize winners.
Though it’s good to be aware that the recent Nobel prizes, at least the chemistry prize (the one awarded for the AlphaFold work), were not for generative AI or large language models. Indeed, the AlphaFold work and much of DeepMind’s other work is closer to what Gary Marcus and his cohort envision than to the increasing-scale-will-be-magical thinking of OpenAI. That said, it’s not clear to what extent OpenAI is working on what Marcus has described as “algebraic AI”, and others have described as “Good Old-Fashioned AI (GOFAI)” — supplementing the statistical models with reasoning about facts — since they’re pretty secretive about what’s going into their systems.
The other prize, the physics prize awarded to Hopfield and Hinton for Hopfield nets and related foundational neural-network work, comes closer, as it was for the fundamental work enabling things like LLMs.
One of the things that most frustrates me about the GenAI bubble is that when it bursts, it's going to take down with it funding for the valuable work that other AI (ML, or "GOFAI", or whatever) is doing.
And that will have real consequences in terms of things like vaccines not created and diseases not cured and lives not saved.
All because of the greed and ego of a few dozen tech bros.
A thousand times yes. It won’t be the first AI winter to follow AI pipe dreams.