
Why We Fear AI w/ Hagen Blix

Hosted by Block & Build

In this episode we discuss how the accelerating hype of the “artificial intelligence” race affects left political theory and movements. Cayden talks to co-author of the new book Why We Fear AI: On The Interpretation of Nightmares, Hagen Blix. We’ll explore how the seeming inevitability of AI is intertwined with neoliberalism as well as how we urgently need to figure out how to narrate the development of technology for ourselves – so that the billionaires don’t do it for us.

Other Resources From This Episode

Connect with Block & Build and more


This transcript was automatically generated and may contain minor errors.

Cayden Mak: [00:00:00] Welcome to Block & Build, a podcast from Convergence Magazine. I’m your host and the publisher of Convergence, Cayden Mak. On this show, we’re building a roadmap for the movement that’s working to block the impact of rising authoritarianism while building the strength and resilience of the broad front that we need to win.

This week on the show, I discuss the impacts of the accelerating artificial intelligence race and hype on left political theory and movements. I’m joined by one of the co-authors of the new book Why We Fear AI: On the Interpretation of Nightmares, Hagen Blix. We’ll explore how the seeming inevitability of AI is intertwined with neoliberalism, as well as how we urgently need to figure out how to narrate the development of technology for ourselves so that the billionaires don’t do it for us.

But first, these headlines. I’m sure our listeners have been paying close attention to the situation on the ground in Gaza this week. Gazans are dropping in the streets due to starvation, and reports indicate that many are [00:01:00] reaching the fifth and final stage of malnutrition, after which providing them with food is simply not enough to bring them back from the brink.

If you, like me, feel absolutely beside yourself about this, you are not alone. Look, there’s one way of narrating what we’re seeing that says the world doesn’t care that Palestinians are dying. That’s a narrative that both makes those of us who are outraged feel alienated and absolutely barking mad while erasing the power issues that underlie what we’re witnessing.

I’m just gonna fully quote a social media post by friend of the pod Iman Abdelhadi on this. She says it’s worse than Gazans being starved and destroyed while no one cares. It’s that hundreds of millions of people do care and are powerless to stop it. We need a world where that could never happen, where the masses actually have political power.

I couldn’t have said it better, and I think we need to get real about what we’re doing to build power. Those of us opposed to genocide haven’t built enough. And while it might be easy to self-flagellate and say that this is all our fault, we’re also up against historic headwinds: [00:02:00] capital, state power, and a media and information system that is fully in their hands.

While we should be serious about our responsibility to build power, that seriousness includes a real assessment of the power of the opposition. In the meantime, we owe it to Gazans who have survived a genocide so far not to give up the fight. Organizations like the Samir Project are still working to get food and medical supplies into the region, and they deserve our support.

The people who supposedly represent our interests need to know that we have never been more pissed about this. Our heartbreak and rage should fuel our steadfastness. Before today’s interview, I wanna take a moment to ask you to support Convergence during our annual summer fundraising drive. Reader and listener support is critical in a time when independent media is under existential threat.

So if this podcast has in fact helped, consider what you can do to give back to make sure that we can keep making it. Anyone who starts an annual or monthly subscription, gives $25 or more, or upgrades their subscription will receive a special thank-you gift. [00:03:00] Head over to bit.ly/summerfunddrive (all one word) to make your contribution today.

You’ll also find that link in the show notes. The new book Why We Fear AI: On the Interpretation of Nightmares explores the realities and myths of the many nightmarish interpretations and predictions about the current AI boom. Through a clear-eyed, material, and leftist lens, it serves as an excellent primer on what we mean when we talk about AI and whose interests this technology operates in, as well as how it intersects with politics, economics, surveillance, and more. I was joined earlier by one of the book’s co-authors, Hagen Blix. Have a listen. Hagen Blix, thanks so much for joining me today.

Hagen Blix: Thanks for having me. 

Cayden Mak: Before we really dive into the meat of the book, could you let our listeners know a little bit about your background in the fields of cognitive science, linguistics, and AI? How did you get into it? And how do those fields relate to the approach that you take in Why We Fear AI? [00:04:00]

Hagen Blix: I’m a cognitive scientist. I work on human language syntax in my academic research. And yeah, there’s some interesting stuff happening there. There are people who care about how children acquire language.

They care about what unique properties human languages in general have. We are, after all, the talking species; other species may have communication systems, but they don’t have language in the way that we do. And so a couple years ago these transformer models came around the corner, and the linguists saw that something was happening there.

You know, they came out of engineering efforts. And so I was originally just interested in what is going on in this space, starting from a relatively apolitical, technical interest, just out of curiosity. And then over the last few years I’ve really, and quickly, started to think about what these things represent politically, right?

And [00:05:00] I think the question is just getting more and more acute. My co-author works in industry as a researcher, and she has an AI background; she’s a computer scientist. So yeah, we’ve been talking about these kinds of things for a while now.

Cayden Mak: That’s really interesting, ’cause I think one of the really useful things that the first half of the book does is lay out the development of the computing technology that makes, quote unquote, AI possible, as well as the social, political, and economic conditions that evolved alongside it, or really that it evolved within, right? Those things go together. And it struck me upon reading it that it is often in the interest of industry and the tech oligarchs for us to divorce these things, that they are meant to be held separate in the interests of people who have power.

So I’m [00:06:00] interested for you to talk a little bit about that linkage that you and Engberg are making in the book. Where did that realization come from? Can you explain a little bit about this historic co-evolution between computing technology and neoliberalism?

Hagen Blix: Yeah, we got really interested in that in the course of thinking about this book, right?

So I think we started from a place that seems quite natural if you’re coming from the left, which is to say, thinking about the nature of the training data. You know, these machines are fed on every piece of text that you could legally or illegally acquire on the internet. Originally they were usually trained on just Wikipedia or something, right? But I think it’s quite natural for someone from the left to think about this as a kind of enclosure. There’s something that used to be some kind of public space; it’s not taken away from the public, the internet still exists, but it kind of reappears inside [00:07:00] these machines. So we started to think about how this is similar to past forms of what you would call enclosure, right? The situation where you take something that used to be a public good or shared good and you privatize it.

Right? So we started thinking about that, and then started to think about computing technology more broadly. You know, one of the things that people always stress about these AI models, including many AI researchers that we both know, is their black box character, right? There’s a real sense with these machines that even the people who built them feel like they don’t really know why they work.

So that sense of, oh, what does it really mean for a tool that we made to be a black box, to be incomprehensible? That really struck us as a question. We got very curious, because it feels, in a sense, nonsensical. Well, you programmed it, you put in every single step that the [00:08:00] thing does, and then you fed in all the words and the machine reconfigured itself.

You know, the very basic way one can imagine that is that you blank out some words in the training data and you tell the model, given the words it’s seen, to predict what was in the blank. And then you do that basically trillions of times, and that way the internals of the machine get a really good statistical representation of, you could say, what text usually looks like, right?

So in this training, there’s a step, I think, where this kind of thinking, oh, we know what the machine is doing, runs out. We know what the machine is doing during learning, but we don’t really know what it is that the machine learns from all these texts, other than at this very abstract level, right?
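The fill-in-the-blank objective Blix sketches here can be illustrated in miniature. The snippet below is a toy stand-in, not a transformer: it only counts which word most often follows a given word in a tiny made-up corpus, then uses those counts to fill a blank. Real models learn far richer statistics, but the shape of the objective, predict the missing word from context, is the same.

```python
# Toy "predict the blank" model: count word-to-next-word frequencies in a
# tiny corpus, then guess a blanked-out word from the word before it.
from collections import Counter

def train_bigram_counts(corpus):
    """Count how often each word is followed by each next word."""
    counts = {}
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts.setdefault(prev, Counter())[nxt] += 1
    return counts

def predict_blank(counts, prev_word):
    """Fill the blank with the statistically most likely next word."""
    if prev_word not in counts:
        return None
    return counts[prev_word].most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the mouse",
]
counts = train_bigram_counts(corpus)
# In this corpus, "cat" is the word that most often follows "the".
print(predict_blank(counts, "the"))  # prints "cat"
```

A large language model does the same kind of thing with billions of parameters over trillions of tokens instead of a frequency table, which is exactly why its learned internals are so much harder to inspect than these counts.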

And so we got to thinking, well, it feels like there’s a continuity there, right? One of the things that we were [00:09:00] thinking about is, you know, you write an email. I mean, I don’t really know how an email server works. I have the vaguest of ideas; I know some people set up their own email server. But there’s a sense in which, from the user’s perspective, it’s a ubiquitous experience these days to think, oh, I don’t know how the technology works on the inside, but I know how to use it.

And then we got to thinking that’s not a necessary property of technology, right? I can say, ‘I know how to use a computer, but I don’t know how the computer works.’ But it seems really ludicrous to say this about a hammer. ‘I know how to use a hammer, but I don’t know how a hammer works’ seems like a totally bizarre statement. So we got interested in the nature of this divorce between the internal workings of a tool and the ability to use it, and we got really interested in what kind of social developments give rise to that, what kind of politics is entailed in that kind of thing.

Cayden Mak: I mean, I find that really interesting too, because so [00:10:00] often it has felt to me like, when people who are working on developing these generative AI systems say that they don’t know what the system is doing, my impulse is to think that’s kind of bullshit, in a lot of ways, right?

Like, with you describing the training process, it’s pretty clear to me that if you have an algorithm you’ve trained on a set of data, such that it says, alright, here’s statistically what’s likely to come next, that is then how the machine in essence works. It’s not that we can predict exactly what the output is going to be, because it’s doing a statistical thing that is more complex than what our minds can come up with that quickly. And to me it also underscores that these things aren’t intelligent in the way that we are intelligent, right? They’re just sort of procedural, as opposed to when you and I are having a conversation, where we’re [00:11:00] relying on actual domain knowledge about things, whether that thing is politics, whether that thing is economics, whether that thing is computer science

Hagen Blix: Or, you know, really embodied things. If you’re talking about hugging someone, or how sand feels between your fingers, the model will do that just fine. But of course the model isn’t trained on what it actually feels like, because we don’t have that kind of data in any form that lends itself to recording the way language has, ever since we invented writing.

Cayden Mak: Yeah, right. It’s not an embodied thing. So much of our knowledge is about being in the world and observing stuff, and these models don’t actually observe things; they merely spit out statistically likely stuff. And that seems really important. I think your tracing of the history of the transistor, which is an essential part of a computing system, from the moment when it was [00:12:00] possible to basically take apart a transistor radio and put it back together to understand how it works, to the place where we are now, where chips have just huge numbers of transistors on them, speaks to this. So tell me a little bit about the thought behind starting with the history of the development of the transistor in particular. And could you tell us a little bit about how that sort of enclosure of the knowledge of how a technology works emerged alongside neoliberalism as the dominant economic logic of our time?

Hagen Blix: Yeah, I think there are some interesting cultural parallelisms. Intuitively, one thing that is happening with these technologies, like the email server or whatever, is that they get more and more concealed from us, because we each have access to a smaller and smaller percentage of the relevant knowledge.

But there are aspects of that that would [00:13:00] presumably be true under any form of social organization, right? There’s always gonna be someone who knows more about something than you, and there’s gonna be something that you know more about than someone else. That’s kind of the nature of that technological development.

I think there’s a real sense in which, on the political level, that goes hand in hand with a really intense stratification of the working class. There’s a constant specialization. And this is one thing that we really work out in depth in the book, where we draw a lot on, for example, Harry Braverman, who wrote Labor and Monopoly Capital in 1974. I always tell people, if you wanna read one book that actually tells you what AI is about, you should probably go and read Harry Braverman from the seventies. Sometimes things are easier to see just before they start happening. But yeah, [00:14:00] so you can take that observation of this increasing complexity and this increasing stratification, and you can think about why this constant development happens.

Right? So certainly capitalism, for all its horrible ills, is also very good at increasing productivity. Most technological developments we think of in the history of capitalism increase productivity, right? That means you can do something a little faster; you can produce something a little faster than you could before.

But there’s also something else. Sometimes you can transform a task from one that required a high-skilled worker, a worker who, for their abilities, had special bargaining power, into one done by somebody who’s less skilled, right? So I think from the forties to maybe the seventies, you get this very [00:15:00] strong development. There were people who cut a lot of the complex metal pieces that make up all the industrial machinery that continues to grow over the decades, who cut these complex metal pieces for rare use cases that have to be extremely precise. These people are getting replaced more and more by tools that take over part of the function, right? So maybe you start with a lathe, and you end up with a CNC machine.

With a CNC machine, you put in a 3D-modeled object, and then the machine calculates the path along which the various drill bits that carve the metal piece have to move, so that you can put in a metal block and get out whatever kind of shape you need, right? But now you’ve done something interesting, something that I think is really informative about how we should think about artificial intelligence: you’ve taken something that used to be the specialized knowledge of the worker. That worker knew [00:16:00] exactly how to cut complex pieces of metal with high precision from a block of metal, and you’ve transformed that knowledge into a property of the machine. There’s a sense in which something of what the worker used to know is now a property of the tool. And the worker who used to be the technical specialist has been turned from a high-skilled worker into a low-skilled worker.

And of course, at the same time, on the other end, there’s now somebody who makes the CNC machine, right? Somebody knows how to program a CNC machine, so there’s technological development happening there, and so on. So we can take these two movements, and we can call this one de-skilling, right? The metal worker was turned from someone who had to be paid as a highly skilled laborer into someone who doesn’t have to be. I think that’s always very important with these words: de-skilling doesn’t mean [00:17:00] reducing the skill that somebody has, but reducing the degree to which somebody has to be paid as a skilled laborer, right?

The worker might still know very well how to cut complex metal parts from a block of metal, but the worker can no longer command special wages; he has lost access to the bargaining power that those skills gave him. So we can take this de-skilling as a technological development and abstract it away from the productivity increases, right? We can say productivity increases, all else being equal, are good; the kind of society that I’d rather live in would still have productivity increases. But the de-skilling, of course, has now turned the metal worker into a poor metal worker. So that’s not great.

And what’s interesting is that from the perspective of capital, or, you know, from the perspective of a [00:18:00] balance sheet, those two developments look kind of the same. Because a productivity increase means, okay, instead of taking an hour to make an object, maybe a worker only takes half an hour, so you cut your labor costs per object in half. And de-skilling means, well, maybe instead of paying the worker $30 an hour, you now pay them $15 an hour. That also means you’ve cut your labor costs in half. Right? So from the perspective of a balance sheet, productivity increases and de-skilling look exactly the same.
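The balance-sheet equivalence described here can be checked with a few lines of arithmetic. This is a toy illustration using the conversation’s round numbers ($30 an hour, one object per hour): halving the hours per object and halving the wage yield identical labor costs per object.

```python
# Labor cost per object = hourly wage x hours needed per object.
def labor_cost_per_object(wage_per_hour, hours_per_object):
    return wage_per_hour * hours_per_object

baseline = labor_cost_per_object(30, 1.0)           # $30 per object
productivity_gain = labor_cost_per_object(30, 0.5)  # worker is twice as fast
de_skilling = labor_cost_per_object(15, 1.0)        # worker paid half as much

# From the balance sheet's point of view, the last two are indistinguishable:
print(baseline, productivity_gain, de_skilling)  # prints: 30.0 15.0 15.0
```

The two scenarios are very different for the worker, but identical in the ledger, which is the point being made.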

So we can think of these developments as politically related, right? Because we then need to jump back and think about, oh, what about the upskilled person? And of course, as an investment choice, you would only upskill this person, whom you have to pay for knowing how to make a CNC machine, if you’re still overall saving money, right? You’re not gonna pay the person who makes the CNC machine $20 more so that you can save [00:19:00] $15 somewhere else, because then you’re out $5. You’re only gonna do it if the wage increase of the person who makes the CNC machine is smaller than what you save elsewhere. So in a sense, what you get is an overall structure of impoverishment everywhere.

Workers are getting de-skilled, but you’re also getting a sharp increase in social differentiation, because what happens under capitalism, especially since the fifties, is that this kind of de-skilling drive is happening radically, everywhere, to everyone, and recursively, right? Like I said, the machinist was the one who was originally upskilled, and that’s the person who’s now getting de-skilled. And it’s always partly about asking: can we take something that was economically valuable knowledge, skill that gave you special bargaining power, and put it into a tool? So that’s a totally different way of thinking about why I know how to send an email but don’t know how an email server works: not just because email servers are [00:20:00] complicated things, but also because this constant transfer of knowledge into the things, away from the workers, is one of the ways capital fights in the struggle over wages. Right? The boss wants to pay you as little money as he can, because everything that he doesn’t give you stays in his pockets, and you want to earn a good living, right?

So there’s always this struggle, a fight, a tension over wages. And that kind of struggle is something that influences the development of technology, right? And I think in this particular moment, we see AI as a new kind of transfer of knowledge into a tool. Everyone who is some kind of knowledge worker, everyone who works with language or text or visual representations, is now subject to this kind of de-skilling in a very radical and huge way, right? There are currently attempts to put all their knowledge into a machine, to de-skill them. And, you know, [00:21:00] we see so much pushback from, for example, artists, because they’re very aware of that, right? They immediately see this as that kind of threat.

Yeah, I think it’s really crucial to think about that not just as, man, there are some horrible tech bros in Silicon Valley who always wanna fuck things up and have the most horrible idea of what it means to live the good life. That may be true, but it is also true that this general move is generated as a very deep and central property of how technological developments under capitalism are driven, precisely because they’re driven by profit interests. Right? And profit interests, from the balance sheet perspective, cannot distinguish between productivity increases and de-skilling; they look exactly the same to capital.

Cayden Mak: Yeah, I think that insight is really important, and it also feels related to what you all write about in the book: [00:22:00] the way in which capital is this thing that we ascribe a sort of disembodied will to. Our brains do the same discursive move with AI, right? We give it a sense of agency that maybe it kind of doesn’t have, which allows us to obscure responsibility for the decisions that are made by the system that is, quote unquote, making the decision. I think it’s even hard to talk about it in rigorous ways, because of our tendency to anthropomorphize these things

Hagen Blix: Because there are no words with which to not anthropomorphize the production of language. Until a few years ago, there was never anything that produced language that wasn’t a person.

Right, right. 

Cayden Mak: Yeah. And I think it’s one of these things that is almost part of the parlor trick of AI: [00:23:00] to convince us in some way that just because it outputs credible, legible sentences that make sense, largely in English, that means it’s something. So yeah, I’m interested for you to unpack a little bit of that comparison between this disembodied will of capital and the disembodied will of so-called AI.

Hagen Blix: Yeah. So in the book, I think we’re doing two things simultaneously. We’re giving a kind of economic and political analysis of what kind of thing AI is, what we should think about in terms of organizing, and what you would expect in political terms. But we’re also interested in the stories around AI, right? All these nightmare stories: oh, the AI is gonna take over; the AI is gonna make us all into paperclips; we’re all gonna die. It’s either gonna be, you know, the end of humanity or [00:24:00] some unspeakable utopia. Somehow the other utopias never get any articulation. It’s always the scary stuff. No imagination for the good, but the scary stuff, of course, gets spelled out in great detail.

And from a political perspective, I think the idea behind that was to say: these stories probably reflect class positions. And if we leave the interpretation of these stories to the rich and powerful, then we’re gonna get fucked over.

Right? So we start from a particular suspicion, the one that you kind of mentioned already. When we encounter language, language has this property that it requires us to speculate, to a degree, about the person who uses it. This can be relatively abstract. Certain abstract things basically always happen, right? There are things that we call implicatures. When somebody tells you, ‘I ate some of [00:25:00] the cake,’ you will make the inference that they didn’t eat all of the cake. These are kinds of implicatures, and we compute them, we draw these inferences, kind of automatically, based on: oh, what else could that person have said? Why didn’t they say this other sentence?

Part of the computation of just processing language involves this kind of theory-of-mind thing. Theory of mind is what psychologists call the fact that humans are very good at making mental models, in their own minds, of the content of the mind of someone else, right? And in particular, when we’re in an actual conversation, of course we are constantly modeling each other’s thoughts. That’s a very significant part of a conversation: we’re thinking, implicitly or explicitly, about what’s in the mind of someone else.

And of course that also happens when you’re engaging with a language model. We’re a kind of animal that has a very deep connection to language; it’s something [00:26:00] we evolved for. It’s clearly a very human thing, and we can’t just turn it off, right? And something that’s really striking to me about this inability to turn it off is that you can’t even turn it off if you know better. It’s such an automatic process that even if you know that there’s no person behind the language model, a lot of these things still automatically happen.

Cayden Mak: It’s easy to slip into it.

Hagen Blix: Yeah, yeah, yeah. And so we were wondering: okay, we have all these weird nightmare stories, and we have the sense that we can’t help but anthropomorphize the AI.

That’s already a good ingredient for a horror story, right? But we were thinking about that in relation to how we think about capitalism. Mark Fisher has a famous argument about the idea that we can’t help but feel that somebody in capitalism is in control. And again, that is kind of [00:27:00] the other side of the same thing. I know that capitalism is a global system for organizing the distribution of wealth, the production of wealth, where labor goes, where capital and investments go, and so on.

But I also know that the whole thing has an overall structure, right? This is not ‘Bill Gates happens to be a weirdo and that’s why this thing happens.’ That’s a small part of it, but it’s an overall structure that kind of has its own, as leftists would say, logic.

I always find this a bit much; it’s one of the terms that I try to avoid, ’cause nobody really knows what leftists mean by ‘logic.’ But let’s say, right? Let’s say you’re the CEO of ExxonMobil. And you read all this stuff about climate change, and you come up with a brilliant idea that [00:28:00] maybe we should do something about that. Maybe we shouldn’t burn all that oil that we’re drilling out of the earth. And so you’re like, well, let me turn this tanker around and do something different. And then the stock value of ExxonMobil plummets because the expected profits are lower, and then the shareholders are gonna vote you out, and somebody else is gonna be put into that position, and that person is gonna be willing to drill out the oil.

So there’s a sense in which these pressures on a CEO look like a will. It looks like, well, ExxonMobil wants to get that oil out of the ground. And of course, in one sense, it really does boil down to the stockholders. But the stockholders can all be replaced, and the CEOs can all be replaced, and somehow, if you replace everyone, that kind of effect continues to be there.

Right? So the [00:29:00] pressures of capitalism, the way that capitalism pre-structures the kinds of decisions that people in particular positions can make, kind of look like a will. But of course, just like with the language model, the fact that I know that there’s no will doesn’t stop me from somehow having it feel that way.

Right. So Mark Fisher calls that the unthinkability of the centerlessness of capitalism, and he applies it very productively, as an analytical tool, to a bunch of really interesting situations. I think one of the very clear ones to me is that in so many typical privatization cases, where there was a government service, maybe the government ran the trains or whatever, and then it gets privatized and the service gets worse, people always blame the government, because somebody has to be responsible.

But it’s kind of such an internally incoherent position to [00:30:00] say the government is at fault for the thing that the government stopped doing, because ‘the government is at fault’ is exactly the kind of argument that’s consistently used to justify having the government get out of things. Right? So it’s totally incoherent, kind of self-contradictory, but people can’t help having this feeling. And I think that is not because they don’t know better; I think it’s a structural effect. I mean, I’m fairly certain of it myself, but it doesn’t change the fact that I have these feelings.

You know, they’re, they’re getting produced. There’s something like that. And so one of the, that kind of starting points that we entertain in the book is like, well, one of, a lot of these AI horror stories reflect the fact that the commonality between the language model and capitalism that they both have this.

Hole in the center where we project something like a will, something like a person, even when we know it’s not there. Right? And then you think about, oh, you know, all [00:31:00] the horror stories about what if, what if somebody tells the AI to make as many paperclips as possible? And then the AI makes the whole world into paperclips.

And you’re like, well that does sound oddly like the oil well, right? The oil well is already doing this. Right. So maybe, maybe it’s worth them trying to, to clarify for us what parts of this are about technical properties of ai. What parts are about AI as objects in a social context, because, you know, these things cost hundreds of millions or even billions of dollars to make, they are clearly a form of capital.

They’re an asset that is supposed to make a profit. So in that sense, they’re in exactly the same kind of social configuration again as the oil well. So maybe there’s a, maybe there’s something there to unpack. And again, then if there’s something to unpack like this, then um, then we should really worry about, you know, not letting Eric Schmidt [00:32:00] or Mark Zuckerberg or Elon Musk or Peter Thiel, or any of these people who own these kinds of assets.

Um. Tell the stories of what the nightmares are about because, uh, then we might end up fighting windmills. Right? This is exactly that kind of, that kind of situation. 

Cayden Mak: Yeah. Well, I find that also really useful, because you hear a lot from these sort of existential-risk AI doomers who, as you rightly point out in the book, are often the very people developing these technologies to begin with, and they make these incoherent arguments about how the best solution to AI is to build more AI, which is patently absurd when you stop and think about it, right?

It just makes no damn sense. Um, one thing that I've always appreciated from Dr. Emily Bender and Dr. Alex Hanna is the point that AI doomerism is sort of [00:33:00] like the weird mirror image of AI hype, in that both ascribe a lot more power to this system than it actually has.

Um, and it also struck me, especially when I was reading the examples you give of the use of different kinds of AI systems, both in basically labor control in the economy and in government, that a lot of those applications, especially prediction algorithms, are things that our movements have been fighting against for like 15 years at this point.

Um, and that in some ways this hype is throwing the public off the scent a little bit. I really appreciate the ways that the book unpacks that. Um, and you take a good long look at Geoffrey Hinton in particular, who is [00:34:00] largely acknowledged as the quote unquote godfather of AI, as somebody who developed a lot of the thinking that underlies this technology.

Um, but yeah, I'm curious about that tension between the utopian vision and the doom, like AI doomerism, as essentially ways of selling AI to us no matter what the vision is.

Hagen Blix: I think this is actually a really fascinating and complicated and strange thing, right? I have absolutely no doubt that Emily and Alex are totally correct that a lot of this is a weird form of advertising. But I've also been thinking that there's a danger in overstating this particular kind of account, right? It's important to go there. There's so much hype, you know, there are so many people out there who seem to [00:35:00] think that language models actually think, right?

Like, the other day I saw a headline, I don't remember who it was, but it was about ChatGPT giving a shocking confession, and you're like, ChatGPT can't confess anything. It can just produce text that looks like a confession. So even among journalists there's clearly a lot of nonsense that needs to be debunked, and there's a big part of the public that is getting taken in by the name artificial intelligence and by all the stories. But I think there are a lot of cases where we have to be careful not to make the anti-hype argument in a way where, when you say hype, people take away that it's just hype. I think there's a danger there, right?

So when we think about current uses of language models, we can take the State Department. The State Department is currently [00:36:00] using a language model to scan social media posts of people who are on student visas. And they're doing that because they wanna repress speech about Palestine, and they wanna deport people on student visas because they're racist and they wanna have fewer people from countries that they hold particularly intense racist animosities against.

Right? Okay. So you have a language model. The language model doesn't actually understand things. The language model's gotta make plenty of mistakes. You know, you can go and measure it; it has a certain accuracy. Maybe a bunch of posts that the language model thought were about Palestine were actually about Ukraine.

Or, you know, about how great it is to bomb the Houthis, and the person was actually totally pro-Trump. Who knows, right? The language model is gonna make weird mistakes, okay? So you can go and say, well, look, it's all hype, you know, [00:37:00] the language model doesn't actually do the thing that you advertise. Or you can ask, well, what is it that you're doing from a political perspective?

Okay, we’ve already established that the State Department has two purposes. The first purpose is to suppress speech, the second per purposes too. Revoke visas and produce deportations while the deportations are still getting produced. They’re like, yeah, okay. It doesn’t fit quite the right people, but whatever.

We like more deportations. I mean, you know, they’re like, good for goal achieved anyway. They’re, they’re gonna do their victory lap. It’s even worse when you think about the suppression of speech, right? There’s a, there’s a very real sense in which the suppression of speech and the production of political fear, or even terror more generally, relies precisely on things not working in a precise fashion.

Right? If you know exactly what you're not allowed to say, you're gonna find euphemisms, right? Like, for all I know about censorship in China, and admittedly that is very little, a lot of [00:38:00] it is word-list based. And so people use a lot of euphemisms to get around the censorship.

So it is the fact that the censorship is actually precise, and thereby understandable, that allows you to deal with it. If instead anything that is in some linguistic way in the broad vicinity of saying the word Palestine, even if you don't actually say the word Palestine, is something that might get your visa revoked, then you don't know at all what might be dangerous speech. So you get much more scared, and you're much more likely to self-censor in a much more radical way. Right? So in these kinds of situations, there's a real sense in which the non-working of the language model, the fact that it works just well enough for doing bad things, is precisely what makes it more effective [00:39:00] as a repressive tool, right?

And that’s, that’s sometimes why, why I worry that when our critical approaches to AI. FO focus too much on the what is hype and what is real. When we, when we were running after them about debunking advertisements rather than that it’s too easy to lose sight of the, maybe more important question of what is the political project that people who are making, selling and using AI are engaged in.

Cayden Mak: Yeah. And it seems to me that it's hard to separate the question of what is hype from that political project, though. 'Cause the other thing that seems to be the case right now is that so much of the hype is about making it seem like the AI moment that we're in is an inevitable outcome.

Um, and that we have no choice. Uh, and I think, [00:40:00] you know, the thing that you write about in the book, about AI enclosing probabilities while foreclosing possibilities, is so much about creating this idea that the world governed in the way that you described in this State Department example is an inevitable outcome of a path that we are already headed down.

That we cannot get off, at this point, whatever train is taking us to, like, AI hell. Right. Um, but could you talk a little bit about that enclosure and foreclosure, and then the five ways that you describe in the book about how it's happening?

Hagen Blix: Yeah, yeah. Let me think about all the five ways. Every time I'm like, oh my goodness, there are so many ideas in that book.

Yeah. There's definitely, you know, I think Alex and Emily really identify a lot of this: well, [00:41:00] if the AI is so powerful, then what are we gonna do about that? And one of the things that we are trying to ask is, how is this an echo of, you know, the good old "there is no alternative" thing, right?

We've been told this about capitalism for decades now, and not just capitalism, but a particular kind of austerity capitalism, right? Where everything seems to slowly get shittier and somehow we can't do anything about any of the bad things, and there's no alternative to burning down the planet. Yeah.

There's that Mark Fisher argument, right? Mark Fisher, in his capitalist realism work, clearly draws on that sense of, you know, it's easier to imagine the end of the world than the end of capitalism. And we see a lot of that kind of stuff really deeply recurring in how people imagine AI, especially when they think of AI as kind of agents.

Now, can I actually come up with all the five ways that we [00:42:00] talk about the continuation of AI as kind of these new neoliberal projects? I think you would have to look at the book, but I don't have it with me because I'm traveling right now.

Cayden Mak: Yeah. Well, give us a top line and a couple of examples of the stuff that you talk about, in terms of how AI is serving this neoliberal project.

Hagen Blix: So the question that we wanna ask in that earlier chapter is: okay, we're already kind of thinking of it this way. Maybe AI is talking with a voice. It reminds us of the voice behind capital in a way that the oil well couldn't, because the oil well doesn't speak.

But now we have an asset, a huge asset that is constantly pushed on us. How do we imagine it speaking? And we clearly seem to imagine it in the voice of, you know, Margaret Thatcher and Friedrich von Hayek, whose work, fun fact, is cited in the original perceptron paper.

Cayden Mak: I found that to be [00:43:00] fascinating.

Hagen Blix: Right, right. So where does the sense of inevitability come from? I think we kind of cycle around that in the book. In the first part of the book, we talk about it from this development of technology in a less and less comprehensible direction, and how that sense of incomprehensibility produces a kind of, oh, you have to trust the experts. You don't know how things work. Right? And certainly neoliberal economists always were like, well, these are the people who know how to make political decisions. Right? And there's always, of course, a political valuation and devaluation going on with respect to expertise, right?

Like, somehow we've been told for decades that the economists are telling us something natural that you can't change, and so we have to follow their rules. But when the actual scientists tell us something about climate, it's like, well, how are we gonna do anything about that? [00:44:00] As if the economists weren't describing a social situation that is in principle changeable, whereas the climate scientists are describing a situation of physics that we genuinely can't change.

I mean, they tell us: if we keep doing this, then bad things will happen, and we can only change something about the keeping doing this. You know, we can stop producing that much CO2, but we can't do anything about the fact that CO2 heats the atmosphere. That is a fact that, for all I know, will for the very foreseeable future stay in the realm of genuine nature, in the sense that society can't do anything about it.

Right. So how does AI get to be read that way? I think one of the very crucial things is this production of the sense that nobody understands how these systems work. Right. That's a very crucial ingredient, for example, in the [00:45:00] production of that. Because even the AI researchers don't know how it works, and we have this sense of, oh, we always have to trust the experts about how technology works, right? Like with the email server, to bring it back to that example: I trust that somebody knows how it works, and if it breaks or does something funky, I'll hope the expert fixes it, because I don't know anymore.

Right? So in that relation between expertise and the internal workings of tools, there's a sense in which there's a smaller and smaller set of people who can be held responsible, both in a positive and a negative sense, right? I can't be held responsible for the email server not working, because I don't know how it works.

But now we do this imaginary kind of leap into saying, well, the last person who knew stopped knowing, because nobody knows how the AI works anymore. And you're almost playing a magic trick, because then the only thing that's left happens to also be [00:46:00] talking. So maybe the AI is the only one who knows. And we've been told for decades that we have to trust a particular kind of expert.

So now there's the sense of "there is no alternative." It's not just that there's no alternative to AI getting put into everything; we can't help but think that maybe there's not even an alternative to what the AI itself will, quote unquote, want. Right. And I think that kind of structure is very much reflected, again, if we go back to the other level that we're talking about in the book, in the nightmares, right?

In these kinds of nightmares, why do we imagine the AI lording over us and being powerful? One of the things that we do in the book is to say, well, let me give you a totally different way of thinking about these stories. Why are we thinking about AI future overlords?

Well, maybe the capitalists find these stories resonating because, [00:47:00] you know, I assume that there are a significant number of capitalists who would, all things being equal, also prefer that climate change didn't destroy the capacity of the planet to host human life.

You know, I mean, some of them are clearly genocidal. Some of them have already decided that they want humans to be the biological bootloader for an AI or whatever; they think of AI as the successor species. You know, you do find all these crazy things. Some of them are genuinely so genocidal that they don't mind all humans dying.

But most of them, I think, cling to their power and wealth so much that they're like, well, I can't imagine giving up my wealth, so it seems like climate change is inevitable too, right? So for them, climate change, AI, you can produce these stories like that.

That’s clearly a narrative that makes true, but maybe for the worker, right? Like one of the examples that we talk about in the book is how Amazon uses AI in their warehouses, right? They [00:48:00] have tons and tons and tons of different AI things these days, of course. But one of the things that they’ve been doing for a while in their warehouses is use, uh, video classification software that basically scores people on how well and fast they, they store, they store things, right?

People who put things in boxes are people who put things in storage for other people to get, and all this stuff gets, they’re getting, they’re getting scored. Okay? So, so from an economic perspective, you can think about this. Um, as kind of the automation of a low level oversight task, right? Once again, we see, oh, the manager was the person whose job it was to de-skill the worker, but now we see a technology to de-skill the person whose personal job it was to de-skill someone else, right?

So we see we that we, we see the kind of, you know, taylorism in a very broad sense of concentrating knowledge in order to reduce wages. We see that happening recursively, we see some manager function, low level [00:49:00] oversight is getting replaced by, uh, but you can think about that from the perspective of the worker.

You know, the worker used to have a manager. The manager may have been an asshole or a nice person or whatever, but the manager had power over the worker, right? The worker knew: that manager is giving me a score, and whether I'm still gonna be here next week or not depends on that score.

Now that person is getting a score from the algorithm. And now you're asking me, well, why do stories about the AI taking over resonate with the worker? Well, the worker really has an AI lording over them in a very real sense, right? Of course the AI is still ultimately just reproducing the interests of Jeff Bezos, and the AI's numbers are still getting fed to a manager, right?

So it's not that the AI actually is the thing that is in charge, but the [00:50:00] AI is still a thing that's experienced as control, right? And we know that when people are in these mediated kinds of situations, the last part of the chain is experienced in a more intense, more real way, right?

Like, you may know that somebody orders your boss around; your boss has a boss if you're working in a big company, right? So it's not like anybody's just acting of their own pure accord. And as we said earlier, ultimately they're all constrained by capital's demand for profit, right? So it's capitalism as a system that produces these things.

But of course that hasn't changed the fact that the immediate experience of, oh, the bot said I'm not stowing well enough and so I get fired, is a real form of power over the worker, and the worker experiences it as: yeah, a machine is taking over. Maybe the machines are taking over everywhere?

But very importantly, right, I think [00:51:00] this kind of contrast tells us why we can't leave the interpretation of why these stories resonate, and the worries about AI, to the billionaires of the world. Because they worry about something very, very different from the things that we worry about. And so we can't be taken in by their interpretation of these stories.

Cayden Mak: What do you think is the essential work for our movements right now in reinterpreting the nightmare of AI? What is the task before us, as we think about how we can intervene on the ways that capital is already using AI to consolidate its power?

Hagen Blix: I think that is an excellent and extremely important question on which to land. Yeah, absolutely. I think the first task is this: this moment has a [00:52:00] real chance to clarify how much of the shit that is fucked up is about capitalism, to link people's daily experiences of fears about the future and concrete threats to their livelihood to this overall machinery of power, and to make it clear that that can be changed. Right. So I think that's part of the narrative. And then, more concretely, in organizing, we can think about this recursive de-skilling we talked about, right? I think the fact that that happened, and happened so rapidly, is one of the ways in which socialist theories from the 1800s kind of failed us.

Right? Because it was kind of the assumption that, oh yeah, once the factory system is [00:53:00] fully developed, everyone will basically have the same wages, and then the constitution of the working class as a political subject will happen relatively easily, because they're already so clearly rooted in a shared socioeconomic experience.

And in shared work in these huge workplaces. Right. So that was the hope, that it would happen very naturally. But then we got all this stuff that stratified the working class. Right. And we know, of course, that the majority of people don't think of themselves as working class, whether or not we wanna define working class as people who work for a wage or a salary rather than from owning the means of production.

Most people think of themselves as middle class. And I think one fruitful way of thinking about that fact is: somebody's middle class if they're gaining some extra profit, some extra income, [00:54:00] because their labor is in some way related to labor that de-skills someone else, right?

But now we see that with AI there's a giant de-skilling move coming, precisely for a lot of jobs that have so far escaped the de-skilling, right? Like a lot of things that are only produced once, where it wasn't even worth de-skilling. But now it's coming for a lot of knowledge workers, right?

So I think there’s a, if we manage to conjoin it with this account of no, it, it’s about capitalism. We managed to make that clear. Then we can bring a lot of people into the fold of the left that have so far been in their objective socioeconomic situation, relatively privileged under capitalism, and therefore didn’t have a, a deep like personal economic reason to oppose capitalism.

Of [00:55:00] course, lots of people have ethical oppositions, et cetera. But, you know, one of the really stabilizing factors of capitalism was that the old system of colonialism was replaced with this globalized recursive de-skilling structure, where at the ends of these value chains, which are always in the West, workers get paid more.

Not because somebody decided that, you know, imperialism or global hegemony or whatever should come with a wage premium, and wouldn't it be nice if we paid people in the West more, but precisely because it was economically valuable to do that, right? The $10 that you can save by de-skilling ten people somewhere else are worth paying $5 more an hour to the person in the West who makes the machine, or whatever, that allows you to do that.

Right? So that has really stabilized capitalism, in the sense that these people had an objective interest in it. So if we can [00:56:00] make clear how much that same capitalism is now undermining that relative privilege, I think we can hopefully draw a lot of people to the left.

I think that is incredibly important to think about right now as well, because we know that when the middle class radicalizes and doesn't go to the left, we know where else they go. Right? And that is absolutely crucial. But I think there's a second aspect, which is, to say something nicer about middle class people, being a middle class person myself: a lot of people in these kinds of knowledge work jobs, very broadly construed, from artists to therapists to teachers, take immense pride in their work, right? And in fact, there are actually plenty of middle class people who [00:57:00] have foregone higher wages because they wanted a more meaningful job, right? Like, the teacher in the US is maybe, in a cultural sense, middle class, but Jesus Christ are teachers exploited in this country. It is horrible. But a lot of teachers take immense pride in their work, right?

Therapists take pride in their work, and so on. And I think in a lot of situations people are extremely aware of the threat that AI poses, not just to their economic bargaining position, their economic privilege, but also to the quality of the work that they deeply care about. Right. And again, there's a real sense in which we can think about this.

We call it the IKEA or fast fashion model, right? Everything that's produced with AI competes on price. It's like: look, we can offer you 20% lower quality at 80% lower price. And that is a way in [00:58:00] which you destroy the mid-level of quality, right? That's how fast fashion worked: well, we can make you a shirt that you can wear only ten times, but it's only gonna cost 50 cents, or it's only costing us 2 cents to produce rather than 80 cents or whatever.

Right? I'm making up the numbers, forgive me, but you know, that kind of principle. But I think in a lot of these sectors, the pride that people take in the importance of the work matters. I've never met a therapist who didn't really care about their patients. I've never met a teacher who was in it for the money.

Right? They’re in it because they think this is an important social thing that kids deserve good education. Right? So, so maybe we can, um, you know, organizing, combine these kinds of things, right? Maybe, maybe we can even politicize existing organizations, right. Existing organizations that aren’t labor organizations in, in any proper [00:59:00] sense, right?

I mean, take a, take a silly example. Like take the bar, right? Like for lawyers, that that’s an organization that is supposed to create unity in, in a political sense. There’s an organization that is also there for some kind of quality control, right? And lots of professions have that kind of thing, right?

Like a sense in which like professional jobs often have. That’s a kind of self oversight structure, right? And, and now we, we now we’re seeing with AI and attack both on quality and on on wages. So maybe we can leverage some of these kinds of institutions, the attack on quality to transform them more into vehicles of also doing labor organizing.

You know, I, I can’t give you a recipe for every single sector. I think, uh, for every single sector people who work in those sectors know much better how all these things relate, what kind of levers [01:00:00] there are to take people who might not be interested in socialist organizing, but who do care about the quality of their work, who do derive personal pride from that?

You know, like, lots of doctors are probably not particularly interested in socialism but are interested in making sure that patients actually get healed. You know, those kinds of things. Take the conjoint threat to people's livelihood and to the quality of the work that people care about, and politicize the fact that they're so clearly coming together, and that these are so clearly properties that capitalism by necessity produces if capital gets the chance.

And then, yeah, try to say: there are lots of people that we can bring into the fold of a renewed, much stronger, much bigger working class movement.

Cayden Mak: Yeah. Well, it occurs to me that there is so much [01:01:00] possibility there, and I also kind of see it already emerging in a lot of ways. A lot of, like, doctors' professional associations are starting to name issues, both about debt and precarity in the profession and about these combined pressures around the quality of care, cuts to Medicaid, and, as a result, cuts to hospitals. All these things happening at the same time create both crisis and opportunity for organizers to find new venues for solidarity.

Um, at the very least, there's definitely something there. And it makes sense to me that the things you're describing, about the political and economic effects and motivations of the development of AI, are actually a really big part of that [01:02:00] larger landscape, the larger context in which you're seeing other kinds of economic and social control being forced on people.

And I think another related piece of that is, you know, the people I know who work, for instance, in the tech industry are reporting vanishing numbers of entry-level jobs because of the de-skilling that you're describing. That is rapidly going to become an urgent crisis for young people who are trying to find their place in society and the economy.

There are huge, huge opportunities right around the bend, it feels like.

Hagen Blix: Yeah, I absolutely agree. I mean, you know, misery is gonna get produced, and the right is gonna try to use it to increase power, renaturalize hierarchies, entrench hierarchies, and direct the anger that that misery will naturally produce [01:03:00] at people who have nothing at all to do with the production of that misery, anywhere but, you know, the capitalist class.

But there is also real opportunity, and I think, as I said, the book really tries to make clear how we're in a situation where the fact that it's about capitalism is much clearer and easier to see than usual, and we should take real advantage of that in our organizing.

Cayden Mak: That sounds absolutely right to me. Well, Hagen, it has been a pleasure talking to you today. Thank you so much for making the time to join us and share more about your writing and your work.

Hagen Blix: Well, thank you. Thanks so much for having me. This was a pleasure. 

Cayden Mak: My thanks again to Hagen for joining me today.

The book, Why We Fear AI: On the Interpretation of Nightmares, is available now from Common Notions Press. There'll be a link to find it in the show notes. This show is published by Convergence, the magazine for radical insights. I'm Cayden Mak, and our producer is Josh. S. [01:04:00] Kim David designed our cover art and Logan Gross is our summer intern.

Special thanks this week to Jeff and Emily from Common Notions Press. Before we take off, I also wanna acknowledge that Kim Felner passed away last week. She was an organizer's organizer, a movement champion, and also a Convergence contributor. We'll put a link to her article archive in the show notes. Kim Felner, presente.

If you have something to say, please drop me a line. You can send me an email, which we'll consider running on an upcoming mailbag episode, at [email protected]. And of course, if you would like to support the work that we do at Convergence, bringing our movements together to strategize, struggle, and win in this crucial historical moment, you can become a [email protected] slash donate.

Even a few bucks a month goes a long way to making sure our independent small team can continue to build a map for our movements. I hope this helps.

