Making Myths to Make Money w/ AI Now

Alix: [00:00:00] Hey there. Welcome to Computer Says Maybe. This is your host, Alix Dunn, and in this episode I get to sit down with Amba Kak and Sarah Myers-West, who co-run the AI Now Institute and have just released their seventh annual, I don't know, report, um, which makes it sound like a boring NGO report about all the wonderful things they did last year that didn't make much of a difference in the world.

Alix: Um, but what I love about this annual report that they put out is that it is often a really fresh, kind of clear-eyed view on what's happening in the discourse around AI. What are the key political questions that savvy people should be asking themselves? And oftentimes, sort of, how power is showing up in these spaces in ways that are distorting the conversations that are possible.

Alix: So it's one of those reports that, you know, you finish reading and you're like, wow, I didn't see the world this way until reading this report. And it really kind of [00:01:00] shows a different path that we might follow if we were engaging in these questions more effectively. And also a kind of political realism about what types of actions we might need to take to be able to shift the trajectory of these conversations.

Alix: So don't just listen to this conversation. Please do read the report. It's extremely well written, as are all of the things that they put out. And it's also just, I think, a really grounded, excellent piece of work that they develop collaboratively with lots of other researchers in the field. So with that, my conversation with Amba Kak and Sarah Myers-West about the annual report that just came out from AI Now.

Alix: I am really happy to have on the show Sarah Myers-West and Amba Kak, the two directors of AI Now, to talk a little bit about a report that they just released, which I think offers a really interesting sort of mental model for understanding what we should be thinking about when we think about AI politics. Thank you for coming on.

Sarah MW: Hey Alix. Great to talk to you. Yeah, great to be here.

Alix: How long have you been writing an annual report or like this annual statement, or [00:02:00] how do you even, what noun do you use to describe this thing?

Sarah MW: So we started writing these reports as an institute back in 2017. It's, you know, something that AI Now has been doing for a really long time, and they've taken different forms, but what they've consistently done is reflected back the many different threads of, like, what's the state of the technology itself and how it's being rolled out, but also how are people being impacted by and reshaping the trajectory of this technology.

Sarah MW: Our last report really foregrounded, in this moment where ChatGPT had just been released, that there was all of a sudden this invigorated discussion about the technology, and it was really focused on OpenAI as the primary actor. And so we were like, this version of AI doesn't exist without big tech, and we really need to be talking about concentration of power and [00:03:00] intervening in ways that are going to address that concentration of power, if you want to get to any number of end goals.

Sarah MW: And in this report, we're taking it a few steps further down the line and looking at: what are the ways that there has, since this hype cycle, been this effort to sustain and prop up and really expand power through AI and hollow out many other elements of our society, and how can we be really creative and effective in pushing back on that?

Amba: I also think, to the question of, you know, we've been doing this report for multiple years: looking back at the evolution of these reports, it does feel like we've gone from a more implicit argument to a much more explicit argument, much more opinionated. We're still holding our credibility as experts, but we're very clear about our politics and about the fact that we are steering the field in a direction, and [00:04:00] also we're reflecting back a lot of the momentum that's underway in critical spaces, and we're sort of stitching it together in ways that we think are strategic.

Amba: I feel like there was some negotiation around, we wanna be the folks that lead with empirical evidence, and that's the focus. It maybe wasn't so before; it was just the AI Now annual report, right? But in 2023, we were like, this is a report that's naming concentrated tech power as the key challenge. Let's go for it. Right?

Amba: Let's go for it. Right. I think that it's been a really healthy evolution to naming our stake, our key argument, and then presenting our map.

Alix: Not

Amba: voice from above. But

Alix: yeah, I really just like the pres and I mean this as a compliment, the presumptively of it because most people write annual reports and they're like, this is what we did last year.

Alix: This is the impact that we had. And you guys are like, this is the state of the world. And I feel like it's good. It's good. I really, I appreciate it, 'cause I feel like I trust your view on the state of the world, and I think there are so few people looking at things from the vantage point that you all bring, and having that more expansive [00:05:00] sort of terrain of speculation. And it's not speculation; I mean, I know it's based on a lot of things that have actually happened already. I mean, Sarah, is there anything you feel like you guys have gotten wrong? Or is there anything that you look back at and you're like, I told you so? It must be an experience of just "I told you so" every year.

Sarah MW: I don't want this to be an "I told you so" thing; like, I would love to be wrong. But we're very clearly grappling with the amassing of power in the industry right now, and the fact that it's now very easy to be talking about tech oligarchy as a problem that's not just, like, an economic problem, but a political problem, a societal and a cultural problem. That evolution has become much, much more stark and clear over the last year.

Alix: I also feel like a victory-lap PDF would not be something that many people would read.

Alix: I feel like part of it is giving people grace so that they can arrive at the same conclusions, even if they've arrived a little later. So I wanted to, like, walk through the [00:06:00] sections of the report, 'cause I feel like each of the chapters is kind of a stimulating thing in and of itself, like, thinking about the chunks of topics that are top of mind for you, even if people haven't read more deeply than that.

Alix: And I just wanna start with AI's False Gods. I feel like this is something, I was just in conversation with Chris Wylie, who just did this series, Captured, and we talked about the difference between AI as a religion and AI as a cult, which was a very fun conversation. But I feel like there's much more conversation, and maybe it kind of joins up with the AGI mythology as well, but, like, this idea that we're being sold, as a society, something that's much more akin to a religion than a company, which makes it hard for us to know how to talk about accountability, because accountability feels, I don't know, like a tiny knife in a giant fight that is much bigger.

Alix: So do we wanna start with chapter one, AI's False Gods? And you wanna tell us a little bit about the claim that you're making, and then we can go chapter by chapter so you can sort of help people understand how that argument's constructed in its [00:07:00] totality.

Sarah MW: Yeah, so the section on AI's False Gods is really pinpointing that there has been this profound shift in the common sense around AI that has taken place over the last one to two years.

Sarah MW: And we're trying to both name and unpack these bigger-picture currents and narratives that are propping up the power of the AI industry. So these grand narratives around artificial general intelligence as the focal point of all of our interventions: if we can just solve for that, then we don't have to worry about the problems of climate change or curing cancer.

Sarah MW: The idea that the AI industry is too big to fail: we've become so dependent on it that this current trajectory is something that we cannot change or roll back from. The AI Arms Race 2.0, which is the idea of there being this arms race between the US and China, has been historically used as a justification to bat down regulation, but now it has gone even [00:08:00] further to become the justification for significant amounts of public funding going into propping up the industry.

Sarah MW: And then lastly, and I think most pressingly in a moment where the US is considering instituting this 10-year moratorium on state regulation of AI, the idea that regulation is harmful for innovation, which we know from evidence is simply a false claim and not the case. We do, in fact, need regulation in order to make innovation possible, and the innovation that serves the public, not just a small handful of companies.

Amba: I think the too-big-to-fail section, I kind of think that there's a really interesting and kind of underexplored argument there. Too big to fail is almost the outcome of this crazy capital expenditure frenzy, but it's also the point of it.

Amba: And so it's that kind of dynamic where we're pulling in not just private capital, but also public capital into this industry, based on the fact that scale is the only logic for advancement here. But we're [00:09:00] reaching a point where not only is it too big to fail, but it's sort of bound to entrench the power of very few players.

Amba: And that's kind of where we head in chapter two, which is Heads I Win, Tails You Lose. And we talk about the fact that, on this current trajectory, on the one hand, we point out why it's still a speculative business model. We don't really have the killer app, and we're clearly seeing that these companies are, by a very large margin, loss-making. And yet we explain all the ways in which, in any version of the world, there are a handful of players, and we name them, you know, these large big tech companies, that are likely to win. So the house always wins, no matter how this plays out.

Alix: Yeah, I think that's a dynamic we've seen, I feel like, because of the wave of VC-backed companies before this that are now such a key part of our infrastructures. When people freak out about the idea of Uber being told they don't have a license to operate in a place like London: basically, they were able to scale fast enough, ignoring all laws, that they get to a point in the [00:10:00] public consciousness, a point in just how people think about their day to day, and then it becomes cemented to such a degree that there just can't be a case for them not existing. It becomes really difficult to kind of backtrack from that.

Amba: Yeah, it's like sunk cost, but also there's, like, an inevitability, even in a cultural sense. Like, this is just the way the world is now. So what do you mean, we roll back?

Alix: And this goes to the next section, which is basically: it doesn't matter if these things have added any value, and actually part of the case-making for them is that you have to ignore whether they have. Because if you went into the details, specifically about does this use case make sense, does this use case add value, did that technology actually work, you start questioning the viability of the thing. But if you've already presumed that the thing is definitely gonna be in existence, they just get more and more slippery around: does this stuff actually work?

Sarah MW: And there's also this sort of third layer of everybody saying, this is so nascent, [00:11:00] we don't know what it's going to do, we don't know what its effects are going to be. And so we were like, okay, let's consult the record. Let's look and see: how is AI already in deployment? And in that section we unpack a number of things. One, that there are all of these inherent flaws in the technology that aren't really being taken into account when thinking about the use cases.

Sarah MW: So LLMs hallucinate, for example, but we're talking about having LLMs be used to inform doctors about potential interventions, or having them fill out our paperwork for, like, nuclear system filings. The kinds of things where you really don't want a system that just convincingly makes things up are, for some reason, being put forward as the use cases. And if you went and tried to substantiate, okay, how are these systems boosting productivity, it becomes increasingly clear that not only is there insufficient and really lacking validation [00:12:00] that anybody gets more productive, but there actually is a lot of evidence that what these systems do do is make workplaces more surveilled, more heavily controlled by bosses. They devalue work, so it's an excuse to pay people less.

Sarah MW: There's all of this evidence already in front of us about the significant effects of, and the decision-making around, how AI is being used, and we really need to be looking to and learning from it in decision-making about what's going to happen in the future. And unfortunately, I think what's been happening over and over again is this attempt to just wipe the slate clean and say that this is all speculative, and yet somehow still inevitable, when it's, you know, really neither.

Amba: Just a quick process point on this section in particular. So, many months before we started drafting this report, a bunch of organizations, including AI Now, got together in response to Senator Schumer's roadmap on AI regulation. It was [00:13:00] meant to come out in a couple of weeks, and we decided to work on a kind of shadow report: a response to what we already predicted, and eventually we were right, would be a kind of defanged, kick-the-can-down-the-road-again response to AI harms. And I remember we had this conversation around how evidence was kind of a trap in our field.

Amba: 'Cause on the one hand, it's being positioned as: AI is this new thing, we can't act too early, because what if we sort of kill innovation while this market is still just starting up? On the other hand, it's this whiplash for those of us that have been working in this space for a decade, because we already have a decade's worth, a mountain of evidence, that will tell us exactly how this is going to go, and is already going, and is already impacting people, right? AI isn't just being used by us, it's being used on us. It has been used on us for this decade. In that kind of frenzy, we put together this quick thing to be like, this is just the tip of the iceberg of the mountain of evidence that already exists. That was one of the several motivations for this chapter.

Amba: It is to [00:14:00] really, I mean, I don't know if it will ever be the last word, but the next time someone, a policymaker or, like, a tech CEO, says, we're really early on this journey, we don't know how this is going to go, you know, we can't kill innovation, we want to use consulting the record, among other things, to say: this is a compiled body of evidence. It isn't comprehensive, but it's kind of theorizing the evidence that exists, and it should really nip that claim in the bud. This is more of the same. Not new, not a blank slate.

Sarah MW: That reminds me, too, that there's this other thing that we wanted to really draw a lot of attention to, which is that there's this persistent effort to create AI-sized solutions to these very entrenched social problems.

Sarah MW: So, like, sort of reformulating the problem space so that AI is the natural solution. And what that does in practice is displace deep wells of grounded expertise across different sectors, and that's something we need to be particularly wary of and push back on. [00:15:00]

Alix: I like that frame of AI-sized solutions. 'Cause I feel like also, as you're describing this really, in retrospect, adorable moment where Schumer was gonna write that report, given where we are now, um, it just makes me think about this too, going back to the too-big-to-fail piece. What they're doing is at the structural level of world-building, and so then we're like, hey, that didn't work very well, I think that should also, like, maybe be illegal. Maybe we should get this totally feckless, and also marginally about to lose all of his power, politician in one country to, like, write a report that's a little bit more aggressive. And I feel like there's just this really obviously disproportionate field trying to counter this dramatic structural and social narrative, this economic capture, essentially.

Alix: Maybe this is a little bit too personal, but, like, how does it feel to constantly be like, um, and I'm not saying that you guys are only suggesting small things, but this force is just, like, coming at us, [00:16:00] drawing a curtain over our entire world, and we're like, I think there needs to be more accountability here. It feels Sisyphean, in terms of, like, you guys say the same things each year. You're right, I think you're 100% right about basically everything, but, like, the scale of what we're doing just isn't commensurate with, like, the size of the thing.

Amba: Yes. And also, I think, in the last two years, the ways in which the tech industry has started sort of saying the quiet part out loud, and the ways our critique is not just one that they're not dismissing, but one they're sort of owning, it really gives us, I think, a lot of ground to mobilize on. So for example, the AI-sized solutions: where we actually got that from was directly Matt Clifford, who's obviously a tech entrepreneur very beloved by the UK government.

Amba: He tweeted this one time being like, look, AI isn't everywhere yet, but that's only because we need to find more AI-shaped holes to fill. What we do [00:17:00] have as fodder is to take this, you know, straight from the horse's mouth and be like, this is the vision. Is this the vision that we wanna buy into? More importantly, who does this vision serve, and who does it disempower? And it sort of answers itself, I think, for the vast majority of the public, which is a point in our favor.

Alix: He's a really interesting one, too. Like, when he announced that 50-point thing that Downing Street made about how AI's gonna basically fix an unfixable set of problems for that country, he announced it historically conceptualizing the British Empire as kind of a positive thing, and that basically the British government needed to start essentially taking big swings in the way they used to. And it feels like a lot of this is so ahistorical that when you start bringing in, like, actual history, and, like, the fact that we're essentially recreating a lot of dynamics that have existed before, that also feels helpful. But I don't know how helpful, 'cause they're really good at just pretending like we're in year zero.

Sarah MW: I, and I know, like, many other folks who do this work, [00:18:00] frequently get the question of, well, what about the positives? What's the positive thing? Or, like, what's the thing that you want? One of the things that we need to be doing more of, and building more muscles here, is not saying what the positive use case is, but reclaiming that, yes, there in fact is a world that we want to build, and we can name very specifically what the component parts of that world look like and be more front-footed about it. And AI might have a part to play, but it is quite marginal to it.

Sarah MW: And in thinking about the scale, like, that's the fight that we want to be engaged in. It's building out a world where there's a functional economy with healthcare and childcare and, you know, security and sustainability. Those fights happen in many places that do not put AI at the center. And I think one of the messages that we're trying to send out into the ether in this report is that, for the folks who are engaged in those fights, there is a real danger of [00:19:00] AI coming for those domains if we drop the ball or are too credulous about these narratives about AI. We could lose the endgame: the impacts of AI for healthcare, for education, for climate.

Amba: I was gonna say, Sarah, too, just drawing attention to the fact that, for those that haven't read the report, we conclude our opening chapter with this positive manifesto. And I think it is a manifesto that decenters tech very deliberately. And so it's like, actually, the world we want is good jobs, shared prosperity, freedom, and autonomy. We want an innovative tech ecosystem. One of our key calls to action in our concluding chapter is: how do we reclaim this notion of innovation?

Amba: And our recommendation there is to stop starting from the position of how can AI get us to good jobs, and instead ask: what is the kind of ecosystem we need to get to these end goals? And then thinking about if AI has a kind of place to play in [00:20:00] that.

Sarah MW: Yeah. And the path to innovation definitely does not run through sinking all of our money into building out a single grand model to solve all of our problems, at the cost of defunding the NIH and many other places where there's a rich and diverse research space.

Sarah MW: I want that world to be built from the ground up, not from the top down.

Alix: I love the concluding section, and I think that's why: because it's not about AI. It's a very good, like, one-two punch of, like, these people are charlatans. If we let them, they're gonna capture our entire economy. They're gonna capture all of our states.

Alix: They're gonna capture, basically, this top-down structural thing, and we can't go back. If they win, we can't go back. And: this is actually what we want, and how we should be articulating and organizing what we would like to see instead of that vision. I think it's really important that we reiterate: touch fucking grass.

Alix: Like, what are we doing? Okay, I have another, let me see. Oh, so I was thinking: one of the [00:21:00] things you say in the report is that "why society should never accept this bargain is the critical question at hand." And I, like, stuck on that line for a while, and I started thinking about... I mean, you wouldn't remember, 'cause I think we were, like, very small children, but Margaret Thatcher saying there's no such thing as society.

Alix: Um, and I just started thinking about, like, how the conservative project has really been to make it impossible to think about ourselves as part of society, and also to conceptualize society coming together to make determinations about how power and accountability and allocation of resources should happen.

Alix: So do you wanna say a little bit about that sentence? You bolded it so I presume it's one you guys were thinking a lot about too.

Amba: We did bold it, and we stand behind it, but there was also some discomfort with it that I wanna unpack, which is maybe similar to the one that you are getting at. For me, it's that "society has a lot to lose," or Sam Altman saying that AI is gonna benefit humanity, or even Shahan as ABA saying in her book the claim that, like, it's as if all our data is the same and we all have the [00:22:00] same stakes. Like, I feel like I have an immediate allergy to the universalizing narrative, and, in a way, we're also putting forward a version of that.

Amba: But I think we had a lot of conversation, and we continue to, on the fact that there's such an opportunity right now, because it is in some ways a shared project, right? There's a vision coming from Silicon Valley, and it is the rest of us, and we need to really assert that and leverage it. Because, even more generally and less partisan, it's just: what does it mean to be human? What does it mean to learn, to create, to build relationships, to innovate? I think there's a level at which there's a kind of reflection on those questions that is affecting all of us and is the foundation for, I don't think it's yet common ground, but it's the foundation for building, really, a much more universal common ground in resisting what's coming outta Silicon Valley right now.

Sarah MW: And when the technological project has been so invested in identifying and targeting and narrowing the, like, specific qualities that make each of [00:23:00] us different from one another, and trying to exacerbate and make those differences wider so that it's easier to sell product, or whatever the end goal is, there is something that's really powerful about that project of setting aside distinctions in order to build meaningful solidarity because of, not in spite of, them.

Sarah MW: And we're seeing examples of that. Like, if you look at the work of Amazon Employees for Climate Justice, as one example, where their project is invested in building solidarity all along the supply chain of that company, where engineers and warehouse workers and people from many different positionalities invest in the shared project of agitating for climate action and responsibility from the company, as well as workplace rights. And they're engaged in labor organizing. That project of [00:24:00] joining together to move towards a common good, I think, is something that we broadly need to keep building muscles around, but now is definitely the moment to do it.

Amba: You know, the Anthropic CEO, Dario Amodei, said 30% of coders might lose their jobs, or maybe it wasn't coders, but someone said, right, coders might be out of jobs in 30 years. And there was this huge media frenzy around it, in a way that I think there wouldn't have been if it wasn't white-collar tech workers we were talking about. But it gave a lot of us that are doing the kind of work we are doing an opening to be like, actually, this is a symptom of a future that's not inevitable.

Amba: And there are ways in which other sections of society have already been at the receiving end of this kind of devaluation for a very long time. So there are also ways in which, I think, the universalizing impact of this technology gives us openings to build solidarity that maybe didn't exist before, or there weren't as many opportunities for.

Sarah MW: It's funny that it's always tech CEOs that make those projections, by the way.

Amba: Yes.

Alix: And then you [00:25:00] have these, like, stenographers in media that just report them as, maybe not true, 'cause it's speculation, but, like, important messages. I don't know how we get away from that, this, like, PR, media-training, news-cycle-driven thing: Sam Altman tweets something, or, like, OpenAI writes a blog post, and then basically Kevin Roose and Casey Newton are like, that's so interesting, and at a minimum it should be covered. I'm not saying it's necessarily gonna happen, but, like, we should cover these guys. Like, how do we subvert that feeling that they are newsworthy in and of themselves, meaning that they have this power to kind of wag the dog, the tail wagging the dog? Or, like, how do we make it so that it seems like they're the tail rather than the dog? I don't know if that works.

Sarah MW: It's a good question. I mean, I think some of it comes through imagining the work differently. I think if you wound back the clock to 2018, pushing for [00:26:00] attention to issues of tech accountability and getting that on the front page of the New York Times was a really effective strategy, right? Because it changed the way that policymakers acted; it changed the way that tech CEOs acted. This media environment has become so fragmented, so chaotic, it's this constant dynamism, that it's no longer an avenue for change in the same way. And in the same breath, that kind of makes these CEO statements a little bit more marginal.

Sarah MW: We don't have to take their word as gospel, especially not imbuing any sense that what they say is going to be the future, in fact will be the future; actually, we can build it differently. The project is making sure that policymakers and folks who have the leverage internalize that. I think that that's very much still a project at hand for most of us. A lot of this is just fluff, it's not substantiated, and we have to just continually remind [00:27:00] ourselves of that.

Amba: Yes. And also, there's a tension that Sarah and I, I think, have been working through, even in the last few weeks, which is that often what we're hearing from policymakers is this question of, well, do you think that we're gonna see a mass displacement of jobs in the next 30 years, like we are hearing from the tech CEOs? And how does that square with the fact that you all claim that these technologies aren't really as good as they claim they are?

Amba: Pinning down that, irrespective of whether these technologies work as they claim they do, they're going to be used in these ways: basically, employers aren't waiting for the results before they actually lay off workers. They're doing it anyway, because they're incentivized to do so, and then AI becomes a smokescreen. So I think there's an interplay between saying, look, if they say this is the future, then this is where we are headed, because incentives are aligned that way, but we can do something about it. And I think that's the, like, resist inevitability, but also believe them when they say this is the vision they're gunning for. It's not just a prediction; it's where they want us to be headed.

Alix: Yeah. They want mass unemployment. Like, if you framed it like [00:28:00] that, if you were like, I don't know if that's where we'll go, but it's interesting that these giant companies that are getting bigger and bigger government contracts actually want mass unemployment. And rather than, like, letting them then quickly pivot to UBI, because that's all vaporware as well, really focusing on the fact that Sam Altman, when he says we're going to replace this many workers with our AI, is then saying, like, oh, if we do that, we have a 25% unemployment rate in the US. Is that really what politicians want? And I feel like we could do a much better job of recasting their visions in a frame that is much darker, on terms people would understand and then potentially resist themselves.

Amba: Exactly. Without giving in to the, the, like, hype claims.

Alix: Yeah, it makes them sound evil, because they kind of are, rather than evil and also genius. Just take out the genius part and let's focus on the evil.

Sarah MW: Yeah. It's not even, like, recasting. It's just: this is the "so what." This is what they're saying. This is what the upshot is.

Amba: Same quiet part out loud, which is what I was saying. Like, I think this is also where we have so much [00:29:00] fodder to work with.

Alix: Yeah. I also don't think there are that many of their investors that would mind a 25% unemployment rate, 'cause they haven't really conceptualized what society would look like if that were to happen. And your whole report is essentially about the structures of these systems, not the kind of micro-transactions, because if you get to the micro-transaction level, you've essentially accepted the premise that these things will take over the world. But it feels like a very successful narrative strategy, and political strategy, to have people start thinking about the harms of these systems as connected to their individual choices. You know, like, if I bring a reusable coffee cup to the coffee shop, I'm contributing to there being less waste in the world, when really the people that are making all the waste are a small number of companies, and we should regulate them. Um, like, it sort of subverts our ability to think about the structural stuff that we might actually be able to stop.

Amba: Yeah. I mean, I think there's the shiny object phenomenon, which ChatGPT has become, especially in, you know, more affluent and urban circles. It's like everyone's debating how [00:30:00] good ChatGPT is, or, you know, they're experimenting with it. And I don't think you wanna come at that conversation in a way that's demeaning or even dismissive. And so that's why we've really centered the narrative you were just referring to in our report, which is, like: let's not spend all of our time debating the specifics of whether an individual application like ChatGPT is good in a particular context, in a vacuum. How about we step back and also have a conversation about whether unaccountable power in this tiny section of the tech industry is good for society?

Amba: And I think the move there isn't to say that we shouldn't be experimenting, and it isn't to alienate folks that are interested in doing that right now, but it's to try to get people to be more curious and motivated to join a fight against unaccountable power. My instinct, just having conversations with people in my close network, or even my family, is that there is a moment where the rubber hits the road with those two conversations, when we might realize that refusal, individual refusal, the, you know, paper-cup type of analogy, [00:31:00] is key to that trajectory at an individual level, and it's part of what's needed. But I think that's gonna be part of a journey that we wanna take a much larger section of society on. Like, we need to get people interested, curious, and motivated to be part of a fight against a broader power structure. And they can do that while still playing around with this stuff in their free time.

Alix: Okay, so congratulations on the report. I did not realize you've been doing this for eight, nine years. That's kind of wild. So, um, do you wanna say a little bit more about why you do these and what you hope comes out of the publication?

Sarah MW: I would love to. What we're doing with these reports, in the midst of a very frenzied and dynamic public conversation about AI, is trying to step back and look at that bigger picture, look at the evidence on the record, look at where people are mobilizing and the ways that they're mobilizing, and knit all of that together. That, I think, is the lens that we're [00:32:00] trying to offer in these reports. Because, look, there are so many fights to be fighting right now.

Sarah MW: We're all getting bogged down in the day-to-day relentlessness of the challenges at hand. And so being able to look at the bigger picture and orient around, like, okay, that's the north star that we need to be moving towards, is our hope and our intention in these reports. And to say, okay, this is the bolder vision of where we could be heading, and the things that are on the horizon, and the horizon that we wanna be directing ourselves towards.

Sarah MW: That's really what we're up to, and it's going to guide our work in the foreseeable future, and we hope it'll be an opening to engage in conversation with more folks who are similarly engaged. Honestly, I mean, all of the conversations that led to the production of this report are part of the work as well.

Amba: I was actually gonna say, there's a very long list of 50 contributors that provided feedback, provided constructive [00:33:00] criticism, made additions to the report, and they come from very different worldviews and theories of change, even under the big tent of critical AI work. There are technologists, there are more kind of grassroots advocacy organizations, there are more straightforward policy, wonky perspectives in that list.

Amba: And I think this time, compared to every other time, that itself was really a learning: realizing that there's a really big tent, that there are people within that list that will probably disagree with each other on a lot of stuff, but that there's a way in which we all see ourselves, in some way, as part of a fight around reasserting public power. And we see the urgency in this moment.

Amba: And it's interesting, 'cause, like, a few weeks after, or almost simultaneous with, our report release, we had this kind of state AI law moratorium news, right? And if there was, you know, a proposal that exemplifies the lowest common denominator, like, something we can all agree is an unmitigated disaster and we should oppose, that was it. And it brought that big tent together, not just the kind of 50 organizations or names that contributed to [00:34:00] our report, but much broader. You know, it was bipartisan. Um, you had tech company CEOs, some of them, even come into the mix a couple of weeks ago.

Amba: But just as a counterpoint to that big-tent brouhaha vibe: it's great, and it's really important to build that big tent. But I think a strategic muscle we've been building is also to catch when that kind of instinct to build a big tent can sometimes cut against the need to have a much bolder, much more critical stance. And so even in the weeks since, we've already gotten ready to, like, break the fuzzy consensus and say, yes, we need regulation, yes, AI regulation is good, but also, it matters what kind of regulation we're pushing for. Not just any regulation is going to be good; a lot of this is weak sauce, or legitimizing. So this is somewhat tangential, but I think it is also the work of this report: to build consensus and also identify the fault lines within this big tent of critical AI work.

Sarah MW: And not just regulation, too, right? Like, pinpointing that [00:35:00] worker organizing is a really important front, that narrative strategies are a really important front, that reclaiming the domain of what public-centered innovation looks like is an important front. Like, there are many tools in the toolkit, especially in this moment, and we're trying to utilize all of them as effectively as possible.

Alix: This was so great. Thank you. Thanks for writing the report. We'll drop a link to it in the show notes, and I hope, I don't know, I hope it has the effect that you are looking for.

Sarah MW: Thanks for having us, Alix.

Amba: Yeah. I can't believe it took this long.

Alix: I know. I know. It's kind of embarrassing. We have conversations like this a lot, all the time.

Amba: On your podcast. Yeah.

Alix: Finally, we're recording one. Okay. I hope you enjoyed that. Little bit of a summary, little bit of a juicy conversation that hopefully was, like, a whistle-stop tour of the report. But as I said up top, please do read it. It's really good.

Alix: It's worth the read. We couldn't get into all the details, so please do dig into those details. Next week we have an episode, the first of two, about [00:36:00] our time at FAccT. Three of my colleagues went, interviewed a bunch of people, and came away with some really cool insights from people that attended. And I think it also kind of shows what the state of the discourse is, what the conversation is within academic spaces focused on fairness, accountability, and transparency. Do check that one out next week, and there'll be two of those back to back. Thanks to Sarah Myles and Georgia Iacovou, our producers of this episode, and thanks especially to Georgia, who went to FAccT, did a bunch of interviews, produced this episode right after returning, and has been hard at work making something that hopefully is, uh, a lovely, not summary, but synthesis of FAccT for those of you that weren't able to make it. So until next time, see you soon.
