After the FAccT: Labour and Misrepresentation
Alix: [00:00:00] Welcome to Computer Says Maybe. This is your host, Alix Dunn, and I am with two colleagues who recently were gallivanting around FAccT in Athens and came back with lots of goodies: research and researchers who have stuff to share. Our first episode on this was last week. If you missed it, do catch it, but we tried to make these standalone, so don't feel like you can't listen to this one now if you haven't listened to the first one.
Alix: But before we get started, do you two want to introduce yourselves?
Georgia: Uh, yeah. I'm Georgia. I am the producer on Computer Says Maybe. A producer on Computer Says Maybe, sorry.
Soizic: And I'm Soizic. I'm a member of the Maybe team, and I was pulled into making the podcast for the first time.
Soizic: Very excited about it.
Alix: Pulled in! Soizic does a bunch of work in this area, has been to FAccT before. Knows a lot.
Soizic: Yeah, it was my third, my third FAccT. So I'm a FAccT veteran by now. [00:01:00]
Alix: Last episode we talked about climate and physical infrastructure and the like; in some ways, the brutality of a lot of these systems when it comes to the actual physical footprint that they have.
Alix: This time we're kind of switching gears.
Georgia: Yeah. Um, so I tried to structure this one so that we'll look into sessions and pieces from people who are looking more at hidden identities and misrepresentation, and the way AI and other digital systems facilitate those levels of misrepresentation and shielding of identities.
Georgia: And also, I think, how that is rooted in labor. This leads quite well into the first person, Priya Goswami. She had an exhibition running for the whole time, um, which was pretty cool, so people could just dip in and out between sessions. And there was this giant wall of photographs, portraits of women from around India.
Georgia: They're frontline health workers, called ASHA workers. They're basically like midwife support. I don't know whether that's the right [00:02:00] terminology, but I think maybe that's how we would frame it in the West: you're supporting mothers through their pregnancies and then when they have their children. They're really underpaid, but they care a lot.
Georgia: They have to input information through an app, data about the people that they care for, and then send that information off to some invisible authority. They're pressured to do so as part of their job, but they don't know whether it's a good idea. So, here's Priya.
Priya: The name of the project is Digital Bharat.
Priya: I'm presenting it at Athens, uh, at the ACM conference. Not everyone knows, but 'Bharat' means India in Hindi. I did not want to translate it, because we talk about language bias. It's the world's seventh largest country, the most populous country; people should know the name Bharat. So I deliberately did not translate the meaning of Digital Bharat.
Priya: It's Digital India. And the way I see it, I'm a feminist tech maker and a media maker, and I wanted to bring stories of rural Indian women. So for me, my India is not the [00:03:00] India that is represented at the World Economic Forum with massively cool payment DPIs, payment digital public infrastructure. India is acing digital public infrastructures.
Priya: Uh, I wanted to look at the DPIs around two fundamental rights of human beings, which are health and livelihood. India definitely started as a socialist country first. I don't want to say socialist; I want to say the constitution was socialist. And when our constitution was written, it was written with some equality and access for all.
Priya: Fundamentally speaking, that's how our constitution was written. So I chose health and livelihood because those are such basic parameters for life, right? I wanted to challenge the idea of the Shining India with the awesome digital public infrastructures, which make UPI payments so easy, and I wanted to talk about the fundamental access to health and livelihood digital public infrastructures. And being a feminist media and tech maker, my question was, what [00:04:00] about the women?
Priya: How are women responding to technology now? In my observation, and basically 13 years of field work, the most resilient population of any country, anywhere in the world, is women. You go to a war-torn country, it's the woman who's pulling it through by the skin of her teeth, and she will do whatever she has to do, no matter the situation.
Priya: Yeah, and that's true for Indian women. That's true for all the states in India, all the 200 languages, all the cultures that India has. It's the most resilient community, and yet it is the most invisibilized community, because it is so easy to say, this is care labor. So the question I bring to the table is: what happens when data labor intersects with care labor?
Priya: Well, some of them are doing data labor without knowledge of what data labor is. These are ASHA workers. ASHAs are Accredited [00:05:00] Social Health Activists; that shortens to ASHA. ASHA also means hope in Hindi. They were brought together by the government in 2005, exactly 20 years ago, with the idea that they would provide devotion and care to women and children.
Priya: Now it's 2025. They are not just giving seva, the unpaid devotional labor. They're also providing data labor. These are not gig economy workers who signed up to do some data labeling for an AI thing that they may or may not understand. These are community health workers who are providing, uh, devotional service, going door to door and basically checking on pregnant women and children.
Priya: Are you doing okay? That's their job. Are you having contractions? Don't deliver at home, go to the hospital. And now suddenly, if somebody were to open up and say, oh my God, I'm having contractions, I have a history of high blood pressure in my pregnancy, she would have to make a note in an app, and she wouldn't know what the implications of that data are.
Priya: And [00:06:00] the tricky thing is, health is a state subject. India is divided into multiple states, kind of like how it is in the US: this is federal law, this is state law. So it's different from state to state. Some states may or may not have a digital payment gateway for ASHAs. Some states may not have apps yet to, um, log every piece of information about who is a high blood pressure patient, or how many cases of tuberculosis there are.
Priya: But right now, the kind of infrastructures we are at the receiving end of are non-consensual. Yeah. And there is no consent by design.
Georgia: You have no idea. As in, the ASHA workers have no idea where that data is going when they log it and hit submit. They absolutely have no idea. And there's this perceived pressure that they have to do it, otherwise they're not doing their job correctly.
Georgia: Is that right?
Priya: Absolutely right. And what they have to do is not just gather that data like a data retriever; they also have to [00:07:00] report it to somebody who's above them, an auxiliary nurse midwife, an ANM. It's like a pipeline. And they have no idea where the information is going.
Priya: And the tricky thing here is, if I have an ASHA worker, uh, who comes to my house and takes care of my family, I trust her. She's like an extension of my family. So then she's going to say, hey, what's your social security number? Hey, what's your phone number? Yeah. And your social security number is now linked to your bank account.
Priya: And say there's a scammer who siphons money out of your bank account. Who are they going to blame? The ASHA worker. Technically they're supposed to take care of 200 homes, which means roughly a thousand people, say five per house, right? Some will take care of 700 homes, a thousand homes, forget a thousand people. So just to give you the context of the data.
Priya: Health data, yes. But health data of how many people? So [00:08:00] we're talking about at least a thousand to 2,000 people, each ASHA worker handling that kind of data. But what they don't know is where the data is going. Yeah. And what is the implication of having someone's information taken from an analog register to a phone? And because we don't have a central health law in India...
Priya: Yeah. It's so easy for Haryana to say, this is our health system in the state of Haryana. This may not happen in Kerala, so who cares? We have a bill, we have regulation to protect the health data; the southernmost state in India may or may not have it, so who cares? So it cannot just be read in one context. You know, there's a saying in India: the culture changes every 20 kilometers.
Priya: Believe you me, it does. Interestingly, my husband and I grew up in the same city. We grew up in the capital, 20 kilometers apart. [00:09:00] His mother tongue is different from my mother tongue.
Georgia: Is there a sense of pride in the difference of cultures? Do people like that? Like, my state is like this, your state is like that?
Priya: Oh, absolutely.
Georgia: Yeah. Right.
Priya: That, that's the subtext. Yes. It's also the subtext. And it's not like ASHA workers are, uh, not human beings. There's a bias of caste. There's a bias of class. There's a bias of gender. There's a bias of religion, as Islamophobia takes root in India more and more and more. So there's so much happening.
Priya: They're human beings too. And in, say, an economically poor state, where the migrant population within India is supposed to, you know, migrate for better opportunities, they would be like, mm, these workers, they don't know anything about health, so will they even record that data properly? There are so many layers to the problem.
Priya: And a lot of people ask me, what is their motivation to go on if they're not even paid? And they say, we get a sense of pride, and we get to step out of our abusive home situations, which is where [00:10:00] maybe they're dodging battery, or maybe they know someone who went through it. For example, I love this anecdote.
Priya: We were in Kerala. It's the southernmost state of India, and the ASHA workers there were on probably their 122nd day of protest, and they're protesting so hard. Some of them are on hunger strike; if their blood sugar drops, they ask another person to take on the protest. So this is happening. Suddenly, out of nowhere, there was this LGBTQ alliance who sort of moved into the protest, and I'm like, what are they doing here? Because ASHA workers are, like, traditionally traditional women, in saris and all of that. And then, um, somebody from the, you know, trans community, they stepped up, they took the mic, and they started to speak.
Priya: And then I was like, oh my God, they are supporting the ASHA workers' movement. And then I spoke to their representative. Their photo is right there in the center; they're the woman with the shorter hair. Yeah. And they're like, ASHAs are frontline workers. They know [00:11:00] everything. They shield us.
Priya: Yeah. How can we not support them? They know everything about everyone. If someone asks, what's your gender, in a government survey, and we want to say, prefer not to say, they're the ones shielding us in that 'prefer not to say.' I get goosebumps when I talk about this. It was so unexpected, so unexpected in the field work. And I'm like, suddenly, you know, these trans women, they're at the protest and they're taking the mic and they're petitioning for them.
Priya: And I'm like, how cool. And then they're like, yeah, they shield us.
Alix: I love the translation aspect of this, the way she presents her work and the way it connects to her language. She prevents hegemonic takeover of her own work, um, and basically preserves what it is in her own language, talks about that language, and kind of insists that a listener or a viewer engages with what her language calls things. 'Cause I feel like the entire project of AI, linguistically, is oftentimes to make everyone engage in English. And I feel like that's really nice.
Georgia: I liked how she was kind [00:12:00] of like, so yeah, the World Economic Forum will position India as this giant shining beacon for digital public infrastructure, but really it's this. You know? And I just thought that was a very good point.
Soizic: Last episode, we were talking about deconstructing narratives by grounding them in real-life examples, and I think that's also what we tried to do with who we chose to interview: to show where AI intersects with real things, to put it quite simply. Priya has a bit on digital public infrastructure in India really trying to shine through having this really developed set of software, the DPIs that Divij Joshi was also talking about in a previous podcast episode. It's just really interesting to see the disconnect between what a country is saying and how the country is managing to convince others that it's really working. It's something that I've heard from French government officials, being like, oh, and India is so good. So having this very [00:13:00] concrete, personal vision of the fact that it doesn't work is really nice. Maybe one last point about what I think was cool about Priya's exhibition: first of all, the exhibition part.
Soizic: There was something really powerful about seeing the faces of these women as we were at a conference where sometimes you can have sanitized discussions about responsible AI and what it means. Just having to stare at their faces and remember that we're actually talking about real humans was really powerful, also for an academic conference. Can you actually describe what the exhibit was like?
Georgia: So it was a large display, a grid on the wall of larger-than-life, deadpan portraits of, I would say, about 20 to 50 women in India. The portraits themselves ranged from a dead stare into the camera, very powerful, quite emotional, to the women perhaps turning away and putting their hands over their faces.
Georgia: And giggling a bit. I asked her about this, and she [00:14:00] did explain that it was, um, because sometimes her husband was taking the photo. And I think culturally it's just a bit, I dunno. Yeah, it's a different vibe when a man is looking at you or taking a photo of you. And this photo display completely towers over everyone.
Georgia: And I think something really good that Priya said is that she wanted people to stand in the shadow of these women for once, and be overshadowed and overwhelmed by their presence, which I thought was very powerful. That was cool. Um, another aspect of it was two video pieces, which were playing concurrently.
Georgia: They were about five minutes long, and she essentially went around and interviewed ASHA workers and just spoke to them. There were clips of them saying things like, yeah, we have to use an app now, but I don't understand what this app is for or why I'm using it, and being generally frustrated.
Georgia: The other video was interviews with women who are day laborers in Rajasthan, who you'll hear more about in Priya's second clip. Same thing: they have to go to a specific place at a specific time to log that they've done their [00:15:00] work, and somebody has to facilitate that with an app, which is essentially a surveillance tool.
Georgia: The final aspect of the exhibition was a computer game that she's in the process of making. Essentially it's the experience of an ASHA worker waking up at the beginning of her day, doing lots of care work just at home, because she has to take care of the kids, cook, clean, and so starting her day at a ridiculous hour, like four or 5:00 AM. The next part is you have to dodge cars, because you're crossing Indian traffic, and if you lose, it's just like, clearly you're not used to the traffic in India. It's very tongue in cheek. And then once you've done all of that stuff, you are already pretty overwhelmed and exhausted, and you're like, I don't understand the point of why I'm doing anything. Then you get to go to lots of different houses, talk to women, tell them things like, please make sure to have your baby in the hospital.
Georgia: They say, yes, thank you, whatever you say; they're very compliant. And then it gets to this bit where it says, log information into the app, and then it says, would you like to send it to dah, dah, dah? I can't remember what it says, but it sounds really vague and [00:16:00] very confusing, obviously intentionally, and you're just like, well, yes, I thought, isn't that what I'm supposed to do?
Georgia: It makes you feel like you're informing on the women, in a weird way. Like, would you like to send this data to the central authority? And you're kind of like, oh wait, I thought I was supposed to, but now you've made it sound bad. So, I don't know, it was quite interesting. So there were those three aspects to it, and Priya was there the whole time, always ready to talk to people, which was really good.
Georgia: It looked exhausting, but it was really, really good. And yeah, it was a fantastic conversation. I'm really glad that she took the time.
Alix: Cool. So do you wanna introduce Kimi?
Georgia: Okay. So, Kimi Wenzel's session. I'll let her introduce it properly, but essentially it was on the ways in which generative AI image generators misrepresent people visually, and how they try to put across people's invisible identities. It was a two-part session. The first part was a panel on the role of the artist, and what have you, with generative AI. The second part, she'll [00:17:00] explain more in our clip, but essentially she got people to send in images of themselves and describe the images, and then she ran those descriptions through an AI image model.
Georgia: She printed the images out and put them up on a wall in the workshop, and then tried to get people to see if they could identify themselves. So it was quite an interesting exercise. Um, and yeah, we had a nice little conversation about how this makes people feel about the way an AI image represents them.
Georgia: Um, yeah. So here's Kimi.
Kimi: Hello, my name is Kimi Wenzel. I am a PhD candidate at the Human-Computer Interaction Institute at Carnegie Mellon University, and I study the downstream harms that emerge out of misrepresentation and bias in AI systems. So my session was called Invisible by Design: Generative AI and Mirrors of Misrepresentation, and it was really looking at the downstream harms that misrepresentation in text-to-image models can have on people. We had a panel with some awesome experts from different [00:18:00] fields. They represented artist communities, critical art communities, mental health research, and AI ethics research, more scholarly work.
Kimi: So that was a really wonderful panel. Afterwards, we had an interactive component where we had pre-solicited some self-descriptions from participants, which we generated images out of, and then we printed those out on Polaroid film, so analog film, and distributed those back out to the people. So everyone received an image, an AI-generated image of themself, on Polaroid film. And from there we had a discussion: how did everyone feel about it? Speaking towards their reflections, and then moving towards, hopefully, more proactive imaginations regarding what we can do to counter any of the negative feelings that emerge out of these models.
Georgia: And, okay, just to be clear, I obviously participated in this, [00:19:00] even though I couldn't go to the second half of the workshop. Essentially the form asked us to upload an image of ourselves and then describe what is in the image. And I literally just put, woman with brown hair wearing orange sunglasses, and a couple of other details.
Georgia: And then what your team did was run that description through three different image generation models, and then you picked one of the three at random, printed it off, and just handed it to me. And I was like, what the fuck is this? Like, what? Yeah. And my intuition there was that you ran an image through; it was image to image. That's what my intuition was. But it was actually...
Kimi: It was text to image, yes. So during the session here in Athens, we actually put all the pictures on a board, and we had people try to identify who they were. For some people it was easier, and for some people it was more difficult, 'cause, you know, the image really did not look like them.
Kimi: And one of the reasons we chose to do text to image was to protect people's privacy. We didn't want to send their personal images to these models, and felt that text [00:20:00] was a safer option for that. Yeah. It also goes to show, for anyone who wrote what we call, quote unquote, more invisible identities: we have brown hair, for example, but there might be other identities that we hold that are not visual.
Kimi: Um, and seeing how those are rendered through the image is interesting. Um, so for example, there was one person, I think she said something like, I am a mother and a professor, or, I'm a mother and a scientist. And the image generator, I actually forget which one this was, but it outputted a side-by-side image of someone who looked like a mother and someone who looked like a scientist, as if you can't...
Georgia: Be both. You literally can; you can imagine being both. Yeah. That's really interesting. Uh, do you want to say a bit more about... I feel like you touched on this really briefly, but you were talking a lot about the gap between people's perceptions of themselves as a living, breathing human, and then what happens between that perception and when they experience images of other people that are digitally mediated in some way. [00:21:00] I mean, even without generative AI, you know, people might be scrolling infinitely through Instagram, looking at hot people constantly, and that gives them some level of dysmorphia.
Georgia: But I think what you've talked about is how generative AI images have exacerbated this problem. Do you want to say more about that?
Kimi: We've kind of seen this as a slow evolution of technology. So I gave the example of Google image search. We can use the example of CEO: you can search CEO, and then it might only show you, oh, CEOs look like white men.
Kimi: And there have been debates about this for a long time, over a decade now, I think, about whether these results should be representative of the real statistics, which frankly, at least in the US, is like ninety-something percent white men, or whether they should be more aspirational, which some argue would be more beneficial for the people doing the search.
Kimi: So we've seen the Google image search, and then we keep evolving. We see these Snapchat filters, or filters on [00:22:00] image editors, which I think are probably the closest thing to these generative AI image generators. When people are generating images of themselves, they see a very beautified, um, symmetrical, wide-eyed, yeah, like babyface...
Georgia: Kind of like the...
Kimi: Yeah. Portrait of themselves. And there's even a term now, like Snapchat dysmorphia or selfie dysmorphia, where plastic surgeons have reported that people are coming in and saying, can you... I want this face. And it's just a filtered selfie of themselves, which is quite concerning.
Kimi: Through that evolution, we now have AI image generation. Where it's image to image instead of text to image, people are seeing beautified images of themselves, which I anticipate will have kind of a similar effect. And someone in the audience asked me, are there any similarities or differences between this and just the regular media misrepresentation that we've historically had? I think there are a lot of similarities, and some differences [00:23:00] which we still have to study. This is kind of ongoing work, but I do anticipate that people's technical literacy and also AI affinity will impact how they interact with and perceive their AI images.
Kimi: So if people glorify AI, or if people are very against AI, of course they will have different perceptions of this. But yeah, I think I'm most concerned probably about youth interacting with these models. Sometimes, not in my research but in other work that I do, I work with high schoolers, and they think AI is so powerful and so true, like a point of truth.
Kimi: Um, and I was also meeting with a high schooler who, as a young artist, was telling me, I've loved art since I was a kid. I love to sing, I love to draw. And she's like, I feel like there's no point in me creating art anymore, because AI exists.
Georgia: The way she solicited the image descriptions of ourselves was, [00:24:00] send an image of yourself and then describe it as if you're writing alt text.
Georgia: So that's what I did. I really just described the visual elements of the image itself: woman with brown hair sitting in the sun with orange glasses, or whatever. But other people put in descriptors of themselves that are not visible, which I thought was interesting.
Georgia: Like, I'm a mother. Yeah. Kimi made the point that there are things you can see that are part of our identities, like you and I have brown hair, but you being a mother, or a scientist, or a researcher, is not visible to a model, yet it will try to capture that visually.
Georgia: Exactly. Yeah. Which, on the face of it, sounds hilarious. And then obviously the downstream effects are the horrible face dysmorphia, and going to doctors and being like, make me look like this Snapchat filter. And yeah. Which, oh God.
Alix: Yeah. So funny. It reminds me of Eryk Salvaggio's stuff about how it's all converging on some boring, mushy middle that has no [00:25:00] meaning.
Georgia: And he was on her panel.
Georgia: So yeah, he made some really good points. It's like what she said at the end there, which was really sad, about the teenager who's like, I don't see the point in making stuff anymore, because it looks like you can just do it automatically now. And Eryk was saying some really interesting things about, so what is the role of the artist if we are just writing strangely coded inputs for these mushy, gray, beige outputs? And you can't get out anything that you didn't put in, so it's never, ever going to be original, and always going to be this stupid beige center of nothing.
Alix: Yeah. That's super interesting. And that's really depressing, about the teenager.
Georgia: Yeah, I know.
Alix: Yeah. Also, just the idea of productivity in creativity as a primary motivator is depressing and kind of strange.
Soizic: It reminds me of, um, a few weeks ago, John Oliver did a segment on AI slop. And one of the people they were showing, basically saying AI is great, was someone saying, you know, everyone hates making music 'cause it's so cumbersome, but now with AI you can make music easily. And then John Oliver was like, if you hate making music, there are things for you to make music by pressing a button, and then there was a bunch of children's toys. It [00:26:00] was very funny, but also a really nice way of saying: people actually enjoy creating art, and it being a process, and we have to stop thinking about productivity.
Georgia: Yeah, I hate that that guy said that. It's just like, you're right, musicians hate making music, famously.
Georgia: Yeah. That's the worst bit of their job.
Soizic: On that bit, I think one interesting thing as well is, when we think about AI harms and body image, for instance, of course there's the body dysmorphia, but these reflections on everything becoming very neutral, beige, boring point to a much more subtle and pernicious transformation of society, one that is less captured by systemic risk assessments and things like that.
Soizic: If [00:27:00] you think in terms of regulation, it's a nice way to think about the subtler things. Okay, the extractivism part is still present, but the way it shapes artistic practices, or society as a whole: it's not deadly, but it's sad. It's nice to think about these impacts as well, 'cause they're less...
Alix: Yeah.
Soizic: ...dramatic, but maybe as important in a way.
Georgia: Exactly. Okay. So we're going to hear from Priya again now. And I think there is a connection here, in that lots of people's identities are tied up in their work, and their identities are invisibilized just in the same way that their labor is. It's super undervalued, and they're also expected to shape and morph themselves to a system or a technology in ways that are damaging and unfortunate.
Priya: The other subset that I bring to the ACM is not just the ASHA workers, but also daily wage laborers. Now, it's a very, very different context. They are not, uh, care laborers, and they're definitely not data laborers, but they have, adjectives [00:28:00] fail me, an app which is kind of surveilling them at the moment, and they earn as little as 300 US dollars a year.
Priya: That's not even anyone's coffee budget. Mm-hmm. So imagine them marking attendance at nine o'clock and two o'clock, walking in one of the most scorching, arid areas of the country, where my car could not even climb, even though I had a good car, and I had to literally abandon my car and then follow them on their rocky road. Thorn-covered, may I add.
Priya: Everything was covered in thorns, and literally, no exaggeration, I don't know how they were walking. And here they are, marking attendance at nine o'clock in the morning and two o'clock in the afternoon, a hundred days a year, for as little as 300 US dollars a year.
Georgia: Are these people also doing the work for which they are marking attendance, or is their sole job to go around and mark the attendance of the people working?
Priya: No, they're doing the work. Oh, see, they're doing the designated work. Say the government wants the roads [00:29:00] cleared of pebbles. It's a pebbly, hilly road, and they want it cleared. Yeah. So they have to go and clear the pebbles, but that's hard manual labor. And they are paid nothing.
Priya: They are paid nothing for it, over and above everything else. And one might ask, why women? Because daily wage laborers can also be men, but which man would agree to do this work for 300 US dollars a year? So men are off mining or doing something or the other, which pays, puts the bread on the table, while women are off slogging.
Georgia: I wanted to ask why, in both of the videos that you made, you were asking them, do you know what geotagging is? Do you know what digital rights are? And they're all just like, you're speaking a foreign language. No, I don't know. Why are you asking me that?
Georgia: Do you want to say a bit more about why you were asking them those questions?
Priya: That's a great question, and a great catch. And I have to say, you're not the first one asking me that. It's a very fair question: why would I even ask about such a specific technology word? They're [00:30:00] supposed to know this, because whether or not they know the meaning of the word, it's been put on them. For example, these daily wage laborers know they are going to some geotagged location. They don't say it as 'geotagged'; they have a different parlance, some fused English way of saying it, or a Rajasthani-English way of saying it. I'll give you an example. They kept telling me, we have to go to the 'side.'
Priya: We have to go to the 'side.' And I'm like, what is a 'side'? I'm trying to understand. I'm reading their lips. Then I realized they're saying 'site.' They're going to their work sites. Site. Okay. Yeah. Because Rajasthani is a very distinct dialect, 'site' becomes 'side,' and I'm hearing the 'd.'
Priya: When they did that with 'mate,' and they started calling them 'meets,' that's when I realized, ah, the 't' is changing to 'd.' They don't know the meaning of the words, but they are actually [00:31:00] somewhat aware that this is what is happening. There is a geotagged location, so they would have a different word for 'geotag' without understanding what it is, but they know essentially that there is one place where they have to go to mark the attendance.
Priya: The meaning is loosely similar. So that's kind of why I was asking.
Georgia: It's interesting. So it's like they have a vague idea. It's one of those things where you're on a need-to-know basis, and it's just, I know that I have to do this, mm-hmm, but I don't really know why.
Georgia: And then you're like, do you know what geotagging is? 'Cause that's what's happening to you right now. And they're just like... Interesting.
Priya: And with ASHA workers, the context is a little bit different, because the video that you saw was from Haryana, and Haryana has advanced payment systems, which have actually done some good, actually great, and streamlined the, uh, payments.
Priya: So at least there is some log: I did 75 child immunizations, I can log it in the app, and they get the corresponding wages for that. So they are supposed to know this, uh, [00:32:00] what the geotagged locations are, and especially in Haryana; digitalization hasn't been as thoroughly implemented in, uh, Kerala or in Maharashtra.
Priya: Remember, I told you that it's health, and health is a state thing. So that's why what's happening in Haryana is definitely not true for what's happening elsewhere. So the video that you saw was from Haryana, and it has some payment structures and also many, many other apps to log specific illnesses.
Soizic: I think an interesting thing about this surveillance app, the wage app, is that it's not AI or machine learning at all. It's just a very badly done piece of software. In the videos that Priya filmed, you can see what the interface looks like, and it's horrible. It's just the worst 1990s piece of technology, and it's glitchy and it's bad.
Soizic: And so I think there's something really cool about showing that at [00:33:00] an AI conference where everybody, or a lot of people, are working on the most advanced technical systems. Just being like: the reality of things is, most of the systems that we should care about are not anything AI, and are not very complex technically. This is something we see a lot in the public sector, but it was a really nice reminder to have at the conference.
Georgia: Yeah, absolutely. And I think, again, when we talk about digital public infrastructure in vague terms, I don't think these women are thinking about what that is. They're just like, I hate this app that I didn't need before to do my job, and now I do. You know, I think there's this weird disconnect between the conversations and the realities on the ground.
Soizic: And maybe a second point about, you know, the app: as Priya was explaining, this is a political play. These women are being surveilled because the government doesn't want them to exist anymore.
Soizic: And so they're trying to push them out through glitchy [00:34:00] technology, and it's actually not meant to help them. So just remembering that when the political goal of a technology is fraught, there's nothing you can do about it. Even if the app were fixed, I feel like it wouldn't make anyone's lives better.
Georgia: Yeah. And then I think referring to the people who log the stuff on the app as 'mates' is so bizarre and really kind of gross. I think we should move on. We've got one final bit: Alex Hanna and Clarissa Redwine. They did an AI workers' inquiry, and yeah, I'll let them explain that now, and then we'll discuss.
Clarissa: I am Clarissa Redwine. I use she/her pronouns. I am part of this amazing group called Collective Action in Tech, where we archive all of the collective actions across the tech industry. And I'm here at FAccT with Alex. Yay.
Alex: Uh, I'm Alex Hanna. I'm director of research at the Distributed AI Research Institute.
Alex: We're a nonprofit research institute that has two pillars: we focus on the harms of digital technology, and we also imagine alternatives, um, [00:35:00] to our current technological future, which is very bleak right now, and we want to build something new.
Georgia: Tell us about the session that we just experienced.
Clarissa: Yes, yes. So, um, this session was a collaboration across a couple different groups, and we titled it Workers' Inquiry, which has a very historic context: workers come together and talk about shared concerns, and learn about what they have in common and how they can push back to make change together. So the session was a panel of people who are experienced in either organizing around AI, or researching AI and building structures of power.
Clarissa: And yeah, our group was facilitating that conversation. My favorite part of our session, and of all of the first day of FAccT so far, was the discussions where we broke people into groups, and these researchers and workers were talking about what they were struggling with, what pressures were being [00:36:00] put on them to use AI, how they were being pressured to influence AI, all of these different ways that they, um, interact with these technologies.
Clarissa: And one of my favorite takeaways, which I told Alex about, was that one of the workers in my session said, you know, management makes these decisions about implementing AI technologies and tools, and oftentimes there's a gap between the productivity level that they expect and what the tools can actually help workers produce, and it falls on the shoulders of workers to make up that labor. And I thought that was very insightful, and everyone around the group kind of had a moment of, oh yeah, I'm dealing with that too.
Georgia: That's fantastic. I think it's great when there are spaces like this, filled with so many disciplines and so many different fields that do intersect, but in very interesting ways, which is, I think, what you touched on as well, Alex. Sorry, you can't see that; I'm pointing. Um, yeah, I think I'm just really grateful for this space that you created for everybody to come together and try to [00:37:00] talk about experiences where they might feel, I'm alone in this, but actually it's shared.
Alex: I love that insight that you're talking about. It really highlights how AI is not really a mechanism of labor replacement; it's more about labor displacement, where if you took out some kind of task, now you have a set of new tasks that are put onto the workers who are left.
Alex: And it's really great just to hear from your group that that was something that emerged from it. There were a few things that people said. Joan was talking, she's the president of the Data Labelers Association, and she's in Nairobi, Kenya, and the first thing she said was, you know, companies want to give the illusion that AI is intelligent, or that it's autonomous, but actually there's a whole host of people that are making this [00:38:00] work at all.
Alex: They're doing these things like mitigating AI harms, and having mental health challenges and not getting the support they need. And there's this illusion that these things are happening independently of any kind of human intervention. Those are the people who are doing labeling that comes prior to even the use of these tools in the workplace, after data has been labeled. And then we see it on the other end of the supply chain, uh, in which people are now being told these things are intelligent, find a way to use them, as Ben mentioned in his remarks. And I think that was one thing that was really helpful to hear from Michelle on the panel, from National Nurses United: she was saying, well, we need to actually identify these commonalities across workplaces.
Alex: And that's actually a really helpful feature of doing an inquiry like this, because we're kind of running into, you know, some of the same things, [00:39:00] even from different job positions, or parts of the AI supply chain, or in different quote-unquote verticals. Whether it's nursing or education or data work, these are many of the same struggles that we're encountering, and we're connecting the dots between all of them.
Clarissa: Alex mentioned early on that there are groups that come to FAccT, or people who have different perspectives that come to FAccT, that are kind of opposed, maybe, or there's tension there. And I come to FAccT to move people to the side of collective action, and to the side of essentially unionizing as much as possible.
Clarissa: Alex mentioned, you know, the idea of giving workers the tools to do research themselves. In our group for the workers' inquiry session, there was the topic that you brought up: sometimes these AI tools are not even helpful, and no one is keeping track of the productivity levels that come from those tools.
Clarissa: Something that we talked about in our small group [00:40:00] was, how do we have conversations with our coworkers about, okay, how much is this actually helping you? Do you want to use this tool? Is it going against your mission or values, or something like that? And then how do you turn that into solidarity and collective action, of saying, well, we all feel this way, maybe we should all talk to our boss about nixing this tool. Um, so that's the kind of thing that I hope to see happen at FAccT.
Georgia: Should we talk about how you two met and how you came here? What your journey has been? Is that a fun story? I don't know. How, how...
Alex: How did we meet? Oh no, I think we met on the internet. I don't really remember how we met. I think the first time we met might've been last year. Interesting.
Clarissa: Yeah.
Alex: But we've known each other for a while.
Clarissa: That's cool.
Alex: Yeah.
Clarissa: I'm pretty sure there were zines, and we were like, look at these zines, together. Yeah, that happened. But yeah, I've been a tech worker for my entire career, except when I worked at Barnes & Noble; [00:41:00] I was in college. I was a non-technical tech worker, a program manager. And weirdly enough, my first grownup job was program manager for the Qualcomm Robotics Accelerator.
Clarissa: So my very first intro to tech was centered around automation. And at the time, I really bought into this idea of, oh, we're replacing the dull, dirty, and dangerous jobs, all of the ways that AI or automation is branded as positive. And then I had this really formative experience working at Kickstarter, which was my next job. My coworker called me up and was like, hey, you know all these problems that we both know exist at Kickstarter? A couple of us are talking about how maybe a union is the right path to pushing back against management's very concentrated power, and doing what we know is best for the platform and the community and the workers. And from there we built a full-fledged union drive and ended up winning our union.
Clarissa: [00:42:00] And that really changed my entire worldview, and how I understand power and how these systems ripple across every role, every company, every org. And yeah, I've just been dedicated to union organizing ever since, specifically in the tech industry, 'cause that's what I know best.
Alex: I came to where I am through a lot of formative experiences.
Alex: I've been in the labor movement since I was 17, um, organizing with United Students Against Sweatshops as an undergrad, and then being part of the Teaching Assistants' Association in graduate school. When I was a graduate student at the University of Wisconsin-Madison, I was co-president of the union when we took over the Wisconsin State Capitol to oppose Scott Walker's attack on collective bargaining for public sector workers. That really was formative in terms of how I think about power, and how, uh, intellectual and tech workers think about where they are [00:43:00] situated with respect to their own institutions.
Alex: And then from there I was at the University of Toronto, and then went to Google, where I was on the Ethical AI team as a research scientist. And I think since joining DAIR, a lot of the work that we've done has been around workers and technology, and I think both my experiences and also my involvement in this community, in FAccT, have really fomented my wanting to work on issues of labor in tech and AI. I'm also a co-author of a book, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, and in it we talk a lot about workers' struggles against AI, how AI is being used as a cudgel against workers, and what organized workers have done about it, including some really important wins.
Clarissa: It was mentioned a few times: I run this conference for people who work in tech who are interested in organizing unions or organizing collective [00:44:00] actions. It's called Circuit Breakers, and it's in October, on the 18th and 19th, at NYU. And yeah, anyone who's interested, I highly encourage to come.
Alex: Be there.
Clarissa: Be there. And...
Georgia: Be square, maybe?
Alex: Be there and be square.
Georgia: And thank you so much for talking to me, and also for trying to teach all of us a bit more about Marx. That's always good, I think. Thanks, this has been great. I'm sorry it's so hot, and we're doing this in the heat amongst dry bracken and children.
Alex: Um, you don't have to apologize for the heat, or the children.
Georgia: I'm British, I can't help it.
Clarissa: Thank you so much. Thank you. Thank you. This was a great convo. Oh, I'm so glad. Yeah. Thanks for making the space for us to just chat. Yeah, yeah, absolutely.
Alex: This is fantastic. Yeah. I love this reflection.
Georgia: Very good. Yeah, I liked it.
Georgia: Alright, well say goodbye everybody. Bye bye. Love you.
Alix: Love you.
Georgia: Bye. Love you. Um,
Alix: I feel like every interview should end with Georgia saying, I love you. That's great.
Georgia: I'm sorry! But they just had such good vibes. I was like, oh, should we get a bit like that [00:45:00] afterwards? They were, they were great. I really liked them.
Alix: Great. Yeah. This is super interesting. I'm really glad that you talked to them, 'cause I've been following the Data Workers' Inquiry, and I think just the idea of the people who are actually doing the work, either the data labeling work or the engagement directly with AI as part of their work life, being able to tell us what that's like feels so essential. When you say it like that, it's like, obviously that's essential, but it wasn't happening until they were doing this work, or it was done in this very condescending way, where a researcher would deploy to the field to interview the labor, you know, and write papers that the people they were interviewing wouldn't be interested in reading. And the fact that they've kind of turned it on its head and been like, we want you to figure out what research questions interest you...
Alix: How do you want to actually be a part of constructing the project? What research outputs would you find meaningful? That just feels like such a wonderful flipping of the script, kind of restructuring the whole academic project in a way that I think is really cool. [00:46:00]
Georgia: Absolutely. And I think it was a refreshing thing, in retrospect, inside a conference where there were lots of people talking about co-design and participatory design of these systems, when actually, frankly, lots of the people you're trying to co-design with never even asked to be part of those systems anyway. Like, how can we make voice recognition systems recognize the voices of people who speak a slightly different language in this country, so that they can be heard over the phone through a service that doesn't have any humans on the other end anymore? And it's just like, what if you just put humans there? How about you, the data-AI-NLP person, don't try to solve this problem, and instead they just put people, humans, on the other side, so that we don't have to include people in a system they never asked for anyway, and get them to help you improve it and build it and maintain it. It just felt like a good [00:47:00] alternative to that kind of participation. Like, yeah, we don't need more people involved in this project; we need this project to be ended, to not exist. Exactly. Yeah.
Georgia: And also, not only do you have to do your job, but then you also have to do the extra labor of figuring out how this AI tool can fit in your work. Like, please make the AI-shaped hole, as it were, to put the AI-shaped solution on top of your usual job. The nurse on the panel was like, basically these people are telling me my work is trivial and doesn't matter and can be done by a machine, which is obviously really demoralizing.
Soizic: I think it's really cool to bring labor organizing into the discussions about AI. I hate saying 'responsible AI,' but basically, in accountability in artificial intelligence we tend to reinvent the wheel and try to find new processes, and just trying to see how we can adapt existing collective action devices to what AI means is a really nice way to think about doing accountable, actually accountable [00:48:00] things.
Alix: Cool. Okay. Well, great work, guys. I feel like I was there. Oh yeah, that's what we wanted. Thanks, Georgia, for producing this episode and all the other episodes, and Sarah, who is also producing this episode and every other episode. Um, but no, really, thanks guys. I think, um, you two and Hannah really put together something great, and when we were first talking about you going and interviewing a bunch of people, I knew from last year that it's actually really hard to corner someone in a space that sounds nice, and that also isn't you jumping them in the hallway of a conference and being like, I don't know you, but I've read your stuff, can I sit down and interview you? That's such an intense way of engaging in a conference space. Sometimes it's nice, 'cause it gives you a reason to meet people, which is also cool.
Alix: Um, but I feel like kudos, 'cause it's hard and you guys did a great job. So, you lucky person listening: we have put out an [00:49:00] episode almost every week for exactly a year, and we took one break over the December holidays last year. Um, and we're gonna take a break. Uh, we're gonna take two weeks off, so nothing next week, nothing the week after.
Alix: And then when we come back, our first episode is with Cori Crider, who some of you may know: formerly of Foxglove, now a fellow working on market concentration. And that's what we talk about in that episode, but also how she came from human rights law, and the switch from civil and political rights, um, and battling governments, into thinking about market concentration and economics, and how big tech has kind of sucked up all the energy out of the room and become giant monopolies, and what she wants to do about it.
Alix: And then, um, super excited to have Paris Marx on the week after Cori. I've been reading Apple in China, and we talk a little bit about that book, and about what Paris has learned from his work on the transportation industry and, uh, Tesla, [00:50:00] and from looking closely at Musk. And then we're gonna have two kind of miniseries coming out.
Alix: We'll give you more details as we get closer, but one is gonna be around decentralized internet stuff. We have kind of a cluster of conversations with different people who are thinking about how the internet looks, how it could look, and what role decentralization plays in that.
Alix: And then a second one on kind of all things scams. We have a set of conversations about crypto and multilevel marketing, and a couple of other interviews with people who are thinking about how right-wing spaces and a speculative, casino-like economy have left us in this very strange financial world that's having a really big impact on our politics.
Alix: Uh, in the meantime, uh, we're taking some time off and we'll see you when we get back.
