Très Publique: Algorithms in the French Welfare State w/ Soizic Pénicaud
Alix: [00:00:00] Hey there. Welcome to Computer Says Maybe. This is your host, Alix Dunn, and in this episode I sat down with Soizic Pénicaud. She's a colleague of ours at The Maybe. We kind of dig into her whole career and how she works both within government and outside of government to improve the political accountability
Alix: mechanisms when states use technology as part of their public service delivery, particularly for vulnerable communities like welfare recipients. I don't wanna give too much away from this conversation, but it's one of those situations where someone has a really interesting career, and I think it's hard
Alix: to know the path of someone like Soizic, how she got here, all the different ways she's, through her career, engaged with different political questions and different vantage points on technology, politics, and sort of how that all comes together in, I think, a power analysis that is super refined and interesting.
Alix: And, uh, I don't know. I always appreciate her perspective on basically everything, and this episode hopefully gives you some context on how she got to be so smart on these issues.[00:01:00]
Soizic: My name is Soizic Pénicaud. I am a consultant, researcher, and activist on AI policy and digital rights. I am the co-founder of the French Citizen Observatory of Public Sector Algorithms, and I'm a lecturer at the Paris Graduate School of Public Affairs.
Alix: Take us back. How did you start working on this stuff? Like, what was your first job?
Soizic: So the way I started working on this topic, and this is the longer story, but I was doing a master's in education and technology, thinking that I would change the world through using digital technologies for education, which I quickly realized was not gonna be the way, and used my social... you're not allowed... social science background.
Alix: Don't feel bad.
Soizic: Yeah. I, I arrived in this master's degree, uh, with a social science [00:02:00] background thinking that social science was useless. But basically I was like, oh, actually, I think social science can help me understand what we're doing and talking about, because basically there was a lot of talk on innovation without ever defining innovation, and depoliticizing everything.
Soizic: But the long story short is, while I was doing this master's, I got an internship at the French government task force for data policy and open data, and...
Alix: Oh, you're... wait, I'm sorry. That was, like, your first professional employment, if we count it? Did they pay? Did they pay interns?
Soizic: Yeah, yeah, yeah, they do.
Alix: Okay, good, good, good. If anybody's listening in an organization that doesn't pay interns, you are morally incorrect. Yeah. And you should pay interns. Uh, but that was your first thing. Wow.
Soizic: That was my first thing. So I entered the workforce working in the public service. When they offered me that internship, you know, I thought, it's six months. I was gonna work within a team that was developing a digital [00:03:00] transformation fellowship, which, for people more familiar with the US context, is akin to the Presidential Innovation Fellows.
Soizic: So basically bringing technologists into government to work on digital products, and we were focusing on data products, so products that either used data or even did data science, before we called it AI. And I was already a bit wary of the framing. I was like, I'm not sure technology is gonna be able to solve anything.
Soizic: But it sounded fun, it sounded interesting to be embedded in the public service and try to understand how it worked. And so I went there. I was like, I'll do six months and then I'll leave. And then I ended up staying four years, for, um, multiple reasons. This is actually how I ended up joining Etalab, which was said task force in charge of open data and data policy.
Soizic: And Etalab was in charge of making things with data, opening data, but also the regulation, [00:04:00] some of the regulation on data and transparency. And at the time, this was 2017, 2018, France had just passed a law that imposed a number of obligations on government agencies that were using algorithms to make decisions.
Soizic: And these obligations pertained to, basically, the transparency of these systems. And that's how I learned first about both the fact that we used data systems to make decisions in government, but also that we were regulating them. And I was doing both at the same time, like helping develop these systems but also thinking about how to make them more transparent.
Alix: I feel like that's a conflict of interest. Not for you personally, but I feel like it's a really common one, where basically the public servants in government who are responsible for piloting new technologies in public service, or, like, doing that kind of work, are also seen as the people that should inform legislative [00:05:00] and regulatory oversight of that.
Alix: And I'm kind of... it's, first off, I've thought about this. I feel like it's probably because, up until, like, I don't know, five years ago, there just weren't enough people that were, one, interested in accountability and regulation independent of, like, actually doing the things. Were you just interested in both, or do you feel like those roles are often played by the same people?
Soizic: I don't have a blanket answer to that. I think in government, things happen more organically and randomly than we think. And so in this, it's what you're saying about there being few people thinking about digital technology in government, or at least at the time there were not many people, and so you end up doing both things.
Soizic: I will say that Etalab's first mission was to work on transparency of public information, and so it made sense that they would also tackle transparency of data systems, because they were already tackling transparency of data. The fact that we were also making things and building products is [00:06:00] the result of, like, the way the digital transformation agency was structured.
Soizic: And so, yeah, there was a lot of interplay between these things. I don't know if it's a conflict of interest. It was interesting to be able to have a concrete understanding of these systems while we were thinking about how to make them transparent, if that makes sense.
Soizic: So, like, making tools and having feedback and working with tech people on what that meant. But yeah, it's a bit of an odd situation to be in. And it's also, you're not a regulator, right? You are working as an agency, and that raises a lot of questions.
Alix: Yeah. I don't even mean it as, like... because I think it's, it's an original sin
Alix: of how technology makes its way through institutions, that the people that build it can use the rhetorical device of, we're the only ones that understand it, so when you're making rules about it, we're the ones that you should consult. Which happens when, like, Eric Schmidt is like, eh, government doesn't know what it's doing,
Alix: you should really use us. And it's like, but you would then be making your own rules that govern you and all of your interests,
Soizic: which yeah,
Alix: isn't great.
Soizic: [00:07:00] I also, I am super critical of government work, so I am being very full of praise right now, which is unlike me. But the thing I would say is, it was very interesting, when you actually wanna have more regulation on tech systems, and you have people both on the legal side and the technical side who understand these systems, but who use their understanding not to assert any type of authority, but to inform the debate in a way that leads to better regulation.
Soizic: I think that's helpful. So yeah.
Alix: Totally, totally agree. Yeah, and I think, um, I feel like it's just a matter of time before we hear your true thoughts about the role of technology in government. But, um, okay, so you start this internship, you're, like, thrown in the deep end. All of a sudden you're, like, helping design programs.
Alix: You're also, like, helping inform legislation around transparency. You're learning that, like, decision making that is data-driven, automated, potentially informed by prediction, is, like, happening. Then what?
Soizic: I learned that a lot of agencies are trying to make that happen. [00:08:00] I think the trying is something that we can underline.
Soizic: They were all trying to do what they called data science, but we would call artificial intelligence now. And for a lot of agencies who were also submitting projects to that digital transformation fellowship, our first answer was, you don't even have the data. We cannot do predictions; we're first gonna try and gather data.
Soizic: So that's the first thing. But the second thing is, so I'm doing this digital transformation fellowship, and in the meantime I learn about this French legislation, and I learn about algorithms in the public sector. And the reason it grabbed my attention was that, at the time, it was not a sexy topic.
Soizic: I don't know if it's a sexy topic now, but back then it was kind of an overlooked topic, and it was not regulated anywhere in the world either. But what was interesting is that I could see that it embedded so many political decisions under the guise of objectivity and tech, and that everybody was looking at it from a tech angle or a governance and transparency angle, and that the [00:09:00] conversations on their consequences were really not mature, at least inside government.
Soizic: And so what happened was that, in parallel to working on digital transformation work, I was also working on implementing this law. I call it my 120% job. So, like, Google has a 20% thing where, at the time, you could work on anything you wanted for a day. So I worked on it on weekends, is my point.
Soizic: So on weekends I would work on implementing, or thinking about, transparency of public sector algorithms. What I was faced with in that work, even though we were doing it with some great people, was that, first of all, there was a limitation to tackling this work through the transparency angle, because, you can make facial recognition systems transparent, is one of the most radical examples.
Soizic: So I felt frustrated by our inability to grasp the most harmful consequences of these systems [00:10:00] through the frame of transparency, which was our only mandate. So we couldn't work on anything else. And then the second thing was that, because, as previously mentioned, we were not a regulator, we were a government agency,
Soizic: the way we worked was to find willing, good-faith agencies to work with. And so we would work with the ones who were the most voluntary, which, to speak in simple words, were not the police, not social protection, not the places where these systems have the most impact. We were using our energy and using up our resources
Soizic: on making transparent, or thinking about, these systems that, in my opinion, were not the ones that we should spend our very limited energy on. And so that was a frustrating limitation of the work, which led me to then move on from that.
Alix: Push the eject button from government entirely, or just from transparency as an extremely narrow intervention point for justice-oriented thinking about [00:11:00] technology?
Soizic: So it turns out that basically, after four years, my contract ended, so I was ready to leave government, because it had gotten very frustrating, and because, I think working in government, I think it's the same as working in any sector, but working in government is great 'cause you meet a lot of people who are motivated not by money but by
Soizic: the public interest. But you also quickly learn that the public interest is not necessarily equal to government work. I felt also that I was not necessarily serving the interests that I wanted to serve. So when my contract ended, I just decided to not renew it, continue working on algorithms, but, as you said,
Soizic: expand the lens from transparency to more discrimination work, I would say, but also to try and see what could be done not from inside government but from the outside, and especially, basically, trying to find civil society organizations or any kind of collective or communities that were affected by these systems[00:12:00]
Soizic: that may wanna work on them, and try and work with them by bringing this digital lens to work that they were already doing. And so I left, and I was on the lookout for any such opportunity. This was 2023. At the time, a few organizations in France started to look at the algorithms used in our social
Soizic: security system, so social protection, including a risk scoring algorithm used by the French welfare agency to control beneficiaries, basically. And the grassroots access-to-welfare organization in France had done a call for testimonials about anything that worked badly within the French social protection system.
Soizic: So this was digital, non-digital, anything. But they chose to focus on this risk scoring algorithm. And so I reached out and I said, can we work together? And it's like, yes. That was basically my first entry point into doing what I had set my mind to do, which was working with [00:13:00] communities who are not digital rights communities on tech systems.
Alix: What was that learning curve like? How did you... you just, like, met some people, and then you're like, hey, we should work together? Like, how did that all go?
Soizic: So literally, literally what happened was that I emailed them and I said, hi, you wanna work on an algorithm in the public sector? I used to work on algorithms in the public sector, inside government.
Soizic: How can I help you? And they were like, great, 'cause they're great people. And so they were very open. I was also very cautious, 'cause I didn't know anything about any systems that they were using at the time. But I think we started with, like, a very introductory online webinar on what algorithms meant, to kind of have elements of definition.
Soizic: And then we started looking more and more into the system, also with a digital rights organization in France called La Quadrature du Net, who also started looking into the system. The learning curve was on the social protection system, for sure, but I was also as mindful as possible about learning [00:14:00] as much as I could,
Soizic: not trying to say, you know, this is what you should be looking into, 'cause I knew nothing. I remember one of our first meetings, I used the term, this is in French, but we can call it welfare help. And so I kept calling it welfare help, and one of the people on the call raised their hand and said, Soizic, you have to stop calling it welfare help.
Soizic: It's benefits, and it's a right. Like, these are rights, not help. And that humbled me a little bit, being like, okay, my framing is not the right framing. Like, I really just need to de-center myself as I'm trying to... That was the learning curve. And then the learning curve was on actually trying to understand the system, which I was also able to do,
Soizic: 'cause at the same time I started talking more about the system online, and through talking about it on Twitter, back when Twitter was helpful, I got in touch with a journalism collective called Lighthouse Reports, who was trying to investigate the system as well, and they asked me to come on board and work with them as a researcher. And so that gave me an opportunity
Soizic: to look [00:15:00] deeper into the system and to try and understand what the system was, but also how the whole social protection system worked, and what that meant for what role the algorithm had in that, how the controllers used it, what the actual consequences were, and who was affected by it. So this was the time where Lighthouse
Soizic: was starting to publish a few investigations. I think this was before the Rotterdam publication, some publication in the Netherlands, or at least something was happening. This was the beginning, and one of their journalists, Gabriel Geiger, was posting on Twitter saying, you know, digital welfare systems are dangerous, we need to
Soizic: look into them. And I think I retweeted that and said, yes, we do, we should also look at them in France. And then Gabriel reached out and said, hey, what's up, basically. And then we talked, and then I explained where I was. I explained also how French civil society was already organizing. But it turns out that Lighthouse, at the same time, had already started looking at these systems, and I was also already [00:16:00] researching it for different purposes.
Soizic: It made sense for us to start working together on this case.
Alix: Yeah, so interesting. I also love Lighthouse Reports. Hi Daniel, you listening to this?
Soizic: Yeah, me too. They're...
Alix: They're the best. They're so great. Like, all of their investigations, I know, which, like, are about different subjects, but they're all so thorough and technically interesting and ambitious, and, like, very oriented towards how we should all want the world to change.
Alix: So if you are listening and you haven't heard of Lighthouse Reports, look them up. So you start on this investigation, essentially, and where does that go?
Soizic: Yes, we start on the investigation, and first it goes... first, I get blacklisted by the director of controls at the French Welfare Agency, because I...
Alix: What does that mean?
Alix: What does it mean to be... that sounds like the most specific blacklisting I've ever heard.
Soizic: Well, yeah, it is. It is a badge of honor, but it is a pretty specific badge of honor. So at the same time, so this is, I'm gonna slightly go back to one of your previous questions about the learning curve. What I [00:17:00] wanna emphasize is that this took a lot of time and a lot of patience, both in terms of finding information, but also
Soizic: building relationships with everyone. So this is mostly civil society organizations. This timeline is condensed, but in reality this all took two years, because it was long to understand how the system worked, but also to build trust, relationships, and understanding with people. And I think most of the time, when we think about coalitions, or when we think about bridging the gap between different communities, we never
Soizic: take into account enough the time aspect of things, and the fact that things go slow, and it's normal and it's okay. And it really clashes with our idea of being productive, innovating, having quick results, and having impact. So this can sometimes be difficult, but I think this is something to emphasize. To go back to the journalistic work that we were doing:
Soizic: so I try and understand first how the system is used and perceived by controllers, and this is when I get blacklisted. The short story is, I first go [00:18:00] through some union contacts, which in France are pretty useful.
Soizic: I get a few answers from controllers, which are interesting, because these public servants are not so dissatisfied with the tool that's used. They don't think it's that bad. But then I reach out to a lot of them on LinkedIn, and through, I think, I reached out to, like, someone who talked to their boss, who talked to their boss, and then the director of controls basically sent an email to all 800 public servant controllers in France saying, don't talk to this person, and then they stopped talking to me.
Soizic: But anyways, yeah. What I will say is, because I had worked in government before, I knew how terrified you can feel when you're a public servant and you speak out without having the agreement of your boss and your hierarchy. So I, I know deep down how it feels, and so I can empathize with them. But seeing it from the inside, seeing this fear of speaking: I had someone retract their statement, basically saying, I don't want you to use what [00:19:00] I said.
Soizic: And they had been so excited about the tool, but they didn't even want to be excited on record, because their boss had said, don't talk to that person. And so I think this is also something that I've been thinking about a lot, in terms of this fear environment that can be created when you work in the public sector.
Soizic: Fast forwarding: we managed to obtain the source code of the tool. Lighthouse manages to create a partnership with Le Monde, which is France's leading newspaper. So their data
Soizic: team starts working with us, and so we investigate the tool together, both looking at the parameters of the tool and its technical functioning, but also looking at human stories and trying to find people who have been impacted by the tool. And that's what I was in charge of: not the tech part, but the human impact part. Because what Lighthouse had realized, doing all these investigations in similar cases around Europe, was that the tech is very important to look at, the data, and it's a, it's a really good way to grab people's attention and [00:20:00] to make a story, but if you don't have the emotional, human element of things, you are missing out on so much, both in terms of story and in terms of impact.
Alix: Totally. And I think what's so interesting, and I think one of the reasons that Lighthouse is so cool as an example of an organization, and there are other organizations that have, like, the combination of technical know-how, storytelling ability, and also, like, deeper connection with what the actual problems are in these systems.
Alix: And so it feels like, increasingly, that kind of multidisciplinary approach to being able to, like, go down a rabbit hole of an issue feels so valuable. And, like, there are a growing number of organizations that can do that, but it's actually not that common.
Soizic: And it's also so important, because one of the things that I have now been struggling with is, just to be crude,
Soizic: no one cares about the poor. This is something you were also alluding to. Like, no one wants to talk about poverty, no one wants to talk about welfare. But now that we have all these tech systems, these AI systems, that [00:21:00] are permeating this field, when you talk about tech, you get the attention of people who did not previously care about this.
Soizic: And so we made the front page of Le Monde not because we were talking about welfare, but because we were talking about AI. But then the issue is, you don't actually wanna talk about AI, you don't wanna talk about this biased system. I can get into it, but basically, spoiler alert, the system is biased. The challenge, once you grab people's attention through this, is how do you turn this attention that you got through talking about AI into a real attention to social policies, and to really making
Soizic: people's lives better. And I haven't yet found the way to do that, but I think it is a challenge that we really need to reckon with if you wanna have long-lasting impact, and not make this a tech issue, and keep making this a social issue.
Alix: A hundred percent. And I think what's annoying is that by [00:22:00] making the surface area
Alix: AI or technology, you end up doing, like you were describing with transparency, where you're essentially narrowing down the changes you could make. And it's also kind of the tail wagging the dog, where it's like, if the political system valued poor people, and basically, like, thought meaningfully about equity, and thought meaningfully about race, and, like, engaged in the really difficult social and political questions of providing, in this case, benefits and aid and support to members of the public that needed it,
Alix: were entitled to it. If you start there and then you're like, what technology would help enable that mission? The scope of the changes that you would make is just, like, so different. So I feel like there's this double-edged sword that I struggle with a lot, which is, if you use AI or technology as the entry point, the room you enter into is too small.
Alix: But if you start from the, like, politics and sort of social thing, [00:23:00] the room is too big, and there's, like, no way of... I don't know, there's no way of marrying those two approaches. And it feels like you have to do one or the other. And then it feels like it's easier to get mainstream attention in the small room, which is now being really hyped, but also, like, the change vector, like, doesn't work.
Alix: And I don't know. Do you, what are your... do you agree with that?
Soizic: I agree with you a hundred percent. And this is the issue, right? I don't think anyone has found the magic formula, or, if someone has and I am just ignorant of it, please let us know. One of the solutions or pathways has been working on it with
Soizic: access-to-welfare organizations. We've kept working on the risk scoring algorithm, of course, but we also keep engaging with other issues, and so I feel like, I'm not sure we, we grabbed the same audience. I definitely feel like choosing the people you work with is one thing. I really haven't found the silver bullet, or, like, the actual solution to, what do you do [00:24:00] once you have people's attention with AI?
Soizic: How do you bring that attention to broader policies? I will say, one of the things that you can keep doing is, when you are confronted with tech solutions, you can keep broadening the lens. So, to give a very concrete example, still about this algorithm: one of the things that we were adamant about was to say, maybe the welfare agency is gonna come forward and say, okay, we'll de-bias the algorithm.
Soizic: And so one of the things that we preemptively said, and La Quadrature du Net was really clear about that, was, you cannot de-bias this system. Like, this is not a tech issue, this is not a bias issue, this is a justice issue. So first of all, don't tell us this.
Soizic: But then what also happened was that the welfare agency then told us, you know, one of the issues is that files are too complex, and so we're gonna simplify them by automating the distribution of benefits, and we're gonna basically simplify them by collecting more data. And this is when you can, again, once you're in this space, be like, [00:25:00] have you thought about non-tech solutions?
Soizic: And so I think you can keep asking for more political imagination once you're in the room. And as someone who comes from more of a technology background or a digital rights background, maybe your voice is better heard, because you are almost expected to praise this, but instead you can say, no, actually, let's not use technology, and let's pass the expertise to other people.
Alix: I really, really like that as a frame, and I think about it. It's kind of like saying, you go into the small room about technology and AI, and then you basically say, hey, we have a door here, uh, and there's other people in, like, the bigger, better, more interesting, more important room, and I am happy to connect you with that person.
Alix: And I feel like it's happening more and more, where I feel like there's an awareness, in people that work on technology issues, that there's a deep, rich expertise and network of people that have deep expertise on these issues, and that the responsible thing to do [00:26:00] is to find ways to bridge between maybe people with power and not understanding, or people just generally that don't quite get the depth of the problems, to bring into the
Alix: fold and into the conversation and, like, into the brainstorming, whatever the thing might be, the people that do, and both legitimately represent the people that are harmed by these systems, and also the people whose whole life's work potentially has been on organizing and understanding and analyzing, like, the deeper roots of political issues that we kind of systematically, like, uncover by just, like, popping up a lid and being like, oh, that app, that's really bad.
Alix: And then it's just, like... it's wild, though, that it didn't happen before. I feel like there was a lack of awareness that that dynamic was taking place, or maybe an immaturity of the people that work at the intersection of politics and technology, that that was happening, that they were being privileged in discourse with people and they didn't know nearly as much as other people.
Alix: And some people still, I think, are kind of jerks about it and, like, take all [00:27:00] the oxygen, and they're like, this is so exciting, everyone wants to know more about my opinion as a tech person. But I do feel like that's the answer: structurally, to just bring people in, be better collaboratively connected,
Alix: spend the time and energy to understand the kind of deeper political questions of this work, so that you can, like, kind of steward the expertise rather than feeling like you have to have it.
Soizic: Yeah, absolutely. I think this is something that is definitely more and more present in the field. What I will say is, it is often the end of the conversation.
Soizic: What I mean is, we talk about all these things, and at the end we say, and this is why we have to work with other types of communities, or, and this is why we have to bring them into the room, and then we stop, and we're very happy, 'cause we're like, we have realized that we're not perfect and we have to work with other people.
Soizic: But then how do we actually do that is something that is [00:28:00] more and more experimented with, but still, I would say, in its early stages. I think the real question is, how do we use that as a starting point and not an end to our conversations? And that's so difficult to do, right? I don't think anyone is to blame for this, and I do think it is a journey.
Soizic: I don't think it's enough to just bring people into the room, or, like, join other people's rooms, or say we should do it. We really have to do that work. And it's, it's hard, and it's not necessarily as shiny. That's still definitely something in progress.
Alix: Well, they're, like, these are 40-year projects, and there's this, like, feeling, 'cause we work in Agile, that, like, you're gonna, like, in a two-week work sprint, like, solve
Alix: the fundamental questions about, like, welfare delivery to the public via increasing or decreasing the transactional costs of the thing.
Soizic: There is an expectation of speed. And I think we're talking about different communities here, right? But definitely in government service delivery, or, like, thinking up solutions
Soizic: for [00:29:00] welfare, there is an expectation of speed that comes with technology. But...
Alix: The public too. I think there's a presumption that, like, when a government is engaging meaningfully and loudly with technology, that, like, somehow the speed of being able to change super structural issues is gonna increase.
Alix: And because of that, you then also increase the impatience when those changes aren't made. And it's kind of that thing about, like, I don't even, I can't remember the expression, but, like, basically, if you wanna go far, go together; if you wanna go... whatever, you know what I'm talking about? Like, yes. You have to, like, dig in to do the full thing.
Alix: And technology is this, like, catnip that discourages depth.
Soizic: Yes, for sure. And I think it discourages depth, but it also distracts us from specific issues. And this is me speaking as, like, a former government person, but for instance, and this is a segue into something else, but when you talked about Agile and you talked about user-centric [00:30:00] service delivery, I'm not opposed to... well, Agile, I may be opposed to it, 'cause it really pisses me off as a concept, as a word.
Alix: Fair. Fair.
Soizic: But, but no, jokes aside, who doesn't wanna solve issues? Right? This is also the thing of, like, more and more, especially now, people will say, we have understood that AI is biased, but, like, what can we solve with AI? Are you preventing the government from innovating? is something that I've been asked. Like, Soizic, you're bringing up so many issues; we should allow innovation in government, because don't you want better public services?
Soizic: One of the issues is that if you use product language and framing to think about public policies, you forget about, and this isn't... I, I'm not the one saying this, uh, many people say this, but you forget about the power issues behind the policies you're trying to work on. I have seen this in government: like, some people were working on a great product to help landlords receive applications from tenants, and their user was the landlord.
Soizic: And so, [00:31:00] through a product delivery frame, it's not incongruous, right? Like, you have users; your user is the landlord. What's the issue with that? And I think that's why politicizing work in government when you work in tech, and keeping on repeating that any tech product that you build in government is political and is tied to public policies,
Soizic: and is tied not only to a user but to an ecosystem of people, is super, super important, because you can easily forget it. And also you can easily not even be aware of it, if that's not the background you're coming from.
Alix: Totally. And I just think that, again, is a nod to the complexity of all these issues, in that, like, if you zero in on one user persona, you're missing the bigger picture.
Alix: And I think technology development oftentimes encourages breaking things down to constituent parts that are simpler and simpler and simpler, that you can do in sequence. And I think that that then flattens your ability to really deeply engage with the problem in front of you, even if you're very
Alix: earnestly trying to build something that's useful. And I think there's sometimes a clash between those two ways of thinking. [00:32:00]
Soizic: It ties back to the limitations of AI governance as a framing to safeguard fundamental rights. We were talking about transparency at the beginning, but one interesting development with the French welfare agency is, we've been in contact, and by we, I mean the access-to-welfare organizations have been in contact with the welfare agency, informal contact.
Soizic: But a few months ago, they came up to us and they said, we are now setting up an ethics committee to talk about data and tech in the welfare system.
Alix: Please tell me that there are several members of that ethics committee that are actually welfare recipients.
Soizic: There is... well, they said, we would love these, your organizations, to be a part of it.
Soizic: Okay, as proxies for recipients, which again is, like, a whole other question of, yeah, who is really representing, like, who's at the table? But what was interesting to me is that, again, the framing. First of all, I had a flashback of being in 2019 and ethics being the buzzword of [00:33:00] ethics frameworks, ethics everything.
Soizic: Now, at least in the European Union, we have frameworks which take fundamental rights into account a little bit more. I'm not gonna say a lot more, but a little bit more. The point I'm trying to make here is, they were, again, taking the tech out of the global context of it being a public service. Because if it's a public service, you have fundamental rights attached to it,
Soizic: you have democratic principles attached to it, you have responsibilities of the state attached to it, as with any analog public service or public policy instrument. But because it's AI and because it's tech, we tend to forget about all the traditional ways to engage with policymaking and with public service.
Soizic: And there's a, a want to reinvent the wheel and create these instruments that actually are not giving anyone more power; they are just, on paper, more participatory or more informative. And again, I'm not saying transparency is bad, I don't wanna [00:34:00] make this point. But what I'm saying is, I have seen now the limitations of just trying to govern these systems and put safeguards in place that prevent us from actually addressing their impact as public service instruments.
Alix: Yeah, that's super interesting. I mean, it's been an issue for years that these ethics things don't really have any... I don't wanna say "things". I mean, like, especially in the AI space, there's been this idea that if you could get a small group with no power in an organization, um, to, like, share their thoughts on, like, whether this is a good idea or not, that that's a good way of
Alix: embedding within your team thinking that maybe you would miss. It's kind of always gone very badly. I don't know, like, I can't really think of an example of one that's been successful.
Soizic: Yeah,
Alix: yeah. No,
Soizic: Me neither. But what it does, though, what it does do, is it sucks a lot of energy out of the people who are asked to participate,
Soizic: 'cause we had days of debating whether we should go, whether we should [00:35:00] not engage, because we were gonna be tokenized, when at the same time, and this is a point we raised, we said, there is the EU AI Act, which sets out, like, legal requirements, so why are we having an ethics committee? But yeah, it doesn't necessarily work. But also, to people who don't work in tech, it's not necessarily
Soizic: an old thing; like, it seemed very novel to a few people that were asked. So the work of explaining that, you know, it hasn't gone well in the past, and the debating and the conversation, just takes out so much energy that could be spent on actually trying to solve these issues.
Alix: Okay. If you could stop government from doing anything particular with technology, what would you stop them from doing?
Soizic: I feel like any of these prediction, risk-scoring systems that are over-focused on poor and vulnerable people, that is one answer, but that's been said and done. Well, not said and done, but I think, it's been said so
Alix: many times,
Soizic: but it is not done, never. But it's, it's something that our [00:36:00] conversation has made abundantly clear.
Soizic: Right now, I would like governments to stop making chatbots and sovereign AI, generative AI. I would like government to stop jumping on this hype train that is generative AI and saying it's gonna solve anything in government. That's it. That's great. I'm very angry about it.
Alix: Yeah, that's great. That's just, like, a long hill to climb.
Alix: Like, I feel like it's gonna take ages for that unlearning to happen. Um, I mean, so we had this conversation a couple weeks ago, where the Starmer administration was, like, piloting the use of chatbots, I think in the context of DWP, like, the Department for Work and Pensions. I thought it was a good sign that they were canceling these pilots that they were doing,
Alix: 'cause they were like, this actually doesn't work and we're not gonna pursue it. To me that was exciting, 'cause I was like, oh, they actually have standards of success, and if those hypotheses are not [00:37:00] borne out, then they stop. You were not as, uh, positive about that, I think. I don't know if you wanna share your thinking about, like, why that isn't a good sign to you, that there are things that are being tried and abandoned publicly.
Alix: Yes.
Soizic: So I have to maybe come out and say that this was maybe a bit of a knee-jerk reaction on my end, because I hate everything, generally, that has to do with governments trying technology out. I take your point, and this can also take us to the work we try and do at the Citizen Observatory of Public Sector Algorithms. But this story, that the British Department for Work and Pensions tried to use chatbots, failed, and stopped them,
Soizic: is encouraging. What was not encouraging, and I think that was, now that I remember, where my anger was, was the fact that they had previously, in their public communication, said it was a success and said it was great. And the reason we know they stopped them because it was a failure was because, and I believe it was Anna Dent who made FOIA requests and obtained [00:38:00] documentation, or at least went down the path,
Soizic: hell yes you did, yeah, of being like, hi, what happened to this? And then that's how you learn. So I will say it's actually great that they stopped, because I have stories about other governments not stopping despite it being a failure. But what makes me angry is the inability to be actually fully accountable about this, and for most of the public communication around these systems to be always overwhelmingly positive, not cautious, and also
Soizic: always done with the premise that these systems will work. And so your job as a civil society member or as an activist is to prove that premise wrong, instead of governments having to prove that the systems work before implementing them. And so it's just so much energy spent on debunking something that is so often not true,
Soizic: as we know from the story of these chatbots being discontinued, for instance.
Alix: I'm totally on board with that. I feel like there's a [00:39:00] credulousness in all of this, where it's kind of like innocent until proven guilty. It's like, technology doesn't add value until you can prove that it adds value, and there's just this gullible, shiny enthusiasm that isn't warranted at this stage.
Alix: Like, it may become warranted. There may be use cases. I'm sure there are use cases, and I think that there are people that work in public interest technology that, like, will probably in the next, I would say, two or three years, find very niche examples where governments can save money, they can save time, it can make being a public servant maybe less menial, in certain departments or certain contexts, to get some help in these ways.
Alix: But, like, presuming that these technologies are gonna take, like, a massive chunk out of the work necessary for your entire public servant working corps... like, the fact that Starmer was like, so we're gonna cut the civil service and we're gonna replace it with AI, like, that's such a silly, naive, counterproductive starting point.[00:40:00]
Soizic: This is the hill I will die on: this burden of proof. Maybe not die on, is this too extreme? I don't think so, I think it's great. The burden of proof idea is that the burden of proof should lie on governments and not on citizens or experts. So important, and I think it would solve so many of our issues.
Soizic: Another anecdote is that when we asked the general director of the French welfare agency if they had audited their risk scoring algorithm for bias, their answer was, we are audited financially by the Court of Audits. Like, they didn't even understand the question. And I think, if we were in a functioning system,
Soizic: if you can't say, yes, we have done the necessary evaluations and audits and they are published, you are not allowed to deploy your system. And that solves so many issues, because you don't then have to answer questions like, but don't you think risk scoring algorithms in welfare can be good? You're like, maybe just show me the work, and if it's good, like, I will believe you.
Soizic: The reasoning is that I don't think they'll ever be able [00:41:00] to prove it, and therefore it solves a lot of the issues we have right now.
Alix: I think that's right, and I think partly it's this wishful thinking, it's partly, like, the brain rot from the last 20 years, where basically we've been encouraged to think that any technology applied makes things better.
Alix: And it's like, oh, that's just 'cause we were rapidly building networked infrastructure, which does add a lot of value, but, like, everything we build after that doesn't necessarily add value. So could we, like, unlearn a lot of the, like, expectation that, because it was, like, pretty cool for, like, 10 years, that, like, technology was making things more accessible, that, like, that was not gonna always continue.
Alix: And also I feel like it also props up the narratives of tech companies basically saying that, like, the juice is worth the squeeze before we've done any squeezing to see if there's juice. Like, I find it really irritating.
Soizic: There is technology that can be used in the public interest, I fully agree, and I think it already exists.
Soizic: Like, when I was working on this digital fellowship project, there was a project which was not about data, but which was about, basically, a multilingual [00:42:00] Wikipedia to give refugees settling in France information about any opportunity available to them, translated in a community-based way into their language.
Soizic: That's so cool. Like, it's great. It's technology used to solve a problem for people who need it. That's the type of thing that could be encouraged, and that we could spend our time, imagination, resources thinking about, instead of thinking about how can we oppress people even more, and, like, how can we perpetuate this power imbalance even more.
Soizic: So there are solutions to this. I don't think any of us are saying that technology is bad. Just stop focusing on the wrong things and saying it's great and it works, 'cause it doesn't.
Alix: Yeah. And, like, being like, why are you being so negative? When, like, I'm sorry, this is serious business. This isn't, like, let's make a little, let's, like, make a little pilot,
Alix: you know, and it's gonna work. You know, it's like, that doesn't work [00:43:00] anymore, that thinking. Like, it has to be more rigorous, it has to be more robust, it has to be more multidisciplinary, it has to be deeper work, thinking about how this stuff's situated. And I feel like there's such a reluctance in some of those spaces to, like,
Alix: hear... they, they hear a slightly nuanced take and they're like, uh, Negative Nancy has entered the chat. And it's just, like, it's so annoying, 'cause then it means that basically those communities can't spend time with each other. So it's basically, like, the really positive booster people, and then people like us being like, so that was, like, not great. And they're like, well, I don't wanna talk to you, 'cause I was trying really hard and you don't respect or think positively about
Alix: the technology vision for the future. It's just, like, really annoying.
Soizic: And I will say, one thing that I felt back when I was working in government is that it is very difficult to bring nuance to what you're trying to make, because these are work environments where you are already fighting every day
Soizic: to get your boss's approval to do anything, to, like, innovate or do anything different in your own little way. [00:44:00] And so the cost of bringing nuance and questioning into the work means that you are adding another internal hindrance to what you're trying to do. I found it very difficult to be like, how can we do work that's good, while adding nuance, but still being able to accomplish anything?
Soizic: I think it's not only true of government, it's true of any environment that is resource-constrained, but the cost of adding nuance, which is so necessary and yet so high, is something that I think we need to acknowledge, because otherwise we will propose solutions, or, like, we will not be able to talk to people, provided everyone is acting in good faith.
Alix: So what are you working on now?
Soizic: One of the things I'm working on is, last year, I co-founded and we launched with two other people, very French names, we're all French, a French citizen observatory of public sector algorithms, where we [00:45:00] basically take a bunch of information that we know about algorithms in the public sector, and we put it in a database.
Soizic: And the reason we did this is because, as you may remember, I was in charge of doing this for the French government, so basically making a list of what systems government agencies were using in the public sector. Unfortunately, that is still not the case, and public agencies do not do this job of telling us what systems they're using.
Soizic: And so, despite all the issues that we've raised in this conversation, we still don't know what systems exist. And that's, like, the bare minimum for anyone to be able to fight back, or even understand and discuss: to know these systems exist. The three of us were very angry, and I usually do rage-fueled projects, and so out of rage we said, let's build our own database.
Soizic: And so we do it as a civil society group, and we now have a database of 72 systems, and we're currently working on an updated database where we [00:46:00] will add the other systems that we've learned about.
Alix: Is there ambition to do this in other places?
Soizic: So there are others in other places. I published recently a report where I looked at all of the registers, so databases, that were done by governments but also by civil society organizations in Europe, and there's one in Slovenia, there's one in the UK,
Soizic: there is one in Italy, one in Switzerland as well. So there are others in other places, and each one has a slightly different focus. For instance, in the UK it's maintained and created by the Public Law Project, who does litigation and, like, fundamental rights lawyering for citizens. Our take in France is transparency, and also saying, we don't look at impact, because we say we cannot, as a team of three people who do this part-time, investigate systems but also evaluate their use.
Soizic: That needs a lot of work. But what we can do is look at what information is available on these systems, and we focus [00:47:00] on budget and evaluations. We look at how many government agencies have published information about funds, and also about efficiency or any metrics, and we make statistics on that, which enable us to say that hardly any organization does it so far.
Soizic: And so that gives us more tangible numbers around which to advocate and say, government agencies are not doing their job of being accountable for how these systems are built and how these systems are monitored and evaluated.
Alix: That's great. I mean, I'm so glad you're doing this, 'cause I feel like the abstraction and vagueness with which these projects are described and discussed is really frustrating, and I think it actually makes it really challenging, because one of the coolest parts of technology development is
Alix: a thirst for quick feedback on, like, how something is going. And I feel like, because these projects aren't meaningfully tracked, you can't actually engage in corrective work, or, like, any type of improvement, or cost-benefit analysis, or, like, evidence development [00:48:00] around when these tools are useful, when they're not, and, like, also when they're rejected.
Alix: I'm just really glad that you're doing this, 'cause it feels like a very important part of a mature approach to this stuff that, like, we should drag it... I don't know. Yeah.
Soizic: And I, and I think, I mean, first of all, thank you very much, it's good to hear that such projects are important. The link I kind of wanna make is to the beginning of our conversation, where I said that I was frustrated about working on transparency within government.
Soizic: For us, it was really a way to think about, how can transparency actually be helpful to pursue other goals? So transparency in and of itself is not helpful, but it is necessary to just know that the system exists, so that you can look more into it, for instance. And so that's kind of where I tie governance initiatives, or, like, reflection on things that are not fundamental rights.
Soizic: That's how I think transparency can be meaningfully used: you use [00:49:00] it as a tool to learn more about the systems, so that then you can evaluate the impact, or you can learn that you are affected by a system, or you can investigate it further. And in that sense, transparency is helpful, just not in, like, the governance way that we usually refer to it.
Alix: Awesome. Um, I don't know if you know the follow-the-algorithm paper. Have you read that paper? Did, did you read that at FAccT last year?
Soizic: Oh...
Alix: It's very good, I think. So they, like, explore when mostly public sector algorithms were basically, like, decommissioned, and for what reason.
Soizic: Yeah. Yes.
Soizic: And there's also a really good report by the Data Justice Lab, Learning from Cancelled Systems, that is a little bit older, that also talks about this.
Soizic: That is really great.
Alix: That's cool, because with these systems, like, we have to be mature about the fact that, like, sometimes they fail, and that's fine, but then just, like, stop them. Don't, like, wait until people take to the streets before you're like, oh, I guess maybe that didn't work out as wonderfully as we thought it was
Alix: going to. Just [00:50:00] say, like, we tried something, it didn't work under these parameters, we decided not to continue it, and, like, get rid of it.
Soizic: Absolutely. And it's also... it's looking at failures, it's being able to track them, it's being able to say that they're more than tech components, source code, datasets, models, et cetera.
Soizic: But it's also just knowing that they exist, because one of the difficulties with the French welfare risk scoring algorithm was that, actually, in the testimonials that were gathered by the grassroots organization that I was mentioning, no one talked about the algorithm, because no one knew it existed,
Soizic: because usually these systems are so opaque and so embedded in deep, complex decision processes that you have no way of knowing that you're affected, and so you have no way of collectivizing. And so just doing this job of saying, there is a list, just look at how widespread these systems are, and maybe there's a system that is affecting you, or, as an organization, looking at this and saying, is there any relationship between any of the systems that I'm seeing on the list and the work that I'm doing, is so important. And it [00:51:00] feels crazy that we have to do this work to remind government that this is what they should be doing,
Soizic: but it is the way it is, and hopefully it will be helpful.
Alix: Cool. Okay. Well, thank you. I feel like, unless it wasn't obvious, we've known each other for a long time. We've worked together in lots of different ways, but we've never actually sat down where I could ask you all these questions about your work history and, like, how you've ended up doing what you're doing,
Alix: 'cause it's all so cool and connected in all these really interesting ways that are not immediately obvious. It's nice to get the time to just, like, ask you lots of questions. So thank you.
Soizic: Yeah, thank you so much. This is, this is so great. I also don't often take the time to talk about my, my work and explain it, so thanks for the opportunity.
Alix: Next week we have Brandi Geurkink from the Coalition for Independent Technology Research, and she walks us through all of the ways that independent tech research, so all the people outside of industry trying to produce knowledge for all of us to understand what is going on in our digital world... a lot of those researchers are under threat,[00:52:00]
Alix: all different kinds of threats, and the underlying ability for us to know what's going on is under threat as well. So stay tuned for next week's episode, and thank you to Georgia Iacovou and Sarah Myles for producing this episode, and we'll see you next week.
