After the FAccT: Materiality and Militarisation

Alix: [00:00:00] Welcome to Computer Says Maybe. This is your host, Alix Dunn, and in this episode we are gonna hear a recap constructed from different conversations that the Maybe team had at FAccT a few weeks ago in Athens. But before we get started, do you guys want to introduce yourselves?

Georgia: I'm Georgia and I'm really discombobulated in a studio right now.

Georgia: But yeah, other than that I had a great time in Athens at FAccT with Soizic.

Soizic: And I'm Soizic. I usually work as a community manager for The Maybe, but I was a member of the podcast team for FAccT, and I loved it.

Alix: They hoofed it the whole conference, met loads of people, and we've tried to put together two episodes that kind of stitch together some of the themes and some of the interesting research happening.

Alix: We by no means tried to be comprehensive. There are, like, thousands of people at FAccT, I think. Um, maybe a thousand. So there was no way that we could read everything or talk to everyone, but we picked up [00:01:00] on the stuff, the team picked up on the stuff, that they found most interesting. Um, to bring that to you with a particular bent, I think, on some of the questions that touch on power, on politics, on how AI systems are affecting communities.

Alix: So there will be a bias here towards finding the research and the researchers that were working on questions like that, I think. Is that how you guys would characterize who you ended up speaking to?

Georgia: I think, yeah. And also, not exclusively, but we did try and pick, uh, a few people who were running CRAFT sessions, so not just people presenting papers or presenting bits of their research, but people who were running different kinds of workshops.

Georgia: Or even, like, someone I think we're gonna hear from in this first episode, who ran essentially a demo of a workshop that she would run for marginalized groups, and just different ways of approaching the problem of how to explain the material effects of AI to people who don't think about it all day every day.

Alix: And I would say, for those who don't know, CRAFT, um, is a thematic program within FAccT. [00:02:00] Um, I co-chaired CRAFT this year but wasn't able to attend. Um, and our colleague, Hannah Barakat, was a program officer for CRAFT and did basically all of the heavy lifting. So if you attended FAccT and enjoyed CRAFT, um, you should send Hannah a thank you, because it's her, uh, it's her doing.

Alix: But it is the part of the program that's meant to reach outside of academia, to find voices, to find perspectives that are not primarily academic, uh, in nature. It's not the esoteric questions or the sort of niche technocratic questions that you sometimes see within these spaces. So it was, I think, more artistic, more engaging, more participatory, just by design.

Alix: That's not to say that the other sessions weren't good, but just that we gravitated towards CRAFT for probably obvious reasons.

Soizic: FAccT is a big tent of many different communities, and so we were really focusing on people who focus on power, politics, and AI, which is not necessarily a given in the FAccT community, which [00:03:00] is maybe okay, but something we need to acknowledge.

Soizic: Part of our interest in looking into CRAFT sessions was also to try and think about ways academics, activists, and artists can work together. We picked a lot of workshops, CRAFT sessions, and tutorials as well that addressed opportunities for collaboration around concrete impacts.

Alix: But before we go into FAccT, maybe we could hear from Alex Hanna, who I think had a good overview of, um, what FAccT is and what it isn't.

Alex: It's the first day of FAccT in Athens. FAccT has a lot of different things. It's kind of a big tent type of organization and conference. There are some interesting elements of FAccT, there are some frustrating elements of FAccT. And I guess an interesting thing is that it brings a lot of people from a lot of different quarters to the table. That also means there's lots of people that, in other parts of life, might actually be [00:04:00] at each other's throats in certain kinds of ways.

Alex: How should I put this? I wanna put this in a way that is kind. It's more like there are people here that are under the umbrella of responsible AI, and that can include people that work in corporations like Microsoft or Google, or in startups. But there are also people who work in civil society, who are more in organizing and activist spaces, and who are really invested in liberation and abolition.

Alex: And that can actually run very counter to a lot of trends in something like an academic conference, and it is also an academic conference. So it's, you know, a much different environment than Circuit Breakers, which is what Clarissa is very involved with, or the organizing conference, you know, or Labor Notes.

Alex: Those are organizations that are much more organizing forward and have very different conversations. Whereas [00:05:00] someone might be saying, we're trying to mitigate some of the worst harm happening at Microsoft, but we still can't stop Microsoft or Google from participating in genocide. And so that's the tension.

Alix: It was in Greece. It was hot as fuck, um, for four days, uh, which I feel like connects to some of these questions about AI infrastructure.

Soizic: So one of the sessions organized as part of CRAFT was, actually, it was an offsite session organized, among others, by Charis Papaevangelou, basically bringing together researchers and activists and journalists working on the Greek context to look at what's actually happening on the ground with regards to infrastructure investments in Greece: data centers, subsea cables, migration technology as well.

Soizic: And to use that as a way to deconstruct some of the narratives we often hear on AI, including that of digital sovereignty, and basically [00:06:00] interrogating who benefits from them.

Charis: I'm Charis Papaevangelou. I'm a post-doctoral researcher at the University of Amsterdam, working for the Institute for Information Law, and I'm part of the program called Public Values in the Algorithmic Society.

Charis: So the session itself was on the hidden costs of digital sovereignty. Mm-hmm. So, you know, the hidden costs of this kind of rhetoric and of objectives including digital transformation, and specifically the tensions that these hidden costs create in the Greek and local context, and how this plays out.

Charis: We wanted to have a CRAFT session that actually invited and put in the spotlight, you know, local experts and voices of people that are working on these topics here in Athens, here in Greece, which is why we invited Dr. Ra Kosaka, the director of the Observatory Commons at the ENA Institute for Alternative Policies.

Charis: Chris Vrettos from [00:07:00] Electra Energy Cooperative, and Nuhu, a journalist here at the investigative news outlet Solomon. Because we wanted to highlight that experience, that local experience, and that local expertise, but also engage with the community. This was the reason why we chose to host the CRAFT session outside of FAccT, at the ENA Institute, where people could actually join, you know, the wider public could actually join without having to pay, you know, which for some people is prohibitive, if I may say so.

Charis: I think it was a resounding success. Mm-hmm. Because after the panel, I was very happy to hear comments, especially from Greek people, telling me, you know what, we had never heard any of this. You know, it was eye-opening. And in that sense, I think this is what we're here for, and this is what we were aiming for.

Charis: But the Greek state, because of the financial crisis, had to, or chose to, you know, it's always a matter of political will, right? The political willingness. So they chose to [00:08:00] pursue a certain path that included a lot of privatizations, a lot of, you know, liquidation of resources, natural resources, assets, in a way that was not necessarily inclusive or democratic, and at the same time precluded investments in other types of infrastructure.

Charis: In that sense, it created a very welcoming environment, also by amending regulations to invite foreign investments. And because, in the context of this discussion, we're talking about digital infrastructures, that means data centers, Microsoft's data centers in this very specific case, as a way to quickly access these global value chains of technological production.

Charis: So I would say that this was a key decision that was made at the expense, of course, of other, again, more locally oriented or community [00:09:00] oriented and diverse investments that would potentially benefit the public interest further. One other thing that I want to say with regard to the local context is the framing. The framing of these investments, the framing of this whole rhetoric around digital sovereignty and digital transformation, which is that Greece was often painted as a digital laggard in need of quick modernization, again, digital transformation.

Charis: And in order to achieve that in an expedited way, they had to go through Microsoft or Google or, you know, the large tech giants, which is paradoxical in a way, because at the same time, this comes into contradiction with, let's say, the EU ambitions for digital sovereignty. So I think this is quite an interesting tension to further unpack and further explore.

Charis: And this is, yeah, what we also tried to do, uh, yeah, in yesterday's discussion.

Soizic: What you were saying yesterday is that under this framing, Greece is actually not [00:10:00] advantaged, and that either foreign companies or even bigger European countries are gonna be advantaged by this framing, because they have the investment power and they have the resources.

Charis: Precisely, yes, absolutely.

Soizic: And so concretely, what's happening in Greece? What kind of digital infrastructure is being built in Athens, yeah, and in Greece in general?

Charis: The main object of our inquiry, for the paper that we published recently with my colleague from University College Dublin, Eugenia, was basically the investment of Microsoft, which is called

Charis: GR for Growth. Again, a very poor word play in my opinion. It includes, among others, the construction of three data centers in the broader region of Athens, Attica, the upskilling of a hundred thousand employees, public or private sector, and also the upskilling of primary school students. Always, of course, in [00:11:00] the Microsoft ecosystem, right?

Charis: So this already creates some sort of implications for, you know, lock-in effects, dependencies and whatnot. And very specifically about the data centers: by putting in, you know, these energy-hungry infrastructures, which do little, at least in the current paradigm, to give back to the community or be more sustainable, you put extra strain on these already degraded and old infrastructures.

Charis: So to make the point very concrete, as an example, we read Microsoft's impact assessment reports, which they're mandated to publish before the investment. And they actually say that during the data center's operational phase, the largest data center of the three, which is approximately 19.2 megawatts, is expected to consume energy equivalent to up to 82% of the total electricity consumed annually in the municipality that is going to host it.

Charis: And [00:12:00] during its initial phase, it's gonna consume double that. So you can imagine that it's already quite a lot, and of course it's gonna demand even more. And one last point here: in the impact assessment report, based on our empirical study, even in the most optimistic scenario that they lay out, because this is, among other things, what they do in the impact assessment reports, they still highlight that it's gonna have serious negative consequences.

Charis: But then again, of course, they proceeded, having sidelined these concerns and even proposals for making these data centers more sustainable.
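To make the scale of that figure a bit more tangible, here is a rough back-of-envelope sketch in Python. The 19.2 megawatt capacity and the 82% share are the numbers Charis quotes from the impact assessment; the continuous-load assumption and the resulting totals are ours, purely illustrative.

```python
# Illustrative only: the 19.2 MW capacity and the 82% share come from the episode;
# the continuous-load assumption is ours, not from Microsoft's impact assessment.
CAPACITY_MW = 19.2              # reported capacity of the largest of the three data centers
HOURS_PER_YEAR = 8760
SHARE_OF_MUNICIPAL_USE = 0.82   # "up to 82%" of the municipality's annual electricity

annual_gwh = CAPACITY_MW * HOURS_PER_YEAR / 1000             # ~168 GWh/year at full load
implied_municipal_gwh = annual_gwh / SHARE_OF_MUNICIPAL_USE  # ~205 GWh/year implied

print(f"Data center draw at full load: ~{annual_gwh:.0f} GWh/year")
print(f"Implied municipal consumption: ~{implied_municipal_gwh:.0f} GWh/year")
```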

Alix: That's cool. I'm really glad that there were Greek researchers there, 'cause I feel like that was a really important addition last year at FAccT too, where there were Brazilian presenters. It was always so much more interesting to talk about real on-the-ground impacts. And it felt like, because, I don't know, a lot of these conferences can feel placeless, [00:13:00] and I feel like by centering them in an actual place, and having people from that place actually speak at the event and share how they're conceptualizing things, it almost always ends up being more physical, more about actual people, more centered in a place.

Alix: Yeah, it's super interesting to hear his thoughts on Greece's role in all of this.

Georgia: He added some recent historical context, talking about the recent financial crisis and how that kind of set everyone up, or rather set politicians up, to think that they needed to make themselves available and friendly to foreign investment, which I think put them down a path of defaulting to privatization, which has now led to all of this data center development

Georgia: stuff, which is the last thing they need, maybe. But yes, Soizic, did you have anything to add?

Soizic: So one of the difficulties of these sessions is that they were so packed with great stuff that it was hard to translate everything into one podcast episode. But as part of the session, a journalist named Lydia Emmanouilidou, who works on migration technologies and who has [00:14:00] worked on the Greek context, was there talking about that and showing how narratives of digital sovereignty, infrastructure investment, and, actually, the end goal of militarization are really connected.

Soizic: And one thing that I thought it did really well was tie global trends to a local context. And Greece is super interesting for that, because you can really see clear links between things that don't seem connected, like data centers, and then the securitization and militarization end goals, which are sort of hidden agendas, as another researcher, Ra, uh, was mentioning.

Alix: I also wanna pick up on what Georgia was saying about the financial realities for a lot of countries. And it feels really connected to what's happening in the UK and other nations that are similarly trying to get out of this, like, lack-of-growth situation. Um, and it's actually something that came up in the data center report that the Maybe team put [00:15:00] together: the idea that national narratives, national economic narratives, drive

Alix: not just speculative economic activity, but make governments much more likely to be like: take everything that we love about our country and please grind it into the ground with your data centers. Take our water, take our land, take our lovely pastoral, I don't know, whatever else you got that's good.

Alix: Like the green belt in the UK, et cetera. Put it in the grist mill of growth. And that isn't necessarily what people want. It's not necessarily what local communities want. It's not even something that necessarily makes very much sense, but there's this kind of government panic of both "we need growth" and "we need to show our populations that we're thinking about growth" at basically all other costs.

Alix: I hadn't connected in my head that, of course, every Greek government is having to kind of come out of the shadow of the financial crisis and be like, we're grownups, we know how to manage an economy. And being a grownup right now, for a member of government, is like: [00:16:00] I will sell the soul of my nation for the digital economy.

Georgia: I think it really, really connects with, um, lots of themes you were picking out, and with your conversation with Maurizio as well about how digitization means privatization. Similarly with your conversation with Bianca Wylie from all the way last year, about the general lack of confidence coming from governments, local and national, and how tech companies are coming in and using that to their advantage, using it to juice the narrative that they are the ones who can come and save them from their economic despair.

Georgia: And also like the other side of it is you don't care about your citizens unless you do everything you just described.

Soizic: There are also interesting links between what you just said and the discussions that happened during the session on tools and tactics for AI environmental action, 'cause one of the co-organizers, Ben Snaith from the ODI, was mentioning the power imbalances that you have, for instance, as a local government, when

Soizic: someone comes up to you and asks you to build a data center in [00:17:00] your community. If you are a rich local government, you actually have lots of other sources of income, and so you don't really need to say yes. But poor local governments have different incentives and different challenges to tackle, and so it's interesting to see that dynamics you can see at the local government level are really reflected at the level of Greece.

Soizic: And also keeping in mind, it's like we often think of Europe as one big entity, but realizing that Greece is definitely not in the same position as France or Germany within Europe and within what the EU is imposing and asking was really interesting and super important to take into account to just have nuance in the conversation.

Georgia: That's a super good point, and that leads nicely into our next clip, which is from the other Georgia, Georgia Panagiotidou, who ran the session you just described. So let's hear from her now.

Georgia P: My name is Georgia Panagiotidou. I'm an assistant professor at, uh, King's College London. My work is mostly on human-computer interaction and sustainability, and how those two interact.

Georgia P: So it all [00:18:00] started after having a chat with Tamara and Ben, the co-organizers of the CRAFT session. We ended up kind of exchanging ideas about our work and realized that we have some overlaps when it comes to agency, and who has agency, in terms of environmental action. Ben works a lot with communities, so does Tamara.

Georgia P: I work mostly with developers and machine learning developers. So we thought it would be a nice opportunity to move away from all this data that is being collected, uh, not just the data, but move away from just discussing it, to see what kind of activities and action can be taken, to support environmental action and more collective action.

Georgia P: So that's why we said, okay, let's put a CRAFT session together. And FAccT seemed like a good place to do that, because it's quite interdisciplinary and it has a kind of open format. So then we invited the other co-organizers as well, to have a more journalistic perspective, some from art, some from activism as well.

Soizic: And so the idea was to say, we are collecting all this data and now what do we do?

Georgia P: Yes, exactly. What do we do

Soizic: with it? What

Georgia P: do we do with it? Can [00:19:00] we figure out what tactics the different kinds of stakeholders in this problem are actually using? And can we exchange ideas in a workshop setting, to see how we can build the future better?

Georgia P: It sounded very cheesy, but how to actually, basically, be able to adapt and fight back in these situations with data centers and machine learning's use of resources. My main takeaway was, it's a bit obvious maybe after the fact, but how much there's a necessity for such interdisciplinarity. So you could see different people were talking on different levels.

Georgia P: Some participants were talking about very concrete ideas, like we should do caching of prompts so we don't re-run inference, since inference uses so much energy, and others were really talking about policy-level discussions, and then communities in between all that. So it was the first time I've been in a setting that was so

Georgia P: diverse in terms of bringing the same topic but from so many perspectives. It would've benefited from another two hours [00:20:00] of discussion, probably. So that was my first takeaway. I think the second biggest takeaway is that we need to make more bridges between developers and communities. There are some gaps in the research, and some gaps on the action side of things, of how you go from a tech worker working in a team and developing something, to a community living next to a data center and being impacted by that work.

Georgia P: So there's a big gap there and we need to do more things there.
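One of the concrete ideas Georgia mentions, caching prompts so repeated queries don't trigger fresh inference, can be illustrated with a minimal sketch. This is an assumption-laden example rather than anything shown in the session: `run_model` is a hypothetical stand-in for whatever inference call a team actually uses.

```python
from functools import lru_cache

def run_model(prompt: str) -> str:
    # Hypothetical stand-in for an expensive (energy-hungry) inference call.
    return f"response to: {prompt}"

@lru_cache(maxsize=4096)
def cached_run_model(prompt: str) -> str:
    # Identical prompts are answered from memory instead of re-running inference,
    # avoiding the repeated forward pass and the energy it would use.
    return run_model(prompt)

# The second call is served from the cache rather than the model.
cached_run_model("Summarise this policy document")
cached_run_model("Summarise this policy document")
```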

Soizic: This echoes also something Tamara and Ben Snaith mentioned during the workshop: how having tech workers and local communities work together was one of the most impactful things you could do, because you move away from thinking about environmental impacts as carbon measurement and towards thinking about actual human impacts on people, full stop.

Georgia P: Yes, I really like that. And it was echoed at the other table as well. The discussion originated from looking at responsible AI tools and how we can [00:21:00] bring sustainability, environmental sustainability, into the discussion of these various responsible AI tools that exist out there.

Georgia P: And I think it kind of ended up being: I think we shouldn't go in that direction, that's not the way to move forward. These tools have been critiqued for their solutionism, in a way, and their oversimplified nature. So there need to be more localized solutions, more solutions that work with the actual communities, which might be different from what has been done so far in responsible AI.

Georgia P: I was thinking afterwards, like, should I be depressed after the FAccT CRAFT session? But, um, I think it's quite the opposite. Rather than making me feel like the work I'm doing is useless, it made me feel like, first, I situated my work at a larger scale. I still believe that the measurements are necessary, the transparency is necessary, the tools are necessary, but only in the context of the larger communities that might end up using them, and in making these connections with [00:22:00] policy and everything.

Georgia P: So first it helped me situate where I am. And second, I actually asked that explicitly of Aurora and some of the other participants: how should computer scientists work on these things, right? And the response was, we need funding, right? Like, activists don't live on air, right? So me, as a computer scientist, who might end up having more access to some resources for one reason or another, can build together with communities and help channel some of that there.

Georgia P: So that was my takeaway, for my personal work at least. It's not a first step; I'm sure there have been other kinds of sessions like this before. I really enjoyed the session and the interdisciplinarity, and the fact that we could be hybrid, with so many people coming in from other places. I just hope it builds upon this.

Georgia P: I see it as a stepping stone, basically. It's definitely gonna impact my personal work and my research group's work. I really hope to see more collaboration across the people that were in the room.

Soizic: The session was about whose responsibility is [00:23:00] environmental sustainability in AI research and AI applications, and who has the power to act.

Soizic: And so the session brought together different people, researchers, activists, artists, to explore this question and share tools and tactics that they've used. And then there was a long, really interesting discussion with participants on what they've been doing, and the challenges and kinds of tensions we could see

Soizic: in the space. And there were many cool things that surfaced. So Georgia specifically comes from human-computer interaction, and she focuses on tools to measure carbon impact and on how AI developers can use those measurements to think about the issues. And so that's kind of a, quote unquote, responsible AI approach to this issue.

Soizic: But then alongside that, you had, for instance, Aurora Gómez from Tu Nube Seca Mi Río, which is a Spanish organization, or really a campaign, [00:24:00] working in La Mancha, which has suffered a lot of problems from lack of water, due to data centers, and not only due to the data centers. And so Aurora was sharing how they've built a movement in Spain to make this issue a real issue, and how they've picked water as the focal point, because water was already a big issue and has just been made worse by data centers.

Soizic: So it was just a lot of different perspectives put together to explore this issue. There were two things that came up in that session that were really interesting, talking about environmental impacts and AI action. The first one is that a lot of the workshop was focused on how to obtain, explore, and use information as a way to make impacts more concrete, and on who should gather this information and who should use it.

Soizic: Also navigating between quantitative data and qualitative data, and how you bring together data that the community can use to shed light on where they live, in [00:25:00] a way that gets more attention than just saying "we think something is wrong." And so a lot of the session was focused on the lack of data, which I think has come up in a lot of our conversations and also in our research on data centers.

Soizic: It was interesting to hear feedback about whether toolkits and responsible AI tools around measuring carbon impact were even useful. And it was really reminiscent of the discussions that we have been having around fairness toolkits and like all these interventions that are not necessarily conducive to collective transformative action and are rather like patches that you use at one point.

Soizic: And that really just give you numbers about nothing. And so it's interesting to see that these discussions we've been having around other aspects of AI politics and AI policy are now seeping into environmental issues; participants really brought that up. We'll hear more about that in the second episode, but [00:26:00] it's really a question of what these tools can and can't do, and how you then move into more collective organizing, and the difficulties around that.

Charis: Greece plays this strategic geographical role, and by extension geopolitical role. And the needs for AI, or rather the needs for computing power, are becoming larger because of the proliferation of solutions, technological solutions, digital services, tools and whatnot, that are enabled and enhanced using AI.

Charis: Of course, this is gonna require more computing power, and this is gonna require better and more reliable connectivity solutions. Enter the subsea cables, offering lower latency, for whatever purposes, from military to streaming services. Greece is, I would say, now taking the baton from Marseille, becoming that hub that [00:27:00] connects the emerging markets of the Middle East, Asia, and Northern Africa.

Charis: And I think, because I've been following this very closely, you also have more infrastructure, more data centers being invested in by Amazon, by Google, by Digital Realty. Big players in the field are now expanding their infrastructures here in Greece, precisely to have access to these emerging markets.

Soizic: It's fascinating. And I think, to go back to the title of the session, the hidden costs of digital sovereignty, you are really emphasizing that the promise of benefits for Greece actually obfuscates a lot of damage to local communities, even to the local economy potentially; that behind the promise of something better, we're just

Soizic: mindlessly, without a plan, moving forward with a lot of [00:28:00] infrastructure investment, while Greece just becomes more and more strategic in the region.

Charis: Frankly, I couldn't have put it better myself. But, uh, one thing that I will say is that I'm not sure it's without a plan. I think this plan is deliberately and strategically, you know, conceived.

Charis: And this is evident in the way that these developments, including these investments, are being framed. And this is why we think that it is very important to always pay attention to the discourse. So, for example, when the Greek Prime Minister, Kyriakos Mitsotakis, says, discussing Microsoft's data center investments, that this is a leap, right?

Charis: This is gonna be a catalyst for Greece to enter the 21st century and be on the winners' side of the so-called fourth industrial revolution. This is a deliberate decision made to integrate Greece into the global value chains of Microsoft, [00:29:00] of other big tech companies, in order to achieve, you know, anything ranging from short-term electoral benefits all the way to securing that

Charis: Greece becomes this welcoming and inviting environment for these kinds of investments, which, as you also put it, sideline any kind of concern for the local population, the local interest, the local communities. And I think that we try to emphasize also that, you know, when you're integrating Greece, and any other country really, into these global value chains, which are at the same time being used for warfare, including apartheid in Israel or the genocide in Gaza, then it is absolutely worrying.

Charis: There is no concern about this. If I may just quickly digress a bit here: oftentimes, especially when we talk about product safety and, you know, regulations regarding products, [00:30:00] we think a lot about supply chains, right? We think of having clean supply chains, whatever that means, right? But then we don't really think about that in the digital aspect.

Charis: So what does it mean if I'm making use of Microsoft Azure, or the cloud computing infrastructure of Microsoft, to, I dunno, send emails? I'm just gonna go very, very simple here, right? But then at the same time, this infrastructure has a dual use. If we look at the protests of Microsoft employees in the US, at least very recently, who practically said, and this was not directly denied,

Charis: there are publications, there are reports about this in The Guardian, that this same infrastructure is currently enabling the IDF's operations. Then why don't we think about that in terms of having clean, you know, supply chains? The military, industrial, and digital complex goes hand in hand. It is mutually reinforcing, and it will only get worse as AI is integrated into those supply chains.

Alix: It's so interesting, this whole transparency and accountability framing, as if transparency leads [00:31:00] to accountability. 'Cause I feel like we don't even have transparency, and so then talking about accountability, it's like, wait, the first step is transparency and data access. And then it's like, okay, so now we're gonna spend time figuring out how to advocate for transparency.

Alix: And then I get a little bit concerned that by the time we get the transparency, it's challenging to move at a quick enough pace to actually go for accountability. And I feel like that's the case in, I mean, as you said, basically every context of AI politics: we don't even know enough about what's going on.

Alix: There's not enough leverage to actually get them to disclose anything. And it's like, I think of Sam Jeffers from Who Targets Me: from the time that he was like, it's bad that social media companies are allowing microtargeting of political ads to individuals without us knowing what those ads are, from the time he identified that problem,

Alix: and I was like, damn, that's so true, that is such a problem, we should have transparency around that, to that becoming, like, the Google ad library, I think is what it's called. [00:32:00] Now there's a dashboard where Google basically shares the ads that people use in the process of political campaigning.

Alix: In Europe in particular; I don't think they do it in the US. That was like ten years. And then I'm like, I don't know, there's this lag from transparency to accountability, or even from advocating for transparency: ten years later you get some transparency, but by the time you do, I don't know, it's just a bit depressing as a formula for change.

Georgia: I agree. It's like we've taken all these steps to get to some level of transparency and then because that takes so much time and so much energy, you are then kind of like, I don't even know what accountability looks like for this kind of thing at this point. It's just like, sure, we all know what's going on, but like how do you hold giant corporations with like huge lobbying powers accountable?

Georgia: It feels insurmountable. It's really good when you have some people in conferences like these who are trying to like bring people together and like find shared overlaps in people's experiences to then build a bit of solidarity and then hopefully build some movement.

Soizic: If you want to be inspired about this,

Soizic: the work of Aurora Gómez from Tu Nube Seca Mi Río [00:33:00] in Spain, and what they've been able to accomplish by putting data centers on the agenda in Spain through relentless media outreach, and also not only asking for more data but mobilizing the community around demanding accountability, is just a really good way to not feel defeated by this transparency that's gonna take ages.

Soizic: 'Cause they've been doing a lot of things and accomplishing really cool stuff.

Alix: Yeah. It's not to say that they're only advocating for transparency. I just find that those processes are usually treated as linear, and I think it's great that they're doing stuff alongside the pursuit of transparency.

Georgia: Uh, I think we'll move on to David now.

Georgia: So David Widder is gonna talk about, I don't think we need to introduce him too much 'cause he's gonna introduce himself, and he will talk about his session on what he's referred to as the love triangle between the military, big tech, and academia. So yeah, let's go into that now.

David: Hello, my name is David Gray Widder and I'm an incoming assistant professor in the School of Information at the University of [00:34:00] Texas at Austin.

David: The theme of the tutorial was, as I call it, the love triangle between the military and big tech, certainly, but also, given that we're at an academic conference, talking about the role of academics as the third member of that love triangle. Tina Park was a wonderful moderator for that and did a lot of the logistics work.

David: Joshua Kroll, he is at the Naval Postgraduate School, an institution you won't often see on the same paper byline as me, but he thinks carefully about how military processes are shaped by AI. And also Shazeda Ahmed, who is a wonderful scholar of China and the geopolitics of AI, and who I've learned so much from about the rhetoric of great power competition and "beat China" and all that.

David: Tina and I were talking, like, at the end of a conference, fourth day, last actual session: is everyone gonna show up? And the room was almost full. That made me happy. Each time a new group of folks sort of started walking in the door, I was like, come [00:35:00] on in. That really was cool. And actually what really surprised me was that when we took a poll about, like, to what extent do you think about military implications of AI or your work, whatever,

David: it was actually split: 50% saying a lot and 50% saying not much. Yeah. And I was expecting the people to show up who already care about this, but the fact that a lot of people who don't think about this showed up made me really happy, and made me actually think a little bit differently about strategies going forward.

David: I just learned that the department head of the machine learning department at Carnegie Mellon recently said on mic that if you can use ChatGPT to plan your kid's birthday party, why shouldn't the military be able to use ChatGPT to plan a bombing run, or something. Interesting. Yeah, yeah. Right, exactly.

David: But this really teaches me that, wow, this narrative that Shazeda was talking about is so powerful, because it helps to enlist the academy, the American universities, as members of this fight. And I think that's all backwards. I mean, call me a peacenik or whatever, but [00:36:00] maybe the universities are the space where we

David: show that this isn't a fight. I have a lot of Chinese friends who are also skeptical of big tech in China, or, you know, whatever. And we should see, as Shazeda pointed out, who that rhetoric benefits: it's there to sell more AI on both sides.

Soizic: Yes. Right. Yeah.

David: Yeah.

Soizic: Um, and you were also presenting your research?

David: Absolutely, yeah. This work was with my co-authors Sireesh Gururaja and Lucy Suchman. Sireesh is an NLP scholar, brilliant guy, knows a lot more than just NLP, but is doing his PhD in it. And then Lucy Suchman, an esteemed professor of anthropology, particularly the anthropology of technology. And together we used a dataset collected by Dan Pacheco of

David: DOD grants for AI, or grant solicitations for AI. And what that means is, at least in the US, a lot of research is funded through the military, also through the NSF and private corporations, but here we're focusing on the military. And so the solicitations are where the [00:37:00] Department of Defense, the military, um, goes and says, we want research to do this.

David: Please propose research, researchers at universities, to do these things. So it outlines sort of their goals, what they hope to use AI for, and what their dreams of AI are, 28 years or so in the future, I don't know, something like that. We filtered the full dataset down to a dataset of 7,000 military grant solicitations for AI and asked what we could learn from that.

David: And what I presented today was one of the sort of three main findings. When I was at Carnegie Mellon, I'd like to just talk to my colleagues: it's weird that half of our funding comes from the DOD or, uh, Department of Homeland Security, how do you feel about that? And they'd have various answers. And I'd ask them, hey, do you accept military money?

David: On average, about half of them would say yes, right? And the conversation would go something like: well, I only accept funding for basic research, pure science, dual use, whatever; I'm advancing science; or maybe I'm using this money from a place I don't agree with to advance science, which I do agree with. And I think that holds a lot of weight, and it's hard to sort of know how to push back against this.

David: And so I was really excited to look at what the basic [00:38:00] research solicitations in this dataset are, and to go: maybe this is literally true, what they're saying, that you can use basic research money to advance science, but what are the ways in which this might not be true? Or what are the ways that this story is not as simple as perhaps I thought, or as simple as perhaps they thought, even for the very, very blue-sky grants, literally using the words "blue sky," ambitious research?

David: I believe in the solicitation for the Vannevar Bush Faculty Fellowship, one of the main faculty fellowships the DOD provides, they're sort of making these claims around advancing fundamental research, pure science, fundamental research as an engine of economic progress. And actually then we get to see, then they start talking about

David: Providing a, a wealth of talent for the defense industry and like, oh wait, this is starting not to sound very basic. And so my point is not so much that lasers are bad or GPS is bad because they were funded by the DOD, but more that like the kinds of things we choose to imagine are inherently shaped by the funding we have available.

David: And the DOD really isn't going to fund anything that doesn't have at [00:39:00] least useful defense purposes, even if they then have other purposes. And that has a shaping effect on our field. Things which have more research funding, uh, get more attention, more people work in that area. Even if they don't have that research funding, they become the hot thing perhaps.

David: And so we should be really careful about how, like that so-called basic research funding implicitly shapes the incentives and norms in our field.

Soizic: Yeah, that makes a lot of sense. And it's also interesting that you were talking about funding creating and narrowing down imaginaries, and Shazeda was talking about narratives.

David: Yeah. I really like that.

Soizic: Yeah. And so it's really interesting, and probably needed, for the field to be thinking about research in those terms and to think about the broader implications. Yeah. What was your motivation in organizing this tutorial?

David: That positions me as an organizer; that was actually Tina.

David: Okay. Um,

Soizic: what was Tina's motivation? Yeah, right. Sorry. What was

David: All of our collective motivation? She did the organizing work, is what I meant, but we are all organizers in that we wanted it to happen. The motivation in [00:40:00] this is, I think, a lot of funding apparatuses are very opaque, and perhaps at FAccT talking about military research is taboo.

David: And I really am glad we made a space where we could make something slightly less opaque and slightly less taboo. I think that even if we don't agree on the bigger issues, of whether or not military contractors should be sponsors of FAccT or whatever else, I think we can agree at least that this kind of research should exist at FAccT.

David: And I think that hopefully we see more and more people at FAccT, and otherwise in computing communities and research communities, starting to see this as legitimate research to do, and a place that will have a platform at FAccT and elsewhere. I think also it's important that at, again, the world's largest AI ethics conference, we discuss the pointiest end of AI, which is, as I see it, Project Lavender, those systems which help Israeli war planes drop American bombs on Gazan children. This is the space,

David: if ever there was one, for that conversation in an academic [00:41:00] context. To ignore it, to not have it, would be wrong.

Soizic: You spoke up against, yeah, the sponsorship of the conference. Do you want to just share about that?

David: Yeah, sure. I was kind of blacking out from nervousness when I spoke at the town hall. We opened this conference, uh, briefly mentioning in the keynote Trump's, uh, attacks on scientists, especially in the US. Um, I think that's good.

David: I also mentioned that Google and Microsoft paid millions of dollars into Trump's inauguration fund, um, voluntarily, in that context. Um, and this is of course amid my country, uh, the United States, providing bombs to Israel, and atrocities that a former Israeli prime minister has called war crimes, and what the International Court of Justice has said plausibly constitutes a genocide. And Microsoft and Amazon are, uh, contracted with the US and Israeli governments, providing essential cloud infrastructure enabling this war.

David: To put this plainly: Microsoft and Amazon are military contractors, and I believe that military contractors have no [00:42:00] place sponsoring the top AI ethics conferences in the world. That's my conviction, and if you feel similarly, I invite you to stand if you want. My friend told me afterwards that about half of the people stood up when I asked if we thought we should think more about the role that military contractors such as Google and Amazon and Microsoft play in FAccT,

David: like, one of the biggest, best AI ethics conferences. It seems like a contradiction. Apparently about half the room stood up, agreeing that that's something we should think about, and that was way more than I expected, right? And the fact that so many people showed up to the tutorial who are not actually thinking about the issue, per our poll, made me think: I thought this was maybe a little bit more about rallying people who already agree with this to be vocal, but it seems like more people than I thought are at least caring, or thinking a little bit.

David: And maybe it's more about reaching more of those people. Mm-hmm. Or maybe reaching those people specifically. I just return to my point, and I think this resonates with a lot of people: that the top [00:43:00] AI ethics conference should not be sponsored by military contractors during a genocide. I, and a lot of other folks that I work with and don't work with, uh, are skeptical or concerned about the role that big tech broadly has in shaping AI ethics outcomes.

David: I spent a lot of my PhD understanding how individuals in tech companies have agency, or feel agency or feel responsibility, to act on ethics. And what it comes down to is, they don't wanna lose their jobs. And so the range of ethical agency they have is narrowly constrained by what their company gives them.

David: And so I think something similar happens in academic spaces in more subtle ways, perhaps, as others have written about. And so I and others think that we should at least think much more critically and skeptically about the tech companies we're hoping to hold accountable sponsoring the venue in which we

David: hope to hold them accountable. Maybe we disagree on some of them, but I think the first ones should be the ones that are providing support [00:44:00] to rising fascism in my country by, for example, donating to Trump's inauguration fund. I believe that was Google and Microsoft, donating a million each, I believe. And it seems like we're all agreeing that it sucks that our jobs are under attack from this administration, and yet we go to a conference that's paid for by the same people paying for the attacks.

David: It seems contradictory. So maybe we start there. Maybe we also start with the companies that are providing what are wrongfully called general-purpose tools, cloud computing infrastructure that enables what is increasingly recognized as a genocide. So this word is used a lot, but the ICJ said it's plausibly genocide.

David: A former Israeli prime minister said that these constitute war crimes, and these companies are supporting that. I'm an institutionalist in some ways and not in others, but one way in which I am is that I come to FAccT. FAccT is a great space. I believe it is the space, if ever there was one, to build a critical vision for computer science.

David: And for that, I just have to say thanks [00:45:00] to those who do so much unpaid labor to make this happen. I am always nervous when I get on the mic to talk, just because I'm a little scared. And I think often I can come off, to some at least, as trying to somehow impugn the motives of those who have done all that unpaid work, who are often my friends and colleagues.

David: So just a shout out to them, I don't always get it right, and I am always grateful for those who tell me that. So

Alix: Thank you, David. Yeah, David last year also organized a town hall, kind of during a lunch thing, about No Tech for Apartheid, and just the general discomfort with participating in a conference where you really enjoy a lot of the research, a lot of the researchers, you find it a really valuable space, and yet it's sponsored

Alix: by companies who you really don't wanna be affiliated with or associated with, and, in fact, a lot of your research shows that those companies are perpetuating things like genocide. I'm really glad that he did it again this year in a more formal [00:46:00] forum. How did it go? Do you guys wanna say a little bit about how it went?

Alix: 'cause I presume you were, you were there.

Georgia: It was kind of like the chairs sort of like went through their kind of like, this is us being transparent and this is what we spent the money on and blah, blah blah. People just like lined up behind microphones to say their bit. David made sure he stood up first.

Georgia: He also, I will note, very proactively went around to people in the days leading up to this, during the conference, and said, I'm gonna say this at the town hall, I'm gonna call them out for taking funding from military contractors, and I'm gonna ask people to stand up, and when that happens, can I rely on you to stand up?

Georgia: And obviously we were all like, yes, yes, yes. And I would say he talked, lots of people clapped, as you heard in the recording, and then like 90% of the room stood up. I mean, he only said half, but he spoke for a little bit more, and then the general chair kind of told him to stop so he could yield time for other people.

Georgia: He was really, really nervous having to bring it up. And I think something that anyone with eyeballs can see is just, why aren't we talking about this? And it's a shame that he had to, that anyone has to, put that amount of [00:47:00] emotional and nerve energy into having to call this stuff out.

Alix: It just must be really challenging Also because he was confronting people who he probably respects and has done projects with and they're like fellow academics and also academia is such a conflict averse space. There's not that many academics that like wanna necessarily roll up their sleeves and like hash it out.

Alix: But it also felt interesting happening around the same time as what Abeba Birhane has been going through. I don't know, have you guys followed this?

Georgia: Very vaguely, very lightly. Yes.

Alix: So similarly, going to an AI for Good conference at the UN, you wouldn't expect conflict. You would think, I don't know, lots of well-intentioned people wanting to talk about how certain technologies might make certain problems easier.

Alix: And there's always this like grift there too though, where there's like a bunch of companies that sponsor these things to try and like on the one hand do some dirty work and on another hand be like, but we're also like, I don't know, offering free nonprofit licenses for now for like people to use AI systems.

Alix: So Abeba was invited to give a keynote, and [00:48:00] she focused on the role that major tech companies are playing in enabling genocide. And in the run-up to her going on stage, which would be really stressful, I don't know, I don't give that many talks, but when I do, it's like that hour before, you're in your head, super stressed, and I'm sure David was too when he was about to have to stand up, they pressured her to take stuff out of her slides, to basically delete content around genocide, around Israel, and around some bizarre Meta stuff also that had nothing to do with genocide.

Alix: She then gives the keynote and then it's like, I don't know. That would be a really terrifying thing to do. And it's not to say that there weren't people that were part of the open letter that she was sharing in her keynote. It's not that there weren't the community of people around her, but like the fear that comes with like using those kinds of platforms and taking those kinds of moments to like say the thing that is so uncomfortable even though everyone in the room knows it's true.

Alix: Like, that's the other thing: they know it's an obvious thing that is true. And being the person that says it out loud, I feel like, I don't know, I think it'd be terrifying.

Georgia: Yeah, agree. And I really respect David for what he did, because for whatever reason these academic spaces, I think they have the same mentality as tech industry people sometimes, with the kind of, no, we have to remain neutral and we're just doing research.

Georgia: And it's like basic research. So it doesn't have anything to do with, I don't know. And I think he's actually like using his research muscle in his activism, which I don't think you see very much or enough. So I thought that that was really great.

Alix: I'd say it's neutral in those spaces, and really positive in the AI for Good space.

Alix: So maybe Abeba had it worse. We don't have to compare, but everybody's like, let's talk about AI for good, and she's like, well, let's talk about it not being used for war crimes, which is a bit of a vibe shift. Um, whereas I feel like FAccT is so critical that being critical makes sense, but I still feel like it must be really hard.

Alix: How did you guys feel about critique at FAccT? [00:50:00]

Soizic: I think there's a lot of critique happening in corridors and during breaks as well, of people reflecting on what's happening with the community, how the community is evolving. Yeah, Georgia, you were nodding. I don't know about general critique; I know people are thinking critically about the FAccT space.

Soizic: There are tensions that I feel are really difficult to resolve, like the big tent aspect of the conference makes it difficult to reconcile different groups that are there. And so I don't know if there's a possibility to like bring, actually bring everyone together even though it is helpful to have everyone under one roof.

Soizic: Same with David and everyone else who was mobilizing, speaking up against the funding. It's difficult because you're speaking up and people are expecting you to propose a solution, be like, okay then what should we do? And there's no obvious solution to this because big tech funding and all these applications are so entrenched in the field.

Soizic: And so it's like you shouldn't have to speak up with the responsibility to offer solutions. I think it's totally valid. Like [00:51:00] that's what David was doing, just asking for the space to discuss it without putting an action plan forward because it's such a complex question that we should just take time to address.

Georgia: I think also, in the context it was in, it was just kind of, why on earth would he have concrete recommendations in a town hall meeting where you're just sort of getting up and expressing your concerns and wanting to share, which is the space that they said they were gonna give? And he was cut off a little bit, and it was a bit, you know, awkward, intense.

Georgia: Which I don't think is, that isn't the vibe that they should be bringing, even if they do disagree. I don't know, it was just not ideal, not very community vibes. Yeah, I dunno, I don't even know what to call that.

Alix: Back to Abeba quickly, I don't know if you guys saw this paper that she co-authored a couple years ago about the proportion of people presenting at machine learning conferences that come from industry versus academia or civil society, and how the proportion has basically entirely flipped.

Alix: So it used to be that 25% of people coming to those conferences were from industry and like 75% were either [00:52:00] academics or from civil society. And now it's basically the reverse: like 75% of people presenting at those conferences are from industry and 25% aren't. And I may be getting those numbers wrong; I'll dig up the paper and we can put it in the show notes.

Alix: But, um, I think that also affects things, because there's a lot of people there who are paid a salary by these companies. And I imagine it's just really uncomfortable to have a conversation. They know that it's true that the companies they work for do stuff that they probably disagree with and disapprove of, but there's less and less space for them to say that out loud in those companies.

Alix: And there's also less and less incentive for them to engage in those kinds of conversations, 'cause it's like, I don't know, we're at the point where we see these companies clearly, and if you work there, it must feel not great. And if you're the majority in those spaces, I imagine it gets harder and harder to have those kinds of conversations in public settings.

Georgia: So next we've got Tania Duarte. Something that's really important here that I think is a good connection to make is that the ways that we're kind of like normalizing this sort of [00:53:00] background of like violence and like big materiality that we have in AI is like something that we do within our own disciplines.

Georgia: But I think it also happens more generally with all the narratives that the general population, the people who don't think about this all the time, are just being fed constantly. The work that Tania's doing is trying to build important levels of critical literacy around this stuff.

Georgia: So yeah, I thought I would add her in at the end here, and she's gonna discuss two workshops that she runs and will continue to run in the UK.

Tania: I'm Tania Duarte. I run We and AI, which is a UK nonprofit looking at critical AI literacy, starting from the margins and doing interventions to basically increase public power through a better ability to hold AI to account.

Tania: And then we talk a lot about what is AI, what does that mean, what are the agendas, and the rest of it. We also run the Better Images of AI project, which, if you haven't come across it, betterimagesofai.org is a free image library on Creative Commons [00:54:00] licenses with no white robots or glowing blue brains.

Tania: Pictures that give a much better public understanding of what it is we're actually talking about. Again, it's an intervention for public understanding about AI. So we were here yesterday at FAccT to do a workshop, an asynchronous workshop, which we'd called Sensory AI Boxes: sociotechnical engagements to support marginalized voices.

Tania: A little bit of a mouthful, and what it actually was was two different tables with hands-on activities on each one to get people thinking in a very material way about AI, which is often thought of as being very intangible and immaterial and ethereal and magical and mystical. This was two different interventions.

Tania: One for people with learning disabilities, or we say learning disabled people, and then one for older people. We invited people not to act as participants, 'cause they generally [00:55:00] weren't in these groups, but to come and explore how it's possible to engage people in concepts about AI, even if there are perceived barriers for these groups.

Tania: But it was also nice to get feedback. This is one of the reasons we were really glad that CRAFT enabled us to bring this to FAccT, 'cause we had around the table as participants people who are already teaching computer science at universities and have their own ways of teaching probability through games, which are maybe more advanced than what we're doing.

Tania: But we're keen to make those connections, to think of new ways to explain the things computer science lecturers are probably already teaching, but to the general public. Instead of saying, hey, this is your friend that you're talking to, think of it like your best friend, or think of it like a chef, or an assistant that never gets tired or something.

Tania: We don't need to do that. We can learn from the ways that people are already teaching, and we can simplify them. That's fine, we're not gonna take tests on [00:56:00] it, but we are gonna use

Tania: Tools in a more intentional, safe, appropriate way. So that's the kind of idea of designing from the margins. And so we had a lot on sensory input, on sensory methodology, lots of other sensory elements as well. So people were able to build a lot on self-determination, which, with learning disabled people, is important: to give them choices and not to assume that they don't have them.

Tania: But again, it's a really important concept for everybody, to give people the tools to have self-determination. And quite often you are stripped of that if you don't understand what the actual choices are. So you're being given these hype stories, or these narratives about all the things AI is gonna fix.

Tania: And, you know, it's kind of the clever people doing it, and you just have to come along for the ride and sit back and support it. That self-determination is missing. So again, in designing for the margins, we're thinking about how can we encourage self-determination?

Tania: So with these boxes, they take out an object, a [00:57:00] reference object, which then represents and becomes an activity to teach a specific concept, in this case about chatbots. We built it around chatbots because they're incredibly amazing, but deceptive. And they're deceptive at every stage of the conception and design, in terms of trying to make you think they're human.

Tania: And then the other one was a game to engage older people, although we've used different variations of it in other contexts, which was about the materiality of AI, and this was more like the sociotechnical context of AI. So at We and AI we work on, as I mentioned, critical AI literacy, and here's how I differentiate that from AI literacy.

Tania: I didn't used to; I used to be a real advocate for AI literacy and say, you know, we need to tell people what it is that they're encountering. And so for me it was an empowering thing, but the term has been really co-opted into meaning learning skills to work in AI, learning to be a good consumer.

Tania: [00:58:00] So what we've done with this kind of box, I say box, but it's a set of activities for older people, it's a game, and again, it can be used for different groups. We worked with a researcher, Ella Laura Lima, who has brought in the concept of memory boxes for older people.

Tania: So it linked up with this work and research that she's done on how bringing physical, tangible objects to older people, especially those suffering memory loss, although that's not our group in this particular exercise, really helps to engage different senses and creativity and connect their brains in different ways.

Tania: And of course, again, that kind of methodology can be useful for any of us when we're trying to think a bit differently, or connect with the social or creative elements, or whatever it is. So it's been a really useful methodology. But she's also really looking at how the model of dealing with older people in terms of education is like, oh,

Tania: These poor people, they're getting left behind, we must try and [00:59:00] stop them being left behind. So the interventions we need are to try and make them feel comfortable with technology, to make them feel like they know how to use it, to not be so scared of it, whatever it is. That kind of mindset.

Tania: Well, I'm not saying there's not a role for that, but it assumes that it's their job, again, to keep up. They have no say in, well, do we want to lose cash? It's like, this is just the way it's going, and our job is to help you adapt. But she's been working on trying to get them to reclaim their narrative sovereignty over technology.

Tania: Not only should they still have a say, but there's a lot that we could learn from and listen to. So, what the game does: we'd already worked with the Ada Lovelace Institute, and we kind of developed it on a project for them which was looking at defining public goods. Then we came in and introduced some sociotechnical concepts of AI so that they could put the two together in community groups around [01:00:00] the UK.

Tania: So starting with the public good, then the sociotechnical concepts, and then saying what is the public good. Whereas of course most paradigms are like, hey, we have these tools, how can we use them for public good? Or, we have these tools, they are a public good, go. So, exactly.

Tania: So we used this in an intervention on a project for them, which is published, on defining public good and AI, part of a bigger project they did. And here we developed those activities into a game. So you've got a board with phases introducing the idea of the AI value chain, the process and the materiality.

Tania: So drawing on work from Kate Crawford and James Muldoon on this idea, we'd already done quite a lot on materiality in the Better Images of AI project, which we also run, trying to get pictures, images of AI, that show things like silicon or labor or people at work. And so now we've transferred that into

Tania: Physical objects that people can interact with. So it could be, yeah, a rock of silicon. And then [01:01:00] it's a discussion prompt with cards, because most people will not understand why they have a bit of silicon in their hand. Or, like, crazy Lego blocks all stuck together, which is the one I picked out.

Tania: Yeah, which was the data cleaning one. And we also had objects to represent a miner, to get in the concept of labor, so we've got labor throughout the chain as well. And the water for cooling. That one usually really astounds people; they're like, why have you got a water spray? What can this possibly have to do with AI?

Tania: And then you get into, you know, conversations about data centers and cooling and, you know,

Georgia: it's a really great way of opening up different threads of conversation about something that's really complex. I really liked it, I thought it was fantastic.

Tania: Oh great. Yeah. 'cause if you say to people, okay, let me come and talk to you about data centers.

Tania: Yeah. And then they quite often glaze over. Give them a water spray, and it's, you know, why have you given me a water spray? 'Cause you are prompting them to ask the question, and then you're like, well, I'm glad you asked that, let's go. And they can [01:02:00] feel it, they can realize this water is water that is being drained from a desert, you know, that people rely on.

Tania: And it's real. I have a right to have a say in this system of production, I'm part of the system of production. It's not just for politicians or tech leaders. It's like, this is my planet, I have a right. So it's a kind of empowering, enabling people to realize that this is tangible and material, and we all deserve to know that.

Tania: Here at FAccT, obviously, everyone's very well aware of this; we have people here who work on exactly this. But generally, when we speak to people, they have no idea.

Alix: Yeah, I think that's a really nice one to end on, because I feel like a lot of the methods of academia just aren't sufficient to engage with the kinds of questions we wanna engage with.

Alix: And so I'm really glad, it's actually one of my favorite things about FAccT, that there are these ways of more dynamically exploring information and insights with people. And that might sound like an obvious thing, but I feel [01:03:00] like in academic conferences it's actually quite rare. It's usually somebody droning on about findings in a really long PDF and then

Alix: Sitting down. And it's nice that they platformed, and by they I guess I mean CRAFT, this type of work, 'cause this type of methodological innovation is the stuff that gets me most excited, and it's really nice that that's still happening. So with that, we have another episode about FAccT coming out next week.

Alix: So today was really looking at physical infrastructure, also this question of how the community's engaging with AI and warfare and big tech involvement in FAccT, and also this cool methodological innovation from Tania. And next week we're gonna be talking about how AI screws up understanding who we are as people, because it is really bad at predicting basically anything, and identity is like the most complex thing.

Alix: So we're gonna dig into research and interviews on that topic next week. So thank you Georgia and Soizic for taking us through this, it's super interesting. Thank you to Sarah Myles, who will turn our multi-person [01:04:00] conversation, like eight people in this episode, into something that hopefully was nice to listen to.

Alix: Thank you for joining, and if you were at FAccT and you've got any thoughts or reflections, or feel like we missed anything, feel free to reach out, and we will see you next week.
