The Collective Intelligence Project w/ Divya Siddarth and Zarinah Agnew
Alix: [00:00:00] Hey there. Welcome to Computer Says Maybe. This is your host, Alix Dunn. In this episode I sat down with Georgia and two folks from the Collective Intelligence Project, which is an initiative designed to experiment with democratic engagement as it relates to AI: both thinking about how to include more people in the product roadmap of generative AI and AI systems, and also using some techniques with generative AI to essentially make it easier for communities to set rules for themselves and then have assistance in being governed by those rules.
Basically, to bring democracy closer to communities and community governance. We get into whether or not this makes sense in a context where companies control basically all of the infrastructure, all of the data, all of the products around generative AI. And we also get into a little bit about what it means to interpersonally relate to chatbots and the trends around that, [00:01:00] which I found really disturbing but also really interesting.
'Cause even if we find trajectories of consumer technology a little bit scary or gross, um, I think it's really important to understand them so that we can engage with people where they are, rather than mansplain to them how they should use these technologies. And with that, the interview Georgia and I did with the Collective Intelligence Project.
Divya: I am Divya Siddarth. I'm the founder and executive director of the Collective Intelligence Project.
Zarinah: I'm Zarinah Agnew. I'm research director at CIP.
Alix: Maybe we could start with like the organization's raison d'être, and then hear about like how your work fits together and like how you got into it.
Divya: I started CIP in late 2022.
Essentially, I've been thinking about AI for a long time. It became pretty obvious to me that AI was going to be a transformative technology, and also that, on the default path, neither I nor almost anyone in the world had any input [00:02:00] into where that was going, and that seemed wrong to me. And it did not only seem wrong, it seemed fixable.
And I think, uh, it seems wrong to a lot of people, but the crucial thing is that our systems aren't set up to fix that problem, and they're not set up to fix that problem. Partly because of the power issues, but also partly because it's a very hard problem to fix. You know, when I started CIP, it seemed pretty niche and strange to be like, I want to build collective input into AI systems.
It wasn't very long ago, but people were like, why would you leave your nice job at Microsoft's Office of the CTO to do this unbelievably niche and likely useless thing? I think actually things have changed a lot in the last few years. There is a lot more of it generally. You know, if I took a shot every time someone said "democratize AI" in San Francisco, I would be dead on the streets in minutes.
You know, there's a lot less progress than I'd like on what it actually means, how do we actually do that. I'm still very happy that there is more discussion of it and more general agreement, but I started CIP to solve that problem. I founded it with a close friend and collaborator, Saffron Huang, who joined the team at Anthropic, [00:03:00] which we partner with most closely, and so we still work together. We started it together in late 2022. I'm now the executive director. The brilliant Zarinah is research director; I'm sure she'll say how she got there. I think it's been really interesting to see CIP grow and become more institutionalized as well.
When I started the organization, I was actually referring to it as "collective intelligence project" in my head; it wasn't even that I had chosen a name. Um, and now we are here and we have partners all over the world and we're doing, I think, really exciting stuff. So it's still kind of unbelievable to me that this has happened.
Zarinah: My background's actually in neuroscience. I was an academic neuroscientist for 15 years, which I absolutely loved and adored. But over time I started to realize that the kinds of things that I was really interested in were happening between brains, not within brains. I became more and more interested in the science of the collective form.
I ultimately left my academic post some [00:04:00] years ago to get into the field of behavioral economics and started exploring other ways in which people were studying group dynamics and collective intelligence, and the Collective Intelligence Project was a natural home for me. It's really fascinating for me to think about the collective as an entity itself, and I'm also really interested in the context in which technologies can be truly transformative for society.
So I do a lot of work in future crafting and thinking about the future. Often what we find is that it's not the technologies themselves; the kinds of factors that determine whether a technology is transformative, sort of utopian or dystopian, for a society are really the context, the incentives, the financial structures, the economic tools in which it's developed.
And so I think CIP is doing really incredible work in both thinking about the collective form and how we can shape technologies to be beneficial for the public good.
Alix: Super interesting. I love that idea that more is happening between brains than in brains.
Zarinah: Yeah, I think we're looking for a lot of the [00:05:00] magical pieces about humanity in the wrong places.
Alix: It's also super interesting 'cause I think that type of relational thinking is oftentimes very absent in tech spaces, which are all about maximizing the potential of individuals in this hyper-capitalistic way, being like, are you the smartest person? Are you the most financially successful person?
Like, it's so: don't wanna pay taxes, don't wanna participate in any social system. So I just like that as a frame for approaching intelligence, 'cause that feels very missing in a lot of the conversations about it. Okay. Well, I mean, I don't know. Should we start with democracy? Let's start somewhere simple.
Divya: Such a banner time for democracy in the world. Why wouldn't we?
Alix: Isn't it just. Um, but you use that phrase, democratize AI. And when I think of that, that to me sounds like PR speak of companies being like, I want everyone to be able to access a product and not question it, and I wanna be able to push my product to as many people as possible.
And by virtue of everyone knowing what I'm talking about, we can [00:06:00] then call it democracy, which obviously isn't democracy. Uh, but how do you, yeah, like how do you structure it in your head, both the narrative battle of making it mean what you think it should mean, and also trying to make it more likely to be a part of the equation?
Divya: Yeah. A couple of things. One, I think we just absolutely cannot cede the word democracy, even though I have empathy, or like, also see democracy being used in all sorts of contexts. I mean, democratic AI to some people is just, anything that happens in America is democratic AI, right? I mean, forget democratic as access, which at least there's a clear reason why that is beneficial.
But nonetheless, I think we cannot cede democracy. I think about democratic AI in four ways. First is democratic access: so democratic AI is accessible by people. I think this is access in terms of people being able to use the technology, it being available in different languages, it being understandable.
All of those questions are part of what makes technology democratic. Then I think there's democratization of development. So who is involved in the actual development [00:07:00] process? Is that a broad group? Not just for the sake of a broad group being involved, but in terms of different kinds of needs being brought into the process, and what kinds of applications we are building.
Are we building B2B SaaS software only? Are we building healthcare? Are we building education? What are the kinds of things that we're building? We want to democratize those and make sure that those applications are in the public interest. I think there's kind of a straight up democratization of benefits.
So CIP has done some work here. We're interested in what are the economic goods produced by transformative technologies, and in predistributive stuff: what does it mean to not just have redistribution as a core way to deal with the economic effects of AI, but rather also predistribution, ways that you prevent the large-scale inequality from being created in the first place.
So you need mechanisms for democratization of benefits. And then finally, democratization of governance, which is a bit different than development, right? Fundamentally, a smaller group of people will be involved in creating something than will use it. I think that's just the way of the world. I don't want everyone on earth to be involved in every decision of [00:08:00] AI.
Everyone on Earth doesn't want that. There's a great Oscar Wilde quote, I love socialism, but it takes too many evenings. I don't believe in a world where we all have tons of direct democracy meetings constantly, and yet we do want democratization of governance. We want things to be governed in the public interest.
We want people to be involved in those processes. So how do you build systems that actually do that? So I think between access, which you mentioned, which can sometimes be a quite co-opted thing, access development, benefits and governance, that kind of encompasses what I think could be democratic. But I named the org, the Collective Intelligence Project, and not like the participation project or the Democracy project because I think.
As much as we cannot see democracy, I do also believe that our governance systems are going to need to evolve in the future. And part of the exciting part of thinking about collective intelligence is what are ways that we can make good collective decisions together that don't immediately read? I. As democracy and at what points are those also useful to do?
Zarinah: I think there's an important piece here, which is, I almost feel [00:09:00] like the branding of democracy is clunky, slow, bureaucratic, and anti-innovation, and I think there's something really powerful in thinking about collective intelligence, which is that those things don't have to be true, right? And I think, you know, Divya's point that not everybody wants to be involved in every decision is really important, but figuring out where people do need to have input, and where that leads to genuinely better outcomes for everybody involved, I think is the sweet spot.
Alix: I mean, I wonder about the step zero of, basically, companies that are building a lot of these systems have accumulated such resources to basically decide to go on this massive data center expansion bonanza, push out LLMs to every user of most technical infrastructure in the world, without really anybody saying, hey, I'd really love it if there was a chatbot that lied to me one in two times.
I think that's now the status of OpenAI's model, like 50% of the time it just makes shit up. And there's something about that step zero where no one is [00:10:00] consulted, aside from basically a couple of VC firms who have structured a lot of their investments as pyramid schemes, where if you can get enough people excited about it and you can get out before it all collapses, then you've succeeded or something. So how do you feel about the overall allocation of resources in terms of societal-level decision making, in terms of direction? So not to say LLMs are representative of all AI or that there aren't good use cases, but just wondering, that core question of who decides is always tied to resources, and it feels like the people with the resources are an incredibly small number of people.
Then all of us are kind of like reacting to their big, huge, consequential decision making.
Zarinah: I mean, I think this is a core problem, and that we need to make collective input and collective intelligence cheaper, faster, and easier. And you know, I think a lot of the tools that we're trying to prototype, explore, and build are sort of leaning towards that. As long as we have to run in-person assemblies and things that are very, very long and expensive to run, we can't get there.
What's our tagline? Make collective intelligence obvious. We need these tools to be accessible, [00:11:00] easy, efficient, and to produce clearly demonstrable better outcomes for everybody. Otherwise you do end up with democracy washing.
Divya: Yeah, and I think it's essentially correct that there is a huge concentration of power in this space.
I just think we have to grade ourselves by succeeding within, and despite, that. Sometimes for us, that comes through working with companies. Sometimes for us that comes through working with governments who are trying to rein in companies. Sometimes that comes through, oh, can we build a coalition external to this that itself gathers power.
But fundamentally, like I kind of just see it as the incentive model that we are in. And so anything we do has to practically work within that structure. It can practically work adversarially, it can practically work collaboratively. It can practically work by creating some other center of gravity, but that's just the world we have to build collective intelligence within.
Alix: Yeah. Which I think is reasonable.
Georgia: I guess I'm curious about, like, something that this reminds me a lot of is a piece by Brian Boyd in The New Atlantis. He was talking [00:12:00] about, instead of fine-tuning, or doing the work to fine-tune, one big bad LLM to provide outputs to everybody in a way that somehow makes sense to everyone, which seems like a completely unattainable goal, not even worth trying, why not just break it down? Do many separate iterations on the same model, or many separate models, whatever it might be. But like, one for Catholic priests, one for farmers, one for people who live in Iceland.
You know, that's a small enough demographic, maybe. I don't know. I would wonder what you would think about that and if those are the kinds of things that you're thinking about. But then one step further is kind of like, okay, sure, if we had this great time where we did a bunch of fine-tuning for lots of specific demographics, it's like, but still, what are we doing that for?
What do people need a chatbot for? In a way, I think it's all well and good to kind of train it up to make sense for your specific demographic, but then it's kind of like, what is the end goal? I think that's always the bit that I get stuck on.
Divya: Yeah. Incredible. [00:13:00] We spent a lot of time on this question, and we built out a full, kind of end-to-end pipeline called Community Models, where we actually did work with specific communities around the world to fine-tune their own models to align with their values and the ways that they wanted those models to behave. And I think it was incredibly interesting to just see the diversity of inputs that people had and the different ways that people wanted to use those models.
You know, we had a group called the Grandmothers' Collective thinking about preserving wisdom through the ages and thinking about that. We worked with the Bhutan AI Society, who thought about what values they wanted to bring to the table. You know, they're opening up their country, they wanna be involved in determining their tech future, and they have a very unique and specific culture. How do they bring that into AI, you know? All of these different pieces. And so I think a lot of it is giving people the tools to do it for themselves. I am never going to know, in fact, what different kinds of communities want to use these models for.
How do we build tools for people to do that? I do want to slightly push back on the general intelligence piece, though. Maybe I'm just less skeptical that we [00:14:00] won't get to increasingly general platforms. I think that we are moving in the direction of capable general intelligence, which is different from, will we reach superhuman intelligence, et cetera.
That's not the question I'm discussing, but practically, it does seem cross-domain intelligence is clearly better for solving a lot of crucial tasks, and we are moving up in terms of being able to do that. The world I see is probably one closer to: we do have strong general intelligence platforms, and then they can be tweaked to particular contexts, which is even what our project looked like, right?
Like, we built on top of open source, so we fine-tuned on top of CLO or something like that. We didn't start from scratch. That is both probably more effective, but also just practically more likely. I would not bet on generalization, basically.
Zarinah: Yeah, I think that's right. And actually, going back to the sort of democratization of accessibility, I think one of the giant asymmetries that we're seeing already afoot is people's ability to know how to prompt. [00:15:00] And so if you are very well adept at prompting, you can get to a very, very specific, niche, tailored sort of experience with your AI. And if you don't, you don't know how to pick up on when it's reinforcing maybe some of your biases, and then perhaps even leading to mental health problems from there.
And so, unless we can make the ability to tune models or prompt your experience something that is available to everybody, I think we're gonna start seeing a sort of emerging divide happen.
Alix: When you're thinking about the next five years or something, how do you imagine this work evolving? Like, what types of problems do you imagine taking on? What does it look like?
Divya: I think we want to get deeper and deeper into the development of the technology in terms of where input comes in. There's been a lot of discussion recently, for example, around the personalities of chatbots. And so there was, I guess, a scandal for my niche corner of the world, which is that OpenAI's GPT-4o model was experienced by users [00:16:00] to be very sycophantic.
Actually, this kind of blew up a little; Rolling Stone wrote an article about it. You know, if you put in something like, "I hear voices from the walls and they're telling me that I'm a messiah," it would be like, "Wow, that's really interesting. You must be really brilliant. Let's talk more about that."
You know, just very, very purely encouraging of anything the user said. And as people log more and more hours with these chatbots, that's really bad, right? Like, that has active negative outcomes. The Rolling Stone article mentioned this wife whose husband was spending hours on his chatbot, and it was telling him that he was the chosen one, or things like that.
And what does that have to do with CIP? Well, part of the reason that happens is models are fine-tuned. They go through this RLHF process where lots of people just A/B test which response they like better. And it's kind of like the way social media algorithms optimize for engagement: in the short term, or in a single conversation, lots of people are gonna pick the response that's very nice to them, because that's a common human thing to do. And the aggregate of that is you get [00:17:00] these really sycophantic models that might have really negative societal effects. And that's something that, now that we talk about it, sounds pretty obvious, but it just happens from collecting very shallow information, optimizing for quite a shallow thing, and kind of rolling things out without understanding them, right? And each of those steps, I think, should have much deeper societal input and understanding as part of them. That's just one example that happened recently, but in every stage of the process, there are so many ways in which we are adjusting ourselves alongside these models that we don't understand, that we have very little input or transparency into.
And I'd like all of those steps to have more transparency and a better understanding of what kinds of input would be useful. And that's what we're working towards.
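For listeners who want to see the mechanics, here is a minimal, hypothetical sketch of the aggregation dynamic described above. It is not OpenAI's or any lab's actual RLHF pipeline; the 70% per-rater preference rate and the two response categories are assumptions for illustration only.

```python
# Hypothetical illustration of how aggregating single-turn A/B preferences
# can reward flattery. Assumption: most raters, judging one exchange in
# isolation, pick the friendlier reply. No real RLHF data or model is used.
import random

random.seed(0)

PREFER_AGREEABLE = 0.7  # assumed per-rater tendency, for illustration only


def rater_choice() -> str:
    """One A/B comparison between an 'agreeable' and a 'candid' response."""
    return "agreeable" if random.random() < PREFER_AGREEABLE else "candid"


votes = [rater_choice() for _ in range(10_000)]
win_rate = votes.count("agreeable") / len(votes)

# A reward signal fit to these labels learns that flattering replies "win",
# even though no individual rater asked for a sycophantic assistant.
print(f"Agreeable responses win {win_rate:.0%} of pairwise comparisons")
```

The point of the sketch is simply that a shallow, per-conversation signal, aggregated at scale, can produce an assistant-level behavior nobody chose deliberately.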
Zarinah: The Rolling Stone article's interesting, 'cause all of the case studies that they talked about were people who started using their chatbots for making their daily routine a bit more efficient, or updating their resume, and that was the sort of gateway to these psychosis-inducing [00:18:00] experiences.
I recommend checking out the GPT psychosis Reddit thread if you're curious about that. It's upon us. One of the signals that we're really picking up on in these Global Dialogues conversations, which are going out every two months to the global public, is the relationship between humans and AI. These relationships are forming in ways that go way beyond updating your resume, and people have such high trust in their chatbots.
So we're seeing really interesting things, which is low trust in the AI companies, high trust in the chatbots: trusting the tool, but not the people that created it. And a plethora of very intimate relationships are forming between humans and the AIs, and yet people have no real capacity to modulate that or set boundaries for themselves in those relationships.
Alix: I hadn't thought about the difference between trust in the institution and trust in the chatbot as being distinct in someone's head.
Divya: I actually think the interesting thing is, it's very unclear if it's distinct in someone's head until they're asked, at which point they reflect on [00:19:00] the difference between the chatbot and the company. The immediate thing people think about when they start talking to a chatbot is not, hey, trusting this is somewhat equivalent to trusting 200 people in an office in San Francisco. But there is a big delta, which to me means it's not really the case that people, if they were thinking about that consistently, would trust their chatbots to the extent that they do.
But the reality is they actually aren't always making that connection.
Zarinah: And it's also worth saying, you know, in the context of a loneliness epidemic, people finding emotional support in their chatbots is both really important and concerning. In some ways, it's really wonderful that people are finding emotional support, therapeutic support, and human society
Georgia: needs enrichment. I can't remember what it's called now, but 404 Media did a thing on, um, an AI service to call your parents for you. I mean, it's also like, you know, with the time it takes to set it up, can you just call your mom for five minutes? But also, I think this chat is also [00:20:00] reminding me of, I don't know, have you guys seen this new pure AI Facebook feed?
It's like a social feed, but it's just AI-generated stuff. I think this is an interesting time to look at it, because I don't think people realize that the stuff they're inputting into the AI is actually being posted in public. They think these are private conversations or whatever, and lots of stuff is having to get deleted, but then the stuff that's left over is just the most beige, gray, drab, nothing content. 'Cause it's not interesting for anybody else, because it's just insight into what people want to use AI for, which very often is just ideas for one-pot pasta dishes or whatever, and just really, really bad AI art, basically. Anyway, I can't believe it exists.
It's really strange. It's like people aren't talking to each other, they're just talking to AI, and then people are looking at it. I dunno. Weird. What's the point?
Alix: But the chatbot thing, the relational thing: it also makes sense that you have invested some emotional energy into something and then you kind of imbue it with something of value, because you've put time and energy into it.
And I think that people don't really [00:21:00] do that with institutions anymore. Which is also part of the problem: the breakdown of institutions is basically that you don't have real relationships that construct institutions. You have an institution that's being top-down and not very accessible and not very engaged.
And so then you have this, not necessarily an emotional connection, 'cause that implies two-way and it is not connecting with you, um, but you have an emotional transaction over and over again that accumulates. And I can imagine that making people feel close to a thing.
And when you feel close, you're more likely to trust it, I imagine.
Divya: One of the principles that underlies a lot of our work is moving from the individual to the collective in our technological interactions, which is difficult at times, right? But community models, for example: I believe that we're gonna have pretty good personal agents at some point, separate from, you know, are the companies trustworthy, et cetera. Just purely at the technical level, it will be possible for me to imbue [00:22:00] an AI assistant or an agent with my preferences, but it's unclear to me that that will successfully happen at the community level.
Like, who is thinking about who's going to adjudicate trade-offs between groups, particularly smaller communities? That's where the community models project came from, right? Even with Alignment Assemblies and Global Dialogues, we think about this. Even voting: yes, we aggregate votes in the sense that we're counting things by numbers, but these have become quite individual ways of participating in democracy.
We want them to be more collective and communal. In terms of how we imagine a successful future, how do we build a social fabric? How do we build institutions that are good, that we wanna participate in? They require, basically, more fundamental collectivity. And so I think that's why we build community models, that's why we think about Global Dialogues, and when we build out evaluations, we're trying to go to civil society organizations, to particular communities, bring them together to talk about these things. Because the collective isn't just the aggregate of a [00:23:00] bunch of individuals, right?
The idea behind collective intelligence is fundamentally that the whole is greater than the sum of its parts. You get something different by bringing people together, not just in a hold-hands, kumbaya sense, but you genuinely create new, higher-density information by doing this, and you can't shortcut that. And I think that is, Alix, to your point about how we're moving in this very individual direction: trying to shift that is kind of a big goal, at least of what we're doing.
Alix: But to do that, you have to imbue it with something. Not in an individual sense, where you imbue it with some sense of emotional personhood because you're engaging with it emotionally. If you were working at the community level, you would be imbuing it with some type of democratic legitimacy, in terms of a structure that has a governance role to play, because it's essentially playing the role that, traditionally, let's say, a policymaker plays.
Which sounds fancy, but it's just someone who's politically representing a group of people and then kind of helping structure decision making around allocation of resources, which is kind of how I think of democracy working. [00:24:00] So what do you think it looks like at the community level? What are you imbuing in a system like that when you think, at the community level, about it playing the kind of role that you all are thinking about?
Zarinah: So the way that community models worked is that a group of people could come together and they engaged in a sort of collective process of sharing their values. You know, I want the model to never advise me to call the police; I want the model to always think about ethical alternatives to eating meat; whatever it is.
And so the group goes through a sort of collective process where it shares its values, everybody votes on the values that are there, and the values that reach a certain threshold of consensus are then used to create a collective constitution, which then shapes the model. And so there is a deeply collective input process to building a community model.
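As a rough sketch of the thresholding step Zarinah describes, the mechanics look something like the snippet below. The value statements, vote counts, and the 0.7 cutoff are made up for illustration; this is not CIP's actual codebase or threshold.

```python
# Illustrative only: hypothetical value statements, vote counts, and threshold.
from dataclasses import dataclass


@dataclass
class ValueStatement:
    text: str
    agree: int
    total: int

    @property
    def support(self) -> float:
        return self.agree / self.total


submitted = [
    ValueStatement("Never advise calling the police as a first resort.", 41, 50),
    ValueStatement("Always surface ethical alternatives to eating meat.", 38, 50),
    ValueStatement("Answer only in formal English.", 12, 50),
]

CONSENSUS_THRESHOLD = 0.7  # assumed cutoff for inclusion in the constitution

# Keep only the values that cleared the consensus threshold.
constitution = [v.text for v in submitted if v.support >= CONSENSUS_THRESHOLD]

# The resulting constitution then steers the model, e.g. as a system prompt
# or as principles for constitutional-style fine-tuning.
system_prompt = "Community principles:\n" + "\n".join(f"- {p}" for p in constitution)
print(system_prompt)
```

In this toy example, the two high-support statements make it into the constitution and the low-support one is dropped, which is the collective-steering step the conversation is describing.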
And what was interesting is that right now we're living in a time where people are primarily engaging with AI on a personal, individual, one-to-one basis. We are not largely using LLMs in a collective form, and I hope we start to see that change. 'Cause I [00:25:00] think, you know, to Divya's point earlier, we want to see these things happening at the collective level, and I think if more of our human-AI interactions were in a collective space, we could sort of catch each other when we fall down the rabbit hole.
I think we've all experienced watching a friend get polarized by social media algorithms. You can sort of see their trajectory, and because it's visible to you, you can call 'em in. You can be like, hey, I'm worried about you. How are you doing? I see that you're posting a lot of stuff that seems a bit different, you know?
And so I think, when our conversations are more open and seen by each other, at least we can sort of tend to each other and take care of each other. I hope that we start to see more sort of collectivity in AI usage soon.
Alix: So you'd be kind of imbuing it with something like stewarding the constitution of inputs that you had given it. So essentially the role that it plays is synthesizing decisions, or governance communication, based on a set of things that were input into it, so you're not having to [00:26:00] redo all of that thinking every time. Yeah. Okay. Okay. Interesting.
Zarinah: Right? Yeah. And so we work with various different communities and for this project we were really looking for communities that had very specific, unique sets of values and needs.
And so we worked with a bunch of different people, and yeah, it was really fascinating to see the different kinds of things people want from their community models. And yet people are defaulting to a personal LLM, not a collective LLM, for usage.
Georgia: What kinds of things, from your work or from your experience of doing this work, have you seen communities maybe want to use a community model for, as in, out in the open, not an individual engaging with an LLM? What kinds of things have they said that, as a community, we will use this model for?
Zarinah: You know, various different things. So an arena that people were curious about was sort of culturally specific things. You know: we are a group of people working in Nigeria in this industry, and it's really important to us that our model is always keeping these four things in context, as an example. We built a model for people coming out of long-term incarceration and [00:27:00] other sort of abolitionist communities, who explicitly don't want to rely on the carceral system, who have very specific vulnerabilities and needs. A Google search won't give you this context, and so those models were really relying on personal wisdom from community leaders to shape the model. That's the sort of crucial piece: there's a lot of rich information held by grassroots communities and cultures that you can't necessarily find on the internet, and using that information to shape a model and model responses.
Georgia: This is really interesting. But yeah, it's like unlocking a bunch of stuff that you would have to trawl through a bunch of horrible forums to get an answer to, maybe. Yes. Yes. Uh, I think also there's an interesting tension there between, like you're saying, you're trying to help the individual escape from just these one-to-one private interactions, which are maybe not super healthy even, and into more of a kind of, how do we bring these conversations into a collective space.
But when you're doing that with something like previously incarcerated people, are they not going to want to have [00:28:00] maybe more private interactions with a model that's especially fine-tuned for their needs?
Divya: The interactions remain private; it's just the steering that is collective. And so when you are interacting with the model, those interactions aren't automatically shared with everyone in the community.
But the model that you are interacting with is one that is kind of co-created by everyone in the community. So that's how we think about walking this line. 'Cause I agree, especially if you want to be a part of the closer relationships people are having with their models: of course, many of those things are private.
You know, interestingly, I had a friend recently who was like, oh, I can't afford ChatGPT, I forget exactly what it was, can I use your login? And I was absolutely shocked by this question. Because I think it's a very intimate thing to share, you know? Not that they meant it in any bad way; it just occurred to me. I hadn't thought about it before, and when this person asked, I was like, oh my goodness. Not that there's anything particular in there I wouldn't [00:29:00] share, but it was kind of like reading my diary. It is becoming kind of an intimate place for people, and I think that that is completely fine and unsurprising. But how do we make it an intimate place that, you know, has some oversight and some community input and camaraderie as part of it, as opposed to a place of escape, a place of falling down the rabbit hole, a place of addiction? All of those kinds of things which you don't want to see, even as you want privacy, even as you want individuality, even as you want a safe space to ask certain things you wouldn't be able to ask otherwise, or to think through things.
And I know people have found AI very useful for that. And I think it also comes back to this: are you trusting the company? Are you trusting the chatbot? When you are placing trust in that interaction, where is that trust going? And that comes to the power question, right? Like, yes, we can solve some of these problems by building cool tools like community models, but we also do need the actual practical oversight of [00:30:00] who's looking into where the data is going, who is making sure that when models are released, they don't have sycophancy problems. And by who, I don't just mean who the researchers are, but what systems are in place so that we catch those things. Therefore, I think community models is one part of not only CIP's work, but our shared work.
And if we want to build collectively intelligent AI, we want the tools for community steering, we want the tools for individual interaction and personal agents and all of those kinds of things, and then we also want the tools for systemic oversight. How do we catch stuff? Where is the data going? Who is tracking economic transformations? All of those pieces cannot be solved either on the individual or on the community level. Some things require higher levels of federation.
Alix: When I think about an entire industry that has basically hoovered up, against all legal and ethical whatever, any scrap of data they could find on the internet, has systematically evaded any type of controls, and is [00:31:00] intentionally trying to lower the threshold by which they can collect data... If people are treating it like a discreet interaction meant to help them be more vulnerable than they can be in other spaces with other people, like a truly private space, I find it really terrifying. Like, if you think about that at scale: these companies are so abusive, and they're so disregarding of any true sense of privacy, and they are gonna take advantage of that relative positioning with people. I think about this so much, and I'm immediately like, yeah, what are we gonna do about this?
Divya: And it's here. I mean, if you look at our most recent Global Dialogues data, it's like a third of people, is it, who say that they use AI for emotional support at least weekly? Mm-hmm.
More than a third?
Zarinah: Yeah,
Divya: More than a third. And this is not just our data, and we have more data like this, but various other sources have confirmed it. You know, Pew has done surveys, Harvard has done surveys. It's here, you know. And so I do think [00:32:00] how do we walk that line is a big question.
And one really difficult thing is, at least for now, people are doing that because they are getting something out of it. We have to contend with the fact that we have built a society in which this is a big need people have that they're not getting filled otherwise. And unless we are able to fill this need otherwise, which obviously would be ideal, this is going to keep happening.
And you know, I think that's just where we are.
Alix: I mean, two observations here. One, I feel like the last 20 years have been this process of allowing technology companies to accumulate and accumulate and accumulate power, and they've only recently started to exercise it in ways that show what they're willing to do with that power.
Are we just letting them do that again, where essentially in like five years they're gonna leverage that positioning? And then specifically: we had David Seligman on a couple weeks ago from Towards Justice, and he's been working with AI Now and a couple other organizations on surveillance connected to wage suppression and price fixing, and essentially how companies are using [00:33:00] these privileged positions of insight, and the ability to collect tons of data, to actually figure out, is someone desperate enough that they'll take less money for that gig job. And I'm just thinking about the ways in which this deep insight into more than a third of the public that's sharing really private information about themselves...
I mean, I get that there's, back to the individual piece, individual value being created in transactions and engagements with these chatbots for some people. And maybe there are use cases at the community level where you could imagine innovation being useful for collective intelligence, or collective allocation of resources and more empowerment in that way. But it just feels like these companies are gonna leverage this position in some crushing way that we can't even see right now.
Zarinah: You know, our data suggests that people do not largely trust the companies making these LLMs, but they do trust their chatbots. And nearly 40% of people have said that they trust their AI chatbot more than their local elected [00:34:00] representative. And so we're in a very tender moment right now, right? We are building intimacy with our LLMs, trust has been forged, relationships are there, and what is steering these models is very, very small groups of people in very few places in the world.
And so it's a very poignant moment right now, and I think we have to accept the reality of what is currently afoot if we are going to make it better. And I don't just mean harm reduction; I mean actually shape things in ways that can be really wonderful for society. And so I think our job is to figure out how we look at what people are doing, look at what people think should be happening, and then turn that information into meaningful change.
Alix: I just wanna burn it all down. I'm like, really?
Zarinah: Like my,
Alix: I'm very glad that you're thoughtfully approaching how to take advantage of the innovation that might be of benefit, but just the colossal scale of the power being accumulated by such a small number of people, I find [00:35:00] so terrifying.
Georgia: I find it tough as well. Like, lots of these dynamics where we're learning all these new stats, like you say, about a third of people using ChatGPT or whatever for some kind of emotional support... Well, firstly, it all makes me wanna walk around and smack people's phones out of their hands.
Like, stop it, just go to a therapist, but one's not available and you can't afford it. Yeah, yeah, just do that thing that you can't do. And obviously that's not a good reaction, but the point is, there are kind of two things that I keep bumping up against. One, I feel like this is almost like scraping the bottom of the barrel.
There's kind of no other option. And I understand that you're saying this is kind of how it is, so we have to try and make it transformative and positive, rather than this last-ditch grasp at some kind of meaningful, good mental health for people, because there's literally no other option.
The other thing is, I don't know, it feels like in these conversations lots of us, and I mean us generally, not us four, cosplay at being behavioral psychologists. We are not. We don't [00:36:00] know why people... I think especially in big abstract conversations about technology, about why people want to use X, Y, Z and why they interact with it this way, it's just like, well, we don't know. We're not psychologists, so why are we acting like we know? It's like an invisible problem, and invisible problems are extremely difficult, if not impossible, to solve. So I think, yeah.
Zarinah: I guess, yeah, no, I think that's totally right. And I think, you know, in our sort of explorations of who is the collective, who is the polity, I think it's really important.
You know, I'm waiting for, I dunno, psychological societies to generate standardized prompts for people: if you are using your LLM for therapy, here are 10 prompts that will keep you safe and make this meaningful for you. It's a very low-cost, low-hanging-fruit thing for professional organizations to do.
That can not just protect people from harm, but allow them to use their LLMs in ways that can lead to flourishing. And I want to see more things like that happening.
Divya: I also think there's a philosophical question here, which [00:37:00] is how much do people know what's good for them and how much do we trust their choices?
Zarinah: Genuinely. Well, we've had lots of, we've had lots of historical examples of this and we, we fail every time, you know?
Divya: Well, yeah, right. Because I think one of the founding principles of CIP as an org, and of democracy, is that to some extent you have to trust people. And then we create a lot of power imbalances and addictive mechanisms, and there are contexts in which, you know, that's not true, and it is this complicated line to walk. You know, I think sometimes I, at least, do fall on the "well, we trust people" side of the line, which has real downsides. There are massive inequalities in how much resources people have to engage, how much time they have to think about these things; as we talked about with the therapist, when you don't have other options, it's much harder to make a good decision. All of those things are true, but at some point I want people to have the freedom and agency to make decisions over their lives, and to [00:38:00] try to create the structures in which that's possible, and to try to create a world in which that's not totally precluded because there's zero power in their hands. But sometimes, as happens with democratic structures, you end up with people making decisions I disagree with all the time. Um, but also people making decisions where maybe an expert, or in some objective sense, you could say: you're making this decision and it's definitely worse for your outcomes, based on what I understand of what you have said you want. That also does happen. But I think it is a complicated line, and I guess I tend to err on the side of: I want to trust, I want to be able to educate, to give resources, but at the end of the day, I want to be able to trust people's choices for themselves and their communities, even though I recognize that comes with swallowing a lot of downsides.
Alix: I'm totally with you on like trusting individuals and like letting people do what's right for them.
Like, a hundred percent. But I don't [00:39:00] think that they should have to understand everything happening behind those decisions to be able to protect themselves. And I think that with the risks, it should be, in some ways, bumper bowling. Do you guys know bumper bowling? You know, when you go bowling and for kids they block the gutters? In some way, being a consumer, or being an individual engaging in technology systems, should be kinda like bumper bowling.
Like, you shouldn't be able to accidentally disclose something in an exchange that then leads, if you're an Uber driver, to companies colluding to make your job as an Uber driver worse, because they've surveilled you and know that you're seeking work and struggling psychologically. Extreme example, but I really wouldn't put it past them, given the current state of data privacy and data regulation. I could very much imagine that happening. And I feel like we're putting so much on the individual in this frontierism, and even the language of frontier models... I just don't want people to get dysentery and die while [00:40:00] crossing the Oregon Trail to make some small number of dudes even richer. I just feel like we have to do something so that the stakes are lower, you know? Mm-hmm. Like, exploration shouldn't come with the stakes it comes with.
Zarinah: Yeah. I mean, I think it's also important to say that, you know, the benefits of deeply engaging with your LLMs are really high. There's a really strong incentive to share your personal information, or your deepest thoughts, or whatever, you know, because you get these really rich conversations. And so the carrot is really big, it's a really big carrot, and the stick is kind of imperceptible; the cost is intangible to people.
Uh, so we're dealing with a really big asymmetry there. And then I think the other big piece is that when we think about individual agency, that requires choice, and there aren't really alternatives right now. And so people are faced with a giant carrot, an imperceptible stick, and no other choices, and they are gonna do the things that they are doing, which is taking the risk of sharing all their personal information with their LLM.
This is not something that can be solved at the individual level. We have to build structures that [00:41:00] give people choice, agency, and information at a societal level. If you don't have an enriched society and people are very lonely, they are going to find themselves getting addicted to anything that is addictive, you know?
And so we can try and make the technology less addictive, but it would be better to make a society that produces less addictive tendencies. And so we sort of need to tackle all of these things at the same time.
Divya: Yeah. There's a quote I really like, that I might butcher, that is something like: the way to stop a revolution is not to jail the revolutionaries, but to remove the causes that make it necessary.
As in, you can't continue being like, okay, we're gonna make this thing illegal and we're gonna try to stop people from doing this thing, but the core problem isn't solved, right? Like, we're gonna jail people for bankruptcy, but we're gonna make healthcare really expensive. Great, now you have two problems.
Um, and I think that's something we definitely want to stay away from doing here. And I really love the emphasis on choice. And I think choice actually is quite a radical thing to want to give people, [00:42:00] because there are so many things that need to be true for you to have a genuine choice. And I think that is what we want to make possible, right? Everything from, okay, competition, not having to pay huge amounts of money to switch between things, being able to understand what your choices are, having the time to choose, having the freedom to choose, understanding what's going on. All of these things are priors to choice.
I completely agree that this cannot be solved on the individual level. Sometimes in the conversations we have, it's very easy to push back against the concept of democratic or collectively intelligent AI by saying, and people say this to me, although in slightly nicer words, people are really stupid, and why should they be involved in these decisions? Everyone's dumb and only technical experts should be doing this, right? And I think that's where my passionate defense of people's interests comes from. Where, to an extent, yes, you can't go around asking people, how do you evaluate GPT-4o for sycophancy, and expect that they make good decisions.
But [00:43:00] when we talk to people and they're given time and they have a space to participate, they have nuanced, interesting, engaged thoughts and clarity around what they want to see for themselves, for their kids, for their communities. Not universally; obviously we get some hate speech, obviously we get some disruption.
Nothing's perfect, but that core is present if people have the time to have a choice. And that feels like the time, the resources, everything else, the political economy of choice. And I think that is what, you know, we need to be able to preserve.
Zarinah: Yeah, it's been very reinforcing for me to see the kinds of nuanced, beautiful, visionary things that people all around the world say when you ask them what they want from the future of AI. People are able to dream better dreams if you give them space to think about it. I also think that some of the really touching comments at the end of our conversations point to the fact that nobody's ever asked them these questions before.
Georgia: I was wondering how you feel about, I don't know how to phrase this exactly, but building systems with [00:44:00] people that are built in a participatory way, and are also themselves then infused with democratic participation, but where the underlying infrastructure is, I don't know, owned by other entities that you don't have any control over. Do you ever think about how you would build, whatever this means, a more public AI infrastructure on which you can put these kinds of systems for different communities, that's more decentralized, for lack of a better term?
Alix: If you had like $20 billion, what would you do? Maybe put differently. Wow. Much more concise. Only 20
Zarinah: a year.
Alix: Forever.
Zarinah: Yeah. I mean, I think, Georgie, you're right, you're on the money, right? We need different, better entities for holding public infrastructure for public goods. And it's tough, because we put a lot of our innovation in the private sector knowing that we are setting up incentive structures that don't really serve humanity. And you sort of can't blame [00:45:00] companies for doing the things that they're doing.
They're doing the thing that they are meant to do very well, and we don't have a good mechanism. You know, I often talk about how it's fairly easy to take common or public goods and privatize them; it's really hard to take private goods and make them public. And we need tools that can say, hey, your innovation seems to have passed the threshold for being a public good, it is now going to become a public good, and we have infrastructure, we have legal and economic tools, to take this from the private to the public sector without degrading the technology. And so I really want to start seeing those things happen. And I think a lot of the missing innovation is in the sort of economic, legal, and financial tools.
Divya: Yeah, I agree. And I think there are so many levels of this stack that need to be made public. They're somewhat modular, but ideally you get everything, right? Everything from, you mentioned data centers: data centers, computation resources, the data itself, the talent. Who is building these and who is paying them? What incentives do they operate by? What [00:46:00] applications do we build, the evaluations for those applications, where do they get deployed, who tests them? Each aspect of this part of the stack could be, if not immediately made public, brought more into the public sphere. And there are different ways to do that for each part of the stack, but eventually, ideally, you do that for every part, because modularity also creates compounding returns: the more public different parts of the stack are, the more different kinds of people can build on them. And that, I think, is the beauty of things like open source.
I mean, in AI, open source is a pretty complicated question; also, a lot of things that are called open source aren't truly open source, et cetera. Many people have done great work on this. But the idea is, you can create more compounding through this transparency and modularity than you could otherwise.
And I do think that's absolutely necessary. There's also just a very difficult question of what's public. Like, for example, is it that the government builds it? Because it's not that I am super thrilled by large-scale national [00:47:00] sovereign models; as much as I may not love only having private models, I don't think it would be better if they were all replaced by purely national country models.
Right? Even though that's a good part of the portfolio. And so where does public live? What does it mean to have a commons layer when we have very little practice with that in technology? I mean, there are internet protocols, TCP/IP, there's the web, ICANN. There are a couple of types of entities that can model this, and I think we need to learn from them.
But I think the reality is we don't have that thriving public layer beyond companies and the state. One way of dealing with that is to try to build that layer, which we should. The other is to create countervailing power, at the very least, between companies and the state, so that they aren't completely colluding to create something.
And so I think you have to create both of those, uh, structures. And something should be done by government, to be clear. We need state capacity for evaluation and auditing; I think that's absolutely necessary. But then, where should compute live? Like, I'm not sure.
Alix: Who knows? Yeah, I, um, talked to Marietje Schaake earlier today about her [00:48:00] book The Tech Coup, and one thing she said that I just keep thinking about, it actually wasn't in the book, but I've been thinking about it all day, is that we have accepted that digitization means privatization.
And I just think that's such a powerful way of thinking about it, 'cause that's kind of what's happened. We've essentially allowed it to happen by basically saying, oh, the state, it's not very good at innovation. Or, oh, we need to leapfrog this moment because people expect a certain type of consumer-style innovation from states, which means we should just outsource literally everything.
Um, and I feel like that attitude, that lack of confidence in government, is also part of the problem, where they're like, please make me look good to my constituents without me having to learn anything meaningful, or get budget allocated, or hire people that work more than 12 months on something. Because it's kind of terrifying, because you have to take risks, and states don't really like taking risks unless it's, like, war or something.
Divya: Yeah. Then once in a while we go through the government and fire the most talented people who are doing this stuff,
Georgia: So, bye. Yeah. [00:49:00] Innovation and austerity are just the same thing, really. I think, uh, this reminds me of, oh man, I don't even remember what it's called, but France have made their own version of Google Docs and Google Enterprise. Oh yeah.
What has happened with that?
Alix: Yeah.
Georgia: Exactly. He was like...
Alix: I don't want to use a state Google Doc, I don't wanna use that. But at the same time, I'm like, it'd be kind of cool. I don't know.
Georgia: Yeah, 'cause they're kind of looking, I guess, at Silicon Valley, and they're just like, we suddenly, really more so than ever, are terrified of and hate the people who basically have the off switch on all of our infrastructure, right?
So it's like, we need our own thing. Ah. But it's just really... it's typical, it's bloody typical, and it's also quite disappointing at the same time that a state would just basically do a one-to-one replica of what already exists in the corporate enterprise world.
Divya: I mean, early on France did this with the internet.
Georgia: Oh, did they? Wait, what do you mean?
Divya: Yeah. They built this thing called Minitel in, I think, the 1980s, and it was like a really French version of the internet, for somewhat similar reasons. And I think [00:50:00] it was discontinued only in, like, the 2010s. They felt that the internet was American, um, and wanted an alternate version.
And it's kind of interesting, right? Because the reason it didn't work out is network effects. The architecture of the internet doesn't enable everyone to have their own internets. And we don't know what that's gonna look like with language models. But I do think this brings us back to the infrastructure question.
Even if you're not training your own sovereign models, which everyone's obsessed with, and that is reasonable to do for questions of cultural alignment and language, and there are reasons to do it, but even if you're not doing that, which parts of the stack do you want to have state capacity over?
And certainly there should be some parts of the stack where you want that.
Zarinah: I mean, I also think it's important, you know: even with state-based models, it still leads you to a sort of balkanized world, right? Where we're relying on nation states to be the boundary conditions, and I think we need to think beyond that when we are thinking about these kinds of transformative technologies.
You know, I think we need to think about like coalitions and [00:51:00] cross border collaborations, and the corporate world has done this very successfully, right? So it can be done.
Alix: I do feel like companies have accumulated so much power at this point that there has to be some type of state intervention in the next five years that is, to use Divya's term, countervailing. Because I feel like if we just try and build, and not just that, 'cause I know it comes with tons of challenges and there's super promise, but if we just look at individual or smaller-scale community engagement, without both of those things happening, I don't think it is gonna work as a project.
Divya: We need the state options, and then we also need, I think, to bring collective input into what we're currently building, even if it's within private companies. I partly say that because if something's going to be really transformative, I think we just have to
Zarinah: get it through on every front. And I think it's important to say that the capacity for harm reduction here is huge, but the capacity for unfathomable things that are great for the world is also huge, and so there's just a lot to [00:52:00] play for. One of the themes really coming out in our sort of Global Dialogues is that people don't want these technologies, or their conversations with their LLMs, to replace human-human things, but to encourage or make better their human-human interactions, whether it's with their therapist or with their friends or with their romantic partners.
People are very afraid of replacement, and I'm worried about what that will do to human sociality, which I think is appropriate. And so I don't think we have to think about this as a binary, but more, how do we create this to be a sort of symbiotic relationship that makes all things better?
Georgia: Yeah, absolutely.
I think we have a very, very small group of extremely unimaginative billionaires deciding what the future's gonna look like for us, and I think it's really, really hard to look past that. And I think it's important to be doing work like this, where it's like, what if the future was not only not bad, but also amazing?
I think it's important to be doing that kind of stuff instead of lamenting at the billboard ads that say "stop hiring humans" and [00:53:00] God knows what else.
Alix: Yeah. And I feel like we might also get captured by the abundance space, which I think is trying to, or is, laying claim to that as an exercise, when in fact that space needs to be, I think, more contested and more creative and more power-aware than it currently is.
'Cause yeah, they don't know the future.
Okay, great. I hope you found that interesting. We don't normally talk to people who are this actively and propositionally engaging in AI systems, and so it was a bit of an experiment for us, and I personally learned a lot, both about how consumers are engaging with these technologies, in a way that I find a little bit disturbing, but, you know, um, to each their own.
And also, we need to be clear-eyed, as these technologies are diffused into society, about how people are actually using them, 'cause it will affect the politics around them. And also, getting into that propositional headspace of building things is really interesting. And I think, you know, if we [00:54:00] could set aside the corporate domination and surveillance architecture that the private sector has built around these technologies, and think a little bit about what the innovations might afford us as a society... I mean, it's kind of interesting. So having some time with people who are engaging in those questions was fun. Thanks to Georgie Iacovou, um, for joining in this interview and also for producing the episode. And thanks to Sarah Myles for also helping produce the episode, and we will see you next week.
