Nodestar: Turning Networks into Knowledge w/ Andrew Trask
Alix: [00:00:00] Hey there. Welcome back. Um, this is our last episode in our Nodestar series. Although, as with other series, there are different people we wanna talk to, and people also come out of the woodwork when we do these episodes with really good ideas for conversations they wanna have. So maybe not our last for good, but our last in this series.
Alix: So go back and listen to the other episodes if you haven't already, with Mike Masnick and Rudy Fraser. This conversation is gonna be focused on decentralization, but we're gonna kind of take a left turn away from social media and towards AI. I think most people, me included, when we think about AI, and especially large language models and our current AI hype cycle, think about all of the work that's been done to show that concentration of power and consolidation and commercialization and corporate control

Alix: is essential for these technologies to do what they're doing. So I just wanna caveat, uh, that skepticism I have about how centralization plays a role in large [00:01:00] language models overall. But on the show today we have Andrew Trask, who's the CEO of OpenMined, and he's been thinking on a pretty long arc of technical development and trying to find ways to disrupt that concentration by building protocols to allow for more decentralized training of models, so that it becomes

Alix: essentially impossible for these companies to maintain the level of control that they've had over the processes necessary to make models. So we get into how important it is to try and make sense of lots of information at once, which is one of the main challenges of our information environment. And if that's an interesting problem to you,

Alix: Eryk Salvaggio and I sat down in December, I think, um, talking about the age of noise. And Andrew is working on the problem of what we do in an age of noise. How do we not just broadcast more information into these spaces? How do we build the infrastructure necessary to do broad listening? Two asides: one, during our childhoods,

Alix: it turns out Andrew and I grew up within a five-minute drive of each other in Memphis, Tennessee, which is a [00:02:00] random piece of information. The second is that in our conversations over the last few months, as I've gotten to know more about OpenMined's, uh, mission and work, um, I joined their board very recently.
Alix: So just to flag that disclosure. So with that, let's dig into it with Andrew Trask.
Andrew: Hi, my name is Andrew Trask. I'm the Executive Director of the OpenMined Foundation. We're a not-for-profit software shop that builds open source software looking to decentralize power over AI, uh, through a combination of deep learning, cryptography, and distributed systems.
Alix: So, do you wanna talk a little bit about, I mean, we could start with just your background, like how you got into OpenMined, and then I'd love to hear a little bit about the problem you're trying to solve.
Alix: But like, how did you end up founding a nonprofit that's trying to build technology, which is like the two most difficult things to do at one time?
Andrew: Uh, [00:03:00] that's a, that's a good reminder. Um, I'm a bit of a one-hit wonder. My first job out of undergrad, well, I was in undergrad and really wanted to do AI-ish stuff.

Andrew: Uh, this was in like 2011. I took an AI course, got very lucky, 'cause everything was still Bayesian back then, and I was like, neural networks seem cool, I should do that. And I was like the one kid in class who was interested in neural nets and had to do all this weird side stuff to try to do it. This was before GPUs and all this kind of stuff, so it didn't really work.
Andrew: Got into it. Tried to find the one company in Nashville that did AI, found it, and they did a particular brand of AI, which was AI for data that was too sensitive to go to the cloud. Thus my journey on, like, access to private data was born. Worked there for three, four years, went back to grad school to do a PhD.
Andrew: My second stroke of luck was basically that I got into language modeling, into neural networks and language modeling, circa 2013, '14. I published a paper at ICML, which was my third stroke of luck, 'cause right after word2vec there was, um, a period where it was particularly easy to publish. And so I did a very word2vec-y project and got a first-author [00:04:00] paper at a leading conference.

Andrew: That kind of helps you jump into grad school, which is great. And then my fourth stroke of luck was that my PhD supervisor led the language modeling team at DeepMind, and they funded my PhD. So a year after that I started working at DeepMind on the language team. A little before that I discovered homomorphic encryption and differential privacy and some of these algorithms. Definitely a person with a hammer looking for a nail, except for the fact that I had spent three or four years doing access-to-private-data stuff already.
Andrew: Started OpenMined as a nonprofit just to basically find people who knew about these different fields, 'cause they really didn't talk to each other. They went to different conferences, did different things, this kind of stuff. And then for the last eight years, it's been a tech transfer journey from then to now.
Andrew: So basically a bunch of raw ingredients produced in a raw form, with problem statements that are basically pretty ill-framed. Fast forward eight years, I think the problem that we're working on is attribution-based control, which is this idea that someone who has data can know and control whose predictions in the world they wanna make more accurate.
Andrew: So it's actually not about sharing data, it's about knowing and [00:05:00] controlling that someone else is aggregating data and using it to make better predictions about the world, whether via statistics or AI or whatever, or even just by hand, like just learning stuff and making decisions based on it. It's about this idea of attribution-based control, or ABC:

Andrew: people who have data can know and control who they give more power to, or more decision-making power to, along a certain dimension. And vice versa: the people who are receiving information can know and control who they're relying upon for each particular insight. That's inherently a decentralizing principle, because it's a direct-to-market principle.
Andrew: It's sort of people with data directly connected to people who are seeking insights, without intermediaries obfuscating the middle, which is really the main breaker of attribution-based control: usually algorithms run by intermediaries. So yeah, that's sort of the journey in a nutshell. I happened to start working on private data 15 years ago, and on AI and LLMs,

Andrew: because it was a natural language processing company working on really sensitive data, and I basically never left.
Alix: Um, I appreciate the note of humility in it, but one-hit wonder is like the opposite of what you just described. Oh. [00:06:00] Because at every stage you've basically found the thing within the thing, and like, you've had some lucky breaks, but every time you've iterated, you've been doing something impressive at that stage that has accumulated into something that makes the next thing possible.
Alix: So, in terms of the problem you're trying to solve, if you wanna paint us a picture: if in like 10 years you all have a confluence of luck where circumstances all align, you build the technology, you find the right partnerships, the timing is good, what does the world look like if OpenMined is wildly successful?
Andrew: Broad listening gets solved, and it becomes possible for you to have an interactive conversation with a million people all at the same time. The most mature version of this tech is a communication technology. So it's like deep learning becomes a communication technology.
Alix: Do you wanna describe the difference between broadcast and broad listening?
Alix: Because I feel like, yeah. We've had conversations about this in the past that I'm like, it's taken me like two days to process it. And then I'm like, oh shit. That was actually really, really insightful.
Andrew: Oh, well thank you. Yeah. Um, yeah. Broadcasting [00:07:00] technology sits in the history of information technology,

Andrew: which is the 250,000 years since we started using words, up to now, right? It's important to understand the problem that information technology is trying to solve, which is: we were people living in the jungle, and one of us would eat a poison berry and it would make our stomach feel bad, and we needed the ability to tell others, hey, this berry is bad. Don't make this decision, because it leads to bad things.
Andrew: Don't make this decision because it leads to bad things. But the problem is we'd come around the corner, see our local tribes people, and have no ability to tell them. And so they would live the same patterns that we did largely right. We didn't have ways of better living that would traverse across time and space.
Andrew: You know, our kids would live the same life that we did, largely. And so when we got language, this was a huge upgrade relative to gestures and tradition. It gave us the ability to broadcast and broad listen information. Broadcasting being: I have a piece of information, I'm gonna make copies, and I'm gonna give it to other people.
Andrew: And broad listening being: I'm gonna take in information from different people around me, and I'm gonna synthesize that into a better model of the world and use that to make better decisions. [00:08:00] Since that time, a quarter of a million years ago, the main change is the increasing scale of our ability to do that.
Andrew: So the first broadcasting and broad listening technology was language, and it was at the scale of yelling. You know, it's like I could broadcast to a hundred people if I went to the densest part of the community and just yelled at the top of my lungs. And we still call that, you know, protesting, to this day.
Andrew: It's an important thing. And broad listening is still a little bit lower scale, 'cause you can only really listen to one person at a time. The exceptions are things like choirs or chanting or whatever, kind of social things that we do to make broad listening more palatable, so that you can know that a hundred people all agree on the same thing at the same time.
Andrew: So yeah, broadcasting and broad listening is this journey of increasing scale, and the project of broadcasting is almost done. It's almost possible for one person to send a message to every other person on Earth, for free and instantly. We're like a couple orders of magnitude off from that being a thing. And when it's done, it'll be actually done, done. There won't be just some infinite rocket-ship-up of more broadcasting, 'cause it's, [00:09:00]

Andrew: broadcasting to who? You know, it's like, what's the point? Like, you could send it to more rocks in outer space, I guess. And maybe if we discover aliens out there, that'll be interesting.
Andrew: Broad listening, however, is so far behind. Oh my gosh, it's still so far behind. We still largely listen to, like, one person at a time. The contrast between broadcasting and broad listening is the source of enormous numbers of problems, and it is a huge source of the centralization of power in society, because it's this

Andrew: really, really upstream problem that requires us to centralize control over information to get anything done. That then makes all the doing really centralized in terms of its power. There are kind of three main technologies that we use to broad listen right now: the literary canon, statistics, and sampling. So by literary canon, what I mean is:
Andrew: So by literary canon, what I mean is. 10 people witness something, they each write articles. One person reads those 10 articles and writes a new article, and then someone reads 10 of those articles and writes a new article. And there's like some entrance criteria into some literary tradition that allows you to eventually read a paper that was sourced by 10,000 people, but it might take 10 years for it to, or a [00:10:00] hundred years for it to ultimately come together.
Andrew: Right. And that's the cat's meow of broad listening, because you can fully synthesize nuanced perspectives about the world. And the other two main ways of broad listening are really crappy versions of that, but in exchange for the crappiness, you get to go a lot faster and at a lot higher scale.
Andrew: And so one of them is sampling, where instead of summarizing everything, you just grab a sample, grab one example, one interview, one case study, one, you know, person, and say this is representative of the group. Doctors say that you should try Mylanta because it's gonna make your life better. And, you know, if you wanna know where advertising and public relations came from, there's this guy named Edward Bernays from the 1920s who realized, oh crap, I can get one person to say something and totally mold public opinion.

Andrew: And like, this is where PR and advertising came from. There's a wonderful documentary called The Century of the Self that I highly recommend watching. It talks about that. The third and final one is statistics. Love statistics. Statistics is [00:11:00] great, but it requires you to throw away all the nuance when you're converting the fuzzy world into specific facts and figures.
Andrew: Specific numbers. Like, you know, they say age is just a number, and they're kind of right. Different people are actually older or younger, physically or mentally or whatever. We throw away all that nuance, we create an age, and then we can go: what's the average age of the population of Nebraska? We know that.
Andrew: Does that tell us their average maturity level, or their average life expectancy, or their average whatever? Like, not necessarily. It's this myopic, through-a-line view on life. It's not really a full description of what's going on. And so those three technologies are powerful, and each time one of them has been introduced, the world has radically changed.

Andrew: I mean, the introduction of the literary canon was, you know, a long time ago, at the beginning of kind of recorded history, so we don't really have a ton of the before or after, 'cause before that we didn't have literary canons, so we have no idea what was going on. But radical, radical, radical change, right?

Andrew: But they're still very flawed. And today there are still kind of three big problems that are preventing broad listening scale: the problems of information overload, [00:12:00] information privacy, and information veracity. And when those three problems are solved, it'll become possible for you to listen at scale to the world's public and non-public information, with the ability to verify the values of who you're receiving information from, who you're trusting to rely upon for facts about the world. And when that becomes possible, you can listen to the whole world. You could have an interactive conversation with everyone in the state of Nebraska. And by that I mean people load their data into devices they control,

Andrew: and that creates an interface for automatic communication.
Alix: Okay, so I get the capability that you're trying to design for an individual to participate in the system. Do you wanna talk a little bit about the technology? Like what would that require and what are you guys building?
Andrew: So there's, there's a collision of three blobs of technology, or three kind of

Andrew: fuzzy groups. Love a good blob, um, love a good blob. I, you know, I wish I could say these were really crisp, like this technology combines with that technology, but it's tech transfer, so everything's a little fuzzy. One is kind of the deep learning and statistics blob. [00:13:00] One is the cryptography and privacy-enhancing technologies blob, and the other one is the distributed systems blob.
Andrew: And so for deep learning, unfortunately, when you actually dig in, it's like eight or nine different technologies that are all colliding as different ingredients. I'm not inventing any of these technologies. The best analogy is: the people who aspired to build the Apache web server back in the day were combining a whole bunch of recently developed individual ingredients that weren't mature enough to make a general-purpose web server happen before then. But when they could combine all of them into this one general-purpose thing, magical things happened.

Andrew: Same thing with the iPhone. Like, the iPhone wasn't held back by any particular genius. Everyone knew tablets were coming. It was in cartoons and animations and all sorts of stuff. Battery technology and screen technology, all these individual ingredients, got small enough and low-power enough and all this kind of stuff to finally make the phone possible, and Apple

Andrew: did a great job of pushing a few of those way farther forward in the synthesis of them all together. So we're in the same type of moment. That's the nature of the challenge of trying to describe what's going on. So one is a partitioned form of deep learning that doesn't have a name. My personal favorite is deep [00:14:00] voting, because I think it lends credence to the type of power that we're trying to create, which is this kind of weighted combination of different people's models.
Andrew: And so this is already happening along four or five different dimensions because of efficiency reasons. But the sum total is: instead of having one blobby model with all your black-box weights in one black-box file that you run on a big GPU cluster, you have one deep learning model that's actually separated into tens or hundreds or even thousands of submodels that are each controlled by different people and likely trained on different subsets of data.

Andrew: So if you've heard of mixture of experts, or RAG, or model ensembling, or model merging, or Git Re-Basin, or these different paradigms, they're all doing the same thing. They're all splitting data up into partitioned sections and combining them at runtime, as opposed to combining everything during training.
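To make the deep voting idea concrete, here's a minimal Python sketch. It's an illustration, not OpenMined's actual code: the Submodel class, the random weights, and the trust weights are all invented. Each participant owns a submodel trained only on their own data, and a listener combines the submodels' predictions at runtime with weights they control:

```python
import numpy as np

rng = np.random.default_rng(0)

class Submodel:
    """One participant's privately trained model.
    Stand-in: a random linear map from features to class logits."""
    def __init__(self, n_features: int, n_classes: int):
        self.W = rng.normal(size=(n_features, n_classes))

    def logits(self, x: np.ndarray) -> np.ndarray:
        return x @ self.W

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def deep_vote(x: np.ndarray, submodels: list, weights: list) -> np.ndarray:
    """Weighted runtime vote over independently owned submodels.

    `weights` is the listener's knob: how much to rely on each participant
    (the attribution-based-control idea). Participants can be added,
    removed, or re-weighted without retraining anyone else."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                  # normalize trust weights
    probs = np.stack([softmax(m.logits(x)) for m in submodels])
    return (w[:, None] * probs).sum(axis=0)          # mixture of predictions

# Three participants, each with their own model; the listener weights them.
models = [Submodel(n_features=4, n_classes=3) for _ in range(3)]
x = rng.normal(size=4)
print(deep_vote(x, models, weights=[0.5, 0.3, 0.2]))  # a probability vector
```

The design point is that the combination happens at inference time, which is what makes it a runtime "vote" rather than a single merged training run.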
Andrew: That's kind of the first blob of technology: basically just deep learning tech that is partitioned. The second is a blob of cryptography technologies that try to make end-to-end encrypted computation work. So you've probably heard of end-to-end [00:15:00] encrypted message transfer, stuff like WhatsApp and Signal, where it's like: I have a message, and I can encrypt it, and it'll stay encrypted until it gets to you. End-to-end encrypted computation is like that, except you're also controlling some sort of information processing between the

Andrew: sender and receiver of information. There are four different categories of cryptography that need to come together to make this possible. These categories are called input privacy, output privacy, input verification, and output verification. We probably don't have time to get into all of them, but if you've heard of things like homomorphic encryption, or secure multi-party computation, or differential privacy, or zero-knowledge proofs, these are all the little subcomponents and sub-ingredients that make this possible.

Andrew: This is the part where the holders of specific private keys can control how their information is used after it's technically left their computer. And by 'use' I mean which encrypted computations you're going to choose to allow. And you have to have a whole little bundle of cryptography technologies to actually do that well.
Andrew: That bundle is sort of still coming together. And the last one is distributed systems. Distributed systems is [00:16:00] really about, well, one part is all the stuff you need for the data to actually stay federated in its raw form. So if you actually want people to maintain control over how their data is used, it actually has to stay on computers

Andrew: they control. You only leverage it in an encrypted form, for specific encrypted computations that are gonna produce specific insights, in specific contexts that you want to have happen. And doing distributed systems is just really hard. There are just a bunch of really important sub-problems to doing that well,
Andrew: and doing it in a modular way that doesn't pin you into one particular network effect or one particular opinionated protocol. And then the other big problem that's being solved there is what people call the web of trust: being able to sort of overlay your actual social graph of who you trust

Andrew: to send or receive information with, in reality. And the grand prize here is basically word of mouth at scale: the ability for you to ask your friends of friends of friends of friends of friends something. Or not even that; it could be an organization who knows a person who knows an organization who knows a person.
Andrew: It could be any type of social hop across these different edges. That's where the [00:17:00] solution to problems like disinformation and information intermediation, this kind of stuff, can be found, because in theory you can find many redundant paths back to the same event or the same source. So you could talk to a thousand people who all use the same product, or you could talk to a hundred eyewitnesses at the same event, or whatever, via your different networks.

Andrew: And if you have the ability to do that, it significantly reduces the chances that a coordinated disinformation program can be successful, because they would have to get so many people on board to tell the same corroborated story, right? It just becomes logistically very difficult for people to deceive you, as opposed to when all of your information funnels through two or three bottlenecks. Then it becomes a lot easier for people to deceive you, especially if those two or three bottlenecks have the same information incentives.
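Here's a rough sketch of that redundancy idea, with an invented toy graph: treat the social network as a graph and greedily count node-disjoint paths from you back to the claimed source. Every additional independent path is another whole chain of intermediaries that a deceiver would have to co-opt.

```python
from collections import deque

def bfs_path(graph, start, goal, blocked):
    """Shortest path from start to goal avoiding `blocked` nodes, or None."""
    parents = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nxt in graph.get(node, []):
            if nxt not in parents and nxt not in blocked:
                parents[nxt] = node
                queue.append(nxt)
    return None

def count_independent_routes(graph, me, source):
    """Greedily count node-disjoint paths: a lower bound on true redundancy.

    Assumes `me` and `source` aren't directly connected -- independent
    chains of intermediaries are the whole point of the exercise."""
    assert source not in graph.get(me, [])
    blocked, count = set(), 0
    while True:
        path = bfs_path(graph, me, source, blocked)
        if path is None:
            return count
        blocked.update(path[1:-1])  # retire the intermediaries we just used
        count += 1

# Invented toy graph: undirected trust edges, listed in both directions.
graph = {
    "me":    ["ana", "bo", "cy"],
    "ana":   ["me", "event"],
    "bo":    ["me", "event"],
    "cy":    ["me", "dee"],
    "dee":   ["cy", "event"],
    "event": ["ana", "bo", "dee"],
}
print(count_independent_routes(graph, "me", "event"))  # -> 3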
Andrew: Anyway, those are the three big kind of blobs of technology. One is partitioned deep learning technologies, then end-to-end encrypted computation technologies, and distributed systems technologies. And that's roughly, you know, deep learning slash statistics, cryptography, and then network infrastructure.
Alix: That's so cool. It kind of reminds me of, um, do you know entangled particles in [00:18:00] physics?
Andrew: Oh, I know about them from like a Popular Mechanics level. I'm not like a specialist.

Alix: Me too, me too, I'm not either. But the idea that, I think it was Chinese scientists that took one. So basically, two particles get entangled, and if you change the state of one, it changes the state of the other.

Alix: And distance doesn't matter. These Chinese scientists actually flew one of the two entangled particles into space and then changed the state of the one on Earth, and it changed the state of the one in space.

Andrew: That's wild.

Alix: I know.
Andrew: That stuff boggles my mind.
Alix: It makes you wonder what the world is. I dunno. But it's so interesting that once a piece of data leaves

Alix: your control, or your immediate control, you could continue to sort of change its state.
Andrew: Yeah, that's a great analogy. That is the dream, right? And even when it's been combined with many other particles, right? In this case, if your data gets combined with many other data points into a model or whatever, you still have your degree of agency. Like, the degree to which your contribution contributes to the whole, I think, is very much the goal.
Alix: Yeah. Super cool. So, 'cause I feel like you also use the phrase 'network effect' a lot. [00:19:00] In our first conversation, I remember my big takeaway was you thinking about the history of decentralized technology and public protocols for the internet, and where we are now in terms of the network effects that are created, and which entities are in control of the protocol layers, as our challenge.

Alix: Do you wanna talk a little bit about maybe what you've learned from the last 20 years of concentration of power, in terms of protocol control, and what you see in the next 10 years as the fights that are gonna happen around the protocols for these kinds of things?
Andrew: I think the right backdrop for that conversation is to acknowledge that everything that I just described is going to happen regardless of whether I'm here or not.

Andrew: Like, these technologies are colliding because there are very big, important problems that they solve. Thousands of people have been working on them for decades, and they're reaching a point of maturity where something exciting is about to happen. And this is the story of technology in general. There's a wonderful documentary series from the BBC, from like the late nineties, [00:20:00] I think late nineties, called Connections, that documents basically the whole history of technology from the birth of civilization to now, and it basically makes this one point, which is: nothing can be invented before or after
Andrew: it was, because it was waiting for all the ingredients and the political moment to make it happen. The other thing that happens over and over and over again, if you look at the history of technology, is that right when a new information technology is birthed, whoever the most powerful people of the day are tend to have an outsized amount of control over it

Andrew: for some non-trivial period of time. You could say literacy in the Dark Ages, or you could say the mainframe computer and IBM, or you could say the radio, television, whatever. That's almost an absolute truth, because it takes a bunch of R&D and power and whatever to build and implement new technologies.
Andrew: The library, the Library of Alexandria, this kind of stuff. The one big exception was the internet, and this is wacky and wild, 'cause it shouldn't have happened that way, in the sense that the ARPANET was prototyped on top of Bell Telephone's existing monopoly, which was so strong and so stringent that they were [00:21:00] literally broken up, you know, years later as a result of that monopoly.
Andrew: Yet because the US government threw a bunch of money at combining a bunch of ingredients before they were naturally going to be combined, they got ahead of commercial industry and created public goods early. Normally there's this arc where it starts off as a little experimental thing, it becomes centralized for a while, then it becomes very centralized for a while.

Andrew: Then people start saying, this is a terrible thing, it should be decentralized, and then it arcs down and becomes more federated, and then it becomes like a public good. Literacy is pretty much fully there, although it's never there as much as people seem to think it is. Even just knowing the language of our most important texts, you know, it used to be Latin and all this other kind of stuff.

Andrew: Those were culturally controlling things. Knowledge of computers, stuff like radio and broadcasting, is decentralized with the advent of social media and all this kind of stuff. The question is, almost from a COVID analogy perspective, how did the internet cut the peak, right?
Andrew: How did they skip all the way to the end? The big thing was they got a network effect crazy early with free, open source tech. I mean, just so, so, so early. [00:22:00] This network effect was so powerful, even though it was kicked off in large part by funding from the US government. In 1991, '92, when the US government tried to issue, you know, another official standard for the internet, based on OSI, it's reported that basically the world kind of ignored them, because everyone had already deployed

Andrew: some version of the protocol, and they weren't all gonna reinstall at the same time. I mean, think about TCP/IP having 32-bit IP addresses. Talk to Vint Cerf; he'll tell you that was the prototype that got away. It was not meant to keep 32-bit IPs. And even though it benefits every single person in the world to upgrade to 128-bit IPs, it is still taking three to four decades to do it.
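For a sense of scale, the arithmetic behind that complaint, in Python: a 32-bit address space tops out around 4.3 billion addresses, fewer than one per person alive, while IPv6's 128-bit space is astronomically larger.

```python
ipv4 = 2**32    # 4,294,967,296 addresses -- fewer than one per human alive
ipv6 = 2**128   # the IPv6 address space
print(f"IPv4: {ipv4:,}")                              # IPv4: 4,294,967,296
print(f"IPv6: {ipv6:.3e}")                            # IPv6: 3.403e+38
print(f"IPv6 per IPv4 address: {ipv6 // ipv4:,}")     # 2**96 of them
```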
Andrew: The lesson here is that basically every time there's a new set of technologies, there's an arc of centralization and an arc of decentralization, and the main task is not to be the lone genius who invents a bunch of new technologies or whatever. It's to cut the peak. Like, how can you skip to the end and have safe but federated tech as early as possible?

Andrew: That's primarily a function of who builds it first, and the people who [00:23:00] build it first are usually the ones who are just crazy wealthy and powerful at the time. Every once in a while, and by every once in a while I mean, like, one time, we did better,
Alix: which is helpful. Yeah,
Andrew: yeah, yeah. Yeah. The odds are very much stacked against us, except for the fact that it happened recently, and so it's still in people's, you know, recent memory, this kind of stuff. So I don't wanna

Andrew: trivialize all this as just a fundraising problem, but it definitely starts there. Ultimately, the magic happened when World War II ended and the US government said: that seemed to go well. Why did that go well? Superior science and technology. Let's always do that. And then they started ARPA and just threw crazy amounts of money during the Cold War. And my dad was in the military, and he told me that in all their military simulations,
Andrew: the people who invested in R&D and technology were always the ones who won in the long run. You just have these systemic advantages. And so, you know, for me, the thing I look at now is: are we going to do that? Are we gonna have enough of a cohesive vision to build the ingredients that haven't happened yet,
Andrew: as opposed to throwing money at kind of chasing LLMs or something like that? To get ahead of [00:24:00] it, uh, far enough that you can actually get these sticky network effects before the next obvious thing happens, which is basically a startup that scales these protocols out of Silicon Valley. In the grand scheme of things,
Andrew: there's this old phrase, I think it's a military phrase, which is: novices focus on strategy and experts focus on logistics. And what they were talking about was that in war, the people who win are the ones who keep their troops fed and watered better than the other guy. Again, strategy is about what you don't do,

Andrew: like, oh, we're gonna go left instead of going right, and all this kind of stuff. And logistics is about how much resource you're pulling in. And so I really do believe that this is the stuff that's actually going to control how AI gets used in the long run, 'cause it's control over the supply chain.
Andrew: Whoever builds the protocol that accesses the world's data and compute and talent and networks it together, that protocol is gonna have a massively outsized impact on who gets power and who doesn't, whether it's good or whether it's bad, and all this kind of stuff. And the particular algorithm you happen to be using, whether it's a transformer, an LSTM, or whatever, is gonna kind of matter, but only insomuch as it changes who owns the supply chain. [00:25:00]
Andrew: And I'm deeply more passionate about that problem than the problem of just, like, numbers go up, make the model smarter. It gets kind of boring staring at the line after a while. People think about chatbots as talking to the chatbot, and that's something I really bought, right up until I read this paper that was kind of the founding paper of the internet, which gave a rich description of what communication actually is.

Andrew: It just changed my life. I would say the most important paper I've ever read in my entire life is called 'The Computer as a Communication Device.' It was a paper written in 1968 by J.C.R. Licklider, the guy who went to ARPA and set in motion what became the ARPANET. The intro to the paper basically says: you think you know what communication is, but you don't.
Andrew: Communication is not the sending and receiving of bits, which is what everyone would say communication is. Like, you know, you dial someone on the telephone, or you send them a text message, whatever, that's communication. He said, no, no, no, no, that's not communication. And he was a psychologist, also the person who

Andrew: funded and coined the idea of the internet: a psychologist, not a technologist, which is interesting. Communication is the alignment of mental models between two people. You have a mental [00:26:00] model of the world, I have a mental model of the world, and we throw bits at each other until our mental models align. And that's what communication actually is.
Andrew: And as soon as you make that connection, all of a sudden deep learning tech is an amazing communication technology. Federated learning is an amazing communication technology. Chatbots are an amazing communication technology, because it's actually about you being able to put your mental models into a software program, and then someone else can download those mental models

Andrew: when you aren't around.
Alix: This is also reminding me of when I was a kid. I used to think all the time about whether the blue I see is the blue you see, or if we each see a different color and just call it the same thing.

Andrew: I wonder that too. Yeah. Yeah, yeah, yeah.
Alix: Um, but when you think about what's happening with Grok, for example, like, this idea of language models and chatbots, the purpose isn't 'I put my mental model out there.' Or maybe for Musk it is, but there is something happening that is:

Alix: 'I want my mental model to be the hegemonic mental model of the world.'

Andrew: Ah, yes. [00:27:00]

Alix: Which is a very different,

Andrew: Yes.

Alix: scary proposition.
Andrew: This is also an extremely common pattern in the invention of new information technologies. The encyclopedia was invented in the 1700s, in the encyclopedia movement in France, and it was meant

Andrew: to create the official dogma of what the world is, in one book. That goes along with another pattern that I've seen in information technologies: every time there's a new synthesis technology, every time there's a new way to make information stick together, some genius has the bright idea, let's put all human information, all human knowledge, into one of these things.
Andrew: The first synthesis technology, I would argue, is the library. It allowed you to put lots of ideas in geographically the same location, and you could go in and synthesize. And of course Alexander the Great is like, let's put all human knowledge into one building, right? Into one library.

Andrew: And then when someone invented the book, the encyclopedists were like, let's put all human knowledge into one book. Then the computer scientists, you know, the Google Books project, let's put all human knowledge into one computer, all this kind of stuff. We're still living in the era of that.
Andrew: Everyone has that bright idea and thinks that they're a genius, and then ultimately they realize that's a bad [00:28:00] idea. But you get to have an amazing instrument of propaganda for a little while. Even literacy was that way for a while. Like, oh, literacy, well, only the elites should have, you know, the one centralized everything in one place.

Andrew: And I think it is potentially dangerous, but this is what trimming the peak is all about, right? If you're just trying to cut off the peak of centralization, that is kind of the inevitable ebb and flow, that is kind of the path-dependent thing. It's not that we're gonna be centralized forever, but there's gonna be a period that really is crappy, right?

Andrew: And we have the chance to kind of skip that, and allow our children to benefit from it, not instead of, but in addition to, our grandchildren.
Alix: I feel like broad listening as a concept, maybe it doesn't miss this, but I think a lot about feminist epistemology and the idea that there is no distilled truth.

Alix: Everything is a combination of multiple perspectives and experiences and positionality, and the pursuit of a single point that we can all agree on is kind of missing the point of society.

Andrew: Yes.

Alix: So I wonder a little bit, when you say [00:29:00] one of the pillars, I think you said, was veracity, how do you think about

Alix: preference and pluralism and the complexity of social experiences within that?
Andrew: So I'm definitely a pluralist. There is also definitely an interesting open question of what happens when we get to the end of information technology, like what that does to that. Because since the beginning of time, no one has had the overview. No one has had the ability to take in all the information in the world and make a perfectly well-informed decision.

Andrew: H.G. Wells had a series of essays called World Brain. He had this great hope that if information technology reached its climax and everyone could see the same information, they would come to the same decisions, and we would have world peace. So it's not a new idea, I guess, is all I'm trying to say. In my opinion, and this is where the kind of web of trust stuff that I mentioned earlier kicks in, when information technology is done and kind of complete, there's still an ongoing problem that people need to work on, which is:

Andrew: what sources of information do you trust or not trust? Like, what is your personal weighting of [00:30:00] information you're willing to take in and rely on for decision-making power? And there's this really tricky problem, which is that we all die. And as a result, as a result of that,

Alix: that is not where I
Alix: thought you were going with that.
Andrew: No, no, no. Well, we all live short lives. None of us lives long enough to actually learn fully which sources of information are true or not true.
Alix: That's why academia is so cool to me, because the idea that you make your contribution for 30 or 40 years and then, mm-hmm, you die, you know.

Andrew: A literary canon.

Andrew: It's amazing, right? Yeah. Like, that's fantastic. Um, but I think, if you remember Dunbar's number, you can only remember like 150 brands, or, you know, 500 faces, or something like this, right? This kind of thing. And I think this gets back at the same core idea, which is that at some point,
Andrew: all the world's trust systems are grounded in one person just taking a risk and believing another person, and then learning after the fact whether that person was actually a reliable source of information. There is no way to predict it in the future unless you have that ground-source data to extrapolate from.

Alix: That's so interesting.
Alix: So [00:31:00] trust as distinct from being correct.
Andrew: And even there, there's this question of what is correct. And, let's just go all the way there: what is truth? There are two competing definitions, in my opinion, and they're the leading ones. One of them is: truth is repeatable, verifiable information. I put in the same inputs, I get out the same outputs. This is the grounding of the scientific method and all this kind of stuff. And the other one is:

Andrew: truth is whatever information I take in that allows me to flourish in life, by my own definition. I take in this information, I use it to make decisions, and the decisions work out. And so it's an end-to-end kind of truth. And that's the difference between Socratic thought and traditional thought. And it's the difference between science and religion and politics.

Andrew: These two definitions of truth are in, like, a 2,000-year struggle to figure it out. They're not going to go away. We are more trapped in the latter than we are in the former, in the sense of: how do you know that your one eye is deceiving you, right? If your brain is just a vat with sensory information coming in, the way that you know is that it gives [00:32:00] you information that's inconsistent with what your other senses are telling you.

Andrew: If you only have one sense, if you only had one eye and no touch or taste or whatever, you have no ability to filter out fact from fiction. You just have to trust your eye, because that's the one you've got. And you can extrapolate this all the way up to: how do I know to trust that journalist?
Andrew: Well, I trust that journalist because they're attached to this institution, which is attached to this idea of an institution which I've trusted in the past, that led to good things for me. Or I trust them because people who I trust trust them. These are, like, the two big things. Your whole world is constructed based on, like,

Andrew: 50 to 150 relationships that you have personally tested yourself. And then you extrapolate that to all the relationships that they trust, and the sources of information that they pull in. And that's, like, an ugh, icky thing to grapple with.
Alix: But it also means that cancel culture is broad listening.
Andrew: Any type of information synthesis at scale is broad listening. It can absolutely have challenging themes.
Alix: I think there's, like, the whisper network thing, of, like, someone does shitty things to different people, and then you update what you know about that person, and maybe there was no formal process whereby that person was held accountable. But I know that that person is someone that I will avoid.

Alix: That's so interesting. Yeah.
Andrew: So product reviews are broad listening. Voting is broad listening. Statistics is broad listening. Any type of: many people say something, and I am weighting it via the relationships it's being filtered through, and ultimately synthesizing it so that I can make better decisions.
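One way to picture that weighting, as a hypothetical sketch with invented numbers: discount each report by how many hops it traveled through your trust graph before reaching you, then take the trust-weighted average.

```python
# Invented toy numbers: each report carries a rating plus how many social
# hops it traveled through your trust graph before it reached you.
reports = [
    {"rating": 5.0, "hops": 1},  # from a direct friend
    {"rating": 4.0, "hops": 2},  # friend of a friend
    {"rating": 2.0, "hops": 3},  # three hops out
]

DECAY = 0.5  # assumption: each extra hop halves how much you rely on a report

def broad_listen(reports, decay=DECAY):
    """Trust-weighted average: many voices, each weighted by the
    relationship path it was filtered through."""
    weights = [decay ** r["hops"] for r in reports]
    total = sum(w * r["rating"] for w, r in zip(weights, reports))
    return total / sum(weights)

print(round(broad_listen(reports), 2))  # -> 4.29
```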
Andrew: All of that is broad listening. The big, big, big problem is that it happens at crazy low scale. And because it's so low scale, we delegate the three big problems to centralized institutions instead of doing it ourselves. And the three big problems are information overload, privacy, and veracity.

Andrew: Information overload is: there's too much information for me to read, so what should I look at? Newsfeed, what should I look at? News station, broadcaster, whatever, researcher. The second problem is privacy, which is: most of the world's information that could be useful to me in making a better decision, no one's gonna send to me, right? Because it's in the personal lives and personal affairs of people, and I don't know or have trust with those people.

Andrew: So think: for every decision you make in your life, there are probably a billion people in the world who have [00:34:00] faced a very similar thing within the last 10 years, but you can't go and talk to them about it, because there's too much information, and they don't wanna just send it to you in the raw, because it's icky and scary and could cause all sorts of problems.
Andrew: It could give you power that they don't wanna give you. The final one is the inverse of that, which is: you receive information from people, but you don't know which pieces of information are true. You have two tools with which to figure that out. One is to test some information channels yourself, manually, which is painful and scary; that is the trust-building exercise.

Andrew: And the second is redundant paths to sources of information that you can't see, so that if you get ten eyewitnesses to the same thing, or ten people who all tried the same product, or a hundred, or friends of friends of friends of friends, like larger lists, then it becomes more redundant. Part of why word of mouth continues to be so popular is the fact that it's so low scale, right? It's harder for these types of centralization issues to coalesce.

Andrew: And so yeah, broad listening: for each of those three things, we delegate to a centralized authority to process more information than we could. Or if it's private information, we have, you know, the CIA or healthcare institutions or banks or whatever, that hold kind of the world's [00:35:00] private data and can centralize it and give us insights about it.
Andrew: And the third one being veracity: you know, we hire people to go out and see whether the stuff is really true, and we have journalistic institutions and research institutions and all this kind of stuff. And you've gotta ask yourself, do I trust that institution, that kind of stuff.

Andrew: But unequivocally, we have to solve these collective problems, so we delegate them to centralized institutions. It's hard to think about the world in a world where broad listening technology is fully accessible and you no longer need to centralize that power. It makes the whole set of political and economic constraints just radically different.
Andrew: Why are our markets inefficient? You can't talk to every other potential buyer and seller. The people who are building products can't talk to every prospective customer about what they actually want in the world. There are these huge, huge information bottlenecks that make it difficult.

Andrew: Why is the political process so jammed? Well, when you go to vote, in this multiple-choice question that you get to send to the government, are you actually fully expressing everything that you need from your government and its people? Like, no. You're doing statistics. It's an information bottleneck.

Andrew: It's [00:36:00] the kind of statistics that could synthesize information once every four years in the 1700s. But if you just wrote an essay, like, here's what I need from my government, the government would get, in the case of America, 360 million essays, and it's a totally cumbersome system, right? And that sort of conversation happens anyways, the big,
Andrew: whatever people call it, the political dialogue or whatever. It's pretty decoupled from systems of power. It's not the way in which they are appointed or not appointed; they're appointed or not appointed by votes. And what's that old phrase? You show me the incentives and I'll show you the results. That plays quite true there.
Andrew: Plays quite true there. So anyway, the whole world changes if, if we get Broading Tech. And so I guess to get back to your original question of like what does it look like if Open Mind is radically successful, it WW we get to the end of information technology early and we get there. Based on public goods before they were able to be co-opted for, you know, a few decades or co, maybe a couple centuries.
Andrew: The experience of it feels a little more like synthetic media because that's the most accessible form. This won't be the exclusive way, but my guess is that it looks and feels like a signal app, but where you get to choose massive groups of [00:37:00] people. Filtered by your own friends of friends, of friends of friends to have a conversation with.
Andrew: And it might be audio, visual, text, all this kind of stuff. Um, and it's not you talking to the ai. The AI is just an interface to all the concepts that they've written down. And at the end of the day, it just becomes communication at scale. And when you wanna go solve a problem in your life, you get to have a conversation immediately with everyone who had that problem very recently, even if it's a problem in your personal life.
Andrew: 'cause you're not actually learning personal data about any particular person. You're having a conversation about your own life. And they're just informing it based on kind of the latent patterns that they've experienced.
Alix: Who, when hearing this conversation, are you trying to attract into a conversation?

Alix: Just so people know: I should reach out to Andrew if I am a health institution trying to figure out blah blah, or if I'm a politician. Like, who are you looking to talk to about this?
Andrew: There are not that many programs in the world that can trigger this network effect fast enough and early enough.

Andrew: The main bottlenecks are the small teams of [00:38:00] people who are specialized in one of these core ingredients that needs to come together and be combined with the other ingredients. So the 150 leading experts in differential privacy, the hundred leading experts in homomorphic encryption, these little tight communities in this outside-the-Overton-window tech stuff. They need to realize that they're part of a much bigger picture, and start building that way and researching that way, as opposed to continuing to work on the pure version of their own thing in isolation.

Andrew: The next community is people who can fund public infrastructure at the scale that is necessary for this stuff to be finished and adopted.
Andrew: In my opinion, there are probably only three to five programs that I know of that are plausibly able to do that. So this is stuff in and around the National AI Research Resource in the US, this kind of public AI pool of funding and program being run by the NSF, which has the potential to set aside a few billion dollars to drive public infrastructure

Andrew: around, you know, federating control over AI, or decentralizing AI. 'Democratizing AI' is, I guess, the phrase they use. Anything around that. DARPA, NSF, NIH: these places, in theory, I know the current political climate is challenging, but in theory, they could write big enough checks, and have in the past, to cause this to happen.
Andrew: In the UK, there's a thing called the National Data Library project that I think is probably the leading contender in the UK to manifest this type of vision. Also, a lot of people outside the UK maybe don't know that the ARPANET had a sister project in the UK that was incredibly important, the NPL network.

Andrew: That was basically the same type of packet-switching prototype, and packet switching actually was developed in the UK. And I would say the global network effect got solidified, and the public good got solidified, not when the US built the ARPANET, but when the US linked internationally to other nations that had similar, analogous protocols.

Andrew: And the whole word 'internet' actually comes from the idea of an interconnected network of networks. That was when all these little test prototypes linked together, and that's when the world became open and the trajectory of the 20th century really changed. So I would say, yeah, there's a kind of complicated set of potential funders in Europe that could build this kind of public infrastructure.
Andrew: You probably know more about them than I [00:40:00] do, Alix. And then, I have a vested interest in also slowing down this information transferring to those who would be competitive with it. So I don't wanna see any VCs standing up a broad listening program anytime soon. That would be disastrous, I would say.

Andrew: Now, once the highways are laid down, go nuts, go crazy, right? This is great. But the software playbook is so centered around owning marketplaces and owning the commons, this kind of stuff. The whole ball game is about: can public funding front-run VC funding with enough time that the highways can get laid down, so that commerce can then happen on the highways, rather than commerce owning the highways, I guess.
Andrew: Policymakers: unfortunately, there's not a lot from a regulatory perspective that I think can really change things. People will talk about standards bodies, but in the history of the internet, standards bodies were not relevant in the beginning. They were relevant in the middle to late stages, and in perpetuity after that.

Andrew: But we're using TCP/IP because they were the ones who built it first, not because they were the ones who sat around and had a conversation. And I love standards bodies. They're [00:41:00] really important. Everything that is being conceived right now needs to find its way to being stewarded by a standards body; that absolutely needs to happen.

Andrew: But we should not get our cart and horse backwards. The power is in the hands of the builders and the funders, and it's actually mostly the funders. I would say that's where the bottleneck is.
Alix: I always take away a lot from our conversations, so thank you. This is a type of decentralization that I don't think many people know is possible, or think about in this way, because I feel like most decentralization conversations happen at the level of broadcast and centralization, not at the level of meaning-making and private stores of data.

Alix: So I feel like it's just a really helpful, different kind of approach to a problem that other people, I think, think a lot about, but in very different contexts. So thank you. This was lovely. I will be talking to you soon.
Alix: Thanks to Andrew for wrapping us up in this three-part series on Nodestar. Next, we are actually doing another series. Thanks to Georgia Iacovou and Sarah Myles, as ever, for producing these [00:42:00] individual episodes, but also for architecting these bigger series, where we can go deeper on particular topics with lots of different conversations with different folks.

Alix: So our next mini-series is on scams. The first in that series is gonna be an interview with Mark Hayes, who is an anti-crypto lobbyist in DC, and I'm hoping that a conversation on crypto is kind of situated really nicely in between decentralization and scams. So we'll get into that next week. And if you happen to be in New York for Climate Week, reach out.
Alix: We're gonna be around, and if not, we're gonna have a live show on data centers, obviously the primary environmental conversation we think we should be having about AI during Climate Week. Even if you're not there in person, um, we will have a live stream, which we'll link to in the show notes, and we will be streaming it on YouTube.

Alix: We have a new-ish YouTube channel. It's always helpful to subscribe, and also to share this episode and all the episodes that you love that we do, just to grow the number of people that engage with these topics and the people that we [00:43:00] platform. So do subscribe to our YouTube channel and also sign up for the live stream.

Alix: And also let us know if you're gonna be in New York for Climate Week, um, and we will hopefully see you there.
