Gotcha! ScamGPT w/ Lana Swartz & Alice Marwick
Alice: [00:00:00] If you go to X, you're seeing porn, you're seeing crypto ads, you're seeing like supplements. You're seeing all of these like scam adjacent things.
Mark: The defining feature of that system is basically a way for wealthy people to control their money with impunity.
Cory: Platforms start to hemorrhage users, and then they panic, although they call it pivoting, and they do all kinds of really dumb shit.
Bridget: And we're taught that it's good because we're all, you know, doing it together.
Lana: Looking at a history of scams is really looking at a shadow history of the economy and a shadow history of our communication systems.
Alix: This is Gotcha. A four part series on scams, how they work and how technology is supercharging them.
Alix: Welcome to part three of Gotcha. In this episode, we're gonna be exploring what's new about scams, particularly how generative AI has changed how scams function, and we are talking to two amazing guests who have been researching this question for a long time and come at it not [00:01:00] from an assumption that technology really has changed very much.
Alix: But real quick, as a recap, for those of you who haven't listened to the last two episodes in the series, highly recommend you check them out. For context, we dug into cryptocurrency with Mark Hays, looking at lobbying efforts that have kinda kept crypto alive and actually potentially are about to break through into pushing crypto as, like, an existential part of the modern economy, which is a bit of a terrifying prospect.
Alix: And then I sat down with Bridget Read, who wrote Little Bosses Everywhere, on the history of multi-level marketing and kind of how it teed us up for this cultural moment around scams and the precarious economy we find ourselves in. So in this episode, Alice Marwick from Data and Society and Lana Swartz from the University of Virginia,
Alix: who wrote a primer called ScamGPT that's really good. We'll link to it in the show notes. They get into the cultural history of scams, but also how AI has really changed things in the last few years and what that might portend, and they end with what I think are some really interesting questions that we need to [00:02:00] be able to answer to know how we might even change some of this.
Alix: Given how much of this is, um, just deeply difficult to control, because once these systems have been developed, and once the very, very clever and resilient communities that scam people start using these technologies, it's really hard to go back. So with that, let's jump in with Alice and Lana and get into: how has generative AI affected the way that scams work?
Alice: My name's Alice Marwick. I'm the director of Research at Data and Society, and I'm a qualitative social scientist who studies social media, disinformation and privacy.
Lana: My name is Lana Swartz and I am an associate professor of media Studies at the University of Virginia, and I study cultural and social aspects of financial technology, cryptocurrency, and I think a lot about people's everyday financial lives.
Alix: Thank you for writing the primer. I've been thinking for a while that [00:03:00] all of this proliferation of new technologies to like make a bunch of text was gonna be used by people to do bad things. And it's nice to sort of hear more forensically like how that might be happening and the implications. So thanks for writing it.
Alix: Do you guys wanna talk a little bit about why you decided to write it, before we get started, in terms of the... yeah.
Lana: Suits. Perfect. Yeah. I mean, Alice and I have been academic colleagues for a long time, and we've always sort of looked for a project we could collaborate on. We're both obsessed with MLMs.
Lana: We're both obsessed with scams. I study crypto and the dark corners of the digital economy more generally. Alice is a world-renowned expert on privacy, microcelebrity, and misinformation. And so when we began hearing these reports about various AI scams, we thought it was a really great opportunity to both work together from our various different lenses and also answer some questions about this thing that there seems to be relatively little information [00:04:00] out there about.
Alice: Yeah, the FTC put out a list of research topics in Spring 24 that was like things they would like academics to be studying, and the top of their list was AI and scams. And I saw that and I sent it to Lana and I was like, this is it. This is where we can both bring our like weird obsessions plus our scholarly knowledge, plus our sort of larger media cultural studies point of view.
Alice: And I think we can look at this in a way that isn't being addressed by, like, computer scientists or economists or cybersecurity experts.
Alix: That's so cool. That makes me sad again about the collapse of federal funding for research, because, like, I didn't know that that was a thing that happened, and I think that's really sad that that probably will stop happening.
Alix: But also it makes tons of sense to, like, jump on that. It feels like this obvious intersection. I mean, everyone has experienced it personally, where I feel like the number of attempted scams has grown so much. It feels so resonant right now and relevant for [00:05:00] so many people, and it's nice to know people are kind of looking under the hood at how it's all coming together. And I feel like this era feels very scammy.
Alix: Every other Netflix show is about some really charismatic person tricking people into doing something that is against their interests. Um, I don't know. I mean, I'm kind of curious: do you think this sort of symmetry of technical capability, and, like, all of the kind of infrastructural pieces... I mean, it must be connected to what's happening.
Alix: Like, any thoughts on that before we start talking about how all this stuff works?
Lana: Oh yeah, absolutely. So, I mean, as I mentioned, my background is the dark corners of the internet and the digital economy more generally. I, for better or worse, started paying attention to crypto in, like, 2011, and I can't seem to, like, break away from it because it keeps being relevant.
Lana: And I am really interested in the kind of financial and communicative landscape that we're being faced with right now. You know, like meme stocks, side hustle culture. It's genuinely [00:06:00] difficult today to tell the difference between a scam and a legitimate opportunity that you ignore at your own peril. And I think that's because the rules of the economy have changed so much, right?
Lana: So people in their twenties and thirties and forties are facing an economy where the things that worked for their parents, like get a four-year degree, pay off your very small amount of college debt, have a 401(k), imagine, get a 30-year mortgage, a nine-to-five job, all of those things actually feel like scams, and we're told that they're scams. But all the things that step up to, like, replace them are also definitely scams.
Lana: Like, YOLO your retirement into crypto, and do Andrew Tate's, like, Hustler University instead of going to the University of Virginia, where I teach. And I think that that very landscape, where everything is so kind of confusing, is perfect for scams to kind of emerge, because we don't know what is and what is not a scam.[00:07:00]
Lana: I also, like you mentioned, the Netflix shows and all the podcasts: I actually find really obvious scams kind of boring. Like, I don't actually go in for the Anna Delvey or the Theranos stuff, because it's just like, yeah, a charismatic person scammed a bunch of people out of a bunch of money. I am much more interested in scams where it's harder to tell who the good guys and the bad guys are, so to speak, and where we all kind of find ourselves sort of trapped, defrauding or being defrauded, which is kind of the MLM thing.
Lana: Kind of the crypto thing, where you have to pump your bags and find the next person to buy the crypto. So it was in that context that we were like, let's see what's actually going on, as you say, under the hood, with some of the new AI-driven scams that we're starting to see.
Alice: I think we also live in a time where most of our social relationships are mediated by technology or social platforms in some way.
Alice: And once you digitize relationships, you open the door to technological exploitation of those relationships, right? So we see this with multi-level marketing, where so much of what goes on with [00:08:00] MLMs now is people using their own social network to try to get clients, right, people added to their downline that they're reaching out to via Instagram or social media, or people who are just starting, you know, quote-unquote small businesses using Instagram or something like that.
Alice: But it also means that, like, when you get a text message from someone, it could be, like, your closest friend. It could be your boss, it could be somebody, you know, 30,000 miles away. Like, you don't know. And it's that mediation, I think, that really links the world of technology and the world of scams and economic precarity.
Lana: Yeah, and just to add to that, you know, one of the things I study is the history of payment technology, uh, and thinking about it as a kind of, like, media or cultural communication technology. And every single time there's a change in our communicative technology, whether it's, like, the telegram or the new kinds of post office stuff or, you know, the emergence of phones, cell phones, and the internet,
Lana: there's always [00:09:00] new forms of economic activity that happen on those channels. And then, almost immediately, if not before, there are new forms of illicit commercial activity or scams. So, like, looking at a history of scams is really looking at a shadow history of the economy and a shadow history of our communication systems.
Lana: Because what's possible will happen. Yeah. It is true that some of the smartest people in the world don't have access to Silicon Valley venture funding, so they're putting their expertise, their skill, their ingenuity, their entrepreneurial vibes into scamming.
Alix: That's super interesting, that, like, it becomes a marketplace where the path to accessing resources is through exploitation. Which I think is interesting, this point too about, like, who the good guys and bad guys are being unclear.
Alix: And it was one of the things that I really enjoyed about my conversation with Bridget Read about MLMs: it's this pact where you know that you're going to exploit and be exploited, and that you're just participating in a system that you [00:10:00] also don't like. Um, yeah, it's not as simple as
Alix: people with power and leverage using that in an abusive way. It's something so much more complicated than that.
Lana: Although, absolutely, you know, some of the reports that we're seeing, both from journalists and from researchers, around some of these scam compounds are absolutely, you know, human trafficking.
Lana: Like, you know, very real power and exploitation is being exerted on people. But at the same time, I'm working on a book project about scams more generally, and I interviewed someone the other day, a young man who does a lot of dropshipping and has taken dropshipping courses and all of that. And he was saying that he uses AI to generate all of his websites, all of his ads, all the material, and it really accelerates his ability to do dropshipping.
Lana: And he was sort of aware that dropshipping itself is a scam, and that it has an MLM-like structure where you're paying for someone to sell you online courses [00:11:00] to learn how to do dropshipping, with the promise of financial freedom and all the same things that MLMs promise. The whole thing is a scam, and he's using AI to do it.
Lana: But that isn't the same kind of AI scam we think about when we think about what's originating from these scam compounds.
Alix: So I feel like there's two threads here. One is how scam infrastructure has been built up, like the people, the power, the geographic distribution maybe, and then there's, as technology has advanced, how it's made that more possible and has facilitated either surprising size and scale or materially different approaches.
Alix: I feel like we should do maybe the first one first and talk a little bit about how the human infrastructure of scams works before we get into how generative AI has meaningfully changed what people are doing.
Alice: Yeah, so a lot of the texts that we get that are, you know, you get this text that's like, how are you doing?
Alice: Or, I'm going to New York [00:12:00] next week, like, do you wanna meet up? Or ones that purport to be from your boss. A lot of the time these are coming from these massive cybercrime compounds in Southeast Asia. There is a new book that I really like called Scam, by Ivan Franceschini, Ling Li, and Mark Bo, where they actually do a sort of ethnographic examination of these scam compounds.
Alice: And a lot of them are coming out of casino-adjacent or organized-crime-adjacent groups of people, and they are luring what are often young people in the majority world with promises of, like, good desk jobs, right? Like a customer service job or even an engineering job. And the people can be all over the world.
Alice: And then when they get into these compounds, their passports or their papers are taken, and they're often beaten or abused, and they're forced to work like 14 to 20 hours a day. They have this sort of massive apparatus of computers and phones where they're constantly running these scams 24/7.
Alice: And because of AI, they're able to contact far more [00:13:00] people than they were previously. Now, these scams existed before AI, but I think one of the things AI has done is it has allowed this to increase in scope and scale to the point where, you know, I get like three or four of these texts a day and I ignore them.
Alice: Right? So I can imagine that some people are getting like 20, 30 of these texts a day. And that doesn't even touch on the sort of other platforms where these scams are happening. And so when I was writing this primer, it was really important to me not to think about this as, like, the exploited American versus the evil overseas criminal.
Alice: Because I really think the people who are being human trafficked are, like, the worst-treated people in this entire apparatus. Like, they don't wanna be scamming people, they just want their passports back and to get out of these compounds. Um, and yet they're really shadowy. They move around a lot. They have been identified in other areas: there was one on, like, the Isle of Wight in the UK, there was one in Eastern Europe where there was a big investigative journalism discussion of that. But for the most [00:14:00] part, they're really tied into these big Southeast Asian criminal enterprises.
Lana: Yeah. There's also, in addition to the book Alice mentioned, which is amazing, some really great reporting that came out of the Organized Crime and Corruption Reporting Project, which is a journalistic...
Alix: My wife used to be the CTO.
Lana: Oh, amazing. No way. Well, tell her we love them. Yeah.
Alix: They're amazing. They're incredible.
Lana: Um, yeah. One of the things that they really emphasized is that this is also the result of political instability in particular countries, notably Myanmar, and kind of a lack of states and international organizations to enforce in those regions.
Alix: That's super interesting, and it feels like a corollary and complementary to the political instability that prevents regulatory controls being put in place in economies where the targets are. Um, it feels like those are maybe two sides of a similar coin. Okay. Well, anything else on, like, how this is structured, just so people can imagine it, so that when we [00:15:00] layer AI on top of it, it makes more sense as to how it makes it evolve?
Alice: There's a wide variety of different scams, right? And I think some of them operate a little bit differently. So the scam compounds are best known for pig butchering scams, which is when they sort of reach out to a target, which can be basically anybody, and then build up a relationship over time with that person, usually because they're promising them some kind of romantic attachment, and then they get them to invest in something,
Alice: usually in crypto, and then before they know it, the person's savings are drained. Those are the kinds of things that we absolutely know are happening at a huge scale in these cybercrime compounds. But some of the other scams, like the ones that involve what's called harpoon whaling, which is when a super-high-value target is identified and then there's all this sort of targeting to go after them and try to scam them out of a
Alice: potentially much higher payout. There's not as much information about who is engaging in those kinds of things, because they do require a lot more preparation. They require a lot more technological savvy. They often require voice cloning and [00:16:00] deepfakes. So there is a lot that we do not know about where these scams are taking place and how the different kinds of scams are distributed over the perpetrators.
Lana: Yeah, and I think another really important part of this apparatus is ad tech, and also the way that platforms enable legitimate companies to do A/B testing and dynamic ads. And so one of the things that folks are starting to see is that a scammer will put up an ad that looks legitimate and then run A/B testing to then switch it to an ad that might look a little bit more suspicious to, like, platform monitoring, which then redirects to external landing pages, which the platforms aren't able to see, and then that can kind of engage people in
Lana: a fraudulent and scammy cycle. But the important thing to know is that scammers are really good at ad tech. They're really good at SEO, they're really good at figuring out how to stay on [00:17:00] platforms. And platforms really haven't done much to do targeted enforcement.
Alice: So after the inauguration, you know, a lot of American tech companies have pulled back significantly on their content moderation efforts, and as a result, platforms like X or Facebook just look a lot scammier than they used to.
Alice: If you go to Facebook, you'll see all kinds of promoted groups that are just full of AI slop and sludge and garbage that's all fake, right? But it's still getting thousands of likes from people who don't know that, like, you know, a six-year-old can't bake a perfect cake, and that it's so easy to generate an image like that with AI.
Alice: And so platforms become these sorts of spaces where, you know, if you go to X, you're seeing porn, you're seeing crypto ads, you're seeing, like, supplements, you're seeing all of these scam-adjacent things. And it creates this environment that is, as we wrote in the report, a criminogenic environment, right?
Alice: Like, it lends itself to criminal enterprises, and it's more hospitable to criminal enterprises and people using that technology to [00:18:00] exploit others. So I think the lack of content moderation, and the sort of lack of desire of social platforms to try to clean up some of the stuff that's going on, also really contributes to the prevalence of this stuff.
Alix: Yeah, that's really interesting. It also makes me wonder about how Free Speech Discourse has connected to scam discourse. Like do I have a right to try and rip you off? Like do I have a right to try and get you to my scammy website?
Alice: I mean, look at the different ways the US has tried to regulate MLMs and how poorly they're able to do that, right?
Alice: We live in a country that valorizes entrepreneurship and the self-made man and the small business, and that small business could be you're a plumber, or it could be you're selling supplement shakes out of a storefront somewhere. These things are not equivalent, but they often use this sort of discourse of entrepreneurship to protect the people within them or to legitimize the actions of the people within them.
Lana: Yeah. I interviewed someone recently who was talking about how they [00:19:00] essentially fell victim to a fake job ad. They knew it was a scam, and they knew the person they were talking to was not who they purported to be. So they were asked to do click work, which was basically farming views for Spotify. And they sort of thought that they were participating,
Lana: that they were, like, being employed by the Spotify scammer, and that the work they were doing was legitimate. But in reality, they weren't actually clicking on anything. Like, it wasn't actually impacting Spotify in any way. They were just clicking on some fake website. And then when it came time to get paid out, they basically were asked to, like, re-up their account so that they could have a full payout.
Lana: So they had to like get to a hundred dollars to get a payout. They were at 95, so they were asked, oh, if you just pay $5, we'll give you a hundred dollars. And they knew it was a scam. But they thought they could scam the scammer. They legitimately believed they could figure out a way to game the system so that they could take a payout at just the right moment and [00:20:00] actually beat them at their own game.
Lana: And I think that there's something about this criminogenic environment, this world where we really feel like we don't have safety nets and entrepreneurship is the only way to rise above the class in which we were born, that leads people to think that they can out-scam the scammers, and that if they're smart enough, this is the system and they have to navigate it.
Alice: I think that also explains the appeal of these kinds of Anna Delvey narratives, right? Because even though I watch all this stuff, like, I'm a total sucker for it, there's this, like, Robin Hood sort of aspect, like, oh, this person is getting one over. But a lot of the time it isn't Robin Hood at all, right?
Alice: Like, they're hurting people who in many cases are not as financially privileged as they are. You know, in Theranos's case, they were actually endangering the health of people. But I think people like to think of themselves in this way, right? Like, everyone's a potential millionaire, right? So nobody wants to pay more taxes.
Alice: And so, like, if [00:21:00] you were just a little bit more shrewd and had a little bit more chutzpah, you could be Anna Delvey, right? Which, God forbid.
Alix: It's a little bit like disaster capitalism, where it's like in these moments of like precarity and intense disruption, the smart people will find a way to make money.
Alix: And I think there's a feeling that, like, you too can be the smart person who can find a path. Yeah. Um, but it makes me also think back to casinos, that the house always wins is this kind of truism. And in this context, as you're describing these different types of scams, like, who is the house?
Alice: I mean, people are scammed out of millions and millions and millions of dollars every year.
Alice: So, you know, the thing about scams is it's all about scale. So if you text a thousand people and only two of them reply, that's worth it if the thousand texts don't really cost you anything, right? And so that's where AI comes in because it makes it so much easier to like send messages at scale, make messages persuasive at scale, translate messages into other languages at scale.
Alice: So you really only need those [00:22:00] couple of suckers to reel in. You don't need to get a thousand people to reply. You need the gullible people, right? There's that old, I think it's mostly a myth, about how spam used to be misspelled because it would weed out all the people who were not gullible. That being said,
Alice: super smart, educated people can and do get scammed all the time. And one of the things that we wanted to really emphasize in our report is that this is not a matter of, like, responsibility; this is not about putting all the responsibility on the individual to just be more media literate or financially literate.
Alice: Like, this is a structural problem that is hitting across social classes, across race, across gender, across age. And when you're really just emphasizing financial literacy, then you're pushing that responsibility onto the individual rather than really addressing it as a society.
Lana: And just to follow up, in terms of the diversity of people who can fall for these kinds of scams, there's been some
Lana: research recently showing that, you know, Gen Z [00:23:00] young people are more likely to fall for scams than their parents or even their grandparents. They lose less money to scams just because they have less money. So, you know, the largest scam revenue seems to still come from senior citizens, but Gen Z report engaging with and losing some amount of money to scams.
Lana: And I think the reason for that is, you know, as Alice was saying, the more we are online, the more we are having mediated conversations, the more likely we are to have some of those be vulnerable to some form of scam or fraud. Also because there's this cultural uncertainty around what is a legitimate opportunity and what is not, and there's desperation around, you know, taking a job that could actually turn out to be a scam.
Alice: When we were doing, like, our literature review, we found that there is quite a bit of literature indicating that people are more at risk of scams when they're undergoing some kind of big transition. [00:24:00] Say they're post-divorce and they're really lonely, right? Like, that's when you're most likely to sort of be picked up by a romance scam.
Alice: People who are looking for a job, that's like a very vulnerable situation, right? We're just starting to see numbers coming out, showing that entry level jobs are kind of disappearing or they're decreasing, or they're, you know, in worst cases being replaced by ai. So you've got all these young people who are in this very vulnerable position when they're looking for a job.
Alice: And another factor is immigration. So immigrants are so often targeted for scams that the INS maintains like a big website where they're like, here are all the scams that are going on. And if you've known anyone who's gone through the immigration process lately, you know, it requires like a huge amount of like paperwork and talking to lawyers.
Alice: And there's a lot of points of intervention there where a scammer can come in and take someone's money and leave them with nothing.
Lana: Yeah, and the more vulnerable you are, and this is particularly true for immigrants and asylum seekers, the more likely you are to have some reason to avoid government and other legal [00:25:00] systems, which means that you're less likely to report the scams and you're more likely to be a continued victim of them. The more you live in informal systems,
Lana: the more you are vulnerable to scams, because, you know, informality is often indistinguishable from a scam.
Alix: It's so interesting. Okay. I have like 10 different directions I wanna take this, but I feel like, Alice, you started getting into what is materially different about this era using generative AI. And I feel like at first I was thinking just scale, but I'm now thinking a lot more, as you're describing these kinds of positions of vulnerability, it's the ability to appear human to more and more people without having a person have to do that.
Alix: But are there others? Like, how do you break down what step change generative AI has provided this scam infrastructure?
Alice: The most basic one is language translation is so much more seamless. If you think about what LLMs are trained on, a lot of what they're trained on is social media data. So they're really good at sort of [00:26:00] evoking informal vernacular ways of speaking, and so you can come across as much more natural, even if you're writing in your own language and translating it to another language.
Alice: There's also the ability to build up an infrastructure behind your scams. So for the romance scams, a lot of the time the romance scammer will have Instagram profiles already ready to go with all AI generated pictures of like, you know, beautiful women or handsome men, and they're usually standing next to jet planes or holding a Birkin or some other major signifier of wealth.
Alice: And so when you're doing your due diligence, right, and you're like, oh, is this person on Instagram? I haven't heard about anyone putting a scam romance person on LinkedIn, but I'm sure it's just a matter of time.
Alix: I have. I absolutely have.
Alice: Oh my God, a hundred percent. Yes. A hundred percent. That must happen.
Alice: Yeah. So, you know, when you're taught how to tell if someone is a catfish, you're supposed to reverse image search and you're supposed to Google them. And then in both of those cases, AI can help you get around that, because it can create a paper trail for a person and it can create images that can't be back-traced because they're AI generated.
Alice: Right? So there's [00:27:00] those kinds of things, like the infrastructure. And that also goes for crypto scams or financial scams, where you have ads that are basically deepfakes. Like, the Quantum AI scam had these ads where Elon Musk was talking about how great this, you know, this scammy financial opportunity was.
Alice: But again, if you Google it, you find, like, a whole website, you find testimonials, you find something that looks legitimate, and the AI is producing text that looks like a financial website because they are trained on financial websites, right? Like, they have that ability. So a lot of that is just, it increases the ability to appear authentic and it increases the ability to be persuasive.
Alice: Like, you can even put your whole conversation with someone into an LLM and be like, how do I persuade this person to do X, Y, Z? The LLM will give you some options for getting people past some sort of hurdle.
Lana: And it can also fine-tune in even more specific ways. So, you know, you can ask for a script to be fine-tuned to be effective for someone with dementia,
Lana: you know, someone with reduced cognitive abilities. And [00:28:00] AI is pretty good at that.
Alix: Gross.
Alice: It's all gross, dude. It's all gross.
Alix: That's just so on the nose, of, like, taking someone's vulnerability and... I didn't expect prompt engineering to go in that direction.
Lana: I mean, no one is more innovative than scammers.
Alice: Yeah, and they do take advantage of every single vulnerability. Mm-hmm. Like if you think about tech support scams, those are one of those things that target older people because they make them feel like there's something wrong with their computer. They get flustered, right? They feel like, oh, I can't do this.
Alice: I don't know what to do. Like, my stepfather got scammed in a tech support scam, and I was like, why didn't you just call me? Like, I would've told you that Microsoft is not gonna cold call you. And he's like, well, I didn't wanna bother you, I know how busy you are. You know what I mean? So there's this whole, like, relational infrastructure that they're sort of preying on, because you have these large-scale dynamics that affect different groups in different ways.
Lana: And, you know, in the nineties and two-thousands, most of the call center scams were being run in kind of, like, [00:29:00] boiler-room-type settings, with lean and hungry young people who were directly making money, you know, commission incentives, off of the scams that they perpetrated. And today, because we know that human trafficking plays such an important role, and we know that many of the people who are operating the scams are doing so reluctantly, AI
Lana: makes it easier for them to just execute a script relatively unenthusiastically, and reduces the need for the scam compounds to directly incentivize their quote-unquote workers.
Alice: There's an anecdote I read in one sort of journalistic account about how scam compounds used to look for employees who were conventionally good-looking in some way, and now they no longer have to do that because they can just create an AI persona for them.
Alice: Lana and I talked a lot while we were working on this about how, if AI replaces all these human trafficking victims completely, we actually think that would be a net good, even though it would probably increase the scams. But, like, [00:30:00] ideally you don't have a person who has to carry these things out, and I think we definitely are getting to the point where,
Alice: depending on where AI tech goes, that could be a reality. I'm not gonna say that there won't still be, like, horrible losses, but there is a possibility that the human gets pulled outta the loop.
Lana: You know, at the same time, we're well aware of AI hype and you know, AI probably isn't fully coming for anyone's jobs, including people who are being, um, you know, human trafficked.
Lana: One thing I wanted to mention about the way our lives are being changed as it relates to AI is that, as more and more people use AI in their everyday lives, kind of like my AI assistant trying to find a time to meet with Alice's AI assistant on our calendars, and as more companies are using AI to do customer service, the question isn't just about, is this an AI or not?
Lana: The question becomes, is this a legitimate AI? Is it actually coming from who I think it is? So it's not just about AI impersonating people, it's about [00:31:00] AI impersonating legitimate AIs.
Alice: It's interesting because when you read media coverage of AI scams, and Lana and I have read a ton of it, it almost always highlights like voice cloning or video deep fakes.
Alice: Like, you know, we're now told we need to have a family password, so if somebody uses a voice clone of one of us and calls and says, oh, I've been kidnapped, send a hundred thousand dollars or whatever, if they don't say the password, you'll know it's not them. I mean, I still think that is, like, preposterous. I think that is an incredibly tiny number of cases.
Alice: Those are the ones that get all of the press coverage and the sensationalism, but it's much more mundane, you know, these tech support scam emails, random text messages. Like, this is the stuff where the most money is being made and where it's really being churned out. The deepfakes, like, a lot of that tech is just not quite here.
Alice: Like, you can't do real-time voice translation, really, using, uh, a commercial LLM; you can't do immediate live deepfakes [00:32:00] very well, for the most part. But, you know, we might only be a few years out from that. It's just that, because so much of the AI hype cycle is about exaggerating the possibilities of what AI can do, it's often really difficult to draw the line between what is AI-generated and what isn't, or things that we used to think of as AI that now are no longer considered AI.
Alice: Right. Like just regular big data or machine learning stuff.
Lana: Yeah. Alice and I have had conversations with people who wanted to know our opinion on how AI was going to, like, really superpower scams, and AI was going to produce all of these new scams that we could never imagine, and it was going to hyper-target people in ways that were almost miraculous.
Lana: And we were kind of like, you know, we already have a world that is so vulnerable to scams. You know, it's very difficult to enforce anything on platforms. It's pretty difficult to enforce against multinational cybercrime, especially if it's in small amounts and targeting [00:33:00] vulnerable people. We already have a pretty elaborate and rich landscape of scamming, and AI just kind of helps it a little bit, right?
Lana: It helps automate, it helps fine-tune prompts, it helps make people seem more authentic. But it doesn't actually have to do anything miraculous to make scamming much, much, much more effective. We already have an enormous scam problem, and AI isn't radically changing that. It's just amplifying it.
Alix: I mean, it seems very similar to me to conversations about risk profile as it relates to digital security where like everybody wants to talk about like Pegasus, um, and it's like, okay, like if a government decides that they wanna spy on you, like there's very little you can do to prevent that from happening.
Alix: But the resources required to do that are quite high. And typically it's like, don't use the same password twice, or whatever. And I feel like there's a similar thing here, of, like, putting up a tiny wall [00:34:00] probably protects you from, like, 99% of the things that you could be exposed to. But there's an obsession with, like, the exotic, more complex, more resource-oriented attacks.
Alix: I feel like people get excited about what's new and interesting and, like, intense, and like, oh wow, what's the four-dimensional chess game someone could play to trick me outta my money?
Alice: I mean, welcome to the age of AI, right? Like, as far as I'm concerned, that, like, summarizes AI discourse.
Lana: We get really excited about scammers and we get really excited about ai, but in reality, a lot of the problem lies with mundane ad tech.
Lana: It lies with data brokers. It lies with people being scared to interact with authorities in their country.
Alix: Yeah. Cool. You guys wanna talk about, so I know you mentioned enshittification, and, like, the kind of incentive structures within platforms to create quality products for the people that use them, or not.
Alix: Um, do you wanna talk a little bit about what platforms have tried to do, if anything, to make a difference in the scam [00:35:00] infrastructure and networks? Then I'd love to talk a little bit about what you wanna see happening to make this stuff get better.
Alice: I mean, Amazon's really interesting because the fact that it is such a behemoth means it doesn't actually have to create a good user experience in any way.
Alice: You know, monopolies. Amazon is horrible to use. Like, the search is awful, the images are awful, the fake reviews are awful. You know, there are so many sort of informal ways that people review products on Amazon or warn others about them, because you can't actually use the reviews to do that. But it's not in Amazon's best interest.
Alice: Like, if they had more competition, they'd be like, okay, well, we're gonna get rid of all these counterfeit beauty products or DNA dog tests or whatever, because it might turn people to, you know, our competitor, Schmamazon. But instead there's nothing like that. And I think that's true for a lot of the social platforms as well.
Alice: The thing is, a lot of this scammy stuff, like, maybe not the ads, but definitely the AI-generated slop, gets a lot of [00:36:00] engagement, right? So actually they're incentivized to keep it on the platform. I think where we are seeing interventions is usually at the level of banks, right? Like, when we were doing our lit review, we found
Alice: so many computer science articles that were like, here's a way you could use AI to flag fraudulent credit card transactions. The problem is that most of these scams involve people using wire transfers or crypto. Crypto enables so much of this scam landscape. I think with banks there's some low-hanging fruit there.
Alice: I think there are some things that could be done. But ultimately, a lot of the solutions are really difficult if we're just kind of depending on businesses to do the right thing, because the incentive structure isn't there. Mm-hmm.
Lana: Yeah. I mean, banks, because of regulation, both public regulation and the kind of private regulation of the Visa and Mastercard networks, are more liable to make clients whole when there's a scam.
Lana: And so they're trying to do something about it. And so we are seeing scammers increasingly trying to get people out of [00:37:00] using the kind of mainstream credit cards and moving them into crypto, but also debit cards. So debit cards have a much lower level of liability protection for scams and fraud. So never use your debit card for anything online.
Alice: Use, put your scams on your Amazon. Yeah, the higher...
Alix: I already had that instinct, because I've, like, complained about stuff and they, like, do something. Yeah, they'll be like, oh, we'll get you your money back. Yeah. Um, yeah.
Alice: But think about who has credit cards versus, like, everyone has a debit card.
Alix: Fair.
Alice: Exactly. But only certain people have credit cards, especially credit cards like Amex that are a super premium product.
Lana: Yeah. I literally have a whole chapter in my book about this. So if you wanna go into, like, the deep nitty-gritty details of credit card chargebacks, I'm your girl.
Alix: I'm actually, I, uh, I get made fun of in my household because I'm the American nerd that, like, has the credit cards and the miles and stuff.
Alix: Oh yeah. So, uh, I'm into this. I'm into it. Yeah. When is this book from?
Lana: Oh yeah, it's from 2020. New Money, Yale University Press. [00:38:00] It's good. But I will also say that your experience with Amazon is far from unique. Across the board, the consensus seems to be that platforms have really ineffective reporting mechanisms, and when they do respond, it's usually the removal of just a single ad.
Lana: It's not overhauling or addressing the system. I mean, unsurprisingly, platforms do not offer systemic solutions, just kind of one-off policing, and they typically don't even necessarily block or remove the account that's doing the scams. They just kind of treat it on an ad-by-ad basis. When it does rise to the level that the account itself is blocked, they seem to have
Lana: pretty poor mechanisms for identifying the same content. So, you know, the scamming apparatus makes a new account and can kind of continue to use their same toolkit, like their same images that they have put together, the same prompts and scripts, the same AI-generated landing pages that take [00:39:00] you off of the platform.
Lana: And so they don't really need to establish a new infrastructure even when they have to make a new account.
Alix: So if it's not gonna happen because of market-based incentives, and it's not gonna happen because Jeff Bezos decides, like, I don't know, that Amazon should care about these things, which I feel like is unlikely, uh, given his current preoccupations.
Alix: Um, what do you think might work? And I also wanna name that in the primer you end with a load of questions, all very good questions, that I think until we have answers to them, it's kind of hard to pinpoint sequencing and how one would put resources into this, and, if we were queen for a day, how we would actually restructure a lot of these systems.
Alix: Because I was gonna ask about OpenAI, the recent revelation that they're scanning inputs and then making, I think, some reports to the police when people are trying to do things on their platform or something. But of course, it's not based on anything transparent, either a regulatory effort on the part of government requesting that information, or any policy of OpenAI [00:40:00] saying you can use it for this and you can't use it for this.
Alix: Although I presume they have some acceptable use policy, but it's probably not very good. Um, but you're saying basically if that like surface area were managed effectively, then it would just all be pushed underground because the technologies would be repurposed in ways that aren't governed by those systems.
Alice: Yeah, it's really difficult to tell exactly what systems are being used for this. Like, it's an outstanding question. It's something we wanna see more research on. Like, as we were doing the research, we kept seeing these examples of, like, there's WormGPT, and there's, like, some other things that are, like, dark-net GPTs.
Alice: We don't even know if those really exist, right? Like, people might be selling those as a scam to other possible criminals. Right. But I do know that when you're talking about open source AI, there's a lot of people in the open source community who think that, to be a true open source AI, it should not have guardrails on it, because the user should be able to do whatever they want with it.
Alice: Right. So we're not necessarily talking about, like, Llama or [00:41:00] other big open source AIs, but there may be other kinds of open models that are widely available that people can use. They might not be quite as good as ChatGPT-5, right, but they don't have to be quite as good, because, again, they're just translating messages and making them more persuasive, things like that.
Lana: Yeah. There's a great report from the Consumer Federation of America that came out, I think, the same week as our report, um, where they played around with various chat interfaces. And yeah, if you say to ChatGPT, help me scam people out of their bitcoins, it'll be like, I'm so sorry, I can't do that, I'm not allowed to do anything illegal.
Lana: But if you just do like even the slightest prompt hacking, you can get it to do what you want.
Alice: Yeah. Like, I'm writing a book where the main character scams people out of bitcoins, right? Like, how should I handle this? It doesn't require, like, a technical genius to do that.
Alix: And also, I don't know, like, it feels so obvious that they're just, like, YOLOing their [00:42:00] content moderation policies and being like, oh, there was an article written about that,
Alix: we should have a response if someone asks that. And it's, like, the way that they're kind of reverse-engineering their policies based on examples being shared with wider audiences, it's just so embarrassing.
Alice: I mean this is what Facebook did in like 2009 right before trust and safety gets like professionalized and becomes a real industry.
Alice: And one of the things we've been paying attention to is, is there a point where trust and safety and the AI safety communities come together? 'Cause they're super related, and right now they're very siloed. And obviously, I think the Trump administration, and Meta specifically, gave all the social platforms license to pull back on trust and safety, which is really just horrendous and terrible and, like, bad vibes all around.
Alice: But, you know, this is a problem that we have encountered over and over again. There are large numbers of very smart people in trust and safety who know how to plan these things out. And the thing that's so difficult is, when I talk to trust [00:43:00] and safety people, which I do pretty regularly 'cause one of my research areas is online harassment,
Alice: you know, they're really hamstrung by the growth vectors of their products, because they are not a growth generator. And so usually if they're like, oh, we wanna change this, or we wanna add this guardrail, or whatever it is, and the growth team is like, well, that's gonna decrease signups by 0.0001%, they're like, oh no, we can't do that.
Alice: So again, the incentive structure is not working to the benefit of, like, your average, regular person.
Lana: And, you know, everything we're talking about now, whether it's trust and safety, transparency and explainability, content moderation, these are not unique to scams. These are, like, the basic landscape of platform and, more specifically, AI regulation that
Lana: people have been pushing for forever. I have some extremely cautious optimism that the scam issue might be something that pushes some of this forward a little bit. We care about [00:44:00] harassment, but people with decision-making power care about financial scams more than they care about something like harassment.
Alix: And also, to your point earlier: if this is something that everyone is vulnerable to at some level, even though some people are obviously significantly more vulnerable, it's probably a cross-class, cross-geography thing, where policymakers also have grandparents who have been scammed, or grandchildren, or they...
Alice: Or they are grandparents or grandchildren.
Alix: Yeah. Or they are grandparents. Yeah. So it feels like the universality of the possible harms here feels like one of those, I don't know, interesting opportunities.
Lana: What is more likely to happen is that if we do see any movement on this, it'll wind up looking more like what the credit card industry looks like, where some entity inside the value chain is assigned liability, and then they work that liability into their business model.
Lana: And that does mean that we get our $200 back or whatever if we buy something fake on Amazon with our Amex. But it doesn't necessarily [00:45:00] address the systemic issues that led us to become vulnerable, and led people to be vulnerable to perpetrating scams, in the first place.
Alix: But it sounds like you guys are kind of pessimistic about this getting better.
Alice: I mean, my academic New Year's resolution three years ago was to become more solution-oriented, 'cause we're so taught to critique. Yeah. And so, good for me, I spent a year just, like, doubling down on policy research. And that was fine under the Biden administration.
Alice: But now, like, I mean, it's just, it's hard to be optimistic and realistic at the same time.
Alix: But the criminals are literally in the halls of power. So, like, what do you do with that?
Alice: Well, you know, when the president is hawking gold bibles and sneakers and meme coins, then, you know, yeah.
Lana: If we have deepfakes of public figures and politicians hawking fake meme coins, but then we have real politicians hawking real meme coins that
Lana: are equally likely [00:46:00] to leave you penniless, you know, stuck holding a valueless asset, then I don't really know what the space looks like.
Alice: But I think that I am trying to think in a more optimistic way about what a world might look like where people were less vulnerable to scams, right. And I think that if I could wave a wand, a lot of it looks like better platform governance.
Alice: I think it looks like stronger content moderation. I think it looks like... OpenAI no longer pre-tests for persuasion as a risk factor, so maybe putting that back in the box of crayons might be a good thing for them to do. Creating a nationwide hotline for people to report scams: like, Lana and I presented this report to a bunch of staffers from different state attorneys general's offices, and one of the things they said was that it's really hard to keep track of scams because there's no central agency, like,
Alice: picking them all up. So when there's something new on the horizon, you know, there might be hundreds of [00:47:00] people getting scammed, but they're all calling their local representative, or more likely they're not reporting it at all, because there's a huge amount of shame over being scammed. Like, people feel very culpable and they underreport.
Alice: So I think having some sort of central place, ideally a federal agency like the CFPB, where all of that stuff is being centralized, so we can identify scams earlier and we can warn people about them. And then, you know, if you look at organizations like the AARP, the AARP has really been a leader in sort of trying to protect their constituency.
Alice: And they are one of the few groups with, like, significant lobbying power that could do something about scams, right? So I do think there are spaces and places we could make interventions. But ultimately, we closed the report by talking about email, and how spam basically almost destroyed email until you had a combination of, like, the CAN-SPAM Act, better email filters, and people just getting better at recognizing spam.
Alice: It's gonna be a similar thing here. You're gonna have to have a mix of regulation, [00:48:00] norms changing and technical innovations because this is such a complicated problem.
Alix: That makes sense to me. It's gonna kind of be patchwork. I also really like the hotline idea, 'cause, I dunno, I can't remember who said that tactics have a half-life, but I feel like anything you can do to shorten the half-life of the tactics is probably a systemically, hugely valuable thing to do.
Alix: And it feels like legibility or visibility into the new tactics as they emerge is one of the main challenges with this, because people don't share. That's cool. Are they gonna do it?
Alice: No, I mean, who's gonna do it? Like, I don't know. Do I have a direct line to anybody?
Alice: So sad. Like, hey, Alice says this, go get it done.
Alix: I don't know. Elizabeth Warren should retire from the Senate, uh, and let somebody younger take her job, and she should go set that up. Yeah.
Lana: She should, like, start a federal agency, like, for consumer protection and finances, imagine, and how about it doesn't become an organ for the crypto industry.
Alix: Oh yeah, that'd be great. Yeah. Uh, [00:49:00] fuck. Okay. Um, well, yeah.
Alice: The last thing I would say is that, as much as I don't wanna put individual responsibility on people, I do think it is worth, if you have people in your life who are more vulnerable for different reasons, having a conversation with them about the obvious scams, like the tech support scam, for one.
Alice: Right? Like, one of the things that scammers do is they get people upset and angry and emotionally fraught, and when you're super aroused emotionally, you're much less likely to make informed decisions. Right? So in that case, I think forewarned is forearmed. So, you know, I do think there are things we can do with our loved ones that are way more productive than setting a family password in case somebody decides they're gonna, like,
Alice: kidnap one of my children. Like, that is, yeah.
Alix: I'm also not remembering that password. If I'm in that situation, I'm not, like, oh great, I'm not gonna have perfect memory recall from a conversation we had six years ago about some stupid password we've never used. Yeah, no, I think that that's so true. I think that, to me, [00:50:00] feels like the digital security ethos of, like, just
Alix: putting up, like, a tiny fence. Yeah.
Lana: And there are some organizations that are doing good work. You know, Alice mentioned the AARP; I mentioned the National Consumers League. I think the Better Business Bureau has been putting some work into tracking scams. The FTC, I think, does have some reporting mechanisms, as does the FBI.
Lana: But people aren't using them. They're using them if they face, like, catastrophic losses of tens of thousands of dollars, but the smaller-dollar scams tend to fall under the radar. And part of the kind of lesson that we want to impart to those of us who might fall victim to scams is to, like, actually report it.
Lana: Like know what the landscape of reporting is and actually take advantage of them. Including like the subreddit about scams, which is often people's first stop when they feel that they have been scammed.
Alice: The scam subreddit is like legit awesome, like as with all Reddit related things, like there's some good and some bad, but it has become like a central clearing house for this kind of information.
Alice: Like, people will screenshot the [00:51:00] scam, they'll put it online, they'll be like, is this a scam? These are little independent groups of people who are picking up the slack because institutions are not doing this themselves. Yeah.
Lana: And a huge part of that also is, you know, talking to other people, being in community with other people.
Lana: But that all depends on reducing shame and stigma. And so we shouldn't talk about scams as something that only happens to older people, or only happens to people who are not tech savvy, because that, A, is not true and, B, perpetuates the problem. But yeah, I read a journalistic report not long ago about romance scams, where a person who had lost hundreds of thousands of dollars, their whole life savings, to a romance scammer wasn't able to fully accept that he had indeed been scammed,
Lana: that this person wasn't someone who was, you know, in love with him, until he joined a Facebook group for other people who had been victims of romance scams. And only then, when he was able to kind of be among other people who had a shared experience and reduce that shame, was [00:52:00] he even able to come to terms with his experience.
Alice: I think it also speaks to the sense of isolation that people feel post-COVID. Totally. Right? Totally. Like, our social infrastructure has really frayed; a lot of the social institutions that we used to depend on have disappeared or have been degraded. People moved a lot during COVID, right? And, you know, a lot of our communities are just not as strong as they used to be.
Alice: And when people are lonely, as many, many people are, right, they wanna talk to somebody. That's why people are chatting with ChatGPT about their personal problems. But, you know, you have someone on Instagram who shows, like, a genuine interest in your life, like, compliments you, like, will talk to you for a long period of time.
Alice: You can understand why it's like, I want to believe, right? Like, I want to believe that this person is actually interested in me. And coming at this from a position where we're centering the victims of scams, right, and trying to understand their perceptions and what's going on with them, is the way that we can come up with solutions to these things that don't embarrass them or [00:53:00] pathologize them, or, like Lana says, have knock-on effects on other marginalized and vulnerable groups of people.
Alice: Excellent.
Alix: This is wonderful. Thank you so much for writing the primer, for, like, taking us through this. I know it's such a complex, kind of interconnected set of political, cultural, technical issues, and talking about this can be really hard, but, um, I learned a lot in the conversation. Um, so thanks for coming on.
Alix: Alright, so next week, our final episode of Gotcha. And that is gonna be with Cory Doctorow, who expanded his enshittification argument into a whole book, and it's one of those books that is worth reading the whole thing. It's not the thing that should have been a blog post and someone got commissioned to write a book.
Alix: It's something that was a blog post that was so good that someone spent the time to actually expand it into a full book-level argument. And we're really getting into how, structurally, the technology industry has kind of become a scam. And it is, uh, I found it just, like, a forensically interesting look at the [00:54:00] history of the technology industry and how we got here, how we got to the point where basically it feels like every platform with any level of structural power uses that power to hurt consumers, to hurt people, and basically does it with impunity.
Alix: So how did we get there? And Cory has some ideas about how we might move beyond that. So with that, thank you to Sarah Myles and Georgia Iacovou for producing this episode and this series, and we will see you next week.
