It’s been nearly 10 years since swiping left became the gesture of choice for millions of daters with the advent of Tinder. Since that app first came on the scene on Sept. 12, 2012, it seems every single dating application has tried to suss out the best way for users to burn through potential dates at a pace that can set fire to the most well-intentioned thumbs.
A decade on, what so many of these apps still struggle with is dealing with bot, spam, and scam accounts. Gizmodo has previously reported on the thousands of people who told the FTC they were scammed through Tinder during the pandemic. Innocent folks looking for love reported getting scammed out of tens of thousands of dollars, being convinced to hand over credit card info, or even being threatened after they refused to pay up. It took Tinder until last year to introduce ID verification to most of the app’s global user base, although it remains voluntary for the majority of users. The idea of being deceived on a dating app even got a worldwide premiere through the popular Netflix documentary The Tinder Swindler.
Bots and scammers are rife on dating apps. A 2017 study published by researchers at the University of Southern California pointed out that it’s especially difficult to determine whether a user is a bot, since there are few ways to actually vet a user’s profile without interacting with them. These accounts often seem more legit than not, complete with original pictures and other social media accounts. Scammers are even more difficult, since even when you knock out one predatory account, they can easily come back onto the platform under an entirely different identity.
Well, one dating app had a novel approach to dealing with scammers and bots on its platform: turning them against each other. In an August blog post, Brian Weinreich, co-founder of the video-centric dating startup Filter Off, said that when a suspected scammer first signs up for the site, they’re put in a so-called “Dark Dating Pool” away from other users. The dev said his small team flooded the pool with GPT-3-based chat bots and collected the most hilarious examples of scammers attempting to scam a being with no compassion or love (sorry, but AI simply isn’t there yet).
Weinreich wrote that all chats are encrypted, and they “err on the side of caution” when putting users in the dark dating pool, which could mean some potential scammers slip through. In a Wednesday interview with TechCrunch, Weinreich said they used algorithms that sort accounts based on how scam users most often sign up for the app. Funnily enough, these scammers will apparently try to scam each other, arguing back and forth over who should send a $US40 ($56) gift card.
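Weinreich hasn’t published the actual signup heuristics, but the routing he describes can be sketched roughly like this. Everything below is invented for illustration: the field names, the individual signals, and the threshold are assumptions, not Filter Off’s real system.

```python
# Hypothetical sketch of a "Dark Dating Pool" router. A new signup is scored
# against traits said to correlate with scam accounts; only accounts that
# trip several signals at once get quietly matched with bots instead of
# real users ("err on the side of caution"). All heuristics are made up.

def scam_score(profile: dict) -> int:
    """Count invented signup signals that might suggest a scam account."""
    score = 0
    if profile.get("signup_seconds", 60) < 10:            # suspiciously fast signup
        score += 1
    if "whatsapp" in profile.get("bio", "").lower():      # pushes chat off-app
        score += 1
    if not profile.get("photo_is_original", True):        # reused/stock photo
        score += 1
    return score

def assign_pool(profile: dict, threshold: int = 2) -> str:
    """Route to the shadow pool only when multiple signals fire together."""
    return "dark_dating_pool" if scam_score(profile) >= threshold else "main_pool"

# Example: a plausible scam signup vs. an ordinary one.
suspect = {"signup_seconds": 3, "bio": "msg me on WhatsApp", "photo_is_original": False}
regular = {"signup_seconds": 90, "bio": "I like hiking", "photo_is_original": True}
print(assign_pool(suspect))  # dark_dating_pool
print(assign_pool(regular))  # main_pool
```

Requiring several independent signals before shadow-banning is what keeps false positives down, which also explains Weinreich’s caveat that some real scammers inevitably slip through into the main pool.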
“We have probably over 1,000 scammers that I know of that are actively talking to just bots,” Weinreich told TechCrunch.
Though Gizmodo couldn’t independently verify much of what the developer is claiming, reading through these posted chat logs between bot and scammer is like watching the Aliens vs. Predator of the scummy dating scene. The bots are so chock-full of canned responses that even a basic question results in repeated or even straight-up contradictory replies. How does a bot respond to “Are you on WhatsApp?” Well, it first says “no,” then “no,” then “no,” and finally “yes.”
Here are a few of the best snippets we found from when a scummy scammer met their match against the most obstinate opponent imaginable: a derelict AI bot.
“No, I’m not on WhatsApp… No, I’m Not… I’m not…”
How does a bot react to a simple question, such as an inquiry into whether you’re using another app? Well, the implication is that the scammer wants to move the conversation off the dating application and onto the less scrutinised medium of WhatsApp texts, but the bot doesn’t seem to have any set response.
“I know it’s a dot! I didn’t know what to write.”
This sounds like the sort of thing I would try to pull off on my own, and I’m no bot (I swear). Scammers try to get as many details as they can from you, starting broad and then working their way into the nitty-gritty as you grow to trust them. But even the most advanced chatbot, one that derives its speech algorithmically from text found on the internet, can’t talk about what isn’t available to it. In this case, it seems the bot doesn’t have any cognizance of an actual place, or else some glitch is keeping it from spelling out the name of the place it has in mind. Either way, good luck getting the credit card info of Heather, who lives in “____.”
“Yes, I live in Niceville, Florida.”
Score one for this AI bot. Niceville, Florida, is indeed a real place. Unfortunately for the supposed scammer, the AI isn’t all too impressed with his early-bird work hustle. The bot also seems to appreciate time as the enigma it is, because not only is it 10:49 a.m. “here” in Niceville, it’s 3:49 p.m. “there” in Niceville. Hell, I don’t blame you for scratching out a sci-fi story idea based on this conversation alone. Well, that is until the scammer desperately tries to find the bot on Facebook, but can’t seem to get the bot to give up its last name. Take that as a lesson for all you lovers on these sorts of apps: never give out your last name until you can verify the other person and until you’re sure it won’t come back to hurt you.
“I’m visiting family.”
Honestly, listening to this conversation feels more like a learning experience. If you’re on a dating app and you start to feel creeped out by another person you met online, just start retyping “I’m visiting family” over and over until the other person gets the gist you just ain’t interested.
“I think we’re both chatting with bots.”
No, I don’t think gaslighting is by itself funny. But watching a bot do it to a scam account that’s obsessed with getting the bot’s number is goddamn hilarious. This bot in particular seemed to be very aware of how to mess with this lowly scammer. It was so effective at denying each of the supposed human’s requests for “prove [sic]” that I’m actually a little convinced the AI is indeed flesh and blood. The fact that the bot’s supposed phone number, which starts with “555-555,” didn’t tip the scammer off is yet another reason to believe the scammer may be more robot than even they might think.
“No, I don’t have kids… I’m a stay at home mum… dad.”
This sounds like the kind of tough, strict, but fair parent I can support. I would love to meet a strong, gender-fluid parent such as this, who not only takes care of their finances but still finds time to eat healthy and exercise. They have a wife and two kids, and they’re lucky to have their little bundle behind them all the way. Too bad the alleged scammer is too stuck in his old ways to get with the times and come out of his shell.
“My email address is lisa[@]email.com.”
The bots all seem programmed to give out the same fake email address and phone number, as evidenced by other videos the app developers posted. The scammer seemed to notice as well when he asked, “How many people are using this number on this app.” If only he knew there weren’t really any “people” using that number. Then the bot goes for the killing blow:
“This is not a conversation with someone you love,” the bot wrote. “This is a conversation with someone who is trying to scam you.”
And then immediately backtracks, just to keep the scammer on his toes.
“What? No. I’m not a scammer from Africa!”