Dating Apps Crack Down on Romance Scammers
Michael Steinbach, the head of global fraud detection at Citi and the former executive assistant director of the FBI’s National Security Branch, says that, broadly speaking, fraud has transitioned from “high-volume card thefts or just getting as much information very quickly, to more sophisticated social engineering, where fraudsters spend more time conducting surveillance.” Dating apps are just one part of global fraud, he adds, and high-volume fraud still occurs. But for scammers, he says, “the rewards are much greater if you can spend time obtaining the trust and confidence of your victim.”
Steinbach says he advises consumers, whether on a banking app or a dating app, to approach certain interactions with a healthy amount of skepticism. “We have a catchphrase here: Don’t take the call, make the call,” Steinbach says. “Most fraudsters, no matter how they’re putting it together, are reaching out to you in an unsolicited way.” Be honest with yourself; if someone seems too good to be true, they probably are. And keep conversations on-platform—in this case, on the dating app—until real trust has been established. According to the FTC, about 40 percent of romance scam loss reports with “detailed narratives” (at least 2,000 characters in length) mention moving the conversation to WhatsApp, Google Chat, or Telegram.
Dating app companies have responded to the uptick in scams by rolling out both manual tools and AI-powered ones engineered to spot potential problems. Several of Match Group’s apps now use photo or video verification features that encourage users to capture images of themselves directly within the app; those images are then run through machine learning tools to try to determine the validity of the account, as opposed to someone uploading a previously captured photo that might be stripped of its telling metadata. (A WIRED report on dating app scams from October 2022 pointed out that at the time, Hinge did not have this verification feature, though Tinder did.)
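Match Group has not disclosed how those verification models actually work, but the metadata point is easy to illustrate. The sketch below is a hypothetical example, not any app's real pipeline: it uses Python's Pillow library to read a photo's EXIF fields, on the assumption that an image scraped from the web and re-uploaded often arrives with those fields stripped.

```python
# Hypothetical illustration only; not any dating app's actual pipeline.
# Reads EXIF metadata with the Pillow library (pip install pillow). A photo
# scraped and re-uploaded often arrives with these fields stripped.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_signals(path: str) -> dict:
    """Collect a few EXIF fields that suggest a photo was freshly
    captured by a device rather than copied from elsewhere."""
    exif = Image.open(path).getexif()
    fields = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    return {
        "has_exif": bool(fields),               # fully stripped metadata is one mild red flag
        "camera_model": fields.get("Model"),    # device that took the picture
        "captured_at": fields.get("DateTime"),  # when it was taken
        "software": fields.get("Software"),     # editing software can hint at manipulation
    }

print(exif_signals("profile_photo.jpg"))
```

On its own, missing metadata proves nothing (many platforms strip it for privacy), which is presumably why the apps treat it as one signal among many rather than a verdict.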
For an app like Grindr, which serves predominantly men in the LGBTQ community, the tension between privacy and safety is greater than it might be on other apps, says Alice Hunsberger, vice president of customer experience at Grindr, whose role includes overseeing trust and safety. “We don’t require a face photo of every person on their public profile, because a lot of people don’t feel comfortable having a photo of themselves publicly on the internet associated with an LGBTQ app,” Hunsberger says. “This is especially important for people in countries that aren’t always as accepting of LGBTQ people or where it’s even illegal to be a part of the community.”
Hunsberger says that for large-scale bot scams, the app uses machine learning to process metadata at sign-up, relies on SMS phone verification, and then looks for telltale patterns, such as accounts sending messages more quickly than a real human could. When users do upload photos, Grindr can spot when the same photo is being used over and over again across different accounts. And it encourages people to use video chat within the app itself to try to avoid catfishing or pig-butchering scams.
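Grindr has not said how its duplicate-photo detection works, but perceptual hashing is a common technique for exactly this problem, and a minimal sketch shows the idea. Everything below, including the imagehash library and the in-memory seen_hashes store, is an illustrative assumption rather than Grindr's implementation.

```python
# A minimal sketch of reused-photo detection via perceptual hashing, a common
# technique for this problem; it is NOT Grindr's disclosed implementation.
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash

# Hypothetical in-memory store mapping a perceptual hash to accounts that used it.
seen_hashes: dict[imagehash.ImageHash, list[str]] = {}

def check_photo(account_id: str, photo_path: str, max_distance: int = 5) -> list[str]:
    """Return other accounts whose photos look nearly identical.
    Perceptual hashes change little under resizing or re-compression,
    so a small Hamming distance catches the same image reused across accounts."""
    new_hash = imagehash.phash(Image.open(photo_path))
    matches = [
        account
        for known_hash, accounts in seen_hashes.items()
        if known_hash - new_hash <= max_distance  # ImageHash's '-' is Hamming distance
        for account in accounts
    ]
    seen_hashes.setdefault(new_hash, []).append(account_id)
    return matches

# Usage: flag the account if its photo already appears on other profiles.
duplicates = check_photo("user_123", "upload.jpg")
if duplicates:
    print("Photo also used by:", duplicates)
```

At real scale a service would replace the linear scan with a nearest-neighbor index, but the matching logic stays the same: near-identical hashes across unrelated accounts are a strong bot signal.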
Kozoll, from Tinder, says that some of the company’s “most sophisticated work” is in machine learning, though he declined to share details on how those tools work since bad actors could use the information to skirt the systems. “As soon as someone registers we’re trying to understand, Is this a real person? And are they a person with good intentions?”
Ultimately, though, AI can only do so much. Humans are both the scammers and the weak link on the other side of the scam, Steinbach says. “In my mind it boils down to one message: You have to be situationally aware. I don’t care what app it is, you can’t rely on only the tool itself.”