There’s never been a worse time to be on dating apps. Or there’s never been a better time. It all depends on how you feel about AI.
Dating apps are no strangers to artificial intelligence. But given online dating companies' years of experience with AI, it's a welcome surprise that, unlike other social platforms, they don't seem inclined to immediately flood their apps with new ChatGPT-like features.
Rizz uses generative AI to come up with opening lines to use in dating apps.
Screenshot by Katelyn Chedraoui
When ChatGPT exploded in popularity in 2022, Rizz co-founder Roman Khaves saw the tech as a great opportunity to help improve the experience of using the big dating apps and, most importantly, help folks get off the apps and go on a date. One of the biggest features Rizz offers is aid in crafting an opening line, which is a make-or-break moment, Khaves said.
“It’s challenging to start a conversation,” Khaves said in an interview. “It’s also not really something [people] are used to in real life … Rizz makes it easier to have and start conversations.”
People using Rizz definitely run the risk that their match might not appreciate learning they used AI to connect with them. But Khaves pointed out that Rizz’s goal is to help people get past the all-too-common talking phase and avoid ghosting.
“We offer suggestions; we don’t tell you exactly which line to write,” said Khaves. “Everyone is choosing a very different line. So what we do is we help give you the strategy, then we offer different examples.” Rizz also lets you insert keywords, so if you want to appeal to a potential match’s love of sushi and dancing, Rizz can come up with different examples of opening lines based on those interests.
While Rizz’s ultimate goal is to get you off dating apps, not every use of AI in online dating is so benign. A London-based lesbian dating app recently made news for adding facial recognition to its verification process. According to the app’s founder, the technology can identify specific appearance characteristics, serving the app’s stated goal of deliberately excluding trans women from the platform. Like any technology, AI’s potential for benefit or harm depends on how it is used.
New AI tools, old concerns renewed
Where AI goes, privacy concerns follow. For people on dating apps, that’s nothing new. Dating apps encourage you to share as much personal information as possible so the algorithm can find the most compatible matches. But that comes with inherent safety risks, whether it’s being more vulnerable to scammers, app-wide data breaches or having your information shared and used by third parties.
Match Group has a set of guiding principles for generative AI, including a promise to be “transparent about how we use data to improve generative AI outcomes.” As of publication, however, AI isn’t mentioned once in Tinder’s or Hinge’s privacy policies, and neither app has a specific policy laying out how AI is used.
Hinge does have its own AI principles, which make a similar promise to its parent company’s about using AI safely and responsibly. Hinge said in the principles that it does not use generative AI, though it is exploring future integrations, a point confirmed in Match Group’s first-quarter earnings report. For both Tinder and Hinge, the only way to delete all your data is by submitting a request and closing your account. Bumble also doesn’t have an AI policy, and its privacy policy doesn’t mention AI either.
For companies that are beginning to roll out more AI features, the absence of specific AI policies or updated privacy policies is worrisome, especially given how poorly dating apps have protected their users’ personal information in the past.
“A lot of the companies are using that information not just to help you find love, but to help them make money,” said Jen Caltrider, program director of the Privacy Not Included project at Mozilla Foundation.
The project evaluated 25 of the most popular dating apps, and more than half failed to meet minimum security standards. Most dating apps (80%) have policies that allow them to share or sell personal information to advertisers, and Caltrider’s team tagged 88% with a privacy and security warning label. The team’s concern is that generative AI will make these apps even more data-hungry.