
Online dating company Match Group is seeking to solve an age-old problem with romance: how to get men to behave better.

The group, which owns matchmaking platforms Tinder and Hinge, is using artificial intelligence to detect signals that somebody might be sending a message that is abusive or overly sexual, as part of its push to coach users into more chivalrous conduct online. 

For “men especially”, a “big part of our safety approach is focused on driving behavioural change so that we can make dating experiences safer and more respectful”, said Yoel Roth, head of trust and safety at Match. 

When a user types an “off-colour” message, Match’s apps will generate an automated prompt asking them if they are sure they want to send it. “We think of it internally as ‘too much, too soon’,” Roth said. A fifth of people who receive these prompts reconsider their messages, according to Match.

The efforts to enlist AI to help improve dating behaviour come as the three largest online matchmaking brands globally — Match’s Tinder and rivals Badoo and Bumble — are all shedding users as a result of so-called dating app fatigue among Generation Z users. 

This has seen online dating groups launch an array of new features, including friend-finding and community-building products, in an attempt to help reverse a post-pandemic slowdown in users.

[Image] Match owns leading dating apps Hinge and Tinder © Nikos Pekiaridis/NurPhoto via Getty Images

Surveys suggest that “burnout” on matchmaking platforms is particularly prevalent among young women, a group that Match chief executive Bernard Kim last year described as “literally the most critical demographic for all dating apps”.

“In the context of online dating, where young people grow up and enter the dating marketplace [...] there’s a real need and opportunity to help people understand the norms and behaviours that go along with respectful and consensual dating,” said Roth.

Roth joined Match in March last year, 16 months after suffering a very public break-up from his previous company Twitter, now known as X, where he worked for more than seven years heading the team that banned US President Donald Trump’s account in January 2021 following the attack on the Capitol. 

He resigned just two weeks after Elon Musk’s takeover of Twitter in October 2022, writing in the New York Times that he could not remain at a company where policies were “defined by edict”. Soon after, Roth became the target of a flood of harassment, which followed criticism from Musk himself. 

As Match’s safety chief, Roth will once again have to contend with Trump and his close confidant Musk. 

[Image] Surveys suggest that ‘burnout’ on matchmaking platforms is particularly prevalent among young women © Mustafa Hatipoglu/Anadolu via Getty Images

The impact of the new US administration has already begun to affect online safety policies at major social networks. Meta, which owns Facebook and Instagram, last month moved to end its fact-checking programme and weaken hate speech policies as part of a “free speech” overhaul. 

Roth insisted that Trump’s election would not “meaningfully” change Match’s approach to trust and safety, in part because of the company’s distinct role in forming one-on-one connections.

“We are not planning any changes to our policies or our products and, ultimately, we’re doubling down on safety,” said Roth. “We’re not just doing it because we think it’s the right thing to do morally. We’re doing it because we know it’s the right thing from a business perspective.” 

Beyond shaping daters’ conduct, Roth said his job was also increasingly focused on combating sophisticated organised scams.

The US Federal Trade Commission estimates that consumers last year lost $823mn to romance scams, and warns that these schemes — while less prevalent than other types of imposter scams — are often particularly costly for individuals.

While rapid advances in AI have raised alarm bells about the potential for deepfakes and automated bots to accelerate online fraud, Roth said the biggest threats facing dating app users still come from humans.

Spam activity from bots was still relatively easy to block, he said, but the organised crime rings behind “massively profitable” scams had developed increasingly sophisticated techniques to evade detection. 

These crimes generally involve “call centres full of people” — often people “who have been kidnapped or trafficked” — in countries such as Myanmar, Laos and Cambodia.

“We’re not talking about bots and fake accounts but actually real people, using real phones, engaging in real manually typed conversations,” said Roth.
