Tinder bots: Scammers target users of online dating apps using malicious AI ‘bots’ that aim to scam lonely hearts out of their cash

  • A chatbot is a script that simulates a real conversation with a user
  • Scammers are using them on dating apps to convince people to visit websites
  • Some companies use ‘bots’ to provide customer services and engagement  

Have you ever matched with someone on a dating app who seems ‘too good to be true’? They may well be, according to an online dating consultant.

Scammers have been letting ‘malicious bots’ loose on dating apps in an attempt to convince people to part with their money, says online dating expert Steve Dean.

He warned against ‘people’ who open chats with ‘here’s my phone number, you can call me here’ followed by a link as soon as you match with them on Tinder.

Often these links take you to a scamming or live webcam site.


 ‘Malicious bots’ are being used alongside fake profiles on dating sites. They appear to be human when messaging users and attempt to convince them to follow a link that often points to a dangerous website

Malicious bots are usually created by third-party companies, and dating apps actively attempt to weed them out.

‘Unfortunately some companies aren’t always honest about their use of bots’, Mr Dean told CBS (via Wink News).


Romance scams, where criminals create phony profiles to trick lovelorn victims into sending them money, are on the rise. 

To avoid falling prey, here’s what you can do: 

  • Slow down and talk to someone you trust. Don’t let a scammer rush you.
  • Never wire money, put money on a gift or cash reload card, or send cash to an online love interest. You won’t get it back.
  • Contact your bank right away if you think you’ve sent money to a scammer.
  • Report your experience to the online dating site, the FTC, or the Federal Bureau of Investigation. 

Source: FTC 

‘They’re manipulated into buying a paid membership just to send a message to someone who was never real in the first place,’ he added.

A high-profile example involves one of the most-used dating platforms, which is currently being investigated by the US Federal Trade Commission for unfairly exposing consumers to the risk of fraud.

The commission claims the platform took advantage of fraudulent accounts to trick non-paying users into purchasing a subscription through email notifications. 

The company denies this, calling the accusations ‘completely meritless’. 

There are also ‘good bots’ on such platforms, said Lauren Kunze, CEO of Pandorabots, a company that produces chatbots for dating firms and others. 

She says that dating companies use their services to create bots that engage users when there aren’t any matches or to provide customer support.  

The problem with the use of bots — whether ‘malicious’ or ‘good’ — is that it is ‘becoming increasingly difficult for the average consumer to identify whether or not something is real’, says Ms Kunze. 

‘I think we need to see an increasing amount of regulation, especially on dating platforms.’ 

Chatbots are also used in a ‘good way’ by dating apps and other companies, often to help people with customer-service queries and, in some cases, to engage users when there are no matches available for them 

The problem is very difficult to regulate or control at the moment, Ms Kunze said, adding that the best solution is to promote a best practice in which ‘bots should disclose that they are bots’. 

Technology may also be able to help solve the problem of fake profiles and chatbot scammers. 

Artificial intelligence software can be trained to ‘think like humans’ when looking for fake dating profiles, according to a study by researchers at the University of Warwick published in early 2019.

After reviewing the fake profiles it was given, the algorithm applied its knowledge to profiles submitted to online dating services and estimated the probability that each profile was real.  

In total, only one percent of the profiles it flagged as fake were genuine, reported the University of Warwick. 

Online dating is a big business. Tinder alone operates in 190 countries and has made more than 30 billion matches since it launched in 2012.  

The problem with bots is even more common on social media platforms such as Twitter and Facebook.

A study by Oxford University in 2018 found that fake accounts were being run by governments and political parties in 48 different countries.

A different study, also from 2018, found that more than half of 4,500 Americans surveyed by the Pew Research Center in Washington, DC, said that they could not tell the difference between social media posts from a bot and a human. 

The younger generation seemed to be more aware when it came to bots. However, since the survey results were self-reported, there’s a chance that people are overstating or understating their knowledge of bots.

Despite this, most people regarded these bots as bad. It was clear that the more knowledgeable a person became about social media bots, the less supportive they became of their use.


Alan Turing is seen as one of the fathers of computing. He proposed the Turing Test in 1950

The Turing Test was introduced by Second World War codebreaker Alan Turing in 1950.

He wrote the test as part of his paper Computing Machinery and Intelligence.

In the paper he predicted that computers would one day be programmed to acquire abilities rivalling human intelligence.

He proposed a test called The Imitation Game, which would identify whether a computer is capable of thought.

A person, called the interrogator, engages in a text-based conversation with another person and a computer – and must determine which is which.

If they are unable to do so, the computer is deemed to have passed.
