Stopping scam payments: Machine learning for fraud detection


After years of trying to become more financially savvy, Claire joined an investors’ group on a social media site.

Before long, she met a fellow member named Sebastian, who had a good reputation in the community as a professional investment manager. Sebastian took Claire (not her real name) under his wing, giving her advice and helping her install various financial applications on her computer. She was seeing great returns, so he encouraged her to keep investing more. She even emptied her savings and took out multiple loans to invest in new funds.

Then she got a message from her bank, saying they had blocked one of her payments. When she called to inquire, an agent told her that she was most likely being scammed — an accusation that Claire vehemently denied. But as the agent’s words slowly began to penetrate, her stomach dropped. She fell silent. “What am I going to tell my husband?” she finally asked.

That’s just one of hundreds of similar stories of complex online scams that Ian Selley, the manager of fraud analytics and data science at the U.K. bank TSB, has heard in recent years. As security for banking and payments has tightened, fraudsters have shifted their focus to impersonation tactics. Masquerading as family members, debt collectors or even romantic partners, they convince victims to send money. Scammers weaponize people’s best impulses — to give, to help, to love — in well-practiced acts of social engineering. 

These types of scams are known as authorized push payment, or APP, scams. Last year, 207,372 APP scams were reported in the U.K., with gross losses of £485 million (about $618 million), accounting for 40% of the country's bank fraud losses. Experts predict these crimes could cost $4.6 billion in the U.S. and the U.K. alone by 2026.

Stopping these scams is enormously difficult.

For banks, spotting fraudulent payments among millions of transactions is like finding a needle in a haystack. Victims send the money themselves, so scammers don't need to break any security measures. And victims usually don't realize they've been ripped off until after the payment goes through, when their money has already disappeared into a laundering network of tens or even hundreds of criminal-controlled accounts.

But a new Mastercard tool powered by artificial intelligence is stopping these scammers by flagging risky payments in real time and preventing the money from ever leaving victims' bank accounts.

Focusing on the fraudsters first

Mastercard has been using AI to prevent fraud and detect threats and vulnerabilities for years, but now it's applying the technology to its network-level view of payments in the U.K. Existing fraud detection systems typically look at the sender, flagging, say, an elderly customer sending a large sum of money to a bank they've never transacted with before. But Consumer Fraud Risk also examines the recipient account's activity, weighing factors like account names, payment values and its relationships with other accounts, so it can spot risky payments in real time and alert the bank to stop the transfer.
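Mastercard hasn't published CFR's internals, but the idea of recipient-side scoring can be sketched in a few lines of Python. In this toy example, every feature, weight and threshold is invented for illustration; a production system would learn such signals from network-scale payment data rather than hard-code them.

```python
# Toy illustration of recipient-side risk scoring. All feature names,
# weights and thresholds here are hypothetical; the actual CFR model
# is proprietary and far more sophisticated.
from dataclasses import dataclass

@dataclass
class RecipientProfile:
    account_age_days: int         # how long the beneficiary account has existed
    inbound_payments_24h: int     # payments received in the last 24 hours
    distinct_senders_24h: int     # distinct senders in the last 24 hours
    name_matches_reference: bool  # does the account name match the expected payee?

def risk_score(p: RecipientProfile) -> float:
    """Combine recipient-side signals into a 0-1 risk score (illustrative weights)."""
    score = 0.0
    if p.account_age_days < 30:       # newly opened accounts are riskier
        score += 0.3
    if p.inbound_payments_24h > 10:   # sudden inbound velocity suggests a mule account
        score += 0.3
    if p.distinct_senders_24h > 5:    # many unrelated senders converging on one account
        score += 0.2
    if not p.name_matches_reference:  # payee name doesn't match what the sender expects
        score += 0.2
    return min(score, 1.0)

# A payment that looks ordinary from the sender's side can still score
# as high-risk once the beneficiary account is examined.
suspect = RecipientProfile(account_age_days=12, inbound_payments_24h=18,
                           distinct_senders_24h=9, name_matches_reference=False)
if risk_score(suspect) >= 0.5:        # hypothetical alert threshold
    print("Alert sending bank: hold payment for review")
```

The point of the sketch is the shift in vantage: none of these signals is visible to a system that only profiles the sender's behavior, which is why a network-level view of the beneficiary adds a new data point.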

“It’s a completely new data point,” says Selley, whose bank was one of the first to adopt Consumer Fraud Risk. “A payment may look perfectly legitimate from the sender’s side until CFR reveals the beneficiary to be risky.”



TSB, one of nine U.K. banks using CFR, says the tool has dramatically improved its fraud detection. If its performance were mirrored by every bank in the U.K., that would equate to nearly £100 million in fraud losses prevented.

“These types of scams shake the confidence of consumers and can erode trust in the digital economy,” says Ajay Bhalla, president of Mastercard’s Cyber & Intelligence business. “By sharing fraud data and applying powerful tools like Consumer Fraud Risk, we are scaling detection and advancing protection.”

Unmasking master manipulators

But that’s just half the battle. Once CFR flags a payment, the bank can either reject it outright or contact the sender to share its concerns. Yet victims aren’t always ready to accept that they’ve been fooled.

That’s because fraudsters are masters of psychological manipulation. For example, they often create a false sense of urgency, flustering victims so they don’t pause to assess the situation. Posing as an agent from your bank, a fraudster might warn, “Criminals have taken over your account. You need to act fast to protect your savings!” You follow their instructions to transfer your balance, and it all ends up in their pocket.

Victims of longer cons, such as investment and romance scams, are often the slowest to recognize that the person on the other end isn’t who they think. Romance scammers in particular spend months fabricating an online relationship with victims. They often encourage victims to disengage from family and friends who raise doubts about the relationship; when confronted with those suspicions, many victims are reluctant to question what feels like the happiest part of their life.

Because CFR exposes what the victim may struggle to acknowledge — that the recipient is a scammer — it provides valuable evidence in the bank’s discussions with customers. TSB relies on specialized teams to, as Selley puts it, “break the spell” cast by the fraudster. Often aided by local police, the bank’s agents ask about lesser-known details of the customer’s history to shine a light on red flags.

“CFR gives us much stronger corroboration to bring to these conversations,” Selley says. “We can say, ‘The beneficiary’s marked as suspicious. Are you sure this person is who they say they are?’”

Criminal tactics are always evolving, so thwarting scams is a never-ending arms race. “As we live more of our lives online, fraudsters are always finding new opportunities,” Selley says. “It’s a constant battle, but we do whatever we can to stay alert and keep our customers safe.”


