A world in which evildoers employ robots to do their evil bidding seems like science fiction.
Yet when it comes to modern-day fraud, the scams have gone increasingly digital, relying on automation to reach the masses in the hope that, by casting a wide net, a few unsuspecting fish will get caught up in it, costing them hundreds to thousands of dollars.
About 1,100 Manitobans and Manitoba businesses lost a reported $4.3 million to various schemes in 2020, Canadian Anti-Fraud Centre figures show — the highest figure since 2017.
And, increasingly, fraudsters are using technology to their advantage, and our own love of tech against us, by leveraging the information we plug into social media and other corners of the internet to better design their scams.
“We’re seeing more targeted approaches that are becoming much more sophisticated,” says Jason Storsley, vice-president of fraud management at RBC.
Fraudsters’ pitches have become more refined and include everything from romance scams — perpetrated via online dating sites — to automated phone calls purporting to be from the Canada Revenue Agency to text messages and highly convincing, fake emails from your bank aiming to ‘phish’ (a.k.a. steal) your user name and password.
But the crux of these pitches often remains the same — to appeal to your emotions, and circumvent your reason, so you react quickly instead of thinking twice about what you’re being asked to do.
If there’s one message Jeff Thomson, senior RCMP intelligence analyst with the Canadian Anti-Fraud Centre, wants the public to understand this March — which is Fraud Prevention Month — it’s to slow down and think.
Be it fake medical mask scams during the pandemic or emergency email schemes — whereby a fraudster poses as an acquaintance in a pinch needing money — “they’re designed to create an emotional reaction,” he says.
“They prey on desperation: for example, you’ve just lost your job; the mortgage payment is coming up, and you just signed up for the CERB (Canada Emergency Response Benefit), and then you get that text saying, to get the payment, click on the link.”
Of course, that’s not how people were actually alerted about a CERB payment, yet many people last year received this phishing scam via text or email, he notes. While only a few people were actually swindled into giving out critical information — i.e. their online banking password — fraudsters are continually honing scams, making it harder to detect what’s authentic and what’s fake.
“We are now spending far more time online because of the pandemic,” Storsley says. “We’ve been buying a lot more online, so we are seeing the savvy cyber-criminals… creating phishing attacks that are pretty sophisticated to exploit all this activity.”
These days many scams use technology like chat-bots — providing automated responses when you text or email back. The technology remains fairly crude, only slightly enhancing criminals’ ability to spam millions while harvesting only a few victims.
But businesses are already seeing more complex attacks, known as ‘spear-phishing’.
“This can involve where fraudsters have compromised a number of social media accounts and started scraping these accounts to identify contacts, and then tailoring messages in a more directed fashion,” Thomson says.
Because spear-phishing requires more research and work on the part of cyber-criminals, businesses are often the targets, as they have more money to lose. But as artificial intelligence and related technologies become more widespread, spear-phishing perpetrated by bad bots (really just automated, but responsive, computerized processes) instead of the criminals themselves could become the norm, says computer science professor Florian Kerschbaum, executive director of the Waterloo Cybersecurity and Privacy Institute.
“We definitely are seeing increased use of artificial intelligence and machine learning techniques to run automated attacks,” he says.
“But the current capabilities are not yet at a stage where this personally frightens me.”
The dystopian future that does give him the willies, however, is on the horizon. Within the next few decades, or possibly sooner, fraudsters will be able to access and analyze more data from the digital universe to create targeted attacks on a mass scale using these technologies.
For the time being, however, automated scams still involve “mass mailings targeted toward computer-illiterate people,” he adds.
More sophisticated “spear-phishing attacks are still done by humans.”
But businesses struggle to defend against them all the same — and the average consumer even more so, Kerschbaum says.
“If someone with a lot of resources targets you for whatever reason, it’s extremely hard to defend against.”
Fortunately, fraudsters aren’t yet at the point of using machine learning or similar technologies to run widespread scams that can tailor their message down to the individual. “With a little bit of effort I could probably build something in the lab with my grad students,” Kerschbaum adds.
Of course, fraudsters don’t generally have the skills of computer scientists. But as AI and machine learning become ubiquitous, a future where it becomes difficult — if not impossible — to know who you’re talking to on the other end of an email, a social media exchange or a phone call seems much more likely.
“It might take a decade or so,” Kerschbaum says.
“Perhaps it’s 40 years away — I don’t want to sound too pessimistic.”