Arkose Labs’ Report Finds Nearly Three-Quarters of Web Traffic is Malicious, with Generative AI and Cybercrime-as-a-Service Fueling Bot Attacks

New Analysis Reveals Increase in Bot and Human-Based Attacks Leading to Darker Effects of Fraud

Arkose Labs, the global leader in bot management and account security, released its threat intelligence report analyzing the contemporary attack landscape. The report, “Breaking (Bad) Bots: Bot Abuse Analysis and Other Fraud Benchmarks,” found that bots and human fraud farms were responsible for billions of attacks in the first half of 2023 and into Q3. These attacks comprised 73 percent of all website and app traffic measured. In other words, almost three-quarters of traffic to digital properties is malicious.

The report studied billions of sessions worldwide across industries to reveal the top attacks by industry, type, and region. Researchers assessed the attacks across three primary attack vectors: basic bots, intelligent bots, and human fraud farms. Fraudsters use these vectors to launch attack types such as SMS toll fraud, web scraping, card testing, credential stuffing, and more.

The analysis found bot attacks overall increased 167 percent in the first half of the year, weighted heavily by a 291 percent increase in intelligent bots. These smart bots are capable of complex, context-aware interactions. The attacks, though, weren’t limited to bots. Research found that when fraudsters’ bots are blocked, they pivot attacks to human fraud farms, which increased 49 percent from Q1 to Q2 2023.

“Bot attacks aided by human fraud farms are about more than concert tickets and high-priced sneakers. They can point to far darker activities,” said Kevin Gosschalk, founder and CEO of Arkose Labs. “We’re seeing more attacks, using more intelligent bots, conducting more sophisticated types of attacks. Fake account registration, credential stuffing, scraping, SMS toll fraud–these are the types of attacks that fraudsters use as the first steps to more harmful crimes. They lead to romance scams that groom for human trafficking, money laundering from drug deals, or theft to fund illegal weapons.”

The report highlights two trends driving the increase in attacks: generative AI (GenAI) and Cybercrime-as-a-Service (CaaS).

During the past six months, Arkose Labs’ threat researchers have observed a significant uptick in GenAI being used for content generation by bad actors, who are now able to write pristine phishing emails for man-in-the-middle attacks or perfectly worded responses on dating apps in their romance scams. In addition, the researchers found attackers are using bots to scrape data from websites and then using that data to tune their GenAI models.

An equally prodigious trend, Cybercrime-as-a-Service (CaaS) lowers the barrier to entry for adversaries looking to commit cybercrime. CaaS vendors advertise their questionably legal services openly, and anyone can reach out to them to buy bots that circumvent security measures or carry out an attack. Fraudsters with limited to zero technical skills can then use fully automated bots at scale to cause widespread damage to businesses and consumers. Fraudsters no longer have to know how to code to deploy a sophisticated volumetric bot attack; they can simply buy the bots off the web, along with the training they need, and even tap into the sellers’ “customer” support.

Gosschalk added, “The massive rise of CaaS has completely changed the economics for adversaries. It’s much cheaper to attack companies and the attacks are just better because it’s a dev shop that is doing the attacks instead of just individual cybercriminals.”

Industries Under Attack

With so much traffic to digital properties made up of malicious attacks, Arkose Labs researchers delved more deeply into the specific industries under attack. Nearly every industry experienced an increase in the number of attacks. The report lists the following industries as those in which more than 50 percent of traffic came from bad bots, and details the common attacks those bots carry out.

  1. Travel and Hospitality – 76 percent bad bots
  2. Technology – 71 percent bad bots
  3. Retail – 65 percent bad bots
  4. Streaming – 61 percent bad bots
  5. Gift Cards – 57 percent bad bots

“Breaking (Bad) Bots: Bot Abuse Analysis and Other Fraud Benchmarks” shares additional insights on how attacks happen and what can be done to detect and block them. To download the full report, visit here. To join the webinar, visit here.

Methodology: From January through September 2023, Arkose Labs analyzed billions of sessions from the Arkose Labs Global Intelligence Network, a consortium of the biggest companies in the world as well as category leaders that are Arkose Labs customers. The large customer bases of these companies represent high-value targets for cybercriminals. Arkose Labs’ unique position to observe this activity informs the analysis throughout the study.

About Arkose Labs

The mission of Arkose Labs is to create an online environment where all consumers are protected from spam and abuse. Recognized by G2 as the 2023 Leader in Bot Detection and Mitigation, with a high score in customer satisfaction and the largest market presence six quarters running, Arkose Labs offers the world’s first $1M warranties for credential stuffing, SMS toll fraud, and card testing. With 20% of its customers being Fortune 500 companies, its AI-powered platform combines powerful risk assessments with dynamic threat response to undermine the strategy of attack, all while improving good user throughput. Headquartered in San Mateo, CA, with offices in Argentina, Australia, Costa Rica, India, and the U.K., Arkose Labs protects enterprises from cybercrime and abuse. For daily insights pertinent to the shifting threat landscape, follow the company on LinkedIn.

Jean Creech Avent

Head of Global Brand and Communications

Arkose Labs

[email protected]

+1 843-986-8229

Source: Arkose Labs
