
Botnet fraud: tracking your taps to fight ‘zombie armies’


Identity fraud has reached an unprecedented scale, as scammers deploy botnets to multiply bogus applications for credit – forcing cybersecurity firms to study user behavior to try to counter them.

During the COVID pandemic, online shopping and gaming soared as global lockdowns obliged us to stay at home – and so did government spending to support citizens unable to work. But during this period, fraudsters also saw their opportunities to commit crime balloon.

One of the most notorious instances was the Paycheck Protection Program (PPP) abuse in the US, which saw fraudulent “crisis loans” cost the government some $80 billion. On top of that, as much as $400 billion is thought to have been defrauded from other taxpayer funds intended to help people made jobless by the pandemic.

One fraudster even bragged about his exploits in 2020, claiming to have bilked $1 million from the designated funds and used the ill-gotten gains to buy a Lamborghini.

Dubbed by one legal expert as the “biggest fraud in a generation,” this parallel epidemic of scamming also represented a game-changer of huge proportions for the cybersecurity industry.

“Pre-pandemic, most of the fraud that was occurring was individual, what you would call applicant-level fraud, where we were trying to stop one person at a time,” says Jack Alton, CEO of NeuroID, a behavioral analytics provider that specializes in fraud prevention and detection.

“During the pandemic, we saw fraudsters get rewarded through some of the things they did with the PPP loans – literally, there were fraud start-ups,” he adds. “As they got more sophisticated and figured out how to steal money at scale, we saw more fraud rings and bot attacks.”

This upward trend is forcing companies like NeuroID to up their own game, as threat actors marshal the power of botnets – ‘zombie’ armies of hijacked machines – to fill in fraudulent applications for credit, insurance, buy-now-pay-later schemes, and merchant accounts on a massive scale.

A human solution?

Ironic, then, that the very thing that makes bot-driven scams so virulent for credit companies and consumers also constitutes a major weakness – their essential lack of humanity.

Using machine learning to scrutinize applications, NeuroID says it has found a way to distinguish between genuine credit applicants, human fraudsters, and bot scams. By observing behavioral cues and tells, Alton hopes to work around the problem presented by verification data that can no longer be relied upon to weed out the digital con artists.


“These historical data sources that are viewing whether or not you live at a particular address – the fraudsters have all that information now,” he says. “So when you go to check it, all the information is coming out as though you are who you say you are, and I should do business with you – when in fact, that information has been compromised.”

Evidence compiled by other industry firms appears to support this claim. A report from FiVerity estimated that financial companies lost $20 billion to synthetic identity fraud last year, while Javelin Strategy & Research found that in 2020 alone, 49 million people were robbed of a total of $56 billion through similar means.

“As someone is filling out their application, and they're moving through and interacting with their PII [personally identifiable information], what we're looking at is: are they moving at a pace that a human cannot move?” asks Alton. “Are they doing the exact same movements through the application that a human being wouldn't do that a machine that has been scripted can do at scale?”

“It's not just speed: when you look at bots, one of the key indications [is] they don't make mistakes. They move at a pace that is inconsistent with a human, they fill out the form in the exact same way over and over again, and without behavior you just get the final answer, which is going to be correct. All of your historical data sources say that it looks like a good customer, when in fact, it was filled out by a machine.”
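
NeuroID does not publish the internals of its model, but the tells Alton lists – superhuman pace, uniform timing, zero corrections – translate naturally into simple heuristics. The sketch below is purely illustrative: the FieldEvent shape, the 300-millisecond cutoff, and the variance threshold are assumptions made for the sake of example, not NeuroID's actual logic.

// Illustrative bot-vs-human heuristic based on the tells Alton describes.
// All names and thresholds here are invented for illustration.
interface FieldEvent {
  field: string;       // which form field was filled in
  dwellMs: number;     // time spent focused on the field
  corrections: number; // backspaces/edits made while typing
}

function looksLikeBot(events: FieldEvent[]): boolean {
  if (events.length === 0) return false;
  const dwells = events.map(e => e.dwellMs);
  const mean = dwells.reduce((a, b) => a + b, 0) / dwells.length;
  const variance =
    dwells.reduce((a, b) => a + (b - mean) ** 2, 0) / dwells.length;

  // Bots fill every field at a near-constant, superhuman pace and never
  // correct themselves; humans are slower, uneven, and messier.
  const inhumanlyFast = mean < 300;                       // assumed cutoff
  const tooConsistent = Math.sqrt(variance) / mean < 0.1; // uniform timing
  const noMistakes = events.every(e => e.corrections === 0);

  return inhumanlyFast && tooConsistent && noMistakes;
}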

Copy, paste… and con

The same technology can also be used to spot cybercriminals who aren’t using botnets but may be working off a database of harvested credentials to mimic legitimate credit applicants, again on a mass scale.

“If somebody has a bank of stolen identities and they made a lot of mistakes, and their input didn't look as though they were pulling that information from their long-term memory – those are huge blind spots,” says Alton, citing a single fraud ring attack NeuroID says it averted that could have cost a client upwards of $850,000.
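
The long-term memory tell lends itself to an equally simple illustration: per-keystroke timestamps inside a field the applicant should know by heart, such as their own name. A minimal sketch, assuming the timestamps have already been captured and using an invented one-second pause threshold:

// Hypothetical "familiarity" check: someone typing their own name recalls
// it fluently, while a fraudster transcribing from a stolen data dump
// pauses mid-field to consult the source. The 1,000 ms gap is an assumption.
function looksTranscribed(keystrokeTimesMs: number[]): boolean {
  for (let i = 1; i < keystrokeTimesMs.length; i++) {
    if (keystrokeTimesMs[i] - keystrokeTimesMs[i - 1] > 1000) return true;
  }
  return false;
}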

“Typically with a fraud ring there's still fraudsters that are manually trying to move it through, but they're using techniques like a lot of copy and pasting and transcribing information,” explains NeuroID vice president of solutions Peter Andrious during a demonstration of its detection software, a JavaScript snippet that can be inserted into the application interface used by client companies.

Andrious ferrets out manual fraudsters by tracking the amount of time an applicant spends on each field of a credit request form. “They pasted in their email, and they hovered for about 35 seconds before they did anything on the form,” he says of one who turns out to be a cyber-fraudster.

“You can see here they've pasted in their first name, they then flipped off-screen for 16 seconds. This is something we typically see with fraud rings, they've got their spreadsheet or data dump, and they're looking for the next piece of information. They've taken some time to find the relevant piece, and in this instance, they came back in and pasted that. So that on its own just looks really risky.”
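
The signals Andrious walks through boil down to a handful of standard browser events. A hypothetical version of the collection side might look like this – the DOM listeners are real APIs, but the report() helper and its endpoint are invented for illustration:

// Sketch of client-side capture for the two tells in the demo: pasting
// into a field, and time spent off-screen before returning to the form.
let blurredAt: number | null = null;

window.addEventListener("blur", () => {
  blurredAt = Date.now(); // applicant flipped to another window or tab
});

window.addEventListener("focus", () => {
  if (blurredAt !== null) {
    // e.g. the 16-second off-screen gap Andrious flags would surface here
    report({ signal: "off-screen", durationMs: Date.now() - blurredAt });
    blurredAt = null;
  }
});

document.querySelectorAll<HTMLInputElement>("form input").forEach(input => {
  input.addEventListener("paste", () => {
    report({ signal: "paste", field: input.name });
  });
});

function report(payload: Record<string, unknown>): void {
  // Stand-in for whatever a vendor SDK actually sends upstream.
  navigator.sendBeacon("/behavior-events", JSON.stringify(payload));
}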


Andrious says around three-quarters of applicants initially flagged as potentially risky turn out to be fraudsters, either working manually or using a botnet to fill in bogus applications for credit.

“We look at [a client company’s applicant] population and start off with the top 30% we treat as genuine, the bottom 1% we treat as bad, and everything else we put in the middle,” he says. “And then we'll adjust those thresholds based on feedback through that customer's own dataset.”
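
Those starting thresholds amount to a straightforward percentile split. A rough sketch, assuming each session already carries a risk score where higher means more human-like (the real scoring model is proprietary):

// Tier sessions the way Andrious describes: top 30% genuine, bottom 1%
// bad, everything else held for review. Shares are later tuned per client.
type Tier = "genuine" | "review" | "bad";

function tierSessions(
  scores: number[],
  genuineShare = 0.3,
  badShare = 0.01
): Tier[] {
  if (scores.length === 0) return [];
  const sorted = [...scores].sort((a, b) => b - a); // descending
  const at = (frac: number) =>
    sorted[Math.min(sorted.length - 1, Math.floor(sorted.length * frac))];
  const genuineCutoff = at(genuineShare);
  const badCutoff = at(1 - badShare);

  return scores.map(s =>
    s >= genuineCutoff ? "genuine" : s <= badCutoff ? "bad" : "review"
  );
}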

“We have a separate signal that focuses specifically on bot behavior,” he adds. “We're looking at interactions driven by code. So what we would typically see is a bot would focus on all the fields that are available really quickly. The speed that it's being completed, the consistency, the speed of transition compared to normal behavior.”

Collateral damage

Alton hopes that behavioral monitoring technology will serve both consumers and corporations by shielding genuine credit applicants from the crossfire of the escalating cyberwar between businesses and fraudsters.

“When we talk about these fraud rules getting tightened more and more, to protect from these attacks, flip the coin and say: what is the overall size of us impacting good customers [through] false declines?” he says. “We're subjecting good customers to friction, or they're bailing out of a process because it became too onerous. That is now being measured by some firms as 70 times larger than the global fraud and risk problems.”

Alton and Andrious hope that machine learning can address the blind spots caused by the mass migration of business transactions to the online domain and redirect the heat away from legitimate consumers toward the cybercriminals impersonating them.

“If you think back to when we did business in person, we used to pick up on all of these cues,” says Alton. “Is this person looking away, are they scratching out their name, are they having trouble answering questions that are about them that they should know?”

“Typically what we see is, just by introducing any friction for fraud ring users or bad actors, they'll self-select out,” adds Andrious.

Cat and mouse

All well and good, but how will honest consumers feel about their every move – what Alton and Andrious refer to as the “taps, types and swipes” – being monitored remotely when they apply for credit?

“We don't rely on any third-party or historical data, we're not looking at geolocation or IP address, for example,” says Alton. “What we're really doing is reconnecting the human to the business. Right now, there's a great deal of frustration and a lot of fraud that's slipping through. That ‘are you who you say you are’ question is what's driving a lot of the unnecessary friction and the misidentification of high-risk and low-risk.”

It all sounds like a digital game of cat-and-mouse being played at the highest level – and for the highest stakes. Does Alton see this as the future of the industry, with anti-fraud specialists having to come up with ever more ingenious solutions to combat increasingly devious cybercriminals?

He concedes that this is the likely outlook but adds: “Fortunately, one thing that's very cool about behavior is it's very difficult to fake, for me to input your PII as though I'm you. That behavior has no borders – we've seen that work across the world in every different company. That information can't be compromised, so this is a major step forward for being able to complete that picture that we've never had, which is: who is on the other side of the screen?”

