PaymentsJournal

AI Has Become an Integral Part of Fraud Prevention—and Fraud Attacks

March 13, 2025

Just as organizations are implementing artificial intelligence and machine learning in novel ways, cybercriminals are continually looking to incorporate AI into their attacks. The disruptive technology allows criminals to find targets more effectively, scale their efforts, and forge better attacks that are increasingly harder to detect.


In a PaymentsJournal podcast, Alex Cox, Director of Threat Intelligence, Mitigation, and Escalation at LastPass, and Jennifer Pitt, Senior Fraud and Security Analyst at Javelin Strategy & Research, discussed the AI-powered methods cybercriminals use, the impacts of AI-related fraud, and the ways that organizations can protect their customers and themselves.



A Big Data Problem

One of the areas where AI excels is in sifting through massive datasets to pinpoint anomalies. Many organizations use that capability to identify fraudulent activity. On the other hand, criminals use the same functionality to find their next target.


“Bad guys have a big data problem that AI is helping them address,” Cox said. “For example, there was the MOAB list that came out recently, which is the Mother of All Breaches, and it had billions of username/password pairs. If you think about the magnitude of credentials that are available publicly, the sheer amount of data makes it difficult to work with. The bad guys figured out that if they put these things into large language models and use AI to help them manage that data, they’re able to pull things out more efficiently and summarize it.”


Once criminals have parsed large data sets to find their target, AI can also be implemented to make fraud attacks more effective. In the past, phishing attacks were much easier to spot. There may have been incorrect grammar in the email, a logo that wasn’t quite right, or other cues that the communication was fraudulent.


“Enter AI and LLMs, and criminals can go to this LLM and say, ‘Help me craft this phishing email based on this lure,’” Cox said. “It will write it for you in very convincing English that appears to come from a native speaker. Once you get past all the technical controls, the final barrier is the person. If that person looks at an email, decides it sounds like a real person and not a phishing attempt, and responds to it, it has made the bad guys that much better.”


A Blended Threat

Another way that cybercriminals are employing AI is to create deepfakes, with the objective of either creating a convincing persona or assuming an existing identity. This ability is just one aspect of the growing AI arsenal available to criminals.


“The combination of these capabilities is significant,” Cox said. “Microsoft has analyzed how some of the bad guys use ChatGPT, and you see them using it the same way that the traditional good guys are using it. They’re summarizing, they’re getting help with coding, and they’re getting ideas on how to improve their attacks. With this blended threat, they are able to use AI to pull information on a target, based on their internet presence, and craft an attack that is potentially able to compromise the target’s machines.”


The powerful technology has lowered the technical sophistication required to carry out damaging cybercrimes. There has even been a shift toward AI agents, which are fully autonomous fraud engines. Criminals can now lean on artificial intelligence to do much of the heavy technical lifting.


“AI is allowing these bad guys to do this en masse,” Pitt said. “We used to see phishing emails where you’d have one single attacker that would have scripts and send out a few phishing emails or a few social engineering attacks. Now it’s all being automated with AI, so it’s thousands of emails, thousands of social engineering attacks, thousands of malware attacks all at once. It’s just easier for them to get that information out there.”


People, Process, Technology

Just as criminals find new ways to implement AI, many financial institutions are searching for ways to combat these attacks. To do so, a three-pronged approach that considers people, process, and technology is required.


On the people side, it means education. Organizations should ensure that their employee base, and potentially their customer base, understands that fraud attacks are now more sophisticated. End users should understand that they can never fully trust the communications they receive, and they should question unusual requests.


From a process standpoint, organizations should take a zero-trust approach that includes continuous authentication.


“We need to look at what we call perpetual KYC,” Pitt said. “In banking, traditional Know Your Customer processes often occur once, typically during onboarding, or on a cyclical basis. We look at the sanctions list, the person’s income, perform their identity verification, and then it’s set aside. Perpetual KYC uses AI to do continuous authentication in the background automatically in real time.”
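The perpetual KYC idea can be sketched as an event-driven risk score that runs in the background and triggers a fresh identity check only when a session starts to look unusual. The event names, weights, and threshold below are hypothetical, purely illustrative values, not a description of any real bank's system.

```python
# A minimal sketch of background continuous authentication: each
# session event adjusts a running risk score, and crossing a
# threshold triggers step-up re-authentication. All event names,
# weights, and the threshold are hypothetical illustrative values.
RISK_WEIGHTS = {
    "new_device": 40,      # login from a device never seen before
    "new_location": 30,    # geolocation far from the usual pattern
    "unusual_hour": 15,    # activity outside normal hours
    "routine_login": 0,    # matches the established profile
}

def needs_step_up(events, threshold=50):
    """Return True when accumulated session risk warrants
    re-authenticating the user."""
    score = sum(RISK_WEIGHTS.get(event, 0) for event in events)
    return score >= threshold
```

In this sketch, a routine login accumulates no risk and passes silently, while a new device combined with a new location pushes the score past the threshold and prompts re-authentication, which is the "continuous, in the background, in real time" behavior Pitt describes.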


Integrating AI to combat AI-driven fraud is one of the most powerful technology approaches available to organizations. Fraud and security teams can use artificial intelligence for anomaly detection across large datasets, and they can use it to distill large collections of documents. Organizations can also use AI to make their fraud prevention efforts more effective at greater scale.
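As a toy illustration of anomaly detection over transaction data (not a production fraud model), a simple z-score filter can flag amounts that sit far outside a customer's normal spending pattern; the three-standard-deviation threshold is an arbitrary choice for the example.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of transactions whose amount deviates more than
    `threshold` standard deviations from the mean of the series."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:  # all amounts identical: nothing stands out
        return []
    return [i for i, amount in enumerate(amounts)
            if abs(amount - mu) / sigma > threshold]

# Fifty routine $40 purchases followed by one $9,000 outlier:
# only the final transaction is flagged for review.
history = [40.0] * 50 + [9000.0]
print(flag_anomalies(history))  # [50]
```

Real fraud systems replace this single-feature statistic with machine-learned models over many signals, but the principle is the same: establish a baseline of normal behavior, then surface the transactions that deviate from it.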


Tracking the Threat Environment

Though there are powerful benefits to adopting the disruptive technology, AI has many well-documented flaws. For instance, the technology is only as good as its data set, and it has been known to produce false or misleading information. These issues have caused some misgivings about AI adoption among many professionals.


“It’s important to use these tools as fraud professionals,” Pitt said. “We may be hesitant to use tools that we think are going to be used by the bad guys. Start using the tools and get familiar with them, if you’re not already. Tell your organization how AI can be beneficial. Yes, AI is absolutely used by the fraudsters, but if we don’t learn how to use it for good, we will never, ever beat them.”


For many institutions, another barrier to AI adoption is the organization’s resistance to change.


“I spent about half of my career working for big banks,” Cox said. “Typically, when a new technology comes out, they will ban it and then bring it on board over time in a way that makes sense. I think that AI is moving so fast that that approach is not going to work anymore, because you’re going to be at a disadvantage.”


One benefit for financial institutions is the sheer amount of education that’s available to them about artificial intelligence. AI has dominated the attention of the tech world for over a year, and the disruptive technology has been heavily scrutinized from every angle.


The amount of information available means security and financial professionals have a multitude of training opportunities they can use to educate themselves and their organizations. There is also constant news about the emerging capabilities of AI, and the techniques that cybercriminals use.


“Think about what you do day-to-day,” Cox said. “Think about the work that you have to do at your job and then start thinking: how can AI help me here? It should be clear very quickly that it will be valuable for a lot of different things. Just keep track of the threat environment, understand what’s going on, and that will help you make the right decisions to protect your firm.”