New Money Review podcast

Unseen Money 14—the AI malware threat

November 13, 2025

Last week, Google’s threat intelligence group warned that artificial intelligence (AI) is making malware attacks more dangerous.


[Malware is malicious software—programs designed to disrupt, damage or gain unauthorised access to computer systems—usually delivered via phishing emails, compromised websites or infected downloads]


“Adversaries are no longer leveraging artificial intelligence (AI) just for productivity gains, they are deploying novel AI-enabled malware in active operations,” Google said in a 5,000-word blog post.


Are malware programs really using large language models (LLMs) to dynamically generate malicious scripts, obfuscate their own code to evade detection and create malicious functions on demand, as Google warns?


Or is this yet another case of tech firms selling solutions to a problem they have created themselves?


Listen to the latest episode of Unseen Money from New Money Review, featuring co-hosts Timur Yunusov and Paul Amery, to hear more about the impact of AI malware.


In the podcast, we cover:


  • Google’s warning about the rise of AI malware – reality or hype? (2’ 35”)
  • Why LLMs were originally protected from harmful behaviour (4’ 10”)
  • How criminals learned to develop LLMs without guardrails (4’ 55”)
  • Model context protocols (MCPs) and AI agents as offensive tools (5’ 30”)
  • Malicious payloads and web application firewalls (7’ 35”)
  • Tricking LLMs by exploiting the wide range of input variables (8’ 30”)
  • The state of the art for fraudsters when using LLMs (10’ 10”)
  • Timur used AI to learn how to drain funds from a stolen phone (11’ 05”)
  • How worried is Timur about the rise of AI malware? (14’ 20”)
  • AI has dramatically reduced the cost and increased the speed of producing malware (15’)
  • AI, teenage suicides and protecting users (16’ 50”)
  • AI for good: using AI to combat AI malware (19’)
  • How a Russian bank used AI chatbots to divert fraudsters (19’ 40”)
  • Data poisoning—manipulating the training data for AI models (22’ 10”)
  • Techniques for tricking LLMs (23’)
  • Only state actors can manipulate AI models at scale (25’ 40”)
  • The use of SMS blasters by fraudsters is exploding! (27’)