The Security Ledger Podcast

Episode 180: Gary McGraw on Machine Learning Security Risks

March 31, 2020

In this episode of the podcast (#180), Gary McGraw of the Berryville Institute of Machine Learning joins us to talk about the top security threats facing machine learning systems.

As long as humans have contemplated the idea of computers, they have contemplated the idea of computers capable of thinking and reasoning. And as long as they've contemplated the notion of a thinking machine, they've wondered how to contend with the consequences of a computer's faulty reasoning.

Stories about machines acting logically – but based on faulty or incorrect assumptions – have fueled science fiction tales ranging from Arthur C. Clarke's 2001: A Space Odyssey to Philip K. Dick's Minority Report, to 1980s cult classics like WarGames and The Terminator.

Gary McGraw is the Co-Founder of the Berryville Institute of Machine Learning.

So far, these warnings have been the stuff of fiction. But advances in computing power and accessibility in recent years have put rocket boosters on the applications and abilities of machine learning technology, which now influences everything from multi-billion dollar trades on Wall Street, to medical diagnoses, to which movie Netflix recommends you watch next.

As machine learning and automation fuel business disruption, however, what about the security of machine learning systems themselves? Might their decisions be manipulated and corrupted by malicious actors intent on sowing disruption or lining their own pockets? And when machine decisions go awry, how will the humans affected by those decisions know?

Adversarial examples, such as subtly altered street signs, can fool machine learning algorithms into making incorrect decisions. (Photo courtesy of Cornell University.)

Our guest this week, Gary McGraw, set out to answer some of those questions. Gary is a co-founder of the Berryville Institute of Machine Learning, a think tank that has taken on the task of analyzing machine learning systems from a cybersecurity perspective. The group has just published its first report, An Architectural Risk Analysis of Machine Learning Systems, which includes a top 10 list of machine learning security risks as well as security principles to guide the development of machine learning technology.

In this conversation, Gary and I talk about why he started BIML and some of the biggest security risks to machine learning systems.