Bionic Bug Podcast


The Tank (Ch. 35) – Bionic Bug Episode 035

December 16, 2018

Hey everyone, welcome back to Bionic Bug podcast! You’re listening to episode 35. This is your host Natasha Bajema, fiction author, futurist, and national security expert. I’m recording this episode on December 16, 2018. 

Let’s talk tech:

“Does AI Truly Learn And Why We Need to Stop Overhyping Deep Learning,” published on Forbes.com on December 15.

This is a great article that clearly articulates what today’s AI is and, more importantly, what it isn’t.

The media hype around AI has led the general public to hold misconceptions about the art of the possible.

When I talk about AI to my students, I refer to it as the next generation in software applications. Of course, that’s an oversimplification. Hardware is important as well. But it’s important to understand that machine intelligence at its current stage is not all that intelligent, at least not when compared to humans. Can machines outperform humans in certain areas? Yes, but that’s been true for decades.

Machine learning refers to a new approach that allows computers to “learn” from data rather than be limited to manually coded sets of rules, and to “reason” their way to accurate outcomes. But it’s important to understand what “learning” and “reasoning” mean when it comes to computers. Neither is even close to human notions of learning and reasoning.
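
To make that contrast concrete, here’s a minimal sketch, which is my own illustration and not from the article: a hand-coded rule versus a rule induced from labeled examples. The spam-filter framing and the numbers are invented for illustration.

```python
# Hypothetical example: a hand-written rule vs. a rule induced from data.
from sklearn.linear_model import LogisticRegression

# Hand-coded rule: a human writes the "knowledge" directly into the code.
def rule_based_spam_filter(num_exclamation_marks: int) -> bool:
    return num_exclamation_marks > 3

# "Learned" rule: the model fits statistical parameters to labeled examples
# chosen by a human -- that is all the "learning" amounts to here.
X = [[0], [1], [2], [5], [7], [9]]   # feature: exclamation marks per message
y = [0, 0, 0, 1, 1, 1]               # labels: 0 = not spam, 1 = spam
model = LogisticRegression().fit(X, y)

print(rule_based_spam_filter(6))     # True
print(model.predict([[6]]))          # [1] -- same answer, but the rule was induced, not written
```

Either way, the computer has no concept of what “spam” means; the learned version simply replaces a hand-written threshold with fitted parameters.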

The author of this piece argues that data scientists “treat their algorithmic creations as if they were alive, proclaiming that their algorithm ‘learned’ a new task, rather than merely induced a set of statistical patterns from a hand-picked set of training data under the direct supervision of a human programmer who chose which algorithms, parameters and workflows to use to build it.”

When a machine learning tool “learns” to identify dog breeds, it does so by associating “spatial groupings of colors and textures with particular strings of text.” The tool doesn’t understand what “breed” or “dog” means, that the dog is wearing a collar, or why that might be the case.

A slight change of context and this tool would fail to perform its task. For example, what if we dressed the dogs up in Halloween costumes? Unless the tool was specifically trained on images of dogs in costumes, it would most likely fail to identify them as dogs. Compare this to the human ability to learn: once a child understands what a dog is, it doesn’t matter whether the dog is wearing a hat, boots, or a costume; it’s still a dog.
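
Here’s a small, hedged illustration of that failure mode, again my own and not the article’s: a toy classifier trained on surface statistics of ordinary dog photos misfires as soon as those statistics shift, the way a costume would shift them. The two features and all the numbers are synthetic stand-ins for color and texture statistics.

```python
# Hypothetical illustration of context shift: the features below are synthetic
# stand-ins for color/texture statistics extracted from photos.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Training data: "dog" photos cluster in one region of feature space,
# "not dog" photos in another.
dogs     = rng.normal(loc=[0.8, 0.8], scale=0.05, size=(100, 2))
non_dogs = rng.normal(loc=[0.2, 0.2], scale=0.05, size=(100, 2))
X = np.vstack([dogs, non_dogs])
y = np.array([1] * 100 + [0] * 100)

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# A Halloween costume changes the surface statistics, even though a human
# still recognizes the dog instantly.
costumed_dogs = rng.normal(loc=[0.3, 0.3], scale=0.05, size=(10, 2))
print(clf.predict(costumed_dogs))  # mostly 0s: the classifier no longer "sees" a dog
```

A child’s concept of “dog” doesn’t live in those surface statistics, which is exactly the gap the article is pointing at.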

For a detailed understanding of the current status of AI, I encourage you to check out the Artificial Intelligence Index, 2018 Annual Report. It will give you current numbers as well as analysis of current capabilities and limitations.

“Congress Can Help the United States Lead in Artificial Intelligence,” by Michael Horowitz and Paul Scharre, published on foreignpolicy.com on December 10.

The U.S. has fallen behind in the development of AI. Last year, China released its “Next Generation Artificial Intelligence Development Plan,” which aims to make China the world leader in artificial intelligence by 2030. Many other countries have released AI strategies, but the U.S. does not yet have a comprehensive one.

Congress is about to hold hearings to assess the Department of Defense’s progress on AI.

The most recent National Defense Authorization Act mandated the creation of the National Security Commission on Artificial Intelligence. Members will be appointed by senior congressional leaders and agency heads and will develop recommendations for advancing the development of AI techniques to bolster U.S. national security.

The authors make three important recommendations for the commission:

First, we need to accelerate the pace of bureaucracy to le...