HumAIn Podcast

Why Responsible AI is Needed in Explainable AI Systems with Christoph Lütge of TUM

March 29, 2020

Bias in AI is a growing concern as algorithms produce unfair outcomes in many areas, including hiring, loan applications, and autonomous vehicles. The public expects AI to be accountable, and there are increasing calls to develop standards and governance systems that keep the technology in check.

The black box problem illustrates a core flaw of many AI systems: their inner workings cannot be scrutinized. People want an accountable technology, and when AI operates as a black box, someone must take responsibility for how its algorithms reach their decisions if we want better outcomes.

AI can also cause harm by making opaque decisions that negatively affect people's lives, which is why responsible AI systems are needed. By integrating explainable AI into their models, businesses can make more accurate decisions, identify patterns, and optimize operations.

Listen in as I discuss why responsible AI is needed in explainable AI systems.

In this episode: Prof. Christoph Lütge, Director of the TUM Institute for Ethics in Artificial Intelligence (Germany)

This episode is brought to you by For the People. You can grab your copy of For the People on Amazon today, or visit SIMONCHADWICK.US to learn more about Simon.

Learn more about your ad-choices at www.humainpodcast.com/advertise

You can support the HumAIn podcast and receive subscriber-only content at http://humainpodcast.com/newsletter