Concerning AI | Existential Risk From Artificial Intelligence
Latest Episodes
0070: We Don’t Get to Choose
Or do we? http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0070-2018-09-30.mp3
0069: Will bias get us first?
Ted interviews Jacob Ward, former editor of Popular Science and a journalist at many outlets. Jake’s article about the book he’s writing: Black Box. Jake’s website: JacobWard.com. Implicit bias tests at Harvard. We discuss the idea that we’re currently using narrow AI …
0067: The OpenAI Charter (and Assassination Squads)
We love the OpenAI Charter. This episode is an introduction to the document and gets pretty dark. Lots more to come on this topic!
0066: The AI we have is not the AI we want
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0066-2018-04-01.mp3
0065: AGI Fire Alarm
We discuss “There’s No Fire Alarm for Artificial General Intelligence” by Eliezer Yudkowsky. http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0065-2018-03-18.mp3
0064: AI Go Foom
We discuss “Intelligence Explosion Microeconomics” by Eliezer Yudkowsky. http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0064-2018-03-11.mp3
0062: There’s No Room at the Top
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0062-2018-03-04.mp3
0061: Collapse Will Save Us
Some believe civilization will collapse before the existential AI risk has a chance to play out. Are they right?