The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Latest Episodes

V-JEPA, AI Reasoning from a Non-Generative Architecture with Mido Assran - #677
March 25, 2024

Today we’re joined by Mido Assran, a research scientist at Meta’s Fundamental AI Research (FAIR). In this conversation, we discuss V-JEPA, a new model being billed as “the next step in Yann LeCun's vision” for true artificial reasoning. V-JEPA, the video…

Video as a Universal Interface for AI Reasoning with Sherry Yang - #676
March 18, 2024

Today we’re joined by Sherry Yang, a senior research scientist at Google DeepMind and a PhD student at UC Berkeley. In this interview, we discuss her new paper, “Video as the New Language for Real-World Decision Making,” which explores how generative video…

Assessing the Risks of Open AI Models with Sayash Kapoor - #675
March 11, 2024

Today we’re joined by Sayash Kapoor, a Ph.D. student in the Department of Computer Science at Princeton University. Sayash walks us through his paper, “On the Societal Impact of Open Foundation Models.” We dig into the controversy around AI safety, the risks…

OLMo: Everything You Need to Train an Open Source LLM with Akshita Bhagia - #674
March 04, 2024

Today we’re joined by Akshita Bhagia, a senior research engineer at the Allen Institute for AI. Akshita joins us to discuss OLMo, a new open source language model with 7 billion and 1 billion parameter variants, but with a key difference compared to similar models…

Training Data Locality and Chain-of-Thought Reasoning in LLMs with Ben Prystawski - #673
February 26, 2024

Today we’re joined by Ben Prystawski, a PhD student in the Department of Psychology at Stanford University working at the intersection of cognitive science and machine learning. Our conversation centers on Ben’s recent paper, “Why Think Step by Step? Reasoning Emerges from the Locality of Experience”…

Reasoning Over Complex Documents with DocLLM with Armineh Nourbakhsh - #672
February 19, 2024

Today we're joined by Armineh Nourbakhsh of JP Morgan AI Research to discuss the development and capabilities of DocLLM, a layout-aware large language model for multimodal document understanding. Armineh provides a historical overview of the challenges of…

Are Emergent Behaviors in LLMs an Illusion? with Sanmi Koyejo - #671
February 12, 2024

Today we’re joined by Sanmi Koyejo, an assistant professor at Stanford University, to continue our NeurIPS 2023 series. In our conversation, Sanmi discusses his two recent award-winning papers. First, we dive into his paper, “Are Emergent Abilities of Large Language Models a Mirage?”…

AI Trends 2024: Reinforcement Learning in the Age of LLMs with Kamyar Azizzadenesheli - #670
February 05, 2024

Today we’re joined by Kamyar Azizzadenesheli, a staff researcher at Nvidia, to continue our AI Trends 2024 series. In our conversation, Kamyar updates us on the latest developments in reinforcement learning (RL), and how the RL community is taking advantage of…

Building and Deploying Real-World RAG Applications with Ram Sriharsha - #669
January 29, 2024

Today we’re joined by Ram Sriharsha, VP of engineering at Pinecone. In our conversation, we dive into the topic of vector databases and retrieval augmented generation (RAG). We explore the trade-offs between relying solely on LLMs for retrieval tasks versus…
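For context on the pattern Ram describes, here is a minimal, illustrative sketch of the retrieve-then-generate flow behind RAG. The toy overlap scorer and the prompt-assembly helper are hypothetical stand-ins: a production system would use learned embeddings and a vector database such as Pinecone, and nothing here reflects Pinecone's actual API or anything specific said in the episode.

```python
# Conceptual RAG sketch: retrieve relevant documents, then ground the prompt in them.
# The lexical-overlap scorer below is a toy assumption standing in for
# embedding-based similarity search in a vector database.

def score(query: str, doc: str) -> float:
    """Toy relevance score via word overlap (Jaccard similarity)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the generation step in retrieved context rather than the LLM's memory alone."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only the context below.\n\nContext:\n{joined}\n\nQuestion: {query}"

corpus = [
    "Pinecone is a managed vector database for similarity search.",
    "Retrieval augmented generation grounds LLM outputs in external documents.",
    "The TWIML AI Podcast covers machine learning and AI news.",
]
print(build_prompt("How does retrieval augmented generation work?",
                   retrieve("retrieval augmented generation", corpus)))
```

The sketch only illustrates the trade-off raised in the episode: with retrieval, the model answers from supplied documents; without it, the LLM must rely on whatever it memorized during training.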

Nightshade: Data Poisoning to Fight Generative AI with Ben Zhao - #668
January 22, 2024

Today we’re joined by Ben Zhao, a Neubauer Professor of Computer Science at the University of Chicago. In our conversation, we explore his research at the intersection of security and generative AI. We focus on Ben’s recent Fawkes, Glaze, and Nightshade…