TechSpective Podcast
Why AI Agents Need Guardrails — And Why Everyone’s Talking About It
The latest episode of the TechSpective Podcast dives straight into one of the most pressing questions in cybersecurity right now: what happens when the vast majority of identities in your environment aren’t human anymore?

I sat down with Danny Brickman, co-founder and CEO of Oasis Security, for a wide-ranging conversation about the future of identity, the rise of agentic AI, and why enterprises may be sprinting into an AI-powered future without realizing just how much risk they’re accumulating along the way. Danny brings a background that blends offensive experience, deep identity expertise, and a pragmatic understanding of what security teams actually need, not just in theory, but in the messy reality of modern cloud environments.

We covered a lot of ground. Some of it gets philosophical. Some of it gets unsettling. None of it is boring.

A few themes we talk about (without giving the episode away):

- Identity is no longer about people. If you’re still thinking of identity as usernames and passwords, you’re roughly a decade behind. The overwhelming majority of identities in an enterprise belong to machines, services, workloads, keys, and tokens: digital “keycards” with no owner attached. And that was before agentic AI entered the picture.
- AI agents behave like employees, just much faster. This creates opportunity. It also creates chaos if you don’t know what your agents can access, what they can do, or how quickly they can do it. The idea of an AI system accidentally wiping out a database is no longer hypothetical.
- Access is becoming the currency of the AI era. The value an agent delivers correlates directly with the access it’s granted. That tension between capability and control is now central to modern security strategy.
- Governance frameworks for AI agents aren’t optional. Danny and his team have been working with industry leaders to build a framework that defines what’s acceptable, what’s risky, and how enterprises can put real guardrails around AI systems.
It may be the first time you’ve heard the term “agentic access management,” but it won’t be the last.

We also dig into the AI bubble, the trust problem, and why “do your own research” is becoming less meaningful in an AI-shaped world. These tangents got lively, but they all tie back to a core idea: when machines act on our behalf, we need to understand the implications.

Why this episode matters

AI is reshaping cybersecurity faster than any shift we’ve seen in years. But it’s also blurring lines: between humans and machines, autonomy and oversight, innovation and risk. We don’t try to package neat answers. Instead, we raise the questions every security leader should be asking right now:

- What should agents be allowed to do?
- Who’s accountable when something goes wrong?
- How do we maintain trust in systems that move faster than we can supervise?
- And what does identity even mean in a world where humans are the minority?

If you want a thoughtful, candid exploration of these issues, and a look at how one company is thinking about securing the future, give the episode a listen. The full episode is now live on the TechSpective Podcast. Let the conversation challenge your assumptions.




