Agentic AI: Hype, Hope, or Real Risk?


Andrew Grotto (founder and director of the Program on Geopolitics, Technology and Governance at Stanford University) puts it plainly: there’s a 5% chance that within the next 10 years, AI could rule over humans. That number might sound small, but it’s enough to take seriously.

He joins host Lieuwe Jan Koning and guest Davis Hake (Senior Director for Cybersecurity at Venable) as they dive into the technology, governance, and risks behind autonomous AI. From system trustworthiness to liability, and market incentives to regulation, they break down what’s already happening and what needs to happen next.

They also discuss how humans will struggle to validate AI outcomes in areas where AI excels, why thoughtful deployment is key, and what it means to be “quick, but not in a hurry.”

Key topics:
✅ How to adapt your security and governance for the use of AI
🧠 Why applying existing IT risk frameworks is a smart starting point
⚖️ How to balance regulation, trust, and innovation

Can your organization keep up with AI that moves faster than human oversight?


Your cybersecurity experts

Lieuwe Jan Koning

Co-Founder and CTO
ON2IT

Davis Hake

Senior Director of Cybersecurity Services,
Venable LLP

Andrew Grotto

Research, Teaching, and Advisory
Stanford University

Episode details

From FOMO to Frameworks: Governing AI in Practice

As AI adoption accelerates, organizations are racing to understand how to implement and govern it securely. In this episode of Threat Talks, Lieuwe Jan Koning speaks with Andrew Grotto (Stanford University, Hoover Institution) and Davis Hake (Venable) to discuss the current state of agentic AI and its broader cybersecurity implications.

Rather than focusing on hypothetical risks, this conversation zeroes in on what organizations can and should be doing today. Andrew outlines four waves of IT innovation, positioning AI as the next major leap: one that manages complexity like never before. He encourages organizations to “be quick, but don’t hurry,” advocating for thoughtful AI deployment using existing IT governance where it fits.

Davis pushes for a mindset shift: trust and value must guide security decisions, not just technical controls. He likens modern AI agents to the Matrix (systems that “already know kung fu”), highlighting their speed and complexity. The group discusses how market incentives often reward insecure products, and how end users and vendors alike need better alignment.

In this episode of Threat Talks:
• Why AI complicates validation and compliance
• How to draw the line between user and vendor responsibility
• What Europe’s AI regulation framework gets right (and where the US approach has gaps)


Get your Hacker T-shirt

Join the treasure hunt!

Find the code within this episode and receive your own hacker t-shirt for free.
