OpenClaw and The Dark Side of Agentic AI

Infographic 2026

What if your biggest threat this year isn’t malware, but your own AI assistant?

OpenClaw connects a large language model directly to your terminal, browser, email, and messaging apps. It runs with your permissions and executes tasks without hesitation.

Within days of release, researchers uncovered a one-click remote code execution vulnerability. Cisco called it a security nightmare; Gartner labeled it an unacceptable risk.

OpenClaw was not built with meaningful guardrails. It centralizes automation and gives AI agents direct execution power inside your environment under a default-allow model.

Much has already been said about OpenClaw, and the headlines have focused on how big a threat it is.
In this episode of Threat Talks, Rob Maas, Field CTO at ON2IT, speaks with SOC analyst Yuri Wit to move beyond the noise. They examine what OpenClaw actually reveals about agentic AI security, and why, once deployed with user-level access, there is almost no practical way to constrain it.

AI agents are already entering real environments. The real question is not whether they are powerful, but whether we understand the consequences before they scale.


What you’ll learn

  • How OpenClaw works and why agentic AI changes the security model
    Why connecting LLMs directly to local systems creates a risk profile that is fundamentally hard to control.
  • How the one-click RCE exposed a structural weakness
    What the webhook takeover vulnerability reveals about AI agent security in practice.
  • How malicious skills expand the attack surface
    How community-created “skills” can introduce malicious instructions that the AI will execute without judgment.
  • Why there is no practical way to secure OpenClaw once deployed
    Beyond blocking installation or isolating it in a sandbox, there is no mature control model for enforcing granular restrictions on an autonomous AI agent.

Your cybersecurity experts

Rob Maas

Field CTO
ON2IT

Yuri Wit

SOC Specialist
ON2IT

Episode details

OpenClaw, previously known as Clawdbot and Moltbot, is an AI assistant that runs inside your own network and connects either to public LLMs from providers such as OpenAI or Anthropic, or to locally hosted models. It can interact with services, execute commands, install tools, and perform tasks based on natural language instructions.

The core issue is not capability. It is control.

OpenClaw operates with a default-allow mindset: if a user can perform an action, the AI agent can perform it as well. There are no meaningful built-in guardrails to restrict its behavior once it is installed and running with user permissions.
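The difference between that default-allow posture and a default-deny one can be sketched in a few lines. This is an illustrative Python sketch, not OpenClaw's actual code; the allowlist, function names, and commands are all hypothetical:

```python
import subprocess

# Hypothetical operator-defined allowlist; illustrative only.
ALLOWED = {"echo", "ls", "cat"}

def run_default_allow(cmd: list[str]) -> int:
    # Default allow: the agent runs anything the user could run.
    return subprocess.run(cmd, capture_output=True).returncode

def run_default_deny(cmd: list[str]) -> int:
    # Default deny: refuse any command not explicitly permitted.
    if cmd[0] not in ALLOWED:
        raise PermissionError(f"'{cmd[0]}' is not on the allowlist")
    return subprocess.run(cmd, capture_output=True).returncode
```

Under the first model, any restriction has to come from outside the agent, through sandboxing or separate accounts; under the second, restriction is a property of the agent itself.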

Shortly after release, researchers demonstrated a one-click remote code execution vulnerability by exploiting the webhook used for service interaction. A crafted link was enough to redirect control and let attackers execute commands directly on the system.
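The class of bug involved can be sketched as follows. This is a hypothetical minimal handler, not the actual OpenClaw webhook code; it shows how an endpoint that neither authenticates the caller nor constrains the payload turns a crafted request into command execution with the user's permissions:

```python
import subprocess

def handle_webhook(payload: dict) -> str:
    # Hypothetical vulnerable handler: no sender verification, no
    # token check, and the payload flows straight into a shell.
    # Whoever can make this endpoint fire controls the command.
    command = payload.get("command", "")
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout

# What an attacker-crafted link could deliver (harmless stand-in command):
output = handle_webhook({"command": "echo owned"})
```

The fix for this class of flaw is to authenticate the caller and treat the payload as data, never as an instruction to execute.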

The Clawhub skill marketplace compounds the risk. Skills are simple instruction sets. Some have been shown to be malicious. The AI does not evaluate intent. It executes what it is given.

Rather than focusing on hype, this conversation addresses the structural tension between autonomy and security. It explains why experimentation must happen in tightly controlled environments and why understanding the architectural implications matters before deploying autonomous AI agents in any setting.

The conclusion is not that OpenClaw is uniquely flawed. It is that autonomous AI agents are being deployed before a viable containment framework exists.

Get your Hacker T-shirt

Join the treasure hunt!

Find the code within this episode and receive your own hacker t-shirt for free.
