Promptlock – The First AI-Powered Malware


If malware used to be predictable, those days are gone.
With AI in the driver’s seat, malware has become a shape-shifter.

Promptlock is the first documented case where AI was active inside the breach. It doesn’t just prepare an attack—it thinks during it. In this Threat Talks episode, Rob Maas (Field CTO, ON2IT) and Yuri Wit (SOC Specialist, ON2IT) break down how Promptlock rewrites the rules of intrusion.

Promptlock queries an LLM mid-attack, generates fresh payloads on demand, and pivots in real time. Yesterday’s defenses—static hashes, predictable fingerprints—collapse under its weight. But this isn’t a doomsday story. With Zero Trust as the anchor, you can choke its egress, block interpreters, and trap the “thinking” malware in its own loop.

This episode shows how to turn panic into power—and how to get ahead before shape-shifting malware becomes the norm.

Key Topics Covered

·       The new malware loop: Go loader → Ollama → LLM → adaptive Lua payloads

·       Why non-deterministic AI output kills static detection

·       Behavioral defense over signatures: EDR/XDR, sandboxing, SSL inspection

·       Zero Trust in practice: interpreter blocking, restricted egress, shrinking the blast radius
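The second bullet comes down to a simple fact: when an LLM regenerates a payload, two scripts that do exactly the same thing hash to unrelated values. A minimal sketch, using two illustrative Lua-style strings (not real Promptlock samples):

```python
import hashlib

# Two functionally identical payloads, as an LLM might emit them on
# successive runs: same behavior, different tokens.
payload_a = 'for f in list_files("/home") do exfiltrate(f) end'
payload_b = 'local fs = list_files("/home")\nfor _, f in ipairs(fs) do exfiltrate(f) end'

hash_a = hashlib.sha256(payload_a.encode()).hexdigest()
hash_b = hashlib.sha256(payload_b.encode()).hexdigest()

# A signature database keyed on hash_a will never match hash_b,
# even though both payloads behave identically.
print(hash_a == hash_b)  # False: the static IOC misses the regenerated payload
```

This is why the episode argues for behavioral detection: the hash changes every time, but the behavior does not.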

Your cybersecurity experts

Rob Maas

Field CTO
ON2IT

Yuri Wit

SOC Specialist
ON2IT

Episode details

Promptlock behaves like a thinking intruder. Once it lands, it contacts an attacker-controlled inference endpoint (e.g., Ollama) and asks an LLM what to do next. The model returns code—often Lua—generated on the fly. Promptlock runs that code, surveys the host, and repeats the loop. Each pass can change: enumerate files, detect OS/EDR, then pick a path—ransomware, data theft, or sabotage. Because each payload is new, the bytes rarely match prior samples. Hashes expire instantly. Static IOCs lose meaning.
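Even when every payload hash is new, the loop described above leaves a stable behavioral trail: an unknown binary spawns a script engine, which then makes a network call. A minimal sketch of matching that chain over a simplified event stream (the `(process, parent, action)` tuple format and all names are assumptions for illustration, not a real EDR API):

```python
# Script interpreters an attacker might abuse for on-the-fly payloads.
SCRIPT_ENGINES = {"lua", "python", "powershell"}

def suspicious_chain(events, known_binaries):
    """Flag the chain: unknown binary -> script engine -> network call.

    events: iterable of (process, parent, action) tuples, in order.
    known_binaries: set of process names trusted to spawn interpreters.
    """
    spawned_by_unknown = set()
    for proc, parent, action in events:
        # Step 1+2: a script engine started by something we don't recognize.
        if action == "spawn" and proc in SCRIPT_ENGINES and parent not in known_binaries:
            spawned_by_unknown.add(proc)
        # Step 3: that same engine reaches out to the network.
        if action == "net_connect" and proc in spawned_by_unknown:
            return True
    return False

# An unrecognized loader spawning Lua that phones home is flagged;
# PowerShell launched by a trusted parent is not.
malicious = [("lua", "dropper.bin", "spawn"), ("lua", "dropper.bin", "net_connect")]
benign = [("powershell", "explorer", "spawn"), ("powershell", "explorer", "net_connect")]
print(suspicious_chain(malicious, {"explorer"}))  # True
print(suspicious_chain(benign, {"explorer"}))     # False
```

The point is that detection keys on the shape of the activity, not on the bytes of any one payload.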

Defend by taking away the attacker’s room to improvise. Favor behavioral detection in EDR/XDR and sandboxing over signatures. Disable or restrict unneeded interpreters (Lua, Python, PowerShell). Where policy allows, decrypt and inspect TLS so LLM calls are visible. Enforce default-deny egress: only allow known destinations; block unknown inference endpoints. Segment tightly to limit lateral movement. Watch for suspicious chains (unknown binary → script engine → network call). This is Zero Trust in action: verify every step, minimize privileges, and assume the next payload will be different. Do this, and Promptlock’s “intelligence” hits walls—not your crown jewels.
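The default-deny egress rule above can be sketched as a simple allowlist check. The hostnames below are placeholders, not real policy; 11434 is Ollama's default port, which is why an unknown inference endpoint never gets a connection:

```python
# Hypothetical allowlist of (host, port) pairs your policy explicitly permits.
ALLOWED_EGRESS = {
    ("updates.example.com", 443),
    ("api.vendor.example", 443),
}

def egress_allowed(host: str, port: int) -> bool:
    """Default-deny: anything not explicitly allowlisted is blocked."""
    return (host, port) in ALLOWED_EGRESS

# Promptlock's mid-attack LLM query to an attacker-controlled Ollama
# endpoint (default port 11434) never leaves the network:
print(egress_allowed("attacker-ollama.example", 11434))  # False
print(egress_allowed("updates.example.com", 443))        # True
```

In practice this lives in a firewall or proxy policy rather than application code, but the logic is the same: the malware's "brain" is remote, so denying the unknown connection breaks the loop.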


Get your Hacker T-shirt

Join the treasure hunt!

Find the code within this episode and receive your own hacker t-shirt for free.
