I run OpenClaw. Not in a lab — in my actual workflow, connected to my messaging platforms, with access to my files and tools. It's powerful. It's also, if you're not careful, a gaping hole in your security posture.
OpenClaw is an open-source AI agent framework that bridges large language models to real-world tools: shell access, browser automation, file systems, paired devices, and messaging platforms. For consultants and small teams, it's transformative. But the same capabilities that make it useful make it dangerous when misconfigured.
I've spent the last several months running OpenClaw in production while advising organizations on SOC 2 compliance, HITRUST certification, and security strategy. Here's what I've learned about the attack surface — and how to lock it down.
The Threat Landscape: 10 Risks You Need to Know
1. Unrestricted Shell Access
By default, the AI agent can execute arbitrary commands, read and write files, and access network services on the host machine. If you haven't tightened the execution policy, your assistant has root-equivalent access to everything on that box. An attacker who can influence the agent's behavior inherits that access.
2. Prompt Injection
This is the big one. Prompt injection attacks craft messages that manipulate the AI into performing unintended actions. Anyone who can send a message to your bot — a DM, a group chat message, even content the bot reads from a webpage — can potentially hijack its behavior. The model follows instructions. If an attacker's instructions are more convincing than yours, you have a problem.
3. Overly Permissive DM and Group Policies
OpenClaw's default pairing mode provides reasonable guardrails, but many users set dmPolicy="open" or groupPolicy="open" for convenience. This means any stranger who finds your bot can trigger it — and by extension, trigger the tools it has access to. Convenience is the enemy of security.
4. Session Log Exposure
Every conversation with your AI assistant is logged to disk at ~/.openclaw/. These transcripts contain your full conversation history: commands run, files accessed, decisions made. If those files are readable by other processes or users on the system — and by default they often are — you're leaking operational intelligence.
5. Browser Relay and CDP Exposure
OpenClaw's browser automation uses Chrome DevTools Protocol (CDP) via remote debugging ports. If the browser relay or debugging port is exposed beyond localhost, an attacker gains operator-level access to your browser sessions — cookies, authentication tokens, and all.
6. Node Remote Code Execution
Paired nodes allow system.run, which is literal remote code execution on the paired machine. If your exec approval policy is too permissive — or worse, auto-approved — anyone who can influence the agent can run arbitrary commands on your other devices.
7. Credential Storage on Disk
WhatsApp credentials, Telegram bot tokens, API keys, and other secrets are stored on disk within the OpenClaw directory. If file permissions aren't locked down to 700 for directories and 600 for files, any local user or compromised process can read them.
8. Plugin and Extension Trust
Plugins run in-process with the OpenClaw Gateway. Installing a plugin via npm install runs lifecycle scripts — meaning untrusted code executes with the same permissions as the Gateway itself. One malicious or compromised package and your entire agent environment is owned.
9. Network Exposure
Binding the Gateway to 0.0.0.0 instead of 127.0.0.1 exposes it to the network. Using Tailscale Funnel instead of Tailscale Serve makes your control interface publicly reachable. Short or weak authentication tokens compound the problem. Each of these alone is a risk; together, they're an invitation.
10. Elevated Tools in Group Chats
This is the worst-case scenario: an open group chat where the bot has elevated tool access. Combine prompt injection with shell execution in a room where anyone can post, and you've created an open RCE endpoint with a natural language interface.
Best Practices: Locking It Down
The good news is that OpenClaw provides the controls — you just need to use them. Here's my hardening checklist:
Run regular security audits. OpenClaw includes a built-in audit command: openclaw security audit --deep. Run it. Run it regularly. Automate it if you can.
Use allowlists, never open policies. Replace dmPolicy="open" and groupPolicy="open" with explicit allowlists. If someone doesn't need access, they shouldn't have it.
Scope sessions per peer. Set dmScope to per-channel-peer in multi-user setups. This prevents conversation bleed between users and limits the blast radius of any single compromised session.
Lock filesystem permissions. Set the ~/.openclaw/ directory to 700 and all sensitive files within it to 600. This is basic Unix hygiene, but it's easy to overlook.
Use Tailscale Serve, not Funnel. If you need remote access to your Gateway, Tailscale Serve keeps it within your tailnet. Funnel exposes it to the public internet. The difference matters.
Restrict node exec approvals. Review and tighten the approval policy for paired nodes. Auto-approving shell commands on remote machines is asking for trouble.
Pin plugin versions and audit before enabling. Treat plugins like you'd treat any third-party dependency in a production system. Pin versions, review changelogs, and audit the code before you let it run in-process.
Choose instruction-hardened models. Not all LLMs handle prompt injection equally. Modern instruction-hardened models from Anthropic, OpenAI, and others are significantly more resistant to manipulation. Use them.
Treat the Gateway as a trust boundary. The Gateway is the control plane for your AI agent. Apply the same rigor you'd apply to any other critical infrastructure component: network segmentation, access controls, monitoring, and logging.
Separate dev and prod environments. Don't test new plugins, policies, or configurations on the same instance that has access to your production credentials and systems. Maintain separate environments with separate trust levels.
The Bigger Picture
AI assistants like OpenClaw are becoming integral to how we work. That's not going to slow down. But the security conversation hasn't caught up to the capability curve. We're giving AI agents shell access, browser control, and remote execution — and too many deployments treat security configuration as an afterthought.
If you're running OpenClaw — or any AI agent framework — in a professional context, you need to think about this the same way you'd think about deploying any other privileged service. Access controls, network segmentation, credential management, and monitoring aren't optional.
Need Help?
If you're deploying AI assistants like OpenClaw in your organization, you need a security strategy — not just for the AI, but for the expanding attack surface it creates.
Take our free Fractional CISO assessment to see where your security posture stands, or book a strategy session to talk through your specific setup.