TL;DR

Deploying OpenClaw via Docker is the recommended approach for setting up an isolated AI agent gateway on a VPS or cloud host. Using the official scripts/docker/setup.sh script, you can initialize the gateway, configure API keys, and enable the Agent Sandbox in minutes. This guide walks you through the entire OpenClaw deployment process from scratch to production.

✨ Key Takeaways

  • Automated Setup: Use the official setup script for a seamless onboarding experience with pre-built GitHub Container Registry (GHCR) images.
  • Secure by Default: The default image runs as a non-root node user (UID 1000) and drops unnecessary network privileges.
  • Agent Sandboxing: Protect your host system by isolating LLM agent tool executions in ephemeral containers.
  • Persistence: Map OPENCLAW_CONFIG_DIR and OPENCLAW_WORKSPACE_DIR to the host to ensure your data survives container restarts.

💡 Quick Tool: Need to adjust your configuration? Use our free JSON Formatter to validate your openclaw.json or .env files before restarting the container.

What is OpenClaw?

OpenClaw is a powerful gateway and orchestrator for AI agents. It connects Large Language Models (LLMs) with tools, APIs, and messaging channels (like Discord, Telegram, and WhatsApp), allowing you to build autonomous workflows.

Rather than running AI agents as simple scripts, OpenClaw acts as a robust middle layer that manages authentication, session state, tool sandboxing, and web UI dashboards.

📝 Glossary: AI Agent — Learn more about what makes an AI system autonomous and capable of taking actions.

Why Choose Docker for OpenClaw Deployment

While you can install OpenClaw directly via Node.js, a Docker deployment offers distinct advantages:

| Feature | Local Install | Docker Deployment |
|---|---|---|
| Environment | Pollutes host system | Fully isolated container |
| Dependencies | Requires Node.js, Bun, pnpm | Only requires Docker Desktop / Engine |
| Security | Host-level execution risks | Non-root container, optional Sandbox |
| Portability | Hard to replicate | Highly reproducible via Docker Compose |

If you just want the fastest development loop on a personal laptop, a local install might suffice. But for any public-facing VPS or team environment, Docker is mandatory.

How OpenClaw Architecture Works

Before running the commands, it's crucial to understand how OpenClaw components interact within a containerized environment.

```mermaid
graph TD
    User["User / Browser"] -->|HTTP 18789| Gateway["OpenClaw Gateway Container"]
    CLI["OpenClaw CLI Container"] -->|Local API| Gateway
    Gateway -->|Tool Execution Dispatch| Sandbox["Agent Sandbox Containers"]
    Gateway -->|Volume Mount| Config[("~/.openclaw")]
    Gateway -->|Volume Mount| Workspace[("workspace/")]

    style User fill:#e1f5fe,stroke:#01579b
    style Gateway fill:#fff3e0,stroke:#e65100
    style CLI fill:#fff3e0,stroke:#e65100
    style Sandbox fill:#fce4ec,stroke:#c2185b
    style Config fill:#e8f5e9,stroke:#2e7d32
    style Workspace fill:#e8f5e9,stroke:#2e7d32
```

The openclaw-gateway acts as the long-running daemon. The openclaw-cli container is spun up on-demand to run commands (like adding messaging channels or approving devices) and communicates with the gateway over the shared container network namespace.
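
This wiring can be sketched in a `docker-compose.yml` fragment. The service names follow this guide, but the image tag, workspace path, and use of a compose profile are assumptions, not the project's official file:

```yaml
services:
  openclaw-gateway:
    image: ghcr.io/openclaw/openclaw:latest
    ports:
      - "18789:18789"              # Web UI / HTTP API
    volumes:
      - ~/.openclaw:/home/node/.openclaw
      - ./workspace:/home/node/workspace

  openclaw-cli:
    image: ghcr.io/openclaw/openclaw:latest
    # Share the gateway's network namespace so the CLI reaches it
    # over loopback ("service:openclaw-gateway" network mode).
    network_mode: "service:openclaw-gateway"
    profiles: ["cli"]              # started on demand via `docker compose run`
```

The `network_mode: "service:openclaw-gateway"` line is what makes the CLI container fail if the gateway is not already running, which is why gateway startup always comes first.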

Step-by-Step OpenClaw Docker Deployment

Prerequisites

Ensure your host machine has:

  1. Docker Engine and Docker Compose v2 installed.
  2. At least 2 GB RAM (otherwise the build process might be OOM-killed with exit code 137).
  3. Basic understanding of Linux file permissions.

Method 1: The Automated Setup Script (Recommended)

The easiest way to deploy is by using the official setup script and the pre-built images from GHCR.

```bash
# Clone the repository
git clone https://github.com/openclaw/openclaw.git
cd openclaw

# Specify the remote pre-built image
export OPENCLAW_IMAGE="ghcr.io/openclaw/openclaw:latest"

# Run the setup script
./scripts/docker/setup.sh
```

During this process, the script will:

  • Prompt you for your AI Provider API keys (e.g., OpenAI, Anthropic).
  • Generate a secure gateway token and save it to .env.
  • Launch the openclaw-gateway via Docker Compose.

Once started, open http://127.0.0.1:18789/ in your browser and paste the shared secret generated during setup.
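
For orientation, the generated `.env` typically holds the secrets the script collected. The variable names below are illustrative assumptions, not the script's exact output; check your generated file for the real ones:

```ini
# Illustrative .env sketch -- actual variable names may differ
OPENCLAW_GATEWAY_TOKEN=change-me
ANTHROPIC_API_KEY=change-me
OPENAI_API_KEY=change-me
OPENCLAW_IMAGE=ghcr.io/openclaw/openclaw:latest
```

Keep this file out of version control; it is the one artifact that grants full access to your gateway.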

Method 2: The Manual Flow

If you need granular control over the build process, you can execute the steps manually:

```bash
# 1. Build the local image
docker build -t openclaw:local -f Dockerfile .

# 2. Run the onboarding wizard interactively
docker compose run --rm --no-deps --entrypoint node openclaw-gateway \
  dist/index.js onboard --mode local --no-install-daemon

# 3. Configure the gateway binding
docker compose run --rm --no-deps --entrypoint node openclaw-gateway \
  dist/index.js config set --batch-json '[{"path":"gateway.mode","value":"local"},{"path":"gateway.bind","value":"lan"}]'

# 4. Start the daemon in the background
docker compose up -d openclaw-gateway
```

🔧 Try it now: Before configuring complex JSON configurations, ensure your syntax is correct using our JSON Validator.

Advanced OpenClaw Configuration

Adding Messaging Channels via CLI

Because the openclaw-cli tool shares the network namespace with the gateway, you must run it after the gateway is up.

```bash
# To link a Telegram bot:
docker compose run --rm openclaw-cli channels add --channel telegram --token "YOUR_BOT_TOKEN"

# To link Discord:
docker compose run --rm openclaw-cli channels add --channel discord --token "YOUR_BOT_TOKEN"
```

Environment Variables for Setup

The setup script supports several environment variables to customize your deployment:

| Variable | Purpose |
|---|---|
| `OPENCLAW_IMAGE` | Use a remote image instead of building locally. |
| `OPENCLAW_DOCKER_APT_PACKAGES` | Inject extra APT packages (e.g., `git curl jq`) during build. |
| `OPENCLAW_SANDBOX` | Enable the Agent Sandbox setup (`1`, `true`). |
| `OPENCLAW_HOME_VOLUME` | Persist `/home/node` in a named Docker volume. |
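
These variables combine naturally into a single invocation. A sketch with example values; note that treating `OPENCLAW_DOCKER_APT_PACKAGES` as local-build-only is an inference from the table above, since `OPENCLAW_IMAGE` skips the build entirely:

```shell
# Pull the pre-built GHCR image instead of compiling on the server
export OPENCLAW_IMAGE="ghcr.io/openclaw/openclaw:latest"

# Enable the Agent Sandbox and keep /home/node in a named volume
export OPENCLAW_SANDBOX=1
export OPENCLAW_HOME_VOLUME=1

# OPENCLAW_DOCKER_APT_PACKAGES only matters when building locally, e.g.:
# export OPENCLAW_DOCKER_APT_PACKAGES="git curl jq"

# Then run the setup script from the repository root:
# ./scripts/docker/setup.sh
```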

Agent Sandbox Setup

When an AI agent is given the ability to execute code or shell commands, running those commands directly on the host or inside the main gateway container is extremely dangerous.

The Agent Sandbox solves this by spinning up ephemeral, isolated containers exclusively for tool execution.

To quickly enable the sandbox:

```bash
# Export the sandbox flag
export OPENCLAW_SANDBOX=1

# Run the setup script
./scripts/docker/setup.sh
```

This modifies your configuration to look like this:

```json
{
  "agents": {
    "defaults": {
      "sandbox": {
        "mode": "non-main",
        "scope": "agent"
      }
    }
  }
}
```

The gateway itself remains on the host (or in its own container), but any shell commands executed by the AI are routed to the sandbox container.

Best Practices

  1. Persist Your Data — Always bind mount OPENCLAW_CONFIG_DIR (maps to /home/node/.openclaw) to your host. This ensures your API keys, OAuth profiles, and openclaw.json survive container restarts.
  2. Handle File Permissions — The OpenClaw Docker image runs as the `node` user (UID 1000). Ensure your host directories have the correct ownership to avoid "permission denied" errors:

     ```bash
     sudo chown -R 1000:1000 /path/to/openclaw-config
     ```

  3. Use Pre-built Images — Unless you are actively modifying OpenClaw's source code, always export OPENCLAW_IMAGE to use the GHCR image. This skips the memory-intensive pnpm install build step on your server.
  4. Monitor Disk Space — AI sessions generate logs and media files. Regularly check and clean up /tmp/openclaw/ and the session JSONL files to prevent disk exhaustion.
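
The persistence advice in point 1 can be captured in a compose override. A sketch, assuming the config dir maps to `/home/node/.openclaw` as noted above; the host paths and the workspace container path are examples, not official defaults:

```yaml
# docker-compose.override.yml -- host paths are examples
services:
  openclaw-gateway:
    volumes:
      - /srv/openclaw/config:/home/node/.openclaw      # OPENCLAW_CONFIG_DIR
      - /srv/openclaw/workspace:/home/node/workspace   # OPENCLAW_WORKSPACE_DIR
```

Run `sudo chown -R 1000:1000 /srv/openclaw` before first start so the non-root `node` user can write to both paths (point 2).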

⚠️ Common Mistakes:

  • Running CLI before Gateway → The openclaw-cli container will fail if the gateway isn't running because it relies on the service:openclaw-gateway network mode. Always ensure the daemon is up first.
  • Using 0.0.0.0 for Bind Mode → OpenClaw expects logical bind modes in its config (like lan or loopback). Do not set gateway.bind to raw IP addresses.

FAQ

Q1: How do I check the health of my OpenClaw container?

OpenClaw provides unauthenticated probe endpoints out of the box:

```bash
# Check liveness
curl -fsS http://127.0.0.1:18789/healthz

# Check readiness
curl -fsS http://127.0.0.1:18789/readyz
```
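
These endpoints also make a natural Docker healthcheck. A compose sketch (interval values are arbitrary; it assumes `curl` exists inside the image, which you can ensure via `OPENCLAW_DOCKER_APT_PACKAGES` when building locally):

```yaml
services:
  openclaw-gateway:
    healthcheck:
      test: ["CMD", "curl", "-fsS", "http://127.0.0.1:18789/healthz"]
      interval: 30s
      timeout: 5s
      retries: 3
```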

Q2: My server ran out of memory during installation. What happened?

If you build the image locally without OPENCLAW_IMAGE, the pnpm install step requires at least 2 GB of RAM. If you are on a 1 GB VPS, the Linux OOM killer will terminate the process with exit code 137. Use the pre-built image instead.

Q3: What is the difference between lan and loopback bind modes?

  • lan: Allows host browsers and published Docker ports to reach the gateway. This is required if you are accessing the dashboard from outside the server.
  • loopback: Only allows processes strictly inside the container's network namespace to reach the gateway.
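
In practice, `lan` mode is what pairs with a published port; with `loopback`, traffic arriving through a published port is refused because it does not originate inside the container's namespace. A compose sketch of the `lan` case:

```yaml
services:
  openclaw-gateway:
    # Only meaningful with gateway.bind set to "lan"; under "loopback"
    # connections via this published port will be rejected.
    ports:
      - "18789:18789"
```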

Q4: How do I install Playwright browsers inside the container?

If your agents need to browse the web, you must install Playwright dependencies inside the running container:

```bash
docker compose run --rm openclaw-cli node /app/node_modules/playwright-core/cli.js install chromium
```

Summary

Deploying OpenClaw via Docker provides a secure, robust, and scalable foundation for your AI agents. By leveraging the official setup script, configuring persistence correctly, and enabling the Agent Sandbox, you ensure that your LLM workflows run smoothly without compromising your host system's security.

👉 Start formatting your JSON Configs now — Ensure your OpenClaw configuration files are perfectly structured.