OpenClaw is a self-hosted AI agent platform that runs on your machine and connects to the messaging apps you already use — Telegram, iMessage, Discord, WhatsApp, Signal, and more. You pick the AI model. You own the data. It runs 24/7 in the background.
It launched in early 2026 and search interest hit "Breakout" status on Google Trends almost immediately. The reason is simple: people are tired of switching between ChatGPT, Claude.ai, and Gemini depending on what they need. OpenClaw gives you one AI agent, in your existing apps, running on whatever model you want.
What OpenClaw actually does
Think of it as a personal AI assistant that lives on your hardware. You send it a message in Telegram (or iMessage, or Discord), and it responds using the AI model you configured — Claude, GPT-5, Gemini, or a local model via Ollama. No new apps. No context switching.
The core architecture has four pieces:
- Gateway — the Node.js server that runs locally (or on a VPS). It receives messages, routes them to your AI model, and sends responses back. You start it with openclaw gateway start.
- Channels — the messaging apps it connects to. Telegram and Discord are the most popular. iMessage, WhatsApp, Signal, Slack, and about 20 others are supported.
- Providers — the AI models it uses. Anthropic (Claude), OpenAI (GPT), Google (Gemini), and Ollama for local models. You can switch models per conversation.
- Skills — optional extensions that add capabilities: web search, Apple Notes, Apple Reminders, code execution, browser control, and more.
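To make the four pieces concrete, here is a purely illustrative sketch of a config that ties them together. The real file location, key names, and schema come from OpenClaw's onboarding wizard; every key below is an assumption for illustration, not documented API.

```shell
# Illustrative sketch only: all key names here are assumptions.
# The actual config is generated by OpenClaw's onboarding wizard.
cat > openclaw-example.json <<'EOF'
{
  "gateway":   { "port": 3000 },
  "channels":  { "telegram": { "botToken": "<your-bot-token>" } },
  "providers": { "primary": "anthropic", "fallback": "ollama" },
  "skills":    ["web-search", "code-execution"]
}
EOF
```

Note how each top-level key maps to one of the four architecture pieces, and how splitting providers into a primary and a fallback mirrors the per-conversation model switching described above.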
Why people use it instead of just ChatGPT
There are four reasons that come up over and over:
- It lives in Telegram (or wherever you already are). Most people check their messages constantly. Having AI in that same window is faster than opening a separate app.
- You can run local models. Via Ollama, you can run Llama 4, DeepSeek, or Qwen entirely on your own hardware — no API costs, no data leaving your machine. Useful for sensitive tasks or cost-sensitive automation.
- It runs 24/7 without you. Set up cron jobs and it sends you daily summaries, monitors things, or runs tasks on a schedule. ChatGPT only works when you open it.
- You switch models without switching apps. Want Claude for writing and GPT-5 for code? Configure both. The model is a setting, not a product you log into.
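The 24/7 point above rests on ordinary cron scheduling. However OpenClaw registers a job internally, the timing syntax is the standard five-field cron expression (minute, hour, day of month, month, weekday). A quick sketch of how such an expression reads:

```shell
# "0 8 * * *" = minute 0, hour 8, any day/month/weekday,
# i.e. 08:00 every day: a natural slot for a morning summary.
schedule='0 8 * * *'
set -f               # disable globbing so the literal * fields survive
set -- $schedule     # split the expression into its five fields
echo "minute=$1 hour=$2 day=$3 month=$4 weekday=$5"
# prints: minute=0 hour=8 day=* month=* weekday=*
```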
Who it's for
OpenClaw is squarely aimed at developers, power users, and technically curious people. The installation requires a terminal and Node.js 22+. There is a setup wizard that handles most of the config, but it is not a consumer app you install from an App Store.
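Since Node.js 22+ is the main prerequisite, a quick version check before installing saves a failed run. This sketch uses only standard tools, nothing OpenClaw-specific:

```shell
# Extract the major version from `node --version` (e.g. "v22.1.0" -> 22)
# and compare it against the stated minimum of Node.js 22.
major=$(node --version 2>/dev/null | sed 's/^v\([0-9]*\).*/\1/')
if [ "${major:-0}" -ge 22 ]; then
  echo "Node.js $major detected: OK"
else
  echo "Node.js 22+ required (found: ${major:-none})"
fi
```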
If you run a Mac Mini or a home server and want a personal AI that is always on, always available in your messaging apps, and costs nothing beyond your API keys — OpenClaw fits that use case better than anything else right now.
Is it free?
OpenClaw itself is free and open source. You pay for the AI model API calls you make — Claude, GPT-5, Gemini all charge per token. If you use Ollama with local models, the AI is free (you pay only electricity and hardware). There is no OpenClaw subscription.
How to get started
The fastest path is the installer script. On macOS or Linux:
curl -fsSL https://openclaw.ai/install.sh | bash

That installs the CLI, detects your Node version, and walks you through the onboarding wizard. From there you configure an API key and connect a channel.
Full step-by-step instructions: How to Install OpenClaw →
Frequently asked questions
- Does OpenClaw store my messages?
- It runs on your machine and your config files stay local. If you use a cloud AI provider like Anthropic or OpenAI, messages go to their API as normal. If you use Ollama, nothing leaves your machine.
- Does it work on Windows?
- Yes, via WSL2 (Windows Subsystem for Linux). Native Windows support exists but WSL2 is the recommended path for the best experience.
- Can it run on a Raspberry Pi or Mac Mini?
- Yes. The gateway is a lightweight Node.js process. Many people run it on a Mac Mini or home server so it's always on without leaving a laptop running.
- What AI models does it support?
- Claude (all tiers), GPT-5 and earlier OpenAI models, Gemini, and any model supported by Ollama. You can configure a primary model and a fallback.