NanoClaw: The Complete Guide to the Lightweight Containerized AI Assistant
OpenClaw has 275K stars and half a million lines of code. NanoClaw delivers the same core functionality in a handful of files, with agents running in their own Linux containers for true OS-level isolation. With 20,000+ GitHub stars, NanoClaw is the security-focused, lightweight alternative to OpenClaw — built to be understood, audited, and customized.
What Is NanoClaw?
NanoClaw is a lightweight, open-source AI assistant that connects to WhatsApp, Telegram, Slack, Discord, Gmail, and other messaging apps. It runs agents securely in isolated Linux containers (Apple Container on macOS, Docker on Linux) and is built on Anthropic's Claude Agent SDK.
- Language: TypeScript
- License: MIT
- Stars: 20,000+ ⭐
- Forks: 3,300+
- Contributors: 49
Why NanoClaw Exists
The creator's motivation is clear: OpenClaw is impressive, but with ~500K lines of code, 53 config files, and 70+ dependencies, it's impossible for a single person to truly understand and audit. OpenClaw's security model relies on application-level checks (allowlists, pairing codes), and everything runs in one Node process with shared memory.
NanoClaw takes a different approach:
- Small enough to understand — One process, a few source files
- Secure by isolation — Agents run in Linux containers, not behind permission checks
- Built for the individual — Fork it, customize it, make it yours
Core Features
🔒 Container Isolation
Every agent runs in its own isolated Linux container:
- Apple Container (macOS) — Native lightweight runtime
- Docker (macOS/Linux) — Cross-platform support
- Agents can only access explicitly mounted directories
- Bash commands execute inside the container, not on your host
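The isolation boundary is the container runtime's mount list: the agent sees only what is mounted in. As a rough sketch (function and image names here are illustrative assumptions; NanoClaw's actual spawn logic lives in src/container-runner.ts and may differ), the command for one agent could be assembled like this:

```typescript
// Hypothetical sketch of how a per-group agent container command might be
// built. "nanoclaw-agent:latest" and buildAgentCommand are assumed names,
// not the project's real API.
function buildAgentCommand(
  runtime: "container" | "docker", // "container" = Apple Container on macOS
  groupDir: string,
): string[] {
  return [
    runtime,
    "run", "--rm",
    // Only the group's own directory is mounted; nothing else on the
    // host is visible inside the container.
    "-v", `${groupDir}:/workspace`,
    "-w", "/workspace",
    "nanoclaw-agent:latest",
  ];
}

const args = buildAgentCommand("docker", "/home/me/groups/family");
console.log(args.join(" "));
```

Bash commands the agent runs then execute against `/workspace` inside that container, so a destructive command can touch at most the mounted group directory.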
📱 Multi-Channel Messaging
Connect your assistant to the apps you use:
- WhatsApp — via /add-whatsapp skill
- Telegram — via /add-telegram skill
- Discord — via /add-discord skill
- Slack — via /add-slack skill
- Gmail — via /add-gmail skill
- Run one or many channels simultaneously
🧠 Per-Group Memory
- Each group gets its own CLAUDE.md memory file
- Isolated filesystem per group
- Each group runs in its own container sandbox
- Main channel (self-chat) for admin control
⏰ Scheduled Tasks
Set up recurring jobs that run Claude and message you back:
- "@Andy send an overview of the sales pipeline every weekday morning at 9am"
- "@Andy review the git history for the past week each Friday and update the README"
- "@Andy every Monday at 8am, compile news on AI developments from Hacker News"
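Under the hood, a request like these has to become a stored schedule plus a prompt for Claude to run. The record shape below is an illustrative assumption for the first example, not NanoClaw's actual schema:

```typescript
// Assumed task record: a cron-like trigger plus the prompt to run.
interface ScheduledTask {
  cron: { minute: number; hour: number; weekdays: number[] }; // 0 = Sunday
  prompt: string;
  group: string;
}

// "Every weekday morning at 9am" from the first example above.
const pipelineReport: ScheduledTask = {
  cron: { minute: 0, hour: 9, weekdays: [1, 2, 3, 4, 5] },
  prompt: "send an overview of the sales pipeline",
  group: "main",
};

// A scheduler polling once a minute only needs an exact-match check.
function isDue(task: ScheduledTask, now: Date): boolean {
  return (
    now.getMinutes() === task.cron.minute &&
    now.getHours() === task.cron.hour &&
    task.cron.weekdays.includes(now.getDay())
  );
}

// Monday 2024-01-08, 09:00 local time
console.log(isDue(pipelineReport, new Date(2024, 0, 8, 9, 0)));
```

When a task fires, the scheduler invokes the agent with the stored prompt and the response is delivered back through the group's channel.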
🐝 Agent Swarms
NanoClaw is the first personal AI assistant to support Agent Swarms — spin up teams of specialized agents that collaborate on complex tasks within your chat.
🌐 Web Access
Search and fetch content from the web, directly integrated into agent capabilities.
Philosophy
AI-Native
- No installation wizard — Claude Code guides setup
- No monitoring dashboard — Ask Claude what's happening
- No debugging tools — Describe the problem and Claude fixes it
- No configuration files — Customization = code changes
Skills Over Features
Instead of adding features to the codebase, contributors submit Claude Code skills:
.claude/skills/add-telegram/SKILL.md
Users run /add-telegram on their fork and get clean code that does exactly what they need — not a bloated system trying to support every use case.
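A skill file is just instructions for Claude Code to follow on your fork. A minimal sketch of what such a file might contain (the frontmatter fields follow the general Claude Code skills format; the body is an illustrative assumption, not copied from the repo):

```markdown
---
name: add-telegram
description: Connect NanoClaw to Telegram via a bot token.
---

1. Ask the user for a bot token from @BotFather.
2. Create a Telegram channel module under src/channels/ that
   self-registers with the channel registry.
3. Store the token in .env and verify the bot receives messages.
```

Because the skill generates code into the user's fork rather than shipping a built-in integration, the core codebase stays small.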
Best Harness, Best Model
NanoClaw runs on the Claude Agent SDK, meaning you're running Claude Code directly. Claude Code's coding and problem-solving capabilities allow it to modify and expand NanoClaw and tailor it to each user.
Quick Start
git clone https://github.com/qwibitai/nanoclaw.git
cd nanoclaw
claude
Then type /setup in the Claude CLI. Claude Code handles everything: dependencies, authentication, container setup, and service configuration.
Architecture
Channels --> SQLite --> Polling loop --> Container (Claude Agent SDK) --> Response
Single Node.js process. Channels self-register at startup. Agents execute in isolated containers. Per-group message queue with concurrency control. IPC via filesystem.
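The per-group concurrency control can be sketched as a map of promise chains, one tail per group, so jobs for the same group run serially while different groups proceed independently (an assumed implementation for illustration, not the actual src/index.ts code):

```typescript
// Sketch: serialize work within a group, allow concurrency across groups.
class GroupQueues {
  private tails = new Map<string, Promise<void>>();

  enqueue(group: string, job: () => Promise<void>): Promise<void> {
    const tail = this.tails.get(group) ?? Promise.resolve();
    const next = tail.then(job);
    // Keep the chain alive even if a job rejects.
    this.tails.set(group, next.catch(() => {}));
    return next;
  }
}

const queues = new GroupQueues();
const order: string[] = [];

async function main() {
  await Promise.all([
    queues.enqueue("family", async () => { order.push("family-1"); }),
    queues.enqueue("family", async () => { order.push("family-2"); }),
    queues.enqueue("work",   async () => { order.push("work-1"); }),
  ]);
  console.log(order);
}

const done = main();
```

The invariant is that "family-1" always completes before "family-2", while "work-1" is free to interleave with either.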
Key files:
| File | Purpose |
|---|---|
| src/index.ts | Orchestrator: state, message loop, agent invocation |
| src/channels/registry.ts | Channel registry (self-registration) |
| src/container-runner.ts | Spawns streaming agent containers |
| src/task-scheduler.ts | Runs scheduled tasks |
| src/db.ts | SQLite operations |
| groups/*/CLAUDE.md | Per-group memory |
Customizing
No config files — just tell Claude Code what you want:
- "Change the trigger word to @Bob"
- "Remember to make responses shorter and more direct"
- "Add a custom greeting when I say good morning"
- "Store conversation summaries weekly"
Or run /customize for guided changes.
NanoClaw vs Alternatives
NanoClaw's category is the lightweight containerized AI assistant with multi-channel messaging. Here is how it compares to its closest alternatives:
| Feature | NanoClaw | OpenClaw | CoPaw |
|---|---|---|---|
| Focus | Lightweight, security-focused | Full-featured personal AI | Multi-channel assistant |
| Stars | 20K ⭐ | 275K ⭐ | 9.1K ⭐ |
| License | MIT | MIT | Apache 2.0 |
| Language | TypeScript | TypeScript | Python |
| Codebase Size | A few files, auditable | ~500K lines, 70+ deps | Medium |
| Container Isolation | ✅ Apple Container / Docker | ❌ Application-level | ❌ Application-level |
| Messaging Channels | ✅ WhatsApp, Telegram, Discord, Slack, Gmail | ✅ WhatsApp, Telegram, Discord, iMessage, more | ✅ DingTalk, Feishu, QQ, Discord, iMessage, Twilio |
| Agent Swarms | ✅ First to support | ❌ | ❌ |
| Memory | ✅ Per-group CLAUDE.md | ✅ Shared memory | ✅ Persistent (ReMe) |
| Scheduled Tasks | ✅ | ✅ | ✅ Cron/heartbeat |
| Config Files | ❌ None (AI-native) | ✅ 53 config files | ✅ |
| Skills System | ✅ Claude Code skills | ✅ | ✅ Custom skills |
| Local LLMs | Via API proxy | ❌ | ✅ llama.cpp, MLX, Ollama |
| Setup | Claude Code /setup | Manual | pip install |
| Web Access | ✅ | ✅ | Via skills |
| Team | qwibitai | openclaw | AgentScope (Alibaba) |
When to choose NanoClaw: You want a security-first AI assistant small enough to audit, with true container isolation (not just permission checks), Agent Swarms, and an AI-native philosophy (no config files, Claude Code drives everything). Best for security-conscious individuals who want full control.
When to choose OpenClaw: You want the most mature, full-featured personal AI assistant with the largest community (275K stars), broadest integrations, and battle-tested infrastructure. You're comfortable with a larger codebase.
When to choose CoPaw: You want a multi-channel assistant focused on Asian messaging platforms (DingTalk, Feishu, QQ) with local LLM support (llama.cpp, MLX, Ollama) and a console UI. Built by Alibaba.
FAQ
Can I use third-party or open-source models?
Yes. Set ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN in your .env to use Ollama, Together AI, Fireworks, or any Anthropic-compatible endpoint.
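A hypothetical .env for routing requests through a local Anthropic-compatible proxy might look like this (the URL and token are placeholders for your own setup):

```shell
# .env -- point the Claude Agent SDK at an Anthropic-compatible endpoint
ANTHROPIC_BASE_URL=http://localhost:8080
ANTHROPIC_AUTH_TOKEN=your-token-here
```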
Is this secure?
Agents run in containers, not behind permission checks. They can only access explicitly mounted directories. The codebase is small enough to actually audit.
Can I run this on Linux?
Yes. Docker is the default runtime and works on both macOS and Linux.
Conclusion
NanoClaw fills a critical gap: it delivers OpenClaw's core functionality — multi-channel messaging, memory, scheduled tasks, web access — in a codebase small enough to understand and audit, with true container-level security isolation. The AI-native philosophy (no config, skills over features, Claude Code as the interface) represents a new paradigm in personal AI assistants.
