DeerFlow: The Complete Guide to ByteDance's Open-Source SuperAgent Harness
#1 on GitHub Trending. ByteDance's DeerFlow 2.0 is an open-source SuperAgent harness — not a framework you wire together, but a batteries-included runtime where agents research, code, and create. With 26,000+ GitHub stars, Docker sandboxes, long-term memory, extensible skills, and sub-agent orchestration, DeerFlow handles tasks that take minutes to hours.
What Is DeerFlow?
DeerFlow (Deep Exploration and Efficient Research Flow) started as a deep research framework. The community pushed it beyond research — data pipelines, slide decks, dashboards, content workflows. So ByteDance rebuilt it from scratch.
DeerFlow 2.0 is a SuperAgent harness built on LangGraph and LangChain. It ships with everything an agent needs: a filesystem, memory, skills, sandboxed execution, and the ability to plan and spawn sub-agents.
- Language: Python
- License: MIT
- Stars: 26,000+ ⭐
- Forks: 3,100+
- Built by: ByteDance
- Website: deerflow.tech
- v2.0: Ground-up rewrite (shares no code with v1)
Core Features
Skills & Tools
Skills are what let DeerFlow take on almost anything. Each is a structured capability module — a Markdown file (SKILL.md) defining a workflow, best practices, and supporting resources.
Built-in skills:
- Research
- Report generation
- Slide creation
- Web page generation
- Image and video generation
Skills load progressively — only when the task needs them. Custom skills drop into /mnt/skills/custom/.
Built-in tools: Web search, web fetch, file operations, bash execution. Extend via MCP servers and Python functions.
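A custom skill is just another SKILL.md dropped into /mnt/skills/custom/. Here's a minimal sketch of what one might look like — the frontmatter fields and section names are assumptions for illustration; check the repository's built-in skills for the actual schema:

```markdown
---
name: changelog-writer
description: Draft a release changelog from git history
---

# Changelog Writer

## Workflow
1. Run `git log --oneline <last-tag>..HEAD` to collect commits.
2. Group commits into Features, Fixes, and Chores.
3. Write the result to /mnt/user-data/outputs/CHANGELOG.md.

## Best Practices
- One line per entry; link PRs where available.
```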
Claude Code Integration
The claude-to-deerflow skill lets you interact with DeerFlow directly from Claude Code:
npx skills add https://github.com/bytedance/deer-flow --skill claude-to-deerflow
Capabilities: send messages, choose execution modes (flash/standard/pro/ultra), check health, list models/skills/agents, manage threads, upload files.
Sub-Agents
Complex tasks decompose automatically. The lead agent spawns sub-agents on the fly — each with scoped context, tools, and termination conditions. Sub-agents run in parallel when possible, report structured results, and the lead synthesizes everything.
A research task might fan out into a dozen sub-agents, each exploring a different angle, then converge into a single report — or a website — or a slide deck with generated visuals.
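The fan-out/converge pattern is easy to sketch in plain Python. DeerFlow's real orchestration runs through LangGraph, so the names below (`run_subagent`, `lead_agent`, the angle list) are illustrative only:

```python
import asyncio

async def run_subagent(angle: str) -> str:
    """Hypothetical sub-agent: scoped to one research angle, returns
    a structured summary once it hits its termination condition."""
    await asyncio.sleep(0)  # stand-in for tool calls / model turns
    return f"findings on {angle}"

async def lead_agent(task: str) -> str:
    # Decompose the task into independent angles (illustrative).
    angles = [f"{task}: angle {i}" for i in range(3)]
    # Fan out: sub-agents run in parallel with isolated context.
    results = await asyncio.gather(*(run_subagent(a) for a in angles))
    # Converge: the lead synthesizes structured results into one report.
    return "\n".join(results)

report = asyncio.run(lead_agent("quantum batteries"))
```

The key property is that each sub-agent sees only its own slice of the task; the lead sees only the structured results, not the sub-agents' transcripts.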
Sandbox & File System
Each task runs inside an isolated Docker container with a full filesystem:
/mnt/user-data/
├── uploads/ ← your files
├── workspace/ ← agents' working directory
└── outputs/ ← final deliverables
The agent reads, writes, edits files, executes bash commands, views images. All sandboxed, all auditable, zero contamination between sessions.
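The directory contract above is simple enough to reproduce locally. This is plain Python illustrating how a deliverable flows from workspace/ to outputs/ — not DeerFlow code, and it uses a temporary root in place of /mnt/user-data:

```python
import tempfile
from pathlib import Path

# Stand-in for the container's /mnt/user-data tree.
root = Path(tempfile.mkdtemp()) / "user-data"
for sub in ("uploads", "workspace", "outputs"):
    (root / sub).mkdir(parents=True)

# The agent drafts in workspace/, then promotes the final artifact to outputs/.
draft = root / "workspace" / "report.md"
draft.write_text("# Draft findings\n")
final = root / "outputs" / "report.md"
final.write_text(draft.read_text())
```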
Context Engineering
- Isolated Sub-Agent Context — Sub-agents can't see each other's context
- Summarization — Completed sub-tasks summarized, intermediate results offloaded to filesystem
- Aggressive compression — Stays sharp across long multi-step tasks
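The summarize-and-offload idea can be sketched in a few lines: when a sub-task completes, its full transcript moves to disk and only a one-line summary stays in context. The function names and the crude summary heuristic here are assumptions, not DeerFlow's API:

```python
import tempfile
from pathlib import Path

scratch = Path(tempfile.mkdtemp())  # stand-in for the sandbox filesystem

def offload(context: list[str], task_id: str, transcript: str) -> None:
    """Persist the full transcript; keep only a short summary in context."""
    (scratch / f"{task_id}.log").write_text(transcript)
    summary = transcript.splitlines()[0][:80]  # crude one-line summary
    context.append(f"[{task_id} done] {summary} (full log on disk)")

context: list[str] = []
offload(context, "subtask-1", "Collected 12 sources on topic X\n...details...")
```

Context stays small no matter how long the transcript grows; anything the lead agent later needs can be re-read from disk.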
Long-Term Memory
Most agents forget when the conversation ends. DeerFlow remembers.
Persistent memory across sessions: your profile, preferences, technical stack, recurring workflows. Memory stored locally, under your control.
4 Execution Modes
| Mode | Use Case |
|---|---|
| Flash | Fast, single-pass responses |
| Standard | Balanced execution |
| Pro | Planning mode with structured decomposition |
| Ultra | Full sub-agent orchestration for complex tasks |
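The four modes map naturally onto a small config. The knobs below (`planning`, `max_subagents`) are illustrative assumptions about what each mode enables, not DeerFlow's real settings:

```python
MODES = {
    "flash":    {"planning": False, "max_subagents": 0},  # fast single pass
    "standard": {"planning": False, "max_subagents": 1},  # balanced
    "pro":      {"planning": True,  "max_subagents": 1},  # structured decomposition
    "ultra":    {"planning": True,  "max_subagents": 8},  # full orchestration
}

def configure(mode: str) -> dict:
    if mode not in MODES:
        raise ValueError(f"unknown mode: {mode}")
    return MODES[mode]
```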
Embedded Python Client
Use DeerFlow as a Python library without HTTP:
from src.client import DeerFlowClient

client = DeerFlowClient()

# Chat
response = client.chat("Analyze this paper", thread_id="my-thread")

# Streaming
for event in client.stream("hello"):
    if event.type == "messages-tuple" and event.data.get("type") == "ai":
        print(event.data["content"])

# Management
models = client.list_models()
skills = client.list_skills()
client.upload_files("thread-1", ["./report.pdf"])
DeerFlow vs Alternatives
Category: This tool is a SuperAgent harness / autonomous multi-agent platform.
| Feature | DeerFlow 2.0 | LangGraph | OpenAI Codex |
|---|---|---|---|
| Focus | SuperAgent harness | Graph agent framework | Coding agent CLI |
| Stars | 25.9K ⭐ | 25.9K ⭐ | 63.9K ⭐ |
| License | MIT | MIT | Apache 2.0 |
| Language | Python | Python | Rust |
| Built by | ByteDance | LangChain | OpenAI |
| Batteries Included | ✅ Full harness | ❌ Framework (build your own) | ✅ Terminal agent |
| Skills System | ✅ SKILL.md + Custom | ❌ | ❌ |
| Sub-Agent Spawning | ✅ Dynamic, parallel | ✅ (manual graph) | ❌ |
| Docker Sandbox | ✅ Isolated containers | ❌ | ✅ Sandbox |
| File System | ✅ Full FS in container | ❌ | ✅ Local FS |
| Long-Term Memory | ✅ Persistent, local | ❌ (add your own) | ❌ |
| Context Engineering | ✅ Summarization, isolation | ❌ (manual) | ❌ |
| Claude Code Integration | ✅ claude-to-deerflow | ❌ | ❌ (competitor) |
| Execution Modes | ✅ Flash/Standard/Pro/Ultra | ❌ | ✅ (suggest/auto-edit/full-auto) |
| Embedded Python Client | ✅ DeerFlowClient | ✅ | ❌ |
| Report/Slide Generation | ✅ Built-in skills | ❌ | ❌ |
| Image/Video Generation | ✅ Built-in skills | ❌ | ❌ |
| Web Page Generation | ✅ Built-in skills | ❌ | ❌ |
| MCP Support | ✅ MCP servers | ❌ | ❌ |
| Model-Agnostic | ✅ OpenAI-compatible | ✅ | ❌ (OpenAI only) |
| Deep Research | ✅ (v1 origin) | ❌ | ❌ |
| Code-Only Focus | ❌ | ❌ | ✅ Optimized for code |
| GitHub Trending #1 | ✅ Feb 28, 2026 | ✅ | ✅ |
When to choose DeerFlow: You want a batteries-included SuperAgent that researches, codes, generates reports/slides/sites, and remembers you across sessions — all in Docker sandboxes with extensible skills and sub-agent orchestration. Built by ByteDance.
When to choose LangGraph: You want a low-level graph framework to build custom agent architectures from scratch. Maximum flexibility, minimum opinion. DeerFlow is built on top of it.
When to choose OpenAI Codex: You want a lightweight coding agent in your terminal, optimized for code tasks, made by OpenAI. Fast, sandboxed, but code-focused only.
Quick Start
git clone https://github.com/bytedance/deer-flow.git
cd deer-flow
# Configure
cp .env.example .env
# Add your LLM API keys
# Run
docker compose up # or run locally
# Dashboard at http://localhost:2026
Conclusion
DeerFlow 2.0 is what happens when ByteDance rebuilds a deep research tool into a full SuperAgent harness. Skills that load on demand, sub-agents that fan out in parallel, Docker sandboxes with real file systems, long-term memory that persists across sessions, and 4 execution modes from flash to ultra. The Claude Code integration, embedded Python client, and MCP support make it a platform, not just a tool. At 26K+ stars and #1 on GitHub Trending, DeerFlow is defining what a SuperAgent should be.
