docs: align runtime docs with gateway mode (#2868)

Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Eilen Shin 2026-05-12 16:19:21 +08:00 committed by GitHub
parent 20d2d2b373
commit 84f88b6610
21 changed files with 135 additions and 201 deletions

View File

@ -185,9 +185,9 @@ If you need to start services individually:
1. **Start backend service**:
```bash
# Terminal 1: Start Gateway API and embedded LangGraph-compatible runtime (port 8001)
# Terminal 1: Start Gateway API + embedded agent runtime (port 8001)
cd backend
make gateway
make dev
# Terminal 2: Start Frontend (port 3000)
cd frontend
@ -207,7 +207,7 @@ If you need to start services individually:
The nginx configuration provides:
- Unified entry point on port 2026
- Gateway owns `/api/langgraph/*` and translates those public LangGraph-compatible paths to its native `/api/*` routers behind nginx
- Rewrites `/api/langgraph/*` to Gateway's LangGraph-compatible API (8001)
- Routes other `/api/*` endpoints to Gateway API (8001)
- Routes non-API requests to Frontend (3000)
- Same-origin API routing; split-origin or port-forwarded browser clients should use the Gateway `GATEWAY_CORS_ORIGINS` allowlist
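The routing and rewrite rules above can be sketched as a small path-mapping function. This is an illustrative sketch only — the real behavior lives in the nginx configuration, and the upstream names here are stand-ins:

```python
# Illustrative sketch of the nginx routing rules described above
# (hypothetical helper; the actual rules live in the nginx config).

def route(path: str) -> tuple[str, str]:
    """Return (upstream, rewritten_path) for an incoming request path."""
    prefix = "/api/langgraph/"
    if path.startswith(prefix):
        # /api/langgraph/* is rewritten to Gateway's native /api/* routes
        return ("gateway:8001", "/api/" + path[len(prefix):])
    if path.startswith("/api/"):
        # Other /api/* endpoints are proxied to the Gateway unchanged
        return ("gateway:8001", path)
    # Non-API requests are served by the frontend
    return ("frontend:3000", path)
```

For example, `route("/api/langgraph/threads/t1/runs")` maps to the Gateway's native `/api/threads/t1/runs` route, while `route("/chat")` falls through to the frontend.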
@ -231,7 +231,7 @@ deer-flow/
├── backend/ # Backend application
│ ├── src/
│ │ ├── gateway/ # Gateway API and LangGraph-compatible runtime (port 8001)
│ │ ├── agents/ # LangGraph agent definitions
│ │ ├── agents/ # LangGraph agent runtime used by Gateway
│ │ ├── mcp/ # Model Context Protocol integration
│ │ ├── skills/ # Skills system
│ │ └── sandbox/ # Sandbox execution

View File

@ -228,7 +228,7 @@ make down # Stop and remove containers
```
> [!NOTE]
> Le serveur d'agents LangGraph fonctionne actuellement via `langgraph dev` (le serveur CLI open source).
> Le runtime d'agent s'exécute actuellement dans la Gateway. nginx réécrit `/api/langgraph/*` vers l'API compatible LangGraph servie par la Gateway.
Accès : http://localhost:2026
@ -296,8 +296,8 @@ DeerFlow peut recevoir des tâches depuis des applications de messagerie. Les ca
```yaml
channels:
# LangGraph Server URL (default: http://localhost:2024)
langgraph_url: http://localhost:2024
# LangGraph-compatible Gateway API base URL (default: http://localhost:8001/api)
langgraph_url: http://localhost:8001/api
# Gateway API URL (default: http://localhost:8001)
gateway_url: http://localhost:8001

View File

@ -181,7 +181,7 @@ make down # コンテナを停止して削除
```
> [!NOTE]
> LangGraphエージェントサーバーは現在`langgraph dev`オープンソースCLIサーバー経由で実行されます。
> Agentランタイムは現在Gateway内で実行されます。`/api/langgraph/*`はnginxによってGatewayのLangGraph-compatible APIへ書き換えられます。
アクセス: http://localhost:2026
@ -249,8 +249,8 @@ DeerFlowはメッセージングアプリからのタスク受信をサポート
```yaml
channels:
# LangGraphサーバーURLデフォルト: http://localhost:2024
langgraph_url: http://localhost:2024
# LangGraph-compatible Gateway API base URLデフォルト: http://localhost:8001/api
langgraph_url: http://localhost:8001/api
# Gateway API URLデフォルト: http://localhost:8001
gateway_url: http://localhost:8001

View File

@ -184,7 +184,7 @@ make down # 停止并移除容器
```
> [!NOTE]
> 当前 LangGraph agent server 通过开源 CLI 服务 `langgraph dev` 运行
> 当前 Agent 运行时嵌入在 Gateway 中运行,`/api/langgraph/*` 会由 nginx 重写到 Gateway 的 LangGraph-compatible API
访问地址http://localhost:2026
@ -254,8 +254,8 @@ DeerFlow 支持从即时通讯应用接收任务。只要配置完成,对应
```yaml
channels:
# LangGraph Server URL默认http://localhost:2024
langgraph_url: http://localhost:2024
# LangGraph-compatible Gateway API base URL默认http://localhost:8001/api
langgraph_url: http://localhost:8001/api
# Gateway API URL默认http://localhost:8001
gateway_url: http://localhost:8001

View File

@ -56,11 +56,8 @@ export OPENAI_API_KEY="your-api-key"
### Run the Development Server
```bash
# Terminal 1: LangGraph server
# Gateway API + embedded agent runtime
make dev
# Terminal 2: Gateway API
make gateway
```
## Project Structure

View File

@ -11,34 +11,26 @@ DeerFlow is a LangGraph-based AI super agent with sandbox execution, persistent
│ Nginx (Port 2026) │
│ Unified reverse proxy │
└───────┬──────────────────┬───────────┘
│ │
/api/langgraph/* │ │ /api/* (other)
▼ ▼
┌──────────────────────────────────────────────┐
│ Gateway API (8001) │
│ FastAPI REST + LangGraph-compatible runtime │
│ │
│ Models, MCP, Skills, Memory, Uploads, │
│ Artifacts, Threads, Runs, Streaming │
│ │
│ ┌────────────────┐ │
│ │ Lead Agent │ │
│ │ ┌──────────┐ │ │
│ │ │Middleware│ │ │
│ │ │ Chain │ │ │
│ │ └──────────┘ │ │
│ │ ┌──────────┐ │ │
│ │ │ Tools │ │ │
│ │ └──────────┘ │ │
│ │ ┌──────────┐ │ │
│ │ │Subagents │ │ │
│ │ └──────────┘ │ │
│ └────────────────┘ │
└──────────────────────────────────────────────┘
/api/langgraph/* │ /api/* (other)
rewritten to /api/* │
┌────────────────────────────────────────┐
│ Gateway API (8001) │
│ FastAPI REST + agent runtime │
│ │
│ Models, MCP, Skills, Memory, Uploads, │
│ Artifacts, Threads, Runs, Streaming │
│ │
│ ┌────────────────────────────────────┐ │
│ │ Lead Agent │ │
│ │ Middleware Chain, Tools, Subagents │ │
│ └────────────────────────────────────┘ │
└────────────────────────────────────────┘
```
**Request Routing** (via Nginx):
- `/api/langgraph/*` → Gateway API - LangGraph-compatible agent interactions, threads, runs, and streaming translated to native `/api/*` routers
- `/api/langgraph/*` → Gateway LangGraph-compatible API - agent interactions, threads, streaming
- `/api/*` (other) → Gateway API - models, MCP, skills, memory, artifacts, uploads, thread-local cleanup
- `/` (non-API) → Frontend - Next.js web interface
@ -196,7 +188,7 @@ export OPENAI_API_KEY="your-api-key-here"
**Full Application** (from project root):
```bash
make dev # Starts LangGraph + Gateway + Frontend + Nginx
make dev # Starts Gateway + Frontend + Nginx
```
Access at: http://localhost:2026
@ -204,14 +196,11 @@ Access at: http://localhost:2026
**Backend Only** (from backend directory):
```bash
# Terminal 1: LangGraph server
# Gateway API + embedded agent runtime
make dev
# Terminal 2: Gateway API
make gateway
```
Direct access: LangGraph at http://localhost:2024, Gateway at http://localhost:8001
Direct access: Gateway at http://localhost:8001
---
@ -247,7 +236,7 @@ backend/
│ └── utils/ # Utilities
├── docs/ # Documentation
├── tests/ # Test suite
├── langgraph.json # LangGraph server configuration
├── langgraph.json # LangGraph graph registry for tooling/Studio compatibility
├── pyproject.toml # Python dependencies
├── Makefile # Development commands
└── Dockerfile # Container build
@ -365,8 +354,8 @@ If a provider is explicitly enabled but required credentials are missing, or the
```bash
make install # Install dependencies
make dev # Run LangGraph server (port 2024)
make gateway # Run Gateway API (port 8001)
make dev # Run Gateway API + embedded agent runtime (port 8001)
make gateway # Run Gateway API without reload (port 8001)
make lint # Run linter (ruff)
make format # Format code (ruff)
```

View File

@ -561,12 +561,13 @@ location /api/ {
---
## WebSocket Support
## Streaming Support
The LangGraph server supports WebSocket connections for real-time streaming. Connect to:
Gateway's LangGraph-compatible API streams run events via Server-Sent Events (SSE):
```
ws://localhost:2026/api/langgraph/threads/{thread_id}/runs/stream
```http
POST /api/langgraph/threads/{thread_id}/runs/stream
Accept: text/event-stream
```
---
@ -602,13 +603,21 @@ const response = await fetch('/api/models');
const data = await response.json();
console.log(data.models);
// Using EventSource for streaming
const eventSource = new EventSource(
`/api/langgraph/threads/${threadId}/runs/stream`
);
eventSource.onmessage = (event) => {
console.log(JSON.parse(event.data));
};
// Create a run and stream SSE events
const streamResponse = await fetch(`/api/langgraph/threads/${threadId}/runs/stream`, {
method: "POST",
headers: {
"Content-Type": "application/json",
Accept: "text/event-stream",
},
body: JSON.stringify({
input: { messages: [{ role: "user", content: "Hello" }] },
stream_mode: ["values", "messages-tuple", "custom"],
}),
});
const reader = streamResponse.body?.getReader();
// Decode and parse SSE frames from reader in your client code.
```
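Decoding the stream amounts to splitting the `text/event-stream` body into blank-line-delimited frames and reading their `event:`/`data:` fields. A minimal parser sketch, shown here in Python for brevity (this follows the generic SSE wire format, not any DeerFlow-specific helper):

```python
# Minimal SSE frame parser sketch (illustrative, not a DeerFlow API).
# Splits a text/event-stream body into (event, data) pairs.

def parse_sse(raw: str) -> list[tuple[str, str]]:
    events = []
    for frame in raw.split("\n\n"):
        event, data_lines = "message", []
        for line in frame.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        if data_lines:
            # Multi-line data fields are joined per the SSE format
            events.append((event, "\n".join(data_lines)))
    return events
```

A production client should additionally buffer partial frames across reads, since a network chunk can end mid-frame.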
### cURL Examples

View File

@ -20,24 +20,22 @@ This document provides a comprehensive overview of the DeerFlow backend architec
│ └────────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────┬────────────────────────────────────────┘
┌───────────────────────┼───────────────────────┐
│ │ │
▼ ▼ ▼
┌─────────────────────┐ ┌─────────────────────┐ ┌─────────────────────┐
│ Embedded Runtime │ │ Gateway API │ │ Frontend │
│ (inside Gateway) │ │ (Port 8001) │ │ (Port 3000) │
│ │ │ │ │ │
│ - Agent Runtime │ │ - Models API │ │ - Next.js App │
│ - Thread Mgmt │ │ - MCP Config │ │ - React UI │
│ - SSE Streaming │ │ - Skills Mgmt │ │ - Chat Interface │
│ - Checkpointing │ │ - File Uploads │ │ │
│ │ │ - Thread Cleanup │ │ │
│ │ │ - Artifacts │ │ │
└─────────────────────┘ └─────────────────────┘ └─────────────────────┘
│ │
│ ┌─────────────────┘
│ │
▼ ▼
┌───────────────────────┴───────────────────────┐
│ │
▼ ▼
┌─────────────────────────────────────────────┐ ┌─────────────────────┐
│ Gateway API │ │ Frontend │
│ (Port 8001) │ │ (Port 3000) │
│ │ │ │
│ - LangGraph-compatible runs/threads API │ │ - Next.js App │
│ - Embedded Agent Runtime │ │ - React UI │
│ - SSE Streaming │ │ - Chat Interface │
│ - Checkpointing │ │ │
│ - Models, MCP, Skills, Uploads, Artifacts │ │ │
│ - Thread Cleanup │ │ │
└─────────────────────────────────────────────┘ └─────────────────────┘
┌──────────────────────────────────────────────────────────────────────────┐
│ Shared Configuration │
│ ┌─────────────────────────┐ ┌────────────────────────────────────────┐ │
@ -52,9 +50,9 @@ This document provides a comprehensive overview of the DeerFlow backend architec
## Component Details
### Embedded LangGraph Runtime
### Gateway Embedded Agent Runtime
The LangGraph-compatible runtime runs inside the Gateway process and is built on LangGraph for robust multi-agent workflow orchestration.
The agent runtime is embedded in the FastAPI Gateway and built on LangGraph for robust multi-agent workflow orchestration. Nginx rewrites `/api/langgraph/*` to Gateway's native `/api/*` routes, so the public API remains compatible with LangGraph SDK clients without running a separate LangGraph server.
**Entry Point**: `packages/harness/deerflow/agents/lead_agent/agent.py:make_lead_agent`
@ -65,7 +63,7 @@ The LangGraph-compatible runtime runs inside the Gateway process and is built on
- Tool execution orchestration
- SSE streaming for real-time responses
**Configuration**: `langgraph.json`
**Graph registry**: `langgraph.json` remains available for tooling and Studio compatibility.
```json
{
@ -84,6 +82,7 @@ FastAPI application providing REST endpoints plus the public LangGraph-compatibl
**Routers**:
- `models.py` - `/api/models` - Model listing and details
- `thread_runs.py` / `runs.py` - `/api/threads/{id}/runs`, `/api/runs/*` - LangGraph-compatible runs and streaming
- `mcp.py` - `/api/mcp` - MCP server configuration
- `skills.py` - `/api/skills` - Skills management
- `uploads.py` - `/api/threads/{id}/uploads` - File upload
@ -91,7 +90,7 @@ FastAPI application providing REST endpoints plus the public LangGraph-compatibl
- `artifacts.py` - `/api/threads/{id}/artifacts` - Artifact serving
- `suggestions.py` - `/api/threads/{id}/suggestions` - Follow-up suggestion generation
The web conversation delete flow is now split across both backend surfaces: LangGraph handles `DELETE /api/langgraph/threads/{thread_id}` for thread state, then the Gateway `threads.py` router removes DeerFlow-managed filesystem data via `Paths.delete_thread_dir()`.
The web conversation delete flow first deletes Gateway-managed thread state through the LangGraph-compatible route, then the Gateway `threads.py` router removes DeerFlow-managed filesystem data via `Paths.delete_thread_dir()`.
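The two-step delete flow can be sketched as below. Both the `http_delete` callable and the step-2 cleanup path are hypothetical stand-ins — the actual follow-up endpoint is served by the Gateway `threads.py` router and may differ:

```python
# Illustrative sketch of the two-step conversation delete flow
# (hypothetical helper; the real calls live in the web client).

def delete_conversation(thread_id: str, http_delete) -> None:
    """Run both delete steps; http_delete is any callable(path)."""
    # Step 1: delete Gateway-managed thread state via the
    # LangGraph-compatible route
    http_delete(f"/api/langgraph/threads/{thread_id}")
    # Step 2 (assumed path): ask the Gateway threads router to remove
    # DeerFlow-managed filesystem data (Paths.delete_thread_dir())
    http_delete(f"/api/threads/{thread_id}")
```

Ordering matters: thread state is removed first so a failed filesystem cleanup can be retried without leaving a live thread pointing at deleted data.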
### Agent Architecture
@ -354,9 +353,9 @@ SKILL.md Format:
{"input": {"messages": [{"role": "user", "content": "Hello"}]}}
2. Nginx → Gateway API (8001)
Routes `/api/langgraph/*` to the Gateway's LangGraph-compatible runtime
`/api/langgraph/*` is rewritten to Gateway's LangGraph-compatible `/api/*` routes
3. Embedded LangGraph runtime
3. Gateway embedded runtime
a. Load/create thread state
b. Execute middleware chain:
- ThreadDataMiddleware: Set up paths
@ -412,7 +411,7 @@ SKILL.md Format:
### Thread Cleanup Flow
```
1. Client deletes conversation via LangGraph
1. Client deletes conversation via the LangGraph-compatible Gateway route
DELETE /api/langgraph/threads/{thread_id}
2. Web UI follows up with Gateway cleanup

View File

@ -82,10 +82,10 @@ pnpm start
Key environment variables (see `.env.example` for full list):
```bash
# Backend API URLs (optional, uses nginx proxy by default)
# Backend API URL (optional, uses local Next.js/nginx proxy by default)
NEXT_PUBLIC_BACKEND_BASE_URL="http://localhost:8001"
# LangGraph API URLs (optional, uses nginx proxy by default)
NEXT_PUBLIC_LANGGRAPH_BASE_URL="http://localhost:2024"
# LangGraph-compatible API URL (optional, uses local Next.js/nginx proxy by default)
NEXT_PUBLIC_LANGGRAPH_BASE_URL="http://localhost:8001/api"
```
## Project Structure

View File

@ -111,10 +111,9 @@ checkpointer:
```
<Callout type="info">
The LangGraph Server manages its own state separately. The
<code>checkpointer</code> setting in <code>config.yaml</code> applies to the
embedded <code>DeerFlowClient</code> (used in direct Python integrations), not
to the LangGraph Server deployment used by DeerFlow App.
The Gateway embedded runtime uses the <code>checkpointer</code> setting in
<code>config.yaml</code>. The same setting is also used by
<code>DeerFlowClient</code> in direct Python integrations.
</Callout>
### Thread data storage

View File

@ -23,8 +23,7 @@ Services started:
| Service | Port | Description |
| ----------- | ---- | ------------------------ |
| LangGraph | 2024 | DeerFlow Harness runtime |
| Gateway API | 8001 | FastAPI backend |
| Gateway API | 8001 | FastAPI backend + embedded agent runtime |
| Frontend | 3000 | Next.js UI |
| nginx | 2026 | Unified reverse proxy |
@ -36,13 +35,12 @@ Access the app at **http://localhost:2026**.
make stop
```
Stops all four services. Safe to run even if a service is not running.
Stops all services. Safe to run even if a service is not running.
</Tabs.Tab>
<Tabs.Tab>
```
logs/langgraph.log # Agent runtime logs
logs/gateway.log # API gateway logs
logs/gateway.log # API gateway and agent runtime logs
logs/frontend.log # Next.js dev server logs
logs/nginx.log # nginx access/error logs
```
@ -50,7 +48,7 @@ logs/nginx.log # nginx access/error logs
Tail a log in real time:
```bash
tail -f logs/langgraph.log
tail -f logs/gateway.log
```
</Tabs.Tab>
@ -74,7 +72,7 @@ export DEER_FLOW_ROOT=/path/to/deer-flow
docker compose -f docker/docker-compose-dev.yaml up --build
```
Services: nginx, frontend, gateway, langgraph, and optionally provisioner (for K8s-managed sandboxes).
Services: nginx, frontend, gateway, and optionally provisioner (for K8s-managed sandboxes).
Access the app at **http://localhost:2026**.
@ -99,7 +97,7 @@ The `docker-compose*.yaml` files include an `env_file: ../.env` directive that l
### Data persistence
Thread data is stored in `backend/.deer-flow/threads/`. In Docker deployments, this directory is bind-mounted into the langgraph container.
Thread data is stored in `backend/.deer-flow/threads/`. In Docker deployments, this directory is bind-mounted into the gateway container.
To avoid data loss when containers are recreated:
@ -161,14 +159,7 @@ When `USERDATA_PVC_NAME` is set, the provisioner automatically uses subPath (`th
### nginx configuration
nginx routes all traffic. Key environment variables that control routing:
| Variable | Default | Description |
| -------------------- | ---------------- | --------------------------------------- |
| `LANGGRAPH_UPSTREAM` | `langgraph:2024` | LangGraph service address |
| `LANGGRAPH_REWRITE` | `/` | URL rewrite prefix for LangGraph routes |
These are set in the Docker Compose environment and processed by `envsubst` at container startup.
nginx routes all traffic to the frontend or Gateway. `/api/langgraph/*` is rewritten to Gateway's LangGraph-compatible `/api/*` routes, so no separate LangGraph upstream is required.
### Authentication
@ -186,8 +177,7 @@ openssl rand -base64 32
| Service | Minimum | Recommended |
| ------------------------------- | ---------------- | ---------------- |
| LangGraph (agent runtime) | 2 vCPU, 4 GB RAM | 4 vCPU, 8 GB RAM |
| Gateway | 0.5 vCPU, 512 MB | 1 vCPU, 1 GB |
| Gateway + agent runtime | 2 vCPU, 4 GB RAM | 4 vCPU, 8 GB RAM |
| Frontend | 0.5 vCPU, 512 MB | 1 vCPU, 1 GB |
| Sandbox container (per session) | 1 vCPU, 1 GB | 2 vCPU, 2 GB |
@ -199,9 +189,6 @@ After starting, verify the deployment:
# Check Gateway health
curl http://localhost:8001/health
# Check LangGraph health
curl http://localhost:2024/ok
# List configured models (through nginx)
curl http://localhost:2026/api/models
```

View File

@ -25,11 +25,11 @@ DeerFlow App is the reference implementation of what a production DeerFlow exper
| **Streaming responses** | Real-time token streaming with thinking steps and tool call visibility |
| **Artifact viewer** | In-browser preview and download of files and outputs produced by the agent |
| **Extensions UI** | Enable/disable MCP servers and skills without editing config files |
| **Gateway API** | FastAPI-based REST API that bridges the frontend and the LangGraph runtime |
| **Gateway API** | FastAPI-based REST API with the embedded LangGraph-compatible agent runtime |
## Architecture
The DeerFlow App runs as four services behind a single nginx reverse proxy:
The DeerFlow App runs behind a single nginx reverse proxy:
```
┌──────────────────┐
@ -42,19 +42,11 @@ The DeerFlow App runs as four services behind a single nginx reverse proxy:
│ Frontend :3000 │ │ Gateway API :8001 │
│ (Next.js) │ │ (FastAPI) │
└──────────────────┘ └──────────────────────┘
┌─────────┘
┌──────────────────────┐
│ LangGraph :2024 │
│ (DeerFlow Harness) │
└──────────────────────┘
```
- **nginx**: routes requests — `/api/*` to the Gateway, LangGraph streaming endpoints to LangGraph directly, and everything else to the frontend.
- **Frontend** (Next.js + React): the browser UI. Communicates with both the Gateway and LangGraph.
- **Gateway** (FastAPI): handles API operations — model listing, agent CRUD, memory, extensions management, file uploads.
- **LangGraph**: the DeerFlow Harness runtime. Manages thread state, agent execution, and streaming.
- **nginx**: routes requests — `/api/*` and `/api/langgraph/*` to Gateway, and everything else to the frontend.
- **Frontend** (Next.js + React): the browser UI. Communicates with Gateway.
- **Gateway** (FastAPI): handles API operations and the embedded LangGraph-compatible runtime for thread state, agent execution, and streaming.
## Technology stack
@ -64,7 +56,7 @@ The DeerFlow App runs as four services behind a single nginx reverse proxy:
| Gateway | FastAPI, Python 3.12, uvicorn |
| Agent runtime | LangGraph, LangChain, DeerFlow Harness |
| Reverse proxy | nginx |
| State persistence | LangGraph Server (default) + optional SQLite/PostgreSQL checkpointer |
| State persistence | Gateway runtime + optional SQLite/PostgreSQL checkpointer |
<Cards num={2}>
<Cards.Card title="Quick Start" href="/docs/application/quick-start" />

View File

@ -15,15 +15,13 @@ All services write logs to the `logs/` directory when started with `make dev`:
| File | Service |
| -------------------- | ------------------------------------ |
| `logs/langgraph.log` | LangGraph / DeerFlow Harness runtime |
| `logs/gateway.log` | FastAPI Gateway API |
| `logs/gateway.log` | FastAPI Gateway API and agent runtime |
| `logs/frontend.log` | Next.js frontend dev server |
| `logs/nginx.log` | nginx reverse proxy |
Tail logs in real time:
```bash
tail -f logs/langgraph.log
tail -f logs/gateway.log
```
@ -41,9 +39,6 @@ Verify each service is responding:
# Gateway health
curl http://localhost:8001/health
# LangGraph health
curl http://localhost:2024/ok
# Through nginx (verifies full proxy chain)
curl http://localhost:2026/api/models
```
@ -66,7 +61,7 @@ grep config_version config.yaml
### The app loads but the agent doesn't respond
1. Check `logs/langgraph.log` for startup errors.
1. Check `logs/gateway.log` for startup errors.
2. Verify your model is correctly configured in `config.yaml` with a valid API key.
3. Confirm the API key environment variable is set in the shell that ran `make dev`.
4. Test the model endpoint directly with `curl` to rule out network issues.
@ -126,7 +121,7 @@ Connection refused: http://provisioner:8002
If MCP tools appear in `extensions_config.json` but are not available in the agent:
1. Check `logs/langgraph.log` for MCP initialization errors.
1. Check `logs/gateway.log` for MCP initialization errors.
2. Verify the MCP server command is installed (`npx`, `uvx`, or the relevant binary).
3. Test the server command manually to confirm it starts without errors.
4. Set `log_level: debug` to see detailed MCP loading output.
@ -137,7 +132,7 @@ If MCP tools appear in `extensions_config.json` but are not available in the age
- Verify `memory.enabled: true` in `config.yaml`.
- Check that the storage path is writable: `ls -la backend/.deer-flow/`.
- Look for memory update errors in `logs/langgraph.log` (search for "memory").
- Look for memory update errors in `logs/gateway.log` (search for "memory").
## Data backup

View File

@ -1,6 +1,6 @@
---
title: Quick Start
description: This guide walks you through starting DeerFlow App on your local machine using the `make dev` workflow. All four services (LangGraph, Gateway, Frontend, nginx) start together and are accessible through a single URL.
description: This guide walks you through starting DeerFlow App on your local machine using the `make dev` workflow. Gateway, Frontend, and nginx start together and are accessible through a single URL.
---
import { Callout, Cards, Steps } from "nextra/components";
@ -12,7 +12,7 @@ import { Callout, Cards, Steps } from "nextra/components";
Python 3.12+, Node.js 22+, and at least one LLM API key.
</Callout>
This guide walks you through starting DeerFlow App on your local machine using the `make dev` workflow. All four services (LangGraph, Gateway, Frontend, nginx) start together and are accessible through a single URL.
This guide walks you through starting DeerFlow App on your local machine using the `make dev` workflow. Gateway, Frontend, and nginx start together and are accessible through a single URL.
## Prerequisites
@ -88,8 +88,7 @@ make dev
This starts:
- LangGraph server on port `2024`
- Gateway API on port `8001`
- Gateway API and embedded agent runtime on port `8001`
- Frontend on port `3000`
- nginx reverse proxy on port `2026`
@ -113,15 +112,13 @@ Log files:
| Service | Log file |
| --------- | -------------------- |
| LangGraph | `logs/langgraph.log` |
| Gateway | `logs/gateway.log` |
| Frontend | `logs/frontend.log` |
| nginx | `logs/nginx.log` |
<Callout type="tip">
If something is not working, check the log files first. Most startup errors
(missing API keys, config parsing failures) appear in `logs/langgraph.log` or
`logs/gateway.log`.
(missing API keys, config parsing failures) appear in `logs/gateway.log`.
</Callout>
<Cards num={2}>

View File

@ -68,7 +68,7 @@ DeerFlow ships with the following public skills:
### Discovery and loading
`load_skills()` in `skills/loader.py` scans both `public/` and `custom/` directories under the configured skills path. It re-reads `ExtensionsConfig.from_file()` on every call, which means enabling or disabling a skill through the Gateway API takes effect immediately in the running LangGraph server without a restart.
`load_skills()` in `skills/loader.py` scans both `public/` and `custom/` directories under the configured skills path. It re-reads `ExtensionsConfig.from_file()` on every call, which means enabling or disabling a skill through the Gateway API takes effect immediately in the running agent runtime without a restart.
### Parsing

View File

@ -215,7 +215,6 @@ BETTER_AUTH_SECRET=local-dev-secret-at-least-32-chars
| `DEER_FLOW_CONFIG_PATH` | 自动发现 | `config.yaml` 的绝对路径 |
| `LOG_LEVEL` | `info` | 日志详细程度(`debug`/`info`/`warning`/`error` |
| `DEER_FLOW_ROOT` | 仓库根目录 | 用于 Docker 中的技能和线程挂载 |
| `LANGGRAPH_UPSTREAM` | `langgraph:2024` | nginx 代理的 LangGraph 地址 |
<Cards num={2}>
<Cards.Card title="Harness 配置" href="/docs/harness/configuration" />

View File

@ -23,8 +23,7 @@ make dev
| 服务 | 端口 | 描述 |
| ----------- | ---- | ----------------------- |
| LangGraph | 2024 | DeerFlow Harness 运行时 |
| Gateway API | 8001 | FastAPI 后端 |
| Gateway API | 8001 | FastAPI 后端 + 嵌入式 Agent 运行时 |
| 前端 | 3000 | Next.js 界面 |
| nginx | 2026 | 统一反向代理 |
@ -36,13 +35,12 @@ make dev
make stop
```
停止所有四个服务。即使某个服务没有运行也可以安全执行。
停止所有服务。即使某个服务没有运行也可以安全执行。
</Tabs.Tab>
<Tabs.Tab>
```
logs/langgraph.log # Agent 运行时日志
logs/gateway.log # API Gateway 日志
logs/gateway.log # API Gateway 和 Agent 运行时日志
logs/frontend.log # Next.js 开发服务器日志
logs/nginx.log # nginx 访问/错误日志
```
@ -50,7 +48,7 @@ logs/nginx.log # nginx 访问/错误日志
实时追踪日志:
```bash
tail -f logs/langgraph.log
tail -f logs/gateway.log
```
</Tabs.Tab>
@ -96,7 +94,7 @@ BETTER_AUTH_SECRET=your-secret-here-min-32-chars
### 数据持久化
线程数据存储在 `backend/.deer-flow/threads/`。在 Docker 部署中,此目录被绑定挂载到 langgraph 容器中。
线程数据存储在 `backend/.deer-flow/threads/`。在 Docker 部署中,此目录会绑定挂载到 gateway 容器中。
为避免容器重建时数据丢失:
@ -156,14 +154,7 @@ SKILLS_PVC_NAME=deer-flow-skills-pvc
### nginx 配置
nginx 路由所有流量,控制路由的关键环境变量:
| 变量 | 默认值 | 描述 |
| -------------------- | ---------------- | ----------------------------- |
| `LANGGRAPH_UPSTREAM` | `langgraph:2024` | LangGraph 服务地址 |
| `LANGGRAPH_REWRITE` | `/` | LangGraph 路由的 URL 重写前缀 |
这些在 Docker Compose 环境中设置,并在容器启动时由 `envsubst` 处理。
nginx 将流量路由到前端或 Gateway。`/api/langgraph/*` 会被重写到 Gateway 的 LangGraph-compatible `/api/*` 路由,因此不需要单独的 LangGraph upstream。
### 认证配置
@ -181,8 +172,7 @@ openssl rand -base64 32
| 服务 | 最低配置 | 推荐配置 |
| ------------------------- | ---------------- | ---------------- |
| LangGraphAgent 运行时) | 2 vCPU、4 GB RAM | 4 vCPU、8 GB RAM |
| Gateway | 0.5 vCPU、512 MB | 1 vCPU、1 GB |
| Gateway + Agent 运行时 | 2 vCPU、4 GB RAM | 4 vCPU、8 GB RAM |
| 前端 | 0.5 vCPU、512 MB | 1 vCPU、1 GB |
| 沙箱容器(每会话) | 1 vCPU、1 GB | 2 vCPU、2 GB |
@ -194,9 +184,6 @@ openssl rand -base64 32
# 检查 Gateway 健康状态
curl http://localhost:8001/health
# 检查 LangGraph 健康状态
curl http://localhost:2024/ok
# 通过 nginx 列出配置的模型(验证完整代理链)
curl http://localhost:2026/api/models
```

View File

@ -25,11 +25,11 @@ DeerFlow 应用是 DeerFlow 生产体验的参考实现。它将 Harness 运行
| **流式响应** | 实时 token 流式传输,带思考步骤和工具调用可见性 |
| **产出物查看器** | Agent 生成文件和输出的浏览器内预览和下载 |
| **扩展界面** | 无需编辑配置文件即可启用/禁用 MCP 服务器和技能 |
| **Gateway API** | 桥接前端和 LangGraph 运行时的基于 FastAPI 的 REST API |
| **Gateway API** | 基于 FastAPI 的 REST API,并内置 LangGraph-compatible Agent 运行时 |
## 架构
DeerFlow 应用以四个服务的形式运行,通过单个 nginx 反向代理提供:
DeerFlow 应用通过单个 nginx 反向代理提供:
```
┌──────────────────┐
@ -42,19 +42,11 @@ DeerFlow 应用以四个服务的形式运行,通过单个 nginx 反向代理
│ 前端 :3000 │ │ Gateway API :8001 │
│ (Next.js) │ │ (FastAPI) │
└──────────────────┘ └──────────────────────┘
┌─────────┘
┌──────────────────────┐
│ LangGraph :2024 │
│ (DeerFlow Harness) │
└──────────────────────┘
```
- **nginx**:路由请求——`/api/*` 到 GatewayLangGraph 流式端点到 LangGraph其余到前端。
- **前端**Next.js + React浏览器界面与 Gateway 和 LangGraph 通信。
- **Gateway**FastAPI处理 API 操作——模型列表、Agent CRUD、记忆、扩展管理、文件上传。
- **LangGraph**DeerFlow Harness 运行时管理线程状态、Agent 执行和流式传输。
- **nginx**:路由请求——`/api/*` 和 `/api/langgraph/*` 到 Gateway其余到前端。
- **前端**Next.js + React浏览器界面与 Gateway 通信。
- **Gateway**FastAPI处理 API 操作,并通过内置 LangGraph-compatible 运行时管理线程状态、Agent 执行和流式传输。
## 技术栈
@ -64,7 +56,7 @@ DeerFlow 应用以四个服务的形式运行,通过单个 nginx 反向代理
| Gateway | FastAPI、Python 3.12、uvicorn |
| Agent 运行时 | LangGraph、LangChain、DeerFlow Harness |
| 反向代理 | nginx |
| 状态持久化 | LangGraph Server默认+ 可选 SQLite/PostgreSQL 检查点 |
| 状态持久化 | Gateway 运行时 + 可选 SQLite/PostgreSQL 检查点 |
<Cards num={2}>
<Cards.Card title="快速上手" href="/docs/application/quick-start" />

View File

@ -15,16 +15,14 @@ DeerFlow 应用在 `logs/` 目录中写入每个服务的日志:
| 文件 | 内容 |
| -------------------- | -------------------------------------- |
| `logs/langgraph.log` | Agent 运行时、工具调用、LangGraph 错误 |
| `logs/gateway.log` | API 请求/响应、Gateway 错误 |
| `logs/gateway.log` | API 请求/响应、Agent 运行时和 Gateway 错误 |
| `logs/frontend.log` | Next.js 服务器日志 |
| `logs/nginx.log` | 代理访问和错误日志 |
**实时追踪日志**
```bash
tail -f logs/langgraph.log # 查看 Agent 活动
tail -f logs/gateway.log # 查看 API 请求
tail -f logs/gateway.log # 查看 API 请求和 Agent 活动
```
**调整日志级别**
@ -42,9 +40,6 @@ DeerFlow 暴露健康检查端点:
# Gateway 健康状态
curl http://localhost:8001/health
# LangGraph 健康状态
curl http://localhost:2024/ok
# 通过 nginx 完整代理链验证
curl http://localhost:2026/api/models
```
@ -68,8 +63,8 @@ make config-upgrade
**诊断**
```bash
# 检查 LangGraph 日志中的模型错误
grep -i "error\|apikey\|unauthorized" logs/langgraph.log | tail -20
# 检查 Gateway 日志中的模型错误
grep -i "error\|apikey\|unauthorized" logs/gateway.log | tail -20
```
**解决**
@ -118,13 +113,13 @@ SKIP_ENV_VALIDATION=1 pnpm build
### MCP 服务器连接失败
**症状**MCP 工具未出现,`logs/langgraph.log` 中有超时错误。
**症状**MCP 工具未出现,`logs/gateway.log` 中有超时错误。
**诊断**
```bash
# 检查 MCP 相关错误
grep -i "mcp\|timeout" logs/langgraph.log | tail -20
grep -i "mcp\|timeout" logs/gateway.log | tail -20
```
**解决**

View File

@ -1,6 +1,6 @@
---
title: 快速上手
description: 本指南引导你使用 `make dev` 工作流在本地机器上启动 DeerFlow 应用。所有四个服务LangGraph、Gateway、前端、nginx一起启动,通过单个 URL 访问。
description: 本指南引导你使用 `make dev` 工作流在本地机器上启动 DeerFlow 应用。Gateway、前端和 nginx 会一起启动,通过单个 URL 访问。
---
import { Callout, Cards, Steps } from "nextra/components";
@ -12,7 +12,7 @@ import { Callout, Cards, Steps } from "nextra/components";
3.12+、Node.js 22+ 的机器,以及至少一个 LLM API Key。
</Callout>
本指南引导你使用 `make dev` 工作流在本地机器上启动 DeerFlow 应用。所有四个服务LangGraph、Gateway、前端、nginx一起启动,通过单个 URL 访问。
本指南引导你使用 `make dev` 工作流在本地机器上启动 DeerFlow 应用。Gateway、前端和 nginx 会一起启动,通过单个 URL 访问。
## 前置条件
@ -88,8 +88,7 @@ make dev
这会启动:
- LangGraph 服务,端口 `2024`
- Gateway API端口 `8001`
- Gateway API 和嵌入式 Agent 运行时,端口 `8001`
- 前端,端口 `3000`
- nginx 反向代理,端口 `2026`
@ -113,15 +112,13 @@ make stop
| 服务 | 日志文件 |
| --------- | -------------------- |
| LangGraph | `logs/langgraph.log` |
| Gateway | `logs/gateway.log` |
| 前端 | `logs/frontend.log` |
| nginx | `logs/nginx.log` |
<Callout type="tip">
如果有问题,先检查日志文件。大多数启动错误(缺失 API
Key、配置解析失败会出现在 <code>logs/langgraph.log</code> 或{" "}
<code>logs/gateway.log</code> 中。
Key、配置解析失败会出现在 <code>logs/gateway.log</code> 中。
</Callout>
<Cards num={2}>

View File

@ -14,8 +14,8 @@ DeerFlow exposes two API surfaces behind an Nginx reverse proxy:
| Service | Direct Port | Via Proxy | Purpose |
|----------------|-------------|----------------------------------|----------------------------------|
| Gateway API | 8001 | `$DEERFLOW_GATEWAY_URL` | REST endpoints (models, skills, memory, uploads) |
| LangGraph API | 2024 | `$DEERFLOW_LANGGRAPH_URL` | Agent threads, runs, streaming |
| Gateway API | 8001 | `$DEERFLOW_GATEWAY_URL` | REST endpoints and embedded agent runtime |
| LangGraph-compatible API | 8001 | `$DEERFLOW_LANGGRAPH_URL` | Agent threads, runs, streaming |
## Environment Variables