Xinmin Zeng f1a0ab699a
fix(tools): preserve tool_search promotions across re-entrant get_available_tools (#2885)
* fix(tools): preserve tool_search promotions across re-entrant get_available_tools

Closes #2884.

``get_available_tools`` used to unconditionally call
``reset_deferred_registry()`` and rebuild a fresh ``DeferredToolRegistry``
on every invocation. That works for the first call of a request (the
ContextVar starts at its default of ``None``), but any RE-ENTRANT call
during the same async context — e.g. ``task_tool`` building a subagent's
toolset, or a custom middleware that rebuilds tools mid-run — wiped any
``tool_search`` promotions the parent agent had already made. The
``DeferredToolFilterMiddleware`` would then re-hide those tools from the
next model call, leaving the agent able to see a tool's name (via the
prior ``tool_search`` result that's still in conversation history) but
unable to invoke it.

Fix: when the ContextVar already holds a registry, reuse it instead of
rebuilding. Fresh requests still get a fresh registry because each new
graph run starts in a new asyncio task with the ContextVar at ``None``.
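
The shape of the fix, as a minimal standalone sketch (the real ``DeferredToolRegistry`` and ``get_available_tools`` have more moving parts; the names below only mirror the description above):

from contextvars import ContextVar

class DeferredToolRegistry:
    """Stand-in for the real registry: tracks which tools are still hidden."""

    def __init__(self, deferred: set[str]) -> None:
        self.deferred = set(deferred)

    def promote(self, name: str) -> None:
        self.deferred.discard(name)

_registry_var: ContextVar[DeferredToolRegistry | None] = ContextVar("deferred_registry", default=None)

def get_available_tools(all_deferred: set[str]) -> DeferredToolRegistry:
    registry = _registry_var.get()
    if registry is None:
        # First call in this async context: build a fresh registry.
        registry = DeferredToolRegistry(all_deferred)
        _registry_var.set(registry)
    # Re-entrant call: reuse the existing registry so earlier promote() calls survive.
    return registry

# A promotion made by the parent agent is still visible to a re-entrant caller.
reg = get_available_tools({"fake_calculator", "other_tool"})
reg.promote("fake_calculator")
assert "fake_calculator" not in get_available_tools({"fake_calculator", "other_tool"}).deferred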

## Verification

- Unit-level reproduction (``test_get_available_tools_resets_registry_wiping_promotion``):
  promote a tool in the registry, call ``get_available_tools`` again, assert
  the promotion is preserved. Fails on main, passes on this branch.

- Graph-execution reproduction (two tests): drive a real
  ``langchain.agents.create_agent`` graph with the real
  ``DeferredToolFilterMiddleware`` through two model turns, including one
  that issues a re-entrant ``get_available_tools`` call to simulate the
  task_tool subagent path.

- Real-LLM end-to-end (``test_deferred_tool_promotion_real_llm.py``,
  opt-in via ``ONEAPI_E2E=1``): drives the same flow against a real
  OpenAI-compatible model (verified on GPT-5.4-mini through the one-api
  gateway), watches the model call the promoted ``fake_calculator``
  through the deferred-filter middleware, and asserts the right arithmetic
  result. Passes against the fixed branch.

- Companion update to ``test_tool_deduplication.py``: dropped the
  ``@patch("deerflow.tools.tools.reset_deferred_registry")`` decorators
  because the symbol is no longer imported there.

- Test fixtures in the new files patch ``deerflow.tools.tools.get_app_config``
  with a minimal ``model_construct``-ed ``AppConfig`` instead of calling
  the real loader, so they never trigger ``_apply_singleton_configs`` and
  never leak ``_memory_config``/``_title_config``/… mutations into the
  rest of the suite.

Full backend suite: 3208 passed / 14 skipped / 0 failed. ruff check + format clean.
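
A pytest-style restatement of the unit-level check, reusing the stand-in ``get_available_tools`` from the sketch above (the real test also patches ``get_app_config`` and the MCP cache, omitted here):

def test_get_available_tools_preserves_promotions_across_reentrant_calls():
    tools = {"fake_calculator", "other_tool"}
    registry = get_available_tools(tools)
    registry.promote("fake_calculator")

    # Simulates task_tool (or a middleware) rebuilding the toolset mid-run.
    reentrant = get_available_tools(tools)

    assert reentrant is registry
    assert "fake_calculator" not in reentrant.deferred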

* fix(tools): address Copilot review on #2885

- tools.py: rewrite the reuse-path comment to spell out (a) why we don't
  reconcile the registry against the current ``mcp_tools`` snapshot — the
  MCP cache doesn't refresh mid-graph-run, the lead agent's ``ToolNode``
  is already bound to the previous tool set anyway, and ``promote()``
  drops the entry so a naive re-sync misclassifies promotions as new
  tools — and (b) why the log uses ``max(0, …)`` to avoid negative
  counts when the cache shrinks between snapshots.
- Replace direct ``ts_mod._registry_var.set(None)`` in test fixtures with
  the public ``reset_deferred_registry()`` helper so tests don't couple
  to module internals.
- Correct the docstring path in ``test_deferred_tool_registry_promotion.py``
  to match the actual monkeypatch target (``deerflow.mcp.cache.get_cached_mcp_tools``).
- Rename
  ``test_get_available_tools_resets_registry_wiping_promotion`` to
  ``test_get_available_tools_preserves_promotions_across_reentrant_calls``
  so the test name describes the contract being asserted, not the bug it
  originally reproduced.

Full backend suite: 3208 passed / 14 skipped. Real-LLM e2e: 1 passed.
2026-05-13 23:45:47 +08:00

DeerFlow Backend

DeerFlow is a LangGraph-based AI super agent with sandbox execution, persistent memory, and extensible tool integration. The backend enables AI agents to execute code, browse the web, manage files, delegate tasks to subagents, and retain context across conversations - all in isolated, per-thread environments.


Architecture

                        ┌──────────────────────────────────────┐
                        │          Nginx (Port 2026)           │
                        │      Unified reverse proxy           │
                        └───────┬──────────────────┬───────────┘
                                │                  │
            /api/langgraph/*    │                  │   /api/* (other)
            rewritten to /api/* │                  │
                                ▼                  ▼
               ┌────────────────────────────────────────┐
               │        Gateway API (8001)              │
               │        FastAPI REST + agent runtime    │
               │                                        │
               │ Models, MCP, Skills, Memory, Uploads,  │
               │ Artifacts, Threads, Runs, Streaming    │
               │                                        │
               │ ┌────────────────────────────────────┐ │
               │ │ Lead Agent                         │ │
               │ │ Middleware Chain, Tools, Subagents │ │
               │ └────────────────────────────────────┘ │
               └────────────────────────────────────────┘

Request Routing (via Nginx):

  • /api/langgraph/* → Gateway LangGraph-compatible API - agent interactions, threads, streaming
  • /api/* (other) → Gateway API - models, MCP, skills, memory, artifacts, uploads, thread-local cleanup
  • / (non-API) → Frontend - Next.js web interface

Core Components

Lead Agent

The single LangGraph agent (lead_agent) is the runtime entry point, created via make_lead_agent(config). It combines:

  • Dynamic model selection with thinking and vision support
  • Middleware chain for cross-cutting concerns (9 middlewares)
  • Tool system with sandbox, MCP, community, and built-in tools
  • Subagent delegation for parallel task execution
  • System prompt with skills injection, memory context, and working directory guidance
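
A rough usage sketch of that factory; the import paths and config loader below are assumptions based on the project layout, not a verified API:

# Hypothetical wiring; the actual module paths under src/ may differ.
from deerflow.config import get_app_config            # assumed location of the config loader
from deerflow.agents.lead_agent import make_lead_agent

config = get_app_config()
lead_agent = make_lead_agent(config)                   # compiled LangGraph agent

# One turn through the full middleware chain and tool system.
result = lead_agent.invoke(
    {"messages": [{"role": "user", "content": "Summarize the files in my workspace"}]}
)
print(result["messages"][-1].content)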

Middleware Chain

Middlewares execute in strict order, each handling a specific concern:

# Middleware Purpose
1 ThreadDataMiddleware Creates per-thread isolated directories (workspace, uploads, outputs)
2 UploadsMiddleware Injects newly uploaded files into conversation context
3 SandboxMiddleware Acquires sandbox environment for code execution
4 SummarizationMiddleware Reduces context when approaching token limits (optional)
5 TodoListMiddleware Tracks multi-step tasks in plan mode (optional)
6 TitleMiddleware Auto-generates conversation titles after first exchange
7 MemoryMiddleware Queues conversations for async memory extraction
8 ViewImageMiddleware Injects image data for vision-capable models (conditional)
9 ClarificationMiddleware Intercepts clarification requests and interrupts execution (must be last)

Sandbox System

Per-thread isolated execution with virtual path translation:

  • Abstract interface: execute_command, read_file, write_file, list_dir
  • Providers: LocalSandboxProvider (filesystem) and AioSandboxProvider (Docker, in community/)
  • Virtual paths: /mnt/user-data/{workspace,uploads,outputs} → thread-specific physical directories
  • Skills path: /mnt/skills → deer-flow/skills/ directory
  • Skills loading: Recursively discovers nested SKILL.md files under skills/{public,custom} and preserves nested container paths
  • File-write safety: str_replace serializes read-modify-write per (sandbox.id, path) so isolated sandboxes keep concurrency even when virtual paths match
  • Tools: bash, ls, read_file, write_file, str_replace (write_file overwrites by default and exposes append for end-of-file writes; bash is disabled by default when using LocalSandboxProvider; use AioSandboxProvider for isolated shell access)
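
An illustrative Protocol for the abstract interface listed above (the real definition lives in src/sandbox/sandbox.py; the signatures here are assumptions):

from typing import Protocol

class Sandbox(Protocol):
    """Sketch of the per-thread sandbox surface; exact signatures are assumed."""

    id: str

    async def execute_command(self, command: str) -> str: ...
    async def read_file(self, path: str) -> str: ...
    async def write_file(self, path: str, content: str, append: bool = False) -> None: ...
    async def list_dir(self, path: str) -> list[str]: ...

# Providers translate virtual paths such as /mnt/user-data/workspace into the
# thread's physical directory before performing the operation, and str_replace
# holds a per-(sandbox.id, path) lock around its read-modify-write.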

Subagent System

Async task delegation with concurrent execution:

  • Built-in agents: general-purpose (full toolset) and bash (command specialist, exposed only when shell access is available)
  • Concurrency: Max 3 subagents per turn, 15-minute timeout
  • Execution: Background thread pools with status tracking and SSE events
  • Flow: Agent calls task() tool → executor runs subagent in background → polls for completion → returns result
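
A conceptual sketch of that flow; only the task tool name, the 3-subagent cap, and the 15-minute timeout come from this README, everything else stands in for the real background executor:

import asyncio

MAX_CONCURRENT_SUBAGENTS = 3
SUBAGENT_TIMEOUT_SECONDS = 15 * 60
_slots = asyncio.Semaphore(MAX_CONCURRENT_SUBAGENTS)

async def _run_subagent(agent: str, prompt: str) -> str:
    # Stand-in for the background thread pool with status tracking and SSE events.
    await asyncio.sleep(0)
    return f"[{agent}] done: {prompt}"

async def task(agent: str, prompt: str) -> str:
    """Roughly what a task() call amounts to: bounded, time-limited delegation."""
    async with _slots:
        return await asyncio.wait_for(_run_subagent(agent, prompt), timeout=SUBAGENT_TIMEOUT_SECONDS)

async def main() -> None:
    # Four delegations: three run concurrently, the fourth waits for a free slot.
    print(await asyncio.gather(*(task("general-purpose", f"step {i}") for i in range(4))))

asyncio.run(main())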

Memory System

LLM-powered persistent context retention across conversations:

  • Automatic extraction: Analyzes conversations for user context, facts, and preferences
  • Structured storage: User context (work, personal, top-of-mind), history, and confidence-scored facts
  • Debounced updates: Batches updates to minimize LLM calls (configurable wait time)
  • System prompt injection: Top facts + context injected into agent prompts
  • Storage: JSON file with mtime-based cache invalidation
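
A minimal sketch of the mtime-based cache invalidation mentioned above; the real store in src/agents/memory/ has a richer schema plus the debounce layer on writes:

import json
import os

class MemoryFileCache:
    """Reload the JSON memory file only when its modification time changes."""

    def __init__(self, path: str) -> None:
        self._path = path
        self._data: dict | None = None
        self._mtime: float | None = None

    def load(self) -> dict:
        mtime = os.path.getmtime(self._path)
        if self._data is None or mtime != self._mtime:
            with open(self._path, encoding="utf-8") as f:
                self._data = json.load(f)
            self._mtime = mtime
        return self._data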

Tool Ecosystem

Category Tools
Sandbox bash, ls, read_file, write_file, str_replace
Built-in present_files, ask_clarification, view_image, task (subagent)
Community Tavily (web search), Jina AI (web fetch), Firecrawl (scraping), DuckDuckGo (image search)
MCP Any Model Context Protocol server (stdio, SSE, HTTP transports)
Skills Domain-specific workflows injected via system prompt

Gateway API

FastAPI application providing REST endpoints for frontend integration:

Route Purpose
GET /api/models List available LLM models
GET/PUT /api/mcp/config Manage MCP server configurations
GET/PUT /api/skills List and manage skills
POST /api/skills/install Install skill from .skill archive
GET /api/memory Retrieve memory data
POST /api/memory/reload Force memory reload
GET /api/memory/config Memory configuration
GET /api/memory/status Combined config + data
POST /api/threads/{id}/uploads Upload files (auto-converts PDF/PPT/Excel/Word to Markdown, rejects directory paths, auto-renames duplicate filenames in one request)
GET /api/threads/{id}/uploads/list List uploaded files
DELETE /api/threads/{id} Delete DeerFlow-managed local thread data after LangGraph thread deletion; unexpected failures are logged server-side and return a generic 500 detail
GET /api/threads/{id}/artifacts/{path} Serve generated artifacts
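
Hypothetical client calls against a locally running Gateway (port 8001); the multipart field name and response shapes below are assumptions:

import requests

BASE = "http://localhost:8001"

# List configured models.
print(requests.get(f"{BASE}/api/models").json())

# Upload a document to a thread; PDF/PPT/Excel/Word uploads are converted to Markdown.
thread_id = "example-thread"
with open("report.pdf", "rb") as f:
    resp = requests.post(f"{BASE}/api/threads/{thread_id}/uploads", files={"files": ("report.pdf", f)})  # field name assumed
print(resp.status_code, resp.text)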

IM Channels

The IM bridge supports Feishu, Slack, and Telegram. Slack and Telegram still use the final runs.wait() response path, while Feishu now streams through runs.stream(["messages-tuple", "values"]) and updates a single in-thread card in place.

For Feishu card updates, DeerFlow stores the running card's message_id per inbound message and patches that same card until the run finishes, preserving the existing OK / DONE reaction flow.
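
A rough sketch of that streaming loop, assuming the bridge reaches the LangGraph-compatible API through langgraph_sdk; the card-patching helper below is invented and the real Feishu bridge code is not shown here:

import asyncio
from langgraph_sdk import get_client

def update_feishu_card(message_id: str | None, data) -> str:
    # Stub: the real bridge creates the in-thread card once, then patches it in place.
    print("patch card", message_id, data)
    return message_id or "card-1"

async def stream_to_card(thread_id: str, text: str) -> None:
    client = get_client(url="http://localhost:2026/api/langgraph")  # URL is an assumption
    card_message_id: str | None = None  # stored per inbound message
    async for chunk in client.runs.stream(
        thread_id,
        "lead_agent",
        input={"messages": [{"role": "user", "content": text}]},
        stream_mode=["messages-tuple", "values"],
    ):
        if chunk.event.startswith("messages"):
            card_message_id = update_feishu_card(card_message_id, chunk.data)

# asyncio.run(stream_to_card("<feishu-thread-id>", "hello"))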


Quick Start

Prerequisites

  • Python 3.12+
  • uv package manager
  • API keys for your chosen LLM provider

Installation

cd deer-flow

# Copy configuration files
cp config.example.yaml config.yaml

# Install backend dependencies
cd backend
make install

Configuration

Edit config.yaml in the project root:

models:
  - name: gpt-4o
    display_name: GPT-4o
    use: langchain_openai:ChatOpenAI
    model: gpt-4o
    api_key: $OPENAI_API_KEY
    supports_thinking: false
    supports_vision: true

  - name: gpt-5-responses
    display_name: GPT-5 (Responses API)
    use: langchain_openai:ChatOpenAI
    model: gpt-5
    api_key: $OPENAI_API_KEY
    use_responses_api: true
    output_version: responses/v1
    supports_vision: true

Set your API keys:

export OPENAI_API_KEY="your-api-key-here"

Running

Full Application (from project root):

make dev  # Starts Gateway + Frontend + Nginx

Access at: http://localhost:2026

Backend Only (from backend directory):

# Gateway API + embedded agent runtime
make dev

Direct access: Gateway at http://localhost:8001


Project Structure

backend/
├── src/
│   ├── agents/                  # Agent system
│   │   ├── lead_agent/         # Main agent (factory, prompts)
│   │   ├── middlewares/        # 9 middleware components
│   │   ├── memory/             # Memory extraction & storage
│   │   └── thread_state.py    # ThreadState schema
│   ├── gateway/                # FastAPI Gateway API
│   │   ├── app.py             # Application setup
│   │   └── routers/           # 6 route modules
│   ├── sandbox/                # Sandbox execution
│   │   ├── local/             # Local filesystem provider
│   │   ├── sandbox.py         # Abstract interface
│   │   ├── tools.py           # bash, ls, read/write/str_replace
│   │   └── middleware.py      # Sandbox lifecycle
│   ├── subagents/              # Subagent delegation
│   │   ├── builtins/          # general-purpose, bash agents
│   │   ├── executor.py        # Background execution engine
│   │   └── registry.py        # Agent registry
│   ├── tools/builtins/         # Built-in tools
│   ├── mcp/                    # MCP protocol integration
│   ├── models/                 # Model factory
│   ├── skills/                 # Skill discovery & loading
│   ├── config/                 # Configuration system
│   ├── community/              # Community tools & providers
│   ├── reflection/             # Dynamic module loading
│   └── utils/                  # Utilities
├── docs/                       # Documentation
├── tests/                      # Test suite
├── langgraph.json              # LangGraph graph registry for tooling/Studio compatibility
├── pyproject.toml              # Python dependencies
├── Makefile                    # Development commands
└── Dockerfile                  # Container build

langgraph.json is not the default service entrypoint. The scripts and Docker deployments run the Gateway embedded runtime; the file is kept for LangGraph tooling, Studio, or direct LangGraph Server compatibility.


Configuration

Main Configuration (config.yaml)

Place in project root. Config values starting with $ resolve as environment variables.
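
A minimal sketch of that substitution, assuming a simple whole-value $NAME form (the real resolver in src/config/ may handle more cases):

import os

def resolve_env(value: str) -> str:
    """Replace a whole "$NAME" config value with the named environment variable."""
    if value.startswith("$"):
        return os.environ.get(value[1:], "")
    return value

print(resolve_env("$OPENAI_API_KEY"))  # contents of OPENAI_API_KEY, or "" if unset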

Key sections:

  • models - LLM configurations with class paths, API keys, thinking/vision flags
  • tools - Tool definitions with module paths and groups
  • tool_groups - Logical tool groupings
  • sandbox - Execution environment provider
  • skills - Skills directory paths
  • title - Auto-title generation settings
  • summarization - Context summarization settings
  • subagents - Subagent system (enabled/disabled)
  • memory - Memory system settings (enabled, storage, debounce, facts limits)

Provider note:

  • models[*].use references provider classes by module path (for example langchain_openai:ChatOpenAI).
  • If a provider module is missing, DeerFlow now returns an actionable error with install guidance (for example uv add langchain-google-genai).
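
A sketch of resolving such a module:Class reference; the real loader lives in src/reflection/ and DeerFlow's actual error text may differ:

from importlib import import_module

def load_provider_class(ref: str):
    """Resolve "package.module:ClassName" into the class object."""
    module_path, _, class_name = ref.partition(":")
    try:
        module = import_module(module_path)
    except ModuleNotFoundError as exc:
        hint = module_path.replace("_", "-")  # e.g. langchain_google_genai -> langchain-google-genai
        raise RuntimeError(f"Provider module {module_path!r} is not installed; try `uv add {hint}`") from exc
    return getattr(module, class_name)

# Example (requires langchain-openai to be installed):
# ChatOpenAI = load_provider_class("langchain_openai:ChatOpenAI")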

Extensions Configuration (extensions_config.json)

MCP servers and skill states in a single file:

{
  "mcpServers": {
    "github": {
      "enabled": true,
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {"GITHUB_TOKEN": "$GITHUB_TOKEN"}
    },
    "secure-http": {
      "enabled": true,
      "type": "http",
      "url": "https://api.example.com/mcp",
      "oauth": {
        "enabled": true,
        "token_url": "https://auth.example.com/oauth/token",
        "grant_type": "client_credentials",
        "client_id": "$MCP_OAUTH_CLIENT_ID",
        "client_secret": "$MCP_OAUTH_CLIENT_SECRET"
      }
    }
  },
  "skills": {
    "pdf-processing": {"enabled": true}
  }
}

Environment Variables

  • DEER_FLOW_CONFIG_PATH - Override config.yaml location
  • DEER_FLOW_EXTENSIONS_CONFIG_PATH - Override extensions_config.json location
  • Model API keys: OPENAI_API_KEY, ANTHROPIC_API_KEY, DEEPSEEK_API_KEY, etc.
  • Tool API keys: TAVILY_API_KEY, GITHUB_TOKEN, etc.

LangSmith Tracing

DeerFlow has built-in LangSmith integration for observability. When enabled, all LLM calls, agent runs, tool executions, and middleware processing are traced and visible in the LangSmith dashboard.

Setup:

  1. Sign up at smith.langchain.com and create a project.
  2. Add the following to your .env file in the project root:
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT=https://api.smith.langchain.com
LANGSMITH_API_KEY=lsv2_pt_xxxxxxxxxxxxxxxx
LANGSMITH_PROJECT=xxx

Legacy variables: The LANGCHAIN_TRACING_V2, LANGCHAIN_API_KEY, LANGCHAIN_PROJECT, and LANGCHAIN_ENDPOINT variables are also supported for backward compatibility. LANGSMITH_* variables take precedence when both are set.

Langfuse Tracing

DeerFlow also supports Langfuse observability for LangChain-compatible runs.

Add the following to your .env file:

LANGFUSE_TRACING=true
LANGFUSE_PUBLIC_KEY=pk-lf-xxxxxxxxxxxxxxxx
LANGFUSE_SECRET_KEY=sk-lf-xxxxxxxxxxxxxxxx
LANGFUSE_BASE_URL=https://cloud.langfuse.com

If you are using a self-hosted Langfuse deployment, set LANGFUSE_BASE_URL to your Langfuse host.

Dual Provider Behavior

If both LangSmith and Langfuse are enabled, DeerFlow initializes and attaches both callbacks so the same run data is reported to both systems.

If a provider is explicitly enabled but required credentials are missing, or the provider callback cannot be initialized, DeerFlow raises an error when tracing is initialized during model creation instead of silently disabling tracing.

Docker: In docker-compose.yaml, tracing is disabled by default (LANGSMITH_TRACING=false). Set LANGSMITH_TRACING=true and/or LANGFUSE_TRACING=true in your .env, together with the required credentials, to enable tracing in containerized deployments.


Development

Commands

make install    # Install dependencies
make dev        # Run Gateway API + embedded agent runtime (port 8001)
make gateway    # Run Gateway API without reload (port 8001)
make lint       # Run linter (ruff)
make format     # Format code (ruff)

Code Style

  • Linter/Formatter: ruff
  • Line length: 240 characters
  • Python: 3.12+ with type hints
  • Quotes: Double quotes
  • Indentation: 4 spaces

Testing

uv run pytest

Technology Stack

  • LangGraph (1.0.6+) - Agent framework and multi-agent orchestration
  • LangChain (1.2.3+) - LLM abstractions and tool system
  • FastAPI (0.115.0+) - Gateway REST API
  • langchain-mcp-adapters - Model Context Protocol support
  • agent-sandbox - Sandboxed code execution
  • markitdown - Multi-format document conversion
  • tavily-python / firecrawl-py - Web search and scraping

Documentation

Additional documentation lives in the backend docs/ directory.

License

See the LICENSE file in the project root.

Contributing

See CONTRIBUTING.md for contribution guidelines.