* fix(agents): propagate agent_name into ToolRuntime.context for setup_agent (#2677)
When creating a custom agent via the web UI, SOUL.md was always written
to the global base_dir/SOUL.md instead of agents/<name>/SOUL.md.
Root cause: the bootstrap flow sends agent_name via body.context, but
two layers were broken:
1. services.py only forwarded body.context keys into config["configurable"];
config["context"] was never populated.
2. worker.py constructed the parent Runtime with a hard-coded
{thread_id, run_id} context, ignoring config["context"] entirely.
After the langgraph >= 1.1.9 bump (#98a5b34f), ToolRuntime.context no
longer falls back to configurable, so setup_agent's
runtime.context.get("agent_name") returned None and the tool's silent
agent_name=None -> base_dir fallback kicked in, overwriting the global
SOUL.md.
Fix:
- services.py: extract merge_run_context_overrides() and write the
whitelisted context keys into both configurable (legacy readers) and
context (langgraph 1.1+ ToolRuntime consumers).
- worker.py: extract _build_runtime_context() and merge config["context"]
into the Runtime's context (without letting callers override
thread_id/run_id).
The base_dir fallback in setup_agent_tool.py is left in place because
the IM /bootstrap channel command depends on it. That code path can
be tightened in a follow-up.
Adds regression tests covering both helpers.
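The two helpers can be sketched roughly as follows. The helper names come from the commit; the whitelist contents and config shape are illustrative assumptions, not the real implementation:

```python
# Hypothetical sketch of the two-layer fix described above.
ALLOWED_CONTEXT_KEYS = {"agent_name"}  # assumed whitelist


def merge_run_context_overrides(body_context: dict, config: dict) -> dict:
    """Write whitelisted body.context keys into BOTH config["configurable"]
    (legacy readers) and config["context"] (langgraph 1.1+ ToolRuntime)."""
    overrides = {k: v for k, v in body_context.items() if k in ALLOWED_CONTEXT_KEYS}
    config.setdefault("configurable", {}).update(overrides)
    config.setdefault("context", {}).update(overrides)
    return config


def _build_runtime_context(config: dict, thread_id: str, run_id: str) -> dict:
    """Merge config["context"] into the Runtime context, but never let
    callers override thread_id/run_id."""
    ctx = dict(config.get("context") or {})
    ctx.update({"thread_id": thread_id, "run_id": run_id})
    return ctx


cfg = merge_run_context_overrides({"agent_name": "deer", "evil": 1}, {})
assert cfg["context"] == {"agent_name": "deer"}
assert cfg["configurable"] == {"agent_name": "deer"}

ctx = _build_runtime_context({"context": {"agent_name": "deer", "thread_id": "x"}}, "t1", "r1")
assert ctx == {"agent_name": "deer", "thread_id": "t1", "run_id": "r1"}
```

With this shape, `runtime.context.get("agent_name")` in setup_agent resolves the name regardless of whether the reader looks at `configurable` or `context`.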
* Apply suggestions from code review
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Centralize log-level parsing in `logging_level_from_config()` and its
application in `apply_logging_level()`, both in `deerflow.config.app_config`.
- Gateway lifespan applies configured log level on startup
- `debug.py` uses shared helpers instead of local duplicates
- `apply_logging_level()` targets only the `deerflow`/`app` logger
  hierarchies, so third-party library verbosity is not affected
- Root handler levels are only lowered (never raised) so records from the
  configured loggers pass through without suppressing third-party output;
  the root logger's own level is not modified
- Config field description updated to clarify scope
- Tests save/restore global logging state to avoid test pollution
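The scoping rules in the bullets above can be sketched as below; the function name follows the commit, but the hierarchy list and internals are assumptions:

```python
import logging

# Illustrative sketch of apply_logging_level()'s scoping behavior.
TARGET_HIERARCHIES = ("deerflow", "app")  # assumed hierarchy list


def apply_logging_level(level: int) -> None:
    # Set the level only on our own logger hierarchies, leaving
    # third-party library loggers untouched.
    for name in TARGET_HIERARCHIES:
        logging.getLogger(name).setLevel(level)
    # Lower (never raise) root handler levels so records emitted by the
    # configured loggers are not filtered out at the handler. The root
    # logger's own level is deliberately left alone.
    for handler in logging.getLogger().handlers:
        if handler.level > level:
            handler.setLevel(level)


apply_logging_level(logging.DEBUG)
assert logging.getLogger("deerflow").level == logging.DEBUG
# Third-party loggers keep their default effective level:
assert logging.getLogger("some.third.party").level == logging.NOTSET
```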
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
* fix(memory): replace short-lived asyncio.run() with persistent event loop to prevent zombie httpx connections
The memory updater used asyncio.run() inside daemon threads, creating
and destroying short-lived event loops on every update. Langchain
providers (e.g. langchain-anthropic) cache httpx AsyncClient instances
globally via @lru_cache, so SSL connections created on a loop that is
subsequently destroyed become zombie connections in the shared pool.
When the main agent's lead run later reuses one of these connections,
httpx/anyio triggers RuntimeError: Event loop is closed during
connection cleanup.
Replace the ThreadPoolExecutor + asyncio.run() pattern with a
_MemoryLoopRunner that maintains a single persistent event loop in a
daemon thread for the process lifetime. Since the loop never closes,
connections bound to it never become invalid. The _run_async_update_sync
function now submits coroutines to this persistent loop via
run_coroutine_threadsafe instead of creating throwaway loops.
* update the code to address the review comments
* Address review comments for #2615
- P1 (user_id forwarded through the sync path): added a user_id parameter to
  _prepare_update_prompt, _finalize_update, and _do_update_memory_sync, and
  forwarded it to get_memory_data(agent_name, user_id=user_id) and
  save(..., user_id=user_id). The update_memory() entry point now passes
  user_id through both the executor.submit path and the direct call path.
  Added TestUserIdForwarding with two regression tests (sync and async)
  verifying that get_memory_data and save receive the correct user_id.
- P2 (aupdate_memory() delegates to sync): replaced the model.ainvoke() call
  with asyncio.to_thread(self._do_update_memory_sync, ...). This eliminates
  the unsafe async provider client path entirely; all memory updater entry
  points now use the isolated sync model.invoke() path. Updated the test from
  asserting that ainvoke is awaited to asserting that invoke is called and
  ainvoke is not.
- Nit: removed the duplicated "# Matches sentences..." comment on line 230.
* Chore(test): update the code of test_memory_updater
---------
Co-authored-by: rayhpeng <rayhpeng@gmail.com>
The recent refactor (7bf618de) removed the langgraph upstream and added
individual /api/* prefix locations for models, memory, mcp, skills,
agents, threads, and sandboxes. However, /api/v1/auth/* routes (login,
register, setup-status, etc.) were not covered by any explicit location
block, causing them to fall through to the frontend catch-all.
In Docker production builds, Next.js rewrites are baked at build time
with http://127.0.0.1:8001 (the gateway is unreachable from the
frontend container's localhost), resulting in ECONNREFUSED errors for
all auth operations.
Adding a catch-all `location /api/` block after the more specific
prefix/regex locations ensures all gateway API routes are properly
proxied. nginx's location matching priority means the more specific
locations above still take precedence for /api/langgraph/, /api/models,
/api/memory, /api/mcp, /api/skills, /api/agents, /api/threads/*, and
/api/sandboxes.
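The resulting location layout can be sketched as below; upstream names and the exact list of specific locations are illustrative assumptions:

```nginx
# For prefix locations, nginx picks the LONGEST matching prefix regardless
# of file order, so the specific /api/* blocks keep winning over /api/.
location /api/langgraph/ { proxy_pass http://gateway; }
location /api/models     { proxy_pass http://gateway; }
# ...other specific /api/* locations...

# New catch-all: matches any remaining gateway route (e.g. /api/v1/auth/*)
# before requests fall through to the frontend catch-all below.
location /api/ { proxy_pass http://gateway; }

# Frontend catch-all: everything that is not an API route.
location / { proxy_pass http://frontend; }
```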
Co-authored-by: Chincherry93 <Chincherry93@users.noreply.github.com>
* refactor: thread app_config through middleware factories
Continues the incremental config-refactor sequence (#2611 root, #2612 lead
path) one layer deeper into the middleware factories. Two ambient lookups
inside _build_runtime_middlewares are eliminated and the LLMErrorHandling
band-aid removed:
- _build_runtime_middlewares / build_lead_runtime_middlewares /
build_subagent_runtime_middlewares now require app_config: AppConfig.
- get_guardrails_config() inside the factory is replaced with
app_config.guardrails (semantically identical — same default-factory
GuardrailsConfig — verified by direct equality check).
- LLMErrorHandlingMiddleware.__init__ now requires app_config and reads
circuit_breaker fields directly. The class-level
circuit_failure_threshold / circuit_recovery_timeout_sec defaults are
removed along with the try/except (FileNotFoundError, RuntimeError):
pass band-aid — the let-it-crash invariant the rest of the refactor
enforces.
Caller chain (already-resolved app_config sources):
- _build_middlewares in lead_agent/agent.py: reorder so
resolved_app_config = app_config or get_app_config() is computed BEFORE
build_lead_runtime_middlewares is called, then passed as kwarg.
- SubagentExecutor: optional app_config parameter (mirrors the lead-agent
pattern); _create_agent does the same `or get_app_config()` fallback at
agent-build time, so task_tool callers don't need to plumb app_config
through yet (typed-context plumbing for tool runtimes is a separate
refactor).
Tests:
- test_llm_error_handling_middleware: _make_app_config helper using
AppConfig(sandbox=SandboxConfig(use="test")) — same minimal-config
pattern conftest already uses. Three direct LLMErrorHandlingMiddleware()
calls each followed by post-construction circuit_breaker mutation fold
cleanly into _build_middleware(circuit_failure_threshold=...,
circuit_recovery_timeout_sec=...).
Verification:
- tests/test_llm_error_handling_middleware.py — 14 passed
- tests/test_subagent_executor.py — 28 passed
- tests/test_tool_error_handling_middleware.py — 6 passed
- tests/test_task_tool_core_logic.py — 18 passed (verifies task_tool
unchanged behavior)
- Full suite: 2697 passed, 3 skipped. The single intermittent failure in
tests/test_client_e2e.py::test_tool_call_produces_events is pre-existing
LLM flakiness (the test asserts the model decided to call a tool;
reproduces 1/3 on unchanged main as well).
* fix: address middleware app config review comments
* fix: satisfy app config annotation lint
* test: cover explicit app config middleware wiring
---------
Co-authored-by: greatmengqi <chenmengqi.0376@bytedance.com>
In convertToSteps(), the extractReasoningContentFromMessage function was
called twice for the same message: once to check whether reasoning exists,
and again to assign it to the step object. Reuse the already-extracted value
from the local variable instead.
* feat(channels): add DingTalk channel integration
Add a new DingTalk messaging channel using the dingtalk-stream SDK
with Stream Push (WebSocket), requiring no public IP. Supports both
plain sampleMarkdown replies and optional AI Card streaming for a
typewriter effect when card_template_id is configured.
- Add DingTalkChannel implementation with token management, message
routing, allowed_users filtering, and markdown adaptation
- Register dingtalk in channel service registry and capability map
- Propagate inbound metadata to outbound messages in ChannelManager
for DingTalk sender context (sender_staff_id, conversation_type)
- Add dingtalk-stream dependency to pyproject.toml
- Add configuration examples in config.example.yaml and .env.example
- Update all README translations with setup instructions
- Add comprehensive test suite (test_dingtalk_channel.py) and
metadata propagation test in test_channels.py
- Update backend CLAUDE.md to document DingTalk channel
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(channels): address PR review feedback for DingTalk integration
- Replace runtime mutation of CHANNEL_CAPABILITIES with a
`supports_streaming` property on the Channel base class, overridden
by DingTalkChannel, FeishuChannel, and WeComChannel
- Store stream client reference and attempt graceful disconnect in
stop(); guard _on_chatbot_message with _running check to prevent
post-stop message processing
- Use msg.chat_id as the primary routing key in send/send_file via
a shared _resolve_routing helper, with metadata as fallback
- Fix process() return type annotation from tuple[str, str] to
tuple[int, str] to match AckMessage.STATUS_OK
- Protect _incoming_messages with threading.Lock for cross-thread
safety between the Stream Push thread and the asyncio loop
- Re-add Docker Compose URL guidance removed during DingTalk setup
docs addition in README.md
- Fix incomplete sentence in README_zh.md (missing verb "启用")
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(docs): restore plain paragraph format for Docker Compose note
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(channels): fix isinstance TypeError and add file size guard in DingTalk channel
Use tuple syntax for isinstance() type check to avoid runtime TypeError
with PEP 604 union types. Add upload size limit (20MB) before reading
files into memory. Narrow exception handlers to specific types.
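A sketch of both guards; the 20MB limit comes from the commit, while the function and type names are illustrative:

```python
MAX_UPLOAD_BYTES = 20 * 1024 * 1024  # 20MB limit from the commit


def validate_upload(data, size: int) -> None:
    # Tuple syntax works on every supported Python version; the PEP 604
    # form isinstance(data, bytes | str) raises TypeError before 3.10.
    if not isinstance(data, (bytes, str)):
        raise TypeError(f"unexpected payload type: {type(data).__name__}")
    # Reject oversized files BEFORE reading them into memory.
    if size > MAX_UPLOAD_BYTES:
        raise ValueError(f"file too large: {size} > {MAX_UPLOAD_BYTES}")


validate_upload(b"ok", 10)  # passes both guards
try:
    validate_upload(b"big", MAX_UPLOAD_BYTES + 1)
except ValueError:
    pass
else:
    raise AssertionError("size guard did not trigger")
```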
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(channels): propagate markdown fallback errors and validate access token response
- Re-raise exceptions in _send_markdown_fallback to prevent partial
deliveries (files sent without accompanying text)
- Validate _get_access_token response: reject non-dict bodies, empty
tokens, and coerce invalid expireIn to a safe default
- Add tests for both fixes
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(channels): validate upload response and broaden send_file exception handling
- Validate _upload_media JSON response: handle JSONDecodeError and
non-dict payloads gracefully by returning None
- Broaden send_file exception tuple to include TypeError and
AttributeError for unexpected JSON shapes
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(channels): fix streaming race on channel registration and slim outbound metadata
- Register channel in service before calling start() to avoid race
where background receiver publishes inbound before registration,
causing manager to fall back to static CHANNEL_CAPABILITIES
- Strip known-large metadata keys (raw_message, ref_msg) from outbound
messages to prevent memory bloat from propagated inbound payloads
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Update service.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update CLAUDE.md
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* fix(security): allow disabling API docs in production via GATEWAY_ENABLE_DOCS
Expose /docs, /redoc, and /openapi.json only when GATEWAY_ENABLE_DOCS=true
(default). Setting GATEWAY_ENABLE_DOCS=false disables all three endpoints,
preventing unauthorized API surface discovery in production deployments.
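The fail-closed, case-insensitive parse can be sketched like this; the exact parsing rules and FastAPI wiring are assumptions based on the commit text:

```python
# Hypothetical sketch of GATEWAY_ENABLE_DOCS parsing: default-on when
# unset, and anything other than "true" (any case) disables docs.
def docs_enabled(env: dict) -> bool:
    return env.get("GATEWAY_ENABLE_DOCS", "true").strip().lower() == "true"


# When disabled, the gateway would construct the app with all three
# endpoints off, e.g.:
#   FastAPI(docs_url=None, redoc_url=None, openapi_url=None)

assert docs_enabled({}) is True                               # default on
assert docs_enabled({"GATEWAY_ENABLE_DOCS": "TRUE"}) is True  # case-insensitive
assert docs_enabled({"GATEWAY_ENABLE_DOCS": "yes"}) is False  # fail closed
assert docs_enabled({"GATEWAY_ENABLE_DOCS": "false"}) is False
```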
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* test(security): add unit tests and docs for GATEWAY_ENABLE_DOCS
Add 7 tests covering default behavior, env var parsing (case-insensitive,
fail-closed), endpoint visibility, and health endpoint independence.
Update CONFIGURATION.md and CLAUDE.md with the new toggle.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* style(security): apply ruff formatting to gateway app.py
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
The new-agent page pre-generated a thread UUID and passed it directly
to useThreadStream, which made the LangGraph SDK POST to
/threads/{uuid}/runs/stream against a thread the backend had never
created. After PR #2566 introduced multi-tenant owner checks on the
runs endpoints, that request now 404s with "Thread not found".
Pass threadId: undefined to useThreadStream so the SDK takes the
create-then-run path. The pre-generated UUID is still forwarded via
SubmitOptions.threadId in sendMessage, so the new thread is created
with that exact id and onCreated rebinds the hook to it.
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
Introduces a complete authentication/authorization system, SQL persistence layer, run event history, user data isolation, and extensive documentation in both English and Chinese. The core additions are:
Auth system: JWT-based auth with local email/password provider, CSRF protection, rate limiting
Persistence layer: SQLAlchemy 2.0 async ORM with SQLite backend (users, threads, runs, events, feedback)
User isolation: Per-user data scoping via contextvars sentinel pattern
Frontend: Login/setup pages, AuthProvider, CSRF-aware fetcher
Documentation: Comprehensive EN/ZH docs for harness, application, tutorials
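The "contextvars sentinel" user-scoping pattern mentioned above can be sketched as follows; the variable and function names are hypothetical:

```python
import contextvars

# Sentinel distinguishes "no user bound to this context" from a user
# value that happens to be falsy.
_UNSET = object()
current_user_id: contextvars.ContextVar = contextvars.ContextVar(
    "current_user_id", default=_UNSET
)


def require_user_id() -> str:
    """Return the user id bound to the current context, or fail loudly
    so unscoped data access cannot happen silently."""
    uid = current_user_id.get()
    if uid is _UNSET:
        raise RuntimeError("no user bound to this context")
    return uid


token = current_user_id.set("user-42")
assert require_user_id() == "user-42"
current_user_id.reset(token)  # restore so later code sees the sentinel
```

Because `ContextVar` values are isolated per async task, concurrent requests each see only their own bound user id.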
* feat(models): adapt models to the MindIE engine
* test: add unit tests for MindIEChatModel adapter and fix PR review comments
* chore: update uv.lock with pytest-asyncio
* build: add pytest-asyncio to test dependencies
* fix: address PR review comments (lazy import, cache clients, safe newline escape, strict xml regex)
* fix(mindie): preserve string args without JSON quotes in XML tool call serialization
* test_mindie_provider:format
* Potential fix for pull request finding
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
* fix(mindie): prevent nested tool_call params from leaking into outer args
* fix(mindie): escape XML entities in _fix_messages and unescape during parse; add regression tests
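A round-trip sketch of the escaping approach described above, using the stdlib helpers; the `<arg>` envelope is an illustrative stand-in for the real XML tool-call format:

```python
from xml.sax.saxutils import escape, unescape

# Escape entities before embedding argument text in the XML tool-call
# envelope, and unescape when parsing it back out.
raw = 'outer <tool_call>nested</tool_call> & "quoted" text'
wrapped = f"<arg>{escape(raw)}</arg>"

# The literal tags are now entities, so nested tool_call text cannot
# leak into the outer tool-call structure.
assert "<tool_call>" not in wrapped[len("<arg>"):]

inner = wrapped[len("<arg>"):-len("</arg>")]
assert unescape(inner) == raw  # lossless round trip
```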
---------
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
* fix(sandbox): block host bash traversal escapes
Fixes #2535
* fix(sandbox): harden local bash path guards
* fix(sandbox): avoid bash cd argument false positives
* Fix the lint error
Add function to resolve and validate user data path.
* Fix the lint error
---------
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
* fix(security): harden auth system and fix run journal logic bug
- Fix inverted condition in RunJournal.on_chat_model_start that prevented
first human message capture (not messages → messages)
- Pre-hash passwords with SHA-256 before bcrypt to avoid silent 72-byte
truncation vulnerability
- Move load_dotenv() from module scope into get_auth_config() to prevent
import-time os.environ mutation breaking test isolation
- Return a generic 'Invalid token' message instead of exposing specific
error variants (expired, malformed, invalid_signature) to clients
- Make @require_auth independently enforce 401 instead of silently
passing through when AuthMiddleware is absent
- Rate-limit /setup-status endpoint with per-IP cooldown to mitigate
initialization-state information leak
- Document in-process rate limiter limitation for multi-worker deployments
* fix(security): return 429+Retry-After on setup-status rate limit, bound cooldown dict
Agent-Logs-Url: https://github.com/bytedance/deer-flow/sessions/070d0be8-99a5-46c8-85bb-6b81b5284021
Co-authored-by: WillemJiang <219644+WillemJiang@users.noreply.github.com>
* fix(security): add versioned password hashes with auto-migration on login
The SHA-256 pre-hash change silently broke verification for any existing
bcrypt-only password hashes. Introduce a <N>$ prefix scheme so hashes
are self-describing:
- v2 (current): bcrypt(b64(sha256(password))) with 2$ prefix
- v1 (legacy): plain bcrypt, prefixed 1$ or bare (no prefix)
verify_password auto-detects the version and falls back to v1 for older
hashes. LocalAuthProvider.authenticate() now rehashes legacy hashes to v2
on successful login via needs_rehash(), so existing users upgrade
transparently without a dedicated migration step.
* fix(auth): harden verify_password, best-effort rehash, update require_auth docstring, downgrade journal logging
- password.py: wrap bcrypt.checkpw in try/except → return False for malformed/corrupt hashes instead of crashing
- local_provider.py: wrap auto-rehash update_user() in try/except so transient DB errors don't fail valid logins
- authz.py: update require_auth docstring to reflect independent 401 enforcement
- journal.py: downgrade on_chat_model_start from INFO to DEBUG, log only metadata (batch_count, message_counts) instead of full serialized/messages content
Agent-Logs-Url: https://github.com/bytedance/deer-flow/sessions/48c5cf31-a4ab-418a-982a-6343c37bb299
Co-authored-by: WillemJiang <219644+WillemJiang@users.noreply.github.com>
* fix(auth): address code review - narrow ValueError catch, add rehash warning log, rename num_batches
Agent-Logs-Url: https://github.com/bytedance/deer-flow/sessions/48c5cf31-a4ab-418a-982a-6343c37bb299
Co-authored-by: WillemJiang <219644+WillemJiang@users.noreply.github.com>
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
* fix(frontend): add missing mock routes for runs-list, models, and suggestions
* Potential fix for pull request finding
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
- Updated documentation and comments to reflect the transition from LangGraph Server to Gateway.
- Changed default URLs in ChannelManager and tests to point to Gateway.
- Removed references to LangGraph Server in deployment scripts and configurations.
- Updated Nginx configuration to route API traffic to Gateway.
- Adjusted frontend configurations to utilize Gateway's API.
- Removed LangGraph service from Docker Compose files, consolidating services under Gateway.
- Added regression tests to ensure Gateway integration works as expected.
Co-authored-by: Copilot <copilot@github.com>
* Refactor API fetch calls to use a unified fetch function; enhance chat history loading with new hooks and UI components
- Replaced `fetchWithAuth` with a generic `fetch` function across various API modules for consistency.
- Updated `useThreadStream` and `useThreadHistory` hooks to manage chat history loading, including loading states and pagination.
- Introduced `LoadMoreHistoryIndicator` component for better user experience when loading more chat history.
- Enhanced message handling in `MessageList` to accommodate new loading states and history management.
- Added support for run messages in the thread context, improving the overall message handling logic.
- Updated translations for loading indicators in English and Chinese.
* Fix test assertions for run ordering in RunManager tests
- Updated assertions in `test_list_by_thread` to reflect correct ordering of runs.
- Modified `test_list_by_thread_is_stable_when_timestamps_tie` to ensure stable ordering when timestamps are tied.
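The tie-breaking behavior being asserted can be sketched as below; the sort key and field names are assumptions about RunManager's ordering, not its actual code:

```python
# Deterministic run ordering when created_at timestamps tie: sort by
# (created_at, id) so ties break stably and reproducibly across runs.
runs = [
    {"id": "b", "created_at": 100},
    {"id": "a", "created_at": 100},  # same timestamp as "b"
    {"id": "c", "created_at": 50},
]

ordered = sorted(runs, key=lambda r: (r["created_at"], r["id"]))
assert [r["id"] for r in ordered] == ["c", "a", "b"]
```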