Mirror of https://github.com/bytedance/deer-flow.git (synced 2026-04-29 05:08:26 +00:00)
* feat(persistence): add unified persistence layer with event store, token tracking, and feedback (#1930)

* feat(persistence): add SQLAlchemy 2.0 async ORM scaffold

Introduce a unified database configuration (DatabaseConfig) that controls both the LangGraph checkpointer and the DeerFlow application persistence layer from a single `database:` config section.

New modules:
- deerflow.config.database_config — Pydantic config with memory/sqlite/postgres backends
- deerflow.persistence — async engine lifecycle, DeclarativeBase with to_dict mixin, Alembic skeleton
- deerflow.runtime.runs.store — RunStore ABC + MemoryRunStore implementation

Gateway integration initializes/tears down the persistence engine in the existing langgraph_runtime() context manager. Legacy checkpointer config is preserved for backward compatibility.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(persistence): add RunEventStore ABC + MemoryRunEventStore

Phase 2-A prerequisite for event storage: adds the unified run event stream interface (RunEventStore) with an in-memory implementation, RunEventsConfig, gateway integration, and comprehensive tests (27 cases).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(persistence): add ORM models, repositories, DB/JSONL event stores, RunJournal, and API endpoints

Phase 2-B: run persistence + event storage + token tracking.

- ORM models: RunRow (with token fields), ThreadMetaRow, RunEventRow
- RunRepository implements RunStore ABC via SQLAlchemy ORM
- ThreadMetaRepository with owner access control
- DbRunEventStore with trace content truncation and cursor pagination
- JsonlRunEventStore with per-run files and seq recovery from disk
- RunJournal (BaseCallbackHandler) captures LLM/tool/lifecycle events, accumulates token usage by caller type, buffers and flushes to store
- RunManager now accepts optional RunStore for persistent backing
- Worker creates RunJournal, writes human_message, injects callbacks
- Gateway deps use factory functions (RunRepository when DB available)
- New endpoints: messages, run messages, run events, token-usage
- ThreadCreateRequest gains assistant_id field
- 92 tests pass (33 new), zero regressions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
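To make the Phase 2-B schema concrete, here is a minimal sketch of what one of these SQLAlchemy 2.0 ORM models could look like; the exact columns and types are assumptions, only the table and class names follow the commit message above.

```python
from datetime import UTC, datetime

from sqlalchemy import JSON, Integer, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class RunEventRow(Base):
    """One append-only event in a run's stream (hypothetical column set)."""

    __tablename__ = "run_events"

    id: Mapped[int] = mapped_column(Integer, primary_key=True, autoincrement=True)
    thread_id: Mapped[str] = mapped_column(String(64), index=True)
    run_id: Mapped[str] = mapped_column(String(64), index=True)
    seq: Mapped[int] = mapped_column(Integer)  # cursor key for pagination
    event_type: Mapped[str] = mapped_column(String(64))
    content: Mapped[dict | None] = mapped_column(JSON, nullable=True)
    created_at: Mapped[datetime] = mapped_column(default=lambda: datetime.now(UTC))
```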
* feat(persistence): add user feedback + follow-up run association

Phase 2-C: feedback and follow-up tracking.

- FeedbackRow ORM model (rating +1/-1, optional message_id, comment)
- FeedbackRepository with CRUD, list_by_run/thread, aggregate stats
- Feedback API endpoints: create, list, stats, delete
- follow_up_to_run_id in RunCreateRequest (explicit or auto-detected from latest successful run on the thread)
- Worker writes follow_up_to_run_id into human_message event metadata
- Gateway deps: feedback_repo factory + getter
- 17 new tests (14 FeedbackRepository + 3 follow-up association)
- 109 total tests pass, zero regressions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test+config: comprehensive Phase 2 test coverage + deprecate checkpointer config

- config.example.yaml: deprecate standalone checkpointer section, activate unified database:sqlite as default (drives both checkpointer + app data)
- New: test_thread_meta_repo.py (14 tests) — full ThreadMetaRepository coverage including check_access owner logic, list_by_owner pagination
- Extended test_run_repository.py (+4 tests) — completion preserves fields, list ordering desc, limit, owner_none returns all
- Extended test_run_journal.py (+8 tests) — on_chain_error, track_tokens=false, middleware no ai_message, unknown caller tokens, convenience fields, tool_error, non-summarization custom event
- Extended test_run_event_store.py (+7 tests) — DB batch seq continuity, make_run_event_store factory (memory/db/jsonl/fallback/unknown)
- Extended test_phase2b_integration.py (+4 tests) — create_or_reject persists, follow-up metadata, summarization in history, full DB-backed lifecycle
- Fixed DB integration test to use proper fake objects (not MagicMock) for JSON-serializable metadata
- 157 total Phase 2 tests pass, zero regressions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* config: move default sqlite_dir to .deer-flow/data

Keep SQLite databases alongside other DeerFlow-managed data (threads, memory) under the .deer-flow/ directory instead of a top-level ./data folder.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(persistence): remove UTFJSON, use engine-level json_serializer + datetime.now()

- Replace custom UTFJSON type with standard sqlalchemy.JSON in all ORM models. Add json_serializer=json.dumps(ensure_ascii=False) to all create_async_engine calls so non-ASCII text (Chinese etc.) is stored as-is in both SQLite and Postgres.
- Change ORM datetime defaults from datetime.now(UTC) to datetime.now(), remove UTC imports.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
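The engine-level serializer amounts to a one-line change at engine creation; a sketch, with an illustrative database URL:

```python
import functools
import json

from sqlalchemy.ext.asyncio import create_async_engine

# Store non-ASCII JSON (e.g. Chinese titles) as-is instead of \uXXXX escapes.
_json_dumps = functools.partial(json.dumps, ensure_ascii=False)

engine = create_async_engine(
    "sqlite+aiosqlite:///.deer-flow/data/app.db",  # illustrative URL
    json_serializer=_json_dumps,
)
```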
* refactor(gateway): simplify deps.py with getter factory + inline repos

- Replace 6 identical getter functions with _require() factory.
- Inline 3 _make_*_repo() factories into langgraph_runtime(), call get_session_factory() once instead of 3 times.
- Add thread_meta upsert in start_run (services.py).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(docker): add UV_EXTRAS build arg for optional dependencies

Support installing optional dependency groups (e.g. postgres) at Docker build time via UV_EXTRAS build arg:

    UV_EXTRAS=postgres docker compose build

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(journal): fix flush, token tracking, and consolidate tests

RunJournal fixes:
- _flush_sync: retain events in buffer when no event loop instead of dropping them; worker's finally block flushes via async flush().
- on_llm_end: add tool_calls filter and caller=="lead_agent" guard for ai_message events; mark message IDs for dedup with record_llm_usage.
- worker.py: persist completion data (tokens, message count) to RunStore in finally block.

Model factory:
- Auto-inject stream_usage=True for BaseChatOpenAI subclasses with custom api_base, so usage_metadata is populated in streaming responses.

Test consolidation:
- Delete test_phase2b_integration.py (redundant with existing tests).
- Move DB-backed lifecycle test into test_run_journal.py.
- Add tests for stream_usage injection in test_model_factory.py.
- Clean up executor/task_tool dead journal references.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): widen content type to str|dict in all store backends

Allow event content to be a dict (for structured OpenAI-format messages) in addition to plain strings. Dict values are JSON-serialized for the DB backend and deserialized on read; memory and JSONL backends handle dicts natively. Trace truncation now serializes dicts to JSON before measuring.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(events): use metadata flag instead of heuristic for dict content detection

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(converters): add LangChain-to-OpenAI message format converters

Pure functions langchain_to_openai_message, langchain_to_openai_completion, langchain_messages_to_openai, and _infer_finish_reason for converting LangChain BaseMessage objects to OpenAI Chat Completions format, used by RunJournal for event storage. 15 unit tests added.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(converters): handle empty list content as null, clean up test

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): human_message content uses OpenAI user message format

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(events): ai_message uses OpenAI format, add ai_tool_call message event

- ai_message content now uses {"role": "assistant", "content": "..."} format
- New ai_tool_call message event emitted when lead_agent LLM responds with tool_calls
- ai_tool_call uses langchain_to_openai_message converter for consistent format
- Both events include finish_reason in metadata ("stop" or "tool_calls")

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): add tool_result message event with OpenAI tool message format

Cache tool_call_id from on_tool_start keyed by run_id as fallback for on_tool_end, then emit a tool_result message event (role=tool, tool_call_id, content) after each successful tool completion.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(events): summary content uses OpenAI system message format

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): replace llm_start/llm_end with llm_request/llm_response in OpenAI format

Add on_chat_model_start to capture structured prompt messages as llm_request events. Replace llm_end trace events with llm_response using OpenAI Chat Completions format. Track llm_call_index to pair request/response events.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
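A rough sketch of how the paired events could be captured in a LangChain callback handler; the emit callable, event names, and payload shapes are assumptions, only the callback hooks and the llm_call_index idea come from the commit:

```python
from langchain_core.callbacks import BaseCallbackHandler


class LlmPairRecorder(BaseCallbackHandler):
    """Hypothetical sketch: pair llm_request/llm_response events by call index."""

    def __init__(self, emit):
        self._emit = emit  # assumed sink, e.g. RunJournal's event buffer
        self._llm_call_index = 0

    def on_chat_model_start(self, serialized, messages, **kwargs):
        self._llm_call_index += 1
        prompts = [[m.model_dump() for m in batch] for batch in messages]
        self._emit("llm_request", {"llm_call_index": self._llm_call_index,
                                   "messages": prompts})

    def on_llm_end(self, response, **kwargs):
        texts = [[g.text for g in batch] for batch in response.generations]
        self._emit("llm_response", {"llm_call_index": self._llm_call_index,
                                    "generations": texts})
```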
* feat(events): add record_middleware method for middleware trace events

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(events): add full run sequence integration test for OpenAI content format

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(events): align message events with checkpoint format and add middleware tag injection

- Message events (ai_message, ai_tool_call, tool_result, human_message) now use BaseMessage.model_dump() format, matching LangGraph checkpoint values.messages
- on_tool_end extracts tool_call_id/name/status from ToolMessage objects
- on_tool_error now emits tool_result message events with error status
- record_middleware uses middleware:{tag} event_type and middleware category
- Summarization custom events use middleware:summarize category
- TitleMiddleware injects middleware:title tag via get_config() inheritance
- SummarizationMiddleware model bound with middleware:summarize tag
- Worker writes human_message using HumanMessage.model_dump()

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(threads): switch search endpoint to threads_meta table and sync title

- POST /api/threads/search now queries threads_meta table directly, removing the two-phase Store + Checkpointer scan approach
- Add ThreadMetaRepository.search() with metadata/status filters
- Add ThreadMetaRepository.update_display_name() for title sync
- Worker syncs checkpoint title to threads_meta.display_name on run completion
- Map display_name to values.title in search response for API compatibility

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(threads): history endpoint reads messages from event store

- POST /api/threads/{thread_id}/history now combines two data sources: checkpointer for checkpoint_id, metadata, title, thread_data; event store for messages (complete history, not truncated by summarization)
- Strip internal LangGraph metadata keys from response
- Remove full channel_values serialization in favor of selective fields

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: remove duplicate optional-dependencies header in pyproject.toml

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(middleware): pass tagged config to TitleMiddleware ainvoke call

Without the config, the middleware:title tag was not injected, causing the LLM response to be recorded as a lead_agent ai_message in run_events.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve merge conflict in .env.example

Keep both DATABASE_URL (from persistence-scaffold) and WECOM credentials (from main) after the merge.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(persistence): address review feedback on PR #1851

- Fix naive datetime.now() → datetime.now(UTC) in all ORM models
- Fix seq race condition in DbRunEventStore.put() with FOR UPDATE and UNIQUE(thread_id, seq) constraint
- Encapsulate _store access in RunManager.update_run_completion()
- Deduplicate _store.put() logic in RunManager via _persist_to_store()
- Add update_run_completion to RunStore ABC + MemoryRunStore
- Wire follow_up_to_run_id through the full create path
- Add error recovery to RunJournal._flush_sync() lost-event scenario
- Add migration note for search_threads breaking change
- Fix test_checkpointer_none_fix mock to set database=None

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: update uv.lock

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(persistence): address 22 review comments from CodeQL, Copilot, and Code Quality

Bug fixes:
- Sanitize log params to prevent log injection (CodeQL)
- Reset threads_meta.status to idle/error when run completes
- Attach messages only to latest checkpoint in /history response
- Write threads_meta on POST /threads so new threads appear in search

Lint fixes:
- Remove unused imports (journal.py, migrations/env.py, test_converters.py)
- Convert lambda to named function (engine.py, Ruff E731)
- Remove unused logger definitions in repos (Ruff F841)
- Add logging to JSONL decode errors and empty except blocks
- Separate assert side-effects in tests (CodeQL)
- Remove unused local variables in tests (Ruff F841)
- Fix max_trace_content truncation to use byte length, not char length

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: apply ruff format to persistence and runtime files

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Potential fix for pull request finding 'Statement has no effect'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

* refactor(runtime): introduce RunContext to reduce run_agent parameter bloat

Extract checkpointer, store, event_store, run_events_config, thread_meta_repo, and follow_up_to_run_id into a frozen RunContext dataclass. Add get_run_context() in deps.py to build the base context from app.state singletons. start_run() uses dataclasses.replace() to enrich per-run fields before passing ctx to run_agent.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(gateway): move sanitize_log_param to app/gateway/utils.py

Extract the log-injection sanitizer from routers/threads.py into a shared utils module and rename to sanitize_log_param (public API). Eliminates the reverse service → router import in services.py.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* perf: use SQL aggregation for feedback stats and thread token usage

Replace Python-side counting in FeedbackRepository.aggregate_by_run with a single SELECT COUNT/SUM query. Add RunStore.aggregate_tokens_by_thread abstract method with SQL GROUP BY implementation in RunRepository and Python fallback in MemoryRunStore. Simplify the thread_token_usage endpoint to delegate to the new method, eliminating the limit=10000 truncation risk.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
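The single-query aggregate could look like this (a sketch assuming the FeedbackRow model and session-factory pattern from the commits above):

```python
from sqlalchemy import func, select


async def aggregate_by_run(session_factory, run_id: str) -> dict:
    """Count rows and sum ratings in one round trip instead of iterating in Python."""
    async with session_factory() as session:
        stmt = select(
            func.count(FeedbackRow.id),   # FeedbackRow: ORM model from above
            func.sum(FeedbackRow.rating),
        ).where(FeedbackRow.run_id == run_id)
        count, rating_sum = (await session.execute(stmt)).one()
        return {"count": count or 0, "rating_sum": rating_sum or 0}
```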
* docs: annotate DbRunEventStore.put() as low-frequency path

Add docstring clarifying that put() opens a per-call transaction with FOR UPDATE and should only be used for infrequent writes (currently just the initial human_message event). High-throughput callers should use put_batch() instead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(threads): fall back to Store search when ThreadMetaRepository is unavailable

When database.backend=memory (default) or no SQL session factory is configured, search_threads now queries the LangGraph Store instead of returning 503. Returns empty list if neither Store nor repo is available.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(persistence): introduce ThreadMetaStore ABC for backend-agnostic thread metadata

Add ThreadMetaStore abstract base class with create/get/search/update/delete interface. ThreadMetaRepository (SQL) now inherits from it. New MemoryThreadMetaStore wraps LangGraph BaseStore for memory-mode deployments. deps.py now always provides a non-None thread_meta_repo, eliminating all `if thread_meta_repo is not None` guards in services.py, worker.py, and routers/threads.py. search_threads no longer needs a Store fallback branch.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(history): read messages from checkpointer instead of RunEventStore

The /history endpoint now reads messages directly from the checkpointer's channel_values (the authoritative source) instead of querying RunEventStore.list_messages(). The RunEventStore API is preserved for other consumers.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(persistence): address new Copilot review comments

- feedback.py: validate thread_id/run_id before deleting feedback
- jsonl.py: add path traversal protection with ID validation
- run_repo.py: parse `before` to datetime for PostgreSQL compat
- thread_meta_repo.py: fix pagination when metadata filter is active
- database_config.py: use resolve_path for sqlite_dir consistency

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Implement skill self-evolution and skill_manage flow (#1874)

* chore: ignore .worktrees directory
* Add skill_manage self-evolution flow
* Fix CI regressions for skill_manage
* Address PR review feedback for skill evolution
* fix(skill-evolution): preserve history on delete
* fix(skill-evolution): tighten scanner fallbacks
* docs: add skill_manage e2e evidence screenshot
* fix(skill-manage): avoid blocking fs ops in session runtime

---------

Co-authored-by: Willem Jiang <willem.jiang@gmail.com>

* fix(config): resolve sqlite_dir relative to CWD, not Paths.base_dir

resolve_path() resolves relative to Paths.base_dir (.deer-flow), which double-nested the path to .deer-flow/.deer-flow/data/app.db. Use Path.resolve() (CWD-relative) instead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Feature/feishu receive file (#1608)

* feat(feishu): add channel file materialization hook for inbound messages

- Introduce Channel.receive_file(msg, thread_id) as a base method for file materialization; default is no-op.
- Implement FeishuChannel.receive_file to download files/images from Feishu messages, save to sandbox, and inject virtual paths into msg.text.
- Update ChannelManager to call receive_file for any channel if msg.files is present, enabling downstream model access to user-uploaded files.
- No impact on Slack/Telegram or other channels (they inherit the default no-op).

* style(backend): format code with ruff for lint compliance

- Auto-formatted packages/harness/deerflow/agents/factory.py and tests/test_create_deerflow_agent.py using `ruff format`
- Ensured both files conform to project linting standards
- Fixes CI lint check failures caused by code style issues

* fix(feishu): handle file write operation asynchronously to prevent blocking
* fix(feishu): rename GetMessageResourceRequest to _GetMessageResourceRequest and remove redundant code
* test(feishu): add tests for receive_file method and placeholder replacement
* fix(manager): remove unnecessary type casting for channel retrieval
* fix(feishu): update logging messages to reflect resource handling instead of image
* fix(feishu): sanitize filename by replacing invalid characters in file uploads
* fix(feishu): improve filename sanitization and reorder image key handling in message processing
* fix(feishu): add thread lock to prevent filename conflicts during file downloads
* fix(test): correct bad merge in test_feishu_parser.py
* chore: run ruff and apply formatting cleanup

fix(feishu): preserve rich-text attachment order and improve fallback filename handling
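In outline, the hook described in this PR might look like the sketch below; the download and sandbox helpers are hypothetical placeholders, only Channel.receive_file and the virtual-path injection follow the commit messages:

```python
class Channel:
    async def receive_file(self, msg, thread_id: str) -> None:
        """Base hook for file materialization; default is a no-op."""


class FeishuChannel(Channel):
    async def receive_file(self, msg, thread_id: str) -> None:
        for file in msg.files or []:
            data = await self._download_resource(file)  # assumed helper
            virtual_path = await self._save_to_sandbox(thread_id, file, data)  # assumed helper
            # Inject the virtual path so the model can reference the upload.
            msg.text += f"\n[uploaded file: {virtual_path}]"
```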
* fix(docker): restore gateway env vars and fix langgraph empty arg issue (#1915)

Two production docker-compose.yaml bugs prevent `make up` from working:

1. Gateway missing DEER_FLOW_CONFIG_PATH and DEER_FLOW_EXTENSIONS_CONFIG_PATH environment overrides. Added in fb2d99f (#1836) but accidentally reverted by ca2fb95 (#1847). Without them, gateway reads host paths from .env via env_file, causing FileNotFoundError inside the container.
2. Langgraph command fails when LANGGRAPH_ALLOW_BLOCKING is unset (default). Empty $${allow_blocking} inserts a bare space between flags, causing ' --no-reload' to be parsed as unexpected extra argument. Fix by building args string first and conditionally appending --allow-blocking.

Co-authored-by: cooper <cooperfu@tencent.com>

* fix(frontend): resolve invalid HTML nesting and tabnabbing vulnerabilities (#1904)

* fix(frontend): resolve invalid HTML nesting and tabnabbing vulnerabilities

Fix `<button>` inside `<a>` invalid HTML in artifact components and add missing `noopener,noreferrer` to `window.open` calls to prevent reverse tabnabbing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(frontend): address Copilot review on tabnabbing and double-tab-open

Remove redundant parent onClick on web_fetch ChainOfThoughtStep to prevent opening two tabs on link click, and explicitly null out window.opener after window.open() for defensive tabnabbing hardening.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(persistence): organize entities into per-entity directories

Restructure the persistence layer from horizontal "models/ + repositories/" split into vertical entity-aligned directories. Each entity (thread_meta, run, feedback) now owns its ORM model, abstract interface (where applicable), and concrete implementations under a single directory with an aggregating __init__.py for one-line imports.

Layout:
    persistence/thread_meta/{base,model,sql,memory}.py
    persistence/run/{model,sql}.py
    persistence/feedback/{model,sql}.py

models/__init__.py is kept as a facade so Alembic autogenerate continues to discover all ORM tables via Base.metadata. RunEventRow remains under models/run_event.py because its storage implementation lives in runtime/events/store/db.py and has no matching repository directory.

The repositories/ directory is removed entirely. All call sites in gateway/deps.py and tests are updated to import from the new entity packages, e.g.:

    from deerflow.persistence.thread_meta import ThreadMetaRepository
    from deerflow.persistence.run import RunRepository
    from deerflow.persistence.feedback import FeedbackRepository

Full test suite passes (1690 passed, 14 skipped).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(gateway): sync thread rename and delete through ThreadMetaStore

The POST /threads/{id}/state endpoint previously synced title changes only to the LangGraph Store via _store_upsert. In sqlite mode the search endpoint reads from the ThreadMetaRepository SQL table, so renames never appeared in /threads/search until the next agent run completed (worker.py syncs title from checkpoint to thread_meta in its finally block).

Likewise the DELETE /threads/{id} endpoint cleaned up the filesystem, Store, and checkpointer but left the threads_meta row orphaned in sqlite, so deleted threads kept appearing in /threads/search.

Fix both endpoints by routing through the ThreadMetaStore abstraction which already has the correct sqlite/memory implementations wired up by deps.py. The rename path now calls update_display_name() and the delete path calls delete() — both work uniformly across backends.

Verified end-to-end with curl in gateway mode against sqlite backend. Existing test suite (1690 passed) and focused router/repo tests pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(gateway): route all thread metadata access through ThreadMetaStore

Following the rename/delete bug fix in PR1, migrate the remaining direct LangGraph Store reads/writes in the threads router and services to the ThreadMetaStore abstraction so that the sqlite and memory backends behave identically and the legacy dual-write paths can be removed.

Migrated endpoints (threads.py):
- create_thread: idempotency check + write now use thread_meta_repo.get/create instead of dual-writing the LangGraph Store and the SQL row.
- get_thread: reads from thread_meta_repo.get; the checkpoint-only fallback for legacy threads is preserved.
- patch_thread: replaced _store_get/_store_put with thread_meta_repo.update_metadata.
- delete_thread_data: dropped the legacy store.adelete; thread_meta_repo.delete already covers it.

Removed dead code (services.py):
- _upsert_thread_in_store — redundant with the immediately following thread_meta_repo.create() call.
- _sync_thread_title_after_run — worker.py's finally block already syncs the title via thread_meta_repo.update_display_name() after each run.

Removed dead code (threads.py):
- _store_get / _store_put / _store_upsert helpers (no remaining callers).
- THREADS_NS constant.
- get_store import (router no longer touches the LangGraph Store directly).

New abstract method:
- ThreadMetaStore.update_metadata(thread_id, metadata) merges metadata into the thread's metadata field. Implemented in both ThreadMetaRepository (SQL, read-modify-write inside one session) and MemoryThreadMetaStore. Three new unit tests cover merge / empty / nonexistent behaviour.

Net change: -134 lines. Full test suite: 1693 passed, 14 skipped. Verified end-to-end with curl in gateway mode against sqlite backend (create / patch / get / rename / search / delete).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Co-authored-by: DanielWalnut <45447813+hetaoBackend@users.noreply.github.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: JilongSun <965640067@qq.com>
Co-authored-by: jie <49781832+stan-fu@users.noreply.github.com>
Co-authored-by: cooper <cooperfu@tencent.com>
Co-authored-by: yangzheli <43645580+yangzheli@users.noreply.github.com>
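The SQL implementation of update_metadata is plausibly a read-modify-write inside one session, roughly like this sketch (attribute names such as metadata_ are assumptions):

```python
async def update_metadata(self, thread_id: str, metadata: dict) -> None:
    """Merge new keys into the thread's metadata field (shallow merge, new keys win)."""
    async with self._sf() as session:
        row = await session.get(ThreadMetaRow, thread_id)  # ORM model from above
        if row is None:
            return
        merged = dict(row.metadata_ or {})
        merged.update(metadata)
        row.metadata_ = merged  # reassign so SQLAlchemy detects the JSON change
        await session.commit()
```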
* feat(auth): release-validation pass for 2.0-rc — 12 blockers + simplify follow-ups (#2008)

* feat(auth): introduce backend auth module

Port RFC-001 authentication core from PR #1728:
- JWT token handling (create_access_token, decode_token, TokenPayload)
- Password hashing (bcrypt) with verify_password
- SQLite UserRepository with base interface
- Provider Factory pattern (LocalAuthProvider)
- CLI reset_admin tool
- Auth-specific errors (AuthErrorCode, TokenError, AuthErrorResponse)

Deps:
- bcrypt>=4.0.0
- pyjwt>=2.9.0
- email-validator>=2.0.0
- backend/uv.toml pins public PyPI index

Tests: 12 pure unit tests (test_auth_config.py, test_auth_errors.py).

Scope note: authz.py, test_auth.py, and test_auth_type_system.py are deferred to commit 2 because they depend on middleware and deps wiring that is not yet in place. Commit 1 stays "pure new files only" as the spec mandates.

* feat(auth): wire auth end-to-end (middleware + frontend replacement)

Backend:
- Port auth_middleware, csrf_middleware, langgraph_auth, routers/auth
- Port authz decorator (owner_filter_key defaults to 'owner_id')
- Merge app.py: register AuthMiddleware + CSRFMiddleware + CORS, add _ensure_admin_user lifespan hook, _migrate_orphaned_threads helper, register auth router
- Merge deps.py: add get_local_provider, get_current_user_from_request, get_optional_user_from_request; keep get_current_user as thin str|None adapter for feedback router
- langgraph.json: add auth path pointing to langgraph_auth.py:auth
- Rename metadata['user_id'] -> metadata['owner_id'] in langgraph_auth (both metadata write and LangGraph filter dict) + test fixtures

Frontend:
- Delete better-auth library and api catch-all route
- Remove better-auth npm dependency and env vars (BETTER_AUTH_SECRET, BETTER_AUTH_GITHUB_*) from env.js
- Port frontend/src/core/auth/* (AuthProvider, gateway-config, proxy-policy, server-side getServerSideUser, types)
- Port frontend/src/core/api/fetcher.ts
- Port (auth)/layout, (auth)/login, (auth)/setup pages
- Rewrite workspace/layout.tsx as server component that calls getServerSideUser and wraps in AuthProvider
- Port workspace/workspace-content.tsx for the client-side sidebar logic

Tests:
- Port 5 auth test files (test_auth, test_auth_middleware, test_auth_type_system, test_ensure_admin, test_langgraph_auth)
- 176 auth tests PASS

After this commit: login/logout/registration flow works, but persistence layer does not yet filter by owner_id. Commit 4 closes that gap.
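For orientation, the two JWT helpers named above reduce to something like this with pyjwt; the secret handling, claim set, and expiry policy here are assumptions:

```python
from datetime import datetime, timedelta, timezone

import jwt  # pyjwt

SECRET_KEY = "change-me"  # assumed: loaded from config in the real module
ALGORITHM = "HS256"


def create_access_token(user_id: str, expires_minutes: int = 60) -> str:
    now = datetime.now(timezone.utc)
    payload = {"sub": user_id, "iat": now, "exp": now + timedelta(minutes=expires_minutes)}
    return jwt.encode(payload, SECRET_KEY, algorithm=ALGORITHM)


def decode_token(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on bad input.
    return jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
```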
* feat(auth): account settings page + i18n

- Port account-settings-page.tsx (change password, change email, logout)
- Wire into settings-dialog.tsx as new "account" section with UserIcon, rendered first in the section list
- Add i18n keys:
  - en-US/zh-CN: settings.sections.account ("Account" / "账号")
  - en-US/zh-CN: button.logout ("Log out" / "退出登录")
- types.ts: matching type declarations

* feat(auth): enforce owner_id across 2.0-rc persistence layer

Add request-scoped contextvar-based owner filtering to threads_meta, runs, run_events, and feedback repositories. Router code is unchanged — isolation is enforced at the storage layer so that any caller that forgets to pass owner_id still gets filtered results, and new routes cannot accidentally leak data. A sketch of the contextvar + sentinel pattern follows this commit.

Core infrastructure
-------------------
- deerflow/runtime/user_context.py (new):
  - ContextVar[CurrentUser | None] with default None
  - runtime_checkable CurrentUser Protocol (structural subtype with .id)
  - set/reset/get/require helpers
  - AUTO sentinel + resolve_owner_id(value, method_name) for sentinel three-state resolution: AUTO reads contextvar, explicit str overrides, explicit None bypasses the filter (for migration/CLI)

Repository changes
------------------
- ThreadMetaRepository: create/get/search/update_*/delete gain owner_id=AUTO kwarg; read paths filter by owner, writes stamp it, mutations check ownership before applying
- RunRepository: put/get/list_by_thread/delete gain owner_id=AUTO kwarg
- FeedbackRepository: create/get/list_by_run/list_by_thread/delete gain owner_id=AUTO kwarg
- DbRunEventStore: list_messages/list_events/list_messages_by_run/count_messages/delete_by_thread/delete_by_run gain owner_id=AUTO kwarg. Write paths (put/put_batch) read the contextvar softly: when a request-scoped user is available, owner_id is stamped; background worker writes without a user context pass None which is valid (orphan row to be bound by migration)

Schema
------
- persistence/models/run_event.py: RunEventRow.owner_id = Mapped[str | None] = mapped_column(String(64), nullable=True, index=True)
- No alembic migration needed: 2.0 ships fresh, Base.metadata.create_all picks up the new column automatically

Middleware
----------
- auth_middleware.py: after cookie check, call get_optional_user_from_request to load the real User, stamp it into request.state.user AND the contextvar via set_current_user, reset in a try/finally. Public paths and unauthenticated requests continue without contextvar, and @require_auth handles the strict 401 path

Test infrastructure
-------------------
- tests/conftest.py: @pytest.fixture(autouse=True) _auto_user_context sets a default SimpleNamespace(id="test-user-autouse") on every test unless marked @pytest.mark.no_auto_user. Keeps existing 20+ persistence tests passing without modification
- pyproject.toml [tool.pytest.ini_options]: register no_auto_user marker so pytest does not emit warnings for opt-out tests
- tests/test_user_context.py: 6 tests covering three-state semantics, Protocol duck typing, and require/optional APIs
- tests/test_thread_meta_repo.py: one test updated to pass owner_id=None explicitly where it was previously relying on the old default

Test results
------------
- test_user_context.py: 6 passed
- test_auth*.py + test_langgraph_auth.py + test_ensure_admin.py: 127
- test_run_event_store / test_run_repository / test_thread_meta_repo / test_feedback: 92 passed
- Full backend suite: 1905 passed, 2 failed (both @requires_llm flaky integration tests unrelated to auth), 1 skipped
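A minimal sketch of the pattern described above; the names follow the commit message, everything else (helper shapes, error text) is assumed:

```python
from __future__ import annotations

from contextvars import ContextVar
from typing import Protocol, runtime_checkable


@runtime_checkable
class CurrentUser(Protocol):
    id: str  # structural subtype: anything with an .id attribute qualifies


_current_user: ContextVar[CurrentUser | None] = ContextVar("current_user", default=None)


class _AutoSentinel:
    """Type of the AUTO marker; distinct from both str and None."""


AUTO = _AutoSentinel()


def resolve_owner_id(value: str | None | _AutoSentinel, method_name: str) -> str | None:
    if isinstance(value, _AutoSentinel):
        user = _current_user.get()
        if user is None:
            raise RuntimeError(f"{method_name}: owner_id=AUTO but no user in context")
        return str(user.id)  # coerce UUID-like ids to str at the boundary
    return value  # explicit str overrides; explicit None bypasses the filter
```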
* feat(auth): extend orphan migration to 2.0-rc persistence tables

_ensure_admin_user now runs a three-step pipeline on every boot:

Step 1 (fatal): admin user exists / is created / password is reset
Step 2 (non-fatal): LangGraph store orphan threads → admin
Step 3 (non-fatal): SQL persistence tables → admin
  - threads_meta
  - runs
  - run_events
  - feedback

Each step is idempotent. The fatal/non-fatal split mirrors PR #1728's original philosophy: admin creation failure blocks startup (the system is unusable without an admin), whereas migration failures log a warning and let the service proceed (a partial migration is recoverable; a missing admin is not).

Key helpers
-----------
- _iter_store_items(store, namespace, *, page_size=500): async generator that cursor-paginates across LangGraph store pages. Fixes PR #1728's hardcoded limit=1000 bug that would silently lose orphans beyond the first page.
- _migrate_orphaned_threads(store, admin_user_id): rewritten to use _iter_store_items. Returns the migrated count so the caller can log it; raises only on unhandled exceptions.
- _migrate_orphan_sql_tables(admin_user_id): imports the 4 ORM models lazily, grabs the shared session factory, runs one UPDATE per table in a single transaction, commits once. No-op when no persistence backend is configured (in-memory dev).

Tests: test_ensure_admin.py (8 passed)
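A sketch of the cursor-paginated scan, assuming the LangGraph BaseStore.asearch(namespace, limit=..., offset=...) API; the helper name and page size come from the commit:

```python
async def _iter_store_items(store, namespace: tuple[str, ...], *, page_size: int = 500):
    """Yield every item under a namespace, paging past the per-call limit."""
    offset = 0
    while True:
        page = await store.asearch(namespace, limit=page_size, offset=offset)
        if not page:
            return
        for item in page:
            yield item
        offset += len(page)
```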
* test(auth): port AUTH test plan docs + lint/format pass

- Port backend/docs/AUTH_TEST_PLAN.md and AUTH_UPGRADE.md from PR #1728
- Rename metadata.user_id → metadata.owner_id in AUTH_TEST_PLAN.md (4 occurrences from the original PR doc)
- ruff auto-fix UP037 in sentinel type annotations: drop quotes around "str | None | _AutoSentinel" now that from __future__ import annotations makes them implicit string forms
- ruff format: 2 files (app/gateway/app.py, runtime/user_context.py)

Note on test coverage additions:
- conftest.py autouse fixture was already added in commit 4 (had to be co-located with the repository changes to keep pre-existing persistence tests passing)
- cross-user isolation E2E tests (test_owner_isolation.py) deferred — enforcement is already proven by the 98-test repository suite via the autouse fixture + explicit _AUTO sentinel exercises
- New test cases (TC-API-17..20, TC-ATK-13, TC-MIG-01..07) listed in AUTH_TEST_PLAN.md are deferred to a follow-up PR — they are manual-QA test cases rather than pytest code, and the spec-level coverage is already met by test_user_context.py + the 98-test repository suite

Final test results:
- Auth suite (test_auth*, test_langgraph_auth, test_ensure_admin, test_user_context): 186 passed
- Persistence suite (test_run_event_store, test_run_repository, test_thread_meta_repo, test_feedback): 98 passed
- Lint: ruff check + ruff format both clean

* test(auth): add cross-user isolation test suite

10 tests exercising the storage-layer owner filter by manually switching the user_context contextvar between two users. Verifies the safety invariant:

    After a repository write with owner_id=A, a subsequent read with owner_id=B must not return the row, and vice versa.

Covers all 4 tables that own user-scoped data:
- TC-API-17 threads_meta — read, search, update, delete cross-user
- TC-API-18 runs — get, list_by_thread, delete cross-user
- TC-API-19 run_events — list_messages, list_events, count_messages, delete_by_thread (CRITICAL: raw conversation content leak vector)
- TC-API-20 feedback — get, list_by_run, delete cross-user

Plus two meta-tests verifying the sentinel pattern itself:
- AUTO + unset contextvar raises RuntimeError
- explicit owner_id=None bypasses the filter (migration escape hatch)

Architecture note
-----------------
These tests bypass the HTTP layer by design. The full chain (cookie → middleware → contextvar → repository) is covered piecewise:
- test_auth_middleware.py: middleware sets contextvar from cookies
- test_owner_isolation.py: repositories enforce isolation when contextvar is set to different users

Together they prove the end-to-end safety property without the ceremony of spinning up a full TestClient + in-memory DB for every router endpoint.

Tests pass: 231 (full auth + persistence + isolation suite)
Lint: clean

* refactor(auth): migrate user repository to SQLAlchemy ORM

Move the users table into the shared persistence engine so auth matches the pattern of threads_meta, runs, run_events, and feedback — one engine, one session factory, one schema init codepath.

New files
---------
- persistence/user/__init__.py, persistence/user/model.py: UserRow ORM class with partial unique index on (oauth_provider, oauth_id)
- Registered in persistence/models/__init__.py so Base.metadata.create_all() picks it up

Modified
--------
- auth/repositories/sqlite.py: rewritten as async SQLAlchemy, identical constructor pattern to the other four repositories (def __init__(self, session_factory) + self._sf = session_factory)
- auth/config.py: drop users_db_path field — storage is configured through config.database like every other table
- deps.py/get_local_provider: construct SQLiteUserRepository with the shared session factory, fail fast if engine is not initialised
- tests/test_auth.py: rewrite test_sqlite_round_trip_new_fields to use the shared engine (init_engine + close_engine in a tempdir)
- tests/test_auth_type_system.py: add per-test autouse fixture that spins up a scratch engine and resets deps._cached_* singletons
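The partial unique index mentioned above could be declared like this (a sketch reusing the shared DeclarativeBase; the column set beyond the OAuth pair is assumed):

```python
from sqlalchemy import Index, String, text
from sqlalchemy.orm import Mapped, mapped_column


class UserRow(Base):  # Base: the shared DeclarativeBase from deerflow.persistence
    __tablename__ = "users"

    id: Mapped[str] = mapped_column(String(64), primary_key=True)
    email: Mapped[str] = mapped_column(String(255), unique=True)
    oauth_provider: Mapped[str | None] = mapped_column(String(32), nullable=True)
    oauth_id: Mapped[str | None] = mapped_column(String(128), nullable=True)

    __table_args__ = (
        # Partial unique index: only rows that actually carry an OAuth identity
        # participate, so local-auth users with NULLs never collide.
        Index(
            "uq_users_oauth",
            "oauth_provider",
            "oauth_id",
            unique=True,
            sqlite_where=text("oauth_provider IS NOT NULL"),
            postgresql_where=text("oauth_provider IS NOT NULL"),
        ),
    )
```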
* refactor(auth): remove SQL orphan migration (unused in supported scenarios)

The _migrate_orphan_sql_tables helper existed to bind NULL owner_id rows in threads_meta, runs, run_events, and feedback to the admin on first boot. But in every supported upgrade path, it's a no-op:

1. Fresh install: create_all builds fresh tables, no legacy rows
2. No-auth → with-auth (no existing persistence DB): persistence tables are created fresh by create_all, no legacy rows
3. No-auth → with-auth (has existing persistence DB from #1930): NOT a supported upgrade path — existing-DB-to-existing-DB schema evolution is out of scope; users wipe DB or run manual ALTER

So the SQL orphan migration never has anything to do in the supported matrix. Delete the function, simplify _ensure_admin_user from a 3-step pipeline to a 2-step one (admin creation + LangGraph store orphan migration only).

LangGraph store orphan migration stays: it serves the real "no-auth → with-auth" upgrade path where a user's existing LangGraph thread metadata has no owner_id field and needs to be stamped with the newly-created admin's id.

Tests: 284 passed (auth + persistence + isolation)
Lint: clean

* security(auth): write initial admin password to 0600 file instead of logs

CodeQL py/clear-text-logging-sensitive-data flagged 3 call sites that logged the auto-generated admin password to stdout via logger.info(). Production log aggregators (ELK/Splunk/etc) would have captured those cleartext secrets. Replace with a shared helper that writes to .deer-flow/admin_initial_credentials.txt with mode 0600, and log only the path.

New file
--------
- app/gateway/auth/credential_file.py: write_initial_credentials() helper. Takes email, password, and an "initial"/"reset" label. Creates .deer-flow/ if missing, writes a header comment plus the email+password, chmods 0o600, returns the absolute Path.

Modified
--------
- app/gateway/app.py: both _ensure_admin_user paths (fresh creation + needs_setup password reset) now write to file and log the path
- app/gateway/auth/reset_admin.py: rewritten to use the shared ORM repo (SQLiteUserRepository with session_factory) and the credential_file helper. The previous implementation was broken after the earlier ORM refactor — it still imported _get_users_conn and constructed SQLiteUserRepository() without a session factory.

No tests changed — the three password-log sites are all exercised via existing test_ensure_admin.py which checks that startup succeeds, not that a specific string appears in logs.

CodeQL alerts 272, 283, 284: all resolved.
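The helper plausibly reduces to opening the file with mode 0600 before any secret byte is written, along these lines (header format and naming are assumptions):

```python
import os
from pathlib import Path


def write_initial_credentials(email: str, password: str, label: str) -> Path:
    """Write admin credentials to a 0600 file and return its absolute path."""
    home = Path(".deer-flow")
    home.mkdir(parents=True, exist_ok=True)
    path = home / "admin_initial_credentials.txt"
    # O_CREAT with 0o600 ensures the file is never group/world readable,
    # even for an instant; chmod afterwards covers a pre-existing file.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(f"# {label} admin credentials - delete after first login\n")
        f.write(f"email: {email}\npassword: {password}\n")
    os.chmod(path, 0o600)
    return path.resolve()
```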
* security(auth): strict JWT validation in middleware (fix junk cookie bypass)

AUTH_TEST_PLAN test 7.5.8 expects junk cookies to be rejected with 401. The previous middleware behaviour was "presence-only": check that some access_token cookie exists, then pass through. In combination with my Task-12 decision to skip @require_auth decorators on routes, this created a gap where a request with any cookie-shaped string (e.g. access_token=not-a-jwt) would bypass authentication on routes that do not touch the repository (/api/models, /api/mcp/config, /api/memory, /api/skills, …).

Fix: middleware now calls get_current_user_from_request() strictly and catches the resulting HTTPException to render a 401 with the proper fine-grained error code (token_invalid, token_expired, user_not_found, …). On success it stamps request.state.user and the contextvar so repository-layer owner filters work downstream.

The 4 old "_with_cookie_passes" tests in test_auth_middleware.py were written for the presence-only behaviour; they asserted that a junk cookie would make the handler return 200. They are renamed to "_with_junk_cookie_rejected" and their assertions flipped to 401. The negative path (no cookie → 401 not_authenticated) is unchanged.

Verified:
- no cookie → 401 not_authenticated
- junk cookie → 401 token_invalid (the fixed bug)
- expired cookie → 401 token_expired

Tests: 284 passed (auth + persistence + isolation)
Lint: clean

* security(auth): wire @require_permission(owner_check=True) on isolation routes

Apply the require_permission decorator to all 28 routes that take a {thread_id} path parameter. Combined with the strict middleware (previous commit), this gives the double-layer protection that AUTH_TEST_PLAN test 7.5.9 documents:

- Layer 1 (AuthMiddleware): cookie + JWT validation, rejects junk cookies and stamps request.state.user
- Layer 2 (@require_permission with owner_check=True): per-resource ownership verification via ThreadMetaStore.check_access — returns 404 if a different user owns the thread

The decorator's owner_check branch is rewritten to use the SQL thread_meta_repo (the 2.0-rc persistence layer) instead of the LangGraph store path that PR #1728 used (_store_get / get_store in routers/threads.py). The inject_record convenience is dropped — no caller in 2.0 needs the LangGraph blob, and the SQL repo has a different shape.

Routes decorated (28 total):
- threads.py: delete, patch, get, get-state, post-state, post-history
- thread_runs.py: post-runs, post-runs-stream, post-runs-wait, list_runs, get_run, cancel_run, join_run, stream_existing_run, list_thread_messages, list_run_messages, list_run_events, thread_token_usage
- feedback.py: create, list, stats, delete
- uploads.py: upload (added Request param), list, delete
- artifacts.py: get_artifact
- suggestions.py: generate (renamed body parameter to avoid conflict with FastAPI Request)

Test fixes:
- test_suggestions_router.py: bypass the decorator via __wrapped__ (the unit tests cover parsing logic, not auth — no point spinning up a thread_meta_repo just to test JSON unwrapping)
- test_auth_middleware.py 4 fake-cookie tests: already updated in the previous commit (745bf432)

Tests: 293 passed (auth + persistence + isolation + suggestions)
Lint: clean
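Structurally, the decorator's owner_check branch is something like the sketch below; the thread_store lookup and the check_access signature are assumptions based on the surrounding commits:

```python
import functools

from fastapi import HTTPException, Request


def require_permission(owner_check: bool = False):
    def wrap(handler):
        @functools.wraps(handler)  # keeps __wrapped__ for the test helpers
        async def inner(request: Request, thread_id: str, *args, **kwargs):
            user = getattr(request.state, "user", None)
            if user is None:
                raise HTTPException(status_code=401, detail="not_authenticated")
            if owner_check:
                store = request.app.state.thread_store  # assumed singleton
                allowed = await store.check_access(thread_id, user_id=str(user.id))
                if not allowed:
                    # 404, not 403: don't reveal that the thread exists.
                    raise HTTPException(status_code=404, detail="thread not found")
            return await handler(request, thread_id, *args, **kwargs)

        return inner

    return wrap
```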
* security(auth): defense-in-depth fixes from release validation pass

Eight findings caught while running the AUTH_TEST_PLAN end-to-end against the deployed sg_dev stack. Each is a pre-condition for shipping release/2.0-rc that the previous PRs missed.

Backend hardening:
- routers/auth.py: rate limiter X-Real-IP now requires AUTH_TRUSTED_PROXIES whitelist (CIDR/IP allowlist). Without nginx in front, the previous code honored arbitrary X-Real-IP, letting an attacker rotate the header to fully bypass the per-IP login lockout.
- routers/auth.py: 36-entry common-password blocklist via Pydantic field_validator on RegisterRequest + ChangePasswordRequest. The shared _validate_strong_password helper keeps the constraint in one place.
- routers/threads.py: ThreadCreateRequest + ThreadPatchRequest strip server-reserved metadata keys (owner_id, user_id) via Pydantic field_validator so a forged value can never round-trip back to other clients reading the same thread. The actual ownership invariant stays on the threads_meta row; this closes the metadata-blob echo gap.
- authz.py + thread_meta/sql.py: require_permission gains a require_existing flag plumbed through check_access(require_existing=True). Destructive routes (DELETE/PATCH/state-update/runs/feedback) now treat a missing thread_meta row as 404 instead of "untracked legacy thread, allow", closing the cross-user delete-idempotence gap where any user could successfully DELETE another user's deleted thread.
- repositories/sqlite.py + base.py: update_user raises UserNotFoundError on a vanished row instead of silently returning the input. Concurrent delete during password reset can no longer look like a successful update.
- runtime/user_context.py: resolve_owner_id() coerces User.id (UUID) to str at the contextvar boundary so SQLAlchemy String(64) columns can bind it. The whole 2.0-rc isolation pipeline was previously broken end-to-end (POST /api/threads → 500 "type 'UUID' is not supported").
- persistence/engine.py: SQLAlchemy listener enables PRAGMA journal_mode=WAL, synchronous=NORMAL, foreign_keys=ON on every new SQLite connection. TC-UPG-06 in the test plan expects WAL; previous code shipped with the default 'delete' journal.
- auth_middleware.py: stamp request.state.auth = AuthContext(...) so @require_permission's short-circuit fires; previously every isolation request did a duplicate JWT decode + users SELECT. Also unifies the 401 payload through AuthErrorResponse(...).model_dump().
- app.py: _ensure_admin_user restructure removes the noqa F821 scoping bug where 'password' was referenced outside the branch that defined it. New _announce_credentials helper absorbs the duplicate log block in the fresh-admin and reset-admin branches.
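The SQLAlchemy listener in the persistence/engine.py bullet follows a standard pattern; a sketch, assuming `engine` is the async engine created by the persistence module:

```python
from sqlalchemy import event


@event.listens_for(engine.sync_engine, "connect")  # async engines hook via sync_engine
def _set_sqlite_pragmas(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA journal_mode=WAL")    # readers no longer block the writer
    cursor.execute("PRAGMA synchronous=NORMAL")  # safe with WAL, far fewer fsyncs
    cursor.execute("PRAGMA foreign_keys=ON")     # SQLite defaults to OFF
    cursor.close()
```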
* fix(frontend+nginx): rollout CSRF on every state-changing client path

The frontend was 100% broken in gateway-pro mode for any user trying to open a specific chat thread. Three cumulative bugs each silently masked the next.

LangGraph SDK CSRF gap (api-client.ts):
- The Client constructor took only apiUrl, no defaultHeaders, no fetch interceptor. The SDK's internal fetch never sent X-CSRF-Token, so every state-changing /api/langgraph-compat/* call (runs/stream, threads/search, threads/{tid}/history, ...) hit CSRFMiddleware and got 403 before reaching the auth check. UI symptom: empty thread page with no error message; the SPA's hooks swallowed the rejection.
- Fix: pass an onRequest hook that injects X-CSRF-Token from the csrf_token cookie per request. Reading the cookie per call (not at construction time) handles login / logout / password-change cookie rotation transparently. The SDK's prepareFetchOptions calls onRequest for both regular requests AND streaming/SSE/reconnect, so the same hook covers runs.stream and runs.joinStream.

Raw fetch CSRF gap (7 files):
- Audit: 11 frontend fetch sites, only 2 included CSRF (login/setup + account-settings change-password). The other 7 routed through raw fetch() with no header — suggestions, memory, agents, mcp, skills, uploads, and the local thread cleanup hook all 403'd silently.
- Fix: enhance fetcher.ts:fetchWithAuth to auto-inject X-CSRF-Token on POST/PUT/DELETE/PATCH from a single shared readCsrfCookie() helper. Convert all 7 raw fetch() callers to fetchWithAuth so the contract is centrally enforced. api-client.ts and fetcher.ts share readCsrfCookie + STATE_CHANGING_METHODS to avoid drift.

nginx routing + buffering (nginx.local.conf):
- The auth feature shipped without updating the nginx config: per-API explicit location blocks but no /api/v1/auth/, /api/feedback, /api/runs. The frontend's client-side fetches to /api/v1/auth/login/local 404'd from the Next.js side because nginx routed /api/* to the frontend.
- Fix: add catch-all `location /api/` that proxies to the gateway. nginx longest-prefix matching keeps the explicit blocks (/api/models, /api/threads regex, /api/langgraph/, ...) winning for their paths.
- Fix: disable proxy_buffering + proxy_request_buffering for the frontend `location /` block. Without it, nginx tries to spool large Next.js chunks into /var/lib/nginx/proxy (root-owned) and fails with Permission denied → ERR_INCOMPLETE_CHUNKED_ENCODING → ChunkLoadError.

* test(auth): release-validation test infra and new coverage

Test fixtures and unit tests added during the validation pass.

Router test helpers (NEW: tests/_router_auth_helpers.py):
- make_authed_test_app(): builds a FastAPI test app with a stub middleware that stamps request.state.user + request.state.auth and a permissive thread_meta_repo mock. TestClient-based router tests (test_artifacts_router, test_threads_router) use it instead of bare FastAPI() so the new @require_permission(owner_check=True) decorators short-circuit cleanly.
- call_unwrapped(): walks the __wrapped__ chain to invoke the underlying handler without going through the authz wrappers. Direct-call tests (test_uploads_router) use it. Typed with ParamSpec so the wrapped signature flows through.

Backend test additions:
- test_auth.py: 7 tests for the new _get_client_ip trust model (no proxy / trusted proxy / untrusted peer / XFF rejection / invalid CIDR / no client). 5 tests for the password blocklist (literal, case-insensitive, strong password accepted, change-password binding, short-password length-check still fires before blocklist). test_update_user_raises_when_row_concurrently_deleted: closes a shipped-without-coverage gap on the new UserNotFoundError contract.
- test_thread_meta_repo.py: 4 tests for check_access(require_existing=True) — strict missing-row denial, strict owner match, strict owner mismatch, strict null-owner still allowed (shared rows survive the tightening).
- test_ensure_admin.py: 3 tests for _migrate_orphaned_threads / _iter_store_items pagination, covering the TC-UPG-02 upgrade story end-to-end via mock store. Closes the gap where the cursor pagination was untested even though the previous PR rewrote it.
- test_threads_router.py: 5 tests for _strip_reserved_metadata (owner_id removal, user_id removal, safe-keys passthrough, empty input, both-stripped).
- test_auth_type_system.py: replace "password123" fixtures with Tr0ub4dor3a / AnotherStr0ngPwd! so the new password blocklist doesn't reject the test data.

* docs(auth): refresh TC-DOCKER-05 + document Docker validation gap

- AUTH_TEST_PLAN.md TC-DOCKER-05: the previous expectation ("admin password visible in docker logs") was stale after the simplify pass that moved credentials to a 0600 file. The grep "Password:" check would have silently failed and given a false sense of coverage. New expectation matches the actual file-based path: 0600 file in DEER_FLOW_HOME, log shows the path (not the secret), reverse-grep asserts no leaked password in container logs.
- NEW: docs/AUTH_TEST_DOCKER_GAP.md documents the only un-executed block in the test plan (TC-DOCKER-01..06). Reason: sg_dev validation host has no Docker daemon installed. The doc maps each Docker case to an already-validated bare-metal equivalent (TC-1.1, TC-REENT-01, TC-API-02 etc.) so the gap is auditable, and includes pre-flight reproduction steps for whoever has Docker available.

---------

Co-authored-by: greatmengqi <chenmengqi.0376@bytedance.com>

* refactor(persistence): unify SQLite to single deerflow.db and move checkpointer to runtime

Merge checkpoints.db and app.db into a single deerflow.db file (WAL mode handles concurrent access safely). Move the checkpointer module from agents/checkpointer to runtime/checkpointer to better reflect its role as a runtime infrastructure concern.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(persistence): rename owner_id to user_id and thread_meta_repo to thread_store

Rename owner_id to user_id across all persistence models, repositories, stores, routers, and tests for clearer semantics. Rename thread_meta_repo to thread_store for consistency with run_store/run_event_store naming. Add ThreadMetaStore return type annotation to get_thread_store().

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(persistence): unify ThreadMetaStore interface with user isolation and factory

Add user_id parameter to all ThreadMetaStore abstract methods. Implement owner isolation in MemoryThreadMetaStore with _get_owned_record helper. Add check_access to base class and memory implementation. Add make_thread_store factory to simplify deps.py initialization. Add memory-backend isolation tests.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(feedback): add UNIQUE(thread_id, run_id, user_id) constraint

Add UNIQUE constraint to FeedbackRow to enforce one feedback per user per run, enabling upsert behavior in Task 2. Update tests to use distinct user_ids for multiple feedback records per run, and pass user_id=None to list_by_run for admin-style queries that bypass user isolation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(feedback): add upsert() method with UNIQUE enforcement

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(feedback): add delete_by_run() and list_by_thread_grouped()

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(feedback): add PUT upsert and DELETE-by-run endpoints

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(feedback): enrich messages endpoint with per-run feedback data

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
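With the UNIQUE(thread_id, run_id, user_id) constraint added above in place, the upsert() method maps naturally onto SQLite's ON CONFLICT clause; a sketch assuming the FeedbackRow model and session-factory pattern used throughout:

```python
from sqlalchemy.dialects.sqlite import insert


async def upsert(self, thread_id: str, run_id: str, user_id: str,
                 rating: int, comment: str | None = None) -> None:
    """One feedback row per (thread, run, user); later submissions overwrite."""
    stmt = (
        insert(FeedbackRow)
        .values(thread_id=thread_id, run_id=run_id, user_id=user_id,
                rating=rating, comment=comment)
        .on_conflict_do_update(
            index_elements=["thread_id", "run_id", "user_id"],
            set_={"rating": rating, "comment": comment},
        )
    )
    async with self._sf() as session:
        await session.execute(stmt)
        await session.commit()
```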
* feat(feedback): add frontend feedback API client

Adds upsertFeedback and deleteFeedback API functions backed by fetchWithAuth, targeting the /api/threads/{id}/runs/{id}/feedback endpoint.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(feedback): wire feedback data into message rendering for history echo

Adds useThreadFeedback hook that fetches run-level feedback from the messages API and builds a runId->FeedbackData map. MessageList now calls this hook and passes feedback and runId to each MessageListItem so previously-submitted thumbs are pre-filled when revisiting a thread.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(feedback): correct run_id mapping for feedback echo

The feedbackMap was keyed by run_id but looked up by LangGraph message ID. Fixed by tracking AI message ordinal index to correlate event store run_ids with LangGraph SDK messages.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(feedback): use real threadId and refresh after stream

- Pass threadId prop to MessageListItem instead of reading "new" from URL params
- Invalidate thread-feedback query on stream finish so buttons appear immediately
- Show feedback buttons always visible, copy button on hover only

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style(feedback): group copy and feedback buttons together on the left

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style(feedback): always show toolbar buttons without hover

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(persistence): stream hang when run_events.backend=db

DbRunEventStore._user_id_from_context() returned user.id without coercing it to str. User.id is a Pydantic UUID, and aiosqlite cannot bind a raw UUID object to a VARCHAR column, so the INSERT for the initial human_message event silently rolled back and raised out of the worker task. Because that put() sat outside the worker's try block, the finally-clause that publishes end-of-stream never ran and the SSE stream hung forever. jsonl mode was unaffected because json.dumps(default=str) coerces UUID objects transparently.

Fixes:
- db.py: coerce user.id to str at the context-read boundary (matches what resolve_user_id already does for the other repositories)
- worker.py: move RunJournal init + human_message put inside the try block so any failure flows through the finally/publish_end path instead of hanging the subscriber

Defense-in-depth:
- engine.py: add PRAGMA busy_timeout=5000 so checkpointer and event store wait for each other on the shared deerflow.db file instead of failing immediately under write-lock contention
- journal.py: skip fire-and-forget _flush_sync when a previous flush task is still in flight, to avoid piling up concurrent put_batch writes on the same SQLAlchemy engine during streaming; flush() now waits for pending tasks before draining the buffer
- database_config.py: doc-only update clarifying WAL + busy_timeout keep the unified deerflow.db safe for both workloads

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore(persistence): drop redundant busy_timeout PRAGMA

Python's sqlite3 driver defaults to a 5-second busy timeout via the ``timeout`` kwarg of ``sqlite3.connect``, and aiosqlite + SQLAlchemy's aiosqlite dialect inherit that default. Setting ``PRAGMA busy_timeout=5000`` explicitly was a no-op — verified by reading back the PRAGMA on a fresh connection (it already reports 5000ms without our PRAGMA). Concurrent stress test (50 checkpoint writes + 20 event batches + 50 thread_meta updates on the same deerflow.db) still completes with zero errors and 200/200 rows after removing the explicit PRAGMA.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
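The journal.py re-entrancy guard from the stream-hang fix above amounts to roughly this; attribute names and the put_batch call are assumptions:

```python
import asyncio


def _flush_sync(self) -> None:
    """Fire-and-forget flush; skipped while a previous flush is still in flight."""
    if self._flush_task is not None and not self._flush_task.done():
        return  # keep buffering; avoids piling up concurrent put_batch writes
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        return  # no loop here: retain events, worker's finally calls flush()
    batch, self._buffer = self._buffer, []
    self._flush_task = loop.create_task(self._store.put_batch(batch))


async def flush(self) -> None:
    """Wait for any in-flight write, then drain whatever is still buffered."""
    if self._flush_task is not None:
        await self._flush_task
    if self._buffer:
        batch, self._buffer = self._buffer, []
        await self._store.put_batch(batch)
```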
Concrete example from thread ``6d30913e-dcd4-41c8-8941-f66c716cf359`` (seq=48): checkpoint had ``'Successfully presented files'`` while event_store stored the full Command repr. The fix detects ``Command`` in ``on_tool_end``, extracts the first ``ToolMessage`` from ``update['messages']``, and lets the existing ToolMessage branch handle the ``model_dump()`` path. Legacy rows still containing the Command repr are separately cleaned up by the history helper in the follow-up commit. Tests: - ``test_tool_end_unwraps_command_with_inner_tool_message`` — unit test of the unwrap branch with a constructed Command - ``test_tool_invoke_end_to_end_unwraps_command`` — end-to-end via ``CallbackManager`` + ``tool.invoke`` to exercise the real LangChain dispatch path that production uses, matching the repro shape from ``present_files`` - Counter-proof: temporarily reverted the patch, both tests failed with the exact ``Command(update={...})`` repr that was stored in the production SQLite row at seq=48, confirming LangChain does pass the ``Command`` through callbacks (the unwrap is load-bearing) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(threads): load history messages from event store, immune to summarize ``get_thread_history`` and ``get_thread_state`` in Gateway mode read messages from ``checkpoint.channel_values["messages"]``. After SummarizationMiddleware runs mid-run, that list is rewritten in-place: pre-summarize messages are dropped and a synthetic summary-as-human message takes position 0. The frontend then renders a chat history that starts with ``"Here is a summary of the conversation to date:..."`` instead of the user's original query, and all earlier turns are gone. The event store (``RunEventStore``) is append-only and never rewritten, so it retains the full transcript. This commit adds a helper ``_get_event_store_messages`` that loads the event store's message stream and overrides ``values["messages"]`` in both endpoints; the checkpoint fallback kicks in only when the event store is unavailable. Behavior contract of the helper: - **Full pagination.** ``list_messages`` returns the newest ``limit`` records when no cursor is given, so a fixed limit silently drops older messages on long threads. The helper sizes the read from ``count_messages()`` and pages forward with ``after_seq`` cursors. - **Copy-on-read.** Each content dict is copied before ``id`` is patched so the live store object (``MemoryRunEventStore`` returns references) is never mutated. - **Stable ids.** Messages with ``id=None`` (human + tool_result, which don't receive an id until checkpoint persistence) get a deterministic ``uuid5(NAMESPACE_URL, f"{thread_id}:{seq}")`` so React keys stay stable across requests. AI messages keep their LLM-assigned ``lc_run--*`` ids. - **Legacy ``Command`` repr sanitization.** Rows captured before the ``journal.py`` ``on_tool_end`` fix (previous commit) stored ``str(Command(update={'messages': [ToolMessage(content='X', ...)]}))`` as the tool_result content. ``_sanitize_legacy_command_repr`` regex-extracts the inner text so old threads render cleanly. - **Inline feedback.** When loading the stream, the helper also pulls ``feedback_repo.list_by_thread_grouped`` and attaches ``run_id`` to every message plus ``feedback`` to the final ``ai_message`` of each run. This removes the frontend's need to fetch a second endpoint and positional-index-map its way back to the right run. 
When the feedback subsystem is unavailable, the ``feedback`` field is left absent entirely so the frontend hides the button rather than rendering it over a broken write path. - **User context.** ``DbRunEventStore`` is user-scoped by default via ``resolve_user_id(AUTO)``. The helper relies on the ``@require_permission`` decorator having populated the user contextvar on both callers; the docstring documents this dependency explicitly so nobody wires it into a CLI or migration script without passing ``user_id=None``. Real data verification against thread ``6d30913e-dcd4-41c8-8941-f66c716cf359``: checkpoint showed 12 messages (summarize-corrupted), event store had 16. The original human message ``"最新伊美局势"`` ("latest Iran-US situation") was preserved as seq=1 in the event store and correctly restored to position 0 in the helper output. Helper output for AI messages was byte-identical to checkpoint for every overlapping message; only tool_result ids differed (patched to uuid5) and the legacy Command repr at seq=48 was sanitized. Tests: - ``test_thread_state_event_store.py`` — 18 tests covering ``_sanitize_legacy_command_repr`` (passthrough, single/double-quote extraction, unparseable fallback), helper happy path (all message types, stable uuid5, store non-mutation), multi-page pagination, summarize regression (recovers pre-summarize messages), feedback attachment (per-run, multi-run threads, repo failure graceful), and dependency failure fallback to ``None``. Docs: - ``docs/superpowers/plans/2026-04-10-event-store-history.md`` — the implementation plan this commit realizes, with Task 1 revised after the evaluation findings (pagination, copy-on-read, Command wrap already landed in journal.py, frontend feedback pagination in the follow-up commit, Standard-mode follow-up noted). - ``docs/superpowers/specs/2026-04-11-runjournal-history-evaluation.md`` — the Claude + second-opinion evaluation document that drove the plan revisions (pagination bug, dict-mutation bug, feedback hidden bug, Command bug). - ``docs/superpowers/specs/2026-04-11-summarize-marker-design.md`` — design for a follow-up PR that visually marks summarize events in history, based on a verified ``adispatch_custom_event`` experiment (``trace=False`` middleware nodes can still forward the Pregel task config via explicit signature injection). Scope: Gateway mode only (``make dev-pro``). Standard mode (``make dev``) hits LangGraph Server directly and bypasses these endpoints; the summarize symptom is still present there and is tracked as a separate follow-up in the plan. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * refactor(feedback): inline feedback on history and drop positional mapping The old ``useThreadFeedback`` hook loaded ``GET /api/threads/{id}/messages?limit=200`` and built two parallel lookup tables: ``runIdByAiIndex`` (an ordinal array of run_ids for every ``ai_message``-typed event) and ``feedbackByRunId``. The render loop in ``message-list.tsx`` walked the AI messages in order, incrementing ``aiMessageIndex`` on each non-human message, and used that ordinal to look up the run_id and feedback. This shape had three latent bugs we could observe on real threads: 1. **Fetch was capped at 200 messages.** Long or tool-heavy threads silently dropped earlier entries from the map, so feedback buttons could be missing on messages they should own. 2.
**Ordinal mismatch.** The render loop counted every non-human message (including each intermediate ``ai_tool_call``), but ``runIdByAiIndex`` only pushed entries for ``event_type == "ai_message"``. A run with 3 tool_calls + 1 final AI message would push 1 entry while the render consumed 4 positions, so buttons mapped to the wrong positions across multi-run threads. 3. **Two parallel data paths.** The ``/history`` render path and the ``/messages`` feedback-lookup path could drift between an ``invalidateQueries`` call and the next refetch, producing transient mismatches. The previous commit moved the authoritative message source for history to the event store and added ``run_id`` + ``feedback`` inline on each message dict returned by ``_get_event_store_messages``. This commit aligns the frontend with that contract: - **Delete** ``useThreadFeedback``, ``ThreadFeedbackData``, ``runIdByAiIndex``, ``feedbackByRunId``, and ``fetchAllThreadMessages``. - **Introduce** ``useThreadMessageEnrichment`` that fetches ``POST /history?limit=1`` once, indexes the returned messages by ``message.id`` into a ``Map<id, {run_id, feedback?}>``, and invalidates on stream completion (``onFinish`` in ``useThreadStream``). Keying by ``message.id`` is stable across runs, tool_call chains, and summarize. - **Simplify** ``message-list.tsx`` to drop the ``aiMessageIndex`` counter and read ``enrichment?.get(msg.id)`` at each render step. - **Rewire** ``message-list-item.tsx`` so the feedback button renders when ``feedback !== undefined`` rather than when the message happens to be non-human. ``feedback`` is ``undefined`` for non-eligible messages (humans, non-final AI, tools), ``null`` for the final ai_message of an unrated run, and a ``FeedbackData`` object once rated — cleanly distinguishing "not eligible" from "eligible but unrated". ``/api/threads/{id}/messages`` is kept as a debug/export surface; no frontend code calls it anymore but the backend router is untouched. Validation: - ``pnpm check`` clean (0 errors, 1 pre-existing unrelated warning) - Live test on thread ``3d5dea4a`` after gateway restart confirmed the original user query is restored to position 0 and the feedback button behaves correctly on the final AI message. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(rebase): remove duplicate definitions and update stale module paths Rebase left duplicate function blocks in worker.py (triple human_message write causing 3x user messages in /history), deps.py, and prompt.py. Also update checkpointer imports from the old deerflow.agents.checkpointer path to deerflow.runtime.checkpointer, and clean up orphaned feedback props in the frontend message components. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(rebase): restore FeedbackButtons component and enrichment lost during rebase The FeedbackButtons component (defined inline in message-list-item.tsx) was introduced in commit 95df8d13 but lost during rebase. The previous rebase cleanup commit incorrectly removed the feedback/runId props and enrichment hook as "orphaned code" instead of restoring the missing component.
This commit restores: - FeedbackButtons component with thumbs up/down toggle and optimistic state - FeedbackData/upsertFeedback/deleteFeedback imports - feedback and runId props on MessageListItem - useThreadMessageEnrichment hook and entry lookup in message-list.tsx Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * feat(user-context): add DEFAULT_USER_ID and get_effective_user_id helper Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> * feat(paths): add user-aware path methods with optional user_id parameter Add _validate_user_id(), user_dir(), user_memory_file(), user_agent_memory_file() and optional keyword-only user_id parameter to all thread-related path methods. When user_id is provided, paths resolve under users/{user_id}/threads/{thread_id}/; when omitted, legacy layout is preserved for backward compatibility. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> * feat(memory): add user_id to MemoryStorage interface for per-user isolation Thread user_id through MemoryStorage.load/reload/save abstract methods and FileMemoryStorage, re-keying the in-memory cache from bare agent_name to a (user_id, agent_name) tuple to prevent cross-user cache collisions. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> * feat(memory): thread user_id through memory updater layer Add `user_id` keyword-only parameter to all public updater functions (_save_memory_to_file, get_memory_data, reload_memory_data, import_memory_data, clear_memory_data, create/delete/update_memory_fact) and regular keyword param to MemoryUpdater.update_memory + update_memory_from_conversation, propagating it to every storage load/save/reload call. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> * feat(memory): capture user_id at enqueue time for async-safe thread isolation Add user_id field to ConversationContext and MemoryUpdateQueue.add() so the user identity is stored explicitly at request time, before threading.Timer fires on a different thread where ContextVar values do not propagate. MemoryMiddleware.after_agent() now calls get_effective_user_id() at enqueue time and passes the value through to updater.update_memory(). Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> * feat(isolation): wire user_id through all Paths and memory callsites Pass user_id=get_effective_user_id() at every callsite that invokes Paths methods or memory functions, enabling per-user filesystem isolation throughout the harness and app layers. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> * feat(migration): add idempotent script for per-user data migration Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * docs: update CLAUDE.md and config docs for per-user isolation * feat(events): add pagination to list_messages_by_run on all store backends Replicates the existing before_seq/after_seq/limit cursor-pagination pattern from list_messages onto list_messages_by_run across the abstract interface, MemoryRunEventStore, JsonlRunEventStore, and DbRunEventStore. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> * feat(api): add GET /api/runs/{run_id}/messages with cursor pagination New endpoint resolves thread_id from the run record and delegates to RunEventStore.list_messages_by_run for cursor-based pagination. Ownership is enforced implicitly via RunStore.get() user filtering. 
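The before_seq/after_seq/limit cursor contract replicated above behaves roughly as follows. This is an illustrative sketch against an in-memory event list; the RunEvent shape and the (page, has_more) return are assumptions based on the surrounding commits, not the store's exact signature:

```python
# Sketch of the cursor-pagination pattern replicated onto list_messages_by_run.
from dataclasses import dataclass


@dataclass
class RunEvent:
    seq: int              # monotonically increasing per thread
    run_id: str
    event_type: str
    content: str | dict


def list_messages_by_run(
    events: list[RunEvent],
    run_id: str,
    *,
    limit: int = 100,
    before_seq: int | None = None,
    after_seq: int | None = None,
) -> tuple[list[RunEvent], bool]:
    """Return (page, has_more) for one run's message events.

    after_seq pages forward from a cursor; before_seq (or no cursor at all)
    pages backward, returning the newest `limit` records, matching the
    list_messages behavior described earlier.
    """
    rows = sorted((e for e in events if e.run_id == run_id), key=lambda e: e.seq)
    if after_seq is not None:
        rows = [e for e in rows if e.seq > after_seq]
        return rows[:limit], len(rows) > limit
    if before_seq is not None:
        rows = [e for e in rows if e.seq < before_seq]
    return rows[-limit:], len(rows) > limit
```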
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> * feat(api): add GET /api/runs/{run_id}/feedback Delegates to FeedbackRepository.list_by_run via the existing _resolve_run helper; includes tests for success, 404, empty list, and 503 (no DB). Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> * feat(api): retrofit cursor pagination onto GET /threads/{tid}/runs/{rid}/messages Replace bare list[dict] response with {data: [...], has_more: bool} envelope, forwarding limit/before_seq/after_seq query params to the event store. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * docs: add run-level API endpoints to CLAUDE.md routers table * refactor(threads): remove event-store message loader and feedback from state/history endpoints State and history endpoints now return messages purely from the checkpointer's channel_values. The _get_event_store_messages helper (which loaded the full event-store transcript with feedback attached) is removed along with its tests. Frontend will use the dedicated GET /api/runs/{run_id}/messages and /feedback endpoints instead. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(journal): fix flush, token tracking, and consolidate tests RunJournal fixes: - _flush_sync: retain events in buffer when no event loop instead of dropping them; worker's finally block flushes via async flush(). - on_llm_end: add tool_calls filter and caller=="lead_agent" guard for ai_message events; mark message IDs for dedup with record_llm_usage. - worker.py: persist completion data (tokens, message count) to RunStore in finally block. Model factory: - Auto-inject stream_usage=True for BaseChatOpenAI subclasses with custom api_base, so usage_metadata is populated in streaming responses. Test consolidation: - Delete test_phase2b_integration.py (redundant with existing tests). - Move DB-backed lifecycle test into test_run_journal.py. - Add tests for stream_usage injection in test_model_factory.py. - Clean up executor/task_tool dead journal references. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * feat(events): widen content type to str|dict in all store backends Allow event content to be a dict (for structured OpenAI-format messages) in addition to plain strings. Dict values are JSON-serialized for the DB backend and deserialized on read; memory and JSONL backends handle dicts natively. Trace truncation now serializes dicts to JSON before measuring. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(events): use metadata flag instead of heuristic for dict content detection Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * feat(converters): add LangChain-to-OpenAI message format converters Pure functions langchain_to_openai_message, langchain_to_openai_completion, langchain_messages_to_openai, and _infer_finish_reason for converting LangChain BaseMessage objects to OpenAI Chat Completions format, used by RunJournal for event storage. 15 unit tests added.
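In spirit, the converter maps LangChain message types onto OpenAI Chat Completions roles. A minimal sketch (illustrative only; the repo's langchain_to_openai_message handles more cases, and everything here is inferred from the commit messages):

```python
# Sketch of a LangChain -> OpenAI Chat Completions message converter.
import json

from langchain_core.messages import AIMessage, BaseMessage, ToolMessage

_ROLE_BY_TYPE = {"human": "user", "ai": "assistant", "system": "system", "tool": "tool"}


def to_openai_message(msg: BaseMessage) -> dict:
    out: dict = {
        "role": _ROLE_BY_TYPE.get(msg.type, msg.type),
        # Empty-list content is normalized to null, per the converter fix below.
        "content": msg.content if msg.content != [] else None,
    }
    if isinstance(msg, AIMessage) and msg.tool_calls:
        # OpenAI expects function arguments as a JSON string, not a dict.
        out["tool_calls"] = [
            {
                "id": tc["id"],
                "type": "function",
                "function": {"name": tc["name"], "arguments": json.dumps(tc["args"])},
            }
            for tc in msg.tool_calls
        ]
    if isinstance(msg, ToolMessage):
        out["tool_call_id"] = msg.tool_call_id
    return out
```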
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(converters): handle empty list content as null, clean up test Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * feat(events): human_message content uses OpenAI user message format Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> * feat(events): ai_message uses OpenAI format, add ai_tool_call message event - ai_message content now uses {"role": "assistant", "content": "..."} format - New ai_tool_call message event emitted when lead_agent LLM responds with tool_calls - ai_tool_call uses langchain_to_openai_message converter for consistent format - Both events include finish_reason in metadata ("stop" or "tool_calls") Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * feat(events): add tool_result message event with OpenAI tool message format Cache tool_call_id from on_tool_start keyed by run_id as fallback for on_tool_end, then emit a tool_result message event (role=tool, tool_call_id, content) after each successful tool completion. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> * feat(events): summary content uses OpenAI system message format Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * feat(events): replace llm_start/llm_end with llm_request/llm_response in OpenAI format Add on_chat_model_start to capture structured prompt messages as llm_request events. Replace llm_end trace events with llm_response using OpenAI Chat Completions format. Track llm_call_index to pair request/response events. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * feat(events): add record_middleware method for middleware trace events Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * test(events): add full run sequence integration test for OpenAI content format Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> * feat(events): align message events with checkpoint format and add middleware tag injection - Message events (ai_message, ai_tool_call, tool_result, human_message) now use BaseMessage.model_dump() format, matching LangGraph checkpoint values.messages - on_tool_end extracts tool_call_id/name/status from ToolMessage objects - on_tool_error now emits tool_result message events with error status - record_middleware uses middleware:{tag} event_type and middleware category - Summarization custom events use middleware:summarize category - TitleMiddleware injects middleware:title tag via get_config() inheritance - SummarizationMiddleware model bound with middleware:summarize tag - Worker writes human_message using HumanMessage.model_dump() Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * feat(threads): switch search endpoint to threads_meta table and sync title - POST /api/threads/search now queries threads_meta table directly, removing the two-phase Store + Checkpointer scan approach - Add ThreadMetaRepository.search() with metadata/status filters - Add ThreadMetaRepository.update_display_name() for title sync - Worker syncs checkpoint title to threads_meta.display_name on run completion - Map display_name to values.title in search response for API compatibility Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * feat(threads): history endpoint reads messages from event store - POST /api/threads/{thread_id}/history now combines two data sources: checkpointer for checkpoint_id, metadata, title, thread_data; event store for messages (complete history, not truncated by 
summarization) - Strip internal LangGraph metadata keys from response - Remove full channel_values serialization in favor of selective fields Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: remove duplicate optional-dependencies header in pyproject.toml Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(middleware): pass tagged config to TitleMiddleware ainvoke call Without the config, the middleware:title tag was not injected, causing the LLM response to be recorded as a lead_agent ai_message in run_events. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: resolve merge conflict in .env.example Keep both DATABASE_URL (from persistence-scaffold) and WECOM credentials (from main) after the merge. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(persistence): address review feedback on PR #1851 - Fix naive datetime.now() → datetime.now(UTC) in all ORM models - Fix seq race condition in DbRunEventStore.put() with FOR UPDATE and UNIQUE(thread_id, seq) constraint - Encapsulate _store access in RunManager.update_run_completion() - Deduplicate _store.put() logic in RunManager via _persist_to_store() - Add update_run_completion to RunStore ABC + MemoryRunStore - Wire follow_up_to_run_id through the full create path - Add error recovery to RunJournal._flush_sync() lost-event scenario - Add migration note for search_threads breaking change - Fix test_checkpointer_none_fix mock to set database=None Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * chore: update uv.lock Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(persistence): address 22 review comments from CodeQL, Copilot, and Code Quality Bug fixes: - Sanitize log params to prevent log injection (CodeQL) - Reset threads_meta.status to idle/error when run completes - Attach messages only to latest checkpoint in /history response - Write threads_meta on POST /threads so new threads appear in search Lint fixes: - Remove unused imports (journal.py, migrations/env.py, test_converters.py) - Convert lambda to named function (engine.py, Ruff E731) - Remove unused logger definitions in repos (Ruff F841) - Add logging to JSONL decode errors and empty except blocks - Separate assert side-effects in tests (CodeQL) - Remove unused local variables in tests (Ruff F841) - Fix max_trace_content truncation to use byte length, not char length Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * style: apply ruff format to persistence and runtime files Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * Potential fix for pull request finding 'Statement has no effect' Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com> * refactor(runtime): introduce RunContext to reduce run_agent parameter bloat Extract checkpointer, store, event_store, run_events_config, thread_meta_repo, and follow_up_to_run_id into a frozen RunContext dataclass. Add get_run_context() in deps.py to build the base context from app.state singletons. start_run() uses dataclasses.replace() to enrich per-run fields before passing ctx to run_agent. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * refactor(gateway): move sanitize_log_param to app/gateway/utils.py Extract the log-injection sanitizer from routers/threads.py into a shared utils module and rename to sanitize_log_param (public API). 
Eliminates the reverse service → router import in services.py. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * perf: use SQL aggregation for feedback stats and thread token usage Replace Python-side counting in FeedbackRepository.aggregate_by_run with a single SELECT COUNT/SUM query. Add RunStore.aggregate_tokens_by_thread abstract method with SQL GROUP BY implementation in RunRepository and Python fallback in MemoryRunStore. Simplify the thread_token_usage endpoint to delegate to the new method, eliminating the limit=10000 truncation risk. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * docs: annotate DbRunEventStore.put() as low-frequency path Add docstring clarifying that put() opens a per-call transaction with FOR UPDATE and should only be used for infrequent writes (currently just the initial human_message event). High-throughput callers should use put_batch() instead. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(threads): fall back to Store search when ThreadMetaRepository is unavailable When database.backend=memory (default) or no SQL session factory is configured, search_threads now queries the LangGraph Store instead of returning 503. Returns empty list if neither Store nor repo is available. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * refactor(persistence): introduce ThreadMetaStore ABC for backend-agnostic thread metadata Add ThreadMetaStore abstract base class with create/get/search/update/delete interface. ThreadMetaRepository (SQL) now inherits from it. New MemoryThreadMetaStore wraps LangGraph BaseStore for memory-mode deployments. deps.py now always provides a non-None thread_meta_repo, eliminating all `if thread_meta_repo is not None` guards in services.py, worker.py, and routers/threads.py. search_threads no longer needs a Store fallback branch. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * refactor(history): read messages from checkpointer instead of RunEventStore The /history endpoint now reads messages directly from the checkpointer's channel_values (the authoritative source) instead of querying RunEventStore.list_messages(). The RunEventStore API is preserved for other consumers. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(persistence): address new Copilot review comments - feedback.py: validate thread_id/run_id before deleting feedback - jsonl.py: add path traversal protection with ID validation - run_repo.py: parse `before` to datetime for PostgreSQL compat - thread_meta_repo.py: fix pagination when metadata filter is active - database_config.py: use resolve_path for sqlite_dir consistency Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * Implement skill self-evolution and skill_manage flow (#1874) * chore: ignore .worktrees directory * Add skill_manage self-evolution flow * Fix CI regressions for skill_manage * Address PR review feedback for skill evolution * fix(skill-evolution): preserve history on delete * fix(skill-evolution): tighten scanner fallbacks * docs: add skill_manage e2e evidence screenshot * fix(skill-manage): avoid blocking fs ops in session runtime --------- Co-authored-by: Willem Jiang <willem.jiang@gmail.com> * fix(config): resolve sqlite_dir relative to CWD, not Paths.base_dir resolve_path() resolves relative to Paths.base_dir (.deer-flow), which double-nested the path to .deer-flow/.deer-flow/data/app.db. Use Path.resolve() (CWD-relative) instead. 
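The aggregate_tokens_by_thread method added in the perf commit above pushes the COUNT/SUM into a single statement instead of paging run rows into Python. A minimal sketch with an abbreviated RunRow (column names are assumptions inferred from the commit messages, not the repo's schema):

```python
# Sketch of SQL-side token aggregation for one thread.
from sqlalchemy import Integer, String, func, select
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class RunRow(Base):  # trimmed to the columns the aggregation needs
    __tablename__ = "runs"
    run_id: Mapped[str] = mapped_column(String(64), primary_key=True)
    thread_id: Mapped[str] = mapped_column(String(64), index=True)
    input_tokens: Mapped[int] = mapped_column(Integer, default=0)
    output_tokens: Mapped[int] = mapped_column(Integer, default=0)


async def aggregate_tokens_by_thread(
    sf: async_sessionmaker[AsyncSession], thread_id: str
) -> dict[str, int]:
    """Let the database sum token counts; no limit=10000 truncation risk."""
    async with sf() as session:
        stmt = select(
            func.count(RunRow.run_id),
            func.coalesce(func.sum(RunRow.input_tokens), 0),
            func.coalesce(func.sum(RunRow.output_tokens), 0),
        ).where(RunRow.thread_id == thread_id)
        runs, inp, out = (await session.execute(stmt)).one()
        return {"runs": runs, "input_tokens": inp, "output_tokens": out}
```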
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * Feature/feishu receive file (#1608) * feat(feishu): add channel file materialization hook for inbound messages - Introduce Channel.receive_file(msg, thread_id) as a base method for file materialization; default is no-op. - Implement FeishuChannel.receive_file to download files/images from Feishu messages, save to sandbox, and inject virtual paths into msg.text. - Update ChannelManager to call receive_file for any channel if msg.files is present, enabling downstream model access to user-uploaded files. - No impact on Slack/Telegram or other channels (they inherit the default no-op). * style(backend): format code with ruff for lint compliance - Auto-formatted packages/harness/deerflow/agents/factory.py and tests/test_create_deerflow_agent.py using `ruff format` - Ensured both files conform to project linting standards - Fixes CI lint check failures caused by code style issues * fix(feishu): handle file write operation asynchronously to prevent blocking * fix(feishu): rename GetMessageResourceRequest to _GetMessageResourceRequest and remove redundant code * test(feishu): add tests for receive_file method and placeholder replacement * fix(manager): remove unnecessary type casting for channel retrieval * fix(feishu): update logging messages to reflect resource handling instead of image * fix(feishu): sanitize filename by replacing invalid characters in file uploads * fix(feishu): improve filename sanitization and reorder image key handling in message processing * fix(feishu): add thread lock to prevent filename conflicts during file downloads * fix(test): correct bad merge in test_feishu_parser.py * chore: run ruff and apply formatting cleanup fix(feishu): preserve rich-text attachment order and improve fallback filename handling * fix(docker): restore gateway env vars and fix langgraph empty arg issue (#1915) Two production docker-compose.yaml bugs prevent `make up` from working: 1. Gateway missing DEER_FLOW_CONFIG_PATH and DEER_FLOW_EXTENSIONS_CONFIG_PATH environment overrides. Added in fb2d99f (#1836) but accidentally reverted by ca2fb95 (#1847). Without them, gateway reads host paths from .env via env_file, causing FileNotFoundError inside the container. 2. Langgraph command fails when LANGGRAPH_ALLOW_BLOCKING is unset (default). Empty $${allow_blocking} inserts a bare space between flags, causing ' --no-reload' to be parsed as an unexpected extra argument. Fix by building args string first and conditionally appending --allow-blocking. * fix(frontend): resolve invalid HTML nesting and tabnabbing vulnerabilities (#1904) * fix(frontend): resolve invalid HTML nesting and tabnabbing vulnerabilities Fix `<button>` inside `<a>` invalid HTML in artifact components and add missing `noopener,noreferrer` to `window.open` calls to prevent reverse tabnabbing. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(frontend): address Copilot review on tabnabbing and double-tab-open Remove redundant parent onClick on web_fetch ChainOfThoughtStep to prevent opening two tabs on link click, and explicitly null out window.opener after window.open() for defensive tabnabbing hardening. --------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> * refactor(persistence): organize entities into per-entity directories Restructure the persistence layer from horizontal "models/ + repositories/" split into vertical entity-aligned directories.
Each entity (thread_meta, run, feedback) now owns its ORM model, abstract interface (where applicable), and concrete implementations under a single directory with an aggregating __init__.py for one-line imports. Layout: persistence/thread_meta/{base,model,sql,memory}.py persistence/run/{model,sql}.py persistence/feedback/{model,sql}.py models/__init__.py is kept as a facade so Alembic autogenerate continues to discover all ORM tables via Base.metadata. RunEventRow remains under models/run_event.py because its storage implementation lives in runtime/events/store/db.py and has no matching repository directory. The repositories/ directory is removed entirely. All call sites in gateway/deps.py and tests are updated to import from the new entity packages, e.g.: from deerflow.persistence.thread_meta import ThreadMetaRepository from deerflow.persistence.run import RunRepository from deerflow.persistence.feedback import FeedbackRepository Full test suite passes (1690 passed, 14 skipped). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(gateway): sync thread rename and delete through ThreadMetaStore The POST /threads/{id}/state endpoint previously synced title changes only to the LangGraph Store via _store_upsert. In sqlite mode the search endpoint reads from the ThreadMetaRepository SQL table, so renames never appeared in /threads/search until the next agent run completed (worker.py syncs title from checkpoint to thread_meta in its finally block). Likewise the DELETE /threads/{id} endpoint cleaned up the filesystem, Store, and checkpointer but left the threads_meta row orphaned in sqlite, so deleted threads kept appearing in /threads/search. Fix both endpoints by routing through the ThreadMetaStore abstraction which already has the correct sqlite/memory implementations wired up by deps.py. The rename path now calls update_display_name() and the delete path calls delete() — both work uniformly across backends. Verified end-to-end with curl in gateway mode against sqlite backend. Existing test suite (1690 passed) and focused router/repo tests pass. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * refactor(gateway): route all thread metadata access through ThreadMetaStore Following the rename/delete bug fix in PR1, migrate the remaining direct LangGraph Store reads/writes in the threads router and services to the ThreadMetaStore abstraction so that the sqlite and memory backends behave identically and the legacy dual-write paths can be removed. Migrated endpoints (threads.py): - create_thread: idempotency check + write now use thread_meta_repo.get/create instead of dual-writing the LangGraph Store and the SQL row. - get_thread: reads from thread_meta_repo.get; the checkpoint-only fallback for legacy threads is preserved. - patch_thread: replaced _store_get/_store_put with thread_meta_repo.update_metadata. - delete_thread_data: dropped the legacy store.adelete; thread_meta_repo.delete already covers it. Removed dead code (services.py): - _upsert_thread_in_store — redundant with the immediately following thread_meta_repo.create() call. - _sync_thread_title_after_run — worker.py's finally block already syncs the title via thread_meta_repo.update_display_name() after each run. Removed dead code (threads.py): - _store_get / _store_put / _store_upsert helpers (no remaining callers). - THREADS_NS constant. - get_store import (router no longer touches the LangGraph Store directly). 
New abstract method: - ThreadMetaStore.update_metadata(thread_id, metadata) merges metadata into the thread's metadata field. Implemented in both ThreadMetaRepository (SQL, read-modify-write inside one session) and MemoryThreadMetaStore. Three new unit tests cover merge / empty / nonexistent behaviour. Net change: -134 lines. Full test suite: 1693 passed, 14 skipped. Verified end-to-end with curl in gateway mode against sqlite backend (create / patch / get / rename / search / delete). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com> Co-authored-by: DanielWalnut <45447813+hetaoBackend@users.noreply.github.com> Co-authored-by: Willem Jiang <willem.jiang@gmail.com> Co-authored-by: JilongSun <965640067@qq.com> Co-authored-by: jie <49781832+stan-fu@users.noreply.github.com> Co-authored-by: cooper <cooperfu@tencent.com> Co-authored-by: yangzheli <43645580+yangzheli@users.noreply.github.com> * feat(auth): release-validation pass for 2.0-rc — 12 blockers + simplify follow-ups (#2008) * feat(auth): introduce backend auth module Port RFC-001 authentication core from PR #1728: - JWT token handling (create_access_token, decode_token, TokenPayload) - Password hashing (bcrypt) with verify_password - SQLite UserRepository with base interface - Provider Factory pattern (LocalAuthProvider) - CLI reset_admin tool - Auth-specific errors (AuthErrorCode, TokenError, AuthErrorResponse) Deps: - bcrypt>=4.0.0 - pyjwt>=2.9.0 - email-validator>=2.0.0 - backend/uv.toml pins public PyPI index Tests: 12 pure unit tests (test_auth_config.py, test_auth_errors.py). Scope note: authz.py, test_auth.py, and test_auth_type_system.py are deferred to commit 2 because they depend on middleware and deps wiring that is not yet in place. Commit 1 stays "pure new files only" as the spec mandates. 
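The JWT core ported above pairs a create with a strict decode. A minimal PyJWT sketch; the claim set, expiry policy, and the SECRET/ALGORITHM constants are illustrative assumptions, not the module's actual code:

```python
# Sketch of create_access_token / decode_token on PyJWT.
from datetime import datetime, timedelta, timezone

import jwt  # pyjwt>=2.9.0, as pinned by the commit

SECRET = "change-me"   # loaded from config in the real module
ALGORITHM = "HS256"


def create_access_token(user_id: str, *, expires_minutes: int = 60) -> str:
    now = datetime.now(timezone.utc)
    payload = {"sub": user_id, "iat": now, "exp": now + timedelta(minutes=expires_minutes)}
    return jwt.encode(payload, SECRET, algorithm=ALGORITHM)


def decode_token(token: str) -> dict:
    """Strict decode: raises jwt.ExpiredSignatureError / jwt.InvalidTokenError,
    which a middleware can map to token_expired / token_invalid error codes."""
    return jwt.decode(token, SECRET, algorithms=[ALGORITHM])
```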
* feat(auth): wire auth end-to-end (middleware + frontend replacement) Backend: - Port auth_middleware, csrf_middleware, langgraph_auth, routers/auth - Port authz decorator (owner_filter_key defaults to 'owner_id') - Merge app.py: register AuthMiddleware + CSRFMiddleware + CORS, add _ensure_admin_user lifespan hook, _migrate_orphaned_threads helper, register auth router - Merge deps.py: add get_local_provider, get_current_user_from_request, get_optional_user_from_request; keep get_current_user as thin str|None adapter for feedback router - langgraph.json: add auth path pointing to langgraph_auth.py:auth - Rename metadata['user_id'] -> metadata['owner_id'] in langgraph_auth (both metadata write and LangGraph filter dict) + test fixtures Frontend: - Delete better-auth library and api catch-all route - Remove better-auth npm dependency and env vars (BETTER_AUTH_SECRET, BETTER_AUTH_GITHUB_*) from env.js - Port frontend/src/core/auth/* (AuthProvider, gateway-config, proxy-policy, server-side getServerSideUser, types) - Port frontend/src/core/api/fetcher.ts - Port (auth)/layout, (auth)/login, (auth)/setup pages - Rewrite workspace/layout.tsx as server component that calls getServerSideUser and wraps in AuthProvider - Port workspace/workspace-content.tsx for the client-side sidebar logic Tests: - Port 5 auth test files (test_auth, test_auth_middleware, test_auth_type_system, test_ensure_admin, test_langgraph_auth) - 176 auth tests PASS After this commit: login/logout/registration flow works, but persistence layer does not yet filter by owner_id. Commit 4 closes that gap. * feat(auth): account settings page + i18n - Port account-settings-page.tsx (change password, change email, logout) - Wire into settings-dialog.tsx as new "account" section with UserIcon, rendered first in the section list - Add i18n keys: - en-US/zh-CN: settings.sections.account ("Account" / "账号") - en-US/zh-CN: button.logout ("Log out" / "退出登录") - types.ts: matching type declarations * feat(auth): enforce owner_id across 2.0-rc persistence layer Add request-scoped contextvar-based owner filtering to threads_meta, runs, run_events, and feedback repositories. Router code is unchanged — isolation is enforced at the storage layer so that any caller that forgets to pass owner_id still gets filtered results, and new routes cannot accidentally leak data. Core infrastructure ------------------- - deerflow/runtime/user_context.py (new): - ContextVar[CurrentUser | None] with default None - runtime_checkable CurrentUser Protocol (structural subtype with .id) - set/reset/get/require helpers - AUTO sentinel + resolve_owner_id(value, method_name) for sentinel three-state resolution: AUTO reads contextvar, explicit str overrides, explicit None bypasses the filter (for migration/CLI) Repository changes ------------------ - ThreadMetaRepository: create/get/search/update_*/delete gain owner_id=AUTO kwarg; read paths filter by owner, writes stamp it, mutations check ownership before applying - RunRepository: put/get/list_by_thread/delete gain owner_id=AUTO kwarg - FeedbackRepository: create/get/list_by_run/list_by_thread/delete gain owner_id=AUTO kwarg - DbRunEventStore: list_messages/list_events/list_messages_by_run/ count_messages/delete_by_thread/delete_by_run gain owner_id=AUTO kwarg. 
Write paths (put/put_batch) read contextvar softly: when a request-scoped user is available, owner_id is stamped; background worker writes without a user context pass None, which is valid (orphan row to be bound by migration) Schema ------ - persistence/models/run_event.py: RunEventRow.owner_id: Mapped[str | None] = mapped_column(String(64), nullable=True, index=True) - No alembic migration needed: 2.0 ships fresh, Base.metadata.create_all picks up the new column automatically Middleware ---------- - auth_middleware.py: after cookie check, call get_optional_user_from_request to load the real User, stamp it into request.state.user AND the contextvar via set_current_user, reset in a try/finally. Public paths and unauthenticated requests continue without contextvar, and @require_auth handles the strict 401 path Test infrastructure ------------------- - tests/conftest.py: @pytest.fixture(autouse=True) _auto_user_context sets a default SimpleNamespace(id="test-user-autouse") on every test unless marked @pytest.mark.no_auto_user. Keeps existing 20+ persistence tests passing without modification - pyproject.toml [tool.pytest.ini_options]: register no_auto_user marker so pytest does not emit warnings for opt-out tests - tests/test_user_context.py: 6 tests covering three-state semantics, Protocol duck typing, and require/optional APIs - tests/test_thread_meta_repo.py: one test updated to pass owner_id=None explicitly where it was previously relying on the old default Test results ------------ - test_user_context.py: 6 passed - test_auth*.py + test_langgraph_auth.py + test_ensure_admin.py: 127 passed - test_run_event_store / test_run_repository / test_thread_meta_repo / test_feedback: 92 passed - Full backend suite: 1905 passed, 2 failed (both @requires_llm flaky integration tests unrelated to auth), 1 skipped * feat(auth): extend orphan migration to 2.0-rc persistence tables _ensure_admin_user now runs a three-step pipeline on every boot: Step 1 (fatal): admin user exists / is created / password is reset Step 2 (non-fatal): LangGraph store orphan threads → admin Step 3 (non-fatal): SQL persistence tables → admin - threads_meta - runs - run_events - feedback Each step is idempotent. The fatal/non-fatal split mirrors PR #1728's original philosophy: admin creation failure blocks startup (the system is unusable without an admin), whereas migration failures log a warning and let the service proceed (a partial migration is recoverable; a missing admin is not). Key helpers ----------- - _iter_store_items(store, namespace, *, page_size=500): async generator that cursor-paginates across LangGraph store pages. Fixes PR #1728's hardcoded limit=1000 bug that would silently lose orphans beyond the first page. - _migrate_orphaned_threads(store, admin_user_id): Rewritten to use _iter_store_items. Returns the migrated count so the caller can log it; raises only on unhandled exceptions. - _migrate_orphan_sql_tables(admin_user_id): Imports the 4 ORM models lazily, grabs the shared session factory, runs one UPDATE per table in a single transaction, commits once. No-op when no persistence backend is configured (in-memory dev).
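The _iter_store_items helper described above can be sketched as an async generator that keeps paging until it sees a short page; the offset-based asearch loop is an assumption about how the LangGraph BaseStore is consumed here, not the repo's exact code:

```python
# Sketch of page-at-a-time iteration over a LangGraph store namespace,
# replacing a single hardcoded limit=1000 call.
from collections.abc import AsyncIterator

from langgraph.store.base import BaseStore, Item


async def iter_store_items(
    store: BaseStore, namespace: tuple[str, ...], *, page_size: int = 500
) -> AsyncIterator[Item]:
    """Yield every item under `namespace`, however many pages it spans."""
    offset = 0
    while True:
        page = await store.asearch(namespace, limit=page_size, offset=offset)
        for item in page:
            yield item
        if len(page) < page_size:   # short page means we just read the last one
            return
        offset += page_size
```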
Tests: test_ensure_admin.py (8 passed) * test(auth): port AUTH test plan docs + lint/format pass - Port backend/docs/AUTH_TEST_PLAN.md and AUTH_UPGRADE.md from PR #1728 - Rename metadata.user_id → metadata.owner_id in AUTH_TEST_PLAN.md (4 occurrences from the original PR doc) - ruff auto-fix UP037 in sentinel type annotations: drop quotes around "str | None | _AutoSentinel" now that from __future__ import annotations makes them implicit string forms - ruff format: 2 files (app/gateway/app.py, runtime/user_context.py) Note on test coverage additions: - conftest.py autouse fixture was already added in commit 4 (had to be co-located with the repository changes to keep pre-existing persistence tests passing) - cross-user isolation E2E tests (test_owner_isolation.py) deferred — enforcement is already proven by the 98-test repository suite via the autouse fixture + explicit _AUTO sentinel exercises - New test cases (TC-API-17..20, TC-ATK-13, TC-MIG-01..07) listed in AUTH_TEST_PLAN.md are deferred to a follow-up PR — they are manual-QA test cases rather than pytest code, and the spec-level coverage is already met by test_user_context.py + the 98-test repository suite. Final test results: - Auth suite (test_auth*, test_langgraph_auth, test_ensure_admin, test_user_context): 186 passed - Persistence suite (test_run_event_store, test_run_repository, test_thread_meta_repo, test_feedback): 98 passed - Lint: ruff check + ruff format both clean * test(auth): add cross-user isolation test suite 10 tests exercising the storage-layer owner filter by manually switching the user_context contextvar between two users. Verifies the safety invariant: After a repository write with owner_id=A, a subsequent read with owner_id=B must not return the row, and vice versa. Covers all 4 tables that own user-scoped data: TC-API-17 threads_meta — read, search, update, delete cross-user TC-API-18 runs — get, list_by_thread, delete cross-user TC-API-19 run_events — list_messages, list_events, count_messages, delete_by_thread (CRITICAL: raw conversation content leak vector) TC-API-20 feedback — get, list_by_run, delete cross-user Plus two meta-tests verifying the sentinel pattern itself: - AUTO + unset contextvar raises RuntimeError - explicit owner_id=None bypasses the filter (migration escape hatch) Architecture note ----------------- These tests bypass the HTTP layer by design. The full chain (cookie → middleware → contextvar → repository) is covered piecewise: - test_auth_middleware.py: middleware sets contextvar from cookies - test_owner_isolation.py: repositories enforce isolation when contextvar is set to different users Together they prove the end-to-end safety property without the ceremony of spinning up a full TestClient + in-memory DB for every router endpoint. Tests pass: 231 (full auth + persistence + isolation suite) Lint: clean * refactor(auth): migrate user repository to SQLAlchemy ORM Move the users table into the shared persistence engine so auth matches the pattern of threads_meta, runs, run_events, and feedback — one engine, one session factory, one schema init codepath. 
New files --------- - persistence/user/__init__.py, persistence/user/model.py: UserRow ORM class with partial unique index on (oauth_provider, oauth_id) - Registered in persistence/models/__init__.py so Base.metadata.create_all() picks it up Modified -------- - auth/repositories/sqlite.py: rewritten as async SQLAlchemy, identical constructor pattern to the other four repositories (def __init__(self, session_factory) + self._sf = session_factory) - auth/config.py: drop users_db_path field — storage is configured through config.database like every other table - deps.py/get_local_provider: construct SQLiteUserRepository with the shared session factory, fail fast if engine is not initialised - tests/test_auth.py: rewrite test_sqlite_round_trip_new_fields to use the shared engine (init_engine + close_engine in a tempdir) - tests/test_auth_type_system.py: add per-test autouse fixture that spins up a scratch engine and resets deps._cached_* singletons * refactor(auth): remove SQL orphan migration (unused in supported scenarios) The _migrate_orphan_sql_tables helper existed to bind NULL owner_id rows in threads_meta, runs, run_events, and feedback to the admin on first boot. But in every supported upgrade path, it's a no-op: 1. Fresh install: create_all builds fresh tables, no legacy rows 2. No-auth → with-auth (no existing persistence DB): persistence tables are created fresh by create_all, no legacy rows 3. No-auth → with-auth (has existing persistence DB from #1930): NOT a supported upgrade path — "existing DB to existing DB" schema evolution is out of scope; users wipe the DB or run a manual ALTER So the SQL orphan migration never has anything to do in the supported matrix. Delete the function, simplify _ensure_admin_user from a 3-step pipeline to a 2-step one (admin creation + LangGraph store orphan migration only). LangGraph store orphan migration stays: it serves the real "no-auth → with-auth" upgrade path where a user's existing LangGraph thread metadata has no owner_id field and needs to be stamped with the newly-created admin's id. Tests: 284 passed (auth + persistence + isolation) Lint: clean * security(auth): write initial admin password to 0600 file instead of logs CodeQL py/clear-text-logging-sensitive-data flagged 3 call sites that logged the auto-generated admin password to stdout via logger.info(). Production log aggregators (ELK/Splunk/etc) would have captured those cleartext secrets. Replace with a shared helper that writes to .deer-flow/admin_initial_credentials.txt with mode 0600, and log only the path. New file -------- - app/gateway/auth/credential_file.py: write_initial_credentials() helper. Takes email, password, and an "initial"/"reset" label. Creates .deer-flow/ if missing, writes a header comment plus the email+password, chmods 0o600, returns the absolute Path. Modified -------- - app/gateway/app.py: both _ensure_admin_user paths (fresh creation + needs_setup password reset) now write to file and log the path - app/gateway/auth/reset_admin.py: rewritten to use the shared ORM repo (SQLiteUserRepository with session_factory) and the credential_file helper. The previous implementation was broken after the earlier ORM refactor — it still imported _get_users_conn and constructed SQLiteUserRepository() without a session factory. No tests changed — the three password-log sites are all exercised via existing test_ensure_admin.py which checks that startup succeeds, not that a specific string appears in logs. CodeQL alerts 272, 283, 284: all resolved.
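The credential_file.py helper above boils down to making sure the secret only ever lands in a 0600 file. A minimal sketch (header wording and exact path handling are assumptions from the commit text; this version creates the file with the restrictive mode up front rather than chmod-ing after the write):

```python
# Sketch of the write-credentials-to-0600-file pattern.
import os
from pathlib import Path


def write_initial_credentials(email: str, password: str, label: str = "initial") -> Path:
    """Write admin credentials to a 0600 file and return its absolute path,
    so the secret never reaches stdout or a log aggregator."""
    directory = Path(".deer-flow")
    directory.mkdir(parents=True, exist_ok=True)
    path = directory / "admin_initial_credentials.txt"
    body = (
        f"# DeerFlow {label} admin credentials. Delete this file after first login.\n"
        f"email: {email}\n"
        f"password: {password}\n"
    )
    # Create with mode 0600 at open time so the file is never world-readable,
    # even briefly, between creation and a later chmod.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w", encoding="utf-8") as fh:
        fh.write(body)
    return path.resolve()
```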
* security(auth): strict JWT validation in middleware (fix junk cookie bypass) AUTH_TEST_PLAN test 7.5.8 expects junk cookies to be rejected with 401. The previous middleware behaviour was "presence-only": check that some access_token cookie exists, then pass through. In combination with my Task-12 decision to skip @require_auth decorators on routes, this created a gap where a request with any cookie-shaped string (e.g. access_token=not-a-jwt) would bypass authentication on routes that do not touch the repository (/api/models, /api/mcp/config, /api/memory, /api/skills, …). Fix: middleware now calls get_current_user_from_request() strictly and catches the resulting HTTPException to render a 401 with the proper fine-grained error code (token_invalid, token_expired, user_not_found, …). On success it stamps request.state.user and the contextvar so repository-layer owner filters work downstream. The 4 old "_with_cookie_passes" tests in test_auth_middleware.py were written for the presence-only behaviour; they asserted that a junk cookie would make the handler return 200. They are renamed to "_with_junk_cookie_rejected" and their assertions flipped to 401. The negative path (no cookie → 401 not_authenticated) is unchanged. Verified: no cookie → 401 not_authenticated junk cookie → 401 token_invalid (the fixed bug) expired cookie → 401 token_expired Tests: 284 passed (auth + persistence + isolation) Lint: clean * security(auth): wire @require_permission(owner_check=True) on isolation routes Apply the require_permission decorator to all 28 routes that take a {thread_id} path parameter. Combined with the strict middleware (previous commit), this gives the double-layer protection that AUTH_TEST_PLAN test 7.5.9 documents: Layer 1 (AuthMiddleware): cookie + JWT validation, rejects junk cookies and stamps request.state.user Layer 2 (@require_permission with owner_check=True): per-resource ownership verification via ThreadMetaStore.check_access — returns 404 if a different user owns the thread The decorator's owner_check branch is rewritten to use the SQL thread_meta_repo (the 2.0-rc persistence layer) instead of the LangGraph store path that PR #1728 used (_store_get / get_store in routers/threads.py). The inject_record convenience is dropped — no caller in 2.0 needs the LangGraph blob, and the SQL repo has a different shape. Routes decorated (28 total): - threads.py: delete, patch, get, get-state, post-state, post-history - thread_runs.py: post-runs, post-runs-stream, post-runs-wait, list_runs, get_run, cancel_run, join_run, stream_existing_run, list_thread_messages, list_run_messages, list_run_events, thread_token_usage - feedback.py: create, list, stats, delete - uploads.py: upload (added Request param), list, delete - artifacts.py: get_artifact - suggestions.py: generate (renamed body parameter to avoid conflict with FastAPI Request) Test fixes: - test_suggestions_router.py: bypass the decorator via __wrapped__ (the unit tests cover parsing logic, not auth — no point spinning up a thread_meta_repo just to test JSON unwrapping) - test_auth_middleware.py 4 fake-cookie tests: already updated in the previous commit (745bf432) Tests: 293 passed (auth + persistence + isolation + suggestions) Lint: clean * security(auth): defense-in-depth fixes from release validation pass Eight findings caught while running the AUTH_TEST_PLAN end-to-end against the deployed sg_dev stack. Each is a pre-condition for shipping release/2.0-rc that the previous PRs missed. 
* security(auth): defense-in-depth fixes from release validation pass

Eight findings caught while running the AUTH_TEST_PLAN end-to-end against the
deployed sg_dev stack. Each is a pre-condition for shipping release/2.0-rc
that the previous PRs missed.

Backend hardening
- routers/auth.py: the rate limiter's X-Real-IP handling now requires an
  AUTH_TRUSTED_PROXIES whitelist (CIDR/IP allowlist). Without nginx in front,
  the previous code honored an arbitrary X-Real-IP, letting an attacker
  rotate the header to fully bypass the per-IP login lockout.
- routers/auth.py: 36-entry common-password blocklist via a Pydantic
  field_validator on RegisterRequest + ChangePasswordRequest. The shared
  _validate_strong_password helper keeps the constraint in one place.
- routers/threads.py: ThreadCreateRequest + ThreadPatchRequest strip
  server-reserved metadata keys (owner_id, user_id) via a Pydantic
  field_validator so a forged value can never round-trip back to other
  clients reading the same thread. The actual ownership invariant stays on
  the threads_meta row; this closes the metadata-blob echo gap.
- authz.py + thread_meta/sql.py: require_permission gains a require_existing
  flag plumbed through check_access(require_existing=True). Destructive
  routes (DELETE/PATCH/state-update/runs/feedback) now treat a missing
  thread_meta row as 404 instead of "untracked legacy thread, allow", closing
  the cross-user delete-idempotence gap where any user could successfully
  DELETE another user's deleted thread.
- repositories/sqlite.py + base.py: update_user raises UserNotFoundError on a
  vanished row instead of silently returning the input. A concurrent delete
  during a password reset can no longer look like a successful update.
- runtime/user_context.py: resolve_owner_id() coerces User.id (UUID) to str
  at the contextvar boundary so SQLAlchemy String(64) columns can bind it.
  The whole 2.0-rc isolation pipeline was previously broken end-to-end
  (POST /api/threads → 500 "type 'UUID' is not supported").
- persistence/engine.py: a SQLAlchemy listener enables PRAGMA
  journal_mode=WAL, synchronous=NORMAL, foreign_keys=ON on every new SQLite
  connection (see the sketch below). TC-UPG-06 in the test plan expects WAL;
  the previous code shipped with the default 'delete' journal.
- auth_middleware.py: stamp request.state.auth = AuthContext(...) so
  @require_permission's short-circuit fires; previously every isolation
  request did a duplicate JWT decode + users SELECT. Also unifies the 401
  payload through AuthErrorResponse(...).model_dump().
- app.py: the _ensure_admin_user restructure removes the noqa F821 scoping
  bug where 'password' was referenced outside the branch that defined it. A
  new _announce_credentials helper absorbs the duplicate log block in the
  fresh-admin and reset-admin branches.
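The PRAGMA wiring is the standard SQLAlchemy connect-listener pattern; a
sketch under assumed names (URL and function name illustrative):

    from sqlalchemy import event
    from sqlalchemy.ext.asyncio import create_async_engine

    engine = create_async_engine("sqlite+aiosqlite:///.deer-flow/data/deerflow.db")

    @event.listens_for(engine.sync_engine, "connect")
    def _set_sqlite_pragmas(dbapi_connection, connection_record):
        # Runs once per new pooled connection, before it is handed out.
        cursor = dbapi_connection.cursor()
        cursor.execute("PRAGMA journal_mode=WAL")    # concurrent readers + one writer
        cursor.execute("PRAGMA synchronous=NORMAL")  # safe under WAL, much faster
        cursor.execute("PRAGMA foreign_keys=ON")     # SQLite defaults this off
        cursor.close()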
* fix(frontend+nginx): rollout CSRF on every state-changing client path

The frontend was 100% broken in gateway-pro mode for any user trying to open
a specific chat thread. Three cumulative bugs each silently masked the next.

LangGraph SDK CSRF gap (api-client.ts)
- The Client constructor took only apiUrl — no defaultHeaders, no fetch
  interceptor. The SDK's internal fetch never sent X-CSRF-Token, so every
  state-changing /api/langgraph-compat/* call (runs/stream, threads/search,
  threads/{tid}/history, ...) hit CSRFMiddleware and got a 403 before
  reaching the auth check. UI symptom: an empty thread page with no error
  message; the SPA's hooks swallowed the rejection.
- Fix: pass an onRequest hook that injects X-CSRF-Token from the csrf_token
  cookie per request. Reading the cookie per call (not at construction time)
  handles login/logout/password-change cookie rotation transparently. The
  SDK's prepareFetchOptions calls onRequest for both regular requests AND
  streaming/SSE/reconnect, so the same hook covers runs.stream and
  runs.joinStream.

Raw fetch CSRF gap (7 files)
- Audit: 11 frontend fetch sites, only 2 included CSRF (login/setup +
  account-settings change-password). The other 7 routed through raw fetch()
  with no header — suggestions, memory, agents, mcp, skills, uploads, and
  the local thread-cleanup hook all 403'd silently.
- Fix: enhance fetcher.ts:fetchWithAuth to auto-inject X-CSRF-Token on
  POST/PUT/DELETE/PATCH from a single shared readCsrfCookie() helper (the
  pattern is sketched below). Convert all 7 raw fetch() callers to
  fetchWithAuth so the contract is centrally enforced. api-client.ts and
  fetcher.ts share readCsrfCookie + STATE_CHANGING_METHODS to avoid drift.

nginx routing + buffering (nginx.local.conf)
- The auth feature shipped without updating the nginx config: per-API
  explicit location blocks but no /api/v1/auth/, /api/feedback, /api/runs.
  The frontend's client-side fetches to /api/v1/auth/login/local 404'd from
  the Next.js side because nginx routed /api/* to the frontend.
- Fix: add a catch-all `location /api/` that proxies to the gateway. nginx
  longest-prefix matching keeps the explicit blocks (/api/models,
  /api/threads regex, /api/langgraph/, ...) winning for their paths.
- Fix: disable proxy_buffering + proxy_request_buffering for the frontend
  `location /` block. Without it, nginx tries to spool large Next.js chunks
  into /var/lib/nginx/proxy (root-owned) and fails with Permission denied →
  ERR_INCOMPLETE_CHUNKED_ENCODING → ChunkLoadError.

* test(auth): release-validation test infra and new coverage

Test fixtures and unit tests added during the validation pass.

Router test helpers (NEW: tests/_router_auth_helpers.py)
- make_authed_test_app(): builds a FastAPI test app with a stub middleware
  that stamps request.state.user + request.state.auth and a permissive
  thread_meta_repo mock. TestClient-based router tests (test_artifacts_router,
  test_threads_router) use it instead of a bare FastAPI() so the new
  @require_permission(owner_check=True) decorators short-circuit cleanly.
- call_unwrapped(): walks the __wrapped__ chain to invoke the underlying
  handler without going through the authz wrappers. Direct-call tests
  (test_uploads_router) use it. Typed with ParamSpec so the wrapped signature
  flows through.

Backend test additions
- test_auth.py: 7 tests for the new _get_client_ip trust model (no proxy /
  trusted proxy / untrusted peer / XFF rejection / invalid CIDR / no client).
  5 tests for the password blocklist (literal, case-insensitive, strong
  password accepted, change-password binding, short-password length check
  still fires before the blocklist).
  test_update_user_raises_when_row_concurrently_deleted closes a
  shipped-without-coverage gap on the new UserNotFoundError contract.
- test_thread_meta_repo.py: 4 tests for check_access(require_existing=True) —
  strict missing-row denial, strict owner match, strict owner mismatch,
  strict null-owner still allowed (shared rows survive the tightening).
- test_ensure_admin.py: 3 tests for _migrate_orphaned_threads /
  _iter_store_items pagination, covering the TC-UPG-02 upgrade story
  end-to-end via a mock store. Closes the gap where the cursor pagination
  was untested even though the previous PR rewrote it.
- test_threads_router.py: 5 tests for _strip_reserved_metadata (owner_id
  removal, user_id removal, safe-keys passthrough, empty input,
  both-stripped).
- test_auth_type_system.py: replace the "password123" fixtures with
  Tr0ub4dor3a / AnotherStr0ngPwd! so the new password blocklist doesn't
  reject the test data.
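The fixes above live in TypeScript, but the double-submit contract they
enforce (re-read the csrf_token cookie per call, echo it as X-CSRF-Token on
state-changing methods) is client-agnostic. A sketch of the same contract in
Python, for anyone driving the gateway from a script (helper name is
illustrative):

    import httpx

    STATE_CHANGING_METHODS = {"POST", "PUT", "DELETE", "PATCH"}

    async def request_with_csrf(client: httpx.AsyncClient, method: str, url: str, **kwargs):
        """Echo the csrf_token cookie as X-CSRF-Token on state-changing calls."""
        headers = dict(kwargs.pop("headers", None) or {})
        if method.upper() in STATE_CHANGING_METHODS:
            # Re-read the cookie on every call so rotation after
            # login/logout/password-change is picked up transparently.
            token = client.cookies.get("csrf_token")
            if token:
                headers["X-CSRF-Token"] = token
        return await client.request(method, url, headers=headers, **kwargs)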
* docs(auth): refresh TC-DOCKER-05 + document Docker validation gap

- AUTH_TEST_PLAN.md TC-DOCKER-05: the previous expectation ("admin password
  visible in docker logs") was stale after the simplify pass that moved
  credentials to a 0600 file. The grep "Password:" check would have silently
  failed and given a false sense of coverage. The new expectation matches the
  actual file-based path: a 0600 file in DEER_FLOW_HOME, the log shows the
  path (not the secret), and a reverse-grep asserts no leaked password in
  the container logs.
- NEW: docs/AUTH_TEST_DOCKER_GAP.md documents the only un-executed block in
  the test plan (TC-DOCKER-01..06). Reason: the sg_dev validation host has no
  Docker daemon installed. The doc maps each Docker case to an
  already-validated bare-metal equivalent (TC-1.1, TC-REENT-01, TC-API-02,
  etc.) so the gap is auditable, and includes pre-flight reproduction steps
  for whoever has Docker available.

---------

Co-authored-by: greatmengqi <chenmengqi.0376@bytedance.com>

* fix(persistence): stream hang when run_events.backend=db

DbRunEventStore._user_id_from_context() returned user.id without coercing it
to str. User.id is a Pydantic UUID, and aiosqlite cannot bind a raw UUID
object to a VARCHAR column, so the INSERT for the initial human_message event
silently rolled back and raised out of the worker task. Because that put()
sat outside the worker's try block, the finally clause that publishes
end-of-stream never ran and the SSE stream hung forever. jsonl mode was
unaffected because json.dumps(default=str) coerces UUID objects
transparently.

Fixes:
- db.py: coerce user.id to str at the context-read boundary, matching what
  resolve_user_id already does for the other repositories (a sketch follows
  below)
- worker.py: move the RunJournal init + human_message put inside the try
  block so any failure flows through the finally/publish_end path instead of
  hanging the subscriber

Defense-in-depth:
- engine.py: add PRAGMA busy_timeout=5000 so the checkpointer and the event
  store wait for each other on the shared deerflow.db file instead of
  failing immediately under write-lock contention
- journal.py: skip the fire-and-forget _flush_sync when a previous flush task
  is still in flight, to avoid piling up concurrent put_batch writes on the
  same SQLAlchemy engine during streaming; flush() now waits for pending
  tasks before draining the buffer
- database_config.py: doc-only update clarifying that WAL + busy_timeout keep
  the unified deerflow.db safe for both workloads

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore(persistence): drop redundant busy_timeout PRAGMA

Python's sqlite3 driver defaults to a 5-second busy timeout via the
``timeout`` kwarg of ``sqlite3.connect``, and aiosqlite + SQLAlchemy's
aiosqlite dialect inherit that default. Setting ``PRAGMA busy_timeout=5000``
explicitly was a no-op — verified by reading back the PRAGMA on a fresh
connection (it already reports 5000 ms without our PRAGMA).

A concurrent stress test (50 checkpoint writes + 20 event batches + 50
thread_meta updates on the same deerflow.db) still completes with zero errors
and 200/200 rows after removing the explicit PRAGMA.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
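The coercion referenced above is one line at the context-read boundary; a
sketch with an illustrative body (the method name comes from the commit):

    import uuid
    from typing import Any

    def _user_id_from_context(user: Any | None) -> str | None:
        """Return the current user's id as str, or None when unauthenticated."""
        if user is None:
            return None
        uid = user.id
        # Pydantic models carry uuid.UUID here; aiosqlite can only bind str
        # to a VARCHAR column, and the failed bind rolled back the INSERT.
        return str(uid) if isinstance(uid, uuid.UUID) else uid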
* fix(rebase): remove duplicate definitions and update stale module paths

The rebase left duplicate function blocks in worker.py (a triple
human_message write causing 3x user messages in /history), deps.py, and
prompt.py. Also update checkpointer imports from the old
deerflow.agents.checkpointer path to deerflow.runtime.checkpointer, and clean
up orphaned feedback props in the frontend message components.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(user-context): add DEFAULT_USER_ID and get_effective_user_id helper

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(paths): add user-aware path methods with optional user_id parameter

Add _validate_user_id(), user_dir(), user_memory_file(),
user_agent_memory_file() and an optional keyword-only user_id parameter to
all thread-related path methods. When user_id is provided, paths resolve
under users/{user_id}/threads/{thread_id}/; when omitted, the legacy layout
is preserved for backward compatibility.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(memory): add user_id to MemoryStorage interface for per-user isolation

Thread user_id through the MemoryStorage.load/reload/save abstract methods
and FileMemoryStorage, re-keying the in-memory cache from bare agent_name to
a (user_id, agent_name) tuple to prevent cross-user cache collisions.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(memory): thread user_id through memory updater layer

Add a `user_id` keyword-only parameter to all public updater functions
(_save_memory_to_file, get_memory_data, reload_memory_data,
import_memory_data, clear_memory_data, create/delete/update_memory_fact) and
a regular keyword param to MemoryUpdater.update_memory +
update_memory_from_conversation, propagating it to every storage
load/save/reload call.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(memory): capture user_id at enqueue time for async-safe thread isolation

Add a user_id field to ConversationContext and MemoryUpdateQueue.add() so the
user identity is stored explicitly at request time, before threading.Timer
fires on a different thread where ContextVar values do not propagate.
MemoryMiddleware.after_agent() now calls get_effective_user_id() at enqueue
time and passes the value through to updater.update_memory().

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(isolation): wire user_id through all Paths and memory callsites

Pass user_id=get_effective_user_id() at every callsite that invokes Paths
methods or memory functions, enabling per-user filesystem isolation
throughout the harness and app layers.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(migration): add idempotent script for per-user data migration

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: update CLAUDE.md and config docs for per-user isolation

* feat(events): add pagination to list_messages_by_run on all store backends

Replicates the existing before_seq/after_seq/limit cursor-pagination pattern
from list_messages onto list_messages_by_run across the abstract interface,
MemoryRunEventStore, JsonlRunEventStore, and DbRunEventStore (the cursor
contract is sketched after this commit group).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(api): add GET /api/runs/{run_id}/messages with cursor pagination

The new endpoint resolves thread_id from the run record and delegates to
RunEventStore.list_messages_by_run for cursor-based pagination. Ownership is
enforced implicitly via RunStore.get() user filtering.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(api): add GET /api/runs/{run_id}/feedback

Delegates to FeedbackRepository.list_by_run via the existing _resolve_run
helper; includes tests for success, 404, empty list, and 503 (no DB).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
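The shared cursor contract, sketched over a generic seq-ordered event list
(names illustrative; the real stores push this filtering into SQL or file
scans rather than a Python list):

    from typing import Any

    def paginate_by_seq(
        events: list[dict[str, Any]],
        *,
        limit: int = 50,
        before_seq: int | None = None,
        after_seq: int | None = None,
    ) -> tuple[list[dict[str, Any]], bool]:
        """Return (page, has_more) over events sorted by ascending 'seq'."""
        window = [
            e for e in events
            if (before_seq is None or e["seq"] < before_seq)
            and (after_seq is None or e["seq"] > after_seq)
        ]
        page = window[:limit]
        return page, len(window) > limit

Because seq is monotonic per run, a client can walk forward with
after_seq=page[-1]["seq"] until has_more is False.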
* feat(api): retrofit cursor pagination onto GET /threads/{tid}/runs/{rid}/messages

Replace the bare list[dict] response with a {data: [...], has_more: bool}
envelope, forwarding the limit/before_seq/after_seq query params to the event
store.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: add run-level API endpoints to CLAUDE.md routers table

* refactor(threads): remove event-store message loader and feedback from
  state/history endpoints

The state and history endpoints now return messages purely from the
checkpointer's channel_values. The _get_event_store_messages helper (which
loaded the full event-store transcript with feedback attached) is removed
along with its tests. The frontend will use the dedicated
GET /api/runs/{run_id}/messages and /feedback endpoints instead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Co-authored-by: DanielWalnut <45447813+hetaoBackend@users.noreply.github.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: JilongSun <965640067@qq.com>
Co-authored-by: jie <49781832+stan-fu@users.noreply.github.com>
Co-authored-by: cooper <cooperfu@tencent.com>
Co-authored-by: yangzheli <43645580+yangzheli@users.noreply.github.com>
Co-authored-by: greatmengqi <chenmengqi.0376@gmail.com>
Co-authored-by: greatmengqi <chenmengqi.0376@bytedance.com>
962 lines
37 KiB
Python
"""ChannelManager — consumes inbound messages and dispatches them to the DeerFlow agent via LangGraph Server."""

from __future__ import annotations

import asyncio
import logging
import mimetypes
import re
import time
from collections.abc import Awaitable, Callable, Mapping
from pathlib import Path
from typing import Any

import httpx
from langgraph_sdk.errors import ConflictError

from app.channels.commands import KNOWN_CHANNEL_COMMANDS
from app.channels.message_bus import InboundMessage, InboundMessageType, MessageBus, OutboundMessage, ResolvedAttachment
from app.channels.store import ChannelStore
from deerflow.runtime.user_context import get_effective_user_id

logger = logging.getLogger(__name__)

DEFAULT_LANGGRAPH_URL = "http://localhost:2024"
DEFAULT_GATEWAY_URL = "http://localhost:8001"
DEFAULT_ASSISTANT_ID = "lead_agent"
CUSTOM_AGENT_NAME_PATTERN = re.compile(r"^[A-Za-z0-9-]+$")

DEFAULT_RUN_CONFIG: dict[str, Any] = {"recursion_limit": 100}
DEFAULT_RUN_CONTEXT: dict[str, Any] = {
    "thinking_enabled": True,
    "is_plan_mode": False,
    "subagent_enabled": False,
}
STREAM_UPDATE_MIN_INTERVAL_SECONDS = 0.35
THREAD_BUSY_MESSAGE = "This conversation is already processing another request. Please wait for it to finish and try again."

CHANNEL_CAPABILITIES = {
    "feishu": {"supports_streaming": True},
    "slack": {"supports_streaming": False},
    "telegram": {"supports_streaming": False},
    "wechat": {"supports_streaming": False},
    "wecom": {"supports_streaming": True},
}

InboundFileReader = Callable[[dict[str, Any], httpx.AsyncClient], Awaitable[bytes | None]]


INBOUND_FILE_READERS: dict[str, InboundFileReader] = {}


def register_inbound_file_reader(channel_name: str, reader: InboundFileReader) -> None:
    INBOUND_FILE_READERS[channel_name] = reader


async def _read_http_inbound_file(file_info: dict[str, Any], client: httpx.AsyncClient) -> bytes | None:
    url = file_info.get("url")
    if not isinstance(url, str) or not url:
        return None

    resp = await client.get(url)
    resp.raise_for_status()
    return resp.content


async def _read_wecom_inbound_file(file_info: dict[str, Any], client: httpx.AsyncClient) -> bytes | None:
    data = await _read_http_inbound_file(file_info, client)
    if data is None:
        return None

    aeskey = file_info.get("aeskey") if isinstance(file_info.get("aeskey"), str) else None
    if not aeskey:
        return data

    try:
        from aibot.crypto_utils import decrypt_file
    except Exception:
        logger.exception("[Manager] failed to import WeCom decrypt_file")
        return None

    return decrypt_file(data, aeskey)


async def _read_wechat_inbound_file(file_info: dict[str, Any], client: httpx.AsyncClient) -> bytes | None:
    raw_path = file_info.get("path")
    if isinstance(raw_path, str) and raw_path.strip():
        try:
            return await asyncio.to_thread(Path(raw_path).read_bytes)
        except OSError:
            logger.exception("[Manager] failed to read WeChat inbound file from local path: %s", raw_path)
            return None

    full_url = file_info.get("full_url")
    if isinstance(full_url, str) and full_url.strip():
        return await _read_http_inbound_file({"url": full_url}, client)

    return None


register_inbound_file_reader("wecom", _read_wecom_inbound_file)
register_inbound_file_reader("wechat", _read_wechat_inbound_file)


class InvalidChannelSessionConfigError(ValueError):
    """Raised when IM channel session overrides contain invalid agent config."""


def _is_thread_busy_error(exc: BaseException | None) -> bool:
    if exc is None:
        return False
    if isinstance(exc, ConflictError):
        return True
    return "already running a task" in str(exc)


def _as_dict(value: Any) -> dict[str, Any]:
    return dict(value) if isinstance(value, Mapping) else {}


def _merge_dicts(*layers: Any) -> dict[str, Any]:
    merged: dict[str, Any] = {}
    for layer in layers:
        if isinstance(layer, Mapping):
            merged.update(layer)
    return merged


def _normalize_custom_agent_name(raw_value: str) -> str:
    """Normalize legacy channel assistant IDs into valid custom agent names."""
    normalized = raw_value.strip().lower().replace("_", "-")
    if not normalized:
        raise InvalidChannelSessionConfigError("Channel session assistant_id is empty. Use 'lead_agent' or a valid custom agent name.")
    if not CUSTOM_AGENT_NAME_PATTERN.fullmatch(normalized):
        raise InvalidChannelSessionConfigError(f"Invalid channel session assistant_id {raw_value!r}. Use 'lead_agent' or a custom agent name containing only letters, digits, and hyphens.")
    return normalized


def _extract_response_text(result: dict | list) -> str:
    """Extract the last AI message text from a LangGraph runs.wait result.

    ``runs.wait`` returns the final state dict which contains a ``messages``
    list. Each message is a dict with at least ``type`` and ``content``.

    Handles special cases:
    - Regular AI text responses
    - Clarification interrupts (``ask_clarification`` tool messages)
    - AI messages with tool_calls but no text content
    """
    if isinstance(result, list):
        messages = result
    elif isinstance(result, dict):
        messages = result.get("messages", [])
    else:
        return ""

    # Walk backwards to find usable response text, but stop at the last
    # human message to avoid returning text from a previous turn.
    for msg in reversed(messages):
        if not isinstance(msg, dict):
            continue

        msg_type = msg.get("type")

        # Stop at the last human message — anything before it is a previous turn
        if msg_type == "human":
            break

        # Check for tool messages from ask_clarification (interrupt case)
        if msg_type == "tool" and msg.get("name") == "ask_clarification":
            content = msg.get("content", "")
            if isinstance(content, str) and content:
                return content

        # Regular AI message with text content
        if msg_type == "ai":
            content = msg.get("content", "")
            if isinstance(content, str) and content:
                return content
            # content can be a list of content blocks
            if isinstance(content, list):
                parts = []
                for block in content:
                    if isinstance(block, dict) and block.get("type") == "text":
                        parts.append(block.get("text", ""))
                    elif isinstance(block, str):
                        parts.append(block)
                text = "".join(parts)
                if text:
                    return text
    return ""
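
# Illustrative only (not part of the original module): given
#     [{"type": "human", "content": "hi"},
#      {"type": "ai", "content": [{"type": "text", "text": "Hello"}]}]
# _extract_response_text returns "Hello"; messages before the last human
# message (i.e. a previous turn) are never considered.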


def _extract_text_content(content: Any) -> str:
    """Extract text from a streaming payload content field."""
    if isinstance(content, str):
        return content
    if isinstance(content, list):
        parts: list[str] = []
        for block in content:
            if isinstance(block, str):
                parts.append(block)
            elif isinstance(block, Mapping):
                text = block.get("text")
                if isinstance(text, str):
                    parts.append(text)
                else:
                    nested = block.get("content")
                    if isinstance(nested, str):
                        parts.append(nested)
        return "".join(parts)
    if isinstance(content, Mapping):
        for key in ("text", "content"):
            value = content.get(key)
            if isinstance(value, str):
                return value
    return ""


def _merge_stream_text(existing: str, chunk: str) -> str:
    """Merge either delta text or cumulative text into a single snapshot."""
    if not chunk:
        return existing
    if not existing or chunk == existing:
        return chunk or existing
    if chunk.startswith(existing):
        return chunk
    if existing.endswith(chunk):
        return existing
    return existing + chunk
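
# Illustrative only: the merge accepts delta or cumulative chunks alike, e.g.
#     _merge_stream_text("Hel", "lo")    -> "Hello"  (delta appended)
#     _merge_stream_text("Hel", "Hello") -> "Hello"  (cumulative snapshot wins)
#     _merge_stream_text("Hello", "llo") -> "Hello"  (duplicated suffix dropped)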


def _extract_stream_message_id(payload: Any, metadata: Any) -> str | None:
    """Best-effort extraction of the streamed AI message identifier."""
    candidates = [payload, metadata]
    if isinstance(payload, Mapping):
        candidates.append(payload.get("kwargs"))

    for candidate in candidates:
        if not isinstance(candidate, Mapping):
            continue
        for key in ("id", "message_id"):
            value = candidate.get(key)
            if isinstance(value, str) and value:
                return value
    return None


def _accumulate_stream_text(
    buffers: dict[str, str],
    current_message_id: str | None,
    event_data: Any,
) -> tuple[str | None, str | None]:
    """Convert a ``messages-tuple`` event into the latest displayable AI text."""
    payload = event_data
    metadata: Any = None
    if isinstance(event_data, (list, tuple)):
        if event_data:
            payload = event_data[0]
        if len(event_data) > 1:
            metadata = event_data[1]

    if isinstance(payload, str):
        message_id = current_message_id or "__default__"
        buffers[message_id] = _merge_stream_text(buffers.get(message_id, ""), payload)
        return buffers[message_id], message_id

    if not isinstance(payload, Mapping):
        return None, current_message_id

    payload_type = str(payload.get("type", "")).lower()
    if "tool" in payload_type:
        return None, current_message_id

    text = _extract_text_content(payload.get("content"))
    if not text and isinstance(payload.get("kwargs"), Mapping):
        text = _extract_text_content(payload["kwargs"].get("content"))
    if not text:
        return None, current_message_id

    message_id = _extract_stream_message_id(payload, metadata) or current_message_id or "__default__"
    buffers[message_id] = _merge_stream_text(buffers.get(message_id, ""), text)
    return buffers[message_id], message_id


def _extract_artifacts(result: dict | list) -> list[str]:
    """Extract artifact paths from the last AI response cycle only.

    Instead of reading the full accumulated ``artifacts`` state (which contains
    all artifacts ever produced in the thread), this inspects the messages after
    the last human message and collects file paths from ``present_files`` tool
    calls. This ensures only newly-produced artifacts are returned.
    """
    if isinstance(result, list):
        messages = result
    elif isinstance(result, dict):
        messages = result.get("messages", [])
    else:
        return []

    artifacts: list[str] = []
    for msg in reversed(messages):
        if not isinstance(msg, dict):
            continue
        # Stop at the last human message — anything before it is a previous turn
        if msg.get("type") == "human":
            break
        # Look for AI messages with present_files tool calls
        if msg.get("type") == "ai":
            for tc in msg.get("tool_calls", []):
                if isinstance(tc, dict) and tc.get("name") == "present_files":
                    args = tc.get("args", {})
                    paths = args.get("filepaths", [])
                    if isinstance(paths, list):
                        artifacts.extend(p for p in paths if isinstance(p, str))
    return artifacts


def _format_artifact_text(artifacts: list[str]) -> str:
    """Format artifact paths into a human-readable text block listing filenames."""
    import posixpath

    filenames = [posixpath.basename(p) for p in artifacts]
    if len(filenames) == 1:
        return f"Created File: 📎 {filenames[0]}"
    return "Created Files: 📎 " + "、".join(filenames)
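
# Illustrative only ("、" is the enumeration comma joining multiple names):
#     _format_artifact_text(["/mnt/user-data/outputs/report.pdf"])
#         -> "Created File: 📎 report.pdf"
#     _format_artifact_text(["/mnt/user-data/outputs/x.png", "/mnt/user-data/outputs/y.csv"])
#         -> "Created Files: 📎 x.png、y.csv"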


_OUTPUTS_VIRTUAL_PREFIX = "/mnt/user-data/outputs/"


def _resolve_attachments(thread_id: str, artifacts: list[str]) -> list[ResolvedAttachment]:
    """Resolve virtual artifact paths to host filesystem paths with metadata.

    Only paths under ``/mnt/user-data/outputs/`` are accepted; any other
    virtual path is rejected with a warning to prevent exfiltrating uploads
    or workspace files via IM channels.

    Skips artifacts that cannot be resolved (missing files, invalid paths)
    and logs warnings for them.
    """
    from deerflow.config.paths import get_paths

    attachments: list[ResolvedAttachment] = []
    paths = get_paths()
    user_id = get_effective_user_id()
    outputs_dir = paths.sandbox_outputs_dir(thread_id, user_id=user_id).resolve()
    for virtual_path in artifacts:
        # Security: only allow files from the agent outputs directory
        if not virtual_path.startswith(_OUTPUTS_VIRTUAL_PREFIX):
            logger.warning("[Manager] rejected non-outputs artifact path: %s", virtual_path)
            continue
        try:
            actual = paths.resolve_virtual_path(thread_id, virtual_path, user_id=user_id)
            # Verify the resolved path is actually under the outputs directory
            # (guards against path-traversal even after prefix check)
            try:
                actual.resolve().relative_to(outputs_dir)
            except ValueError:
                logger.warning("[Manager] artifact path escapes outputs dir: %s -> %s", virtual_path, actual)
                continue
            if not actual.is_file():
                logger.warning("[Manager] artifact not found on disk: %s -> %s", virtual_path, actual)
                continue
            mime, _ = mimetypes.guess_type(str(actual))
            mime = mime or "application/octet-stream"
            attachments.append(
                ResolvedAttachment(
                    virtual_path=virtual_path,
                    actual_path=actual,
                    filename=actual.name,
                    mime_type=mime,
                    size=actual.stat().st_size,
                    is_image=mime.startswith("image/"),
                )
            )
        except (ValueError, OSError) as exc:
            logger.warning("[Manager] failed to resolve artifact %s: %s", virtual_path, exc)
    return attachments


def _prepare_artifact_delivery(
    thread_id: str,
    response_text: str,
    artifacts: list[str],
) -> tuple[str, list[ResolvedAttachment]]:
    """Resolve attachments and append filename fallbacks to the text response."""
    attachments: list[ResolvedAttachment] = []
    if not artifacts:
        return response_text, attachments

    attachments = _resolve_attachments(thread_id, artifacts)
    resolved_virtuals = {attachment.virtual_path for attachment in attachments}
    unresolved = [path for path in artifacts if path not in resolved_virtuals]

    if unresolved:
        artifact_text = _format_artifact_text(unresolved)
        response_text = (response_text + "\n\n" + artifact_text) if response_text else artifact_text

    # Always include resolved attachment filenames as a text fallback so files
    # remain discoverable even when the upload is skipped or fails.
    if attachments:
        resolved_text = _format_artifact_text([attachment.virtual_path for attachment in attachments])
        response_text = (response_text + "\n\n" + resolved_text) if response_text else resolved_text

    return response_text, attachments


async def _ingest_inbound_files(thread_id: str, msg: InboundMessage) -> list[dict[str, Any]]:
    if not msg.files:
        return []

    from deerflow.uploads.manager import claim_unique_filename, ensure_uploads_dir, normalize_filename

    uploads_dir = ensure_uploads_dir(thread_id)
    seen_names = {entry.name for entry in uploads_dir.iterdir() if entry.is_file()}

    created: list[dict[str, Any]] = []
    file_reader = INBOUND_FILE_READERS.get(msg.channel_name, _read_http_inbound_file)
    async with httpx.AsyncClient(timeout=httpx.Timeout(20.0)) as client:
        for idx, f in enumerate(msg.files):
            if not isinstance(f, dict):
                continue

            ftype = f.get("type") if isinstance(f.get("type"), str) else "file"
            filename = f.get("filename") if isinstance(f.get("filename"), str) else ""

            try:
                data = await file_reader(f, client)
            except Exception:
                logger.exception(
                    "[Manager] failed to read inbound file: channel=%s, file=%s",
                    msg.channel_name,
                    f.get("url") or filename or idx,
                )
                continue

            if data is None:
                logger.warning(
                    "[Manager] inbound file reader returned no data: channel=%s, file=%s",
                    msg.channel_name,
                    f.get("url") or filename or idx,
                )
                continue

            if not filename:
                ext = ".bin"
                if ftype == "image":
                    ext = ".png"
                filename = f"{msg.thread_ts or 'msg'}_{idx}{ext}"

            try:
                safe_name = claim_unique_filename(normalize_filename(filename), seen_names)
            except ValueError:
                logger.warning(
                    "[Manager] skipping inbound file with unsafe filename: channel=%s, file=%r",
                    msg.channel_name,
                    filename,
                )
                continue

            dest = uploads_dir / safe_name
            try:
                dest.write_bytes(data)
            except Exception:
                logger.exception("[Manager] failed to write inbound file: %s", dest)
                continue

            created.append(
                {
                    "filename": safe_name,
                    "size": len(data),
                    "path": f"/mnt/user-data/uploads/{safe_name}",
                    "is_image": ftype == "image",
                }
            )

    return created


def _format_uploaded_files_block(files: list[dict[str, Any]]) -> str:
    lines = [
        "<uploaded_files>",
        "The following files were uploaded in this message:",
        "",
    ]
    if not files:
        lines.append("(empty)")
    else:
        for f in files:
            filename = f.get("filename", "")
            size = int(f.get("size") or 0)
            size_kb = size / 1024 if size else 0
            size_str = f"{size_kb:.1f} KB" if size_kb < 1024 else f"{size_kb / 1024:.1f} MB"
            path = f.get("path", "")
            is_image = bool(f.get("is_image"))
            file_kind = "image" if is_image else "file"
            lines.append(f"- {filename} ({size_str})")
            lines.append(f" Type: {file_kind}")
            lines.append(f" Path: {path}")
            lines.append("")
    lines.append("Use `read_file` for text-based files and documents.")
    lines.append("Use `view_image` for image files (jpg, jpeg, png, webp) so the model can inspect the image content.")
    lines.append("</uploaded_files>")
    return "\n".join(lines)
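
# Illustrative only: one 2048-byte image named "photo.png" renders roughly as
#     <uploaded_files>
#     The following files were uploaded in this message:
#
#     - photo.png (2.0 KB)
#      Type: image
#      Path: /mnt/user-data/uploads/photo.png
#
#     Use `read_file` for text-based files and documents.
#     ...
#     </uploaded_files>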


class ChannelManager:
    """Core dispatcher that bridges IM channels to the DeerFlow agent.

    It reads from the MessageBus inbound queue, creates/reuses threads on
    the LangGraph Server, sends messages via ``runs.wait``, and publishes
    outbound responses back through the bus.
    """

    def __init__(
        self,
        bus: MessageBus,
        store: ChannelStore,
        *,
        max_concurrency: int = 5,
        langgraph_url: str = DEFAULT_LANGGRAPH_URL,
        gateway_url: str = DEFAULT_GATEWAY_URL,
        assistant_id: str = DEFAULT_ASSISTANT_ID,
        default_session: dict[str, Any] | None = None,
        channel_sessions: dict[str, Any] | None = None,
    ) -> None:
        self.bus = bus
        self.store = store
        self._max_concurrency = max_concurrency
        self._langgraph_url = langgraph_url
        self._gateway_url = gateway_url
        self._assistant_id = assistant_id
        self._default_session = _as_dict(default_session)
        self._channel_sessions = dict(channel_sessions or {})
        self._client = None  # lazy init — langgraph_sdk async client
        self._semaphore: asyncio.Semaphore | None = None
        self._running = False
        self._task: asyncio.Task | None = None

    @staticmethod
    def _channel_supports_streaming(channel_name: str) -> bool:
        return CHANNEL_CAPABILITIES.get(channel_name, {}).get("supports_streaming", False)

    def _resolve_session_layer(self, msg: InboundMessage) -> tuple[dict[str, Any], dict[str, Any]]:
        channel_layer = _as_dict(self._channel_sessions.get(msg.channel_name))
        users_layer = _as_dict(channel_layer.get("users"))
        user_layer = _as_dict(users_layer.get(msg.user_id))
        return channel_layer, user_layer

    def _resolve_run_params(self, msg: InboundMessage, thread_id: str) -> tuple[str, dict[str, Any], dict[str, Any]]:
        channel_layer, user_layer = self._resolve_session_layer(msg)

        assistant_id = user_layer.get("assistant_id") or channel_layer.get("assistant_id") or self._default_session.get("assistant_id") or self._assistant_id
        if not isinstance(assistant_id, str) or not assistant_id.strip():
            assistant_id = self._assistant_id

        run_config = _merge_dicts(
            DEFAULT_RUN_CONFIG,
            self._default_session.get("config"),
            channel_layer.get("config"),
            user_layer.get("config"),
        )

        run_context = _merge_dicts(
            DEFAULT_RUN_CONTEXT,
            self._default_session.get("context"),
            channel_layer.get("context"),
            user_layer.get("context"),
            {"thread_id": thread_id},
        )

        # Custom agents are implemented as lead_agent + agent_name context.
        # Keep backward compatibility for channel configs that set
        # assistant_id: <custom-agent-name> by routing through lead_agent.
        if assistant_id != DEFAULT_ASSISTANT_ID:
            run_context.setdefault("agent_name", _normalize_custom_agent_name(assistant_id))
            assistant_id = DEFAULT_ASSISTANT_ID

        return assistant_id, run_config, run_context

    # -- LangGraph SDK client (lazy) ----------------------------------------

    def _get_client(self):
        """Return the ``langgraph_sdk`` async client, creating it on first use."""
        if self._client is None:
            from langgraph_sdk import get_client

            self._client = get_client(url=self._langgraph_url)
        return self._client

    # -- lifecycle ---------------------------------------------------------

    async def start(self) -> None:
        """Start the dispatch loop."""
        if self._running:
            return
        self._running = True
        self._semaphore = asyncio.Semaphore(self._max_concurrency)
        self._task = asyncio.create_task(self._dispatch_loop())
        logger.info("ChannelManager started (max_concurrency=%d)", self._max_concurrency)

    async def stop(self) -> None:
        """Stop the dispatch loop."""
        self._running = False
        if self._task:
            self._task.cancel()
            try:
                await self._task
            except asyncio.CancelledError:
                pass
            self._task = None
        logger.info("ChannelManager stopped")

    # -- dispatch loop -----------------------------------------------------

    async def _dispatch_loop(self) -> None:
        logger.info("[Manager] dispatch loop started, waiting for inbound messages")
        while self._running:
            try:
                msg = await asyncio.wait_for(self.bus.get_inbound(), timeout=1.0)
            except TimeoutError:
                continue
            except asyncio.CancelledError:
                break

            logger.info(
                "[Manager] received inbound: channel=%s, chat_id=%s, type=%s, text=%r",
                msg.channel_name,
                msg.chat_id,
                msg.msg_type.value,
                msg.text[:100] if msg.text else "",
            )
            task = asyncio.create_task(self._handle_message(msg))
            task.add_done_callback(self._log_task_error)

    @staticmethod
    def _log_task_error(task: asyncio.Task) -> None:
        """Surface unhandled exceptions from background tasks."""
        if task.cancelled():
            return
        exc = task.exception()
        if exc:
            logger.error("[Manager] unhandled error in message task: %s", exc, exc_info=exc)

    async def _handle_message(self, msg: InboundMessage) -> None:
        async with self._semaphore:
            try:
                if msg.msg_type == InboundMessageType.COMMAND:
                    await self._handle_command(msg)
                else:
                    await self._handle_chat(msg)
            except InvalidChannelSessionConfigError as exc:
                logger.warning(
                    "Invalid channel session config for %s (chat=%s): %s",
                    msg.channel_name,
                    msg.chat_id,
                    exc,
                )
                await self._send_error(msg, str(exc))
            except Exception:
                logger.exception(
                    "Error handling message from %s (chat=%s)",
                    msg.channel_name,
                    msg.chat_id,
                )
                await self._send_error(msg, "An internal error occurred. Please try again.")

    # -- chat handling -----------------------------------------------------

    async def _create_thread(self, client, msg: InboundMessage) -> str:
        """Create a new thread on the LangGraph Server and store the mapping."""
        thread = await client.threads.create()
        thread_id = thread["thread_id"]
        self.store.set_thread_id(
            msg.channel_name,
            msg.chat_id,
            thread_id,
            topic_id=msg.topic_id,
            user_id=msg.user_id,
        )
        logger.info("[Manager] new thread created on LangGraph Server: thread_id=%s for chat_id=%s topic_id=%s", thread_id, msg.chat_id, msg.topic_id)
        return thread_id

    async def _handle_chat(self, msg: InboundMessage, extra_context: dict[str, Any] | None = None) -> None:
        client = self._get_client()

        # Look up existing DeerFlow thread.
        # topic_id may be None (e.g. Telegram private chats) — the store
        # handles this by using the "channel:chat_id" key without a topic suffix.
        thread_id = self.store.get_thread_id(msg.channel_name, msg.chat_id, topic_id=msg.topic_id)
        if thread_id:
            logger.info("[Manager] reusing thread: thread_id=%s for topic_id=%s", thread_id, msg.topic_id)

        # No existing thread found — create a new one
        if thread_id is None:
            thread_id = await self._create_thread(client, msg)

        assistant_id, run_config, run_context = self._resolve_run_params(msg, thread_id)

        # If the inbound message contains file attachments, let the channel
        # materialize (download) them and update msg.text to include sandbox file paths.
        # This enables downstream models to access user-uploaded files by path.
        # Channels that do not support file download will simply return the original message.
        if msg.files:
            from .service import get_channel_service

            service = get_channel_service()
            channel = service.get_channel(msg.channel_name) if service else None
            logger.info("[Manager] preparing receive file context for %d attachments", len(msg.files))
            msg = await channel.receive_file(msg, thread_id) if channel else msg
        if extra_context:
            run_context.update(extra_context)

        uploaded = await _ingest_inbound_files(thread_id, msg)
        if uploaded:
            msg.text = f"{_format_uploaded_files_block(uploaded)}\n\n{msg.text}".strip()

        if self._channel_supports_streaming(msg.channel_name):
            await self._handle_streaming_chat(
                client,
                msg,
                thread_id,
                assistant_id,
                run_config,
                run_context,
            )
            return

        logger.info("[Manager] invoking runs.wait(thread_id=%s, text=%r)", thread_id, msg.text[:100])
        result = await client.runs.wait(
            thread_id,
            assistant_id,
            input={"messages": [{"role": "human", "content": msg.text}]},
            config=run_config,
            context=run_context,
        )

        response_text = _extract_response_text(result)
        artifacts = _extract_artifacts(result)

        logger.info(
            "[Manager] agent response received: thread_id=%s, response_len=%d, artifacts=%d",
            thread_id,
            len(response_text) if response_text else 0,
            len(artifacts),
        )

        response_text, attachments = _prepare_artifact_delivery(thread_id, response_text, artifacts)

        if not response_text:
            if attachments:
                response_text = _format_artifact_text([a.virtual_path for a in attachments])
            else:
                response_text = "(No response from agent)"

        outbound = OutboundMessage(
            channel_name=msg.channel_name,
            chat_id=msg.chat_id,
            thread_id=thread_id,
            text=response_text,
            artifacts=artifacts,
            attachments=attachments,
            thread_ts=msg.thread_ts,
        )
        logger.info("[Manager] publishing outbound message to bus: channel=%s, chat_id=%s", msg.channel_name, msg.chat_id)
        await self.bus.publish_outbound(outbound)

    async def _handle_streaming_chat(
        self,
        client,
        msg: InboundMessage,
        thread_id: str,
        assistant_id: str,
        run_config: dict[str, Any],
        run_context: dict[str, Any],
    ) -> None:
        logger.info("[Manager] invoking runs.stream(thread_id=%s, text=%r)", thread_id, msg.text[:100])

        last_values: dict[str, Any] | list | None = None
        streamed_buffers: dict[str, str] = {}
        current_message_id: str | None = None
        latest_text = ""
        last_published_text = ""
        last_publish_at = 0.0
        stream_error: BaseException | None = None

        try:
            async for chunk in client.runs.stream(
                thread_id,
                assistant_id,
                input={"messages": [{"role": "human", "content": msg.text}]},
                config=run_config,
                context=run_context,
                stream_mode=["messages-tuple", "values"],
                multitask_strategy="reject",
            ):
                event = getattr(chunk, "event", "")
                data = getattr(chunk, "data", None)

                if event == "messages-tuple":
                    accumulated_text, current_message_id = _accumulate_stream_text(streamed_buffers, current_message_id, data)
                    if accumulated_text:
                        latest_text = accumulated_text
                elif event == "values" and isinstance(data, (dict, list)):
                    last_values = data
                    snapshot_text = _extract_response_text(data)
                    if snapshot_text:
                        latest_text = snapshot_text

                if not latest_text or latest_text == last_published_text:
                    continue

                now = time.monotonic()
                if last_published_text and now - last_publish_at < STREAM_UPDATE_MIN_INTERVAL_SECONDS:
                    continue

                await self.bus.publish_outbound(
                    OutboundMessage(
                        channel_name=msg.channel_name,
                        chat_id=msg.chat_id,
                        thread_id=thread_id,
                        text=latest_text,
                        is_final=False,
                        thread_ts=msg.thread_ts,
                    )
                )
                last_published_text = latest_text
                last_publish_at = now
        except Exception as exc:
            stream_error = exc
            if _is_thread_busy_error(exc):
                logger.warning("[Manager] thread busy (concurrent run rejected): thread_id=%s", thread_id)
            else:
                logger.exception("[Manager] streaming error: thread_id=%s", thread_id)
        finally:
            result = last_values if last_values is not None else {"messages": [{"type": "ai", "content": latest_text}]}
            response_text = _extract_response_text(result)
            artifacts = _extract_artifacts(result)
            response_text, attachments = _prepare_artifact_delivery(thread_id, response_text, artifacts)

            if not response_text:
                if attachments:
                    response_text = _format_artifact_text([attachment.virtual_path for attachment in attachments])
                elif stream_error:
                    if _is_thread_busy_error(stream_error):
                        response_text = THREAD_BUSY_MESSAGE
                    else:
                        response_text = "An error occurred while processing your request. Please try again."
                else:
                    response_text = latest_text or "(No response from agent)"

            logger.info(
                "[Manager] streaming response completed: thread_id=%s, response_len=%d, artifacts=%d, error=%s",
                thread_id,
                len(response_text),
                len(artifacts),
                stream_error,
            )
            await self.bus.publish_outbound(
                OutboundMessage(
                    channel_name=msg.channel_name,
                    chat_id=msg.chat_id,
                    thread_id=thread_id,
                    text=response_text,
                    artifacts=artifacts,
                    attachments=attachments,
                    is_final=True,
                    thread_ts=msg.thread_ts,
                )
            )

    # -- command handling --------------------------------------------------

    async def _handle_command(self, msg: InboundMessage) -> None:
        text = msg.text.strip()
        parts = text.split(maxsplit=1)
        command = parts[0].lower().lstrip("/")

        if command == "bootstrap":
            from dataclasses import replace as _dc_replace

            chat_text = parts[1] if len(parts) > 1 else "Initialize workspace"
            chat_msg = _dc_replace(msg, text=chat_text, msg_type=InboundMessageType.CHAT)
            await self._handle_chat(chat_msg, extra_context={"is_bootstrap": True})
            return

        if command == "new":
            # Create a new thread on the LangGraph Server
            client = self._get_client()
            thread = await client.threads.create()
            new_thread_id = thread["thread_id"]
            self.store.set_thread_id(
                msg.channel_name,
                msg.chat_id,
                new_thread_id,
                topic_id=msg.topic_id,
                user_id=msg.user_id,
            )
            reply = "New conversation started."
        elif command == "status":
            thread_id = self.store.get_thread_id(msg.channel_name, msg.chat_id, topic_id=msg.topic_id)
            reply = f"Active thread: {thread_id}" if thread_id else "No active conversation."
        elif command == "models":
            reply = await self._fetch_gateway("/api/models", "models")
        elif command == "memory":
            reply = await self._fetch_gateway("/api/memory", "memory")
        elif command == "help":
            reply = (
                "Available commands:\n"
                "/bootstrap — Start a bootstrap session (enables agent setup)\n"
                "/new — Start a new conversation\n"
                "/status — Show current thread info\n"
                "/models — List available models\n"
                "/memory — Show memory status\n"
                "/help — Show this help"
            )
        else:
            available = " | ".join(sorted(KNOWN_CHANNEL_COMMANDS))
            reply = f"Unknown command: /{command}. Available commands: {available}"

        outbound = OutboundMessage(
            channel_name=msg.channel_name,
            chat_id=msg.chat_id,
            thread_id=self.store.get_thread_id(msg.channel_name, msg.chat_id) or "",
            text=reply,
            thread_ts=msg.thread_ts,
        )
        await self.bus.publish_outbound(outbound)

    async def _fetch_gateway(self, path: str, kind: str) -> str:
        """Fetch data from the Gateway API for command responses."""
        import httpx

        try:
            async with httpx.AsyncClient() as http:
                resp = await http.get(f"{self._gateway_url}{path}", timeout=10)
                resp.raise_for_status()
                data = resp.json()
        except Exception:
            logger.exception("Failed to fetch %s from gateway", kind)
            return f"Failed to fetch {kind} information."

        if kind == "models":
            names = [m["name"] for m in data.get("models", [])]
            return ("Available models:\n" + "\n".join(f"• {n}" for n in names)) if names else "No models configured."
        elif kind == "memory":
            facts = data.get("facts", [])
            return f"Memory contains {len(facts)} fact(s)."
        return str(data)

    # -- error helper ------------------------------------------------------

    async def _send_error(self, msg: InboundMessage, error_text: str) -> None:
        outbound = OutboundMessage(
            channel_name=msg.channel_name,
            chat_id=msg.chat_id,
            thread_id=self.store.get_thread_id(msg.channel_name, msg.chat_id) or "",
            text=error_text,
            thread_ts=msg.thread_ts,
        )
        await self.bus.publish_outbound(outbound)