Mirror of https://github.com/bytedance/deer-flow.git (synced 2026-05-07 01:08:25 +00:00)
* feat(persistence): add SQLAlchemy 2.0 async ORM scaffold
Introduce a unified database configuration (DatabaseConfig) that
controls both the LangGraph checkpointer and the DeerFlow application
persistence layer from a single `database:` config section.
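A minimal sketch of that section (only `backend` and `sqlite_dir` are
named elsewhere in this history; the other keys are illustrative
assumptions, not verbatim from the code):

    database:
      backend: sqlite            # one of: memory | sqlite | postgres
      sqlite_dir: .deer-flow/data
      # url: postgresql+asyncpg://user:pass@host/db   # postgres backend (assumed key)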
New modules:
- deerflow.config.database_config — Pydantic config with memory/sqlite/postgres backends
- deerflow.persistence — async engine lifecycle, DeclarativeBase with to_dict mixin, Alembic skeleton
- deerflow.runtime.runs.store — RunStore ABC + MemoryRunStore implementation
Gateway integration initializes/tears down the persistence engine in
the existing langgraph_runtime() context manager. Legacy checkpointer
config is preserved for backward compatibility.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(persistence): add RunEventStore ABC + MemoryRunEventStore
Phase 2-A prerequisite for event storage: adds the unified run event
stream interface (RunEventStore) with an in-memory implementation,
RunEventsConfig, gateway integration, and comprehensive tests (27 cases).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(persistence): add ORM models, repositories, DB/JSONL event stores, RunJournal, and API endpoints
Phase 2-B: run persistence + event storage + token tracking.
- ORM models: RunRow (with token fields), ThreadMetaRow, RunEventRow
- RunRepository implements RunStore ABC via SQLAlchemy ORM
- ThreadMetaRepository with owner access control
- DbRunEventStore with trace content truncation and cursor pagination
- JsonlRunEventStore with per-run files and seq recovery from disk
- RunJournal (BaseCallbackHandler) captures LLM/tool/lifecycle events,
accumulates token usage by caller type, buffers and flushes to store
- RunManager now accepts optional RunStore for persistent backing
- Worker creates RunJournal, writes human_message, injects callbacks
- Gateway deps use factory functions (RunRepository when DB available)
- New endpoints: messages, run messages, run events, token-usage
- ThreadCreateRequest gains assistant_id field
- 92 tests pass (33 new), zero regressions
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(persistence): add user feedback + follow-up run association
Phase 2-C: feedback and follow-up tracking.
- FeedbackRow ORM model (rating +1/-1, optional message_id, comment)
- FeedbackRepository with CRUD, list_by_run/thread, aggregate stats
- Feedback API endpoints: create, list, stats, delete
- follow_up_to_run_id in RunCreateRequest (explicit or auto-detected
from latest successful run on the thread)
- Worker writes follow_up_to_run_id into human_message event metadata
- Gateway deps: feedback_repo factory + getter
- 17 new tests (14 FeedbackRepository + 3 follow-up association)
- 109 total tests pass, zero regressions
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* test+config: comprehensive Phase 2 test coverage + deprecate checkpointer config
- config.example.yaml: deprecate standalone checkpointer section, activate
unified database:sqlite as default (drives both checkpointer + app data)
- New: test_thread_meta_repo.py (14 tests) — full ThreadMetaRepository coverage
including check_access owner logic, list_by_owner pagination
- Extended test_run_repository.py (+4 tests) — completion preserves fields,
list ordering desc, limit, owner_none returns all
- Extended test_run_journal.py (+8 tests) — on_chain_error, track_tokens=false,
middleware no ai_message, unknown caller tokens, convenience fields,
tool_error, non-summarization custom event
- Extended test_run_event_store.py (+7 tests) — DB batch seq continuity,
make_run_event_store factory (memory/db/jsonl/fallback/unknown)
- Extended test_phase2b_integration.py (+4 tests) — create_or_reject persists,
follow-up metadata, summarization in history, full DB-backed lifecycle
- Fixed DB integration test to use proper fake objects (not MagicMock)
for JSON-serializable metadata
- 157 total Phase 2 tests pass, zero regressions
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* config: move default sqlite_dir to .deer-flow/data
Keep SQLite databases alongside other DeerFlow-managed data
(threads, memory) under the .deer-flow/ directory instead of a
top-level ./data folder.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(persistence): remove UTFJSON, use engine-level json_serializer + datetime.now()
- Replace the custom UTFJSON type with standard sqlalchemy.JSON in all ORM
models. Pass a json.dumps wrapper with ensure_ascii=False as the
json_serializer for all create_async_engine calls so non-ASCII text
(Chinese etc.) is stored as-is in both SQLite and Postgres (sketch below).
- Change ORM datetime defaults from datetime.now(UTC) to datetime.now(),
remove UTC imports.
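The engine wiring, roughly (a sketch; the real engine.py passes more
options, and `url` here is an illustrative SQLite DSN):

    import json

    from sqlalchemy.ext.asyncio import create_async_engine

    def _json_dumps(obj):
        # keep non-ASCII text as-is instead of \uXXXX escapes
        return json.dumps(obj, ensure_ascii=False)

    url = "sqlite+aiosqlite:///.deer-flow/data/app.db"
    engine = create_async_engine(url, json_serializer=_json_dumps)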
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(gateway): simplify deps.py with getter factory + inline repos
- Replace 6 identical getter functions with a _require() factory (sketch below).
- Inline 3 _make_*_repo() factories into langgraph_runtime(), call
get_session_factory() once instead of 3 times.
- Add thread_meta upsert in start_run (services.py).
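The factory shape, roughly (a sketch; the 503 detail strings and the
state attribute names are assumptions):

    from fastapi import HTTPException, Request

    def _require(attr: str, detail: str):
        def getter(request: Request):
            value = getattr(request.app.state, attr, None)
            if value is None:
                raise HTTPException(status_code=503, detail=detail)
            return value
        return getter

    get_checkpointer = _require("checkpointer", "Checkpointer not initialized")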
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(docker): add UV_EXTRAS build arg for optional dependencies
Support installing optional dependency groups (e.g. postgres) at
Docker build time via UV_EXTRAS build arg:
UV_EXTRAS=postgres docker compose build
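On the Dockerfile side this amounts to something like (a sketch assuming
uv's --extra flag; the real Dockerfile may differ):

    ARG UV_EXTRAS=""
    RUN if [ -n "$UV_EXTRAS" ]; then uv sync --extra "$UV_EXTRAS"; else uv sync; fi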
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(journal): fix flush, token tracking, and consolidate tests
RunJournal fixes:
- _flush_sync: retain events in the buffer when no event loop is available
instead of dropping them; the worker's finally block flushes via async flush().
- on_llm_end: add tool_calls filter and caller=="lead_agent" guard for
ai_message events; mark message IDs for dedup with record_llm_usage.
- worker.py: persist completion data (tokens, message count) to RunStore
in finally block.
Model factory:
- Auto-inject stream_usage=True for BaseChatOpenAI subclasses with
custom api_base, so usage_metadata is populated in streaming responses.
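Roughly (a sketch; `model_cls` and `kwargs` are assumed local names in
the factory, not verbatim code):

    from langchain_openai.chat_models.base import BaseChatOpenAI

    if issubclass(model_cls, BaseChatOpenAI) and kwargs.get("api_base"):
        # populate usage_metadata on streamed chunks
        kwargs.setdefault("stream_usage", True)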
Test consolidation:
- Delete test_phase2b_integration.py (redundant with existing tests).
- Move DB-backed lifecycle test into test_run_journal.py.
- Add tests for stream_usage injection in test_model_factory.py.
- Clean up executor/task_tool dead journal references.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(events): widen content type to str|dict in all store backends
Allow event content to be a dict (for structured OpenAI-format messages)
in addition to plain strings. Dict values are JSON-serialized for the DB
backend and deserialized on read; memory and JSONL backends handle dicts
natively. Trace truncation now serializes dicts to JSON before measuring.
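For the DB backend this amounts to something like (a sketch; the flag
name is an assumption, see the follow-up fix below):

    import json

    if isinstance(content, dict):
        stored = json.dumps(content, ensure_ascii=False)
        meta["content_is_json"] = True  # assumed flag name; the read path reverses this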
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(events): use metadata flag instead of heuristic for dict content detection
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(converters): add LangChain-to-OpenAI message format converters
Pure functions langchain_to_openai_message, langchain_to_openai_completion,
langchain_messages_to_openai, and _infer_finish_reason for converting
LangChain BaseMessage objects to OpenAI Chat Completions format, used by
RunJournal for event storage. 15 unit tests added.
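Intended usage, in outline (the output dict is abridged, not verbatim):

    from langchain_core.messages import AIMessage

    msg = AIMessage(content="done")
    openai_msg = langchain_to_openai_message(msg)
    # -> roughly {"role": "assistant", "content": "done"}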
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(converters): handle empty list content as null, clean up test
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(events): human_message content uses OpenAI user message format
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat(events): ai_message uses OpenAI format, add ai_tool_call message event
- ai_message content now uses {"role": "assistant", "content": "..."} format
- New ai_tool_call message event emitted when lead_agent LLM responds with tool_calls
- ai_tool_call uses langchain_to_openai_message converter for consistent format
- Both events include finish_reason in metadata ("stop" or "tool_calls")
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(events): add tool_result message event with OpenAI tool message format
Cache the tool_call_id from on_tool_start keyed by run_id as a fallback for on_tool_end,
then emit a tool_result message event (role=tool, tool_call_id, content) after each
successful tool completion.
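The caching idea, in outline (attribute names are assumptions):

    # on_tool_start: remember the id keyed by the callback run_id
    self._tool_call_ids[run_id] = tool_call_id

    # on_tool_end: prefer the output's own id, fall back to the cache
    tool_call_id = getattr(output, "tool_call_id", None) or self._tool_call_ids.pop(run_id, None)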
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat(events): summary content uses OpenAI system message format
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(events): replace llm_start/llm_end with llm_request/llm_response in OpenAI format
Add on_chat_model_start to capture structured prompt messages as llm_request events.
Replace llm_end trace events with llm_response using OpenAI Chat Completions format.
Track llm_call_index to pair request/response events.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(events): add record_middleware method for middleware trace events
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* test(events): add full run sequence integration test for OpenAI content format
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat(events): align message events with checkpoint format and add middleware tag injection
- Message events (ai_message, ai_tool_call, tool_result, human_message) now use
BaseMessage.model_dump() format, matching LangGraph checkpoint values.messages
- on_tool_end extracts tool_call_id/name/status from ToolMessage objects
- on_tool_error now emits tool_result message events with error status
- record_middleware uses middleware:{tag} event_type and middleware category
- Summarization custom events use middleware:summarize category
- TitleMiddleware injects middleware:title tag via get_config() inheritance
- SummarizationMiddleware model bound with middleware:summarize tag
- Worker writes human_message using HumanMessage.model_dump()
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(threads): switch search endpoint to threads_meta table and sync title
- POST /api/threads/search now queries threads_meta table directly,
removing the two-phase Store + Checkpointer scan approach
- Add ThreadMetaRepository.search() with metadata/status filters
- Add ThreadMetaRepository.update_display_name() for title sync
- Worker syncs checkpoint title to threads_meta.display_name on run completion
- Map display_name to values.title in search response for API compatibility
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(threads): history endpoint reads messages from event store
- POST /api/threads/{thread_id}/history now combines two data sources:
checkpointer for checkpoint_id, metadata, title, thread_data;
event store for messages (complete history, not truncated by summarization)
- Strip internal LangGraph metadata keys from response
- Remove full channel_values serialization in favor of selective fields
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: remove duplicate optional-dependencies header in pyproject.toml
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(middleware): pass tagged config to TitleMiddleware ainvoke call
Without the config, the middleware:title tag was not injected,
causing the LLM response to be recorded as a lead_agent ai_message
in run_events.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: resolve merge conflict in .env.example
Keep both DATABASE_URL (from persistence-scaffold) and WECOM
credentials (from main) after the merge.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(persistence): address review feedback on PR #1851
- Fix naive datetime.now() → datetime.now(UTC) in all ORM models
- Fix seq race condition in DbRunEventStore.put() with FOR UPDATE
and UNIQUE(thread_id, seq) constraint
- Encapsulate _store access in RunManager.update_run_completion()
- Deduplicate _store.put() logic in RunManager via _persist_to_store()
- Add update_run_completion to RunStore ABC + MemoryRunStore
- Wire follow_up_to_run_id through the full create path
- Add error recovery to RunJournal._flush_sync() lost-event scenario
- Add migration note for search_threads breaking change
- Fix test_checkpointer_none_fix mock to set database=None
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* chore: update uv.lock
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(persistence): address 22 review comments from CodeQL, Copilot, and Code Quality
Bug fixes:
- Sanitize log params to prevent log injection (CodeQL)
- Reset threads_meta.status to idle/error when run completes
- Attach messages only to latest checkpoint in /history response
- Write threads_meta on POST /threads so new threads appear in search
Lint fixes:
- Remove unused imports (journal.py, migrations/env.py, test_converters.py)
- Convert lambda to named function (engine.py, Ruff E731)
- Remove unused logger definitions in repos (Ruff F841)
- Add logging to JSONL decode errors and empty except blocks
- Separate assert side-effects in tests (CodeQL)
- Remove unused local variables in tests (Ruff F841)
- Fix max_trace_content truncation to use byte length, not char length
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* style: apply ruff format to persistence and runtime files
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Potential fix for pull request finding 'Statement has no effect'
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
* refactor(runtime): introduce RunContext to reduce run_agent parameter bloat
Extract checkpointer, store, event_store, run_events_config, thread_meta_repo,
and follow_up_to_run_id into a frozen RunContext dataclass. Add get_run_context()
in deps.py to build the base context from app.state singletons. start_run() uses
dataclasses.replace() to enrich per-run fields before passing ctx to run_agent.
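In shape (fields per the description above; the Any annotations and
`base_ctx`/`prev_run_id` names are placeholders):

    from dataclasses import dataclass, replace
    from typing import Any

    @dataclass(frozen=True)
    class RunContext:
        checkpointer: Any = None
        store: Any = None
        event_store: Any = None
        run_events_config: Any = None
        thread_meta_repo: Any = None
        follow_up_to_run_id: str | None = None

    # start_run() enriches the base context per run, e.g.:
    ctx = replace(base_ctx, follow_up_to_run_id=prev_run_id)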
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(gateway): move sanitize_log_param to app/gateway/utils.py
Extract the log-injection sanitizer from routers/threads.py into a shared
utils module and rename to sanitize_log_param (public API). Eliminates the
reverse service → router import in services.py.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* perf: use SQL aggregation for feedback stats and thread token usage
Replace Python-side counting in FeedbackRepository.aggregate_by_run with
a single SELECT COUNT/SUM query. Add RunStore.aggregate_tokens_by_thread
abstract method with SQL GROUP BY implementation in RunRepository and
Python fallback in MemoryRunStore. Simplify the thread_token_usage
endpoint to delegate to the new method, eliminating the limit=10000
truncation risk.
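The token aggregation, in outline (column names are assumptions based on
the "token fields" on RunRow):

    from sqlalchemy import func, select

    stmt = (
        select(RunRow.thread_id,
               func.sum(RunRow.input_tokens),
               func.sum(RunRow.output_tokens))
        .where(RunRow.thread_id == thread_id)
        .group_by(RunRow.thread_id)
    )
    row = (await session.execute(stmt)).first()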
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs: annotate DbRunEventStore.put() as low-frequency path
Add docstring clarifying that put() opens a per-call transaction with
FOR UPDATE and should only be used for infrequent writes (currently
just the initial human_message event). High-throughput callers should
use put_batch() instead.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(threads): fall back to Store search when ThreadMetaRepository is unavailable
When database.backend=memory (default) or no SQL session factory is
configured, search_threads now queries the LangGraph Store instead of
returning 503. Returns empty list if neither Store nor repo is available.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(persistence): introduce ThreadMetaStore ABC for backend-agnostic thread metadata
Add ThreadMetaStore abstract base class with create/get/search/update/delete
interface. ThreadMetaRepository (SQL) now inherits from it. New
MemoryThreadMetaStore wraps LangGraph BaseStore for memory-mode deployments.
deps.py now always provides a non-None thread_meta_repo, eliminating all
`if thread_meta_repo is not None` guards in services.py, worker.py, and
routers/threads.py. search_threads no longer needs a Store fallback branch.
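Interface shape (methods per the description; signatures are assumptions):

    from abc import ABC, abstractmethod

    class ThreadMetaStore(ABC):
        @abstractmethod
        async def create(self, thread_id: str, **fields) -> None: ...

        @abstractmethod
        async def get(self, thread_id: str) -> dict | None: ...

        @abstractmethod
        async def search(self, *, metadata=None, status=None,
                         limit=100, offset=0) -> list[dict]: ...

        @abstractmethod
        async def update_display_name(self, thread_id: str, name: str) -> None: ...

        @abstractmethod
        async def delete(self, thread_id: str) -> None: ...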
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(history): read messages from checkpointer instead of RunEventStore
The /history endpoint now reads messages directly from the
checkpointer's channel_values (the authoritative source) instead of
querying RunEventStore.list_messages(). The RunEventStore API is
preserved for other consumers.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(persistence): address new Copilot review comments
- feedback.py: validate thread_id/run_id before deleting feedback
- jsonl.py: add path traversal protection with ID validation
- run_repo.py: parse `before` to datetime for PostgreSQL compat
- thread_meta_repo.py: fix pagination when metadata filter is active
- database_config.py: use resolve_path for sqlite_dir consistency
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Implement skill self-evolution and skill_manage flow (#1874)
* chore: ignore .worktrees directory
* Add skill_manage self-evolution flow
* Fix CI regressions for skill_manage
* Address PR review feedback for skill evolution
* fix(skill-evolution): preserve history on delete
* fix(skill-evolution): tighten scanner fallbacks
* docs: add skill_manage e2e evidence screenshot
* fix(skill-manage): avoid blocking fs ops in session runtime
---------
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
* fix(config): resolve sqlite_dir relative to CWD, not Paths.base_dir
resolve_path() resolves relative to Paths.base_dir (.deer-flow),
which double-nested the path to .deer-flow/.deer-flow/data/app.db.
Use Path.resolve() (CWD-relative) instead.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Feature/feishu receive file (#1608)
* feat(feishu): add channel file materialization hook for inbound messages
- Introduce Channel.receive_file(msg, thread_id) as a base method for file materialization; default is no-op.
- Implement FeishuChannel.receive_file to download files/images from Feishu messages, save to sandbox, and inject virtual paths into msg.text.
- Update ChannelManager to call receive_file for any channel if msg.files is present, enabling downstream model access to user-uploaded files.
- No impact on Slack/Telegram or other channels (they inherit the default no-op).
* style(backend): format code with ruff for lint compliance
- Auto-formatted packages/harness/deerflow/agents/factory.py and tests/test_create_deerflow_agent.py using `ruff format`
- Ensured both files conform to project linting standards
- Fixes CI lint check failures caused by code style issues
* fix(feishu): handle file write operation asynchronously to prevent blocking
* fix(feishu): rename GetMessageResourceRequest to _GetMessageResourceRequest and remove redundant code
* test(feishu): add tests for receive_file method and placeholder replacement
* fix(manager): remove unnecessary type casting for channel retrieval
* fix(feishu): update logging messages to reflect resource handling instead of image
* fix(feishu): sanitize filename by replacing invalid characters in file uploads
* fix(feishu): improve filename sanitization and reorder image key handling in message processing
* fix(feishu): add thread lock to prevent filename conflicts during file downloads
* fix(test): correct bad merge in test_feishu_parser.py
* chore: run ruff and apply formatting cleanup
fix(feishu): preserve rich-text attachment order and improve fallback filename handling
* fix(docker): restore gateway env vars and fix langgraph empty arg issue (#1915)
Two production docker-compose.yaml bugs prevent `make up` from working:
1. Gateway missing DEER_FLOW_CONFIG_PATH and DEER_FLOW_EXTENSIONS_CONFIG_PATH
environment overrides. Added in fb2d99f (#1836) but accidentally reverted
by ca2fb95 (#1847). Without them, gateway reads host paths from .env via
env_file, causing FileNotFoundError inside the container.
2. Langgraph command fails when LANGGRAPH_ALLOW_BLOCKING is unset (default).
Empty $${allow_blocking} inserts a bare space between flags, causing
' --no-reload' to be parsed as an unexpected extra argument. Fix by building
args string first and conditionally appending --allow-blocking.
Co-authored-by: cooper <cooperfu@tencent.com>
* fix(frontend): resolve invalid HTML nesting and tabnabbing vulnerabilities (#1904)
* fix(frontend): resolve invalid HTML nesting and tabnabbing vulnerabilities
Fix `<button>` inside `<a>` invalid HTML in artifact components and add
missing `noopener,noreferrer` to `window.open` calls to prevent reverse
tabnabbing.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(frontend): address Copilot review on tabnabbing and double-tab-open
Remove redundant parent onClick on web_fetch ChainOfThoughtStep to
prevent opening two tabs on link click, and explicitly null out
window.opener after window.open() for defensive tabnabbing hardening.
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(persistence): organize entities into per-entity directories
Restructure the persistence layer from horizontal "models/ + repositories/"
split into vertical entity-aligned directories. Each entity (thread_meta,
run, feedback) now owns its ORM model, abstract interface (where applicable),
and concrete implementations under a single directory with an aggregating
__init__.py for one-line imports.
Layout:
persistence/thread_meta/{base,model,sql,memory}.py
persistence/run/{model,sql}.py
persistence/feedback/{model,sql}.py
models/__init__.py is kept as a facade so Alembic autogenerate continues to
discover all ORM tables via Base.metadata. RunEventRow remains under
models/run_event.py because its storage implementation lives in
runtime/events/store/db.py and has no matching repository directory.
The repositories/ directory is removed entirely. All call sites in
gateway/deps.py and tests are updated to import from the new entity
packages, e.g.:
from deerflow.persistence.thread_meta import ThreadMetaRepository
from deerflow.persistence.run import RunRepository
from deerflow.persistence.feedback import FeedbackRepository
Full test suite passes (1690 passed, 14 skipped).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(gateway): sync thread rename and delete through ThreadMetaStore
The POST /threads/{id}/state endpoint previously synced title changes
only to the LangGraph Store via _store_upsert. In sqlite mode the search
endpoint reads from the ThreadMetaRepository SQL table, so renames never
appeared in /threads/search until the next agent run completed (worker.py
syncs title from checkpoint to thread_meta in its finally block).
Likewise the DELETE /threads/{id} endpoint cleaned up the filesystem,
Store, and checkpointer but left the threads_meta row orphaned in sqlite,
so deleted threads kept appearing in /threads/search.
Fix both endpoints by routing through the ThreadMetaStore abstraction
which already has the correct sqlite/memory implementations wired up by
deps.py. The rename path now calls update_display_name() and the delete
path calls delete() — both work uniformly across backends.
Verified end-to-end with curl in gateway mode against sqlite backend.
Existing test suite (1690 passed) and focused router/repo tests pass.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(gateway): route all thread metadata access through ThreadMetaStore
Following the rename/delete bug fix in PR1, migrate the remaining direct
LangGraph Store reads/writes in the threads router and services to the
ThreadMetaStore abstraction so that the sqlite and memory backends behave
identically and the legacy dual-write paths can be removed.
Migrated endpoints (threads.py):
- create_thread: idempotency check + write now use thread_meta_repo.get/create
instead of dual-writing the LangGraph Store and the SQL row.
- get_thread: reads from thread_meta_repo.get; the checkpoint-only fallback
for legacy threads is preserved.
- patch_thread: replaced _store_get/_store_put with thread_meta_repo.update_metadata.
- delete_thread_data: dropped the legacy store.adelete; thread_meta_repo.delete
already covers it.
Removed dead code (services.py):
- _upsert_thread_in_store — redundant with the immediately following
thread_meta_repo.create() call.
- _sync_thread_title_after_run — worker.py's finally block already syncs
the title via thread_meta_repo.update_display_name() after each run.
Removed dead code (threads.py):
- _store_get / _store_put / _store_upsert helpers (no remaining callers).
- THREADS_NS constant.
- get_store import (router no longer touches the LangGraph Store directly).
New abstract method:
- ThreadMetaStore.update_metadata(thread_id, metadata) merges metadata into
the thread's metadata field. Implemented in both ThreadMetaRepository (SQL,
read-modify-write inside one session) and MemoryThreadMetaStore. Three new
unit tests cover merge / empty / nonexistent behaviour.
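The SQL side, in outline (a sketch; the ORM attribute name for the
metadata column is an assumption):

    async with self._session_factory() as session:
        row = await session.get(ThreadMetaRow, thread_id)
        if row is not None:
            row.metadata_ = {**(row.metadata_ or {}), **metadata}
            await session.commit()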
Net change: -134 lines. Full test suite: 1693 passed, 14 skipped.
Verified end-to-end with curl in gateway mode against sqlite backend
(create / patch / get / rename / search / delete).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Co-authored-by: DanielWalnut <45447813+hetaoBackend@users.noreply.github.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: JilongSun <965640067@qq.com>
Co-authored-by: jie <49781832+stan-fu@users.noreply.github.com>
Co-authored-by: cooper <cooperfu@tencent.com>
Co-authored-by: yangzheli <43645580+yangzheli@users.noreply.github.com>
"""Thread CRUD, state, and history endpoints.
|
|
|
|
Combines the existing thread-local filesystem cleanup with LangGraph
|
|
Platform-compatible thread management backed by the checkpointer.
|
|
|
|
Channel values returned in state responses are serialized through
|
|
:func:`deerflow.runtime.serialization.serialize_channel_values` to
|
|
ensure LangChain message objects are converted to JSON-safe dicts
|
|
matching the LangGraph Platform wire format expected by the
|
|
``useStream`` React hook.
|
|
"""
|
|
|
|
from __future__ import annotations
|
|
|
|
import logging
|
|
import time
|
|
import uuid
|
|
from typing import Any
|
|
|
|
from fastapi import APIRouter, HTTPException, Request
|
|
from pydantic import BaseModel, Field
|
|
|
|
from app.gateway.deps import get_checkpointer
|
|
from app.gateway.utils import sanitize_log_param
|
|
from deerflow.config.paths import Paths, get_paths
|
|
from deerflow.runtime import serialize_channel_values
|
|
|
|
logger = logging.getLogger(__name__)
|
|
router = APIRouter(prefix="/api/threads", tags=["threads"])
|
|
|
|
|
|
# ---------------------------------------------------------------------------
|
|
# Response / request models
|
|
# ---------------------------------------------------------------------------
|
|
|
|
|
|
class ThreadDeleteResponse(BaseModel):
|
|
"""Response model for thread cleanup."""
|
|
|
|
success: bool
|
|
message: str
|
|
|
|
|
|
class ThreadResponse(BaseModel):
|
|
"""Response model for a single thread."""
|
|
|
|
thread_id: str = Field(description="Unique thread identifier")
|
|
status: str = Field(default="idle", description="Thread status: idle, busy, interrupted, error")
|
|
created_at: str = Field(default="", description="ISO timestamp")
|
|
updated_at: str = Field(default="", description="ISO timestamp")
|
|
metadata: dict[str, Any] = Field(default_factory=dict, description="Thread metadata")
|
|
values: dict[str, Any] = Field(default_factory=dict, description="Current state channel values")
|
|
interrupts: dict[str, Any] = Field(default_factory=dict, description="Pending interrupts")
|
|
|
|
|
|
class ThreadCreateRequest(BaseModel):
|
|
"""Request body for creating a thread."""
|
|
|
|
thread_id: str | None = Field(default=None, description="Optional thread ID (auto-generated if omitted)")
|
|
assistant_id: str | None = Field(default=None, description="Associate thread with an assistant")
|
|
metadata: dict[str, Any] = Field(default_factory=dict, description="Initial metadata")
|
|
|
|
|
|
class ThreadSearchRequest(BaseModel):
|
|
"""Request body for searching threads."""
|
|
|
|
metadata: dict[str, Any] = Field(default_factory=dict, description="Metadata filter (exact match)")
|
|
limit: int = Field(default=100, ge=1, le=1000, description="Maximum results")
|
|
offset: int = Field(default=0, ge=0, description="Pagination offset")
|
|
status: str | None = Field(default=None, description="Filter by thread status")
|
|
|
|
|
|
class ThreadStateResponse(BaseModel):
|
|
"""Response model for thread state."""
|
|
|
|
values: dict[str, Any] = Field(default_factory=dict, description="Current channel values")
|
|
next: list[str] = Field(default_factory=list, description="Next tasks to execute")
|
|
metadata: dict[str, Any] = Field(default_factory=dict, description="Checkpoint metadata")
|
|
checkpoint: dict[str, Any] = Field(default_factory=dict, description="Checkpoint info")
|
|
checkpoint_id: str | None = Field(default=None, description="Current checkpoint ID")
|
|
parent_checkpoint_id: str | None = Field(default=None, description="Parent checkpoint ID")
|
|
created_at: str | None = Field(default=None, description="Checkpoint timestamp")
|
|
tasks: list[dict[str, Any]] = Field(default_factory=list, description="Interrupted task details")
|
|
|
|
|
|
class ThreadPatchRequest(BaseModel):
|
|
"""Request body for patching thread metadata."""
|
|
|
|
metadata: dict[str, Any] = Field(default_factory=dict, description="Metadata to merge")
|
|
|
|
|
|
class ThreadStateUpdateRequest(BaseModel):
|
|
"""Request body for updating thread state (human-in-the-loop resume)."""
|
|
|
|
values: dict[str, Any] | None = Field(default=None, description="Channel values to merge")
|
|
checkpoint_id: str | None = Field(default=None, description="Checkpoint to branch from")
|
|
checkpoint: dict[str, Any] | None = Field(default=None, description="Full checkpoint object")
|
|
as_node: str | None = Field(default=None, description="Node identity for the update")
|
|
|
|
|
|
class HistoryEntry(BaseModel):
|
|
"""Single checkpoint history entry."""
|
|
|
|
checkpoint_id: str
|
|
parent_checkpoint_id: str | None = None
|
|
metadata: dict[str, Any] = Field(default_factory=dict)
|
|
values: dict[str, Any] = Field(default_factory=dict)
|
|
created_at: str | None = None
|
|
next: list[str] = Field(default_factory=list)
|
|
|
|
|
|
class ThreadHistoryRequest(BaseModel):
|
|
"""Request body for checkpoint history."""
|
|
|
|
limit: int = Field(default=10, ge=1, le=100, description="Maximum entries")
|
|
before: str | None = Field(default=None, description="Cursor for pagination")
|
|
|
|
|
|
# ---------------------------------------------------------------------------
|
|
# Helpers
|
|
# ---------------------------------------------------------------------------
|
|
|
|
|
|
def _delete_thread_data(thread_id: str, paths: Paths | None = None) -> ThreadDeleteResponse:
    """Delete local persisted filesystem data for a thread."""
    path_manager = paths or get_paths()
    try:
        path_manager.delete_thread_dir(thread_id)
    except ValueError as exc:
        raise HTTPException(status_code=422, detail=str(exc)) from exc
    except FileNotFoundError:
        # Not critical — thread data may not exist on disk
        logger.debug("No local thread data to delete for %s", sanitize_log_param(thread_id))
        return ThreadDeleteResponse(success=True, message=f"No local data for {thread_id}")
    except Exception as exc:
        logger.exception("Failed to delete thread data for %s", sanitize_log_param(thread_id))
        raise HTTPException(status_code=500, detail="Failed to delete local thread data.") from exc

    logger.info("Deleted local thread data for %s", sanitize_log_param(thread_id))
    return ThreadDeleteResponse(success=True, message=f"Deleted local thread data for {thread_id}")


def _derive_thread_status(checkpoint_tuple) -> str:
    """Derive thread status from checkpoint metadata."""
    if checkpoint_tuple is None:
        return "idle"
    pending_writes = getattr(checkpoint_tuple, "pending_writes", None) or []

    # Check for error in pending writes
    for pw in pending_writes:
        if len(pw) >= 2 and pw[1] == "__error__":
            return "error"

    # Check for pending next tasks (indicates interrupt)
    tasks = getattr(checkpoint_tuple, "tasks", None)
    if tasks:
        return "interrupted"

    return "idle"


# ---------------------------------------------------------------------------
# Endpoints
# ---------------------------------------------------------------------------

@router.delete("/{thread_id}", response_model=ThreadDeleteResponse)
|
|
async def delete_thread_data(thread_id: str, request: Request) -> ThreadDeleteResponse:
|
|
"""Delete local persisted filesystem data for a thread.
|
|
|
|
Cleans DeerFlow-managed thread directories, removes checkpoint data,
|
|
and removes the thread_meta row from the configured ThreadMetaStore
|
|
(sqlite or memory).
|
|
"""
|
|
from app.gateway.deps import get_thread_meta_repo
|
|
|
|
# Clean local filesystem
|
|
response = _delete_thread_data(thread_id)
|
|
|
|
# Remove checkpoints (best-effort)
|
|
checkpointer = getattr(request.app.state, "checkpointer", None)
|
|
if checkpointer is not None:
|
|
try:
|
|
if hasattr(checkpointer, "adelete_thread"):
|
|
await checkpointer.adelete_thread(thread_id)
|
|
except Exception:
|
|
logger.debug("Could not delete checkpoints for thread %s (not critical)", sanitize_log_param(thread_id))
|
|
|
|
# Remove thread_meta row (best-effort) — required for sqlite backend
|
|
# so the deleted thread no longer appears in /threads/search.
|
|
try:
|
|
thread_meta_repo = get_thread_meta_repo(request)
|
|
await thread_meta_repo.delete(thread_id)
|
|
except Exception:
|
|
logger.debug("Could not delete thread_meta for %s (not critical)", sanitize_log_param(thread_id))
|
|
|
|
return response
|
|
|
|
|
|
@router.post("", response_model=ThreadResponse)
|
|
async def create_thread(body: ThreadCreateRequest, request: Request) -> ThreadResponse:
|
|
"""Create a new thread.
|
|
|
|
Writes a thread_meta record (so the thread appears in /threads/search)
|
|
and an empty checkpoint (so state endpoints work immediately).
|
|
Idempotent: returns the existing record when ``thread_id`` already exists.
|
|
"""
|
|
from app.gateway.deps import get_thread_meta_repo
|
|
|
|
checkpointer = get_checkpointer(request)
|
|
thread_meta_repo = get_thread_meta_repo(request)
|
|
thread_id = body.thread_id or str(uuid.uuid4())
|
|
now = time.time()
|
|
|
|
# Idempotency: return existing record when already present
|
|
existing_record = await thread_meta_repo.get(thread_id)
|
|
if existing_record is not None:
|
|
return ThreadResponse(
|
|
thread_id=thread_id,
|
|
status=existing_record.get("status", "idle"),
|
|
created_at=str(existing_record.get("created_at", "")),
|
|
updated_at=str(existing_record.get("updated_at", "")),
|
|
metadata=existing_record.get("metadata", {}),
|
|
)
|
|
|
|
# Write thread_meta so the thread appears in /threads/search immediately
|
|
try:
|
|
await thread_meta_repo.create(
|
|
thread_id,
|
|
assistant_id=getattr(body, "assistant_id", None),
|
|
metadata=body.metadata,
|
|
)
|
|
except Exception:
|
|
logger.exception("Failed to write thread_meta for %s", sanitize_log_param(thread_id))
|
|
raise HTTPException(status_code=500, detail="Failed to create thread")
|
|
|
|
# Write an empty checkpoint so state endpoints work immediately
|
|
config = {"configurable": {"thread_id": thread_id, "checkpoint_ns": ""}}
|
|
try:
|
|
from langgraph.checkpoint.base import empty_checkpoint
|
|
|
|
ckpt_metadata = {
|
|
"step": -1,
|
|
"source": "input",
|
|
"writes": None,
|
|
"parents": {},
|
|
**body.metadata,
|
|
"created_at": now,
|
|
}
|
|
await checkpointer.aput(config, empty_checkpoint(), ckpt_metadata, {})
|
|
except Exception:
|
|
logger.exception("Failed to create checkpoint for thread %s", sanitize_log_param(thread_id))
|
|
raise HTTPException(status_code=500, detail="Failed to create thread")
|
|
|
|
logger.info("Thread created: %s", sanitize_log_param(thread_id))
|
|
return ThreadResponse(
|
|
thread_id=thread_id,
|
|
status="idle",
|
|
created_at=str(now),
|
|
updated_at=str(now),
|
|
metadata=body.metadata,
|
|
)
|
|
|
|
|
|
@router.post("/search", response_model=list[ThreadResponse])
|
|
async def search_threads(body: ThreadSearchRequest, request: Request) -> list[ThreadResponse]:
|
|
"""Search and list threads.
|
|
|
|
Delegates to the configured ThreadMetaStore implementation
|
|
(SQL-backed for sqlite/postgres, Store-backed for memory mode).
|
|
"""
|
|
from app.gateway.deps import get_thread_meta_repo
|
|
|
|
repo = get_thread_meta_repo(request)
|
|
rows = await repo.search(
|
|
metadata=body.metadata or None,
|
|
status=body.status,
|
|
limit=body.limit,
|
|
offset=body.offset,
|
|
)
|
|
return [
|
|
ThreadResponse(
|
|
thread_id=r["thread_id"],
|
|
status=r.get("status", "idle"),
|
|
created_at=r.get("created_at", ""),
|
|
updated_at=r.get("updated_at", ""),
|
|
metadata=r.get("metadata", {}),
|
|
values={"title": r["display_name"]} if r.get("display_name") else {},
|
|
interrupts={},
|
|
)
|
|
for r in rows
|
|
]
|
|
|
|
|
|
@router.patch("/{thread_id}", response_model=ThreadResponse)
|
|
async def patch_thread(thread_id: str, body: ThreadPatchRequest, request: Request) -> ThreadResponse:
|
|
"""Merge metadata into a thread record."""
|
|
from app.gateway.deps import get_thread_meta_repo
|
|
|
|
thread_meta_repo = get_thread_meta_repo(request)
|
|
record = await thread_meta_repo.get(thread_id)
|
|
if record is None:
|
|
raise HTTPException(status_code=404, detail=f"Thread {thread_id} not found")
|
|
|
|
try:
|
|
await thread_meta_repo.update_metadata(thread_id, body.metadata)
|
|
except Exception:
|
|
logger.exception("Failed to patch thread %s", sanitize_log_param(thread_id))
|
|
raise HTTPException(status_code=500, detail="Failed to update thread")
|
|
|
|
# Re-read to get the merged metadata + refreshed updated_at
|
|
record = await thread_meta_repo.get(thread_id) or record
|
|
return ThreadResponse(
|
|
thread_id=thread_id,
|
|
status=record.get("status", "idle"),
|
|
created_at=str(record.get("created_at", "")),
|
|
updated_at=str(record.get("updated_at", "")),
|
|
metadata=record.get("metadata", {}),
|
|
)
|
|
|
|
|
|
@router.get("/{thread_id}", response_model=ThreadResponse)
|
|
async def get_thread(thread_id: str, request: Request) -> ThreadResponse:
|
|
"""Get thread info.
|
|
|
|
Reads metadata from the ThreadMetaStore and derives the accurate
|
|
execution status from the checkpointer. Falls back to the checkpointer
|
|
alone for threads that pre-date ThreadMetaStore adoption (backward compat).
|
|
"""
|
|
from app.gateway.deps import get_thread_meta_repo
|
|
|
|
thread_meta_repo = get_thread_meta_repo(request)
|
|
checkpointer = get_checkpointer(request)
|
|
|
|
record: dict | None = await thread_meta_repo.get(thread_id)
|
|
|
|
# Derive accurate status from the checkpointer
|
|
config = {"configurable": {"thread_id": thread_id, "checkpoint_ns": ""}}
|
|
try:
|
|
checkpoint_tuple = await checkpointer.aget_tuple(config)
|
|
except Exception:
|
|
logger.exception("Failed to get checkpoint for thread %s", sanitize_log_param(thread_id))
|
|
raise HTTPException(status_code=500, detail="Failed to get thread")
|
|
|
|
if record is None and checkpoint_tuple is None:
|
|
raise HTTPException(status_code=404, detail=f"Thread {thread_id} not found")
|
|
|
|
# If the thread exists in the checkpointer but not in thread_meta (e.g.
|
|
# legacy data created before thread_meta adoption), synthesize a minimal
|
|
# record from the checkpoint metadata.
|
|
if record is None and checkpoint_tuple is not None:
|
|
ckpt_meta = getattr(checkpoint_tuple, "metadata", {}) or {}
|
|
record = {
|
|
"thread_id": thread_id,
|
|
"status": "idle",
|
|
"created_at": ckpt_meta.get("created_at", ""),
|
|
"updated_at": ckpt_meta.get("updated_at", ckpt_meta.get("created_at", "")),
|
|
"metadata": {k: v for k, v in ckpt_meta.items() if k not in ("created_at", "updated_at", "step", "source", "writes", "parents")},
|
|
}
|
|
|
|
if record is None:
|
|
raise HTTPException(status_code=404, detail=f"Thread {thread_id} not found")
|
|
|
|
status = _derive_thread_status(checkpoint_tuple) if checkpoint_tuple is not None else record.get("status", "idle")
|
|
checkpoint = getattr(checkpoint_tuple, "checkpoint", {}) or {} if checkpoint_tuple is not None else {}
|
|
channel_values = checkpoint.get("channel_values", {})
|
|
|
|
return ThreadResponse(
|
|
thread_id=thread_id,
|
|
status=status,
|
|
created_at=str(record.get("created_at", "")),
|
|
updated_at=str(record.get("updated_at", "")),
|
|
metadata=record.get("metadata", {}),
|
|
values=serialize_channel_values(channel_values),
|
|
)
|
|
|
|
|
|
@router.get("/{thread_id}/state", response_model=ThreadStateResponse)
|
|
async def get_thread_state(thread_id: str, request: Request) -> ThreadStateResponse:
|
|
"""Get the latest state snapshot for a thread.
|
|
|
|
Channel values are serialized to ensure LangChain message objects
|
|
are converted to JSON-safe dicts.
|
|
"""
|
|
checkpointer = get_checkpointer(request)
|
|
|
|
config = {"configurable": {"thread_id": thread_id, "checkpoint_ns": ""}}
|
|
try:
|
|
checkpoint_tuple = await checkpointer.aget_tuple(config)
|
|
except Exception:
|
|
logger.exception("Failed to get state for thread %s", sanitize_log_param(thread_id))
|
|
raise HTTPException(status_code=500, detail="Failed to get thread state")
|
|
|
|
if checkpoint_tuple is None:
|
|
raise HTTPException(status_code=404, detail=f"Thread {thread_id} not found")
|
|
|
|
checkpoint = getattr(checkpoint_tuple, "checkpoint", {}) or {}
|
|
metadata = getattr(checkpoint_tuple, "metadata", {}) or {}
|
|
checkpoint_id = None
|
|
ckpt_config = getattr(checkpoint_tuple, "config", {})
|
|
if ckpt_config:
|
|
checkpoint_id = ckpt_config.get("configurable", {}).get("checkpoint_id")
|
|
|
|
channel_values = checkpoint.get("channel_values", {})
|
|
|
|
parent_config = getattr(checkpoint_tuple, "parent_config", None)
|
|
parent_checkpoint_id = None
|
|
if parent_config:
|
|
parent_checkpoint_id = parent_config.get("configurable", {}).get("checkpoint_id")
|
|
|
|
tasks_raw = getattr(checkpoint_tuple, "tasks", []) or []
|
|
next_tasks = [t.name for t in tasks_raw if hasattr(t, "name")]
|
|
tasks = [{"id": getattr(t, "id", ""), "name": getattr(t, "name", "")} for t in tasks_raw]
|
|
|
|
return ThreadStateResponse(
|
|
values=serialize_channel_values(channel_values),
|
|
next=next_tasks,
|
|
metadata=metadata,
|
|
checkpoint={"id": checkpoint_id, "ts": str(metadata.get("created_at", ""))},
|
|
checkpoint_id=checkpoint_id,
|
|
parent_checkpoint_id=parent_checkpoint_id,
|
|
created_at=str(metadata.get("created_at", "")),
|
|
tasks=tasks,
|
|
)
|
|
|
|
|
|
@router.post("/{thread_id}/state", response_model=ThreadStateResponse)
|
|
async def update_thread_state(thread_id: str, body: ThreadStateUpdateRequest, request: Request) -> ThreadStateResponse:
|
|
"""Update thread state (e.g. for human-in-the-loop resume or title rename).
|
|
|
|
Writes a new checkpoint that merges *body.values* into the latest
|
|
channel values, then syncs any updated ``title`` field through the
|
|
ThreadMetaStore abstraction so that ``/threads/search`` reflects the
|
|
change immediately in both sqlite and memory backends.
|
|
"""
|
|
from app.gateway.deps import get_thread_meta_repo
|
|
|
|
checkpointer = get_checkpointer(request)
|
|
thread_meta_repo = get_thread_meta_repo(request)
|
|
|
|
# checkpoint_ns must be present in the config for aput — default to ""
|
|
# (the root graph namespace). checkpoint_id is optional; omitting it
|
|
# fetches the latest checkpoint for the thread.
|
|
read_config: dict[str, Any] = {
|
|
"configurable": {
|
|
"thread_id": thread_id,
|
|
"checkpoint_ns": "",
|
|
}
|
|
}
|
|
if body.checkpoint_id:
|
|
read_config["configurable"]["checkpoint_id"] = body.checkpoint_id
|
|
|
|
try:
|
|
checkpoint_tuple = await checkpointer.aget_tuple(read_config)
|
|
except Exception:
|
|
logger.exception("Failed to get state for thread %s", sanitize_log_param(thread_id))
|
|
raise HTTPException(status_code=500, detail="Failed to get thread state")
|
|
|
|
if checkpoint_tuple is None:
|
|
raise HTTPException(status_code=404, detail=f"Thread {thread_id} not found")
|
|
|
|
# Work on mutable copies so we don't accidentally mutate cached objects.
|
|
checkpoint: dict[str, Any] = dict(getattr(checkpoint_tuple, "checkpoint", {}) or {})
|
|
metadata: dict[str, Any] = dict(getattr(checkpoint_tuple, "metadata", {}) or {})
|
|
channel_values: dict[str, Any] = dict(checkpoint.get("channel_values", {}))
|
|
|
|
if body.values:
|
|
channel_values.update(body.values)
|
|
|
|
checkpoint["channel_values"] = channel_values
|
|
metadata["updated_at"] = time.time()
|
|
|
|
if body.as_node:
|
|
metadata["source"] = "update"
|
|
metadata["step"] = metadata.get("step", 0) + 1
|
|
metadata["writes"] = {body.as_node: body.values}
|
|
|
|
# aput requires checkpoint_ns in the config — use the same config used for the
|
|
# read (which always includes checkpoint_ns=""). Do NOT include checkpoint_id
|
|
# so that aput generates a fresh checkpoint ID for the new snapshot.
|
|
write_config: dict[str, Any] = {
|
|
"configurable": {
|
|
"thread_id": thread_id,
|
|
"checkpoint_ns": "",
|
|
}
|
|
}
|
|
try:
|
|
new_config = await checkpointer.aput(write_config, checkpoint, metadata, {})
|
|
except Exception:
|
|
logger.exception("Failed to update state for thread %s", sanitize_log_param(thread_id))
|
|
raise HTTPException(status_code=500, detail="Failed to update thread state")
|
|
|
|
new_checkpoint_id: str | None = None
|
|
if isinstance(new_config, dict):
|
|
new_checkpoint_id = new_config.get("configurable", {}).get("checkpoint_id")
|
|
|
|
# Sync title changes through the ThreadMetaStore abstraction so /threads/search
|
|
# reflects them immediately in both sqlite and memory backends.
|
|
if body.values and "title" in body.values:
|
|
new_title = body.values["title"]
|
|
if new_title: # Skip empty strings and None
|
|
try:
|
|
await thread_meta_repo.update_display_name(thread_id, new_title)
|
|
except Exception:
|
|
logger.debug("Failed to sync title to thread_meta for %s (non-fatal)", sanitize_log_param(thread_id))
|
|
|
|
return ThreadStateResponse(
|
|
values=serialize_channel_values(channel_values),
|
|
next=[],
|
|
metadata=metadata,
|
|
checkpoint_id=new_checkpoint_id,
|
|
created_at=str(metadata.get("created_at", "")),
|
|
)
|
|
|
|
|
|
@router.post("/{thread_id}/history", response_model=list[HistoryEntry])
|
|
async def get_thread_history(thread_id: str, body: ThreadHistoryRequest, request: Request) -> list[HistoryEntry]:
|
|
"""Get checkpoint history for a thread.
|
|
|
|
Messages are read from the checkpointer's channel values (the
|
|
authoritative source) and serialized via
|
|
:func:`~deerflow.runtime.serialization.serialize_channel_values`.
|
|
Only the latest (first) checkpoint carries the ``messages`` key to
|
|
avoid duplicating them across every entry.
|
|
"""
|
|
checkpointer = get_checkpointer(request)
|
|
|
|
config: dict[str, Any] = {"configurable": {"thread_id": thread_id}}
|
|
if body.before:
|
|
config["configurable"]["checkpoint_id"] = body.before
|
|
|
|
entries: list[HistoryEntry] = []
|
|
is_latest_checkpoint = True
|
|
try:
|
|
async for checkpoint_tuple in checkpointer.alist(config, limit=body.limit):
|
|
ckpt_config = getattr(checkpoint_tuple, "config", {})
|
|
parent_config = getattr(checkpoint_tuple, "parent_config", None)
|
|
metadata = getattr(checkpoint_tuple, "metadata", {}) or {}
|
|
checkpoint = getattr(checkpoint_tuple, "checkpoint", {}) or {}
|
|
|
|
checkpoint_id = ckpt_config.get("configurable", {}).get("checkpoint_id", "")
|
|
parent_id = None
|
|
if parent_config:
|
|
parent_id = parent_config.get("configurable", {}).get("checkpoint_id")
|
|
|
|
channel_values = checkpoint.get("channel_values", {})
|
|
|
|
# Build values from checkpoint channel_values
|
|
values: dict[str, Any] = {}
|
|
if title := channel_values.get("title"):
|
|
values["title"] = title
|
|
if thread_data := channel_values.get("thread_data"):
|
|
values["thread_data"] = thread_data
|
|
|
|
# Attach messages from checkpointer only for the latest checkpoint
|
|
if is_latest_checkpoint:
|
|
messages = channel_values.get("messages")
|
|
if messages:
|
|
values["messages"] = serialize_channel_values({"messages": messages}).get("messages", [])
|
|
is_latest_checkpoint = False
|
|
|
|
# Derive next tasks
|
|
tasks_raw = getattr(checkpoint_tuple, "tasks", []) or []
|
|
next_tasks = [t.name for t in tasks_raw if hasattr(t, "name")]
|
|
|
|
# Strip LangGraph internal keys from metadata
|
|
user_meta = {k: v for k, v in metadata.items() if k not in ("created_at", "updated_at", "step", "source", "writes", "parents")}
|
|
# Keep step for ordering context
|
|
if "step" in metadata:
|
|
user_meta["step"] = metadata["step"]
|
|
|
|
entries.append(
|
|
HistoryEntry(
|
|
checkpoint_id=checkpoint_id,
|
|
parent_checkpoint_id=parent_id,
|
|
metadata=user_meta,
|
|
values=values,
|
|
created_at=str(metadata.get("created_at", "")),
|
|
next=next_tasks,
|
|
)
|
|
)
|
|
except Exception:
|
|
logger.exception("Failed to get history for thread %s", sanitize_log_param(thread_id))
|
|
raise HTTPException(status_code=500, detail="Failed to get thread history")
|
|
|
|
return entries
|