deer-flow/backend/tests/test_run_event_store.py
rayhpeng d8ecaf46c9 feat(persistence): add unified persistence layer with event store, token tracking, and feedback (#1930)
* feat(persistence): add SQLAlchemy 2.0 async ORM scaffold

Introduce a unified database configuration (DatabaseConfig) that
controls both the LangGraph checkpointer and the DeerFlow application
persistence layer from a single `database:` config section.

New modules:
- deerflow.config.database_config — Pydantic config with memory/sqlite/postgres backends
- deerflow.persistence — async engine lifecycle, DeclarativeBase with to_dict mixin, Alembic skeleton
- deerflow.runtime.runs.store — RunStore ABC + MemoryRunStore implementation

Gateway integration initializes/tears down the persistence engine in
the existing langgraph_runtime() context manager. Legacy checkpointer
config is preserved for backward compatibility.
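
A minimal sketch of the unified config, assuming Pydantic v2 and
illustrative field names beyond the documented backend choices:

    from typing import Literal

    from pydantic import BaseModel

    class DatabaseConfig(BaseModel):
        backend: Literal["memory", "sqlite", "postgres"] = "memory"
        sqlite_dir: str = ".deer-flow/data"  # sqlite: where app.db lives
        url: str | None = None               # postgres: e.g. DATABASE_URL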

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(persistence): add RunEventStore ABC + MemoryRunEventStore

Phase 2-A prerequisite for event storage: adds the unified run event
stream interface (RunEventStore) with an in-memory implementation,
RunEventsConfig, gateway integration, and comprehensive tests (27 cases).
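
For orientation, the interface as exercised by the test file below
(a sketch; exact signatures and defaults may differ):

    import abc

    class RunEventStore(abc.ABC):
        @abc.abstractmethod
        async def put(self, *, thread_id: str, run_id: str, event_type: str,
                      category: str, content=None, metadata=None,
                      created_at=None) -> dict: ...

        @abc.abstractmethod
        async def put_batch(self, events: list[dict]) -> list[dict]: ...

        @abc.abstractmethod
        async def list_messages(self, thread_id: str, *, before_seq=None,
                                after_seq=None, limit=None) -> list[dict]: ...

        @abc.abstractmethod
        async def list_events(self, thread_id: str, run_id: str, *,
                              event_types=None) -> list[dict]: ...

        @abc.abstractmethod
        async def list_messages_by_run(self, thread_id: str,
                                       run_id: str) -> list[dict]: ...

        @abc.abstractmethod
        async def count_messages(self, thread_id: str) -> int: ...

        @abc.abstractmethod
        async def delete_by_thread(self, thread_id: str) -> int: ...

        @abc.abstractmethod
        async def delete_by_run(self, thread_id: str, run_id: str) -> int: ...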

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(persistence): add ORM models, repositories, DB/JSONL event stores, RunJournal, and API endpoints

Phase 2-B: run persistence + event storage + token tracking.

- ORM models: RunRow (with token fields), ThreadMetaRow, RunEventRow
- RunRepository implements RunStore ABC via SQLAlchemy ORM
- ThreadMetaRepository with owner access control
- DbRunEventStore with trace content truncation and cursor pagination
- JsonlRunEventStore with per-run files and seq recovery from disk
- RunJournal (BaseCallbackHandler) captures LLM/tool/lifecycle events,
  accumulates token usage by caller type, buffers and flushes to store
- RunManager now accepts optional RunStore for persistent backing
- Worker creates RunJournal, writes human_message, injects callbacks
- Gateway deps use factory functions (RunRepository when DB available)
- New endpoints: messages, run messages, run events, token-usage
- ThreadCreateRequest gains assistant_id field
- 92 tests pass (33 new), zero regressions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(persistence): add user feedback + follow-up run association

Phase 2-C: feedback and follow-up tracking.

- FeedbackRow ORM model (rating +1/-1, optional message_id, comment)
- FeedbackRepository with CRUD, list_by_run/thread, aggregate stats
- Feedback API endpoints: create, list, stats, delete
- follow_up_to_run_id in RunCreateRequest (explicit or auto-detected
  from latest successful run on the thread)
- Worker writes follow_up_to_run_id into human_message event metadata
- Gateway deps: feedback_repo factory + getter
- 17 new tests (14 FeedbackRepository + 3 follow-up association)
- 109 total tests pass, zero regressions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test+config: comprehensive Phase 2 test coverage + deprecate checkpointer config

- config.example.yaml: deprecate standalone checkpointer section, activate
  unified database:sqlite as default (drives both checkpointer + app data)
- New: test_thread_meta_repo.py (14 tests) — full ThreadMetaRepository coverage
  including check_access owner logic, list_by_owner pagination
- Extended test_run_repository.py (+4 tests) — completion preserves fields,
  list ordering desc, limit, owner_none returns all
- Extended test_run_journal.py (+8 tests) — on_chain_error, track_tokens=false,
  middleware no ai_message, unknown caller tokens, convenience fields,
  tool_error, non-summarization custom event
- Extended test_run_event_store.py (+7 tests) — DB batch seq continuity,
  make_run_event_store factory (memory/db/jsonl/fallback/unknown)
- Extended test_phase2b_integration.py (+4 tests) — create_or_reject persists,
  follow-up metadata, summarization in history, full DB-backed lifecycle
- Fixed DB integration test to use proper fake objects (not MagicMock)
  for JSON-serializable metadata
- 157 total Phase 2 tests pass, zero regressions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* config: move default sqlite_dir to .deer-flow/data

Keep SQLite databases alongside other DeerFlow-managed data
(threads, memory) under the .deer-flow/ directory instead of a
top-level ./data folder.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(persistence): remove UTFJSON, use engine-level json_serializer + datetime.now()

- Replace custom UTFJSON type with standard sqlalchemy.JSON in all ORM
  models. Add json_serializer=json.dumps(ensure_ascii=False) to all
  create_async_engine calls so non-ASCII text (Chinese etc.) is stored
  as-is in both SQLite and Postgres (see the sketch below).
- Change ORM datetime defaults from datetime.now(UTC) to datetime.now(),
  remove UTC imports.
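
A sketch of the wiring (the named helper reflects the later E731 cleanup;
its name is illustrative):

    import json

    from sqlalchemy.ext.asyncio import create_async_engine

    def _json_dumps(obj) -> str:
        # Store non-ASCII text (e.g. Chinese) as-is rather than \uXXXX escapes.
        return json.dumps(obj, ensure_ascii=False)

    engine = create_async_engine(
        "sqlite+aiosqlite:///app.db", json_serializer=_json_dumps
    )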

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(gateway): simplify deps.py with getter factory + inline repos

- Replace 6 identical getter functions with a _require() factory (sketched below).
- Inline 3 _make_*_repo() factories into langgraph_runtime(), call
  get_session_factory() once instead of 3 times.
- Add thread_meta upsert in start_run (services.py).
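
A sketch of the factory (app.state attribute names are assumptions):

    from fastapi import HTTPException, Request

    def _require(attr: str, detail: str):
        def getter(request: Request):
            value = getattr(request.app.state, attr, None)
            if value is None:
                raise HTTPException(status_code=503, detail=detail)
            return value
        return getter

    get_event_store = _require("event_store", "event store unavailable")
    get_run_store = _require("run_store", "run store unavailable")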

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(docker): add UV_EXTRAS build arg for optional dependencies

Support installing optional dependency groups (e.g. postgres) at
Docker build time via UV_EXTRAS build arg:
  UV_EXTRAS=postgres docker compose build

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(journal): fix flush, token tracking, and consolidate tests

RunJournal fixes:
- _flush_sync: retain events in buffer when no event loop instead of
  dropping them; worker's finally block flushes via async flush().
- on_llm_end: add tool_calls filter and caller=="lead_agent" guard for
  ai_message events; mark message IDs for dedup with record_llm_usage.
- worker.py: persist completion data (tokens, message count) to RunStore
  in finally block.

Model factory:
- Auto-inject stream_usage=True for BaseChatOpenAI subclasses with
  custom api_base, so usage_metadata is populated in streaming responses
  (see the sketch below).
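
A sketch of the injection point (helper name and exact condition are
assumptions):

    from langchain_openai.chat_models.base import BaseChatOpenAI

    def _maybe_enable_stream_usage(model, api_base: str | None):
        # OpenAI-compatible servers behind a custom api_base often omit usage
        # from streamed chunks unless stream_options.include_usage is
        # requested, which stream_usage=True does.
        if isinstance(model, BaseChatOpenAI) and api_base:
            model.stream_usage = True
        return model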

Test consolidation:
- Delete test_phase2b_integration.py (redundant with existing tests).
- Move DB-backed lifecycle test into test_run_journal.py.
- Add tests for stream_usage injection in test_model_factory.py.
- Clean up executor/task_tool dead journal references.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): widen content type to str|dict in all store backends

Allow event content to be a dict (for structured OpenAI-format messages)
in addition to plain strings. Dict values are JSON-serialized for the DB
backend and deserialized on read; memory and JSONL backends handle dicts
natively. Trace truncation now serializes dicts to JSON before measuring.
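
A sketch of the DB backend's handling (the metadata flag name anticipates
the follow-up fix below and is an assumption):

    import json

    def _encode(content: str | dict, metadata: dict) -> str:
        if isinstance(content, dict):
            metadata["content_is_json"] = True  # hypothetical flag name
            return json.dumps(content, ensure_ascii=False)
        return content

    def _decode(raw: str, metadata: dict) -> str | dict:
        return json.loads(raw) if metadata.get("content_is_json") else raw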

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(events): use metadata flag instead of heuristic for dict content detection

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(converters): add LangChain-to-OpenAI message format converters

Pure functions langchain_to_openai_message, langchain_to_openai_completion,
langchain_messages_to_openai, and _infer_finish_reason for converting
LangChain BaseMessage objects to OpenAI Chat Completions format, used by
RunJournal for event storage. 15 unit tests added.
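
Illustrative behaviour (exact output fields may differ):

    from langchain_core.messages import AIMessage

    # langchain_to_openai_message(AIMessage(content="Hi!"))
    # -> {"role": "assistant", "content": "Hi!"}
    # _infer_finish_reason yields "tool_calls" when the message carries
    # tool_calls, otherwise "stop".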

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(converters): handle empty list content as null, clean up test

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): human_message content uses OpenAI user message format

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(events): ai_message uses OpenAI format, add ai_tool_call message event

- ai_message content now uses {"role": "assistant", "content": "..."} format
- New ai_tool_call message event emitted when lead_agent LLM responds with tool_calls
- ai_tool_call uses langchain_to_openai_message converter for consistent format
- Both events include finish_reason in metadata ("stop" or "tool_calls")

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): add tool_result message event with OpenAI tool message format

Cache tool_call_id from on_tool_start keyed by run_id as fallback for on_tool_end,
then emit a tool_result message event (role=tool, tool_call_id, content) after each
successful tool completion.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(events): summary content uses OpenAI system message format

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): replace llm_start/llm_end with llm_request/llm_response in OpenAI format

Add on_chat_model_start to capture structured prompt messages as llm_request events.
Replace llm_end trace events with llm_response using OpenAI Chat Completions format.
Track llm_call_index to pair request/response events.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(events): add record_middleware method for middleware trace events

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(events): add full run sequence integration test for OpenAI content format

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(events): align message events with checkpoint format and add middleware tag injection

- Message events (ai_message, ai_tool_call, tool_result, human_message) now use
  BaseMessage.model_dump() format, matching LangGraph checkpoint values.messages
- on_tool_end extracts tool_call_id/name/status from ToolMessage objects
- on_tool_error now emits tool_result message events with error status
- record_middleware uses middleware:{tag} event_type and middleware category
- Summarization custom events use middleware:summarize category
- TitleMiddleware injects middleware:title tag via get_config() inheritance
- SummarizationMiddleware model bound with middleware:summarize tag
- Worker writes human_message using HumanMessage.model_dump()

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(threads): switch search endpoint to threads_meta table and sync title

- POST /api/threads/search now queries threads_meta table directly,
  removing the two-phase Store + Checkpointer scan approach
- Add ThreadMetaRepository.search() with metadata/status filters
- Add ThreadMetaRepository.update_display_name() for title sync
- Worker syncs checkpoint title to threads_meta.display_name on run completion
- Map display_name to values.title in search response for API compatibility

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(threads): history endpoint reads messages from event store

- POST /api/threads/{thread_id}/history now combines two data sources:
  checkpointer for checkpoint_id, metadata, title, thread_data;
  event store for messages (complete history, not truncated by summarization)
- Strip internal LangGraph metadata keys from response
- Remove full channel_values serialization in favor of selective fields

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: remove duplicate optional-dependencies header in pyproject.toml

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(middleware): pass tagged config to TitleMiddleware ainvoke call

Without the config, the middleware:title tag was not injected,
causing the LLM response to be recorded as a lead_agent ai_message
in run_events.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve merge conflict in .env.example

Keep both DATABASE_URL (from persistence-scaffold) and WECOM
credentials (from main) after the merge.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(persistence): address review feedback on PR #1851

- Fix naive datetime.now() → datetime.now(UTC) in all ORM models
- Fix seq race condition in DbRunEventStore.put() with FOR UPDATE
  and a UNIQUE(thread_id, seq) constraint (see the sketch below)
- Encapsulate _store access in RunManager.update_run_completion()
- Deduplicate _store.put() logic in RunManager via _persist_to_store()
- Add update_run_completion to RunStore ABC + MemoryRunStore
- Wire follow_up_to_run_id through the full create path
- Add error recovery to RunJournal._flush_sync() lost-event scenario
- Add migration note for search_threads breaking change
- Fix test_checkpointer_none_fix mock to set database=None
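
A sketch of the race-safe seq allocation inside put() (session handling
elided; the ORM import path follows the models/ layout):

    from sqlalchemy import func, select

    from deerflow.persistence.models.run_event import RunEventRow

    async def _next_seq(session, thread_id: str) -> int:
        # Lock the thread's existing rows so two writers can't read the same
        # max(seq); the UNIQUE(thread_id, seq) constraint backstops the rest.
        result = await session.execute(
            select(func.max(RunEventRow.seq))
            .where(RunEventRow.thread_id == thread_id)
            .with_for_update()
        )
        return (result.scalar() or 0) + 1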

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: update uv.lock

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(persistence): address 22 review comments from CodeQL, Copilot, and Code Quality

Bug fixes:
- Sanitize log params to prevent log injection (CodeQL)
- Reset threads_meta.status to idle/error when run completes
- Attach messages only to latest checkpoint in /history response
- Write threads_meta on POST /threads so new threads appear in search

Lint fixes:
- Remove unused imports (journal.py, migrations/env.py, test_converters.py)
- Convert lambda to named function (engine.py, Ruff E731)
- Remove unused logger definitions in repos (Ruff F841)
- Add logging to JSONL decode errors and empty except blocks
- Separate assert side-effects in tests (CodeQL)
- Remove unused local variables in tests (Ruff F841)
- Fix max_trace_content truncation to use byte length, not char length

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: apply ruff format to persistence and runtime files

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Potential fix for pull request finding 'Statement has no effect'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

* refactor(runtime): introduce RunContext to reduce run_agent parameter bloat

Extract checkpointer, store, event_store, run_events_config, thread_meta_repo,
and follow_up_to_run_id into a frozen RunContext dataclass. Add get_run_context()
in deps.py to build the base context from app.state singletons. start_run() uses
dataclasses.replace() to enrich per-run fields before passing ctx to run_agent.
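
A sketch (field types elided to object; defaults assumed):

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class RunContext:
        checkpointer: object | None = None
        store: object | None = None
        event_store: object | None = None
        run_events_config: object | None = None
        thread_meta_repo: object | None = None
        follow_up_to_run_id: str | None = None

    base_ctx = RunContext(checkpointer=..., store=...)    # get_run_context()
    ctx = replace(base_ctx, follow_up_to_run_id="run-1")  # start_run(), per run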

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(gateway): move sanitize_log_param to app/gateway/utils.py

Extract the log-injection sanitizer from routers/threads.py into a shared
utils module and rename to sanitize_log_param (public API). Eliminates the
reverse service → router import in services.py.
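
A sketch of the sanitizer (the exact character policy is an assumption):

    def sanitize_log_param(value: object) -> str:
        # Strip CR/LF so user-controlled values can't forge extra log lines.
        return str(value).replace("\r", "").replace("\n", "")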

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* perf: use SQL aggregation for feedback stats and thread token usage

Replace Python-side counting in FeedbackRepository.aggregate_by_run with
a single SELECT COUNT/SUM query. Add RunStore.aggregate_tokens_by_thread
abstract method with SQL GROUP BY implementation in RunRepository and
Python fallback in MemoryRunStore. Simplify the thread_token_usage
endpoint to delegate to the new method, eliminating the limit=10000
truncation risk.
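
A sketch of the query's shape (column and grouping names are assumptions,
inferred from the per-caller token accumulation in RunJournal):

    from sqlalchemy import func, select

    async def aggregate_tokens_by_thread(session, thread_id: str):
        stmt = (
            select(
                RunRow.caller_type,                            # hypothetical column
                func.sum(RunRow.input_tokens).label("input"),  # hypothetical column
                func.sum(RunRow.output_tokens).label("output"),
            )
            .where(RunRow.thread_id == thread_id)
            .group_by(RunRow.caller_type)
        )
        return (await session.execute(stmt)).all()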

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: annotate DbRunEventStore.put() as low-frequency path

Add docstring clarifying that put() opens a per-call transaction with
FOR UPDATE and should only be used for infrequent writes (currently
just the initial human_message event). High-throughput callers should
use put_batch() instead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(threads): fall back to Store search when ThreadMetaRepository is unavailable

When database.backend=memory (default) or no SQL session factory is
configured, search_threads now queries the LangGraph Store instead of
returning 503. Returns empty list if neither Store nor repo is available.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(persistence): introduce ThreadMetaStore ABC for backend-agnostic thread metadata

Add ThreadMetaStore abstract base class with create/get/search/update/delete
interface. ThreadMetaRepository (SQL) now inherits from it. New
MemoryThreadMetaStore wraps LangGraph BaseStore for memory-mode deployments.
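
An interface sketch from that description (signatures are assumptions):

    import abc

    class ThreadMetaStore(abc.ABC):
        @abc.abstractmethod
        async def create(self, thread_id: str, **fields) -> dict: ...

        @abc.abstractmethod
        async def get(self, thread_id: str) -> dict | None: ...

        @abc.abstractmethod
        async def search(self, *, filters=None, limit=None) -> list[dict]: ...

        @abc.abstractmethod
        async def update(self, thread_id: str, **fields) -> None: ...

        @abc.abstractmethod
        async def delete(self, thread_id: str) -> None: ...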

deps.py now always provides a non-None thread_meta_repo, eliminating all
`if thread_meta_repo is not None` guards in services.py, worker.py, and
routers/threads.py. search_threads no longer needs a Store fallback branch.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(history): read messages from checkpointer instead of RunEventStore

The /history endpoint now reads messages directly from the
checkpointer's channel_values (the authoritative source) instead of
querying RunEventStore.list_messages(). The RunEventStore API is
preserved for other consumers.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(persistence): address new Copilot review comments

- feedback.py: validate thread_id/run_id before deleting feedback
- jsonl.py: add path traversal protection with ID validation
- run_repo.py: parse `before` to datetime for PostgreSQL compat
- thread_meta_repo.py: fix pagination when metadata filter is active
- database_config.py: use resolve_path for sqlite_dir consistency

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Implement skill self-evolution and skill_manage flow (#1874)

* chore: ignore .worktrees directory

* Add skill_manage self-evolution flow

* Fix CI regressions for skill_manage

* Address PR review feedback for skill evolution

* fix(skill-evolution): preserve history on delete

* fix(skill-evolution): tighten scanner fallbacks

* docs: add skill_manage e2e evidence screenshot

* fix(skill-manage): avoid blocking fs ops in session runtime

---------

Co-authored-by: Willem Jiang <willem.jiang@gmail.com>

* fix(config): resolve sqlite_dir relative to CWD, not Paths.base_dir

resolve_path() resolves relative to Paths.base_dir (.deer-flow),
which double-nested the path to .deer-flow/.deer-flow/data/app.db.
Use Path.resolve() (CWD-relative) instead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Feature/feishu receive file (#1608)

* feat(feishu): add channel file materialization hook for inbound messages

- Introduce Channel.receive_file(msg, thread_id) as a base method for file materialization; default is no-op.
- Implement FeishuChannel.receive_file to download files/images from Feishu messages, save to sandbox, and inject virtual paths into msg.text.
- Update ChannelManager to call receive_file for any channel if msg.files is present, enabling downstream model access to user-uploaded files.
- No impact on Slack/Telegram or other channels (they inherit the default no-op).

* style(backend): format code with ruff for lint compliance

- Auto-formatted packages/harness/deerflow/agents/factory.py and tests/test_create_deerflow_agent.py using `ruff format`
- Ensured both files conform to project linting standards
- Fixes CI lint check failures caused by code style issues

* fix(feishu): handle file write operation asynchronously to prevent blocking

* fix(feishu): rename GetMessageResourceRequest to _GetMessageResourceRequest and remove redundant code

* test(feishu): add tests for receive_file method and placeholder replacement

* fix(manager): remove unnecessary type casting for channel retrieval

* fix(feishu): update logging messages to reflect resource handling instead of image

* fix(feishu): sanitize filename by replacing invalid characters in file uploads

* fix(feishu): improve filename sanitization and reorder image key handling in message processing

* fix(feishu): add thread lock to prevent filename conflicts during file downloads

* fix(test): correct bad merge in test_feishu_parser.py

* chore: run ruff and apply formatting cleanup
fix(feishu): preserve rich-text attachment order and improve fallback filename handling

* fix(docker): restore gateway env vars and fix langgraph empty arg issue (#1915)

Two production docker-compose.yaml bugs prevent `make up` from working:

1. Gateway missing DEER_FLOW_CONFIG_PATH and DEER_FLOW_EXTENSIONS_CONFIG_PATH
   environment overrides. Added in fb2d99f (#1836) but accidentally reverted
   by ca2fb95 (#1847). Without them, gateway reads host paths from .env via
   env_file, causing FileNotFoundError inside the container.

2. Langgraph command fails when LANGGRAPH_ALLOW_BLOCKING is unset (default).
   Empty $${allow_blocking} inserts a bare space between flags, causing
   ' --no-reload' to be parsed as an unexpected extra argument. Fix by building
   the args string first and conditionally appending --allow-blocking.

Co-authored-by: cooper <cooperfu@tencent.com>

* fix(frontend): resolve invalid HTML nesting and tabnabbing vulnerabilities (#1904)

* fix(frontend): resolve invalid HTML nesting and tabnabbing vulnerabilities

Fix `<button>` inside `<a>` invalid HTML in artifact components and add
missing `noopener,noreferrer` to `window.open` calls to prevent reverse
tabnabbing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(frontend): address Copilot review on tabnabbing and double-tab-open

Remove redundant parent onClick on web_fetch ChainOfThoughtStep to
prevent opening two tabs on link click, and explicitly null out
window.opener after window.open() for defensive tabnabbing hardening.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(persistence): organize entities into per-entity directories

Restructure the persistence layer from horizontal "models/ + repositories/"
split into vertical entity-aligned directories. Each entity (thread_meta,
run, feedback) now owns its ORM model, abstract interface (where applicable),
and concrete implementations under a single directory with an aggregating
__init__.py for one-line imports.

Layout:
  persistence/thread_meta/{base,model,sql,memory}.py
  persistence/run/{model,sql}.py
  persistence/feedback/{model,sql}.py

models/__init__.py is kept as a facade so Alembic autogenerate continues to
discover all ORM tables via Base.metadata. RunEventRow remains under
models/run_event.py because its storage implementation lives in
runtime/events/store/db.py and has no matching repository directory.

The repositories/ directory is removed entirely. All call sites in
gateway/deps.py and tests are updated to import from the new entity
packages, e.g.:

    from deerflow.persistence.thread_meta import ThreadMetaRepository
    from deerflow.persistence.run import RunRepository
    from deerflow.persistence.feedback import FeedbackRepository

Full test suite passes (1690 passed, 14 skipped).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(gateway): sync thread rename and delete through ThreadMetaStore

The POST /threads/{id}/state endpoint previously synced title changes
only to the LangGraph Store via _store_upsert. In sqlite mode the search
endpoint reads from the ThreadMetaRepository SQL table, so renames never
appeared in /threads/search until the next agent run completed (worker.py
syncs title from checkpoint to thread_meta in its finally block).

Likewise the DELETE /threads/{id} endpoint cleaned up the filesystem,
Store, and checkpointer but left the threads_meta row orphaned in sqlite,
so deleted threads kept appearing in /threads/search.

Fix both endpoints by routing through the ThreadMetaStore abstraction
which already has the correct sqlite/memory implementations wired up by
deps.py. The rename path now calls update_display_name() and the delete
path calls delete() — both work uniformly across backends.

Verified end-to-end with curl in gateway mode against sqlite backend.
Existing test suite (1690 passed) and focused router/repo tests pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(gateway): route all thread metadata access through ThreadMetaStore

Following the rename/delete bug fix in PR1, migrate the remaining direct
LangGraph Store reads/writes in the threads router and services to the
ThreadMetaStore abstraction so that the sqlite and memory backends behave
identically and the legacy dual-write paths can be removed.

Migrated endpoints (threads.py):
- create_thread: idempotency check + write now use thread_meta_repo.get/create
  instead of dual-writing the LangGraph Store and the SQL row.
- get_thread: reads from thread_meta_repo.get; the checkpoint-only fallback
  for legacy threads is preserved.
- patch_thread: replaced _store_get/_store_put with thread_meta_repo.update_metadata.
- delete_thread_data: dropped the legacy store.adelete; thread_meta_repo.delete
  already covers it.

Removed dead code (services.py):
- _upsert_thread_in_store — redundant with the immediately following
  thread_meta_repo.create() call.
- _sync_thread_title_after_run — worker.py's finally block already syncs
  the title via thread_meta_repo.update_display_name() after each run.

Removed dead code (threads.py):
- _store_get / _store_put / _store_upsert helpers (no remaining callers).
- THREADS_NS constant.
- get_store import (router no longer touches the LangGraph Store directly).

New abstract method:
- ThreadMetaStore.update_metadata(thread_id, metadata) merges metadata into
  the thread's metadata field. Implemented in both ThreadMetaRepository (SQL,
  read-modify-write inside one session) and MemoryThreadMetaStore. Three new
  unit tests cover merge / empty / nonexistent behaviour.

Net change: -134 lines. Full test suite: 1693 passed, 14 skipped.
Verified end-to-end with curl in gateway mode against sqlite backend
(create / patch / get / rename / search / delete).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Co-authored-by: DanielWalnut <45447813+hetaoBackend@users.noreply.github.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: JilongSun <965640067@qq.com>
Co-authored-by: jie <49781832+stan-fu@users.noreply.github.com>
Co-authored-by: cooper <cooperfu@tencent.com>
Co-authored-by: yangzheli <43645580+yangzheli@users.noreply.github.com>
2026-04-26 11:05:47 +08:00


"""Tests for RunEventStore contract across all backends.
Uses a helper to create the store for each backend type.
Memory tests run directly; DB and JSONL tests create stores inside each test.
"""
import pytest

from deerflow.runtime.events.store.memory import MemoryRunEventStore


@pytest.fixture
def store():
    return MemoryRunEventStore()


# -- Basic write and query --


class TestPutAndSeq:
    @pytest.mark.anyio
    async def test_put_returns_dict_with_seq(self, store):
        record = await store.put(
            thread_id="t1", run_id="r1", event_type="human_message", category="message", content="hello"
        )
        assert "seq" in record
        assert record["seq"] == 1
        assert record["thread_id"] == "t1"
        assert record["run_id"] == "r1"
        assert record["event_type"] == "human_message"
        assert record["category"] == "message"
        assert record["content"] == "hello"
        assert "created_at" in record

    @pytest.mark.anyio
    async def test_seq_strictly_increasing_same_thread(self, store):
        r1 = await store.put(thread_id="t1", run_id="r1", event_type="human_message", category="message")
        r2 = await store.put(thread_id="t1", run_id="r1", event_type="ai_message", category="message")
        r3 = await store.put(thread_id="t1", run_id="r1", event_type="llm_end", category="trace")
        assert r1["seq"] == 1
        assert r2["seq"] == 2
        assert r3["seq"] == 3

    @pytest.mark.anyio
    async def test_seq_independent_across_threads(self, store):
        r1 = await store.put(thread_id="t1", run_id="r1", event_type="human_message", category="message")
        r2 = await store.put(thread_id="t2", run_id="r2", event_type="human_message", category="message")
        assert r1["seq"] == 1
        assert r2["seq"] == 1

    @pytest.mark.anyio
    async def test_put_respects_provided_created_at(self, store):
        ts = "2024-06-01T12:00:00+00:00"
        record = await store.put(
            thread_id="t1", run_id="r1", event_type="human_message", category="message", created_at=ts
        )
        assert record["created_at"] == ts

    @pytest.mark.anyio
    async def test_put_metadata_preserved(self, store):
        meta = {"model": "gpt-4", "tokens": 100}
        record = await store.put(
            thread_id="t1", run_id="r1", event_type="llm_end", category="trace", metadata=meta
        )
        assert record["metadata"] == meta


# -- list_messages --


class TestListMessages:
    @pytest.mark.anyio
    async def test_only_returns_message_category(self, store):
        await store.put(thread_id="t1", run_id="r1", event_type="human_message", category="message")
        await store.put(thread_id="t1", run_id="r1", event_type="llm_end", category="trace")
        await store.put(thread_id="t1", run_id="r1", event_type="run_start", category="lifecycle")
        messages = await store.list_messages("t1")
        assert len(messages) == 1
        assert messages[0]["category"] == "message"

    @pytest.mark.anyio
    async def test_ascending_seq_order(self, store):
        await store.put(
            thread_id="t1", run_id="r1", event_type="human_message", category="message", content="first"
        )
        await store.put(
            thread_id="t1", run_id="r1", event_type="ai_message", category="message", content="second"
        )
        await store.put(
            thread_id="t1", run_id="r1", event_type="human_message", category="message", content="third"
        )
        messages = await store.list_messages("t1")
        seqs = [m["seq"] for m in messages]
        assert seqs == sorted(seqs)

    @pytest.mark.anyio
    async def test_before_seq_pagination(self, store):
        for i in range(10):
            await store.put(
                thread_id="t1", run_id="r1", event_type="human_message", category="message", content=str(i)
            )
        messages = await store.list_messages("t1", before_seq=6, limit=3)
        assert len(messages) == 3
        assert [m["seq"] for m in messages] == [3, 4, 5]

    @pytest.mark.anyio
    async def test_after_seq_pagination(self, store):
        for i in range(10):
            await store.put(
                thread_id="t1", run_id="r1", event_type="human_message", category="message", content=str(i)
            )
        messages = await store.list_messages("t1", after_seq=7, limit=3)
        assert len(messages) == 3
        assert [m["seq"] for m in messages] == [8, 9, 10]

    @pytest.mark.anyio
    async def test_limit_restricts_count(self, store):
        for _ in range(20):
            await store.put(thread_id="t1", run_id="r1", event_type="human_message", category="message")
        messages = await store.list_messages("t1", limit=5)
        assert len(messages) == 5

    @pytest.mark.anyio
    async def test_cross_run_unified_ordering(self, store):
        await store.put(thread_id="t1", run_id="r1", event_type="human_message", category="message")
        await store.put(thread_id="t1", run_id="r1", event_type="ai_message", category="message")
        await store.put(thread_id="t1", run_id="r2", event_type="human_message", category="message")
        await store.put(thread_id="t1", run_id="r2", event_type="ai_message", category="message")
        messages = await store.list_messages("t1")
        assert [m["seq"] for m in messages] == [1, 2, 3, 4]
        assert messages[0]["run_id"] == "r1"
        assert messages[2]["run_id"] == "r2"

    @pytest.mark.anyio
    async def test_default_returns_latest(self, store):
        for _ in range(10):
            await store.put(thread_id="t1", run_id="r1", event_type="human_message", category="message")
        messages = await store.list_messages("t1", limit=3)
        assert [m["seq"] for m in messages] == [8, 9, 10]


# -- list_events --


class TestListEvents:
    @pytest.mark.anyio
    async def test_returns_all_categories_for_run(self, store):
        await store.put(thread_id="t1", run_id="r1", event_type="human_message", category="message")
        await store.put(thread_id="t1", run_id="r1", event_type="llm_end", category="trace")
        await store.put(thread_id="t1", run_id="r1", event_type="run_start", category="lifecycle")
        events = await store.list_events("t1", "r1")
        assert len(events) == 3

    @pytest.mark.anyio
    async def test_event_types_filter(self, store):
        await store.put(thread_id="t1", run_id="r1", event_type="llm_start", category="trace")
        await store.put(thread_id="t1", run_id="r1", event_type="llm_end", category="trace")
        await store.put(thread_id="t1", run_id="r1", event_type="tool_start", category="trace")
        events = await store.list_events("t1", "r1", event_types=["llm_end"])
        assert len(events) == 1
        assert events[0]["event_type"] == "llm_end"

    @pytest.mark.anyio
    async def test_only_returns_specified_run(self, store):
        await store.put(thread_id="t1", run_id="r1", event_type="llm_end", category="trace")
        await store.put(thread_id="t1", run_id="r2", event_type="llm_end", category="trace")
        events = await store.list_events("t1", "r1")
        assert len(events) == 1
        assert events[0]["run_id"] == "r1"


# -- list_messages_by_run --


class TestListMessagesByRun:
    @pytest.mark.anyio
    async def test_only_messages_for_specified_run(self, store):
        await store.put(thread_id="t1", run_id="r1", event_type="human_message", category="message")
        await store.put(thread_id="t1", run_id="r1", event_type="llm_end", category="trace")
        await store.put(thread_id="t1", run_id="r2", event_type="human_message", category="message")
        messages = await store.list_messages_by_run("t1", "r1")
        assert len(messages) == 1
        assert messages[0]["run_id"] == "r1"
        assert messages[0]["category"] == "message"


# -- count_messages --


class TestCountMessages:
    @pytest.mark.anyio
    async def test_counts_only_message_category(self, store):
        await store.put(thread_id="t1", run_id="r1", event_type="human_message", category="message")
        await store.put(thread_id="t1", run_id="r1", event_type="ai_message", category="message")
        await store.put(thread_id="t1", run_id="r1", event_type="llm_end", category="trace")
        assert await store.count_messages("t1") == 2


# -- put_batch --


class TestPutBatch:
    @pytest.mark.anyio
    async def test_batch_assigns_seq(self, store):
        events = [
            {"thread_id": "t1", "run_id": "r1", "event_type": "human_message", "category": "message", "content": "a"},
            {"thread_id": "t1", "run_id": "r1", "event_type": "ai_message", "category": "message", "content": "b"},
            {"thread_id": "t1", "run_id": "r1", "event_type": "llm_end", "category": "trace"},
        ]
        results = await store.put_batch(events)
        assert len(results) == 3
        assert all("seq" in r for r in results)

    @pytest.mark.anyio
    async def test_batch_seq_strictly_increasing(self, store):
        events = [
            {"thread_id": "t1", "run_id": "r1", "event_type": "human_message", "category": "message"},
            {"thread_id": "t1", "run_id": "r1", "event_type": "ai_message", "category": "message"},
        ]
        results = await store.put_batch(events)
        assert results[0]["seq"] == 1
        assert results[1]["seq"] == 2


# -- delete --


class TestDelete:
    @pytest.mark.anyio
    async def test_delete_by_thread(self, store):
        await store.put(thread_id="t1", run_id="r1", event_type="human_message", category="message")
        await store.put(thread_id="t1", run_id="r1", event_type="ai_message", category="message")
        await store.put(thread_id="t1", run_id="r2", event_type="llm_end", category="trace")
        count = await store.delete_by_thread("t1")
        assert count == 3
        assert await store.list_messages("t1") == []
        assert await store.count_messages("t1") == 0

    @pytest.mark.anyio
    async def test_delete_by_run(self, store):
        await store.put(thread_id="t1", run_id="r1", event_type="human_message", category="message")
        await store.put(thread_id="t1", run_id="r2", event_type="human_message", category="message")
        await store.put(thread_id="t1", run_id="r2", event_type="llm_end", category="trace")
        count = await store.delete_by_run("t1", "r2")
        assert count == 2
        messages = await store.list_messages("t1")
        assert len(messages) == 1
        assert messages[0]["run_id"] == "r1"

    @pytest.mark.anyio
    async def test_delete_nonexistent_thread_returns_zero(self, store):
        assert await store.delete_by_thread("nope") == 0

    @pytest.mark.anyio
    async def test_delete_nonexistent_run_returns_zero(self, store):
        await store.put(thread_id="t1", run_id="r1", event_type="human_message", category="message")
        assert await store.delete_by_run("t1", "nope") == 0

    @pytest.mark.anyio
    async def test_delete_nonexistent_thread_for_run_returns_zero(self, store):
        assert await store.delete_by_run("nope", "r1") == 0


# -- Edge cases --


class TestEdgeCases:
    @pytest.mark.anyio
    async def test_empty_thread_list_messages(self, store):
        assert await store.list_messages("empty") == []

    @pytest.mark.anyio
    async def test_empty_run_list_events(self, store):
        assert await store.list_events("empty", "r1") == []

    @pytest.mark.anyio
    async def test_empty_thread_count_messages(self, store):
        assert await store.count_messages("empty") == 0


# -- DB-specific tests --


class TestDbRunEventStore:
    """Tests for DbRunEventStore with temp SQLite."""

    @pytest.mark.anyio
    async def test_basic_crud(self, tmp_path):
        from deerflow.persistence.engine import close_engine, get_session_factory, init_engine
        from deerflow.runtime.events.store.db import DbRunEventStore

        url = f"sqlite+aiosqlite:///{tmp_path / 'test.db'}"
        await init_engine("sqlite", url=url, sqlite_dir=str(tmp_path))
        s = DbRunEventStore(get_session_factory())
        r = await s.put(
            thread_id="t1", run_id="r1", event_type="human_message", category="message", content="hi"
        )
        assert r["seq"] == 1
        r2 = await s.put(
            thread_id="t1", run_id="r1", event_type="ai_message", category="message", content="hello"
        )
        assert r2["seq"] == 2
        messages = await s.list_messages("t1")
        assert len(messages) == 2
        count = await s.count_messages("t1")
        assert count == 2
        await close_engine()

    @pytest.mark.anyio
    async def test_trace_content_truncation(self, tmp_path):
        from deerflow.persistence.engine import close_engine, get_session_factory, init_engine
        from deerflow.runtime.events.store.db import DbRunEventStore

        url = f"sqlite+aiosqlite:///{tmp_path / 'test.db'}"
        await init_engine("sqlite", url=url, sqlite_dir=str(tmp_path))
        s = DbRunEventStore(get_session_factory(), max_trace_content=100)
        long = "x" * 200
        r = await s.put(
            thread_id="t1", run_id="r1", event_type="llm_end", category="trace", content=long
        )
        assert len(r["content"]) == 100
        assert r["metadata"].get("content_truncated") is True
        # message content NOT truncated
        m = await s.put(
            thread_id="t1", run_id="r1", event_type="ai_message", category="message", content=long
        )
        assert len(m["content"]) == 200
        await close_engine()

    @pytest.mark.anyio
    async def test_pagination(self, tmp_path):
        from deerflow.persistence.engine import close_engine, get_session_factory, init_engine
        from deerflow.runtime.events.store.db import DbRunEventStore

        url = f"sqlite+aiosqlite:///{tmp_path / 'test.db'}"
        await init_engine("sqlite", url=url, sqlite_dir=str(tmp_path))
        s = DbRunEventStore(get_session_factory())
        for i in range(10):
            await s.put(
                thread_id="t1", run_id="r1", event_type="human_message", category="message", content=str(i)
            )
        # before_seq
        msgs = await s.list_messages("t1", before_seq=6, limit=3)
        assert [m["seq"] for m in msgs] == [3, 4, 5]
        # after_seq
        msgs = await s.list_messages("t1", after_seq=7, limit=3)
        assert [m["seq"] for m in msgs] == [8, 9, 10]
        # default (latest)
        msgs = await s.list_messages("t1", limit=3)
        assert [m["seq"] for m in msgs] == [8, 9, 10]
        await close_engine()

    @pytest.mark.anyio
    async def test_delete(self, tmp_path):
        from deerflow.persistence.engine import close_engine, get_session_factory, init_engine
        from deerflow.runtime.events.store.db import DbRunEventStore

        url = f"sqlite+aiosqlite:///{tmp_path / 'test.db'}"
        await init_engine("sqlite", url=url, sqlite_dir=str(tmp_path))
        s = DbRunEventStore(get_session_factory())
        await s.put(thread_id="t1", run_id="r1", event_type="human_message", category="message")
        await s.put(thread_id="t1", run_id="r2", event_type="ai_message", category="message")
        c = await s.delete_by_run("t1", "r2")
        assert c == 1
        assert await s.count_messages("t1") == 1
        c = await s.delete_by_thread("t1")
        assert c == 1
        assert await s.count_messages("t1") == 0
        await close_engine()

    @pytest.mark.anyio
    async def test_put_batch_seq_continuity(self, tmp_path):
        """Batch write produces continuous seq values with no gaps."""
        from deerflow.persistence.engine import close_engine, get_session_factory, init_engine
        from deerflow.runtime.events.store.db import DbRunEventStore

        url = f"sqlite+aiosqlite:///{tmp_path / 'test.db'}"
        await init_engine("sqlite", url=url, sqlite_dir=str(tmp_path))
        s = DbRunEventStore(get_session_factory())
        events = [
            {"thread_id": "t1", "run_id": "r1", "event_type": "trace", "category": "trace"}
            for _ in range(50)
        ]
        results = await s.put_batch(events)
        seqs = [r["seq"] for r in results]
        assert seqs == list(range(1, 51))
        await close_engine()


# -- Factory tests --


class TestMakeRunEventStore:
    """Tests for the make_run_event_store factory function."""

    @pytest.mark.anyio
    async def test_memory_backend_default(self):
        from deerflow.runtime.events.store import make_run_event_store

        store = make_run_event_store(None)
        assert type(store).__name__ == "MemoryRunEventStore"

    @pytest.mark.anyio
    async def test_memory_backend_explicit(self):
        from unittest.mock import MagicMock

        from deerflow.runtime.events.store import make_run_event_store

        config = MagicMock()
        config.backend = "memory"
        store = make_run_event_store(config)
        assert type(store).__name__ == "MemoryRunEventStore"

    @pytest.mark.anyio
    async def test_db_backend_with_engine(self, tmp_path):
        from unittest.mock import MagicMock

        from deerflow.persistence.engine import close_engine, init_engine
        from deerflow.runtime.events.store import make_run_event_store

        url = f"sqlite+aiosqlite:///{tmp_path / 'test.db'}"
        await init_engine("sqlite", url=url, sqlite_dir=str(tmp_path))
        config = MagicMock()
        config.backend = "db"
        config.max_trace_content = 10240
        store = make_run_event_store(config)
        assert type(store).__name__ == "DbRunEventStore"
        await close_engine()

    @pytest.mark.anyio
    async def test_db_backend_no_engine_falls_back(self):
        """db backend without engine falls back to memory."""
        from unittest.mock import MagicMock

        from deerflow.persistence.engine import close_engine, init_engine
        from deerflow.runtime.events.store import make_run_event_store

        await init_engine("memory")  # no engine created
        config = MagicMock()
        config.backend = "db"
        store = make_run_event_store(config)
        assert type(store).__name__ == "MemoryRunEventStore"
        await close_engine()

    @pytest.mark.anyio
    async def test_jsonl_backend(self):
        from unittest.mock import MagicMock

        from deerflow.runtime.events.store import make_run_event_store

        config = MagicMock()
        config.backend = "jsonl"
        store = make_run_event_store(config)
        assert type(store).__name__ == "JsonlRunEventStore"

    @pytest.mark.anyio
    async def test_unknown_backend_raises(self):
        from unittest.mock import MagicMock

        from deerflow.runtime.events.store import make_run_event_store

        config = MagicMock()
        config.backend = "redis"
        with pytest.raises(ValueError, match="Unknown"):
            make_run_event_store(config)


# -- JSONL-specific tests --


class TestJsonlRunEventStore:
    @pytest.mark.anyio
    async def test_basic_crud(self, tmp_path):
        from deerflow.runtime.events.store.jsonl import JsonlRunEventStore

        s = JsonlRunEventStore(base_dir=tmp_path / "jsonl")
        r = await s.put(
            thread_id="t1", run_id="r1", event_type="human_message", category="message", content="hi"
        )
        assert r["seq"] == 1
        messages = await s.list_messages("t1")
        assert len(messages) == 1

    @pytest.mark.anyio
    async def test_file_at_correct_path(self, tmp_path):
        from deerflow.runtime.events.store.jsonl import JsonlRunEventStore

        s = JsonlRunEventStore(base_dir=tmp_path / "jsonl")
        await s.put(thread_id="t1", run_id="r1", event_type="human_message", category="message")
        assert (tmp_path / "jsonl" / "threads" / "t1" / "runs" / "r1.jsonl").exists()

    @pytest.mark.anyio
    async def test_cross_run_messages(self, tmp_path):
        from deerflow.runtime.events.store.jsonl import JsonlRunEventStore

        s = JsonlRunEventStore(base_dir=tmp_path / "jsonl")
        await s.put(thread_id="t1", run_id="r1", event_type="human_message", category="message")
        await s.put(thread_id="t1", run_id="r2", event_type="human_message", category="message")
        messages = await s.list_messages("t1")
        assert len(messages) == 2
        assert [m["seq"] for m in messages] == [1, 2]

    @pytest.mark.anyio
    async def test_delete_by_run(self, tmp_path):
        from deerflow.runtime.events.store.jsonl import JsonlRunEventStore

        s = JsonlRunEventStore(base_dir=tmp_path / "jsonl")
        await s.put(thread_id="t1", run_id="r1", event_type="human_message", category="message")
        await s.put(thread_id="t1", run_id="r2", event_type="human_message", category="message")
        c = await s.delete_by_run("t1", "r2")
        assert c == 1
        assert not (tmp_path / "jsonl" / "threads" / "t1" / "runs" / "r2.jsonl").exists()
        assert await s.count_messages("t1") == 1