Mirror of https://github.com/bytedance/deer-flow.git, synced 2026-04-27 04:08:30 +00:00
* feat(persistence): add SQLAlchemy 2.0 async ORM scaffold
Introduce a unified database configuration (DatabaseConfig) that
controls both the LangGraph checkpointer and the DeerFlow application
persistence layer from a single `database:` config section.
New modules:
- deerflow.config.database_config — Pydantic config with memory/sqlite/postgres backends
- deerflow.persistence — async engine lifecycle, DeclarativeBase with to_dict mixin, Alembic skeleton
- deerflow.runtime.runs.store — RunStore ABC + MemoryRunStore implementation
Gateway integration initializes/tears down the persistence engine in
the existing langgraph_runtime() context manager. Legacy checkpointer
config is preserved for backward compatibility.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(persistence): add RunEventStore ABC + MemoryRunEventStore
Phase 2-A prerequisite for event storage: adds the unified run event
stream interface (RunEventStore) with an in-memory implementation,
RunEventsConfig, gateway integration, and comprehensive tests (27 cases).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(persistence): add ORM models, repositories, DB/JSONL event stores, RunJournal, and API endpoints
Phase 2-B: run persistence + event storage + token tracking.
- ORM models: RunRow (with token fields), ThreadMetaRow, RunEventRow
- RunRepository implements RunStore ABC via SQLAlchemy ORM
- ThreadMetaRepository with owner access control
- DbRunEventStore with trace content truncation and cursor pagination
- JsonlRunEventStore with per-run files and seq recovery from disk
- RunJournal (BaseCallbackHandler) captures LLM/tool/lifecycle events,
accumulates token usage by caller type, buffers and flushes to store
- RunManager now accepts optional RunStore for persistent backing
- Worker creates RunJournal, writes human_message, injects callbacks
- Gateway deps use factory functions (RunRepository when DB available)
- New endpoints: messages, run messages, run events, token-usage
- ThreadCreateRequest gains assistant_id field
- 92 tests pass (33 new), zero regressions
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(persistence): add user feedback + follow-up run association
Phase 2-C: feedback and follow-up tracking.
- FeedbackRow ORM model (rating +1/-1, optional message_id, comment)
- FeedbackRepository with CRUD, list_by_run/thread, aggregate stats
- Feedback API endpoints: create, list, stats, delete
- follow_up_to_run_id in RunCreateRequest (explicit or auto-detected
from latest successful run on the thread)
- Worker writes follow_up_to_run_id into human_message event metadata
- Gateway deps: feedback_repo factory + getter
- 17 new tests (14 FeedbackRepository + 3 follow-up association)
- 109 total tests pass, zero regressions
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* test+config: comprehensive Phase 2 test coverage + deprecate checkpointer config
- config.example.yaml: deprecate standalone checkpointer section, activate
unified database:sqlite as default (drives both checkpointer + app data)
- New: test_thread_meta_repo.py (14 tests) — full ThreadMetaRepository coverage
including check_access owner logic, list_by_owner pagination
- Extended test_run_repository.py (+4 tests) — completion preserves fields,
list ordering desc, limit, owner_none returns all
- Extended test_run_journal.py (+8 tests) — on_chain_error, track_tokens=false,
middleware no ai_message, unknown caller tokens, convenience fields,
tool_error, non-summarization custom event
- Extended test_run_event_store.py (+7 tests) — DB batch seq continuity,
make_run_event_store factory (memory/db/jsonl/fallback/unknown)
- Extended test_phase2b_integration.py (+4 tests) — create_or_reject persists,
follow-up metadata, summarization in history, full DB-backed lifecycle
- Fixed DB integration test to use proper fake objects (not MagicMock)
for JSON-serializable metadata
- 157 total Phase 2 tests pass, zero regressions
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* config: move default sqlite_dir to .deer-flow/data
Keep SQLite databases alongside other DeerFlow-managed data
(threads, memory) under the .deer-flow/ directory instead of a
top-level ./data folder.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(persistence): remove UTFJSON, use engine-level json_serializer + datetime.now()
- Replace custom UTFJSON type with standard sqlalchemy.JSON in all ORM
models. Add json_serializer=json.dumps(ensure_ascii=False) to all
create_async_engine calls so non-ASCII text (Chinese etc.) is stored
as-is in both SQLite and Postgres.
- Change ORM datetime defaults from datetime.now(UTC) to datetime.now(),
remove UTC imports.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
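The engine-level serializer amounts to handing SQLAlchemy a `json.dumps` variant with `ensure_ascii=False`. A minimal sketch of the serializer itself (the `create_async_engine(json_serializer=...)` wiring is omitted):

```python
import json
from functools import partial

# Candidate for create_async_engine(json_serializer=...): ensure_ascii=False
# stores Chinese and other non-ASCII text as-is instead of \uXXXX escapes.
json_serializer = partial(json.dumps, ensure_ascii=False)
```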
* refactor(gateway): simplify deps.py with getter factory + inline repos
- Replace 6 identical getter functions with _require() factory.
- Inline 3 _make_*_repo() factories into langgraph_runtime(), call
get_session_factory() once instead of 3 times.
- Add thread_meta upsert in start_run (services.py).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(docker): add UV_EXTRAS build arg for optional dependencies
Support installing optional dependency groups (e.g. postgres) at
Docker build time via UV_EXTRAS build arg:
UV_EXTRAS=postgres docker compose build
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(journal): fix flush, token tracking, and consolidate tests
RunJournal fixes:
- _flush_sync: retain events in buffer when no event loop instead of
dropping them; worker's finally block flushes via async flush().
- on_llm_end: add tool_calls filter and caller=="lead_agent" guard for
ai_message events; mark message IDs for dedup with record_llm_usage.
- worker.py: persist completion data (tokens, message count) to RunStore
in finally block.
Model factory:
- Auto-inject stream_usage=True for BaseChatOpenAI subclasses with
custom api_base, so usage_metadata is populated in streaming responses.
Test consolidation:
- Delete test_phase2b_integration.py (redundant with existing tests).
- Move DB-backed lifecycle test into test_run_journal.py.
- Add tests for stream_usage injection in test_model_factory.py.
- Clean up executor/task_tool dead journal references.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(events): widen content type to str|dict in all store backends
Allow event content to be a dict (for structured OpenAI-format messages)
in addition to plain strings. Dict values are JSON-serialized for the DB
backend and deserialized on read; memory and JSONL backends handle dicts
natively. Trace truncation now serializes dicts to JSON before measuring.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(events): use metadata flag instead of heuristic for dict content detection
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(converters): add LangChain-to-OpenAI message format converters
Pure functions langchain_to_openai_message, langchain_to_openai_completion,
langchain_messages_to_openai, and _infer_finish_reason for converting
LangChain BaseMessage objects to OpenAI Chat Completions format, used by
RunJournal for event storage. 15 unit tests added.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(converters): handle empty list content as null, clean up test
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
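The conversion these two commits describe can be sketched as a pure role-mapping function. This is a deliberately simplified stand-in operating on a `(type, content)` pair; real `BaseMessage` objects also carry `tool_calls`, ids, and names, which the actual `langchain_to_openai_message` would preserve:

```python
# LangChain message types → OpenAI Chat Completions roles
_ROLE_MAP = {"human": "user", "ai": "assistant", "system": "system", "tool": "tool"}


def to_openai_message(msg_type: str, content) -> dict:
    """Map a LangChain (type, content) pair to OpenAI chat format (sketch)."""
    if content == []:  # empty list content is normalized to null
        content = None
    return {"role": _ROLE_MAP.get(msg_type, "user"), "content": content}
```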
* feat(events): human_message content uses OpenAI user message format
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat(events): ai_message uses OpenAI format, add ai_tool_call message event
- ai_message content now uses {"role": "assistant", "content": "..."} format
- New ai_tool_call message event emitted when lead_agent LLM responds with tool_calls
- ai_tool_call uses langchain_to_openai_message converter for consistent format
- Both events include finish_reason in metadata ("stop" or "tool_calls")
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(events): add tool_result message event with OpenAI tool message format
Cache tool_call_id from on_tool_start keyed by run_id as fallback for on_tool_end,
then emit a tool_result message event (role=tool, tool_call_id, content) after each
successful tool completion.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
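The caching scheme above — remember `tool_call_id` at `on_tool_start` keyed by `run_id`, consume it at `on_tool_end` — can be sketched in isolation. Class and method names are illustrative, not RunJournal's real API:

```python
class ToolCallCache:
    """Cache tool_call_id from on_tool_start, keyed by run_id, as a fallback
    for on_tool_end (hypothetical simplification of the RunJournal logic)."""

    def __init__(self) -> None:
        self._by_run: dict[str, str] = {}

    def on_tool_start(self, run_id: str, tool_call_id: str) -> None:
        self._by_run[run_id] = tool_call_id

    def on_tool_end(self, run_id: str, name: str, output: str) -> dict:
        # pop() consumes the cached id so a stale entry can't leak to a later call
        tool_call_id = self._by_run.pop(run_id, None)
        return {"role": "tool", "tool_call_id": tool_call_id,
                "content": output, "name": name}
```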
* feat(events): summary content uses OpenAI system message format
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(events): replace llm_start/llm_end with llm_request/llm_response in OpenAI format
Add on_chat_model_start to capture structured prompt messages as llm_request events.
Replace llm_end trace events with llm_response using OpenAI Chat Completions format.
Track llm_call_index to pair request/response events.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(events): add record_middleware method for middleware trace events
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* test(events): add full run sequence integration test for OpenAI content format
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat(events): align message events with checkpoint format and add middleware tag injection
- Message events (ai_message, ai_tool_call, tool_result, human_message) now use
BaseMessage.model_dump() format, matching LangGraph checkpoint values.messages
- on_tool_end extracts tool_call_id/name/status from ToolMessage objects
- on_tool_error now emits tool_result message events with error status
- record_middleware uses middleware:{tag} event_type and middleware category
- Summarization custom events use middleware:summarize category
- TitleMiddleware injects middleware:title tag via get_config() inheritance
- SummarizationMiddleware model bound with middleware:summarize tag
- Worker writes human_message using HumanMessage.model_dump()
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(threads): switch search endpoint to threads_meta table and sync title
- POST /api/threads/search now queries threads_meta table directly,
removing the two-phase Store + Checkpointer scan approach
- Add ThreadMetaRepository.search() with metadata/status filters
- Add ThreadMetaRepository.update_display_name() for title sync
- Worker syncs checkpoint title to threads_meta.display_name on run completion
- Map display_name to values.title in search response for API compatibility
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(threads): history endpoint reads messages from event store
- POST /api/threads/{thread_id}/history now combines two data sources:
checkpointer for checkpoint_id, metadata, title, thread_data;
event store for messages (complete history, not truncated by summarization)
- Strip internal LangGraph metadata keys from response
- Remove full channel_values serialization in favor of selective fields
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: remove duplicate optional-dependencies header in pyproject.toml
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(middleware): pass tagged config to TitleMiddleware ainvoke call
Without the config, the middleware:title tag was not injected,
causing the LLM response to be recorded as a lead_agent ai_message
in run_events.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: resolve merge conflict in .env.example
Keep both DATABASE_URL (from persistence-scaffold) and WECOM
credentials (from main) after the merge.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(persistence): address review feedback on PR #1851
- Fix naive datetime.now() → datetime.now(UTC) in all ORM models
- Fix seq race condition in DbRunEventStore.put() with FOR UPDATE
and UNIQUE(thread_id, seq) constraint
- Encapsulate _store access in RunManager.update_run_completion()
- Deduplicate _store.put() logic in RunManager via _persist_to_store()
- Add update_run_completion to RunStore ABC + MemoryRunStore
- Wire follow_up_to_run_id through the full create path
- Add error recovery to RunJournal._flush_sync() lost-event scenario
- Add migration note for search_threads breaking change
- Fix test_checkpointer_none_fix mock to set database=None
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* chore: update uv.lock
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(persistence): address 22 review comments from CodeQL, Copilot, and Code Quality
Bug fixes:
- Sanitize log params to prevent log injection (CodeQL)
- Reset threads_meta.status to idle/error when run completes
- Attach messages only to latest checkpoint in /history response
- Write threads_meta on POST /threads so new threads appear in search
Lint fixes:
- Remove unused imports (journal.py, migrations/env.py, test_converters.py)
- Convert lambda to named function (engine.py, Ruff E731)
- Remove unused logger definitions in repos (Ruff F841)
- Add logging to JSONL decode errors and empty except blocks
- Separate assert side-effects in tests (CodeQL)
- Remove unused local variables in tests (Ruff F841)
- Fix max_trace_content truncation to use byte length, not char length
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
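The byte-length truncation fix matters because multi-byte UTF-8 text (e.g. Chinese) can exceed a byte budget at a much lower character count, and a naive byte slice can split a code point. A sketch of a safe version (function name is illustrative):

```python
def truncate_utf8(text: str, max_bytes: int) -> str:
    """Truncate by byte length, not character count, without splitting
    a multi-byte UTF-8 sequence at the cut point."""
    data = text.encode("utf-8")
    if len(data) <= max_bytes:
        return text
    # errors="ignore" drops any partial trailing sequence left by the slice
    return data[:max_bytes].decode("utf-8", errors="ignore")
```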
* style: apply ruff format to persistence and runtime files
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Potential fix for pull request finding 'Statement has no effect'
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
* refactor(runtime): introduce RunContext to reduce run_agent parameter bloat
Extract checkpointer, store, event_store, run_events_config, thread_meta_repo,
and follow_up_to_run_id into a frozen RunContext dataclass. Add get_run_context()
in deps.py to build the base context from app.state singletons. start_run() uses
dataclasses.replace() to enrich per-run fields before passing ctx to run_agent.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(gateway): move sanitize_log_param to app/gateway/utils.py
Extract the log-injection sanitizer from routers/threads.py into a shared
utils module and rename to sanitize_log_param (public API). Eliminates the
reverse service → router import in services.py.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
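A log-injection sanitizer of this kind typically just strips newline characters so user-controlled values cannot forge extra log lines. A minimal sketch (the real `sanitize_log_param` may do more):

```python
def sanitize_log_param(value: object) -> str:
    """Strip CR/LF so user input can't inject fake log lines."""
    return str(value).replace("\r", "").replace("\n", "")
```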
* perf: use SQL aggregation for feedback stats and thread token usage
Replace Python-side counting in FeedbackRepository.aggregate_by_run with
a single SELECT COUNT/SUM query. Add RunStore.aggregate_tokens_by_thread
abstract method with SQL GROUP BY implementation in RunRepository and
Python fallback in MemoryRunStore. Simplify the thread_token_usage
endpoint to delegate to the new method, eliminating the limit=10000
truncation risk.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
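The MemoryRunStore-style Python fallback mentioned above might look like the sketch below; the SQL implementation collapses the same computation into one `GROUP BY` query. Dict keys are illustrative:

```python
from collections import defaultdict


def aggregate_tokens_by_thread(runs: list[dict]) -> dict[str, dict]:
    """Sum token usage per thread in Python (memory-backend fallback).

    The SQL backend does the equivalent with SELECT SUM(...) GROUP BY thread_id,
    avoiding any row-limit truncation.
    """
    totals: dict[str, dict] = defaultdict(lambda: {"input_tokens": 0, "output_tokens": 0})
    for run in runs:
        t = totals[run["thread_id"]]
        t["input_tokens"] += run.get("input_tokens", 0)
        t["output_tokens"] += run.get("output_tokens", 0)
    return dict(totals)
```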
* docs: annotate DbRunEventStore.put() as low-frequency path
Add docstring clarifying that put() opens a per-call transaction with
FOR UPDATE and should only be used for infrequent writes (currently
just the initial human_message event). High-throughput callers should
use put_batch() instead.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(threads): fall back to Store search when ThreadMetaRepository is unavailable
When database.backend=memory (default) or no SQL session factory is
configured, search_threads now queries the LangGraph Store instead of
returning 503. Returns empty list if neither Store nor repo is available.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(persistence): introduce ThreadMetaStore ABC for backend-agnostic thread metadata
Add ThreadMetaStore abstract base class with create/get/search/update/delete
interface. ThreadMetaRepository (SQL) now inherits from it. New
MemoryThreadMetaStore wraps LangGraph BaseStore for memory-mode deployments.
deps.py now always provides a non-None thread_meta_repo, eliminating all
`if thread_meta_repo is not None` guards in services.py, worker.py, and
routers/threads.py. search_threads no longer needs a Store fallback branch.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(history): read messages from checkpointer instead of RunEventStore
The /history endpoint now reads messages directly from the
checkpointer's channel_values (the authoritative source) instead of
querying RunEventStore.list_messages(). The RunEventStore API is
preserved for other consumers.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(persistence): address new Copilot review comments
- feedback.py: validate thread_id/run_id before deleting feedback
- jsonl.py: add path traversal protection with ID validation
- run_repo.py: parse `before` to datetime for PostgreSQL compat
- thread_meta_repo.py: fix pagination when metadata filter is active
- database_config.py: use resolve_path for sqlite_dir consistency
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Implement skill self-evolution and skill_manage flow (#1874)
* chore: ignore .worktrees directory
* Add skill_manage self-evolution flow
* Fix CI regressions for skill_manage
* Address PR review feedback for skill evolution
* fix(skill-evolution): preserve history on delete
* fix(skill-evolution): tighten scanner fallbacks
* docs: add skill_manage e2e evidence screenshot
* fix(skill-manage): avoid blocking fs ops in session runtime
---------
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
* fix(config): resolve sqlite_dir relative to CWD, not Paths.base_dir
resolve_path() resolves relative to Paths.base_dir (.deer-flow),
which double-nested the path to .deer-flow/.deer-flow/data/app.db.
Use Path.resolve() (CWD-relative) instead.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
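The double-nesting bug is easy to reproduce with plain `pathlib`; joining a config value that already contains the base directory onto that same base directory duplicates it:

```python
from pathlib import Path

base_dir = Path(".deer-flow")     # stand-in for Paths.base_dir
cfg_value = ".deer-flow/data"     # sqlite_dir as written in config

# Buggy: resolving relative to base_dir double-nests the path
buggy = base_dir / cfg_value      # -> .deer-flow/.deer-flow/data
# Fixed: resolve relative to the current working directory instead
fixed = Path(cfg_value).resolve()
```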
* Feature/feishu receive file (#1608)
* feat(feishu): add channel file materialization hook for inbound messages
- Introduce Channel.receive_file(msg, thread_id) as a base method for file materialization; default is no-op.
- Implement FeishuChannel.receive_file to download files/images from Feishu messages, save to sandbox, and inject virtual paths into msg.text.
- Update ChannelManager to call receive_file for any channel if msg.files is present, enabling downstream model access to user-uploaded files.
- No impact on Slack/Telegram or other channels (they inherit the default no-op).
* style(backend): format code with ruff for lint compliance
- Auto-formatted packages/harness/deerflow/agents/factory.py and tests/test_create_deerflow_agent.py using `ruff format`
- Ensured both files conform to project linting standards
- Fixes CI lint check failures caused by code style issues
* fix(feishu): handle file write operation asynchronously to prevent blocking
* fix(feishu): rename GetMessageResourceRequest to _GetMessageResourceRequest and remove redundant code
* test(feishu): add tests for receive_file method and placeholder replacement
* fix(manager): remove unnecessary type casting for channel retrieval
* fix(feishu): update logging messages to reflect resource handling instead of image
* fix(feishu): sanitize filename by replacing invalid characters in file uploads
* fix(feishu): improve filename sanitization and reorder image key handling in message processing
* fix(feishu): add thread lock to prevent filename conflicts during file downloads
* fix(test): correct bad merge in test_feishu_parser.py
* chore: run ruff and apply formatting cleanup
fix(feishu): preserve rich-text attachment order and improve fallback filename handling
* fix(docker): restore gateway env vars and fix langgraph empty arg issue (#1915)
Two production docker-compose.yaml bugs prevent `make up` from working:
1. Gateway missing DEER_FLOW_CONFIG_PATH and DEER_FLOW_EXTENSIONS_CONFIG_PATH
environment overrides. Added in fb2d99f (#1836) but accidentally reverted
by ca2fb95 (#1847). Without them, gateway reads host paths from .env via
env_file, causing FileNotFoundError inside the container.
2. Langgraph command fails when LANGGRAPH_ALLOW_BLOCKING is unset (default).
Empty $${allow_blocking} inserts a bare space between flags, causing
' --no-reload' to be parsed as unexpected extra argument. Fix by building
args string first and conditionally appending --allow-blocking.
Co-authored-by: cooper <cooperfu@tencent.com>
* fix(frontend): resolve invalid HTML nesting and tabnabbing vulnerabilities (#1904)
* fix(frontend): resolve invalid HTML nesting and tabnabbing vulnerabilities
Fix `<button>` inside `<a>` invalid HTML in artifact components and add
missing `noopener,noreferrer` to `window.open` calls to prevent reverse
tabnabbing.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(frontend): address Copilot review on tabnabbing and double-tab-open
Remove redundant parent onClick on web_fetch ChainOfThoughtStep to
prevent opening two tabs on link click, and explicitly null out
window.opener after window.open() for defensive tabnabbing hardening.
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(persistence): organize entities into per-entity directories
Restructure the persistence layer from horizontal "models/ + repositories/"
split into vertical entity-aligned directories. Each entity (thread_meta,
run, feedback) now owns its ORM model, abstract interface (where applicable),
and concrete implementations under a single directory with an aggregating
__init__.py for one-line imports.
Layout:
persistence/thread_meta/{base,model,sql,memory}.py
persistence/run/{model,sql}.py
persistence/feedback/{model,sql}.py
models/__init__.py is kept as a facade so Alembic autogenerate continues to
discover all ORM tables via Base.metadata. RunEventRow remains under
models/run_event.py because its storage implementation lives in
runtime/events/store/db.py and has no matching repository directory.
The repositories/ directory is removed entirely. All call sites in
gateway/deps.py and tests are updated to import from the new entity
packages, e.g.:
from deerflow.persistence.thread_meta import ThreadMetaRepository
from deerflow.persistence.run import RunRepository
from deerflow.persistence.feedback import FeedbackRepository
Full test suite passes (1690 passed, 14 skipped).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(gateway): sync thread rename and delete through ThreadMetaStore
The POST /threads/{id}/state endpoint previously synced title changes
only to the LangGraph Store via _store_upsert. In sqlite mode the search
endpoint reads from the ThreadMetaRepository SQL table, so renames never
appeared in /threads/search until the next agent run completed (worker.py
syncs title from checkpoint to thread_meta in its finally block).
Likewise the DELETE /threads/{id} endpoint cleaned up the filesystem,
Store, and checkpointer but left the threads_meta row orphaned in sqlite,
so deleted threads kept appearing in /threads/search.
Fix both endpoints by routing through the ThreadMetaStore abstraction
which already has the correct sqlite/memory implementations wired up by
deps.py. The rename path now calls update_display_name() and the delete
path calls delete() — both work uniformly across backends.
Verified end-to-end with curl in gateway mode against sqlite backend.
Existing test suite (1690 passed) and focused router/repo tests pass.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(gateway): route all thread metadata access through ThreadMetaStore
Following the rename/delete bug fix in PR1, migrate the remaining direct
LangGraph Store reads/writes in the threads router and services to the
ThreadMetaStore abstraction so that the sqlite and memory backends behave
identically and the legacy dual-write paths can be removed.
Migrated endpoints (threads.py):
- create_thread: idempotency check + write now use thread_meta_repo.get/create
instead of dual-writing the LangGraph Store and the SQL row.
- get_thread: reads from thread_meta_repo.get; the checkpoint-only fallback
for legacy threads is preserved.
- patch_thread: replaced _store_get/_store_put with thread_meta_repo.update_metadata.
- delete_thread_data: dropped the legacy store.adelete; thread_meta_repo.delete
already covers it.
Removed dead code (services.py):
- _upsert_thread_in_store — redundant with the immediately following
thread_meta_repo.create() call.
- _sync_thread_title_after_run — worker.py's finally block already syncs
the title via thread_meta_repo.update_display_name() after each run.
Removed dead code (threads.py):
- _store_get / _store_put / _store_upsert helpers (no remaining callers).
- THREADS_NS constant.
- get_store import (router no longer touches the LangGraph Store directly).
New abstract method:
- ThreadMetaStore.update_metadata(thread_id, metadata) merges metadata into
the thread's metadata field. Implemented in both ThreadMetaRepository (SQL,
read-modify-write inside one session) and MemoryThreadMetaStore. Three new
unit tests cover merge / empty / nonexistent behaviour.
Net change: -134 lines. Full test suite: 1693 passed, 14 skipped.
Verified end-to-end with curl in gateway mode against sqlite backend
(create / patch / get / rename / search / delete).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Co-authored-by: DanielWalnut <45447813+hetaoBackend@users.noreply.github.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: JilongSun <965640067@qq.com>
Co-authored-by: jie <49781832+stan-fu@users.noreply.github.com>
Co-authored-by: cooper <cooperfu@tencent.com>
Co-authored-by: yangzheli <43645580+yangzheli@users.noreply.github.com>
2424 lines · 87 KiB · Python
"""Tests for the IM channel system (MessageBus, ChannelStore, ChannelManager)."""
|
|
|
|
from __future__ import annotations
|
|
|
|
import asyncio
|
|
import json
|
|
import tempfile
|
|
from pathlib import Path
|
|
from types import SimpleNamespace
|
|
from unittest.mock import AsyncMock, MagicMock, patch
|
|
|
|
import pytest
|
|
|
|
from app.channels.base import Channel
|
|
from app.channels.message_bus import InboundMessage, InboundMessageType, MessageBus, OutboundMessage, ResolvedAttachment
|
|
from app.channels.store import ChannelStore
|
|
|
|
|
|
def _run(coro):
|
|
"""Run an async coroutine synchronously."""
|
|
loop = asyncio.new_event_loop()
|
|
try:
|
|
return loop.run_until_complete(coro)
|
|
finally:
|
|
loop.close()
|
|
|
|
|
|
async def _wait_for(condition, *, timeout=5.0, interval=0.05):
|
|
"""Poll *condition* until it returns True, or raise after *timeout* seconds."""
|
|
import time
|
|
|
|
deadline = time.monotonic() + timeout
|
|
while time.monotonic() < deadline:
|
|
if condition():
|
|
return
|
|
await asyncio.sleep(interval)
|
|
raise TimeoutError(f"Condition not met within {timeout}s")
|
|
|
|
|
|
# ---------------------------------------------------------------------------
|
|
# MessageBus tests
|
|
# ---------------------------------------------------------------------------
|
|
|
|
|
|
class TestMessageBus:
|
|
def test_publish_and_get_inbound(self):
|
|
bus = MessageBus()
|
|
|
|
async def go():
|
|
msg = InboundMessage(
|
|
channel_name="test",
|
|
chat_id="chat1",
|
|
user_id="user1",
|
|
text="hello",
|
|
)
|
|
await bus.publish_inbound(msg)
|
|
result = await bus.get_inbound()
|
|
assert result.text == "hello"
|
|
assert result.channel_name == "test"
|
|
assert result.chat_id == "chat1"
|
|
|
|
_run(go())
|
|
|
|
def test_inbound_queue_is_fifo(self):
|
|
bus = MessageBus()
|
|
|
|
async def go():
|
|
for i in range(3):
|
|
await bus.publish_inbound(InboundMessage(channel_name="test", chat_id="c", user_id="u", text=f"msg{i}"))
|
|
for i in range(3):
|
|
msg = await bus.get_inbound()
|
|
assert msg.text == f"msg{i}"
|
|
|
|
_run(go())
|
|
|
|
def test_outbound_callback(self):
|
|
bus = MessageBus()
|
|
received = []
|
|
|
|
async def callback(msg):
|
|
received.append(msg)
|
|
|
|
async def go():
|
|
bus.subscribe_outbound(callback)
|
|
out = OutboundMessage(channel_name="test", chat_id="c1", thread_id="t1", text="reply")
|
|
await bus.publish_outbound(out)
|
|
assert len(received) == 1
|
|
assert received[0].text == "reply"
|
|
|
|
_run(go())
|
|
|
|
def test_unsubscribe_outbound(self):
|
|
bus = MessageBus()
|
|
received = []
|
|
|
|
async def callback(msg):
|
|
received.append(msg)
|
|
|
|
async def go():
|
|
bus.subscribe_outbound(callback)
|
|
bus.unsubscribe_outbound(callback)
|
|
out = OutboundMessage(channel_name="test", chat_id="c1", thread_id="t1", text="reply")
|
|
await bus.publish_outbound(out)
|
|
assert len(received) == 0
|
|
|
|
_run(go())
|
|
|
|
def test_outbound_error_does_not_crash(self):
|
|
bus = MessageBus()
|
|
|
|
async def bad_callback(msg):
|
|
raise ValueError("boom")
|
|
|
|
received = []
|
|
|
|
async def good_callback(msg):
|
|
received.append(msg)
|
|
|
|
async def go():
|
|
bus.subscribe_outbound(bad_callback)
|
|
bus.subscribe_outbound(good_callback)
|
|
out = OutboundMessage(channel_name="test", chat_id="c1", thread_id="t1", text="reply")
|
|
await bus.publish_outbound(out)
|
|
assert len(received) == 1
|
|
|
|
_run(go())
|
|
|
|
def test_inbound_message_defaults(self):
|
|
msg = InboundMessage(channel_name="test", chat_id="c", user_id="u", text="hi")
|
|
assert msg.msg_type == InboundMessageType.CHAT
|
|
assert msg.thread_ts is None
|
|
assert msg.files == []
|
|
assert msg.metadata == {}
|
|
assert msg.created_at > 0
|
|
|
|
def test_outbound_message_defaults(self):
|
|
msg = OutboundMessage(channel_name="test", chat_id="c", thread_id="t", text="hi")
|
|
assert msg.artifacts == []
|
|
assert msg.is_final is True
|
|
assert msg.thread_ts is None
|
|
assert msg.metadata == {}
|
|
|
|
|
|
# ---------------------------------------------------------------------------
|
|
# ChannelStore tests
|
|
# ---------------------------------------------------------------------------
|
|
|
|
|
|
class TestChannelStore:
    @pytest.fixture
    def store(self, tmp_path):
        return ChannelStore(path=tmp_path / "store.json")

    def test_set_and_get_thread_id(self, store):
        store.set_thread_id("slack", "ch1", "thread-abc", user_id="u1")
        assert store.get_thread_id("slack", "ch1") == "thread-abc"

    def test_get_nonexistent_returns_none(self, store):
        assert store.get_thread_id("slack", "nonexistent") is None

    def test_remove(self, store):
        store.set_thread_id("slack", "ch1", "t1")
        assert store.remove("slack", "ch1") is True
        assert store.get_thread_id("slack", "ch1") is None

    def test_remove_nonexistent_returns_false(self, store):
        assert store.remove("slack", "nope") is False

    def test_list_entries_all(self, store):
        store.set_thread_id("slack", "ch1", "t1")
        store.set_thread_id("feishu", "ch2", "t2")
        entries = store.list_entries()
        assert len(entries) == 2

    def test_list_entries_filtered(self, store):
        store.set_thread_id("slack", "ch1", "t1")
        store.set_thread_id("feishu", "ch2", "t2")
        entries = store.list_entries(channel_name="slack")
        assert len(entries) == 1
        assert entries[0]["channel_name"] == "slack"

    def test_persistence(self, tmp_path):
        path = tmp_path / "store.json"
        store1 = ChannelStore(path=path)
        store1.set_thread_id("slack", "ch1", "t1")

        store2 = ChannelStore(path=path)
        assert store2.get_thread_id("slack", "ch1") == "t1"

    def test_update_preserves_created_at(self, store):
        store.set_thread_id("slack", "ch1", "t1")
        entries = store.list_entries()
        created_at = entries[0]["created_at"]

        store.set_thread_id("slack", "ch1", "t2")
        entries = store.list_entries()
        assert entries[0]["created_at"] == created_at
        assert entries[0]["thread_id"] == "t2"
        assert entries[0]["updated_at"] >= created_at

    def test_corrupt_file_handled(self, tmp_path):
        path = tmp_path / "store.json"
        path.write_text("not json", encoding="utf-8")
        store = ChannelStore(path=path)
        assert store.get_thread_id("x", "y") is None

# ---------------------------------------------------------------------------
# Channel base class tests
# ---------------------------------------------------------------------------


class DummyChannel(Channel):
    """Concrete test implementation of Channel."""

    def __init__(self, bus, config=None):
        super().__init__(name="dummy", bus=bus, config=config or {})
        self.sent_messages: list[OutboundMessage] = []
        self._running = False

    async def start(self):
        self._running = True
        self.bus.subscribe_outbound(self._on_outbound)

    async def stop(self):
        self._running = False
        self.bus.unsubscribe_outbound(self._on_outbound)

    async def send(self, msg: OutboundMessage):
        self.sent_messages.append(msg)


class TestChannelBase:
    def test_make_inbound(self):
        bus = MessageBus()
        ch = DummyChannel(bus)
        msg = ch._make_inbound(
            chat_id="c1",
            user_id="u1",
            text="hello",
            msg_type=InboundMessageType.COMMAND,
        )
        assert msg.channel_name == "dummy"
        assert msg.chat_id == "c1"
        assert msg.text == "hello"
        assert msg.msg_type == InboundMessageType.COMMAND

    def test_on_outbound_routes_to_channel(self):
        bus = MessageBus()
        ch = DummyChannel(bus)

        async def go():
            await ch.start()
            msg = OutboundMessage(channel_name="dummy", chat_id="c1", thread_id="t1", text="hi")
            await bus.publish_outbound(msg)
            assert len(ch.sent_messages) == 1

        _run(go())

    def test_on_outbound_ignores_other_channels(self):
        bus = MessageBus()
        ch = DummyChannel(bus)

        async def go():
            await ch.start()
            msg = OutboundMessage(channel_name="other", chat_id="c1", thread_id="t1", text="hi")
            await bus.publish_outbound(msg)
            assert len(ch.sent_messages) == 0

        _run(go())

# ---------------------------------------------------------------------------
# _extract_response_text tests
# ---------------------------------------------------------------------------


class TestExtractResponseText:
    def test_string_content(self):
        from app.channels.manager import _extract_response_text

        result = {"messages": [{"type": "ai", "content": "hello"}]}
        assert _extract_response_text(result) == "hello"

    def test_list_content_blocks(self):
        from app.channels.manager import _extract_response_text

        result = {"messages": [{"type": "ai", "content": [{"type": "text", "text": "hello"}, {"type": "text", "text": " world"}]}]}
        assert _extract_response_text(result) == "hello world"

    def test_picks_last_ai_message(self):
        from app.channels.manager import _extract_response_text

        result = {
            "messages": [
                {"type": "ai", "content": "first"},
                {"type": "human", "content": "question"},
                {"type": "ai", "content": "second"},
            ]
        }
        assert _extract_response_text(result) == "second"

    def test_empty_messages(self):
        from app.channels.manager import _extract_response_text

        assert _extract_response_text({"messages": []}) == ""

    def test_no_ai_messages(self):
        from app.channels.manager import _extract_response_text

        result = {"messages": [{"type": "human", "content": "hi"}]}
        assert _extract_response_text(result) == ""

    def test_list_result(self):
        from app.channels.manager import _extract_response_text

        result = [{"type": "ai", "content": "from list"}]
        assert _extract_response_text(result) == "from list"

    def test_skips_empty_ai_content(self):
        from app.channels.manager import _extract_response_text

        result = {
            "messages": [
                {"type": "ai", "content": ""},
                {"type": "ai", "content": "actual response"},
            ]
        }
        assert _extract_response_text(result) == "actual response"

    def test_clarification_tool_message(self):
        from app.channels.manager import _extract_response_text

        result = {
            "messages": [
                {"type": "human", "content": "健身"},
                {"type": "ai", "content": "", "tool_calls": [{"name": "ask_clarification", "args": {"question": "您想了解哪方面?"}}]},
                {"type": "tool", "name": "ask_clarification", "content": "您想了解哪方面?"},
            ]
        }
        assert _extract_response_text(result) == "您想了解哪方面?"

    def test_clarification_over_empty_ai(self):
        """When AI content is empty but an ask_clarification tool message exists, use the tool message."""
        from app.channels.manager import _extract_response_text

        result = {
            "messages": [
                {"type": "ai", "content": ""},
                {"type": "tool", "name": "ask_clarification", "content": "Could you clarify?"},
            ]
        }
        assert _extract_response_text(result) == "Could you clarify?"

    def test_does_not_leak_previous_turn_text(self):
        """When the current turn's AI message has no text (only tool calls), do not return the previous turn's text."""
        from app.channels.manager import _extract_response_text

        result = {
            "messages": [
                {"type": "human", "content": "hello"},
                {"type": "ai", "content": "Hi there!"},
                {"type": "human", "content": "export data"},
                {
                    "type": "ai",
                    "content": "",
                    "tool_calls": [{"name": "present_files", "args": {"filepaths": ["/mnt/user-data/outputs/data.csv"]}}],
                },
                {"type": "tool", "name": "present_files", "content": "ok"},
            ]
        }
        # Should return "" (no text in current turn), NOT "Hi there!" from previous turn
        assert _extract_response_text(result) == ""

# ---------------------------------------------------------------------------
# ChannelManager tests
# ---------------------------------------------------------------------------


def _make_mock_langgraph_client(thread_id="test-thread-123", run_result=None):
    """Create a mock langgraph_sdk async client."""
    mock_client = MagicMock()

    # threads.create() returns a Thread-like dict
    mock_client.threads.create = AsyncMock(return_value={"thread_id": thread_id})

    # threads.get() returns thread info (succeeds by default)
    mock_client.threads.get = AsyncMock(return_value={"thread_id": thread_id})

    # runs.wait() returns the final state with messages
    if run_result is None:
        run_result = {
            "messages": [
                {"type": "human", "content": "hi"},
                {"type": "ai", "content": "Hello from agent!"},
            ]
        }
    mock_client.runs.wait = AsyncMock(return_value=run_result)

    return mock_client


def _make_stream_part(event: str, data):
    return SimpleNamespace(event=event, data=data)


def _make_async_iterator(items):
    async def iterator():
        for item in items:
            yield item

    return iterator()

class TestChannelManager:
    def test_handle_chat_calls_channel_receive_file_for_inbound_files(self, monkeypatch):
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            outbound_received = []

            async def capture_outbound(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture_outbound)

            mock_client = _make_mock_langgraph_client()
            manager._client = mock_client

            modified_msg = InboundMessage(
                channel_name="test",
                chat_id="chat1",
                user_id="user1",
                text="with /mnt/user-data/uploads/demo.png",
                files=[{"image_key": "img_1"}],
            )
            mock_channel = MagicMock()
            mock_channel.receive_file = AsyncMock(return_value=modified_msg)
            mock_service = MagicMock()
            mock_service.get_channel.return_value = mock_channel
            monkeypatch.setattr("app.channels.service.get_channel_service", lambda: mock_service)

            await manager.start()

            inbound = InboundMessage(
                channel_name="test",
                chat_id="chat1",
                user_id="user1",
                text="hi [image]",
                files=[{"image_key": "img_1"}],
            )
            await bus.publish_inbound(inbound)
            await _wait_for(lambda: len(outbound_received) >= 1)
            await manager.stop()

            mock_channel.receive_file.assert_awaited_once()
            called_msg, called_thread_id = mock_channel.receive_file.await_args.args
            assert called_msg.text == "hi [image]"
            assert isinstance(called_thread_id, str)
            assert called_thread_id

            mock_client.runs.wait.assert_called_once()
            run_call_args = mock_client.runs.wait.call_args
            assert run_call_args[1]["input"]["messages"][0]["content"] == "with /mnt/user-data/uploads/demo.png"

        _run(go())

    def test_handle_chat_creates_thread(self):
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            outbound_received = []

            async def capture_outbound(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture_outbound)

            mock_client = _make_mock_langgraph_client()
            manager._client = mock_client

            await manager.start()

            inbound = InboundMessage(channel_name="test", chat_id="chat1", user_id="user1", text="hi")
            await bus.publish_inbound(inbound)
            await _wait_for(lambda: len(outbound_received) >= 1)
            await manager.stop()

            # Thread should be created on the LangGraph Server
            mock_client.threads.create.assert_called_once()

            # Thread ID should be stored
            thread_id = store.get_thread_id("test", "chat1")
            assert thread_id == "test-thread-123"

            # runs.wait should be called with the thread_id
            mock_client.runs.wait.assert_called_once()
            call_args = mock_client.runs.wait.call_args
            assert call_args[0][0] == "test-thread-123"  # thread_id
            assert call_args[0][1] == "lead_agent"  # assistant_id
            assert call_args[1]["input"]["messages"][0]["content"] == "hi"

            assert len(outbound_received) == 1
            assert outbound_received[0].text == "Hello from agent!"

        _run(go())

    def test_handle_chat_uses_channel_session_overrides(self):
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(
                bus=bus,
                store=store,
                channel_sessions={
                    "telegram": {
                        "assistant_id": "mobile_agent",
                        "config": {"recursion_limit": 55},
                        "context": {
                            "thinking_enabled": False,
                            "subagent_enabled": True,
                        },
                    }
                },
            )

            outbound_received = []

            async def capture_outbound(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture_outbound)

            mock_client = _make_mock_langgraph_client()
            manager._client = mock_client

            await manager.start()

            inbound = InboundMessage(channel_name="telegram", chat_id="chat1", user_id="user1", text="hi")
            await bus.publish_inbound(inbound)
            await _wait_for(lambda: len(outbound_received) >= 1)
            await manager.stop()

            mock_client.runs.wait.assert_called_once()
            call_args = mock_client.runs.wait.call_args
            assert call_args[0][1] == "lead_agent"
            assert call_args[1]["config"]["recursion_limit"] == 55
            assert call_args[1]["context"]["thinking_enabled"] is False
            assert call_args[1]["context"]["subagent_enabled"] is True
            assert call_args[1]["context"]["agent_name"] == "mobile-agent"

        _run(go())

    def test_handle_chat_uses_user_session_overrides(self):
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(
                bus=bus,
                store=store,
                default_session={"context": {"is_plan_mode": True}},
                channel_sessions={
                    "telegram": {
                        "assistant_id": "mobile_agent",
                        "config": {"recursion_limit": 55},
                        "context": {
                            "thinking_enabled": False,
                            "subagent_enabled": False,
                        },
                        "users": {
                            "vip-user": {
                                "assistant_id": " VIP_AGENT ",
                                "config": {"recursion_limit": 77},
                                "context": {
                                    "thinking_enabled": True,
                                    "subagent_enabled": True,
                                },
                            }
                        },
                    }
                },
            )

            outbound_received = []

            async def capture_outbound(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture_outbound)

            mock_client = _make_mock_langgraph_client()
            manager._client = mock_client

            await manager.start()

            inbound = InboundMessage(channel_name="telegram", chat_id="chat1", user_id="vip-user", text="hi")
            await bus.publish_inbound(inbound)
            await _wait_for(lambda: len(outbound_received) >= 1)
            await manager.stop()

            mock_client.runs.wait.assert_called_once()
            call_args = mock_client.runs.wait.call_args
            assert call_args[0][1] == "lead_agent"
            assert call_args[1]["config"]["recursion_limit"] == 77
            assert call_args[1]["context"]["thinking_enabled"] is True
            assert call_args[1]["context"]["subagent_enabled"] is True
            assert call_args[1]["context"]["agent_name"] == "vip-agent"
            assert call_args[1]["context"]["is_plan_mode"] is True

        _run(go())

    def test_handle_chat_rejects_invalid_custom_agent_name(self):
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(
                bus=bus,
                store=store,
                channel_sessions={
                    "telegram": {
                        "assistant_id": "bad agent!",
                    }
                },
            )

            outbound_received = []

            async def capture_outbound(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture_outbound)

            mock_client = _make_mock_langgraph_client()
            manager._client = mock_client

            await manager.start()

            inbound = InboundMessage(channel_name="telegram", chat_id="chat1", user_id="user1", text="hi")
            await bus.publish_inbound(inbound)
            await _wait_for(lambda: len(outbound_received) >= 1)
            await manager.stop()

            mock_client.runs.wait.assert_not_called()
            assert outbound_received[0].text == ("Invalid channel session assistant_id 'bad agent!'. Use 'lead_agent' or a custom agent name containing only letters, digits, and hyphens.")

        _run(go())

    def test_handle_feishu_chat_streams_multiple_outbound_updates(self, monkeypatch):
        from app.channels.manager import ChannelManager

        monkeypatch.setattr("app.channels.manager.STREAM_UPDATE_MIN_INTERVAL_SECONDS", 0.0)

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            outbound_received = []

            async def capture_outbound(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture_outbound)

            stream_events = [
                _make_stream_part(
                    "messages-tuple",
                    [
                        {"id": "ai-1", "content": "Hello", "type": "AIMessageChunk"},
                        {"langgraph_node": "agent"},
                    ],
                ),
                _make_stream_part(
                    "messages-tuple",
                    [
                        {"id": "ai-1", "content": " world", "type": "AIMessageChunk"},
                        {"langgraph_node": "agent"},
                    ],
                ),
                _make_stream_part(
                    "values",
                    {
                        "messages": [
                            {"type": "human", "content": "hi"},
                            {"type": "ai", "content": "Hello world"},
                        ],
                        "artifacts": [],
                    },
                ),
            ]

            mock_client = _make_mock_langgraph_client()
            mock_client.runs.stream = MagicMock(return_value=_make_async_iterator(stream_events))
            manager._client = mock_client

            await manager.start()

            inbound = InboundMessage(
                channel_name="feishu",
                chat_id="chat1",
                user_id="user1",
                text="hi",
                thread_ts="om-source-1",
            )
            await bus.publish_inbound(inbound)
            await _wait_for(lambda: len(outbound_received) >= 3)
            await manager.stop()

            mock_client.runs.stream.assert_called_once()
            assert [msg.text for msg in outbound_received] == ["Hello", "Hello world", "Hello world"]
            assert [msg.is_final for msg in outbound_received] == [False, False, True]
            assert all(msg.thread_ts == "om-source-1" for msg in outbound_received)

        _run(go())

    def test_handle_feishu_stream_error_still_sends_final(self, monkeypatch):
        """When the stream raises mid-way, a final outbound with is_final=True must still be published."""
        from app.channels.manager import ChannelManager

        monkeypatch.setattr("app.channels.manager.STREAM_UPDATE_MIN_INTERVAL_SECONDS", 0.0)

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            outbound_received = []

            async def capture_outbound(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture_outbound)

            async def _failing_stream():
                yield _make_stream_part(
                    "messages-tuple",
                    [
                        {"id": "ai-1", "content": "Partial", "type": "AIMessageChunk"},
                        {"langgraph_node": "agent"},
                    ],
                )
                raise ConnectionError("stream broken")

            mock_client = _make_mock_langgraph_client()
            mock_client.runs.stream = MagicMock(return_value=_failing_stream())
            manager._client = mock_client

            await manager.start()

            inbound = InboundMessage(
                channel_name="feishu",
                chat_id="chat1",
                user_id="user1",
                text="hi",
                thread_ts="om-source-1",
            )
            await bus.publish_inbound(inbound)
            await _wait_for(lambda: any(m.is_final for m in outbound_received))
            await manager.stop()

            # Should have at least one intermediate and one final message
            final_msgs = [m for m in outbound_received if m.is_final]
            assert len(final_msgs) == 1
            assert final_msgs[0].thread_ts == "om-source-1"

        _run(go())

    def test_handle_feishu_stream_conflict_sends_busy_message(self, monkeypatch):
        import httpx
        from langgraph_sdk.errors import ConflictError

        from app.channels.manager import THREAD_BUSY_MESSAGE, ChannelManager

        monkeypatch.setattr("app.channels.manager.STREAM_UPDATE_MIN_INTERVAL_SECONDS", 0.0)

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            outbound_received = []

            async def capture_outbound(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture_outbound)

            async def _conflict_stream():
                request = httpx.Request("POST", "http://127.0.0.1:2024/runs")
                response = httpx.Response(409, request=request)
                raise ConflictError(
                    "Thread is already running a task. Wait for it to finish or choose a different multitask strategy.",
                    response=response,
                    body={"message": "Thread is already running a task. Wait for it to finish or choose a different multitask strategy."},
                )
                yield  # pragma: no cover

            mock_client = _make_mock_langgraph_client()
            mock_client.runs.stream = MagicMock(return_value=_conflict_stream())
            manager._client = mock_client

            await manager.start()

            inbound = InboundMessage(
                channel_name="feishu",
                chat_id="chat1",
                user_id="user1",
                text="hi",
                thread_ts="om-source-1",
            )
            await bus.publish_inbound(inbound)
            await _wait_for(lambda: any(m.is_final for m in outbound_received))
            await manager.stop()

            final_msgs = [m for m in outbound_received if m.is_final]
            assert len(final_msgs) == 1
            assert final_msgs[0].text == THREAD_BUSY_MESSAGE
            assert final_msgs[0].thread_ts == "om-source-1"

        _run(go())

    def test_handle_command_help(self):
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            outbound_received = []

            async def capture_outbound(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture_outbound)
            await manager.start()

            inbound = InboundMessage(
                channel_name="test",
                chat_id="chat1",
                user_id="user1",
                text="/help",
                msg_type=InboundMessageType.COMMAND,
            )
            await bus.publish_inbound(inbound)
            await _wait_for(lambda: len(outbound_received) >= 1)
            await manager.stop()

            assert len(outbound_received) == 1
            assert "/new" in outbound_received[0].text
            assert "/help" in outbound_received[0].text

        _run(go())

    def test_handle_command_new(self):
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            store.set_thread_id("test", "chat1", "old-thread")

            mock_client = _make_mock_langgraph_client(thread_id="new-thread-456")
            manager._client = mock_client

            outbound_received = []

            async def capture_outbound(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture_outbound)
            await manager.start()

            inbound = InboundMessage(
                channel_name="test",
                chat_id="chat1",
                user_id="user1",
                text="/new",
                msg_type=InboundMessageType.COMMAND,
            )
            await bus.publish_inbound(inbound)
            await _wait_for(lambda: len(outbound_received) >= 1)
            await manager.stop()

            new_thread = store.get_thread_id("test", "chat1")
            assert new_thread == "new-thread-456"
            assert new_thread != "old-thread"
            assert "New conversation started" in outbound_received[0].text

            # threads.create should be called for /new
            mock_client.threads.create.assert_called_once()

        _run(go())

    def test_each_topic_creates_new_thread(self):
        """Messages with distinct topic_ids should each create a new DeerFlow thread."""
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            # Return a different thread_id for each create call
            thread_ids = iter(["thread-1", "thread-2"])

            async def create_thread(**kwargs):
                return {"thread_id": next(thread_ids)}

            mock_client = _make_mock_langgraph_client()
            mock_client.threads.create = AsyncMock(side_effect=create_thread)
            manager._client = mock_client

            outbound_received = []

            async def capture(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture)
            await manager.start()

            # Send two messages with different topic_ids (e.g. group chat, each starts a new topic)
            for i, text in enumerate(["first", "second"]):
                await bus.publish_inbound(
                    InboundMessage(
                        channel_name="test",
                        chat_id="chat1",
                        user_id="user1",
                        text=text,
                        topic_id=f"topic-{i}",
                    )
                )
            await _wait_for(lambda: mock_client.runs.wait.call_count >= 2)
            await manager.stop()

            # threads.create should be called twice (different topics)
            assert mock_client.threads.create.call_count == 2

            # runs.wait should be called twice with different thread_ids
            assert mock_client.runs.wait.call_count == 2
            wait_thread_ids = [c[0][0] for c in mock_client.runs.wait.call_args_list]
            assert "thread-1" in wait_thread_ids
            assert "thread-2" in wait_thread_ids

        _run(go())

    def test_same_topic_reuses_thread(self):
        """Messages with the same topic_id should reuse the same DeerFlow thread."""
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            mock_client = _make_mock_langgraph_client(thread_id="topic-thread-1")
            manager._client = mock_client

            outbound_received = []

            async def capture(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture)
            await manager.start()

            # Send two messages with the same topic_id (simulates replies in a thread)
            for text in ["first message", "follow-up"]:
                msg = InboundMessage(
                    channel_name="test",
                    chat_id="chat1",
                    user_id="user1",
                    text=text,
                    topic_id="topic-root-123",
                )
                await bus.publish_inbound(msg)

            await _wait_for(lambda: mock_client.runs.wait.call_count >= 2)
            await manager.stop()

            # threads.create should be called only ONCE (second message reuses the thread)
            mock_client.threads.create.assert_called_once()

            # Both runs.wait calls should use the same thread_id
            assert mock_client.runs.wait.call_count == 2
            for call in mock_client.runs.wait.call_args_list:
                assert call[0][0] == "topic-thread-1"

        _run(go())

    def test_none_topic_reuses_thread(self):
        """Messages with topic_id=None should reuse the same thread (e.g. Telegram private chat)."""
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            mock_client = _make_mock_langgraph_client(thread_id="private-thread-1")
            manager._client = mock_client

            outbound_received = []

            async def capture(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture)
            await manager.start()

            # Send two messages with topic_id=None (simulates Telegram private chat)
            for text in ["hello", "what did I just say?"]:
                msg = InboundMessage(
                    channel_name="telegram",
                    chat_id="chat1",
                    user_id="user1",
                    text=text,
                    topic_id=None,
                )
                await bus.publish_inbound(msg)

            await _wait_for(lambda: mock_client.runs.wait.call_count >= 2)
            await manager.stop()

            # threads.create should be called only ONCE (second message reuses the thread)
            mock_client.threads.create.assert_called_once()

            # Both runs.wait calls should use the same thread_id
            assert mock_client.runs.wait.call_count == 2
            for call in mock_client.runs.wait.call_args_list:
                assert call[0][0] == "private-thread-1"

        _run(go())

    def test_different_topics_get_different_threads(self):
        """Messages with different topic_ids should create separate threads."""
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            thread_ids = iter(["thread-A", "thread-B"])

            async def create_thread(**kwargs):
                return {"thread_id": next(thread_ids)}

            mock_client = _make_mock_langgraph_client()
            mock_client.threads.create = AsyncMock(side_effect=create_thread)
            manager._client = mock_client

            bus.subscribe_outbound(lambda msg: None)
            await manager.start()

            # Send messages with different topic_ids
            for topic in ["topic-1", "topic-2"]:
                msg = InboundMessage(
                    channel_name="test",
                    chat_id="chat1",
                    user_id="user1",
                    text="hi",
                    topic_id=topic,
                )
                await bus.publish_inbound(msg)

            await _wait_for(lambda: mock_client.runs.wait.call_count >= 2)
            await manager.stop()

            # threads.create called twice (different topics)
            assert mock_client.threads.create.call_count == 2

            # runs.wait used different thread_ids
            wait_thread_ids = [c[0][0] for c in mock_client.runs.wait.call_args_list]
            assert set(wait_thread_ids) == {"thread-A", "thread-B"}

        _run(go())

    def test_handle_command_bootstrap_with_text(self):
        """/bootstrap <text> should route to chat with is_bootstrap=True in run_context."""
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            outbound_received = []

            async def capture_outbound(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture_outbound)

            mock_client = _make_mock_langgraph_client()
            manager._client = mock_client

            await manager.start()

            inbound = InboundMessage(
                channel_name="test",
                chat_id="chat1",
                user_id="user1",
                text="/bootstrap setup my workspace",
                msg_type=InboundMessageType.COMMAND,
            )
            await bus.publish_inbound(inbound)
            await _wait_for(lambda: len(outbound_received) >= 1)
            await manager.stop()

            # Should go through the chat path (runs.wait), not the command reply path
            mock_client.runs.wait.assert_called_once()
            call_args = mock_client.runs.wait.call_args

            # The text sent to the agent should be the part after /bootstrap
            assert call_args[1]["input"]["messages"][0]["content"] == "setup my workspace"

            # run_context should contain is_bootstrap=True
            assert call_args[1]["context"]["is_bootstrap"] is True

            # Normal context fields should still be present
            assert "thread_id" in call_args[1]["context"]

            # Should get the agent response (not a command reply)
            assert outbound_received[0].text == "Hello from agent!"

        _run(go())

    def test_handle_command_bootstrap_without_text(self):
        """/bootstrap with no text should use a default message."""
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            outbound_received = []

            async def capture_outbound(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture_outbound)

            mock_client = _make_mock_langgraph_client()
            manager._client = mock_client

            await manager.start()

            inbound = InboundMessage(
                channel_name="test",
                chat_id="chat1",
                user_id="user1",
                text="/bootstrap",
                msg_type=InboundMessageType.COMMAND,
            )
            await bus.publish_inbound(inbound)
            await _wait_for(lambda: len(outbound_received) >= 1)
            await manager.stop()

            mock_client.runs.wait.assert_called_once()
            call_args = mock_client.runs.wait.call_args

            # Default text should be used when no text is provided
            assert call_args[1]["input"]["messages"][0]["content"] == "Initialize workspace"
            assert call_args[1]["context"]["is_bootstrap"] is True

        _run(go())

    def test_handle_command_bootstrap_feishu_uses_streaming(self, monkeypatch):
        """/bootstrap from feishu should go through the streaming path."""
        from app.channels.manager import ChannelManager

        monkeypatch.setattr("app.channels.manager.STREAM_UPDATE_MIN_INTERVAL_SECONDS", 0.0)

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            outbound_received = []

            async def capture_outbound(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture_outbound)

            stream_events = [
                _make_stream_part(
                    "values",
                    {
                        "messages": [
                            {"type": "human", "content": "hello"},
                            {"type": "ai", "content": "Bootstrap done"},
                        ],
                        "artifacts": [],
                    },
                ),
            ]

            mock_client = _make_mock_langgraph_client()
            mock_client.runs.stream = MagicMock(return_value=_make_async_iterator(stream_events))
            manager._client = mock_client

            await manager.start()

            inbound = InboundMessage(
                channel_name="feishu",
                chat_id="chat1",
                user_id="user1",
                text="/bootstrap hello",
                msg_type=InboundMessageType.COMMAND,
                thread_ts="om-source-1",
            )
            await bus.publish_inbound(inbound)
            await _wait_for(lambda: any(m.is_final for m in outbound_received))
            await manager.stop()

            # Should use streaming path (runs.stream, not runs.wait)
            mock_client.runs.stream.assert_called_once()
            call_args = mock_client.runs.stream.call_args

            assert call_args[1]["input"]["messages"][0]["content"] == "hello"
            assert call_args[1]["context"]["is_bootstrap"] is True

            # Final message should be published
            final_msgs = [m for m in outbound_received if m.is_final]
            assert len(final_msgs) == 1
            assert final_msgs[0].text == "Bootstrap done"

        _run(go())

    def test_handle_command_bootstrap_creates_thread_if_needed(self):
        """/bootstrap should create a new thread when none exists."""
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            outbound_received = []

            async def capture_outbound(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture_outbound)

            mock_client = _make_mock_langgraph_client(thread_id="bootstrap-thread")
            manager._client = mock_client

            await manager.start()

            inbound = InboundMessage(
                channel_name="test",
                chat_id="chat1",
                user_id="user1",
                text="/bootstrap init",
                msg_type=InboundMessageType.COMMAND,
            )
            await bus.publish_inbound(inbound)
            await _wait_for(lambda: len(outbound_received) >= 1)
            await manager.stop()

            # A thread should be created
            mock_client.threads.create.assert_called_once()
            assert store.get_thread_id("test", "chat1") == "bootstrap-thread"

        _run(go())

    def test_help_includes_bootstrap(self):
        """/help output should mention /bootstrap."""
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            outbound_received = []

            async def capture(msg):
                outbound_received.append(msg)

            bus.subscribe_outbound(capture)
            await manager.start()

            inbound = InboundMessage(
                channel_name="test",
                chat_id="chat1",
                user_id="user1",
                text="/help",
                msg_type=InboundMessageType.COMMAND,
            )
            await bus.publish_inbound(inbound)
            await _wait_for(lambda: len(outbound_received) >= 1)
            await manager.stop()

            assert "/bootstrap" in outbound_received[0].text

        _run(go())


# ---------------------------------------------------------------------------
# Artifact extraction and channel tests
# ---------------------------------------------------------------------------


class TestExtractArtifacts:
    def test_extracts_from_present_files_tool_call(self):
        from app.channels.manager import _extract_artifacts

        result = {
            "messages": [
                {"type": "human", "content": "generate report"},
                {
                    "type": "ai",
                    "content": "Here is your report.",
                    "tool_calls": [
                        {"name": "present_files", "args": {"filepaths": ["/mnt/user-data/outputs/report.md"]}},
                    ],
                },
                {"type": "tool", "name": "present_files", "content": "Successfully presented files"},
            ]
        }
        assert _extract_artifacts(result) == ["/mnt/user-data/outputs/report.md"]

    def test_empty_when_no_present_files(self):
        from app.channels.manager import _extract_artifacts

        result = {
            "messages": [
                {"type": "human", "content": "hello"},
                {"type": "ai", "content": "hello"},
            ]
        }
        assert _extract_artifacts(result) == []

    def test_empty_for_list_result_no_tool_calls(self):
        from app.channels.manager import _extract_artifacts

        result = [{"type": "ai", "content": "hello"}]
        assert _extract_artifacts(result) == []

    def test_only_extracts_after_last_human_message(self):
        """Artifacts from previous turns (before the last human message) should be ignored."""
        from app.channels.manager import _extract_artifacts

        result = {
            "messages": [
                {"type": "human", "content": "make report"},
                {
                    "type": "ai",
                    "content": "Created report.",
                    "tool_calls": [
                        {"name": "present_files", "args": {"filepaths": ["/mnt/user-data/outputs/report.md"]}},
                    ],
                },
                {"type": "tool", "name": "present_files", "content": "ok"},
                {"type": "human", "content": "add chart"},
                {
                    "type": "ai",
                    "content": "Created chart.",
                    "tool_calls": [
                        {"name": "present_files", "args": {"filepaths": ["/mnt/user-data/outputs/chart.png"]}},
                    ],
                },
                {"type": "tool", "name": "present_files", "content": "ok"},
            ]
        }
        # Should only return chart.png (from the last turn)
        assert _extract_artifacts(result) == ["/mnt/user-data/outputs/chart.png"]

    def test_multiple_files_in_single_call(self):
        from app.channels.manager import _extract_artifacts

        result = {
            "messages": [
                {"type": "human", "content": "export"},
                {
                    "type": "ai",
                    "content": "Done.",
                    "tool_calls": [
                        {"name": "present_files", "args": {"filepaths": ["/mnt/user-data/outputs/a.txt", "/mnt/user-data/outputs/b.csv"]}},
                    ],
                },
            ]
        }
        assert _extract_artifacts(result) == ["/mnt/user-data/outputs/a.txt", "/mnt/user-data/outputs/b.csv"]


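# Illustrative standalone sketch of the behavior exercised above; this is NOT
# the actual app.channels.manager implementation, and the _sketch name is
# hypothetical. It shows the rule the tests assert: only present_files
# filepaths appearing after the last human message are collected.
def _extract_last_turn_artifacts_sketch(messages):
    # Index of the last human message; only messages after it count.
    last_human = max(
        (i for i, m in enumerate(messages) if m.get("type") == "human"),
        default=-1,
    )
    paths = []
    for m in messages[last_human + 1:]:
        for call in m.get("tool_calls") or []:
            if call.get("name") == "present_files":
                paths.extend(call.get("args", {}).get("filepaths", []))
    return paths

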
class TestFormatArtifactText:
    def test_single_artifact(self):
        from app.channels.manager import _format_artifact_text

        text = _format_artifact_text(["/mnt/user-data/outputs/report.md"])
        assert text == "Created File: 📎 report.md"

    def test_multiple_artifacts(self):
        from app.channels.manager import _format_artifact_text

        text = _format_artifact_text(
            ["/mnt/user-data/outputs/a.txt", "/mnt/user-data/outputs/b.csv"],
        )
        assert text == "Created Files: 📎 a.txt、b.csv"


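# Illustrative sketch of the formatting asserted above (hypothetical
# re-implementation, not the app.channels.manager code): basenames are joined
# with "、" under a singular/plural label.
from pathlib import PurePosixPath


def _format_artifact_text_sketch(filepaths):
    names = [PurePosixPath(p).name for p in filepaths]
    label = "Created File" if len(names) == 1 else "Created Files"
    return f"{label}: 📎 " + "、".join(names)

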
class TestHandleChatWithArtifacts:
    def test_artifacts_appended_to_text(self):
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            run_result = {
                "messages": [
                    {"type": "human", "content": "generate report"},
                    {
                        "type": "ai",
                        "content": "Here is your report.",
                        "tool_calls": [
                            {"name": "present_files", "args": {"filepaths": ["/mnt/user-data/outputs/report.md"]}},
                        ],
                    },
                    {"type": "tool", "name": "present_files", "content": "ok"},
                ],
            }
            mock_client = _make_mock_langgraph_client(run_result=run_result)
            manager._client = mock_client

            outbound_received = []
            bus.subscribe_outbound(lambda msg: outbound_received.append(msg))
            await manager.start()

            await bus.publish_inbound(
                InboundMessage(
                    channel_name="test",
                    chat_id="c1",
                    user_id="u1",
                    text="generate report",
                )
            )
            await _wait_for(lambda: len(outbound_received) >= 1)
            await manager.stop()

            assert len(outbound_received) == 1
            assert "Here is your report." in outbound_received[0].text
            assert "report.md" in outbound_received[0].text
            assert outbound_received[0].artifacts == ["/mnt/user-data/outputs/report.md"]

        _run(go())

    def test_artifacts_only_no_text(self):
        """When the agent produces artifacts but no text, the artifacts should be the response."""
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            run_result = {
                "messages": [
                    {"type": "human", "content": "export data"},
                    {
                        "type": "ai",
                        "content": "",
                        "tool_calls": [
                            {"name": "present_files", "args": {"filepaths": ["/mnt/user-data/outputs/output.csv"]}},
                        ],
                    },
                    {"type": "tool", "name": "present_files", "content": "ok"},
                ],
            }
            mock_client = _make_mock_langgraph_client(run_result=run_result)
            manager._client = mock_client

            outbound_received = []
            bus.subscribe_outbound(lambda msg: outbound_received.append(msg))
            await manager.start()

            await bus.publish_inbound(
                InboundMessage(
                    channel_name="test",
                    chat_id="c1",
                    user_id="u1",
                    text="export data",
                )
            )
            await _wait_for(lambda: len(outbound_received) >= 1)
            await manager.stop()

            assert len(outbound_received) == 1
            # Should NOT be the "(No response from agent)" fallback
            assert outbound_received[0].text != "(No response from agent)"
            assert "output.csv" in outbound_received[0].text
            assert outbound_received[0].artifacts == ["/mnt/user-data/outputs/output.csv"]

        _run(go())

    def test_only_last_turn_artifacts_returned(self):
        """Only artifacts from the current turn's present_files calls should be included."""
        from app.channels.manager import ChannelManager

        async def go():
            bus = MessageBus()
            store = ChannelStore(path=Path(tempfile.mkdtemp()) / "store.json")
            manager = ChannelManager(bus=bus, store=store)

            # Turn 1: produces report.md
            turn1_result = {
                "messages": [
                    {"type": "human", "content": "make report"},
                    {
                        "type": "ai",
                        "content": "Created report.",
                        "tool_calls": [
                            {"name": "present_files", "args": {"filepaths": ["/mnt/user-data/outputs/report.md"]}},
                        ],
                    },
                    {"type": "tool", "name": "present_files", "content": "ok"},
                ],
            }
            # Turn 2: accumulated messages include turn 1's artifacts, but only chart.png is new
            turn2_result = {
                "messages": [
                    {"type": "human", "content": "make report"},
                    {
                        "type": "ai",
                        "content": "Created report.",
                        "tool_calls": [
                            {"name": "present_files", "args": {"filepaths": ["/mnt/user-data/outputs/report.md"]}},
                        ],
                    },
                    {"type": "tool", "name": "present_files", "content": "ok"},
                    {"type": "human", "content": "add chart"},
                    {
                        "type": "ai",
                        "content": "Created chart.",
                        "tool_calls": [
                            {"name": "present_files", "args": {"filepaths": ["/mnt/user-data/outputs/chart.png"]}},
                        ],
                    },
                    {"type": "tool", "name": "present_files", "content": "ok"},
                ],
            }

            mock_client = _make_mock_langgraph_client(thread_id="thread-dup-test")
            mock_client.runs.wait = AsyncMock(side_effect=[turn1_result, turn2_result])
            manager._client = mock_client

            outbound_received = []
            bus.subscribe_outbound(lambda msg: outbound_received.append(msg))
            await manager.start()

            # Send two messages with the same topic_id (same thread)
            for text in ["make report", "add chart"]:
                msg = InboundMessage(
                    channel_name="test",
                    chat_id="c1",
                    user_id="u1",
                    text=text,
                    topic_id="topic-dup",
                )
                await bus.publish_inbound(msg)

            await _wait_for(lambda: len(outbound_received) >= 2)
            await manager.stop()

            assert len(outbound_received) == 2

            # Turn 1: should include report.md
            assert "report.md" in outbound_received[0].text
            assert outbound_received[0].artifacts == ["/mnt/user-data/outputs/report.md"]

            # Turn 2: should include ONLY chart.png (report.md is from the previous turn)
            assert "chart.png" in outbound_received[1].text
            assert "report.md" not in outbound_received[1].text
            assert outbound_received[1].artifacts == ["/mnt/user-data/outputs/chart.png"]

        _run(go())


class TestFeishuChannel:
    def test_prepare_inbound_publishes_without_waiting_for_running_card(self):
        from app.channels.feishu import FeishuChannel

        async def go():
            bus = MessageBus()
            bus.publish_inbound = AsyncMock()
            channel = FeishuChannel(bus, config={})

            reply_started = asyncio.Event()
            release_reply = asyncio.Event()

            async def slow_reply(message_id: str, text: str) -> str:
                reply_started.set()
                await release_reply.wait()
                return "om-running-card"

            channel._add_reaction = AsyncMock()
            channel._reply_card = AsyncMock(side_effect=slow_reply)

            inbound = InboundMessage(
                channel_name="feishu",
                chat_id="chat-1",
                user_id="user-1",
                text="hello",
                thread_ts="om-source-msg",
            )

            prepare_task = asyncio.create_task(channel._prepare_inbound("om-source-msg", inbound))

            await _wait_for(lambda: bus.publish_inbound.await_count == 1)
            await prepare_task

            assert reply_started.is_set()
            assert "om-source-msg" in channel._running_card_tasks
            assert channel._reply_card.await_count == 1

            release_reply.set()
            await _wait_for(lambda: channel._running_card_ids.get("om-source-msg") == "om-running-card")
            await _wait_for(lambda: "om-source-msg" not in channel._running_card_tasks)

        _run(go())

    def test_prepare_inbound_and_send_share_running_card_task(self):
        from app.channels.feishu import FeishuChannel

        async def go():
            bus = MessageBus()
            bus.publish_inbound = AsyncMock()
            channel = FeishuChannel(bus, config={})
            channel._api_client = MagicMock()

            reply_started = asyncio.Event()
            release_reply = asyncio.Event()

            async def slow_reply(message_id: str, text: str) -> str:
                reply_started.set()
                await release_reply.wait()
                return "om-running-card"

            channel._add_reaction = AsyncMock()
            channel._reply_card = AsyncMock(side_effect=slow_reply)
            channel._update_card = AsyncMock()

            inbound = InboundMessage(
                channel_name="feishu",
                chat_id="chat-1",
                user_id="user-1",
                text="hello",
                thread_ts="om-source-msg",
            )

            prepare_task = asyncio.create_task(channel._prepare_inbound("om-source-msg", inbound))
            await _wait_for(lambda: bus.publish_inbound.await_count == 1)
            await _wait_for(reply_started.is_set)

            send_task = asyncio.create_task(
                channel.send(
                    OutboundMessage(
                        channel_name="feishu",
                        chat_id="chat-1",
                        thread_id="thread-1",
                        text="Hello",
                        is_final=False,
                        thread_ts="om-source-msg",
                    )
                )
            )

            await asyncio.sleep(0)
            assert channel._reply_card.await_count == 1

            release_reply.set()
            await prepare_task
            await send_task

            assert channel._reply_card.await_count == 1
            channel._update_card.assert_awaited_once_with("om-running-card", "Hello")
            assert "om-source-msg" not in channel._running_card_tasks

        _run(go())

    def test_streaming_reuses_single_running_card(self):
        from lark_oapi.api.im.v1 import (
            CreateMessageReactionRequest,
            CreateMessageReactionRequestBody,
            Emoji,
            PatchMessageRequest,
            PatchMessageRequestBody,
            ReplyMessageRequest,
            ReplyMessageRequestBody,
        )

        from app.channels.feishu import FeishuChannel

        async def go():
            bus = MessageBus()
            channel = FeishuChannel(bus, config={})

            channel._api_client = MagicMock()
            channel._ReplyMessageRequest = ReplyMessageRequest
            channel._ReplyMessageRequestBody = ReplyMessageRequestBody
            channel._PatchMessageRequest = PatchMessageRequest
            channel._PatchMessageRequestBody = PatchMessageRequestBody
            channel._CreateMessageReactionRequest = CreateMessageReactionRequest
            channel._CreateMessageReactionRequestBody = CreateMessageReactionRequestBody
            channel._Emoji = Emoji

            reply_response = MagicMock()
            reply_response.data.message_id = "om-running-card"
            channel._api_client.im.v1.message.reply = MagicMock(return_value=reply_response)
            channel._api_client.im.v1.message.patch = MagicMock()
            channel._api_client.im.v1.message_reaction.create = MagicMock()

            await channel._send_running_reply("om-source-msg")

            await channel.send(
                OutboundMessage(
                    channel_name="feishu",
                    chat_id="chat-1",
                    thread_id="thread-1",
                    text="Hello",
                    is_final=False,
                    thread_ts="om-source-msg",
                )
            )
            await channel.send(
                OutboundMessage(
                    channel_name="feishu",
                    chat_id="chat-1",
                    thread_id="thread-1",
                    text="Hello world",
                    is_final=True,
                    thread_ts="om-source-msg",
                )
            )

            assert channel._api_client.im.v1.message.reply.call_count == 1
            assert channel._api_client.im.v1.message.patch.call_count == 2
            assert channel._api_client.im.v1.message_reaction.create.call_count == 1
            assert "om-source-msg" not in channel._running_card_ids
            assert "om-source-msg" not in channel._running_card_tasks

            first_patch_request = channel._api_client.im.v1.message.patch.call_args_list[0].args[0]
            final_patch_request = channel._api_client.im.v1.message.patch.call_args_list[1].args[0]
            assert first_patch_request.message_id == "om-running-card"
            assert final_patch_request.message_id == "om-running-card"
            assert json.loads(first_patch_request.body.content)["elements"][0]["content"] == "Hello"
            assert json.loads(final_patch_request.body.content)["elements"][0]["content"] == "Hello world"
            assert json.loads(final_patch_request.body.content)["config"]["update_multi"] is True

        _run(go())


class TestWeComChannel:
    def test_publish_ws_inbound_starts_stream_and_publishes_message(self, monkeypatch):
        from app.channels.wecom import WeComChannel

        async def go():
            bus = MessageBus()
            bus.publish_inbound = AsyncMock()
            channel = WeComChannel(bus, config={})
            channel._ws_client = SimpleNamespace(reply_stream=AsyncMock())

            monkeypatch.setitem(
                __import__("sys").modules,
                "aibot",
                SimpleNamespace(generate_req_id=lambda prefix: "stream-1"),
            )

            frame = {
                "body": {
                    "msgid": "msg-1",
                    "from": {"userid": "user-1"},
                    "aibotid": "bot-1",
                    "chattype": "single",
                }
            }
            files = [{"type": "image", "url": "https://example.com/image.png"}]

            await channel._publish_ws_inbound(frame, "hello", files=files)

            channel._ws_client.reply_stream.assert_awaited_once_with(frame, "stream-1", "Working on it...", False)
            bus.publish_inbound.assert_awaited_once()

            inbound = bus.publish_inbound.await_args.args[0]
            assert inbound.channel_name == "wecom"
            assert inbound.chat_id == "user-1"
            assert inbound.user_id == "user-1"
            assert inbound.text == "hello"
            assert inbound.thread_ts == "msg-1"
            assert inbound.topic_id == "user-1"
            assert inbound.files == files
            assert inbound.metadata == {"aibotid": "bot-1", "chattype": "single"}
            assert channel._ws_frames["msg-1"] is frame
            assert channel._ws_stream_ids["msg-1"] == "stream-1"

        _run(go())

    def test_publish_ws_inbound_uses_configured_working_message(self, monkeypatch):
        from app.channels.wecom import WeComChannel

        async def go():
            bus = MessageBus()
            bus.publish_inbound = AsyncMock()
            channel = WeComChannel(bus, config={"working_message": "Please wait..."})
            channel._ws_client = SimpleNamespace(reply_stream=AsyncMock())
            channel._working_message = "Please wait..."

            monkeypatch.setitem(
                __import__("sys").modules,
                "aibot",
                SimpleNamespace(generate_req_id=lambda prefix: "stream-1"),
            )

            frame = {
                "body": {
                    "msgid": "msg-1",
                    "from": {"userid": "user-1"},
                }
            }

            await channel._publish_ws_inbound(frame, "hello")

            channel._ws_client.reply_stream.assert_awaited_once_with(frame, "stream-1", "Please wait...", False)

        _run(go())

    def test_on_outbound_sends_attachment_before_clearing_context(self, tmp_path):
        from app.channels.wecom import WeComChannel

        async def go():
            bus = MessageBus()
            channel = WeComChannel(bus, config={})

            frame = {"body": {"msgid": "msg-1"}}
            ws_client = SimpleNamespace(
                reply_stream=AsyncMock(),
                reply=AsyncMock(),
            )
            channel._ws_client = ws_client
            channel._ws_frames["msg-1"] = frame
            channel._ws_stream_ids["msg-1"] = "stream-1"
            channel._upload_media_ws = AsyncMock(return_value="media-1")

            attachment_path = tmp_path / "image.png"
            attachment_path.write_bytes(b"png")
            attachment = ResolvedAttachment(
                virtual_path="/mnt/user-data/outputs/image.png",
                actual_path=attachment_path,
                filename="image.png",
                mime_type="image/png",
                size=attachment_path.stat().st_size,
                is_image=True,
            )

            msg = OutboundMessage(
                channel_name="wecom",
                chat_id="user-1",
                thread_id="thread-1",
                text="done",
                attachments=[attachment],
                is_final=True,
                thread_ts="msg-1",
            )

            await channel._on_outbound(msg)

            ws_client.reply_stream.assert_awaited_once_with(frame, "stream-1", "done", True)
            channel._upload_media_ws.assert_awaited_once_with(
                media_type="image",
                filename="image.png",
                path=str(attachment_path),
                size=attachment.size,
            )
            ws_client.reply.assert_awaited_once_with(frame, {"image": {"media_id": "media-1"}, "msgtype": "image"})
            assert "msg-1" not in channel._ws_frames
            assert "msg-1" not in channel._ws_stream_ids

        _run(go())

    def test_send_falls_back_to_send_message_without_thread_context(self):
        from app.channels.wecom import WeComChannel

        async def go():
            bus = MessageBus()
            channel = WeComChannel(bus, config={})
            channel._ws_client = SimpleNamespace(send_message=AsyncMock())

            msg = OutboundMessage(
                channel_name="wecom",
                chat_id="user-1",
                thread_id="thread-1",
                text="hello",
                thread_ts=None,
            )

            await channel.send(msg)

            channel._ws_client.send_message.assert_awaited_once_with(
                "user-1",
                {"msgtype": "markdown", "markdown": {"content": "hello"}},
            )

        _run(go())


class TestChannelService:
    def test_get_status_no_channels(self):
        from app.channels.service import ChannelService

        async def go():
            service = ChannelService(channels_config={})
            await service.start()

            status = service.get_status()
            assert status["service_running"] is True
            for ch_status in status["channels"].values():
                assert ch_status["enabled"] is False
                assert ch_status["running"] is False

            await service.stop()

        _run(go())

    def test_disabled_channels_are_skipped(self):
        from app.channels.service import ChannelService

        async def go():
            service = ChannelService(
                channels_config={
                    "feishu": {"enabled": False, "app_id": "x", "app_secret": "y"},
                }
            )
            await service.start()
            assert "feishu" not in service._channels
            await service.stop()

        _run(go())

    def test_session_config_is_forwarded_to_manager(self):
        from app.channels.service import ChannelService

        service = ChannelService(
            channels_config={
                "session": {"context": {"thinking_enabled": False}},
                "telegram": {
                    "enabled": False,
                    "session": {
                        "assistant_id": "mobile_agent",
                        "users": {
                            "vip": {
                                "assistant_id": "vip_agent",
                            }
                        },
                    },
                },
            }
        )

        assert service.manager._default_session["context"]["thinking_enabled"] is False
        assert service.manager._channel_sessions["telegram"]["assistant_id"] == "mobile_agent"
        assert service.manager._channel_sessions["telegram"]["users"]["vip"]["assistant_id"] == "vip_agent"

    def test_service_urls_fall_back_to_env(self, monkeypatch):
        from app.channels.service import ChannelService

        monkeypatch.setenv("DEER_FLOW_CHANNELS_LANGGRAPH_URL", "http://langgraph:2024")
        monkeypatch.setenv("DEER_FLOW_CHANNELS_GATEWAY_URL", "http://gateway:8001")

        service = ChannelService(channels_config={})

        assert service.manager._langgraph_url == "http://langgraph:2024"
        assert service.manager._gateway_url == "http://gateway:8001"

    def test_config_service_urls_override_env(self, monkeypatch):
        from app.channels.service import ChannelService

        monkeypatch.setenv("DEER_FLOW_CHANNELS_LANGGRAPH_URL", "http://langgraph:2024")
        monkeypatch.setenv("DEER_FLOW_CHANNELS_GATEWAY_URL", "http://gateway:8001")

        service = ChannelService(
            channels_config={
                "langgraph_url": "http://custom-langgraph:2024",
                "gateway_url": "http://custom-gateway:8001",
            }
        )

        assert service.manager._langgraph_url == "http://custom-langgraph:2024"
        assert service.manager._gateway_url == "http://custom-gateway:8001"


# ---------------------------------------------------------------------------
# Slack send retry tests
# ---------------------------------------------------------------------------


class TestSlackSendRetry:
    def test_retries_on_failure_then_succeeds(self):
        from app.channels.slack import SlackChannel

        async def go():
            bus = MessageBus()
            ch = SlackChannel(bus=bus, config={"bot_token": "xoxb-test", "app_token": "xapp-test"})

            mock_web = MagicMock()
            call_count = 0

            def post_message(**kwargs):
                nonlocal call_count
                call_count += 1
                if call_count < 3:
                    raise ConnectionError("network error")
                return MagicMock()

            mock_web.chat_postMessage = post_message
            ch._web_client = mock_web

            msg = OutboundMessage(channel_name="slack", chat_id="C123", thread_id="t1", text="hello")
            await ch.send(msg)
            assert call_count == 3

        _run(go())


class TestSlackAllowedUsers:
    def test_numeric_allowed_users_match_string_event_user_id(self):
        from app.channels.slack import SlackChannel

        bus = MessageBus()
        bus.publish_inbound = AsyncMock()
        channel = SlackChannel(
            bus=bus,
            config={"allowed_users": [123456]},
        )
        channel._loop = MagicMock()
        channel._loop.is_running.return_value = True
        channel._add_reaction = MagicMock()
        channel._send_running_reply = MagicMock()

        event = {
            "user": "123456",
            "text": "hello from slack",
            "channel": "C123",
            "ts": "1710000000.000100",
        }

        def submit_coro(coro, loop):
            coro.close()
            return MagicMock()

        with patch(
            "app.channels.slack.asyncio.run_coroutine_threadsafe",
            side_effect=submit_coro,
        ) as submit:
            channel._handle_message_event(event)

        channel._add_reaction.assert_called_once_with("C123", "1710000000.000100", "eyes")
        channel._send_running_reply.assert_called_once_with("C123", "1710000000.000100")
        submit.assert_called_once()
        inbound = bus.publish_inbound.call_args.args[0]
        assert inbound.user_id == "123456"
        assert inbound.chat_id == "C123"
        assert inbound.text == "hello from slack"

    def test_raises_after_all_retries_exhausted(self):
        from app.channels.slack import SlackChannel

        async def go():
            bus = MessageBus()
            ch = SlackChannel(bus=bus, config={"bot_token": "xoxb-test", "app_token": "xapp-test"})

            mock_web = MagicMock()
            mock_web.chat_postMessage = MagicMock(side_effect=ConnectionError("fail"))
            ch._web_client = mock_web

            msg = OutboundMessage(channel_name="slack", chat_id="C123", thread_id="t1", text="hello")
            with pytest.raises(ConnectionError):
                await ch.send(msg)

            assert mock_web.chat_postMessage.call_count == 3

        _run(go())

    def test_raises_runtime_error_when_no_attempts_configured(self):
        from app.channels.slack import SlackChannel

        async def go():
            bus = MessageBus()
            ch = SlackChannel(bus=bus, config={"bot_token": "xoxb-test", "app_token": "xapp-test"})
            ch._web_client = MagicMock()

            msg = OutboundMessage(channel_name="slack", chat_id="C123", thread_id="t1", text="hello")
            with pytest.raises(RuntimeError, match="without an exception"):
                await ch.send(msg, _max_retries=0)

        _run(go())


# ---------------------------------------------------------------------------
# Telegram send retry tests
# ---------------------------------------------------------------------------


class TestTelegramSendRetry:
    def test_retries_on_failure_then_succeeds(self):
        from app.channels.telegram import TelegramChannel

        async def go():
            bus = MessageBus()
            ch = TelegramChannel(bus=bus, config={"bot_token": "test-token"})

            mock_app = MagicMock()
            mock_bot = AsyncMock()
            call_count = 0

            async def send_message(**kwargs):
                nonlocal call_count
                call_count += 1
                if call_count < 3:
                    raise ConnectionError("network error")
                result = MagicMock()
                result.message_id = 999
                return result

            mock_bot.send_message = send_message
            mock_app.bot = mock_bot
            ch._application = mock_app

            msg = OutboundMessage(channel_name="telegram", chat_id="12345", thread_id="t1", text="hello")
            await ch.send(msg)
            assert call_count == 3

        _run(go())

    def test_raises_after_all_retries_exhausted(self):
        from app.channels.telegram import TelegramChannel

        async def go():
            bus = MessageBus()
            ch = TelegramChannel(bus=bus, config={"bot_token": "test-token"})

            mock_app = MagicMock()
            mock_bot = AsyncMock()
            mock_bot.send_message = AsyncMock(side_effect=ConnectionError("fail"))
            mock_app.bot = mock_bot
            ch._application = mock_app

            msg = OutboundMessage(channel_name="telegram", chat_id="12345", thread_id="t1", text="hello")
            with pytest.raises(ConnectionError):
                await ch.send(msg)

            assert mock_bot.send_message.call_count == 3

        _run(go())

    def test_raises_runtime_error_when_no_attempts_configured(self):
        from app.channels.telegram import TelegramChannel

        async def go():
            bus = MessageBus()
            ch = TelegramChannel(bus=bus, config={"bot_token": "test-token"})
            ch._application = MagicMock()

            msg = OutboundMessage(channel_name="telegram", chat_id="12345", thread_id="t1", text="hello")
            with pytest.raises(RuntimeError, match="without an exception"):
                await ch.send(msg, _max_retries=0)

        _run(go())


class TestFeishuSendRetry:
    def test_raises_runtime_error_when_no_attempts_configured(self):
        from app.channels.feishu import FeishuChannel

        async def go():
            bus = MessageBus()
            ch = FeishuChannel(bus=bus, config={"app_id": "id", "app_secret": "secret"})
            ch._api_client = MagicMock()

            msg = OutboundMessage(channel_name="feishu", chat_id="chat", thread_id="t1", text="hello")
            with pytest.raises(RuntimeError, match="without an exception"):
                await ch.send(msg, _max_retries=0)

        _run(go())


# ---------------------------------------------------------------------------
# Telegram private-chat thread context tests
# ---------------------------------------------------------------------------


def _make_telegram_update(chat_type: str, message_id: int, *, reply_to_message_id: int | None = None, text: str = "hello"):
    """Build a minimal mock telegram Update for testing _on_text / _cmd_generic."""
    update = MagicMock()
    update.effective_chat.type = chat_type
    update.effective_chat.id = 100
    update.effective_user.id = 42
    update.message.text = text
    update.message.message_id = message_id
    if reply_to_message_id is not None:
        reply_msg = MagicMock()
        reply_msg.message_id = reply_to_message_id
        update.message.reply_to_message = reply_msg
    else:
        update.message.reply_to_message = None
    return update


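# Illustrative sketch (hypothetical helper, not part of app.channels.telegram)
# of the topic_id rules the thread-context tests verify: private chats share a
# single thread (topic_id=None), while group/supergroup chats key the thread on
# the replied-to message id when present, otherwise on the message's own id.
def _derive_topic_id_sketch(chat_type, message_id, reply_to_message_id=None):
    if chat_type == "private":
        return None
    if reply_to_message_id is not None:
        return str(reply_to_message_id)
    return str(message_id)

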
class TestTelegramPrivateChatThread:
    """Verify that private chats use topic_id=None (single thread per chat)."""

    def test_private_chat_no_reply_uses_none_topic(self):
        from app.channels.telegram import TelegramChannel

        async def go():
            bus = MessageBus()
            ch = TelegramChannel(bus=bus, config={"bot_token": "test-token"})
            ch._main_loop = asyncio.get_event_loop()

            update = _make_telegram_update("private", message_id=10)
            await ch._on_text(update, None)

            msg = await asyncio.wait_for(bus.get_inbound(), timeout=2)
            assert msg.topic_id is None

        _run(go())

    def test_private_chat_with_reply_still_uses_none_topic(self):
        from app.channels.telegram import TelegramChannel

        async def go():
            bus = MessageBus()
            ch = TelegramChannel(bus=bus, config={"bot_token": "test-token"})
            ch._main_loop = asyncio.get_event_loop()

            update = _make_telegram_update("private", message_id=11, reply_to_message_id=5)
            await ch._on_text(update, None)

            msg = await asyncio.wait_for(bus.get_inbound(), timeout=2)
            assert msg.topic_id is None

        _run(go())

    def test_group_chat_no_reply_uses_msg_id_as_topic(self):
        from app.channels.telegram import TelegramChannel

        async def go():
            bus = MessageBus()
            ch = TelegramChannel(bus=bus, config={"bot_token": "test-token"})
            ch._main_loop = asyncio.get_event_loop()

            update = _make_telegram_update("group", message_id=20)
            await ch._on_text(update, None)

            msg = await asyncio.wait_for(bus.get_inbound(), timeout=2)
            assert msg.topic_id == "20"

        _run(go())

    def test_group_chat_reply_uses_reply_msg_id_as_topic(self):
        from app.channels.telegram import TelegramChannel

        async def go():
            bus = MessageBus()
            ch = TelegramChannel(bus=bus, config={"bot_token": "test-token"})
            ch._main_loop = asyncio.get_event_loop()

            update = _make_telegram_update("group", message_id=21, reply_to_message_id=15)
            await ch._on_text(update, None)

            msg = await asyncio.wait_for(bus.get_inbound(), timeout=2)
            assert msg.topic_id == "15"

        _run(go())

    def test_supergroup_chat_uses_msg_id_as_topic(self):
        from app.channels.telegram import TelegramChannel

        async def go():
            bus = MessageBus()
            ch = TelegramChannel(bus=bus, config={"bot_token": "test-token"})
|
|
ch._main_loop = asyncio.get_event_loop()
|
|
|
|
update = _make_telegram_update("supergroup", message_id=25)
|
|
await ch._on_text(update, None)
|
|
|
|
msg = await asyncio.wait_for(bus.get_inbound(), timeout=2)
|
|
assert msg.topic_id == "25"
|
|
|
|
_run(go())
|
|
|
|
def test_cmd_generic_private_chat_uses_none_topic(self):
|
|
from app.channels.telegram import TelegramChannel
|
|
|
|
async def go():
|
|
bus = MessageBus()
|
|
ch = TelegramChannel(bus=bus, config={"bot_token": "test-token"})
|
|
ch._main_loop = asyncio.get_event_loop()
|
|
|
|
update = _make_telegram_update("private", message_id=30, text="/new")
|
|
await ch._cmd_generic(update, None)
|
|
|
|
msg = await asyncio.wait_for(bus.get_inbound(), timeout=2)
|
|
assert msg.topic_id is None
|
|
assert msg.msg_type == InboundMessageType.COMMAND
|
|
|
|
_run(go())
|
|
|
|
def test_cmd_generic_group_chat_uses_msg_id_as_topic(self):
|
|
from app.channels.telegram import TelegramChannel
|
|
|
|
async def go():
|
|
bus = MessageBus()
|
|
ch = TelegramChannel(bus=bus, config={"bot_token": "test-token"})
|
|
ch._main_loop = asyncio.get_event_loop()
|
|
|
|
update = _make_telegram_update("group", message_id=31, text="/status")
|
|
await ch._cmd_generic(update, None)
|
|
|
|
msg = await asyncio.wait_for(bus.get_inbound(), timeout=2)
|
|
assert msg.topic_id == "31"
|
|
assert msg.msg_type == InboundMessageType.COMMAND
|
|
|
|
_run(go())
|
|
|
|
def test_cmd_generic_group_chat_reply_uses_reply_msg_id_as_topic(self):
|
|
from app.channels.telegram import TelegramChannel
|
|
|
|
async def go():
|
|
bus = MessageBus()
|
|
ch = TelegramChannel(bus=bus, config={"bot_token": "test-token"})
|
|
ch._main_loop = asyncio.get_event_loop()
|
|
|
|
update = _make_telegram_update("group", message_id=32, reply_to_message_id=20, text="/status")
|
|
await ch._cmd_generic(update, None)
|
|
|
|
msg = await asyncio.wait_for(bus.get_inbound(), timeout=2)
|
|
assert msg.topic_id == "20"
|
|
assert msg.msg_type == InboundMessageType.COMMAND
|
|
|
|
_run(go())
|
|
|
|
|
|
class TestTelegramProcessingOrder:
    """Ensure 'working on it...' is sent before inbound is published."""

    def test_running_reply_sent_before_publish(self):
        from app.channels.telegram import TelegramChannel

        async def go():
            bus = MessageBus()
            ch = TelegramChannel(bus=bus, config={"bot_token": "test-token"})
            ch._main_loop = asyncio.get_event_loop()

            order = []

            async def mock_send_running_reply(chat_id, msg_id):
                order.append("running_reply")

            async def mock_publish_inbound(inbound):
                order.append("publish_inbound")

            ch._send_running_reply = mock_send_running_reply
            ch.bus.publish_inbound = mock_publish_inbound

            await ch._process_incoming_with_reply(
                chat_id="chat1",
                msg_id=123,
                inbound=InboundMessage(channel_name="telegram", chat_id="chat1", user_id="user1", text="hello"),
            )

            assert order == ["running_reply", "publish_inbound"]

        _run(go())

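# Hypothetical sketch of the ordering contract checked above: the
# acknowledgement is awaited before the inbound message is handed to the
# bus, so the user sees feedback before downstream processing can start.
# Parameter names are illustrative, not the real TelegramChannel API.
async def _process_with_ack_sketch(send_running_reply, publish_inbound, chat_id, msg_id, inbound):
    await send_running_reply(chat_id, msg_id)  # acknowledge first
    await publish_inbound(inbound)  # then publish to the bus
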
# ---------------------------------------------------------------------------
# Slack markdown-to-mrkdwn conversion tests (via markdown_to_mrkdwn library)
# ---------------------------------------------------------------------------


class TestSlackMarkdownConversion:
    """Verify that the SlackChannel.send() path applies mrkdwn conversion."""

    def test_bold_converted(self):
        from app.channels.slack import _slack_md_converter

        result = _slack_md_converter.convert("this is **bold** text")
        assert "*bold*" in result
        assert "**" not in result

    def test_link_converted(self):
        from app.channels.slack import _slack_md_converter

        result = _slack_md_converter.convert("[click](https://example.com)")
        assert "<https://example.com|click>" in result

    def test_heading_converted(self):
        from app.channels.slack import _slack_md_converter

        result = _slack_md_converter.convert("# Title")
        assert "*Title*" in result
        assert "#" not in result