mirror of
https://github.com/bytedance/deer-flow.git
synced 2026-04-25 19:28:23 +00:00
* feat(persistence): add SQLAlchemy 2.0 async ORM scaffold
Introduce a unified database configuration (DatabaseConfig) that
controls both the LangGraph checkpointer and the DeerFlow application
persistence layer from a single `database:` config section.
New modules:
- deerflow.config.database_config — Pydantic config with memory/sqlite/postgres backends
- deerflow.persistence — async engine lifecycle, DeclarativeBase with to_dict mixin, Alembic skeleton
- deerflow.runtime.runs.store — RunStore ABC + MemoryRunStore implementation
Gateway integration initializes/tears down the persistence engine in
the existing langgraph_runtime() context manager. Legacy checkpointer
config is preserved for backward compatibility.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(persistence): add RunEventStore ABC + MemoryRunEventStore
Phase 2-A prerequisite for event storage: adds the unified run event
stream interface (RunEventStore) with an in-memory implementation,
RunEventsConfig, gateway integration, and comprehensive tests (27 cases).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(persistence): add ORM models, repositories, DB/JSONL event stores, RunJournal, and API endpoints
Phase 2-B: run persistence + event storage + token tracking.
- ORM models: RunRow (with token fields), ThreadMetaRow, RunEventRow
- RunRepository implements RunStore ABC via SQLAlchemy ORM
- ThreadMetaRepository with owner access control
- DbRunEventStore with trace content truncation and cursor pagination
- JsonlRunEventStore with per-run files and seq recovery from disk
- RunJournal (BaseCallbackHandler) captures LLM/tool/lifecycle events,
accumulates token usage by caller type, buffers and flushes to store
- RunManager now accepts optional RunStore for persistent backing
- Worker creates RunJournal, writes human_message, injects callbacks
- Gateway deps use factory functions (RunRepository when DB available)
- New endpoints: messages, run messages, run events, token-usage
- ThreadCreateRequest gains assistant_id field
- 92 tests pass (33 new), zero regressions
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
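The cursor pagination in DbRunEventStore can be illustrated with a small in-memory sketch (function and field names are illustrative, not the real API): events carry a per-thread monotonic `seq`, the cursor is the last seq the client has seen, and a short page signals the end of the stream.

```python
def paginate_events(events, cursor=None, limit=2):
    """Return (page, next_cursor) of events ordered by their "seq" field.

    `events` is a list of dicts with a monotonically increasing "seq";
    `cursor` is the seq of the last event the caller already received.
    """
    ordered = sorted(events, key=lambda e: e["seq"])
    if cursor is not None:
        ordered = [e for e in ordered if e["seq"] > cursor]
    page = ordered[:limit]
    # A full page implies there may be more; a short page ends iteration.
    next_cursor = page[-1]["seq"] if len(page) == limit else None
    return page, next_cursor
```

In the SQL backend the same shape maps onto a `WHERE seq > :cursor ORDER BY seq LIMIT :n` query.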
* feat(persistence): add user feedback + follow-up run association
Phase 2-C: feedback and follow-up tracking.
- FeedbackRow ORM model (rating +1/-1, optional message_id, comment)
- FeedbackRepository with CRUD, list_by_run/thread, aggregate stats
- Feedback API endpoints: create, list, stats, delete
- follow_up_to_run_id in RunCreateRequest (explicit or auto-detected
from latest successful run on the thread)
- Worker writes follow_up_to_run_id into human_message event metadata
- Gateway deps: feedback_repo factory + getter
- 17 new tests (14 FeedbackRepository + 3 follow-up association)
- 109 total tests pass, zero regressions
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* test+config: comprehensive Phase 2 test coverage + deprecate checkpointer config
- config.example.yaml: deprecate standalone checkpointer section, activate
unified database:sqlite as default (drives both checkpointer + app data)
- New: test_thread_meta_repo.py (14 tests) — full ThreadMetaRepository coverage
including check_access owner logic, list_by_owner pagination
- Extended test_run_repository.py (+4 tests) — completion preserves fields,
list ordering desc, limit, owner_none returns all
- Extended test_run_journal.py (+8 tests) — on_chain_error, track_tokens=false,
middleware no ai_message, unknown caller tokens, convenience fields,
tool_error, non-summarization custom event
- Extended test_run_event_store.py (+7 tests) — DB batch seq continuity,
make_run_event_store factory (memory/db/jsonl/fallback/unknown)
- Extended test_phase2b_integration.py (+4 tests) — create_or_reject persists,
follow-up metadata, summarization in history, full DB-backed lifecycle
- Fixed DB integration test to use proper fake objects (not MagicMock)
for JSON-serializable metadata
- 157 total Phase 2 tests pass, zero regressions
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* config: move default sqlite_dir to .deer-flow/data
Keep SQLite databases alongside other DeerFlow-managed data
(threads, memory) under the .deer-flow/ directory instead of a
top-level ./data folder.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(persistence): remove UTFJSON, use engine-level json_serializer + datetime.now()
- Replace custom UTFJSON type with standard sqlalchemy.JSON in all ORM
models. Add json_serializer=json.dumps(ensure_ascii=False) to all
create_async_engine calls so non-ASCII text (Chinese etc.) is stored
as-is in both SQLite and Postgres.
- Change ORM datetime defaults from datetime.now(UTC) to datetime.now(),
remove UTC imports.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
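The engine-level serializer amounts to one keyword argument on the engine. `json_serializer` is a real SQLAlchemy parameter; the URL and driver below are illustrative:

```python
import json


def json_dumps_utf8(obj) -> str:
    """Serialize JSON columns without escaping non-ASCII text."""
    return json.dumps(obj, ensure_ascii=False)


# Wiring (requires sqlalchemy[asyncio] plus a driver such as aiosqlite):
#
#   from sqlalchemy.ext.asyncio import create_async_engine
#   engine = create_async_engine(
#       "sqlite+aiosqlite:///app.db",
#       json_serializer=json_dumps_utf8,
#   )
```

With the default serializer, Chinese text would be stored as `\uXXXX` escapes; with `ensure_ascii=False` it is stored as-is in both SQLite and Postgres.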
* refactor(gateway): simplify deps.py with getter factory + inline repos
- Replace 6 identical getter functions with _require() factory.
- Inline 3 _make_*_repo() factories into langgraph_runtime(), call
get_session_factory() once instead of 3 times.
- Add thread_meta upsert in start_run (services.py).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(docker): add UV_EXTRAS build arg for optional dependencies
Support installing optional dependency groups (e.g. postgres) at
Docker build time via UV_EXTRAS build arg:
UV_EXTRAS=postgres docker compose build
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(journal): fix flush, token tracking, and consolidate tests
RunJournal fixes:
- _flush_sync: retain events in the buffer when no event loop is available,
instead of dropping them; the worker's finally block flushes via async flush().
- on_llm_end: add tool_calls filter and caller=="lead_agent" guard for
ai_message events; mark message IDs for dedup with record_llm_usage.
- worker.py: persist completion data (tokens, message count) to RunStore
in finally block.
Model factory:
- Auto-inject stream_usage=True for BaseChatOpenAI subclasses with
custom api_base, so usage_metadata is populated in streaming responses.
Test consolidation:
- Delete test_phase2b_integration.py (redundant with existing tests).
- Move DB-backed lifecycle test into test_run_journal.py.
- Add tests for stream_usage injection in test_model_factory.py.
- Clean up executor/task_tool dead journal references.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(events): widen content type to str|dict in all store backends
Allow event content to be a dict (for structured OpenAI-format messages)
in addition to plain strings. Dict values are JSON-serialized for the DB
backend and deserialized on read; memory and JSONL backends handle dicts
natively. Trace truncation now serializes dicts to JSON before measuring.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(events): use metadata flag instead of heuristic for dict content detection
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(converters): add LangChain-to-OpenAI message format converters
Pure functions langchain_to_openai_message, langchain_to_openai_completion,
langchain_messages_to_openai, and _infer_finish_reason for converting
LangChain BaseMessage objects to OpenAI Chat Completions format, used by
RunJournal for event storage. 15 unit tests added.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
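The core role mapping of such a converter might look like this sketch. The real functions operate on LangChain `BaseMessage` objects and also handle tool calls and finish reasons, which are elided here; a plain object with `.type` and `.content` stands in:

```python
# LangChain message type -> OpenAI Chat Completions role (assumed mapping).
_ROLE_MAP = {"human": "user", "ai": "assistant", "system": "system", "tool": "tool"}


def langchain_to_openai_message(msg) -> dict:
    """Sketch of the pure conversion: role mapping plus content passthrough."""
    role = _ROLE_MAP.get(getattr(msg, "type", ""), "user")
    # Empty string/list content becomes null, matching the follow-up fix below.
    content = msg.content or None
    return {"role": role, "content": content}
```

Keeping the conversion a pure function (no I/O, no state) is what makes it easy to cover with the 15 unit tests mentioned above.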
* fix(converters): handle empty list content as null, clean up test
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(events): human_message content uses OpenAI user message format
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat(events): ai_message uses OpenAI format, add ai_tool_call message event
- ai_message content now uses {"role": "assistant", "content": "..."} format
- New ai_tool_call message event emitted when lead_agent LLM responds with tool_calls
- ai_tool_call uses langchain_to_openai_message converter for consistent format
- Both events include finish_reason in metadata ("stop" or "tool_calls")
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(events): add tool_result message event with OpenAI tool message format
Cache tool_call_id from on_tool_start keyed by run_id as fallback for on_tool_end,
then emit a tool_result message event (role=tool, tool_call_id, content) after each
successful tool completion.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat(events): summary content uses OpenAI system message format
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(events): replace llm_start/llm_end with llm_request/llm_response in OpenAI format
Add on_chat_model_start to capture structured prompt messages as llm_request events.
Replace llm_end trace events with llm_response using OpenAI Chat Completions format.
Track llm_call_index to pair request/response events.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(events): add record_middleware method for middleware trace events
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* test(events): add full run sequence integration test for OpenAI content format
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat(events): align message events with checkpoint format and add middleware tag injection
- Message events (ai_message, ai_tool_call, tool_result, human_message) now use
BaseMessage.model_dump() format, matching LangGraph checkpoint values.messages
- on_tool_end extracts tool_call_id/name/status from ToolMessage objects
- on_tool_error now emits tool_result message events with error status
- record_middleware uses middleware:{tag} event_type and middleware category
- Summarization custom events use middleware:summarize category
- TitleMiddleware injects middleware:title tag via get_config() inheritance
- SummarizationMiddleware model bound with middleware:summarize tag
- Worker writes human_message using HumanMessage.model_dump()
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(threads): switch search endpoint to threads_meta table and sync title
- POST /api/threads/search now queries threads_meta table directly,
removing the two-phase Store + Checkpointer scan approach
- Add ThreadMetaRepository.search() with metadata/status filters
- Add ThreadMetaRepository.update_display_name() for title sync
- Worker syncs checkpoint title to threads_meta.display_name on run completion
- Map display_name to values.title in search response for API compatibility
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(threads): history endpoint reads messages from event store
- POST /api/threads/{thread_id}/history now combines two data sources:
checkpointer for checkpoint_id, metadata, title, thread_data;
event store for messages (complete history, not truncated by summarization)
- Strip internal LangGraph metadata keys from response
- Remove full channel_values serialization in favor of selective fields
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: remove duplicate optional-dependencies header in pyproject.toml
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(middleware): pass tagged config to TitleMiddleware ainvoke call
Without the config, the middleware:title tag was not injected,
causing the LLM response to be recorded as a lead_agent ai_message
in run_events.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: resolve merge conflict in .env.example
Keep both DATABASE_URL (from persistence-scaffold) and WECOM
credentials (from main) after the merge.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(persistence): address review feedback on PR #1851
- Fix naive datetime.now() → datetime.now(UTC) in all ORM models
- Fix seq race condition in DbRunEventStore.put() with FOR UPDATE
and UNIQUE(thread_id, seq) constraint
- Encapsulate _store access in RunManager.update_run_completion()
- Deduplicate _store.put() logic in RunManager via _persist_to_store()
- Add update_run_completion to RunStore ABC + MemoryRunStore
- Wire follow_up_to_run_id through the full create path
- Add error recovery to RunJournal._flush_sync() lost-event scenario
- Add migration note for search_threads breaking change
- Fix test_checkpointer_none_fix mock to set database=None
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
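The seq race fix can be pictured with an in-memory analogue: in SQL, the current max seq is read under FOR UPDATE and a UNIQUE(thread_id, seq) constraint backstops any remaining race, while here a lock plays the same serializing role (class name is illustrative):

```python
import threading
from collections import defaultdict


class SeqAllocator:
    """In-memory analogue of per-thread seq allocation in DbRunEventStore."""

    def __init__(self):
        self._lock = threading.Lock()
        self._next = defaultdict(int)  # thread_id -> next free seq

    def allocate(self, thread_id: str, n: int = 1) -> list[int]:
        # The critical section: read-and-advance must be atomic, or two
        # concurrent writers would hand out the same seq values.
        with self._lock:
            start = self._next[thread_id]
            self._next[thread_id] = start + n
            return list(range(start, start + n))
```

The UNIQUE constraint then acts as a safety net: even if locking were bypassed, a duplicate seq would fail at insert time instead of silently corrupting the event order.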
* chore: update uv.lock
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(persistence): address 22 review comments from CodeQL, Copilot, and Code Quality
Bug fixes:
- Sanitize log params to prevent log injection (CodeQL)
- Reset threads_meta.status to idle/error when run completes
- Attach messages only to latest checkpoint in /history response
- Write threads_meta on POST /threads so new threads appear in search
Lint fixes:
- Remove unused imports (journal.py, migrations/env.py, test_converters.py)
- Convert lambda to named function (engine.py, Ruff E731)
- Remove unused logger definitions in repos (Ruff F841)
- Add logging to JSONL decode errors and empty except blocks
- Separate assert side-effects in tests (CodeQL)
- Remove unused local variables in tests (Ruff F841)
- Fix max_trace_content truncation to use byte length, not char length
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
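Byte-accurate truncation of trace content has to avoid splitting a multi-byte UTF-8 sequence; a sketch of the idea (function name hypothetical):

```python
def truncate_utf8(text: str, max_bytes: int) -> str:
    """Truncate to at most max_bytes of UTF-8 without splitting a character."""
    raw = text.encode("utf-8")
    if len(raw) <= max_bytes:
        return text
    # errors="ignore" silently drops a trailing partial multi-byte sequence.
    return raw[:max_bytes].decode("utf-8", errors="ignore")
```

Measuring in characters would under-truncate CJK text (up to 3 bytes per character), which is why the fix switches the `max_trace_content` check to byte length.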
* style: apply ruff format to persistence and runtime files
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Potential fix for pull request finding 'Statement has no effect'
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
* refactor(runtime): introduce RunContext to reduce run_agent parameter bloat
Extract checkpointer, store, event_store, run_events_config, thread_meta_repo,
and follow_up_to_run_id into a frozen RunContext dataclass. Add get_run_context()
in deps.py to build the base context from app.state singletons. start_run() uses
dataclasses.replace() to enrich per-run fields before passing ctx to run_agent.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(gateway): move sanitize_log_param to app/gateway/utils.py
Extract the log-injection sanitizer from routers/threads.py into a shared
utils module and rename to sanitize_log_param (public API). Eliminates the
reverse service → router import in services.py.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* perf: use SQL aggregation for feedback stats and thread token usage
Replace Python-side counting in FeedbackRepository.aggregate_by_run with
a single SELECT COUNT/SUM query. Add RunStore.aggregate_tokens_by_thread
abstract method with SQL GROUP BY implementation in RunRepository and
Python fallback in MemoryRunStore. Simplify the thread_token_usage
endpoint to delegate to the new method, eliminating the limit=10000
truncation risk.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
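The MemoryRunStore fallback for the new aggregation method presumably mirrors the SQL `GROUP BY thread_id` with `SUM` columns in plain Python, along these lines (field names assumed):

```python
from collections import defaultdict


def aggregate_tokens_by_thread(runs: list[dict]) -> dict[str, dict[str, int]]:
    """Python fallback mirroring:
    SELECT thread_id, SUM(input_tokens), SUM(output_tokens) ... GROUP BY thread_id
    """
    totals: dict[str, dict[str, int]] = defaultdict(
        lambda: {"input_tokens": 0, "output_tokens": 0}
    )
    for run in runs:
        t = totals[run["thread_id"]]
        # Treat missing/None counts as zero, as SQL SUM ignores NULLs.
        t["input_tokens"] += run.get("input_tokens") or 0
        t["output_tokens"] += run.get("output_tokens") or 0
    return dict(totals)
```

Pushing this into a single SQL query is what removes the old pattern of fetching up to 10000 runs and summing client-side.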
* docs: annotate DbRunEventStore.put() as low-frequency path
Add docstring clarifying that put() opens a per-call transaction with
FOR UPDATE and should only be used for infrequent writes (currently
just the initial human_message event). High-throughput callers should
use put_batch() instead.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(threads): fall back to Store search when ThreadMetaRepository is unavailable
When database.backend=memory (default) or no SQL session factory is
configured, search_threads now queries the LangGraph Store instead of
returning 503. Returns empty list if neither Store nor repo is available.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(persistence): introduce ThreadMetaStore ABC for backend-agnostic thread metadata
Add ThreadMetaStore abstract base class with create/get/search/update/delete
interface. ThreadMetaRepository (SQL) now inherits from it. New
MemoryThreadMetaStore wraps LangGraph BaseStore for memory-mode deployments.
deps.py now always provides a non-None thread_meta_repo, eliminating all
`if thread_meta_repo is not None` guards in services.py, worker.py, and
routers/threads.py. search_threads no longer needs a Store fallback branch.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(history): read messages from checkpointer instead of RunEventStore
The /history endpoint now reads messages directly from the
checkpointer's channel_values (the authoritative source) instead of
querying RunEventStore.list_messages(). The RunEventStore API is
preserved for other consumers.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(persistence): address new Copilot review comments
- feedback.py: validate thread_id/run_id before deleting feedback
- jsonl.py: add path traversal protection with ID validation
- run_repo.py: parse `before` to datetime for PostgreSQL compat
- thread_meta_repo.py: fix pagination when metadata filter is active
- database_config.py: use resolve_path for sqlite_dir consistency
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Implement skill self-evolution and skill_manage flow (#1874)
* chore: ignore .worktrees directory
* Add skill_manage self-evolution flow
* Fix CI regressions for skill_manage
* Address PR review feedback for skill evolution
* fix(skill-evolution): preserve history on delete
* fix(skill-evolution): tighten scanner fallbacks
* docs: add skill_manage e2e evidence screenshot
* fix(skill-manage): avoid blocking fs ops in session runtime
---------
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
* fix(config): resolve sqlite_dir relative to CWD, not Paths.base_dir
resolve_path() resolves relative to Paths.base_dir (.deer-flow),
which double-nested the path to .deer-flow/.deer-flow/data/app.db.
Use Path.resolve() (CWD-relative) instead.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
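The fix boils down to resolving the configured path against the current working directory with pathlib instead of a helper that prefixes a base directory (helper name hypothetical):

```python
from pathlib import Path


def resolve_sqlite_dir(configured: str) -> Path:
    """Resolve sqlite_dir relative to the CWD.

    A resolver that prefixes Paths.base_dir (.deer-flow) would turn the
    default '.deer-flow/data' into '.deer-flow/.deer-flow/data'.
    """
    return Path(configured).resolve()
```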
* Feature/feishu receive file (#1608)
* feat(feishu): add channel file materialization hook for inbound messages
- Introduce Channel.receive_file(msg, thread_id) as a base method for file materialization; default is no-op.
- Implement FeishuChannel.receive_file to download files/images from Feishu messages, save to sandbox, and inject virtual paths into msg.text.
- Update ChannelManager to call receive_file for any channel if msg.files is present, enabling downstream model access to user-uploaded files.
- No impact on Slack/Telegram or other channels (they inherit the default no-op).
* style(backend): format code with ruff for lint compliance
- Auto-formatted packages/harness/deerflow/agents/factory.py and tests/test_create_deerflow_agent.py using `ruff format`
- Ensured both files conform to project linting standards
- Fixes CI lint check failures caused by code style issues
* fix(feishu): handle file write operation asynchronously to prevent blocking
* fix(feishu): rename GetMessageResourceRequest to _GetMessageResourceRequest and remove redundant code
* test(feishu): add tests for receive_file method and placeholder replacement
* fix(manager): remove unnecessary type casting for channel retrieval
* fix(feishu): update logging messages to reflect resource handling instead of image
* fix(feishu): sanitize filename by replacing invalid characters in file uploads
* fix(feishu): improve filename sanitization and reorder image key handling in message processing
* fix(feishu): add thread lock to prevent filename conflicts during file downloads
* fix(test): correct bad merge in test_feishu_parser.py
* chore: run ruff and apply formatting cleanup
fix(feishu): preserve rich-text attachment order and improve fallback filename handling
* fix(docker): restore gateway env vars and fix langgraph empty arg issue (#1915)
Two production docker-compose.yaml bugs prevent `make up` from working:
1. Gateway missing DEER_FLOW_CONFIG_PATH and DEER_FLOW_EXTENSIONS_CONFIG_PATH
environment overrides. Added in fb2d99f (#1836) but accidentally reverted
by ca2fb95 (#1847). Without them, gateway reads host paths from .env via
env_file, causing FileNotFoundError inside the container.
2. The langgraph command fails when LANGGRAPH_ALLOW_BLOCKING is unset (the default).
An empty $${allow_blocking} inserts a bare space between flags, causing
' --no-reload' to be parsed as an unexpected extra argument. Fixed by building
the args string first and conditionally appending --allow-blocking.
Co-authored-by: cooper <cooperfu@tencent.com>
* fix(frontend): resolve invalid HTML nesting and tabnabbing vulnerabilities (#1904)
* fix(frontend): resolve invalid HTML nesting and tabnabbing vulnerabilities
Fix `<button>` inside `<a>` invalid HTML in artifact components and add
missing `noopener,noreferrer` to `window.open` calls to prevent reverse
tabnabbing.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(frontend): address Copilot review on tabnabbing and double-tab-open
Remove redundant parent onClick on web_fetch ChainOfThoughtStep to
prevent opening two tabs on link click, and explicitly null out
window.opener after window.open() for defensive tabnabbing hardening.
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(persistence): organize entities into per-entity directories
Restructure the persistence layer from horizontal "models/ + repositories/"
split into vertical entity-aligned directories. Each entity (thread_meta,
run, feedback) now owns its ORM model, abstract interface (where applicable),
and concrete implementations under a single directory with an aggregating
__init__.py for one-line imports.
Layout:
persistence/thread_meta/{base,model,sql,memory}.py
persistence/run/{model,sql}.py
persistence/feedback/{model,sql}.py
models/__init__.py is kept as a facade so Alembic autogenerate continues to
discover all ORM tables via Base.metadata. RunEventRow remains under
models/run_event.py because its storage implementation lives in
runtime/events/store/db.py and has no matching repository directory.
The repositories/ directory is removed entirely. All call sites in
gateway/deps.py and tests are updated to import from the new entity
packages, e.g.:
from deerflow.persistence.thread_meta import ThreadMetaRepository
from deerflow.persistence.run import RunRepository
from deerflow.persistence.feedback import FeedbackRepository
Full test suite passes (1690 passed, 14 skipped).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(gateway): sync thread rename and delete through ThreadMetaStore
The POST /threads/{id}/state endpoint previously synced title changes
only to the LangGraph Store via _store_upsert. In sqlite mode the search
endpoint reads from the ThreadMetaRepository SQL table, so renames never
appeared in /threads/search until the next agent run completed (worker.py
syncs title from checkpoint to thread_meta in its finally block).
Likewise the DELETE /threads/{id} endpoint cleaned up the filesystem,
Store, and checkpointer but left the threads_meta row orphaned in sqlite,
so deleted threads kept appearing in /threads/search.
Fix both endpoints by routing through the ThreadMetaStore abstraction
which already has the correct sqlite/memory implementations wired up by
deps.py. The rename path now calls update_display_name() and the delete
path calls delete() — both work uniformly across backends.
Verified end-to-end with curl in gateway mode against sqlite backend.
Existing test suite (1690 passed) and focused router/repo tests pass.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(gateway): route all thread metadata access through ThreadMetaStore
Following the rename/delete bug fix in PR1, migrate the remaining direct
LangGraph Store reads/writes in the threads router and services to the
ThreadMetaStore abstraction so that the sqlite and memory backends behave
identically and the legacy dual-write paths can be removed.
Migrated endpoints (threads.py):
- create_thread: idempotency check + write now use thread_meta_repo.get/create
instead of dual-writing the LangGraph Store and the SQL row.
- get_thread: reads from thread_meta_repo.get; the checkpoint-only fallback
for legacy threads is preserved.
- patch_thread: replaced _store_get/_store_put with thread_meta_repo.update_metadata.
- delete_thread_data: dropped the legacy store.adelete; thread_meta_repo.delete
already covers it.
Removed dead code (services.py):
- _upsert_thread_in_store — redundant with the immediately following
thread_meta_repo.create() call.
- _sync_thread_title_after_run — worker.py's finally block already syncs
the title via thread_meta_repo.update_display_name() after each run.
Removed dead code (threads.py):
- _store_get / _store_put / _store_upsert helpers (no remaining callers).
- THREADS_NS constant.
- get_store import (router no longer touches the LangGraph Store directly).
New abstract method:
- ThreadMetaStore.update_metadata(thread_id, metadata) merges metadata into
the thread's metadata field. Implemented in both ThreadMetaRepository (SQL,
read-modify-write inside one session) and MemoryThreadMetaStore. Three new
unit tests cover merge / empty / nonexistent behaviour.
Net change: -134 lines. Full test suite: 1693 passed, 14 skipped.
Verified end-to-end with curl in gateway mode against sqlite backend
(create / patch / get / rename / search / delete).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Co-authored-by: DanielWalnut <45447813+hetaoBackend@users.noreply.github.com>
Co-authored-by: Willem Jiang <willem.jiang@gmail.com>
Co-authored-by: JilongSun <965640067@qq.com>
Co-authored-by: jie <49781832+stan-fu@users.noreply.github.com>
Co-authored-by: cooper <cooperfu@tencent.com>
Co-authored-by: yangzheli <43645580+yangzheli@users.noreply.github.com>
944 lines
33 KiB
Python
"""Tests for deerflow.models.factory.create_chat_model."""
|
|
|
|
from __future__ import annotations
|
|
|
|
import pytest
|
|
from langchain.chat_models import BaseChatModel
|
|
|
|
from deerflow.config.app_config import AppConfig
|
|
from deerflow.config.model_config import ModelConfig
|
|
from deerflow.config.sandbox_config import SandboxConfig
|
|
from deerflow.models import factory as factory_module
|
|
from deerflow.models import openai_codex_provider as codex_provider_module
|
|
|
|
# ---------------------------------------------------------------------------
|
|
# Helpers
|
|
# ---------------------------------------------------------------------------
|
|
|
|
|
|
def _make_app_config(models: list[ModelConfig]) -> AppConfig:
|
|
return AppConfig(
|
|
models=models,
|
|
sandbox=SandboxConfig(use="deerflow.sandbox.local:LocalSandboxProvider"),
|
|
)
|
|
|
|
|
|
def _make_model(
|
|
name: str = "test-model",
|
|
*,
|
|
use: str = "langchain_openai:ChatOpenAI",
|
|
supports_thinking: bool = False,
|
|
supports_reasoning_effort: bool = False,
|
|
when_thinking_enabled: dict | None = None,
|
|
when_thinking_disabled: dict | None = None,
|
|
thinking: dict | None = None,
|
|
max_tokens: int | None = None,
|
|
) -> ModelConfig:
|
|
return ModelConfig(
|
|
name=name,
|
|
display_name=name,
|
|
description=None,
|
|
use=use,
|
|
model=name,
|
|
max_tokens=max_tokens,
|
|
supports_thinking=supports_thinking,
|
|
supports_reasoning_effort=supports_reasoning_effort,
|
|
when_thinking_enabled=when_thinking_enabled,
|
|
when_thinking_disabled=when_thinking_disabled,
|
|
thinking=thinking,
|
|
supports_vision=False,
|
|
)
|
|
|
|
|
|
class FakeChatModel(BaseChatModel):
|
|
"""Minimal BaseChatModel stub that records the kwargs it was called with."""
|
|
|
|
captured_kwargs: dict = {}
|
|
|
|
def __init__(self, **kwargs):
|
|
# Store kwargs before pydantic processes them
|
|
FakeChatModel.captured_kwargs = dict(kwargs)
|
|
super().__init__(**kwargs)
|
|
|
|
@property
|
|
def _llm_type(self) -> str:
|
|
return "fake"
|
|
|
|
def _generate(self, *args, **kwargs): # type: ignore[override]
|
|
raise NotImplementedError
|
|
|
|
def _stream(self, *args, **kwargs): # type: ignore[override]
|
|
raise NotImplementedError
|
|
|
|
|
|
def _patch_factory(monkeypatch, app_config: AppConfig, model_class=FakeChatModel):
|
|
"""Patch get_app_config, resolve_class, and tracing for isolated unit tests."""
|
|
monkeypatch.setattr(factory_module, "get_app_config", lambda: app_config)
|
|
monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: model_class)
|
|
monkeypatch.setattr(factory_module, "build_tracing_callbacks", lambda: [])
|
|
|
|
|
|
# ---------------------------------------------------------------------------
|
|
# Model selection
|
|
# ---------------------------------------------------------------------------
|
|
|
|
|
|
def test_uses_first_model_when_name_is_none(monkeypatch):
|
|
cfg = _make_app_config([_make_model("alpha"), _make_model("beta")])
|
|
_patch_factory(monkeypatch, cfg)
|
|
|
|
FakeChatModel.captured_kwargs = {}
|
|
factory_module.create_chat_model(name=None)
|
|
|
|
# resolve_class is called — if we reach here without ValueError, the correct model was used
|
|
assert FakeChatModel.captured_kwargs.get("model") == "alpha"
|
|
|
|
|
|
def test_raises_when_model_not_found(monkeypatch):
|
|
cfg = _make_app_config([_make_model("only-model")])
|
|
monkeypatch.setattr(factory_module, "get_app_config", lambda: cfg)
|
|
monkeypatch.setattr(factory_module, "build_tracing_callbacks", lambda: [])
|
|
|
|
with pytest.raises(ValueError, match="ghost-model"):
|
|
factory_module.create_chat_model(name="ghost-model")
|
|
|
|
|
|
def test_appends_all_tracing_callbacks(monkeypatch):
|
|
cfg = _make_app_config([_make_model("alpha")])
|
|
_patch_factory(monkeypatch, cfg)
|
|
monkeypatch.setattr(factory_module, "build_tracing_callbacks", lambda: ["smith-callback", "langfuse-callback"])
|
|
|
|
FakeChatModel.captured_kwargs = {}
|
|
model = factory_module.create_chat_model(name="alpha")
|
|
|
|
assert model.callbacks == ["smith-callback", "langfuse-callback"]
|
|
|
|
|
|
# ---------------------------------------------------------------------------
|
|
# thinking_enabled=True
|
|
# ---------------------------------------------------------------------------
|
|
|
|
|
|
def test_thinking_enabled_raises_when_not_supported_but_when_thinking_enabled_is_set(monkeypatch):
|
|
"""supports_thinking guard fires only when when_thinking_enabled is configured —
|
|
the factory uses that as the signal that the caller explicitly expects thinking to work."""
|
|
wte = {"thinking": {"type": "enabled", "budget_tokens": 5000}}
|
|
cfg = _make_app_config([_make_model("no-think", supports_thinking=False, when_thinking_enabled=wte)])
|
|
_patch_factory(monkeypatch, cfg)
|
|
|
|
with pytest.raises(ValueError, match="does not support thinking"):
|
|
factory_module.create_chat_model(name="no-think", thinking_enabled=True)
|
|
|
|
|
|
def test_thinking_enabled_raises_for_empty_when_thinking_enabled_explicitly_set(monkeypatch):
|
|
"""supports_thinking guard fires when when_thinking_enabled is set to an empty dict —
|
|
the user explicitly provided the section, so the guard must still fire even though
|
|
effective_wte would be falsy."""
|
|
cfg = _make_app_config([_make_model("no-think-empty", supports_thinking=False, when_thinking_enabled={})])
|
|
_patch_factory(monkeypatch, cfg)
|
|
|
|
with pytest.raises(ValueError, match="does not support thinking"):
|
|
factory_module.create_chat_model(name="no-think-empty", thinking_enabled=True)
|
|
|
|
|
|
def test_thinking_enabled_merges_when_thinking_enabled_settings(monkeypatch):
    wte = {"temperature": 1.0, "max_tokens": 16000}
    cfg = _make_app_config([_make_model("thinker", supports_thinking=True, when_thinking_enabled=wte)])
    _patch_factory(monkeypatch, cfg)

    FakeChatModel.captured_kwargs = {}
    factory_module.create_chat_model(name="thinker", thinking_enabled=True)

    assert FakeChatModel.captured_kwargs.get("temperature") == 1.0
    assert FakeChatModel.captured_kwargs.get("max_tokens") == 16000


# ---------------------------------------------------------------------------
# thinking_enabled=False — disable logic
# ---------------------------------------------------------------------------


def test_thinking_disabled_openai_gateway_format(monkeypatch):
    """When thinking is configured via extra_body (OpenAI-compatible gateway),
    disabling must inject extra_body.thinking.type=disabled and reasoning_effort=minimal."""
    wte = {"extra_body": {"thinking": {"type": "enabled", "budget_tokens": 10000}}}
    cfg = _make_app_config(
        [
            _make_model(
                "openai-gw",
                supports_thinking=True,
                supports_reasoning_effort=True,
                when_thinking_enabled=wte,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="openai-gw", thinking_enabled=False)

    assert captured.get("extra_body") == {"thinking": {"type": "disabled"}}
    assert captured.get("reasoning_effort") == "minimal"
    assert "thinking" not in captured  # must NOT set the direct thinking param


def test_thinking_disabled_langchain_anthropic_format(monkeypatch):
    """When thinking is configured as a direct param (langchain_anthropic),
    disabling must inject thinking.type=disabled WITHOUT touching extra_body or reasoning_effort."""
    wte = {"thinking": {"type": "enabled", "budget_tokens": 8000}}
    cfg = _make_app_config(
        [
            _make_model(
                "anthropic-native",
                use="langchain_anthropic:ChatAnthropic",
                supports_thinking=True,
                supports_reasoning_effort=False,
                when_thinking_enabled=wte,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="anthropic-native", thinking_enabled=False)

    assert captured.get("thinking") == {"type": "disabled"}
    assert "extra_body" not in captured
    # reasoning_effort must be cleared (supports_reasoning_effort=False)
    assert captured.get("reasoning_effort") is None


def test_thinking_disabled_no_when_thinking_enabled_does_nothing(monkeypatch):
    """If when_thinking_enabled is not set, disabling thinking must not inject any kwargs."""
    cfg = _make_app_config([_make_model("plain", supports_thinking=True, when_thinking_enabled=None)])
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="plain", thinking_enabled=False)

    assert "extra_body" not in captured
    assert "thinking" not in captured
    # reasoning_effort not forced (supports_reasoning_effort defaults to False → cleared)
    assert captured.get("reasoning_effort") is None


# ---------------------------------------------------------------------------
# when_thinking_disabled config
# ---------------------------------------------------------------------------


def test_when_thinking_disabled_takes_precedence_over_hardcoded_disable(monkeypatch):
    """When when_thinking_disabled is set, it takes full precedence over the
    hardcoded disable logic (extra_body.thinking.type=disabled etc.)."""
    wte = {"extra_body": {"thinking": {"type": "enabled", "budget_tokens": 10000}}}
    wtd = {"extra_body": {"thinking": {"type": "disabled"}}, "reasoning_effort": "low"}
    cfg = _make_app_config(
        [
            _make_model(
                "custom-disable",
                supports_thinking=True,
                supports_reasoning_effort=True,
                when_thinking_enabled=wte,
                when_thinking_disabled=wtd,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="custom-disable", thinking_enabled=False)

    assert captured.get("extra_body") == {"thinking": {"type": "disabled"}}
    # User overrode the hardcoded "minimal" with "low"
    assert captured.get("reasoning_effort") == "low"


def test_when_thinking_disabled_not_used_when_thinking_enabled(monkeypatch):
    """when_thinking_disabled must have no effect when thinking_enabled=True."""
    wte = {"extra_body": {"thinking": {"type": "enabled"}}}
    wtd = {"extra_body": {"thinking": {"type": "disabled"}}}
    cfg = _make_app_config(
        [
            _make_model(
                "wtd-ignored",
                supports_thinking=True,
                when_thinking_enabled=wte,
                when_thinking_disabled=wtd,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="wtd-ignored", thinking_enabled=True)

    # when_thinking_enabled should apply, NOT when_thinking_disabled
    assert captured.get("extra_body") == {"thinking": {"type": "enabled"}}


def test_when_thinking_disabled_without_when_thinking_enabled_still_applies(monkeypatch):
    """when_thinking_disabled alone (no when_thinking_enabled) should still apply its settings."""
    cfg = _make_app_config(
        [
            _make_model(
                "wtd-only",
                supports_thinking=True,
                supports_reasoning_effort=True,
                when_thinking_disabled={"reasoning_effort": "low"},
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="wtd-only", thinking_enabled=False)

    # when_thinking_disabled is now gated independently of has_thinking_settings
    assert captured.get("reasoning_effort") == "low"


def test_when_thinking_disabled_excluded_from_model_dump(monkeypatch):
    """when_thinking_disabled must not leak into the model constructor kwargs."""
    wte = {"extra_body": {"thinking": {"type": "enabled"}}}
    wtd = {"extra_body": {"thinking": {"type": "disabled"}}}
    cfg = _make_app_config(
        [
            _make_model(
                "no-leak-wtd",
                supports_thinking=True,
                when_thinking_enabled=wte,
                when_thinking_disabled=wtd,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="no-leak-wtd", thinking_enabled=True)

    # when_thinking_disabled value must NOT appear as a raw key
    assert "when_thinking_disabled" not in captured


# ---------------------------------------------------------------------------
# reasoning_effort stripping
# ---------------------------------------------------------------------------


def test_reasoning_effort_cleared_when_not_supported(monkeypatch):
    cfg = _make_app_config([_make_model("no-effort", supports_reasoning_effort=False)])
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="no-effort", thinking_enabled=False)

    assert captured.get("reasoning_effort") is None


def test_reasoning_effort_preserved_when_supported(monkeypatch):
    wte = {"extra_body": {"thinking": {"type": "enabled", "budget_tokens": 5000}}}
    cfg = _make_app_config(
        [
            _make_model(
                "effort-model",
                supports_thinking=True,
                supports_reasoning_effort=True,
                when_thinking_enabled=wte,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="effort-model", thinking_enabled=False)

    # When supports_reasoning_effort=True, it should NOT be cleared to None.
    # The disable path sets it to "minimal"; supports_reasoning_effort=True keeps it
    assert captured.get("reasoning_effort") == "minimal"


# ---------------------------------------------------------------------------
# thinking shortcut field
# ---------------------------------------------------------------------------


def test_thinking_shortcut_enables_thinking_when_thinking_enabled(monkeypatch):
    """thinking shortcut alone should act as when_thinking_enabled with a `thinking` key."""
    thinking_settings = {"type": "enabled", "budget_tokens": 8000}
    cfg = _make_app_config(
        [
            _make_model(
                "shortcut-model",
                use="langchain_anthropic:ChatAnthropic",
                supports_thinking=True,
                thinking=thinking_settings,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="shortcut-model", thinking_enabled=True)

    assert captured.get("thinking") == thinking_settings


def test_thinking_shortcut_disables_thinking_when_thinking_disabled(monkeypatch):
    """thinking shortcut should participate in the disable path (langchain_anthropic format)."""
    thinking_settings = {"type": "enabled", "budget_tokens": 8000}
    cfg = _make_app_config(
        [
            _make_model(
                "shortcut-disable",
                use="langchain_anthropic:ChatAnthropic",
                supports_thinking=True,
                supports_reasoning_effort=False,
                thinking=thinking_settings,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="shortcut-disable", thinking_enabled=False)

    assert captured.get("thinking") == {"type": "disabled"}
    assert "extra_body" not in captured


def test_thinking_shortcut_merges_with_when_thinking_enabled(monkeypatch):
    """thinking shortcut should be merged into when_thinking_enabled when both are provided."""
    thinking_settings = {"type": "enabled", "budget_tokens": 8000}
    wte = {"max_tokens": 16000}
    cfg = _make_app_config(
        [
            _make_model(
                "merge-model",
                use="langchain_anthropic:ChatAnthropic",
                supports_thinking=True,
                thinking=thinking_settings,
                when_thinking_enabled=wte,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="merge-model", thinking_enabled=True)

    # Both the thinking shortcut and when_thinking_enabled settings should be applied
    assert captured.get("thinking") == thinking_settings
    assert captured.get("max_tokens") == 16000


def test_thinking_shortcut_not_leaked_into_model_when_disabled(monkeypatch):
    """thinking shortcut must not be passed raw to the model constructor (excluded from model_dump)."""
    thinking_settings = {"type": "enabled", "budget_tokens": 8000}
    cfg = _make_app_config(
        [
            _make_model(
                "no-leak",
                use="langchain_anthropic:ChatAnthropic",
                supports_thinking=True,
                supports_reasoning_effort=False,
                thinking=thinking_settings,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="no-leak", thinking_enabled=False)

    # The disable path should have set thinking to disabled (not the raw enabled shortcut)
    assert captured.get("thinking") == {"type": "disabled"}


# ---------------------------------------------------------------------------
# OpenAI-compatible providers (MiniMax, Novita, etc.)
# ---------------------------------------------------------------------------


def test_openai_compatible_provider_passes_base_url(monkeypatch):
    """OpenAI-compatible providers like MiniMax should pass base_url through to the model."""
    model = ModelConfig(
        name="minimax-m2.5",
        display_name="MiniMax M2.5",
        description=None,
        use="langchain_openai:ChatOpenAI",
        model="MiniMax-M2.5",
        base_url="https://api.minimax.io/v1",
        api_key="test-key",
        max_tokens=4096,
        temperature=1.0,
        supports_vision=True,
        supports_thinking=False,
    )
    cfg = _make_app_config([model])
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="minimax-m2.5")

    assert captured.get("model") == "MiniMax-M2.5"
    assert captured.get("base_url") == "https://api.minimax.io/v1"
    assert captured.get("api_key") == "test-key"
    assert captured.get("temperature") == 1.0
    assert captured.get("max_tokens") == 4096


def test_openai_compatible_provider_multiple_models(monkeypatch):
    """Multiple models from the same OpenAI-compatible provider should coexist."""
    m1 = ModelConfig(
        name="minimax-m2.5",
        display_name="MiniMax M2.5",
        description=None,
        use="langchain_openai:ChatOpenAI",
        model="MiniMax-M2.5",
        base_url="https://api.minimax.io/v1",
        api_key="test-key",
        temperature=1.0,
        supports_vision=True,
        supports_thinking=False,
    )
    m2 = ModelConfig(
        name="minimax-m2.5-highspeed",
        display_name="MiniMax M2.5 Highspeed",
        description=None,
        use="langchain_openai:ChatOpenAI",
        model="MiniMax-M2.5-highspeed",
        base_url="https://api.minimax.io/v1",
        api_key="test-key",
        temperature=1.0,
        supports_vision=True,
        supports_thinking=False,
    )
    cfg = _make_app_config([m1, m2])
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    # Create first model
    factory_module.create_chat_model(name="minimax-m2.5")
    assert captured.get("model") == "MiniMax-M2.5"

    # Create second model
    factory_module.create_chat_model(name="minimax-m2.5-highspeed")
    assert captured.get("model") == "MiniMax-M2.5-highspeed"


# ---------------------------------------------------------------------------
# Codex provider reasoning_effort mapping
# ---------------------------------------------------------------------------


class FakeCodexChatModel(FakeChatModel):
    pass


def test_codex_provider_disables_reasoning_when_thinking_disabled(monkeypatch):
    cfg = _make_app_config(
        [
            _make_model(
                "codex",
                use="deerflow.models.openai_codex_provider:CodexChatModel",
                supports_thinking=True,
                supports_reasoning_effort=True,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg, model_class=FakeCodexChatModel)
    monkeypatch.setattr(codex_provider_module, "CodexChatModel", FakeCodexChatModel)

    FakeChatModel.captured_kwargs = {}
    factory_module.create_chat_model(name="codex", thinking_enabled=False)

    assert FakeChatModel.captured_kwargs.get("reasoning_effort") == "none"


def test_codex_provider_preserves_explicit_reasoning_effort(monkeypatch):
    cfg = _make_app_config(
        [
            _make_model(
                "codex",
                use="deerflow.models.openai_codex_provider:CodexChatModel",
                supports_thinking=True,
                supports_reasoning_effort=True,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg, model_class=FakeCodexChatModel)
    monkeypatch.setattr(codex_provider_module, "CodexChatModel", FakeCodexChatModel)

    FakeChatModel.captured_kwargs = {}
    factory_module.create_chat_model(name="codex", thinking_enabled=True, reasoning_effort="high")

    assert FakeChatModel.captured_kwargs.get("reasoning_effort") == "high"


def test_codex_provider_defaults_reasoning_effort_to_medium(monkeypatch):
    cfg = _make_app_config(
        [
            _make_model(
                "codex",
                use="deerflow.models.openai_codex_provider:CodexChatModel",
                supports_thinking=True,
                supports_reasoning_effort=True,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg, model_class=FakeCodexChatModel)
    monkeypatch.setattr(codex_provider_module, "CodexChatModel", FakeCodexChatModel)

    FakeChatModel.captured_kwargs = {}
    factory_module.create_chat_model(name="codex", thinking_enabled=True)

    assert FakeChatModel.captured_kwargs.get("reasoning_effort") == "medium"


def test_codex_provider_strips_unsupported_max_tokens(monkeypatch):
    cfg = _make_app_config(
        [
            _make_model(
                "codex",
                use="deerflow.models.openai_codex_provider:CodexChatModel",
                supports_thinking=True,
                supports_reasoning_effort=True,
                max_tokens=4096,
            )
        ]
    )
    _patch_factory(monkeypatch, cfg, model_class=FakeCodexChatModel)
    monkeypatch.setattr(codex_provider_module, "CodexChatModel", FakeCodexChatModel)

    FakeChatModel.captured_kwargs = {}
    factory_module.create_chat_model(name="codex", thinking_enabled=True)

    assert "max_tokens" not in FakeChatModel.captured_kwargs


def test_thinking_disabled_vllm_chat_template_format(monkeypatch):
    wte = {"extra_body": {"chat_template_kwargs": {"thinking": True}}}
    model = _make_model(
        "vllm-qwen",
        use="deerflow.models.vllm_provider:VllmChatModel",
        supports_thinking=True,
        when_thinking_enabled=wte,
    )
    model.extra_body = {"top_k": 20}
    cfg = _make_app_config([model])
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="vllm-qwen", thinking_enabled=False)

    assert captured.get("extra_body") == {"top_k": 20, "chat_template_kwargs": {"thinking": False}}
    assert captured.get("reasoning_effort") is None


def test_thinking_disabled_vllm_enable_thinking_format(monkeypatch):
    wte = {"extra_body": {"chat_template_kwargs": {"enable_thinking": True}}}
    model = _make_model(
        "vllm-qwen-enable",
        use="deerflow.models.vllm_provider:VllmChatModel",
        supports_thinking=True,
        when_thinking_enabled=wte,
    )
    model.extra_body = {"top_k": 20}
    cfg = _make_app_config([model])
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="vllm-qwen-enable", thinking_enabled=False)

    assert captured.get("extra_body") == {
        "top_k": 20,
        "chat_template_kwargs": {"enable_thinking": False},
    }
    assert captured.get("reasoning_effort") is None


# ---------------------------------------------------------------------------
# stream_usage injection
# ---------------------------------------------------------------------------


class _FakeWithStreamUsage(FakeChatModel):
    """Fake model that declares stream_usage in model_fields (like BaseChatOpenAI)."""

    stream_usage: bool | None = None


def test_stream_usage_injected_for_openai_compatible_model(monkeypatch):
    """Factory should set stream_usage=True for models with a stream_usage field."""
    cfg = _make_app_config([_make_model("deepseek", use="langchain_deepseek:ChatDeepSeek")])
    _patch_factory(monkeypatch, cfg, model_class=_FakeWithStreamUsage)

    captured: dict = {}

    class CapturingModel(_FakeWithStreamUsage):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="deepseek")

    assert captured.get("stream_usage") is True


def test_stream_usage_not_injected_for_non_openai_model(monkeypatch):
    """Factory should NOT inject stream_usage for models without the field."""
    cfg = _make_app_config([_make_model("claude", use="langchain_anthropic:ChatAnthropic")])
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="claude")

    assert "stream_usage" not in captured


def test_stream_usage_not_overridden_when_explicitly_set_in_config(monkeypatch):
    """If config dumps stream_usage=False, factory should respect it."""
    cfg = _make_app_config([_make_model("deepseek", use="langchain_deepseek:ChatDeepSeek")])
    _patch_factory(monkeypatch, cfg, model_class=_FakeWithStreamUsage)

    captured: dict = {}

    class CapturingModel(_FakeWithStreamUsage):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    # Simulate config having stream_usage explicitly set by patching get_model_config
    original_get_model_config = cfg.get_model_config

    def patched_get_model_config(name):
        mc = original_get_model_config(name)
        mc.stream_usage = False  # type: ignore[attr-defined]
        return mc

    monkeypatch.setattr(cfg, "get_model_config", patched_get_model_config)

    factory_module.create_chat_model(name="deepseek")

    assert captured.get("stream_usage") is False


def test_openai_responses_api_settings_are_passed_to_chatopenai(monkeypatch):
    model = ModelConfig(
        name="gpt-5-responses",
        display_name="GPT-5 Responses",
        description=None,
        use="langchain_openai:ChatOpenAI",
        model="gpt-5",
        api_key="test-key",
        use_responses_api=True,
        output_version="responses/v1",
        supports_thinking=False,
        supports_vision=True,
    )
    cfg = _make_app_config([model])
    _patch_factory(monkeypatch, cfg)

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    monkeypatch.setattr(factory_module, "resolve_class", lambda path, base: CapturingModel)

    factory_module.create_chat_model(name="gpt-5-responses")

    assert captured.get("use_responses_api") is True
    assert captured.get("output_version") == "responses/v1"


# ---------------------------------------------------------------------------
# Duplicate keyword argument collision (issue #1977)
# ---------------------------------------------------------------------------


def test_no_duplicate_kwarg_when_reasoning_effort_in_config_and_thinking_disabled(monkeypatch):
    """When reasoning_effort is set in config.yaml (extra field) AND the thinking-disabled
    path also injects reasoning_effort=minimal into kwargs, the factory must not raise
    TypeError: got multiple values for keyword argument 'reasoning_effort'."""
    wte = {"extra_body": {"thinking": {"type": "enabled", "budget_tokens": 5000}}}
    # ModelConfig.extra="allow" means extra fields from config.yaml land in model_dump()
    model = ModelConfig(
        name="doubao-model",
        display_name="Doubao 1.8",
        description=None,
        use="deerflow.models.patched_deepseek:PatchedChatDeepSeek",
        model="doubao-seed-1-8-250315",
        reasoning_effort="high",  # user-set extra field in config.yaml
        supports_thinking=True,
        supports_reasoning_effort=True,
        when_thinking_enabled=wte,
        supports_vision=False,
    )
    cfg = _make_app_config([model])

    captured: dict = {}

    class CapturingModel(FakeChatModel):
        def __init__(self, **kwargs):
            captured.update(kwargs)
            BaseChatModel.__init__(self, **kwargs)

    _patch_factory(monkeypatch, cfg, model_class=CapturingModel)

    # Must not raise TypeError
    factory_module.create_chat_model(name="doubao-model", thinking_enabled=False)

    # kwargs (runtime) takes precedence: thinking-disabled path sets reasoning_effort=minimal
    assert captured.get("reasoning_effort") == "minimal"
|